# Foreword
In the past decade, public health emergencies have occurred with great frequency, and the number of people affected has captured the attention of the world. Many of these emergencies involved some degree of forced population migration, and almost all have been associated with severe food shortages. Natural disasters, such as droughts and floods, have been partially responsible, but the most common causes of these emergencies have been war and civil strife. Since 1984, the number of refugees dependent for their survival on international assistance has more than doubled, to a current estimate of approximately 17 million persons, almost all in developing countries. Kurdish refugees fleeing Iraq captured the world's attention briefly in early 1991, but the desperate plight of many others, especially the 5 million refugees in Africa, receives scant attention from the world media. Even more obscure are the estimated 16-20 million displaced persons who are trapped within their countries by civil wars and are unable to cross borders to seek help from the international community. This situation represents an unprecedented challenge to the international public health community.
CDC has a long-standing institutional commitment to famine-affected, refugee, and displaced populations. During the Nigerian Civil War in the 1960s, 20 Epidemic Intelligence Service officers helped maintain public health programs for millions of displaced civilians who had been deprived of their basic needs by that war. Since then, CDC has provided technical assistance to relief agencies working in most of the world's major refugee emergencies, including those in Ethiopia, Kenya, Malawi, Pakistan, Somalia, Sudan, Thailand, Turkey, and West Africa. CDC, United Nations agencies, countries of asylum, and private voluntary organizations (PVOs) have attempted to adapt traditional epidemiologic techniques and public health programs to the realities of refugee camps and scattered, famine-affected communities. As a result, a considerable body of knowledge and experience has accumulated and has been documented in various issues of the MMWR. This report compiles that knowledge and provides guidance on certain technical subjects for those involved in future relief programs.
By necessity, this document is unable to cover all aspects of emergency relief. The recommendations provided here will not be effective unless they are supported by adequate preparedness planning, coordination, communications, logistics, personnel management, and relief worker training. Even more critical is ensuring access by relief workers to internally displaced populations; many needy communities are caught in areas of contested sovereignty. Unless the international community can devise ways of providing assistance to communities in these circumstances, it will be impossible to implement these basic public health programs. Finally, the situation of refugees and displaced persons is a timely reminder of the clear interface between public health and social justice. The most effective measure to prevent the high mortality experienced by these populations would be to eliminate the causes of the violence and conflict from which they fled.
# INTRODUCTION
During the past three decades, the most common emergencies affecting the health of large populations in developing countries have involved famine and forced migrations. The public health consequences of mass population displacement have been extensively documented. On some occasions, these migrations have resulted in extremely high rates of mortality, morbidity, and malnutrition. The most severe consequences of population displacement have occurred during the acute emergency phase, when relief efforts are in the early stage. During this phase, death rates have, in some cases, been 60 times the crude mortality rate (CMR) of non-refugee populations in the country of origin (1). Although the quality of international disaster response efforts has steadily improved, the human cost of forced migration remains high.
Since the early 1960s, most emergencies involving refugees and displaced persons have taken place in less developed countries where local resources have been insufficient for providing prompt and adequate assistance. The international community's response to the health needs of these populations has been at times inappropriate, relying on teams of foreign medical personnel with little or no training. Hospitals, clinics, and feeding centers have been set up without preliminary assessment of needs, and essential prevention programs have been neglected. More recent relief programs, however, emphasize a primary health care (PHC) approach, focusing on preventive programs such as immunization and oral rehydration therapy (ORT), promoting involvement by the refugee community in the provision of health services, and stressing more effective coordination and information gathering. The PHC approach offers long-term advantages, not only for the directly affected population, but also for the country hosting the refugees. A PHC strategy is sustainable and strengthens the national health development program.
# BACKGROUND
# Classification of Disasters
One way of describing the evolution of disasters is in terms of a "trigger event" leading to "primary effects" and "secondary effects" on vulnerable groups in the population (2). In the case of a rapid-onset natural disaster, such as an earthquake, the primary effects, deaths and injuries, may be high, but there are few secondary effects. In the case of slow-onset natural disasters, such as drought, and manmade disasters, such as war and civil strife, the secondary effects (i.e., decreased food availability, environmental damage, and population displacement) may lead to a higher delayed death toll than that of the initial event. Although population displacement may result from a number of different types of disasters, manmade and natural, the two most common recent trigger events have been food deficits and war. In many parts of the world where food shortages have become common, war and civil strife are major causative factors. Consequently, war, food deficits, famine, and population displacement have been inextricably linked risk factors for increased mortality in certain large populations in Africa, Asia, Latin America, and the Middle East.
The purpose of this report is to describe the public health consequences of famine and population displacement in developing countries and to present the most current recommendations on public health programs of major importance.
# Famine

Malnutrition prevalence and elevated mortality rates are "trailing" indicators and are not useful for early famine detection and the initiation of prevention or mitigation measures. More important in the early detection of famine are "leading" and "intermediate" indicators that reflect changes in the economic, social, and environmental factors that influence the evolution of food shortages and famine.
The leading and intermediate indicators will be useful if they trigger early interventions aimed at ensuring adequate food supplies for the population and at maintaining the purchasing power of vulnerable groups. These measures have included temporary government subsidies for food crops; "food-for-work" programs; government-run, fixed-price food shops; rural employment schemes; the distribution of drought-resistant seeds; and the release of food reserves.
Effective early warning systems might help avert major population movements, thereby allowing local governments, international organizations, and private voluntary organizations (PVOs) to provide assistance in situ without major disruption in traditional social structures and lifestyle patterns. Affected communities can be surveyed, needy households identified, food and other relief supplies distributed, and major epidemics averted with greater ease and effectiveness in a stable population than in a temporary refugee settlement. National early warning systems have proved effective in preventing famine during the past decade in India and Botswana (8).
When populations are forced to migrate en masse, they usually end up in camps or urban slums characterized by overcrowding, poor sanitation, substandard housing, and limited access to health services. These conditions hamper the effective and equitable distribution of relief supplies and promote the transmission of communicable diseases.
# REPORTS
The most direct and obvious results of famine are severe undernutrition and death. Although longitudinal studies have demonstrated that undernourished persons, particularly children, are at higher risk of mortality, the immediate cause of death is usually a communicable disease. Malnutrition increases the case-fatality ratio (CFR) of the most common childhood communicable diseases (i.e., measles, diarrheal disease, malaria, and acute respiratory infections (ARIs)). Those at highest risk of mortality during nonfamine times, namely the poor, the elderly, women, and young children, are the same groups most at risk for the morbidity and mortality caused by famine. In addition, the movement of populations into crowded and unsanitary camps, the violence associated with forced migrations, and the negative psychological effects of fear, uncertainty, and dependency contribute to the health problems experienced by displaced persons.
# Mortality
Mortality rates are the most specific indicators of the health status of emergency-affected populations. Mortality rates have been estimated retrospectively from hospital and burial records, or from community-based surveys, and prospectively from 24-hour burial site surveillance. Among the many problems encountered in estimating mortality under emergency conditions are recall bias in surveys, families' failure to report perinatal deaths, inaccurate denominators (overall population size, births, age-specific populations), and lack of standard reporting procedures. In general, bias tends to underestimate mortality rates, since deaths are usually underreported or undercounted, and population size is often exaggerated. Most reports of famine-related mortality have come from populations that have experienced considerable displacement. It is possible that mortality rates are lower in those populations that remain in their original villages and homes. A comparison of mortality in displaced vs. nondisplaced, famine-affected populations is problematic because displacement itself may reflect a more serious baseline situation. Nonetheless, comparisons between displaced and nondisplaced populations during famine on one hand, and between refugees and local, host country populations on the other hand, show that in nearly all cases the displaced and refugee populations experience a markedly higher CMR.
The CMRs reported in various refugee, internally displaced, and famine-affected (but nondisplaced) populations, respectively, during the emergency phase of relief operations in the past 15 years are listed in Tables 2, 3, and 4. These rates are compared with baseline CMRs reported for nonfamine-affected and nondisplaced populations, or, in the case of refugees, with CMRs in their country of origin. CMRs in these tables are expressed as deaths per 1,000 per month to reflect the short reporting periods; comparison rates have been extrapolated from annual CMRs published by the United Nations Children's Fund (UNICEF) (13). Although CMRs reported in refugee emergencies have not been adjusted for age and sex, it is unlikely that demographic differences between refugee and non-refugee populations account for the excess mortality found among many of the former.
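The rate conventions used here follow from simple linear scaling. As a minimal sketch with a hypothetical baseline, an annual CMR per 1,000 can be rescaled to the deaths/1,000/month convention of the tables or to the deaths/10,000/day convention used later in the recommendations:

```python
# A minimal sketch of the mortality-rate unit conversions used in this
# report. Assumes simple linear scaling, as when published annual CMRs are
# extrapolated for comparison; the baseline value is hypothetical.

def annual_to_monthly_per_1000(annual_cmr: float) -> float:
    """Deaths/1,000/year -> deaths/1,000/month."""
    return annual_cmr / 12

def annual_to_daily_per_10000(annual_cmr: float) -> float:
    """Deaths/1,000/year -> deaths/10,000/day."""
    return annual_cmr * 10 / 365

baseline = 18.0  # hypothetical annual CMR, deaths/1,000/year
print(f"{annual_to_monthly_per_1000(baseline):.2f} deaths/1,000/month")  # 1.50
print(f"{annual_to_daily_per_10000(baseline):.2f} deaths/10,000/day")    # 0.49
```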
Monthly CMRs peaked immediately after the initial influxes of Cambodian refugees into Thailand (1979) and of Ethiopian refugees into Somalia (1980) and Sudan (1985). In eastern Ethiopia in 1988-1989, initially low mortality rates among Somali refugees increased after 6 months, reaching a peak at 9 months (Figure 3). Overall, less than 1% of Cambodian refugees in Thai camps died during the first 12 months; 9% of refugees in eastern Sudan died during the same period (1).
Political and security factors often obstruct the accurate documentation of death rates among internally displaced populations; however, a few situations have been well documented. In Mozambique (1983), Ethiopia (1984-1985), and Sudan (1988), CMRs estimated by surveillance or population-based surveys of internally displaced persons ranged between 4 and 70 times the death rates of nondisplaced populations in the same country. In the Korem area of Ethiopia, CMRs recorded among camp populations displaced by famine in 1985 were 7-10 times those of settled villagers in a similar highland zone affected by the famine. In Monrovia, the capital of Liberia, the death rate among civilians displaced during the 1990 civil war was 7 times the pre-war death rate (MSF-Holland, unpublished data, January 1991).
As in stable populations in developing countries, age-specific death rates in displaced and refugee populations are highest in children less than 5 years of age. A mortality survey of Kurdish refugees at the Turkey-Iraq border during 1991 revealed that 63% of all deaths occurred among children less than 5 years of age, who comprised approximately 18% of the population (11). Although absolute death rates are highest in infants less than 1 year of age, the relative increase in mortality during emergencies may be highest in children 1-12 years of age (1).
# Cause-specific mortality
The major reported causes of death in refugee and displaced populations have been those same diseases that cause high death rates in nondisplaced populations in developing countries: malnutrition, diarrheal diseases, measles, ARIs, and malaria. These diseases consistently account for 60%-95% of all reported causes of death in these populations (Figures 4, 5, and 6). Specific reports on these and other communicable diseases are presented in a later section.
In those situations where malnutrition was not classified as an immediate cause of death (i.e., Sudan and Somalia), it was a major underlying factor accounting for the high CFRs from communicable diseases. This synergism between high malnutrition prevalence and increased incidence of communicable diseases explains much of the excess mortality seen in refugee and displaced populations.
A study of 42 refugee populations completed in 1989 examined acute protein-energy malnutrition (PEM) prevalence and crude, unadjusted monthly mortality rates gathered from 1984 through 1988. Analysis of the data showed a strong positive association between PEM prevalence and CMRs. Populations with PEM prevalence rates of less than 5% had a mean CMR of 0.9/1,000/month. Refugee populations with PEM prevalences of greater than or equal to 40%, however, experienced a mean CMR of 37/1,000/month, with a range of 4/1,000/month to 177/1,000/month (Figure 7). The rate ratio between the lowest and highest mean CMR values was 40.7 (14).
The close correlation between malnutrition prevalence and crude mortality during a relief operation for Somali refugees in eastern Ethiopia in 1988-1989 is clearly demonstrated in Figure 8. Malnutrition prevalence was estimated by serial, cross-sectional, cluster sample surveys of children less than 5 years of age, and monthly death rates were estimated retrospectively by a population-based survey in August 1989. During the period of high malnutrition prevalence and high mortality (March-May 1989), food rations provided an average of approximately 1,400 kilocalories (kcal)/person/day instead of the recommended minimum of 1,900 kcal/person/day (9). Likewise, in eastern Sudan in 1985, inadequate amounts of food (1,870 kcal/person/day) were distributed to Ethiopian refugees during the first 5 months after their arrival in the camps. Malnutrition rates, as well as mortality rates, remained high during this period (Figure 3; Table 5). In addition, a severe measles outbreak in the Sudanese camps added to the high mortality (21).
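The ration-adequacy comparison made here is straightforward to mechanize; the sketch below checks the two ration figures from the text against the recommended minimum of 1,900 kcal/person/day:

```python
# A sketch of a ration-adequacy check against the recommended minimum
# energy value cited in the text (1,900 kcal/person/day). The two ration
# figures are those reported above for Ethiopia (1989) and Sudan (1985).

RECOMMENDED_KCAL = 1900  # recommended minimum, kcal/person/day

def ration_deficit(kcal_per_person_day: float) -> float:
    """Shortfall relative to the recommended minimum (0 if adequate)."""
    return max(0.0, RECOMMENDED_KCAL - kcal_per_person_day)

for site, kcal in [("eastern Ethiopia, 1989", 1400), ("eastern Sudan, 1985", 1870)]:
    print(f"{site}: deficit of {ration_deficit(kcal):.0f} kcal/person/day")
```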
# Nutritional Diseases
# Protein-energy malnutrition

PEM can refer to either acute or chronic undernutrition. Because children less than 5 years of age are among the most acutely affected by undernutrition, assessment of this age group by anthropometry is usually done to determine PEM prevalence in a population (see "Indicators of Acute Undernutrition"). In general, acute undernutrition results in wasting and is assessed by an index of weight-for-height (WFH); however, edema of the extremities may be associated with acute undernutrition, in which case a clinical assessment is necessary. Chronic undernutrition produces stunting and typically results in a diminished height-for-age index.
The prevalence of moderate to severe acute undernutrition in a random sample of children less than 5 years of age is generally a reliable indicator of this condition in a population. Since weight is more sensitive to sudden changes in food availability than height, nutritional assessments during emergencies focus on measuring WFH. WFH is also a more appropriate measurement for ongoing monitoring of the effectiveness of feeding programs. As a screening measurement, the mid-upper arm circumference (MUAC) may also be used to assess acute undernutrition, although there is not complete agreement on which cutoff values should be used as indicators. Nutritional assessment methods are fully described in the Rapid Nutrition Assessment Manual.

Anthropometric indices such as WFH and height-for-age are interpreted by comparison with a "reference population". Index values are assigned a "Z-score" based on the number of standard deviations above or below the median value in the reference population. Currently, the World Health Organization (WHO) recommends the use of the CDC/NCHS reference population for nutritional assessments in all countries (22). Before the mid-1980s, anthropometric data were reported as a percentage of the median value in the reference population. Current international guidelines, however, recommend the use of Z-scores to report nutritional assessment data. Tables in this report define acute undernutrition on the basis of percentage of median in order to allow comparisons of recent data with data from surveys performed before the mid-1980s.
In a well-nourished population in which WFH values are distributed normally (i.e., the reference population), approximately 3% of children less than 5 years of age will have WFH Z-scores of less than -2. For less developed countries with lower "normal" nutritional intake levels, 5% of the children may have a Z-score less than -2 when compared with the reference population median, particularly at certain times of the year. Relief organizations agree that a nutritional emergency exists if greater than 8% of the children sampled have a Z-score less than -2. An excess of even 1% of children with Z-scores less than -3 indicates a need for immediate action. Acute PEM prevalence rates have been high in recent famine-affected populations, especially in Africa (Table 6).
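To make the interplay of these indices and thresholds concrete, the following minimal sketch computes a WFH Z-score and percent-of-median for one child and applies the population-level cutoffs described above; the reference median and standard deviation are hypothetical stand-ins for the CDC/NCHS tables:

```python
# A minimal illustration of WFH indices and the population thresholds
# described above. The reference median and standard deviation below are
# hypothetical; real assessments use the CDC/NCHS reference tables for
# the child's height and sex.

def wfh_z_score(weight_kg: float, ref_median_kg: float, ref_sd_kg: float) -> float:
    """Standard deviations above or below the reference-population median."""
    return (weight_kg - ref_median_kg) / ref_sd_kg

def percent_of_median(weight_kg: float, ref_median_kg: float) -> float:
    """Older reporting convention: weight as a percentage of the median."""
    return 100 * weight_kg / ref_median_kg

def classify_population(share_below_minus2: float, share_below_minus3: float) -> str:
    """Population-level interpretation per the thresholds in the text."""
    if share_below_minus3 > 0.01:   # >1% of children below -3 Z-scores
        return "immediate action needed"
    if share_below_minus2 > 0.08:   # >8% of children below -2 Z-scores
        return "nutritional emergency"
    return "within expected range"

print(wfh_z_score(9.2, 11.5, 1.0))       # -2.3
print(percent_of_median(9.2, 11.5))      # 80.0
print(classify_population(0.12, 0.005))  # nutritional emergency
```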
In addition, acute undernutrition prevalence rates have been elevated in many displaced and refugee populations during the past 12 years, ranging as high as 50% in eastern Sudan in 1985 (Tables 5 and 7). PEM rates have decreased rapidly in situations where effective emergency relief operations were mounted promptly, e.g., Thailand (1979) and Pakistan (1980). However, in other emergencies, such as in Somalia (1980) and Sudan (1985), PEM rates remained high (greater than 20%) for 6-8 months. Of even greater concern is the observation that acute undernutrition rates among Somali refugees in Ethiopia (1988-1989) actually increased 6 months after a relief program was launched. Although most high acute undernutrition prevalence has been associated with inadequate food rations, malnutrition developed among Kurdish children 1-2 years of age in Turkey within a period of 1-2 months, primarily because of the high incidence of diarrheal diseases in the camps (10). Among internally displaced civilian populations, high PEM prevalence has been associated with the intentional use of food as a weapon by competing military forces (30).
The use of serial anthropometric surveys as monitoring tools has certain limitations when mortality rates are high. For example, an analysis of anthropometric data from two cross-sectional surveys in a refugee camp in Sudan in 1985 initially implied a relatively stable nutritional situation. In January, the prevalence of acute malnutrition in children less than 5 years of age was 26.3%; in March, the rate was 28.4%. During those two months, almost 13% of the children in the camp died, mainly from measles and diarrheal diseases. In this instance, the elevated child mortality rate masked deteriorating nutritional status in the population: many of the malnourished children counted in the first survey had died and were "replaced" in the second sample by surviving children whose nutritional status had meanwhile deteriorated (31). Thus, anthropometric data need to be interpreted in the context of concurrent mortality rates.
# Micronutrient deficiency diseases
In addition to PEM, micronutrient deficiencies play a key role in nutrition-related morbidity and mortality. The importance of micronutrient deficiencies in famine-affected and displaced populations has recently been extensively documented. In addition to deficiencies of vitamin A and iron, which are widely recognized as important childhood problems in developing countries, epidemics of scurvy and pellagra have also been reported in refugee populations during the past decade (Table 8).
# Vitamin A deficiency
The most common deficiency syndrome in emergency-affected populations is caused by lack of vitamin A. Ocular signs of vitamin A deficiency, known as xerophthalmia, include night blindness and Bitot's spots in the earlier stages. Xerophthalmia progresses to corneal xerosis, ulceration and scarring, and eventually blindness. Signs of xerophthalmia were detected in 7% of children surveyed in one region of Somalia during the drought of 1986-1987 (27); 2.1% in drought-affected Niger in 1985 (24); 4.3% among Kampuchean refugees in Thailand (36); and 2.7% in a region of Mauritania in 1984 (23). Recent data suggest that vitamin A deficiency is linked with high childhood mortality (37,38).
Famine-affected and displaced populations often have low dietary vitamin A intake before experiencing famine or displacement and, therefore, may have very low vitamin A reserves. Furthermore, the typical rations provided in large-scale relief operations lack vitamin A, putting these populations at high risk. In addition, some communicable diseases with high incidence in refugee camps, notably measles and diarrheal diseases, rapidly deplete vitamin A stores. Depleted vitamin A stores need to be adequately replenished during recovery from these diseases to prevent the deficiency from becoming clinically important.
# Vitamin C deficiency (scurvy)
Although scurvy has been reported rarely in stable populations in developing countries, many outbreaks have occurred in displaced and famine-affected populations in recent years, primarily because of inadequate vitamin C in rations. In 1981, an outbreak of more than 2,000 cases of scurvy occurred in the refugee camps of the Gedo region of Somalia. These Ethiopian refugees had traditionally obtained sufficient dietary vitamin C from camel's milk; once in refugee camps, they subsisted on a ration devoid of vitamin C. The outbreak was precipitated when local markets, where refugees had exchanged rations for fresh fruit and vegetables, were suddenly closed (39).
Active surveillance for scurvy among Ethiopian refugees in Somalia and Sudan in 1987 revealed cumulative incidence rates of up to 19.8% in some camps, with initial onset reported between 3 and 10 months after the arrival of the refugees (32). Cross-sectional surveys performed in 1986-1987 reported point prevalence rates as high as 45% among females and 36% among males; prevalence increased with age. The prevalence of scurvy was associated with the period of residence in camps and the time exposed to rations lacking vitamin C. In 1989, a population survey of children less than 5 years of age in Hartisheik camp in eastern Ethiopia found the prevalence of clinical scurvy to be 2% (19). The international community has not developed an adequate strategy to prevent scurvy in refugee camps in the Horn of Africa, as demonstrated by an outbreak that took place among adult males (former Ethiopian soldiers) in a camp in eastern Sudan during 1991 (Bhatia R, personal communication, October 1991).
# Niacin deficiency
Pellagra is the condition resulting from a severe deficiency of biologically available niacin in the diet. Once common in the southeastern United States, Italy, and Spain, pellagra now occurs mainly in maize- or sorghum-consuming populations in southern Africa, North Africa, and India. An outbreak of pellagra occurred in Malawi among Mozambican refugees between July and October 1989. Eleven camps reported a total of 1,169 patients; 20% of the patients were children less than 5 years of age (40). The French agency Medecins Sans Frontieres (MSF) instituted active surveillance at the time. Another outbreak occurred between February and October 1990, with 17,878 cases reported among 285,942 refugees in the same 11 sites (attack rate of 6.3%). More than 18,000 cases of deficiency were reported from all districts hosting approximately 900,000 refugees in southern Malawi, for an overall attack rate of 2.0% (35). Food rations contained an average of 4.9 mg of available niacin/person/day; the Food and Agriculture Organization (FAO)/WHO recommendations for daily niacin intake range from 5.4 mg for infants to 20.3 mg for adults. This outbreak occurred when relief efforts failed to include an adequate supply of groundnuts (peanuts), the major source of niacin in refugee rations. The lack of variety in basic relief rations is a major risk factor for pellagra and other micronutrient deficiency syndromes. Treatment of maize flour with lime (which converts niacin to a biologically available form) and the inclusion of beans, groundnuts, or fortified cereals in daily rations increase the total intake of available niacin and will prevent the development of pellagra (35).
# Anemia
The high prevalence of anemia in refugee and displaced populations has been noted in few publications to date, but unpublished data from CDC assessments suggest that it may be a serious problem in some areas. In 1990, a survey of Palestinian refugees in Syria, Jordan, and the West Bank revealed that the prevalence of anemia among infants and young children was between 50% and 70%. Anemia among both nonpregnant and pregnant women was shown to be 25%-50%, whereas a low anemia prevalence rate was found among the male population. (In this study anemia was defined as a hemoglobin concentration of less than 11 g/dL among children and less than 12 g/dL among nonpregnant women. Pregnant women were considered to be anemic if their hemoglobin concentration was less than 11.5 g/dL during either the first or third trimester, or less than 11.0 g/dL during the second trimester.) These findings suggest that iron deficiency, which preferentially affects women and children, was the primary cause of anemia in this population.
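Restated in code, the survey's hemoglobin cutoffs form a simple classification rule; the function and group labels below are illustrative, not part of the study:

```python
# The hemoglobin cutoffs (g/dL) from the survey described above, restated
# as a classification rule. Function and group names are illustrative.

def is_anemic(hb_g_dl, group, trimester=None):
    """Apply the survey's group-specific hemoglobin cutoffs."""
    if group == "child":
        return hb_g_dl < 11.0
    if group == "nonpregnant_woman":
        return hb_g_dl < 12.0
    if group == "pregnant_woman":
        # 11.5 g/dL in the first or third trimester, 11.0 in the second
        return hb_g_dl < (11.0 if trimester == 2 else 11.5)
    raise ValueError(f"unknown group: {group}")

print(is_anemic(10.8, "child"))                        # True
print(is_anemic(11.2, "pregnant_woman", trimester=2))  # False
```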
A 1987 study among refugees in Somalia demonstrated an anemia prevalence rate of 44%-71% among pregnant women, with the proportion even greater if only women in the third trimester of pregnancy were considered. The cutoff point for hemoglobin concentration in this study was 10 g/dL; with the WHO cutoff of 11 g/dL, the prevalence would have been greater. Among children 9-36 months of age, 59%-90% were below the 10 g/dL cutoff. The inadequacy of the general ration was identified as the major factor causing iron deficiency anemia in this population. In a 1990 study, the prevalence rate of anemia was 13% among children less than 5 years of age in an Ethiopian camp for Somali refugees (Save the Children Fund UK, unpublished data). In addition to dietary iron deficiency, the high incidence of malaria in many refugee populations probably contributes to the high prevalence of anemia in children. The high prevalence of anemia found in some refugee populations may not be significantly greater than that found in local, non-refugee populations, since anemia in the latter group has been poorly documented. Nevertheless, anemia may be an additional important preventable risk factor for high mortality in refugee populations. A high overall prevalence of anemia is often accompanied by a subset of the population with severe anemia (hemoglobin (Hb) less than 5 g/dL). Severe anemia in itself can be a major cause of mortality for young children and for pregnant women during the peripartum period.
# Other micronutrient deficiencies
Beriberi (thiamine deficiency) has been reported from several refugee populations that subsist on rice-based food rations (Thailand, 1980; Guinea, 1990). Data regarding iodine deficiency in displaced populations are difficult to find; however, anecdotal evidence suggests that iodine deficiency, as evidenced by the presence of goiter, has been a problem in at least some camps in Pakistan and Ethiopia (CDC. Toole M, trip report, 1991).
# Communicable Diseases
Measles, diarrheal diseases, ARIs, and, in some cases, malaria are the primary causes of morbidity and mortality among refugee and displaced populations (1,16,41). Figures 4-6 illustrate mortality patterns typical of those found in refugee camps. Other communicable diseases, such as meningococcal meningitis, hepatitis, typhoid fever, and relapsing fever, have also been observed among refugee populations; however, the contribution of these illnesses to the overall burden of disease among refugees has been relatively small.
Densely populated camps with poor sanitation, inadequate clean water supplies, and low-quality housing all contribute to the rapid spread of disease in refugee settings. In addition, the interaction between malnutrition and infection in these populations, particularly among young children, has contributed to the high rates of morbidity and mortality from communicable diseases. Available and affordable technology could prevent much of this morbidity and mortality, either through primary prevention (e.g., immunization, adequate planning, and sanitation) or through appropriate case management (e.g., treatment of dehydration caused by diarrhea with oral rehydration salts and continued feeding).
# Measles
Outbreaks of measles within refugee camps have been common and have caused many deaths. Low levels of immunization coverage, coupled with high rates of undernutrition and vitamin A deficiency, have played a critical role in the spread of measles and the subsequent mortality within some refugee camps. Measles has been one of the leading causes of death among children in refugee camps. In addition, measles has contributed to high malnutrition rates among those who have survived the initial illness. Measles infection may lead to or exacerbate vitamin A deficiency, compromising immunity and leaving the patient susceptible to xerophthalmia, blindness, and premature death (42). In early 1985, the crude, measles-specific death rate in one eastern Sudan camp reached 13/1,000/month; among children less than 5 years of age, the measles-specific death rate was 30/1,000/month. Over 2,000 measles deaths were reported in this camp from February through May 1985. Figure 9 illustrates the proportion of all deaths that were due to measles in this camp during the course of the outbreak (16). The CFR was reported to be 33% during this outbreak; however, mild cases may have been underreported. Large numbers of measles deaths have been reported in camps in Somalia, Bangladesh, Sudan, and Ethiopia (1). Mass immunization campaigns were effective in reducing the measles morbidity and mortality rates in camps in both Somalia and Thailand (16). Measles outbreaks probably did not occur during certain other major refugee emergencies (e.g., Somalis in Ethiopia in 1989; Iraqis in Turkey in 1991), because immunization coverage rates were already high in those refugee populations before their flight (9,10).
# Diarrheal diseases
Diarrheal diseases are a major cause of morbidity and mortality among refugee and displaced populations, primarily because of inadequate water supplies (in terms of both quality and quantity) and insufficient, poorly maintained sanitation facilities. In eastern Sudan in 1985, 25%-50% of all deaths in four major camps were attributed to diarrheal diseases. In Somalia (1980), Malawi (1988), and Ethiopia (1989), 28%-40% of all deaths in refugee camps were attributed to diarrhea (1). Between March and October 1991, 35% of deaths among Somali refugees in the Liboi camp in Kenya were caused by diarrhea. Among Central American refugees in Honduras, diarrheal diseases were responsible for 22.3% of mortality among children less than 5 years of age during a 3-year period (43). In April 1991, in camps for Iraqi refugees on the Turkish border, approximately 70% of all patients arriving at clinics had diarrhea (10). Of these, approximately 25% complained of bloody diarrhea during the first 2 weeks of April. Figure 10 shows the gradual decline in diarrheal disease among clinic outpatients at a Kurdish refugee camp in Turkey.
Improvements in camp sanitation and water supply were probably responsible for this trend.
Although the etiologies of diarrheal illness during refugee emergencies have not been well documented, the responsible pathogens are most likely to be the same agents that cause diarrhea in non-refugee populations in developing countries. In one study in a camp for famine victims in Ethiopia, of 200 patients with diarrhea, 15.6% had positive cultures for Escherichia coli (pathogenicity not specified by authors), 3.5% for Shigella spp., and 2% for Salmonella spp. (44).
# Cholera
Outbreaks of cholera have occurred in several refugee populations, although overall, other diarrheal diseases have probably caused many more deaths than cholera. In addition to the morbidity and mortality directly caused by cholera, epidemics of this severe disease cause serious disruption to camp health services. Outbreaks of cholera have been reported in refugee camps in Thailand (16,45), Sudan (46), Ethiopia (11-12), Malawi (47), Somalia (48), and Turkey (10). The Somali Refugee Health Unit reported 6,560 cases of cholera and 1,069 cholera deaths in 1985. During the course of the epidemic, one camp (Gannet) experienced a CFR of 25%. The CFR in the remaining camps was 2.9%, with some areas reporting a CFR of less than 1% (Figure 11) (48). During the same year, two adjacent refugee camps in the Sudan reported a total of 1,175 cases of cholera with 51 deaths (CFR = 4%) over the course of a 2-week epidemic (46). Mozambican refugees in Malawi have been especially vulnerable to cholera; 20 separate outbreaks have been reported in Malawian camps since 1988 (49).
Outbreak investigations have identified polluted water sources, shared water containers and cooking pots, lack of soap, failure to reheat leftover food, and possibly contaminated food (dried fish) as important risk factors for infection. Nearly 2,000 cases were reported among 80,000 refugees in one camp (Nyamithutu) during a 4-month period in 1990 (Figure 12). Among 26,165 new arrivals during this period, 1,651 cases were reported for an attack rate of 6.3% in this group. The variation in CFRs between camps reflects the different levels of organizational preparedness, health worker training and experience, and available resources. One group of relief workers speculated that high CFRs in some Malawian camps may be associated with concurrent niacin deficiency, although their hypothesis has not yet been proven (Moren A, personal communication).
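The attack rates and CFRs quoted throughout this section are simple proportions; the sketch below recomputes two figures reported above (the text rounds the Sudan CFR to 4%):

```python
# The two outbreak measures cited in this section, recomputed from figures
# reported above.

def attack_rate(cases: int, at_risk: int) -> float:
    """Cases per 100 persons at risk."""
    return 100 * cases / at_risk

def case_fatality_ratio(deaths: int, cases: int) -> float:
    """Deaths per 100 cases."""
    return 100 * deaths / cases

# Nyamithutu camp, 1990: 1,651 cases among 26,165 new arrivals.
print(f"attack rate: {attack_rate(1651, 26165):.1f}%")  # 6.3%
# Sudan, 1985: 51 deaths among 1,175 cholera cases.
print(f"CFR: {case_fatality_ratio(51, 1175):.1f}%")     # 4.3%
```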
# Acute respiratory infections
ARIs are among the leading causes of death among refugee populations. In Thailand (1979), Somalia (1980), Sudan (1985), and Honduras (1984-1987), ARIs were cited among the three main causes of mortality in refugee camps, particularly among children (16,43). Among children less than 5 years of age in refugee camps in Honduras, respiratory infections were responsible for slightly greater than 1 of every 5 deaths during a 3-year period (43).
# Tuberculosis (TB)
TB is well recognized as a health problem among refugee populations. The crowded living conditions and underlying poor nutritional status of refugee populations may foster the spread of the disease. Although not a leading cause of mortality during the emergency phase, TB often emerges as a critical problem once measles and diarrheal diseases have been adequately controlled. For example, 26% of adult deaths among refugees in Somalia in 1985 were attributed to TB (16). During this time, TB was the third leading cause of death overall and the leading cause among adults (48). In eastern Sudan, between 38% and 50% of all deaths in two camps were caused by TB during the 9- to 10-month period after the camps opened (16). TB has been cited as a major health problem among Afghan refugees in Pakistan (CDC. Serdula M, trip report). Although it may be theoretically easier to ensure patient compliance with protracted chemotherapy in the confined space of a refugee camp, the personnel needed to supervise treatment may not be available. In addition, the uncertain duration of stay, frequent changes of camp locations, and poor camp organization may hinder TB treatment programs.
# Malaria
Malaria is a major health problem in many areas that host large refugee populations, including Somalia, Sudan, Ethiopia, Thailand, Guinea, Cote d'Ivoire, Malawi, Pakistan, and Kenya. Malnutrition and anemia, conditions that are common among refugees, may be directly related to recurrent or persistent malaria infection or may compound the effects of malaria and lead to high mortality. Malaria is the leading cause of morbidity among adult refugees in Malawi and in 1990 caused 18% of all deaths and 25% of deaths among children less than 5 years of age (CDC, unpublished data). Malaria is of particular concern when the displaced population has traveled through, or into, an area of higher endemicity than its region of origin (1). During the period 1979-1980, Khmer refugees traveled from the central valley of Kampuchea, where malaria transmission is very low, into Thailand. Those refugees who arrived at the Sakaeo camp traveled through mountain regions where malaria is highly endemic year round, while refugees who arrived at Khao I-Dang camp had traveled a route that remained within the areas of low malaria transmission. As a result of the differences in exposure during transit, the initial malaria prevalence rate at Sakaeo was 39% compared with a 4% prevalence rate at Khao I-Dang. During this time, malaria was a major cause of death at Sakaeo (50). Similarly, Ethiopian refugees from the highland areas of Tigray province arrived in eastern Sudan in 1985 with decreased immunity against the malaria that is seasonally endemic in that region of Sudan. Not surprisingly, malaria was an important cause of death among these refugees. Farther north, in the Kassala region of eastern Sudan, a major outbreak of malaria occurred among refugees from Eritrea following extensive flooding in the area in September 1988. In contrast to the Tigrayan refugees, the Eritreans were largely from lowland areas and had been previously exposed to malaria. The severity of this outbreak may have been due to the emergence of chloroquine-resistant Plasmodium falciparum malaria in eastern Sudan at that time, and the subsequent widespread failure of first-line treatment regimens (Toussie S, personal communication, 1989).
Afghan refugees living in the North-West Frontier Province of Pakistan have a higher incidence of clinical malaria than that observed among the local population. A comparison of the epidemiologic trends of malaria between the refugees and the local population over a period of several years demonstrated that the increased rate of malaria illness among refugees was a result of having resettled in an area of higher transmission than that from which they had fled. Because of their limited exposure history, the Afghan refugees had lower levels of immunity to malaria illness than did the local population (51). Few deaths associated with malaria have been reported in this population because the majority of cases have been associated with Plasmodium vivax, a milder form of malaria than that caused by Plasmodium falciparum, the form that is more commonly reported in African camps.
# Hepatitis
Hepatitis has not been among the most common diseases reported in refugee and displaced populations worldwide; however, since 1985 it has emerged as a serious problem in camps in the Horn of Africa, where access to adequate supplies of clean water has been severely limited. In Somalia during the period 1985-1986, an outbreak of greater than 2,000 cases occurred in two refugee camps, with an overall attack rate of 8% among adults. Of 87 hepatitis deaths, 46% were among pregnant women. The overall CFR was 4%; the CFR in second- and third-trimester pregnant women was 17%. By a process of exclusion, the outbreak was attributed to enterically transmitted non-A, non-B hepatitis (now known as hepatitis E) (52). Figure 13 depicts an outbreak of hepatitis that occurred in the Hartisheik refugee camp in Ethiopia between 1989 and 1990.
During an 18-month period, greater than 6,000 cases were reported. Between March and October of 1991, a major outbreak of hepatitis occurred among Somali refugees living in Kenya's Liboi camp; a total of 1,700 cases were reported, yielding an attack rate of 6.3%. The overall CFR was 3.7%; among pregnant women, the CFR was 14%. Hepatitis was responsible for one of every five deaths in the camp during that time period. The hepatitis E virus was identified in stool and serum specimens from ill patients. The Ethiopian and Kenyan outbreaks were associated with inadequate water supply. In both camps, refugees had access to an average of only 1-3 liters of clean water/person/day (the United Nations High Commissioner for Refugees (UNHCR) recommends a minimum of 15 liters/person/day) (53).
# Meningitis
Overcrowding and limited access to medical care are contributing factors in outbreaks of meningococcal meningitis among refugee populations. Also, many large refugee populations are found in what is termed the "meningitis belt" of sub-Saharan Africa. Although children less than 5 years of age are at greatest risk for meningitis, meningococcal meningitis also occurs among older children and adults, particularly in densely populated settings such as refugee camps (54). During an outbreak of group A meningococcal disease at the Sakaeo refugee camp in Thailand in 1980, children less than 5 years of age experienced a CFR of 50%. The overall CFR during that outbreak was just over 28% (55). Outbreaks of meningococcal meningitis have also been reported among Ethiopian refugees in eastern Sudan (1985) and among displaced Sudanese in Khartoum and southern Sudan during 1988 (56).
# Other Health Issues
Although these reports focus on the major causes of morbidity and mortality during the emergency phase of refugee displacements, other health problems warrant the attention of public health practitioners in these settings.
# Injuries
Thus far, injuries related to armed conflict and psychological problems relating to war, persecution, and the flight of the refugee have been poorly quantified. In a recent report on Iraqi refugees on the Turkish border, 8% of the deaths during a 2-month period were attributed to trauma. Sixty percent of these trauma-related deaths were attributable to shootings by armed soldiers (CDC. Toole M, trip report, September 1991). Anecdotal reports support the existence of high rates of physical disabilities caused by war injuries in some refugee camps, such as those for Afghan refugees in Pakistan, Cambodian refugees in Thailand, and Mozambican refugees in Malawi.
# Maternal health
The problem of morbidity and mortality related to pregnancy and childbirth has been inadequately documented, although earlier sections of this report described high anemia rates and high hepatitis-specific mortality rates among pregnant women (52). Also, studies of scurvy and pellagra among refugees in Africa have consistently revealed higher incidence rates in women than in men, and a study in Somalia showed that pregnancy was a risk factor for the development of clinical scurvy (32,35).
# Sexually transmitted diseases and HIV
Few published reports have referred to sexually transmitted diseases (STDs) in refugee populations. However, there is no evidence that the incidence of STDs in camps is any higher (or lower) than in non-refugee communities. Similarly, practically no data exist on the prevalence of HIV infection, nor on rates of transmission in these populations. Many of the large displaced and refugee populations of the world are either located in, or have fled to, countries where HIV prevalence rates are high.
# RECOMMENDATIONS
The technical recommendations in this report focus on the public health elements of an appropriate response program for refugees and displaced persons; however, the effectiveness of relief efforts will be enhanced if the affected communities and host countries have prepared for the emergency. Preparedness for sudden population displacement is critical and should be targeted at the most important public health problems identified in previous emergencies: malnutrition, measles, diarrheal diseases, malaria, ARIs, and other communicable diseases (e.g., meningitis and hepatitis) that result in high death rates. Preparedness requires that planning for emergencies be included as an integral part of routine health development programs in countries where sudden population displacements might occur. These programs include:
- Health Information Systems (HIS).
- Diarrheal Disease Control Programs.
- Expanded Programs on Immunization (EPI).
- Control Programs for Endemic Communicable Diseases.
- Nutrition Programs.
- Continuing Education Programs for Health Workers.
National public health programs should include detailed contingency planning for sudden population movements, both internally and from neighboring countries.
# Response Preparedness
The critical components of a relief program responding to sudden population displacement are the provision of adequate food, clean water, sanitation, and shelter. In addition, the following elements of a health program should be established as soon as possible. The detailed recommendations that follow are organized according to either disease group (e.g., diarrheal diseases or malnutrition) or technical method (e.g., rapid assessment). Nevertheless, it is critical to keep in mind the demographic groups that are most at risk during emergencies, namely young children and women. It is important that health services in refugee settings be organized in a way that facilitates access by these groups. In general, maternal and child health (MCH) services should be given higher priority than general outpatient dispensaries and hospitals.
# Maternal and Child Health Care
MCH clinics should be established (ideally, one MCH clinic per 5,000 population) and staffed by trained personnel to provide routine screening and preventive and curative services to pregnant and lactating women and to children less than 2 years of age. If resources are adequate, these services should be extended to children between 2 and 5 years of age. Services for children should include routine growth monitoring, immunization, nutritional rehabilitation, vitamin A supplementation, and curative care, as well as health education for their mothers.
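The one-clinic-per-5,000 ratio translates directly into a sizing calculation; the sketch below uses a hypothetical camp population:

```python
# A sketch of the planning ratio given above (one MCH clinic per 5,000
# population); the camp population is hypothetical.
import math

def clinics_needed(population: int, per_clinic: int = 5000) -> int:
    """Round up so coverage never falls below the recommended ratio."""
    return math.ceil(population / per_clinic)

print(clinics_needed(42000))  # 9 clinics for a camp of 42,000
```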
Female health workers should be trained and employed to provide culturally appropriate health education both at MCH clinics and within the community, and to refer pregnant women to the clinic for antenatal care. At least some of these health workers should be recruited from among traditional birth attendants in the community. Antenatal care should include screening for high-risk pregnancies and providing iron and folic acid supplementation (as well as iodine supplementation in areas of endemic goiter), tetanus toxoid immunization, and health education. Postnatal care should include nutritional supplementation, counselling on family spacing, provision of contraceptives, and education about breastfeeding and infant care. In certain cultural situations, curative care may need to be provided to all women of child-bearing age in a setting physically segregated from male outpatient facilities.
# Program-Specific Recommendations
The following content areas are covered in these recommendations:
- Rapid Health Assessment
- Health Information Systems
- Nutrition
- Control of Vaccine-Preventable Diseases
- Control of Diarrheal Diseases
- Malaria Control
- Tuberculosis Control
- Epidemic Investigations
# Rapid Health Assessment
Rapid health assessment of an acute population displacement is conducted to:
- Assess the magnitude of the displacement.
- Determine the major health and nutrition needs of the displaced population.
- Initiate a health and nutrition surveillance system.
- Assess the local response capacity and immediate needs.
# Preparations
The amount of time required to conduct an initial assessment of a refugee influx depends on the remoteness of the location, availability of transport, security situation in the area, availability of appropriate specialists, and willingness of the host country government to involve external agencies in refugee relief programs. In small countries with functioning communications facilities and secure borders, the assessment might be conducted in 4 days; in other countries, it might take 2 weeks.
Before the field visit, relevant information relating to the status of the incoming refugees, as well as the available resources of the host community, should be obtained from local ministries or organizations based in the capital city. Any maps of the area where the refugees are arriving and settling should likewise be obtained. Aerial photographs will also be of value, but may be considered sensitive by the military of the host country. International organizations like UNICEF, WHO, and the Red Cross/Red Crescent may also have demographic and health data concerning the refugee population.
In preparation for the field visit, establish whether food, medical supplies (including vaccines), or other relief supplies have been ordered or procured by any of the relief agencies involved. Additionally, the following areas should be covered in the field assessment.
# Field assessment
The following demographic information is required to determine the health status of the population.
- Total refugee or displaced population
- Age-sex breakdown
- Identification of at-risk groups (e.g., children less than 5 years of age, pregnant and lactating women, disabled and wounded persons, and unaccompanied minors)
- Average family or household size

Why this information is needed. The total population will be used as the denominator for all birth, death, injury, morbidity, and malnutrition rates to be estimated later. The total population is necessary for the calculation of quantities of relief supplies. The breakdown of the population by age and sex allows for the calculation of age- and sex-specific rates and enables interventions to be targeted effectively (e.g., immunization campaigns).
# Sources of information.
Local government officials or camp authorities may be able to provide registration records. If no registration system is in effect, one should be established immediately. Information recorded should include the names of household heads, the number of family members by age and sex, former village and region of residence, and ethnic group, if applicable.
Refugee leaders may also have records, particularly if entire villages have fled together. In certain situations, political groups may have organized the exodus and may have detailed lists of refugee families.
A visual inspection of the settlement may provide a general impression of the demographic composition of the population. However, information obtained in this manner should be used judiciously as it is likely to provide a distorted view of the situation.
It may be necessary to conduct a limited survey on a convenience sample in order to obtain demographic information. Beginning at a randomly selected point, survey a predetermined number of dwellings (e.g., 50), visiting every fifth or 10th house until that number has been surveyed. At each house, record the number of family members, the age and sex of each person, and the number of pregnant or lactating women. This process will establish an initial estimate of the demographic composition of the population. Estimate the number of persons in each house, as well as the total number of houses in the settlement, to gain a provisional estimate of the camp population. At the very least, this quick survey should give a rough estimate of the proportion of the total population made up of "vulnerable" groups, i.e., children less than 5 years of age and women of childbearing age. To determine the total population, a census may need to be conducted later.
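As an illustration, the following sketch extrapolates such a dwelling sample to provisional settlement-level estimates; the household records and dwelling count are hypothetical:

```python
# A sketch extrapolating a quick dwelling survey to provisional estimates.
# The household records and total dwelling count are hypothetical.

def estimate_population(households, total_dwellings):
    """Extrapolate a sample of dwellings to the whole settlement."""
    sampled = len(households)
    people = sum(h["members"] for h in households)
    under_5 = sum(h["under_5"] for h in households)
    mean_size = people / sampled
    return {
        "mean_household_size": round(mean_size, 1),
        "estimated_population": round(mean_size * total_dwellings),
        "proportion_under_5": round(under_5 / people, 2),
    }

# e.g., 50 dwellings surveyed (every 10th house) in a camp of ~2,400 dwellings
sample = [{"members": 6, "under_5": 1}] * 30 + [{"members": 4, "under_5": 1}] * 20
print(estimate_population(sample, total_dwellings=2400))
```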
# Background health information
The information required includes:
- Main health problems in country of origin.
- Previous sources of health care (e.g., traditional healers).
- Important health beliefs and traditions (e.g., food taboos during pregnancy).
- Social structure (e.g., whether the refugees are grouped in their traditional villages and what type of social or political organization exists).
- Strength and coverage of public health programs in country of origin (e.g., immunization).
Why this information is needed. Effective planning of health services will depend on this information. Planners need to be aware of traditional beliefs, taboos, and practices in order to avoid making costly mistakes and alienating the population.
# Sources of information.
Obtain documents and reports from the host government, international organizations, and nongovernment organizations pertaining to endemic diseases and public health programs in the displaced population's region of origin.
Interview refugee leaders, heads of households, women leaders (e.g., traditional midwives), and health workers among the refugee population.
Seek information from development agencies, private companies, missionaries, or other groups having experience with the displaced population.
# Nutritional status
The information required includes:
- Prevalence of protein-energy undernutrition in the population less than 5 years of age.
- Nutritional status before arrival in host country.
- Prevalence of micronutrient deficiencies in the population less than 5 years of age.
Why this information is needed. The nutritional status of displaced populations is closely linked with their chances of survival. Initial assessment of nutritional status serves to establish the degree of urgency in delivering food rations, the need for immediate supplementary feeding programs (SFPs), and the presence of micronutrient deficiencies that require urgent attention.
# Sources of nutritional information
If refugees are still arriving at the site:
- Initiate nutritional screening of new arrivals immediately.
- Measure all children (or every third or fourth child, if insufficient trained personnel are available or the refugee influx is too great) for mid-upper arm circumference (MUAC) or, if time and personnel permit, WFH. Estimate the proportion of undernourished children using the methods described in the Rapid Nutrition Assessment Manual (see the sketch after this list).
- Look for clinical signs of severe anemia and vitamin A, B, and C deficiencies.
- If refugees are continuing to arrive, set up a permanent screening program for new arrivals. A screening program also can be used to administer measles vaccination and vitamin A supplements to new arrivals.
If refugees are already located in a settlement:
- Walk through the settlement, select houses randomly, and observe the nutritional status of the children less than 5 years of age. Visual assessment should be done only by persons experienced in the assessment of malnutrition. The observer should enter the homes, since malnourished children are likely to be bedridden.
- Combine the visual inspection with a rapid assessment of nutritional status, using either MUAC or WFH measurements. This can be done during the demographic survey described above. (See "Rapid Health Assessment".)
- Review the records of local hospitals treating members of the displaced population. Note admissions or consultations for undernutrition and deaths related to undernutrition.
- Interview refugee leaders to establish food availability before displacement and the duration of the journey from place of origin to their present location.
In order to gather baseline data for evaluation of nutrition programs, plan to conduct a valid, cluster sample survey of the population as soon as possible (within 2 weeks). Appropriate technical expertise will be needed for the implementation and analysis of the survey.
# Mortality rates
The information required includes crude, age-, sex-, and cause-specific mortality rates.
Why this information is needed. In the initial stages of a population displacement, mortality rates, expressed as deaths/10,000/day, are a critical indicator of improving or deteriorating health status.
In many African countries, the daily crude mortality rate (CMR), extrapolated from published annual rates, is approximately 0.5/10,000/day during non-emergency conditions. In general, health workers should be extremely concerned when the CMR in a displaced population exceeds 1/10,000/day, or when the mortality rate among children less than 5 years of age exceeds 4/10,000/day.
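Expressed as code, the rate and its thresholds look as follows. This is a minimal sketch, not part of the original guidance; the death counts and populations are hypothetical.

```python
# Illustrative sketch: expressing mortality as deaths/10,000/day and flagging
# the emergency thresholds cited above. All figures are hypothetical.

def daily_rate_per_10000(deaths, population, days):
    return deaths / population / days * 10_000

# Example: 42 deaths in a camp of 30,000 over 7 days.
cmr = daily_rate_per_10000(deaths=42, population=30_000, days=7)
print(f"CMR = {cmr:.2f}/10,000/day")                  # 2.00
print("CMR threshold (1/10,000/day) exceeded:", cmr > 1.0)

# The under-5 rate uses under-5 deaths over the under-5 population.
u5 = daily_rate_per_10000(deaths=25, population=6_000, days=7)
print("Under-5 threshold (4/10,000/day) exceeded:", u5 > 4.0)
```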
Sources of mortality information. Check local hospital records and the records of local burial contractors. Interview community leaders.
Establish a mortality surveillance system. One approach is to designate a single burial site for the camp, monitored around the clock by grave watchers. Grave watchers should be trained to interview families, using a standard questionnaire, and to record the gender, approximate age, and probable cause of each death.
Other methods of collecting mortality data include registering deaths, issuing burial shrouds to families of the deceased (which encourages reporting), or employing volunteer community informants who report deaths for a defined section of the population.
Demographic data are absolutely essential for calculating mortality rates. These provide the denominator for estimating death rates in the entire population and within specific vulnerable groups, such as children less than 5 years of age.
The population needs to be assured that death registration will have no adverse consequences (e.g., ration reductions).
# Morbidity
The information required includes age- and sex-specific data on the incidence of common diseases of public health importance (e.g., measles, malaria, diarrheal diseases, and ARI), as well as diseases of epidemic potential, such as hepatitis and meningitis. The data should be collected by all health facilities, including feeding centers.
Why this information is needed. Data on diseases of public health importance may help plan an effective preventive and curative health program for refugees. These data will also facilitate the procurement of appropriate medical supplies and the recruitment and training of appropriate medical personnel, as well as focus environmental sanitation efforts (e.g., toward mosquito control in areas of high malaria prevalence).
Sources of morbidity information. Review the records of local clinics and hospitals to which refugees have access.
Where a clinic, hospital, or feeding center has already been established within the camp, examine patient records or registers and tally common causes of morbidity. Interview refugee leaders and health workers within the refugee population.
A simple morbidity surveillance system should be established as soon as curative services are established in the camp. Feeding centers should be included in the surveillance system. Community health workers should be trained as soon as possible to report diseases at the community level.
The initiation of certain public health actions should not be delayed until the disease appears.
For example, measles immunization should be implemented immediately. Do not wait for the appearance of measles in the camp. Also, oral rehydration centers should be routinely established in all situations.
# Environmental conditions
The information required includes:
- Climatic conditions (average temperatures and rainfall patterns).
- Geographic features (soil, slope, and drainage).
- Water sources (local wells, reservoirs, rivers, tanks).
- Local disease epidemiology (endemic infectious diseases, e.g., malaria, schistosomiasis).
- Local disease vectors (mosquitoes, flies, ticks), including breeding sites.
- Availability of local materials for shelter and fuel.
- Existing shelters.
- Existing sanitation arrangements (latrines and open areas).
Why this information is needed. Information on local environmental conditions affecting the health of displaced populations will help relief planners create priorities for public health programs.
# Sources of information.
This assessment is made largely by visual inspection. In addition, interviews with local government and technical specialists will yield important information. In some cases, special surveys need to be conducted; e.g., entomologists may need to survey for local disease vectors, and water engineers may need to assess water sources.
# Resources available
Food supplies
Efforts to evaluate food supplies should include:
- Attempting to assess the quantity and type of food currently available to the population.
- Calculating the average per capita caloric intake over the period of time for which records are available, if food is already being officially distributed.
- Inspecting any local markets for food availability and prices.
- Conducting a quick survey of dwellings and estimating the average food stores in each household. This should be done during the demographic survey (see "Rapid Health Assessment"). Look for obvious inequities between different families or different ethnic or regional groups.
Food sources. Local, regional, and national markets need to be assessed. The cash and material resources of the displaced population should also be assessed in order to estimate its local purchasing power.
Food logistics. Assess transport and fuel availability, storage facilities (size, security), and seasonal conditions of access roads.
Feeding programs. Follow these guidelines to evaluate feeding programs:
- Look for any established feeding programs (mass, supplementary, and therapeutic feedings). These may have been set up by local officials, PVOs, church groups, or local villagers.
- Assess enrollment and discharge criteria, enrollment and attendance figures, quantity and quality of food being provided, availability of water, managerial competence, utensils, and storage.
- Determine whether measles vaccine is being administered.
Local health services. Follow these guidelines for assessing the capabilities of health services:
- Determine the ease of access by refugees (official attitudes, location, hours of operation).
- Evaluate the condition and size of facilities.
- Note the extent and appropriateness of medicines, equipment, and services.
- Determine the type and number of personnel.
- Review cold storage facilities, vaccine supplies, logistics, and communication systems.
Camp health services. Follow these guidelines for assessing camp health services:
- Note the type of facility (clinic, hospital, feeding center), as well as the size, capacity, and structure (tent, local materials).
- Determine the adequacy of the health-facility water supply.
- Assess refrigeration facilities, fuel, and generator.
- Assess supplies of essential drugs (whether generic or brand-name) and medical supplies.
- Determine the need for essential vaccines and immunization equipment.
- Note the type of health personnel (doctors, nurses, nutritionists, sanitarians) and their relevant experience and skills.
- Review storage facilities.
- Assess adequacy of transport, fuel, and communications.
- Locate health workers in the refugee population (traditional healers, birth attendants, "modern" practitioners).
- Determine whether there is a need for interpreters.
# Taking action
- An itemized summary of the findings should be prepared, following the sequence of activities outlined in this document.
- Estimate and quantify the need for outside assistance, based on preliminary findings.
- Prepare and convey assessment findings to appropriate emergency health officials at the local, national, and international levels.
# Checklist for Rapid Health Assessment
# Preparation
- Obtain available information regarding refugees and resources from host country ministries and organizations.
- Obtain available maps or aerial photographs.
- Obtain demographic and health data from international organizations.
# Field assessment
- Determine total displaced population.
- Determine age and sex breakdown of population.
- Identify groups at increased risk.
- Determine average household size.
# Health information
- Identify primary health problems in country of origin.
- Identify previous sources of health care.
- Ascertain important health beliefs and traditions.
- Determine the existing social structure.
- Determine the strength and coverage of public health programs in country of origin.
# Nutritional status
- Determine prevalence of protein-energy malnutrition (PEM) in the population less than 5 years of age.
- Ascertain prior nutritional status.
- Determine prevalence of micronutrient deficiencies in the population less than 5 years of age.
# Mortality rates
- Calculate crude, age-, sex-, and cause-specific mortality rates.
# Morbidity
- Determine age- and sex-specific incidence rates of diseases that have public health importance.
# Environmental conditions
- Determine climatic conditions.
- Identify geographic features.
- Identify water sources.
- Ascertain local disease epidemiology.
- Identify local disease vectors.
- Assess availability of local materials for shelter and fuel.
- Assess existing shelters and sanitation arrangements.
# Resources available
- Assess food supplies and distribution systems.
- Identify and assess local, regional, and national food sources.
- Assess the logistics of food transport and storage.
- Assess feeding programs.
- Identify and assess local health services.
- Assess camp health services.
# Health Information System
A health information system (HIS) provides continuous information on the health status of the refugee community and comprises both ongoing routine surveillance and intermittent population-based sample surveys. This information may be used to:
- Follow trends in the health status of the community and establish health-care priorities.
- Detect and respond to epidemics.
- Evaluate program effectiveness and service coverage.
- Ensure that resources are targeted to the areas of greatest need.
- Evaluate the quality of care delivered.
# Data collection
As soon as health services are established for a refugee population, a surveillance system should be instituted; ideally, it should be set up at the time of the initial rapid assessment. Any agency or facility (including feeding centers) providing health services to the refugee population should be part of the reporting network. Any host community services to which the refugees might have access should also be part of the system.
Health information should be reported on a simple, standardized surveillance form. (A sample form, adapted from WHO Emergency Relief Operations, is located at the end of this section.) Each health facility should be held accountable for completing the reporting form at the appropriate interval and for returning it to the person or agency charged with compiling the reports, analyzing the information, and providing feedback. Each refugee settlement or camp should have a person responsible for coordinating the HIS. Forms should be translated into the appropriate local language(s) if community health workers are involved in information collection.
Health facilities should keep a daily record of patients; age, sex, clinical and laboratory diagnosis, and treatment should be specified. If personnel time is limited, a simple tally sheet should be used. In addition, the patient should be issued a health record card on which the date, diagnosis, and treatment are recorded. Each time a patient contacts the health-care system, whether for curative or preventive services, this should be noted on the health record card. Laboratory data should accompany diagnostic information whenever possible. Collecting, Processing, Storing, and Shipping Diagnostic Specimens in Refugee Health-Care Environments provides an overview of procedures for collecting and processing diagnostic specimens in the field.
Data collection should be limited to that information that can and will be acted upon. Information that is not immediately useful should not be collected during the emergency phase of a refugee relief operation. Overly detailed or complex reporting requirements will result in noncompliance.
The most valuable data are generally simple to collect and to analyze. Standard case definitions for the most common causes of morbidity and mortality should be developed and put in writing. The data collected will fall into one of the following categories: a) demographic, b) mortality, c) morbidity, d) nutritional status, and e) health program activities.
Population. Camp registration records should provide most of the demographic information needed. If registration records are inadequate, a population census may be necessary. Conducting a census is often politically sensitive and may be delayed by the administrative authorities for a long period of time. Consequently, innovative methods may need to be devised. For example, organize a nutritional screening of all children less than 5 years of age. Count the children and estimate the percentage of the total population less than 5 years of age by doing a sample survey. From this information, estimate the total population size. For other methods to determine population size and structure see "Rapid Health Assessment".
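The back-calculation described here can be written out directly. This is a minimal sketch, not part of the original guidance; the figures are hypothetical.

```python
# Illustrative sketch: inferring total population from a complete under-5
# screening count and a sample-survey estimate of the under-5 proportion.

def population_from_under5(children_screened, proportion_under_5):
    return round(children_screened / proportion_under_5)

# Example: 8,200 children counted at screening; the sample survey suggests
# that children less than 5 years of age are 18% of the population.
print(population_from_under5(8_200, 0.18))  # ~45,556
```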
It is important that population figures be updated on a regular basis, taking into account new arrivals, departures, births, and deaths. The total population is used as the denominator in the calculation of disease incidence, birth, and death rates. This total is also necessary to determine requirements for food and medical supplies and to estimate program coverage rates. Information about the population structure is needed to calculate age- and sex-specific morbidity and mortality rates, to estimate ration requirements, and to determine the target population for specific interventions (e.g., antenatal care and immunizations).
The rate of new arrivals and departures gives an indication of the stability of the population and will influence policy decisions about long-term interventions, such as TB therapy. This information is also used to predict future resource and program needs.
A birth registration system is usually simple, since the community expects an increase in the family food ration as a result of a new birth. Births might be reported in the community to volunteer health workers or traditional birth attendants. Alternatively, if good antenatal care services are established, follow-up of pregnant mothers will allow for a relatively complete registration of births. Examples of mortality surveillance systems are described in "Rapid Health Assessment". Deaths may be underreported if there is a fear of possible ration reduction; thus, an agreement might be negotiated with camp authorities not to decrease rations after a death occurs --at least during the emergency phase. Arrivals and departures should be monitored through the camp registration system.
Mortality. Each health facility should keep a log of all patient deaths (with cause of death and relevant demographic information) and report the deaths on a standardized form. Because many deaths occur outside of the health-care system, a community-based mortality surveillance system should be established. Such a system may include the employment of grave watchers, the routine issuance of burial shrouds, and the use of community informants (see "Rapid Health Assessment").
Death rates are the most specific indicators of a population's health status and are the category of data to which donors and relief agencies most readily respond. During the emergency phase of a relief operation, death rates should be expressed as deaths/10,000/day to allow for detection of sudden changes. In refugee camps, relief programs should aim at achieving a CMR of less than 1/10,000/day as soon as possible. This rate still represents approximately twice the "normal" CMR for non-displaced populations in most developing nations and should not signal a relaxation of efforts. After the emergency phase, death rates should be expressed as deaths/1,000/month to reflect the usual reporting frequency and to facilitate comparison with baseline, non-refugee death rates.
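The conversion between the two reporting units is direct, as the following minimal sketch (not part of the original guidance) shows.

```python
# Illustrative sketch: converting deaths/10,000/day (emergency phase) to
# deaths/1,000/month (post-emergency reporting), assuming a 30-day month.

def per_10000_day_to_per_1000_month(rate, days_per_month=30):
    return rate * days_per_month / 10   # x30 days, then 10,000 -> 1,000

print(per_10000_day_to_per_1000_month(1.0))  # 1/10,000/day = 3/1,000/month
```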
Age- and sex-specific mortality rates will indicate the need for interventions targeted at specific vulnerable groups. During the early stage of a relief operation, specific death rates for persons less than 5 years of age and greater than 5 years of age may suffice. Later, further disaggregation by age may be feasible (e.g., less than 1 year, 1-4 years, 5-14 years, and 15 years and older). Differences between male- and female-specific death rates may reflect inequitable access to resources or health services. Cause-specific mortality rates will reflect those health problems having the greatest impact on the refugee community and requiring the highest priority in public health program planning.
Morbidity. Health facilities and feeding centers should report morbidity information on the same form on which mortality is reported. Each disease reported in the system must have a written case definition that will guide health workers in their diagnosis and ensure the validity of data. Where practical, case definitions that rely on clinical signs and symptoms should be tested periodically for sensitivity and specificity against a laboratory standard (e.g., for malaria).
Knowledge of the major causes of illness and the groups in the affected population that are at greatest risk allows for the efficient planning of intervention strategies and the most effective use of resources. Morbidity rates are more useful than a simple tallying of cases, as trends can be followed over time, or rates compared with those from different populations. The monitoring of proportional morbidity (e.g., percentage of all morbidity caused by specific diseases) may be useful when specific control measures are being evaluated, although caution is needed in the interpretation of trends. A relative decrease in disease-specific proportional morbidity may merely reflect an absolute increase in the incidence of another disease.
Nutritional status. Data regarding nutritional status can be obtained through a nutritional assessment survey or a mass screening exercise. Surveys should be repeated at regular intervals to determine changes in nutritional status, but not so frequently that true changes have no time to emerge between surveys. All children less than 5 years of age should undergo a nutritional screening upon arrival at the camp and should continue to be weighed and measured monthly at MCH clinics in the camp. Information collected during these screenings should be included in HIS reports. If the initial screening identifies a high prevalence of undernutrition, cross-sectional surveys should be repeated at intervals of 6-8 weeks until the undernutrition prevalence rate is below 10%. Thereafter, surveys every 6-12 months will suffice, unless routine surveillance data indicate that nutritional status has deteriorated. Measurement of nutritional status is described in the Rapid Nutrition Assessment Manual.
The prevalence of acute malnutrition acts as an indicator of the adequacy of the relief ration. A high prevalence of malnutrition in the presence of an adequate average daily ration may indicate inequities in the food distribution system, or high incidence rates of communicable diseases (e.g., measles and diarrhea). The presence of nutritional deficiency disorders (i.e., pellagra, anemia, or xerophthalmia) indicates the need for ration supplementation.
Programs. Each health facility should keep a log of all activities. Immunizations should be recorded in a central record, as well as on the person's health record card. Records of health sector activities will be useful in determining whether certain groups in the population are underserved, and in planning measures to reach a broader population base. Although approximate immunization coverage may be estimated from the number of vaccine doses administered, the preferred method is by annual population surveys.
# Analysis and interpretation
Most data can be analyzed locally with pen and paper. The use of computers and a data entry and analysis program, such as Epi Info, version 5, may be practical at the regional or national level. Trends in mortality, morbidity, and nutritional status should be monitored closely. Careful attention should be paid to changing denominators, and changes in proportional mortality or morbidity should be interpreted with particular caution. Where applicable, correlations between mortality, morbidity, or nutritional status and health sector activities should be examined. Likewise, the proportion of malnourished children identified in population surveys who are enrolled in feeding programs can be used to estimate program coverage.
All components of the HIS should be analyzed and interpreted in an integrated fashion. A single element examined alone will reveal only a small portion of the entire picture and may be easily misinterpreted. For example, an apparent decrease in malnutrition prevalence should be interpreted in the context of childhood mortality rates (1). The use of health information to guide program decision-making will be facilitated if targets and critical indicators are established at the beginning. For example, a measles incidence rate of 1/1,000/month might be an indicator that would initiate specific preventive actions. Similarly, during a cholera outbreak, a case-fatality rate (CFR) of 3% in a given week might stimulate a critical review of case-management procedures.
# Control measures
The information gathered through the HIS should be used to develop recommendations and to implement specific control measures. Objectives for disease control programs should be established and progress towards these objectives regularly assessed. The presentation of data to decision-makers should make use of simple, clear tables and graphs. Most importantly, there should be regular feedback to the data providers through newsletters, bulletins, and frequent supervisory visits.
# Assessment
The HIS should be periodically assessed to determine its accuracy, completeness, simplicity, flexibility, and timeliness. The use of the data by program planners and key decision-makers should also be assessed. The HIS should evolve as the need for information changes.
# Nutrition
Rations
For populations totally dependent upon food aid, a general ration of at least 1,900 kcal/person/day is required. At least 10% of the calories in the general ration should be in the form of fats and at least 12% should be derived from proteins.
- Each of the rations above provides at least minimum quantities of energy, protein, and fat.
- Ration 2 provides additional quantities of various micronutrients through the inclusion of a fortified blended cereal. When provided in the general ration, fortified cereal blends should be used for the whole family.
The calculation of rations should account for caloric losses during transport and food preparation. Similarly, when the mean daily temperature falls below 20 C, the caloric requirement should be increased by 1% for each degree below 20 C.
The standard requirement of 1,900 kcals is based on the following demographic structure of a population:
- Children less than 5 years of age (20%).
- Children 5-14 years of age (35%).
- Women 15-44 years of age (20%), of whom 40% are pregnant or lactating.
- Males 15-44 years of age (10%).
- Adults greater than 44 years of age (15%).
The calculation of ration requirements should be adjusted for deviations from the above population structure (age/gender breakdown), the underlying health and nutritional status of the population, and relative activity levels of the community.
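The temperature adjustment, together with an allowance for losses, can be sketched as follows. This is illustrative only and not part of the original guidance; the 10% loss allowance and the example temperature are assumptions, and the demographic adjustment described above is not shown.

```python
# Illustrative sketch: adjusting the 1,900-kcal baseline ration for cold
# weather (+1% per degree C below 20 C) plus a hypothetical loss allowance.

BASE_KCAL = 1_900

def adjusted_ration(mean_temp_c, loss_allowance=0.0):
    kcal = BASE_KCAL
    if mean_temp_c < 20:
        kcal *= 1 + 0.01 * (20 - mean_temp_c)   # cold-weather adjustment
    return kcal * (1 + loss_allowance)          # transport/preparation losses

# Example: mean daily temperature of 12 C, 10% assumed losses.
print(f"{adjusted_ration(12, loss_allowance=0.10):.0f} kcal/person/day")  # 2257
```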
# Guidelines for ration distribution
- Food should be distributed in a community setting. Camps and mass feedings should be avoided if at all possible.
- Ration distribution should complement, not replace, any food that the refugees are able to provide for themselves.
- Distributed food should be familiar and culturally acceptable to the refugees.
- If food is distributed in uncooked form, adequate fuel and cooking utensils should be made available.
- Grains should be provided in ground form, or grinders must be made available.
- Distribution must be done on a regular basis, with no more than 10-14 days between distributions.
- If a specified food item in the ration cannot be supplied, the energy and nutrient content of the missing item should be provided by including additional quantities of another available commodity. This type of substitution is appropriate only as a short-term measure.
- Breast-feeding should be encouraged and supported.
- Lactating women should be provided with extra sources of calories and protein.
- Appropriate weaning foods (fats and oils) should be included in the general ration.
- Bottle feeding should be discouraged. Infant bottles and formula should not be distributed.
- Dry skim milk (DSM) and other milk products should not be included in the ration as such, except where milk consumption is part of the traditional diet. Milk products should be mixed with milled grains to form a cereal. Any milk product that is included in the rations should be fortified with vitamin A.
- If fresh fruits and vegetables are not available, fortified blended foods (e.g., corn-soya milk (CSM), CSB, or similar local products) should be provided to meet micronutrient requirements.
- Refugees should be encouraged to grow vegetables. Seeds, gardening implements, and suitable land should be made available for kitchen gardens. This is critical for the prevention of pellagra and scurvy.
- Refugees should be permitted access to local markets and be allowed to create markets. Trading or selling of ration commodities may be a necessary part of the camp economy. It enables refugees to supplement their diets with foods otherwise unavailable to them and to obtain essential nonfood items.
- It may be advisable to include certain culturally significant items (e.g., tea, sugar, and spices) in the food basket. Where such items are highly valued, refugees will sell or trade part of their ration to obtain them, reducing caloric intake; providing these items directly prevents this reduction.
# Supplementary feeding programs
SFPs are designed to help prevent severe malnutrition and to rehabilitate moderately malnourished persons. SFPs are not intended to be used as a method of targeting food during an emergency phase. Similarly, SFPs are inappropriate as a long-term supplement to an inadequate general ration. Implementation of an SFP is necessary under the following circumstances (a decision sketch follows this list):
- When the general ration is less than 1,500 kcal/person/day.
- Where nutritional assessment reveals that greater than 20% of children less than 5 years of age are acutely malnourished, as determined by a WFH Z-score of less than -2.
- When the prevalence of acute malnutrition (WFH Z-score less than -2) is between 10% and 20% and the general ration is between 1,500 and 1,900 kcal/person/day.
- Where there is a high incidence of measles or diarrheal disease.
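A minimal sketch of these decision rules follows; it is illustrative, not part of the original guidance, and the parameter names are assumptions.

```python
# Illustrative sketch of the SFP initiation criteria listed above.

def sfp_indicated(ration_kcal, acute_malnutrition_prev,
                  high_measles_or_diarrhea=False):
    """acute_malnutrition_prev: proportion of under-5s with WFH Z-score < -2."""
    if ration_kcal < 1_500:                       # ration below 1,500 kcal
        return True
    if acute_malnutrition_prev > 0.20:            # >20% acute malnutrition
        return True
    if 0.10 <= acute_malnutrition_prev <= 0.20 and ration_kcal < 1_900:
        return True                               # 10%-20% with 1,500-1,900 kcal
    return high_measles_or_diarrhea               # high measles/diarrhea incidence

print(sfp_indicated(ration_kcal=1_700, acute_malnutrition_prev=0.15))  # True
```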
Inclusion and discharge criteria. The following groups should be targeted for inclusion in an SFP:
- Acutely undernourished children less than 5 years of age (WFH Z-score less than -2 or less than 80% of the reference median).
- Pregnant and lactating women.
- Elderly, chronically ill (e.g., TB patients), or disadvantaged groups.
Children should be discharged from the SFP after they have maintained greater than 85% of median WFH (or a Z-score greater than -1.5) for a period of 1 month.
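These admission and discharge cut-offs translate directly into code. The sketch below is illustrative, not part of the original guidance, and assumes WFH Z-scores or percent-of-median values have already been computed.

```python
# Illustrative sketch of the SFP admission and discharge cut-offs above.

def admit_child(wfh_z=None, pct_of_median=None):
    # Admit if WFH Z-score < -2, or WFH < 80% of the reference median.
    return (wfh_z is not None and wfh_z < -2) or \
           (pct_of_median is not None and pct_of_median < 80)

def discharge_child(wfh_z, months_maintained):
    # Discharge once Z-score > -1.5 (~>85% of median) is held for 1 month.
    return wfh_z > -1.5 and months_maintained >= 1

print(admit_child(wfh_z=-2.4))                            # True
print(discharge_child(wfh_z=-1.2, months_maintained=1))   # True
```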
Caloric requirements. An SFP should provide at least 500 kcal and 15 g of protein/day in one or two feedings. High-energy milk (HEM), a calorie-dense milk mixture, may be used in an SFP. One milliliter of HEM provides 1 kcal of energy. The following formula makes 5 L of HEM:
- 420 g dried skimmed milk
- 250 g sugar
- 320 g oil
- 4.4 L water
If the general ration is inadequate (less than 1,900 kcal/person/day), the supplementary ration should provide 700-1,000 kcal/person/day in two to three feedings.
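Because 1 mL of HEM supplies roughly 1 kcal, the 5-L recipe scales linearly with the number of enrollees. The sketch below is illustrative, not part of the original guidance; the enrollment figure is hypothetical.

```python
# Illustrative sketch: scaling the 5-L HEM recipe above (1 mL HEM ~ 1 kcal).

HEM_PER_5_L = {"dried skimmed milk (g)": 420, "sugar (g)": 250,
               "oil (g)": 320, "water (L)": 4.4}

def hem_batch(enrollees, kcal_per_person=500):
    litres = enrollees * kcal_per_person / 1_000          # 1 kcal per mL
    factor = litres / 5
    return litres, {k: round(v * factor, 1) for k, v in HEM_PER_5_L.items()}

litres, recipe = hem_batch(enrollees=120)   # 120 x 500 kcal = 60 L of HEM
print(litres, recipe)
```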
# Types of SFPs
SFPs fall into two categories: on-site feeding and take-home rations. Listed below are some of the advantages and disadvantages of each type of SFP (1).
On-site feeding. "Wet" rations are prepared by SFP staff and served to recipients in the feeding center. Listed below are the advantages of wet rations:
- The likelihood that the ration will be shared among family members is reduced.
- SFP staff maintain control over the preparation and consumption of the supplementary meals.
- Additional services can be incorporated into the feeding program.
These are the disadvantages of wet rations:
- Young children must be accompanied to the center. This may lead to poor attendance rates and create a hardship for many mothers who must also provide for other family members.
- Feeding centers must be located near the homes of the recipients.
- In order to increase motivation and attendance, other services may need to be offered.
- Feeding centers are a drain on health personnel resources.
- Feeding center meals may be substituted for meals at home, resulting in a net food intake deficit.
- On-site feedings are not appropriate for targeting entire families or community groups.
- Children less than 2 years of age are generally underserved by on-site feedings.
- On-site feedings remove the family's responsibility and control over providing for family members.
- The possibility of cross-contamination and infection is increased in mass feedings.
Take-home programs. "Dry" rations are provided on a regular basis to supplement the general ration normally received. These are the advantages of dry rations:
- Daily attendance of the enrollee or other family members is not required.
- Fewer centers are needed, and these may be located at a greater distance from homes.
- The supplementary ration increases the purchasing power of the family.
- The ration is intended to provide supplementation 365 days/year, with no days missed for holidays.
- Dry rations generally achieve higher coverage rates than wet rations.
- There is less disruption of family activities, as daily attendance is not required.
- The family is able to maintain control over feeding practices.
These are the disadvantages of dry rations:
- Dry rations are less effective at targeting individual beneficiaries.
- Sharing of the ration among family members is increased.
# Other elements of SFPs
- Vitamin A should be administered upon admission to the SFP and every 3 months thereafter.
- If vitamin C is not included in the ration, vitamin C supplements should be administered weekly to all persons enrolled in SFPs.
- If iron deficiency anemia is highly prevalent, the provision of iron syrup to children enrolled in SFPs should be considered.
- All enrollees in the SFP should have their measles immunization status checked upon admission, and vaccine administered if needed.
- Mebendazole, an anthelminthic, should be administered along with the vitamin A, if it is available. Each child should be administered two 100 mg tablets to be chewed. Mebendazole should not be administered to infants less than 12 months of age or to pregnant women.
- On-site feeding centers require a regular supply of clean water and cooking fuel.
# Therapeutic feeding programs
Therapeutic feeding programs (TFPs) are considered a medical intervention, the purpose of which is to save lives and restore the nutritional health of severely malnourished children. The recommendations listed below are adapted from the procedures for selective feeding (2).
# Enrollment criteria.
Children should be enrolled in a TFP if they meet one of the following criteria:
- Children less than 5 years of age (or less than 115 cm in height) with WFH Z-score of less than -3 (less than 70% median).
- Children with clinically evident edema.
- Children referred to TFP by medical personnel.
# Caloric requirements
- Children enrolled in a TFP should receive 150 kcal and 3 g of protein for each kg of body weight per day (a worked example follows this list).
- Feeding should be done in four to six meals/day. Feeding centers that provide meals on a 24-hour basis are likely to be most effective.
- HEM should be included in the TFP ration.
- All children enrolled in the TFP should receive a full course of vitamin A upon admission.
- Severely malnourished children typically have poor appetites and may require nasogastric feedings for short intervals. Trained and experienced personnel are needed for this procedure.
- As a general practice, all doses of vitamin A should be documented on the child's growth record chart.
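A worked example of the per-child targets, as promised above; this sketch is illustrative and not part of the original guidance.

```python
# Illustrative sketch: daily TFP targets of 150 kcal and 3 g protein per kg.

def tfp_daily_targets(weight_kg, meals=6):
    kcal = 150 * weight_kg
    return {"kcal/day": kcal,
            "protein g/day": 3 * weight_kg,
            "kcal/meal": round(kcal / meals)}

# Example: a severely malnourished child weighing 8 kg, fed six times daily.
print(tfp_daily_targets(8))
# {'kcal/day': 1200, 'protein g/day': 24, 'kcal/meal': 200}
```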
Full treatment schedule. A full treatment schedule of oral vitamin A should be administered to all persons suffering from severe malnutrition (WFH Z-score less than -3) or exhibiting eye symptoms of vitamin A deficiency (xerosis, Bitot's spots, keratomalacia, or corneal ulceration). The dose schedule is given below:
- 200,000 IU on day 1.
- 200,000 IU on day 2.
- 200,000 IU 1-4 weeks later.
Children less than 12 months of age receive half doses.
Anemia. The prevalence of anemia can be determined through a rapid anemia survey using a portable hemoglobin (Hb) photometer (HemoCue system).
The CDC has established the following criteria for defining anemia (a classification sketch follows this list):
- Children less than 15 years of age: Hb less than 11.0 g/dL.
- Pregnant women: Hb less than 11.0 g/dL.
- Nonpregnant women: Hb less than 12.0 g/dL.
- Men: Hb less than 13.5 g/dL.
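The cut-offs can be applied mechanically, as in the sketch below; it is illustrative only, not part of the original guidance, and the simplified group labels are assumptions.

```python
# Illustrative sketch of the anemia cut-offs listed above (g/dL).

HB_CUTOFFS_G_DL = {"child": 11.0, "pregnant_woman": 11.0,
                   "nonpregnant_woman": 12.0, "man": 13.5}

def is_anemic(group, hb_g_dl):
    return hb_g_dl < HB_CUTOFFS_G_DL[group]

print(is_anemic("pregnant_woman", 10.4))  # True
print(is_anemic("man", 14.0))             # False
```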
The risk of anemia is highest in pregnant and lactating women, and in children ages 9-36 months. If the general ration contains inadequate amounts of absorbable iron, folate, and vitamin C, anemia may be prevented through the daily administration of iron/folate tablets and vitamin C supplements. Supplementary feeding of high-risk groups with CSM will also help to reduce the likelihood of anemia (CSM contains 18 mg of iron/100 g).
Iron/folic acid. Routine iron/folate supplements should be provided to all pregnant and lactating women through antenatal and postnatal clinics. Female health workers should be employed to seek out pregnant and lactating women and encourage their participation in these programs.
Vitamin C. Fortification of foods with vitamin C is problematic because vitamin C is unstable. Further study is needed on the appropriate vehicle for fortification. The best solution is to provide a variety of fresh foods either by including them in the general ration or by promoting access to local markets. In addition, local cultivation of vitamin C-containing foods should be encouraged. Patients with clinical scurvy should be treated with 250 mg of oral vitamin C two times daily for 3 weeks.
Niacin. Maize-eating populations are at greatest risk for niacin deficiency, which causes pellagra. Recent studies of pellagra outbreaks among refugee populations found groundnut consumption, garden ownership, and home maize milling (as an indicator of higher socioeconomic status) to be protective factors. Niacin-fortified flour should be included in the general ration. The process of fortifying maize flour with niacin is simple and relatively inexpensive.
Clinical cases of pellagra can be treated with nicotinamide. The recommended treatment schedule is 100 mg three times daily for 3 weeks. The total daily dose of nicotinamide should not exceed 600 mg. Where the diet is deficient in niacin, vitamin B complex tablets can be used to prevent pellagra.
Iodine. If the general ration is naturally deficient in iodine, fortification of items such as salt or monosodium glutamate should be considered.
# Vaccine-Preventable Diseases
- Measles
- Diphtheria
- Pertussis
- Tetanus
- Polio
- Tuberculosis
- Meningitis
# Overview
Only measles immunization should be part of the initial emergency relief effort; however, a complete Expanded Program on Immunization (EPI) should be planned as an integral part of an ongoing long-term health program. Diphtheria, tetanus toxoids (TT), and pertussis vaccine (DTP), oral polio vaccine (OPV), and bacille Calmette-Guerin (BCG) vaccinations are recommended. None should be undertaken, however, unless the following criteria are met: the population is expected to remain stable for at least 3 months; the operational capacity to administer vaccine is adequate; and the program can be integrated into the national immunization program within a reasonable length of time.
It is essential that adequate immunization records be kept. At the very minimum, personal immunization cards (i.e., "Road to Health" cards) should be issued.
In addition, a central register of all immunizations is desirable.
# Measles
Priority. Measles vaccination campaigns should be assigned the highest priority early in emergency situations. Measles immunization programs should begin as soon as the necessary personnel, vaccine, cold chain equipment, and other supplies are available. Measles immunization should not be delayed until other vaccines become available or until cases of measles have been reported.
In refugee populations fleeing from countries with high immunization coverage rates, measles immunization should still be accorded high priority. Studies of urban populations (e.g., Kinshasa, Zaire) and densely populated refugee camps (e.g., camps in Malawi) have shown that large outbreaks of measles may still occur even if vaccine coverage rates exceed 80%. For example, in a camp of 50,000 refugees, approximately 10,000 would be children less than 5 years of age. If the vaccine coverage rate was 80% and vaccine efficacy was 90%, approximately 2,800 children in this camp would still be susceptible to measles. In addition, certain countries achieved high coverage in the 12 to 23 month age group, leaving large numbers of older children unprotected.
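The arithmetic in this example can be checked directly. The sketch below is illustrative and not part of the original guidance.

```python
# Illustrative sketch reproducing the susceptibility arithmetic above:
# the unvaccinated plus vaccine failures among the vaccinated.

def susceptible(children, coverage, vaccine_efficacy):
    unvaccinated = children * (1 - coverage)
    vaccine_failures = children * coverage * (1 - vaccine_efficacy)
    return unvaccinated + vaccine_failures

# 10,000 children <5 years, 80% coverage, 90% efficacy -> 2,800 susceptible.
print(susceptible(10_000, coverage=0.80, vaccine_efficacy=0.90))  # 2800.0
```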
# Program management.
Responsibilities for each aspect of the immunization program need to be explicitly assigned to agencies and persons by the coordination agency.
The national EPI should be involved from the outset of the emergency. National guidelines regarding immunization should be applied in refugee settings.
A pre-immunization count should be conducted to estimate the number of children eligible for vaccination. This should not be allowed, however, to delay the start of the vaccination program.
# Choice of vaccine.
The standard Schwarz vaccine is recommended. The use of medium or high titer Edmonston-Zagreb (E-Z) vaccine is not yet recommended for refugee populations, since there are still concerns about its safety.
Target population. During the emergency phase, defined as that time during which the CMR is higher than 1/10,000/day, all children ages 6 months-5 years should be vaccinated upon arrival at the camp.
In long-term refugee health programs, vaccination should be targeted at all children ages 9 months-5 years, except during outbreaks when the lower age limit should again be dropped to 6 months.
Any child vaccinated between the ages of 6 and 9 months should be revaccinated as soon as possible after reaching 9 months of age or, if the child was 8 months old at the first vaccination, 1 month after the first dose.
If there is insufficient vaccine available to immunize all susceptible children, the immunization program should be targeted at the following high-risk groups, in order of priority:
- Undernourished or sick children ages 6 months-12 years who are enrolled in feeding centers or inpatient wards.
- All other children ages 6-23 months.
- All other children ages 24-59 months.
Older children, adolescents, and adults may also need to be immunized if surveillance data show that these groups are being affected during an outbreak.
Undernutrition is not a contraindication for measles vaccination! Undernutrition should be considered a strong indication for vaccination. Similarly, fever, respiratory tract infection, and diarrhea are not contraindications for measles vaccination. Unimmunized persons who are infected with HIV should receive the vaccine. Measles vaccine should also be administered in the presence of active TB (1).
Outbreak control. Measles immunization programs should not be stopped or postponed because of the presence of measles in the camp or settlement. On the contrary, immunization efforts should be accelerated.
Among persons who have already been exposed to the measles virus, measles vaccine may provide some protection or modify the clinical severity of the disease, if administered within 3 days of exposure.
Isolation of patients with measles is not indicated in an emergency camp setting.
Case management. All children who develop clinical measles in refugee camps should have their nutritional status monitored and be enrolled in a feeding program if indicated.
Children with measles complications should be administered standard treatment, e.g., ORT for diarrhea and antibiotics for acute lower respiratory infection (ALRI).
If they have not received vitamin A during the previous month, all children with clinical measles should receive 200,000 IU vitamin A orally. Children less than 12 months of age should receive 100,000 IU. This should be repeated every 3 months as part of the routine vitamin A supplementation schedule.
Children with complicated measles (pneumonia, otitis, croup, diarrhea with moderate or severe dehydration, or neurological problems) should receive a second dose of vitamin A on day 2.
If any eye symptoms of vitamin A deficiency are observed (xerosis, Bitot's spots, keratomalacia, or corneal ulceration), the following treatment schedule should be followed:
- 200,000 IU oral vitamin A on day 1.
- 200,000 IU oral vitamin A on day 2.
- 200,000 IU oral vitamin A 1-4 weeks later.
Children less than 12 months of age receive half doses.
# Diphtheria-tetanus-pertussis
Once a comprehensive EPI has been established, all children ages 6 weeks-5 years should receive three doses of DTP, 4-8 weeks apart.
# Poliomyelitis
One dose of OPV should be administered at birth, followed by three doses 4-8 weeks apart to all children 6 weeks-5 years of age.
# Tuberculosis
BCG vaccination should be offered as part of the comprehensive EPI, rather than as a separate TB program. One dose of BCG is administered intradermally at birth. Recommendations for TB control are presented in a separate section.
# Neonatal tetanus
All women between the ages of 15-44 years should receive a full schedule of TT vaccination.
Vaccination should commence at a younger age if girls less than 15 years of age commonly bear children in the refugee community. TT vaccination should be included as part of a standard antenatal care program. Female health workers should be employed to educate women about the need for the TT vaccination and to refer pregnant women to the antenatal care clinic. Although WHO recommends a 5-dose schedule for TT vaccination (see "WHO Tetanus Toxoid Vaccination Schedule"), the number of doses of TT administered varies from country to country. The schedule in refugee camps should be consistent with host country national policies.
# Meningococcal meningitis
Surveillance. In areas where epidemics of meningococcal meningitis are known to occur, as in Africa's "meningitis belt," surveillance for meningitis should be a routine part of a HIS. Such surveillance requires a standard case definition, the identification (in advance) of laboratory facilities and a source of supplies (e.g., spinal needles, antiseptics, test tubes), and a clearly established reporting network.
Outbreak identification and control. If an outbreak of meningococcal meningitis is suspected, early priority should be given to the determination of etiology and serogroup. This may be accomplished through the use of latex agglutination tests. It is also important to determine antibiotic resistance patterns. Cerebral spinal fluid (CSF) or petechial washings should be placed in suitable transport media and kept at 37 C during transport to a local or regional laboratory with the capacity to perform the needed analysis. If transport media are unavailable, CSF specimens should be placed in a test tube and transported at body temperature as soon as possible.
After an outbreak has been confirmed, a presumptive diagnosis of meningococcal meningitis among persons with suggestive symptoms and signs can be made by visual inspection of CSF from lumbar punctures; CSF will appear cloudy in probable cases. Clinical characteristics include fever, severe headache, neck stiffness, vomiting, and photophobia.
Endemic rates of meningococcal disease vary by geographic area, season, and age; thus, it is not possible to define a single rate that can be applied universally to identify an epidemic. In one study, an average incidence rate exceeding 15 cases/100,000/week for 2 consecutive weeks was predictive of an epidemic (defined as greater than 100 cases/100,000). Since this threshold may be valid only for populations greater than 100,000, and because the population of a refugee camp may be unknown, a doubling of the baseline number of cases from 1 week to the next over a period of 3 weeks may be used as a rough indicator of a meningitis outbreak.
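Both rough indicators are easy to track from weekly tallies. The sketch below is illustrative and not part of the original guidance; the weekly figures are hypothetical, and the doubling rule encodes one reading of the criterion above.

```python
# Illustrative sketch of the two rough meningitis outbreak indicators above.

def rate_threshold_flag(weekly_rates_per_100k, threshold=15, weeks=2):
    # Incidence >15/100,000/week for 2 consecutive weeks.
    run = 0
    for rate in weekly_rates_per_100k:
        run = run + 1 if rate > threshold else 0
        if run >= weeks:
            return True
    return False

def doubling_flag(weekly_cases):
    # One reading: each of the last 3 weeks at least double the baseline week.
    if len(weekly_cases) < 4 or weekly_cases[-4] == 0:
        return False
    baseline = weekly_cases[-4]
    return all(c >= 2 * baseline for c in weekly_cases[-3:])

print(rate_threshold_flag([8, 12, 17, 21]))  # True
print(doubling_flag([5, 11, 12, 14]))        # True
```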
# Vaccination.
Vaccination of refugees against meningococcal meningitis during nonepidemic periods is generally not considered to be an effective measure because of the short duration of protection in young children. If there are compelling reasons to believe that the refugee population is at high risk for an epidemic, preventive vaccination before the meningitis season may be warranted. In the event of an outbreak, vaccination should be considered if the following criteria are met:
- The presence of meningococcal disease is laboratory confirmed.
- Serogrouping indicates the presence of group A or group C organisms.
- The disease is affecting children greater than 1 year of age (for group A) or greater than or equal to 2 years (for group C).
If it is logistically feasible, the household contacts of identified cases should be checked for vaccination status and immunized if necessary. It may be simpler to organize a mass immunization program.
Because cases of meningococcal meningitis are likely to cluster geographically within a refugee camp, it may be most efficient to focus the vaccination campaign on the affected area(s) first. Although the target group for immunization should be determined from the epidemiology of the specific outbreak, vaccination of children and young adults between the ages of 1-25 years will generally cover the at-risk population.
# Chemoprophylaxis
Mass chemoprophylaxis is ineffective for control of epidemic meningococcal disease and is to be discouraged in a refugee setting. If chemoprophylaxis is to be instituted, the following guidelines should be implemented:
- Chemoprophylaxis should be administered simultaneously to all members of a household in which a case has been diagnosed, in order to prevent reinfection.
- Recovering patients should receive chemoprophylaxis to eliminate carriage.
- Adults: 600 mg rifampicin twice a day for 2 days.
- Children greater than 1 month old: 10 mg/kg rifampicin twice a day for 2 days.
- Neonates: 5 mg/kg rifampicin twice a day for 2 days.
Rifampicin should not be administered to pregnant women.
Patients should be warned that the drug will temporarily turn the urine and saliva orange.
Ceftriaxone and ciprofloxacin may be used as alternatives to rifampicin. These drugs, like rifampicin, are expensive and are generally not considered appropriate in a refugee setting. Because of widespread resistance, sulfonamides should not be used unless susceptibility tests show the organism to be sensitive. Widespread use of rifampicin may encourage drug resistance and could cause iatrogenic morbidity due to adverse drug reactions.
Treatment. IV-administered penicillin, which requires relatively intensive nursing care and medical equipment, is the treatment of choice for meningococcal disease in developed countries. However, in areas where such intensive care is not possible, a single intramuscular (IM) dose of long-acting chloramphenicol in oil suspension (Tifomycin) upon admission has been demonstrated to be effective. The dosage should be adjusted for age as follows:
- greater than or equal to 15 years of age, 3.0 g (6 mL).
- 11-14 years of age, 2.5 g (5 mL).
- 7-10 years of age, 2.0 g (4 mL).
- 3-6 years of age, 1.5 g (3 mL).
- 1-2 years of age, 1.0 g (2 mL).
- less than 1 year old, 50 mg/kg.
In about 25% of cases, a second dose of chloramphenicol will be needed. Patients should be admitted as inpatients and monitored closely to determine whether the additional dose is required. The efficacy of this regimen of one or two doses of IM chloramphenicol has been proven in studies in both Europe and Africa.
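The age-banded schedule lends itself to a simple lookup. The sketch below is illustrative and not part of the original guidance; it assumes the age bands above and 50 mg/kg for infants.

```python
# Illustrative sketch of the age-adjusted single IM dose schedule above
# (long-acting chloramphenicol in oil suspension).

def chloramphenicol_dose(age_years, weight_kg=None):
    if age_years < 1:
        if weight_kg is None:
            raise ValueError("weight required for infants: dose is 50 mg/kg")
        return f"{50 * weight_kg:g} mg"
    bands = [(3, "1.0 g (2 mL)"), (7, "1.5 g (3 mL)"),
             (11, "2.0 g (4 mL)"), (15, "2.5 g (5 mL)")]
    for upper, dose in bands:
        if age_years < upper:
            return dose
    return "3.0 g (6 mL)"

print(chloramphenicol_dose(5))                  # 1.5 g (3 mL)
print(chloramphenicol_dose(0.5, weight_kg=7))   # 350 mg
```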
Febrile seizures are common in small children, and acetaminophen (paracetamol) in either oral suspension or rectal suppositories should be administered to patients upon admission.
# Typhoid and cholera
Vaccination for typhoid or cholera is not recommended in refugee situations. The resources required for such a campaign are better spent on improving sanitation conditions (see "Diarrheal Diseases").
# Diarrheal Diseases
The critical elements of a diarrheal disease control program in a refugee camp are: a) prevention of morbidity, b) prevention of mortality through appropriate case management, c) surveillance for morbidity and mortality attributed to diarrheal diseases, and d) preparedness for outbreaks of severe diarrheal diseases (e.g., cholera and dysentery). The objectives of a camp diarrheal diseases control program should include the following:
- Maintaining the incidence of diarrheal cases at less than 1% per month.
- Achieving a CFR of less than 1% for diarrheal cases, including cholera.
# Prevention
Efforts aimed at reducing the incidence of diarrheal diseases and other enterically transmitted diseases should focus primarily on the provision of adequate quantities of clean water, improvements in camp sanitation, promotion of breast-feeding, and personal hygiene education.
The following recommendations relating to water and sanitation are largely based on the UNHCR Handbook for Emergencies (1) and Environmental Health Engineering in the Tropics (2).
Water. In general, the supply of adequate quantities of water to refugees in a camp setting has greater overall impact on health than a supply of small quantities of microbially pure water. The provision of adequate quantities of water is particularly effective in the prevention of bacillary dysentery. Nevertheless, whenever possible, sources of clean water should be sought or disinfection systems established. An additional health benefit derived from the provision of ample supplies of water, at a convenient distance from the camp, is the decrease in the daily workload of women, upon whom the burden of water collection usually falls.
Appropriate water sources should be identified before refugees arrive in an area. An adequate water supply is a crucial component of attempts to prevent disease and protect health and, as such, should be among the highest priorities for camp planners and administrators.
Standards. WHO has set standards for the microbiological quality of water supplies. These are as follows:
- For treated water supplies, the water entering the system should be free from coliforms. The water at the tap should be free of coliforms in 95% of samples taken over a 1-year period and should never have greater than 10 coliforms/100 mL. E. coli should never be present in the water.
- For untreated water supplies, there should be less than 10 coliforms/100 mL and no evidence of E. coli.
The water quality should be tested before using a water source, at regular intervals thereafter, and during any outbreak of diarrheal disease in which the water source may be implicated.
Sources. Whatever water source is chosen, it must be protected from contamination. Safety measures include:
- Springs protected by a spring box.
- Wells equipped with a well head, drainage apron, and a pulley, windlass, or pump.
- Surface water, such as lakes, dams, or rivers, provided there is a large mass of moving water. If surface water is to be used, water for drinking should be drawn upstream, away from obvious sources of contamination.
- Rainwater is not generally a practical source in a refugee setting.
Treatment. The selection of a water source should take into consideration the potential need for water treatment. Whether or not treatment is needed, the water should be tested routinely to ensure that it is of suitable quality.
When surface water is used as a communal source, covered storage will allow suspended particles to settle on the bottom, improving the quality of the water. Longer standing times and higher temperatures will yield a greater improvement in water quality.
Filtration and chlorination may require considerable effort and resources, but should be considered if the situation warrants.
Although boiling is an effective means of eliminating waterborne pathogens, it is not generally a practical solution in refugee camps where fuel supplies are limited.
As a short-term measure during an emergency (e.g., a cholera outbreak in which treatment of all water sources is not feasible), purification agents (such as chlorine) may be distributed to each household. In this way, water can be treated in household storage containers. However, a massive education effort is required, and such measures usually cannot be maintained for longer than a few weeks.
Water storage containers with narrow necks or covers that prevent people from introducing their hands into the container are likely to reduce further contamination of water once it is stored in the home. The use of separate containers to store water for drinking and water for washing is preferable.
Supply. The chosen water supply should be adequate to meet the needs of the camp year-round. Seasonal variations in rainfall and in camp population should be taken into consideration when selecting a water source.
The UNHCR recommends that a minimum quantity of 20 L of water/person/day be provided. Health clinics, feeding centers, and hospitals require 40-60 L/patient/day.
Ideally, no individual dwelling should be located greater than 150 m from a water source. At any greater distance, the use of water for hygiene is greatly diminished.
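These planning figures can be combined into a quick daily requirement estimate. The sketch below is illustrative and not part of the original guidance; the facility midpoint of 50 L/patient/day and the example counts are assumptions.

```python
# Illustrative sketch: daily camp water requirement from the figures above
# (20 L/person/day; 40-60 L/patient/day for clinics, feeding centers, hospitals).

def daily_water_litres(population, facility_patients=0,
                       litres_per_person=20, litres_per_patient=50):
    return population * litres_per_person + facility_patients * litres_per_patient

# Example: 30,000 refugees plus 150 facility patients.
print(f"{daily_water_litres(30_000, facility_patients=150):,} L/day")  # 607,500
```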
Sanitation. Camp sanitation plans should be drawn up before refugees arrive. Because of the crucial role it plays in disease prevention, sanitation should be an early priority for camp planners.
Community attitudes and cultural practices regarding sanitation and disposal of excreta are vital to the success of a sanitation project and should be taken into careful consideration. All efforts should be made to separate garbage and human waste from water and food supplies. Excreta should be contained within a specific area. Defecation fields may be used as a short-term measure until a more appropriate sanitation system can be implemented. This is particularly suitable in hot, dry climates.
The design and installation of latrines should also take into consideration the attitudes and practices of the refugee population. Latrines should be located so as to remove the possibility of contamination of the water source.
Latrines that are poorly maintained will not be used. For this reason, personal or family latrines are the best solution. However, limitations on building supplies, money, and space may make this impossible. If communal latrines are to be used, no more than 20 people should share one latrine and responsibility for maintaining cleanliness should be clearly assigned.
# Breast-feeding.
Breast-feeding is an effective measure for preventing diarrheal illness among infants. Exclusive breast-feeding for the first 4-6 months of a baby's life, and continued breast-feeding until the child is 2 years of age, should be encouraged through educational campaigns targeted at pregnant and lactating women. Distribution of milk products should be restricted, and feeding bottles should never be distributed within a camp (see "Nutrition").
Personal hygiene. Community health education should reinforce the importance of handwashing with soap and of general domestic and personal hygiene, in particular safe food-handling practices. Soap should be made readily available by relief agencies.
# Case management
Assessment (see "Patient Assessment"
). An adequate history should be taken from the patient or the patient's family. The duration of illness; quantity, frequency, and consistency of stool; presence or absence of blood in the stool; frequency of vomiting; and the presence of fever or convulsions should be assessed.
Assessment of dehydration and fluid deficit through careful physical examination should receive particular attention. Fever, rapid breathing, and hypovolemic shock may accompany severe dehydration.
Careful monitoring of the patient's weight and the signs of dehydration throughout the course of therapy will help assess the adequacy of rehydration. Adults with acute, dehydrating diarrhea should be carefully assessed by a physician to rule out cholera.
# Management of patients.
In the camp setting, all patients with diarrhea should be encouraged to report to a clinic or health post for assessment and for advice on feeding, fluid intake, and diarrhea prevention. The treatment of dehydration should always be initiated in the clinic. Ideally, a central clinic should be supplemented with several small ORT centers in the camp, staffed by trained community health workers.
# Prevention of dehydration.
Case management should focus on the prevention of dehydration under two sets of circumstances: a) when a patient with diarrhea shows no signs of dehydration, b) when a patient has already been treated for dehydration in the ORT corner and is being released from medical care. Management of patients in these situations includes the following.
# ORS.
Mothers should be shown how to mix and give ORS and initially be given a 2-day supply. The amount to be given at home is as follows (a lookup sketch follows the list).
- Children less than 2 years old: 50-100 mL (1/4 to 1/2 large cup) of ORS solution after each stool.
- Older children: 100-200 mL after each stool.
- Adults: As much as they want; however, dehydrated adults who fail to respond promptly to ORS should be reassessed to exclude cholera.
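The volume schedule above reduces to a simple age-based lookup. The sketch below is illustrative only (the age cutoff separating "older children" from adults is our assumption; the volumes are those listed above):

```python
def ors_after_each_stool_ml(age_years):
    """Recommended ORS volume (mL) after each loose stool, per the
    home-treatment amounts listed above. Returns a (low, high) range,
    or None for adults, who may drink as much as they want."""
    if age_years < 2:
        return (50, 100)      # 1/4 to 1/2 large cup
    if age_years < 15:        # assumption: "older children" = 2-14 years
        return (100, 200)
    return None               # adults: ad libitum

print(ors_after_each_stool_ml(1))   # (50, 100)
print(ors_after_each_stool_ml(7))   # (100, 200)
```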
# Increased fluids.
Patients should be instructed to increase their normal intake of fluids. Any locally available fluids known to prevent dehydration, especially those that can be prepared in the home (e.g., cereal-based gruels, soup, and rice water), should be encouraged. Soft drinks are not recommended because of their high osmolality.
Continued feeding. Infants who are breast-fed should continue to receive breast milk. If an infant is receiving milk formula in a feeding center, the milk should be diluted with an equal volume of clean water until the diarrhea stops.
For children greater than 4-6 months of age:
- Give freshly prepared foods, including mixes of cereal and beans or cereal and meat, with a few drops of vegetable oil added.
- Offer food every 3-4 hours, or more often for very young children.
- Encourage the child to eat as much as he or she wants.
- After the diarrhea stops, give one extra meal each day for a week.
Monitor condition. The mother should be advised to return to the clinic with the child if he/she continues to pass many stools, is very thirsty, has sunken eyes, has a fever, or does not generally seem to be getting better.
# Management of the dehydrated patient
Every health center in a refugee camp should have an area allocated for supervised oral rehydration (see "Guidelines for Rehydration Therapy"). Staff assigned to this activity need to be well-trained in the assessment and treatment of the dehydrated patient. Individual patients should be monitored to determine whether the recommended doses are adequate for their needs or whether rehydration proceeds faster than is expected.
For babies who are unable to drink but are not in shock, a nasogastric tube can be used to administer ORS solution at the rate of 15 mL/kg body weight/hour. For infants in shock, a nasogastric tube should be used only if IV equipment and fluids are not available.
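For the nasogastric rate quoted above, the hourly volume scales linearly with body weight. A short Python check (illustrative only; the example weight is invented):

```python
def ng_ors_ml_per_hour(weight_kg, rate_ml_per_kg=15):
    """ORS volume per hour by nasogastric tube for a baby who cannot
    drink but is not in shock (15 mL/kg body weight/hour, as above)."""
    return weight_kg * rate_ml_per_kg

print(ng_ors_ml_per_hour(6))  # a 6-kg infant: 90 mL/hour
```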
Reassessment. The patient's hydration status should be reassessed after 3-4 hours, and treatment continued according to the degree of dehydration at that time. Note: If the child is still dehydrated, rehydration should continue in the center. The mother should offer the child small amounts of food.
If the child is less than 12 months of age, the mother should be advised to continue breastfeeding. If the child is not being breast-fed, 100-200 mL of clean, plain water should be given before continuing the ORS. Older children and adults should consume plain water as often as they wish throughout the course of rehydration with ORS solution.
Nutritional maintenance. Infants should resume feeding as outlined above. For children greater than 4-6 months old and adults, feeding should begin as soon as the appetite returns. Energy-rich, easily digestible foods will help maintain their nutritional status. There is no reason to delay feeding until the diarrhea stops and there is no justification for "resting" the bowel through fasting. Note: Children enrolled in SFPs or TFPs who develop diarrhea with dehydration should be fed HEM diluted with ORS in a ratio of 1:1, alternating with plain ORS. The overall volume of fluid should be calculated according to the child's weight and degree of dehydration.
Use of chemotherapy. Antimicrobial drugs are contraindicated for the routine treatment of uncomplicated, watery diarrhea. Specific indications for their use include:
- Cholera.
- Shigella dysentery.
- Amebic dysentery.
- Acute giardiasis.
For specific recommendations see "Cholera" and "Dysentery". Antidiarrheal agents are contraindicated for the treatment of diarrheal disease. Stimulants, steroids, and purgatives are not indicated for treatment of diarrheal disease and may produce adverse effects.
# Surveillance for Diarrheal Diseases
All health facilities that serve the refugee population should maintain case records of diarrheal diseases as part of the routine HIS. Records should include the degree of dehydration at the time of presentation. Case definitions should be standardized. Dysentery cases should be recorded as a separate category.
Any increase in the number or severity of cases, change in the type of diarrhea, rise in diarrhea-specific mortality, or change in the demographic breakdown of the cases should be reported. A case definition for cholera should be established for the purpose of surveillance. Any suspected cholera cases should be reported immediately.
Sample case definitions for cholera and dysentery are provided below.
# Cholera
Identification of the pathogen by laboratory culture is necessary to confirm the presence of cholera. Initially, rectal swabs of patients with suspected cholera should be transported to the laboratory in Cary-Blair transport medium (see "Collecting, Processing, Storing, and Shipping Diagnostic Specimens in Refugee Health-Care Environments"). The laboratory should determine the antibiotic sensitivity of the cultured strain. Once an outbreak is confirmed, it is not necessary to culture every case. Additionally, it is not necessary to wait until an outbreak has been confirmed to begin treatment and preventive measures.
# Epidemics
In the event of an outbreak of cholera, early case-finding will allow for rapid initiation of treatment. Aggressive case-finding by trained community health workers should be coupled with community education to prevent panic and to promote good domestic hygiene.
Treatment centers should be easily accessible. Most patients can be treated with ORS alone in the local clinic and still achieve a CFR less than 1%. If the attack rate for cholera is high, it may be necessary to establish temporary cholera wards to handle the patient load. Health centers should be adequately stocked with ORS, IV fluids, and appropriate antibiotics. Health workers must be trained in the management of cholera.
Surveillance should be intensified and should change from passive to active case-finding. The number of new cholera cases and deaths should be reported daily, along with other relevant information (e.g., age, sex, location in camp, length of stay in camp).
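A daily report of this kind can be tallied directly from a case register. The sketch below is a hypothetical illustration (the field layout and data are invented); it counts new cases per day and tracks the cumulative case-fatality rate (CFR) against the less-than-1% target:

```python
from collections import Counter
from datetime import date

# One record per suspected cholera case:
# (date reported, age, sex, camp section, died) -- toy example data;
# a real register would be far larger.
register = [
    (date(1992, 5, 1), 23, "F", "B3", False),
    (date(1992, 5, 1),  4, "M", "B3", True),
    (date(1992, 5, 2), 31, "M", "C1", False),
    (date(1992, 5, 2),  2, "F", "B3", False),
]

daily = Counter(day for day, *_ in register)
deaths = sum(1 for *_, died in register if died)
cfr = 100.0 * deaths / len(register)

for day in sorted(daily):
    print(day, daily[day], "new cases")
print(f"cumulative CFR: {cfr:.1f}% (target: below 1%)")
```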
# Treatment
The goal of cholera treatment is to maintain the CFR at less than 1%.
# Rehydration therapy
Rehydration needs to be aggressive. However, careful supervision is necessary to prevent fluid overload, especially when children are rehydrated with IV fluids. Most cases of cholera can be treated through the administration of ORS solution (see "Patient Assessment" and "Guidelines for Rehydration Therapy"). Persons with severe disease may require IV fluid, which should be administered following the guidelines outlined in "Diarrheal Diseases".
# Antibiotics
Antibiotics reduce the volume and duration of diarrhea in cholera patients. Antibiotics should be administered orally. Doxycycline should be used when available, in a single dose of 300 mg for adults and 6 mg/kg for children less than 15 years of age. Tetracycline should be reserved for severely dehydrated persons, who are the most efficient transmitters because of their greater fecal losses. Tetracycline should be administered according to the following schedule (a dosing sketch appears after the list).
- Adults: 500 mg every 6 hours for 72 hours.
- Children: 50 mg/kg/day in four divided doses (every 6 hours) for 72 hours.
Chloramphenicol can be used as an alternative to tetracycline; the dosage is the same. When tetracycline and chloramphenicol resistance is present, furazolidone, erythromycin, or trimethoprim-sulfamethoxazole (TMP-SMX) may be used.
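The schedule above translates into simple weight- and age-based arithmetic. This is a sketch only (function names and example values are ours), not a prescribing tool:

```python
def doxycycline_single_dose_mg(age_years, weight_kg):
    """Single oral doxycycline dose per the text: 300 mg for adults,
    6 mg/kg for children less than 15 years of age."""
    return 300 if age_years >= 15 else 6 * weight_kg

def tetracycline_child_per_dose_mg(weight_kg):
    """Tetracycline for children: 50 mg/kg/day in four divided doses
    (every 6 hours) for 72 hours; returns the single-dose amount."""
    return 50 * weight_kg / 4

print(doxycycline_single_dose_mg(30, 60))   # 300 mg, once
print(doxycycline_single_dose_mg(8, 24))    # 144 mg, once
print(tetracycline_child_per_dose_mg(24))   # 300 mg every 6 hours
```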
# Epidemiologic investigation
Epidemiologic studies to determine the extent of the outbreak and the primary modes of transmission should be conducted so that specific control measures can be applied. The CFR should be monitored closely to evaluate the quality of treatment.
Case-control studies may be undertaken to identify risk factors for infection. Environmental sampling, examination of food, and the use of Moore swabs for sewage sampling may be useful to confirm the results of epidemiologic studies and define modes of transmission.
# Control and prevention
Health education. The community should be kept informed of the extent and severity of the outbreak and educated about the ease and effectiveness of treatment. Emphasis should be placed on the benefits of prompt reporting and early treatment. The community should be advised about suspected vehicles of transmission. The need for good sanitation, personal hygiene, and food safety should be stressed. Health workers involved in treating cholera patients need to observe strict personal hygiene, washing their hands with soap after examining each patient. Smoking should be prohibited in cholera wards and clinics.
Water supply. Any water supplies implicated through epidemiologic studies should be tested. Any contaminated water sources should be identified and access to those sources cut off. Alternative sources of safe drinking water should be identified and developed as a matter of urgency.
Food safety. Community members should be informed of any food item that has been implicated as a possible vehicle of transmission. Health education messages regarding food preparation and storage should be disseminated.
During an outbreak, feeding centers should be extremely vigilant in the preparation of meals because of the potential for mass infection. Food workers should have easy access to soap and water for handwashing. Food workers should always wash their hands after defecating, and any food worker who is experiencing diarrhea should be prohibited from working.
Chemoprophylaxis. Mass chemoprophylaxis is not an effective cholera control measure and is not recommended. Although the WHO Guidelines for Cholera Control suggest that chemoprophylaxis may be justified for closed groups (such as refugee camps), CDC studies indicate that focusing on other preventive activities (i.e., providing an adequate water supply, improving camp sanitation, and providing adequate and prompt treatment) results in a more effective use of resources. If resources are adequate and transmission rates are high (greater than 15%), consideration should be given to providing a single dose of doxycycline to immediate family members of diagnosed patients.
Vaccines. Currently available vaccines are not recommended for the control of cholera among refugee populations. The efficacy of these vaccines is low and the duration of protection provided is short. Vaccination campaigns divert funds and personnel from more important cholera control activities and give refugee and surrounding populations a false sense of security.
# Dysentery
When possible, patients presenting with signs and symptoms of dysentery should have stool specimens examined by microscopy to identify Entamoeba histolytica. Care should be taken to distinguish large white cells (a nonspecific indicator of dysentery) from trophozoites. Amebic dysentery tends to be misdiagnosed.
# Shigellosis
If a microscope is unavailable for diagnosis, or if definite trophozoites are not seen, persons with bloody diarrhea should be treated initially for shigellosis. Appropriate treatment with antimicrobial drugs decreases the severity and duration of dysentery caused by Shigella and reduces the duration of pathogen excretion. The selection of an antimicrobial treatment regimen is often complicated by the presence of multiresistant strains of Shigella. The choice of a first-line drug should be based on knowledge of local susceptibility patterns. If no clinical response occurs within 2 days, the antibiotic should be changed to another recommended for that particular strain of shigellosis. If no improvement occurs after an additional 2 days of treatment, the patient should be referred to a hospital or laboratory for stool microscopy. At this stage, a diagnosis of resistant shigellosis is still more likely than amebiasis.
Drugs of choice. Treatment guidelines for shigellosis are listed below (a pediatric dosing sketch follows the regimens).
- Ampicillin
  Children: 100 mg/kg/day in four divided doses for 5 days.
  Adults: 500 mg four times daily for 5 days.
- TMP-SMX
  Children: 10 mg/kg/day TMP and 50 mg/kg/day SMX in two divided doses for 5 days.
  Adults: 160 mg TMP and 800 mg SMX twice daily for 5 days.
For strains resistant to these regimens, alternative treatment with nalidixic acid or tetracycline is indicated.
- Nalidixic acid: 55 mg/kg/day in four divided doses for 5 days.
- Tetracycline: 50 mg/kg/day in four divided doses for 5 days.
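The pediatric regimens above share one pattern: a total daily dose per kilogram, split into divided doses. A minimal sketch (the regimen table is transcribed from the text; the helper itself is hypothetical):

```python
# drug: (child mg/kg/day, divided doses per day, days of treatment)
SHIGELLA_REGIMENS = {
    "ampicillin":       (100, 4, 5),
    "TMP (of TMP-SMX)": (10,  2, 5),
    "nalidixic acid":   (55,  4, 5),
    "tetracycline":     (50,  4, 5),
}

def child_single_dose_mg(drug, weight_kg):
    """Single-dose amount for a child, from the mg/kg/day figures above."""
    mg_kg_day, doses_per_day, _days = SHIGELLA_REGIMENS[drug]
    return mg_kg_day * weight_kg / doses_per_day

print(child_single_dose_mg("ampicillin", 12))  # 300 mg four times daily
```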
The fluoroquinolones (e.g., ciprofloxacin and ofloxacin) are highly effective for the treatment of shigellosis, but are expensive and have not yet been approved for treatment of children or pregnant or lactating women with shigellosis.
Because multiresistant strains of Shigella have become widespread and because Shigella strains can rapidly acquire resistance in endemic and epidemic settings, it is advisable that periodic antibiotic susceptibility testing be performed by a reference laboratory in the region. Note: WHO does not recommend mass prophylaxis or prophylaxis of family members as a control measure for shigellosis.
# Amebiasis and giardiasis
Treatment for amebiasis or giardiasis should not be considered unless microscopic examination of fresh feces shows amebic or Giardia trophozoites, or unless two different antibiotics given for shigellosis have failed to produce clinical improvement.
Treatment guidelines for amebiasis are as follows:
- Metronidazole
  Children: 30 mg/kg/day for 5-10 days.
  Adults: 750 mg 3 times/day for 5-10 days.
Treatment guidelines for giardiasis are as follows:
- Metronidazole
  Children: 15 mg/kg/day for 5 days.
  Adults: 250 mg 3 times/day for 5 days.
# Malaria diagnosis and treatment
Diagnosis. If possible, a thick blood smear with Giemsa stain should be the basis for the diagnosis of malaria. These smears will also provide the basis for transmission surveillance in camps or geographic areas. If the patient load exceeds the capability of the laboratory to perform thick smears on all suspected cases, a system of microscopic diagnosis for a percentage of suspected cases should be established. When diagnoses are made by locally trained microscopists in small field laboratories, a randomly selected sample of both positive and negative slides should be sent to a reference laboratory for verification in order to maintain quality control.
When laboratory facilities are not available, clinical symptoms (paroxysmal fever, chills, sweats, and headache) and signs (measured fever) are the best predictors of malaria infection.
In situations in which year-round high malaria endemicity has been established, all episodes of fever illness can be assumed to be caused by Plasmodium falciparum. However, health workers should bear in mind other causes of fever, including pneumonia, ALRI, or meningitis. In areas where transmission is highly seasonal, surveys should be conducted each year at the beginning of the high transmission season.
The presence of Plasmodium on blood smears does not prove that malaria is the cause of febrile illness, even in areas where malaria is highly prevalent. Other causes should be considered and ruled out.
Treatment with chemotherapy. In areas without chloroquine resistance, the oral regimen of chloroquine usually employed in the treatment of uncomplicated attacks of malaria is as follows (a worked dosing sketch follows the list):
- Adults: A total dose of 1,500 mg chloroquine (approximately 25 mg/kg body weight) should be given during a 3-day period. This can be given as 600 mg, 600 mg, and 300 mg at 0, 24, and 48 hours, respectively.
- Pregnant women: Pregnant women with malaria should be treated aggressively using the regimen for adults. Chloroquine is safe during pregnancy. (Quinine is also safe, although pregnant women receiving IV-administered quinine should be monitored carefully for hypoglycemia.)
- Children: A total dose of 25 mg/kg body weight chloroquine should be given during a 3-day period. This can be administered as 10 mg/kg, 10 mg/kg, and 5 mg/kg body weight at 0, 24, and 48 hours, respectively.
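The 25 mg/kg total conveniently splits 10/10/5 over the 3 days, with the 1,500-mg adult total acting as a cap. A worked sketch (ours, illustrative only):

```python
def chloroquine_3_day_schedule(weight_kg, adult_total_mg=1500):
    """Chloroquine base doses at 0, 24, and 48 hours: 10, 10, and
    5 mg/kg (total 25 mg/kg), capped at the 1,500-mg adult total."""
    schedule = [(hour, mg_kg * weight_kg)
                for hour, mg_kg in ((0, 10), (24, 10), (48, 5))]
    total = sum(dose for _, dose in schedule)
    if total > adult_total_mg:  # heavier patients: scale down to the cap
        schedule = [(h, d * adult_total_mg / total) for h, d in schedule]
    return schedule

for hour, mg in chloroquine_3_day_schedule(20):   # a 20-kg child
    print(f"hour {hour}: {mg:.0f} mg")            # 200, 200, 100 mg
```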
In areas where the likelihood of reinfection is low, consideration may be given to supplementation of chloroquine treatment with primaquine for persons infected with Plasmodium vivax.
- Adults: 15 mg daily for 14 days.
- Children: 0.3 mg/kg/day.
Among populations in which severe glucose-6-phosphate dehydrogenase (G-6-PD) deficiency is common (notably among Asians), however, primaquine should not be administered for more than 5 days. Administration of primaquine for longer periods may result in life-threatening hemolysis. Whenever possible, persons needing primaquine should first have a blood test for G-6-PD deficiency.
When laboratory analysis is performed, the first dose of chloroquine should be administered when the blood smear is taken. The patient should be instructed to return the second day for the results of the smear. If the smear is positive, chemotherapy should be continued. If the smear is negative and the patient remains febrile, other causes of fever should be identified.
If supervised therapy during a 3-day period is not possible, the first dose of chloroquine should be given under supervision and the additional doses may be given to the patient with appropriate instructions.
Patients who remain symptomatic longer than 3 days into therapy should have a repeat thick smear examined. Alternative therapy should be instituted if the degree of parasitemia has not diminished markedly by this time.
In areas with chloroquine resistance, treatment of patients may be the same as in areas of chloroquine-sensitive malaria, or it may include an alternative first-line drug. Additional care in the follow-up of patients is required.
- If the patient continues to have symptoms of malaria 48-72 hours after the start of recommended chloroquine treatment, the patient should be treated with a second-line drug.
- The choice of an alternative drug depends on the availability of the drugs and the relative sensitivity of the parasites. Possible alternative drugs include sulfa drugs in combination with pyrimethamine (Fansidar, Maloprim), tetracycline, quinine, and newer drugs such as mefloquine. Use of alternative drugs should be consistent with national malaria control policies in the host country.
Fever control. Antipyretics (e.g., acetaminophen [paracetamol]) and anticonvulsants are often necessary for the care of the patient with malaria.
Children with high fevers should be frequently sponged with tepid water. Patients should increase their intake of fluids as the febrile illness will most likely be accompanied by mild dehydration. Patients with signs of moderate dehydration should be given ORS.
# Chemoprophylaxis
During epidemics (seasons of high rates of transmission), malaria chemoprophylaxis should be considered for the following high-risk groups:
- Children less than 5 years of age, especially those suffering from malnutrition, anemia, or other debilitating diseases.
- Pregnant women.
- Other groups that are at increased risk for complications of malaria illness due to compromised health status.
The decision to provide chemoprophylaxis to high-risk persons should be based upon the capabilities of the health-care system to accomplish the following:
- At-risk persons can be readily identified and assembled.
- Follow-up can be assured.
- Sufficient personnel and medication are available to ensure regular administration of services.
- The parasite is known to be generally sensitive to the drug used.
Administration of chemoprophylaxis to high-risk groups can be logistically difficult and may be too great a strain on the capacities of the health-care system to be feasible.
Expatriates working in an endemic area should be on weekly chloroquine (300 mg chloroquine base) during the entire period of exposure and for an additional 6 weeks after leaving the area. In areas where chloroquine resistance is documented, prophylaxis with mefloquine is recommended (250 mg weekly dose).
# Severe malaria
Severe malaria is considered a medical emergency and demands prompt and specific medical care. Signs and symptoms of severe malaria include:
- Severe anemia.
- Hemoglobinuria, oliguria, or anuria.
- Hypotension and respiratory distress.
- Jaundice.
- Hemorrhagic diatheses.
- Cerebral malaria. Signs of abnormal central nervous system (CNS) function, which may be present in cerebral malaria, include drowsiness, mental confusion, coma, and seizures.
In the presence of signs of volume depletion, IV fluid containing dextrose should be administered to maintain cardiac output and renal perfusion.
- Care in the administration of fluid therapy is required, since fluid overload can precipitate pulmonary edema or adult respiratory distress syndrome (ARDS), which can worsen cerebral edema.
- The IV fluid of choice is 5% dextrose with 1/2 normal saline, since this mixture provides dextrose to prevent hypoglycemia and less salt to leak into pulmonary and cerebral tissues. Alternative IV fluids should be considered if this is unavailable.
Hypoglycemia is a complicating factor in patients with cerebral malaria and a risk factor for fatal outcome. When possible, blood glucose levels should be monitored. Hypoglycemia should be suspected whenever there is a deterioration in clinical status, especially in the presence of new neurologic findings. Hypoglycemia can be treated presumptively with 50 mL of 50% IV dextrose.
Blood transfusion is indicated when the hemoglobin (Hb) concentration is less than 4 g/dL, or less than 6 g/dL when the patient also has signs of heart failure (i.e., dyspnea, enlarging liver, gallop rhythm).
The administration of steroids has an adverse effect on outcome in cerebral malaria. Therefore, steroids are no longer recommended.
# Anemia
Most anemias caused by malaria will reverse spontaneously after anti-malarial therapy. However, anemia may progress for several weeks after successful treatment of severe malaria and may require treatment.
For some patients (especially children), blood transfusion may be lifesaving. Recent studies indicate that blood transfusion should be given for Hb less than 4 g/dL or Hb less than 6 g/dL in the presence of symptoms of respiratory distress. Because of the potential for HIV or hepatitis B transmission, blood transfusion should be reserved for medical emergencies for which no alternative treatment exists. Facilities for screening blood for HIV antibodies are rare in refugee camps. Whenever feasible, patients requiring transfusion should be transferred to hospitals where such facilities exist.
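The transfusion thresholds quoted here and under "Severe malaria" amount to a two-branch rule. A sketch (illustrative only; clinical judgment, not code, governs the decision):

```python
def transfusion_indicated(hb_g_per_dl, respiratory_distress=False,
                          heart_failure=False):
    """Thresholds from the text: transfuse if Hb < 4 g/dL, or if
    Hb < 6 g/dL with respiratory distress or signs of heart failure."""
    return (hb_g_per_dl < 4
            or (hb_g_per_dl < 6 and (respiratory_distress or heart_failure)))

print(transfusion_indicated(5.2))                             # False
print(transfusion_indicated(5.2, respiratory_distress=True))  # True
```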
The anemia of malaria is not associated with iron loss, and iron replacement is helpful only if a coexisting iron deficiency is present. Folic acid replacement may be helpful during the recovery period, when rapid erythrocyte replacement occurs.
# Renal failure
Replacement of fluid losses (sweat, vomit, and diarrhea) is recommended to prevent renal failure. If renal failure is suspected, strict monitoring of fluid intake and output is necessary.
In the presence of oliguria, a fluid challenge followed by furosemide injection can help to differentiate acute renal failure from prerenal causes. If renal failure is demonstrated, fluid intake must be limited to daily replacement of insensible loss plus urine/vomitus volume in the previous 24 hours. Protein intake should be limited to less than 30 g/day, and all drug doses should be adjusted for renal failure.
# Selected Reading
Ministry of Health. Malawi guidelines for the management of malaria. Malawi, October 1991.
Steketee RW, Campbell CC. Control of malaria among refugees and displaced persons. Atlanta, GA: CDC, 1988 (unpublished).
# Case-finding and analysis during epidemics
A case-finding mechanism should be established. The dynamics of this system will depend upon the disease being investigated and the specific attributes of the camp involved. Case-finding will be facilitated if a cadre of refugee community health workers has been identified and trained. The presence of an active camp health committee will also promote effective case-finding.
Time, place, and person. Certain information should be collected from each patient, or from their families, and recorded in a register. This should include:
- The date (and perhaps the time) of onset of symptoms.
- The length of time between arrival in camp and the onset of symptoms.
- Patient's age and gender.
- Place of residence.
- Ethnic group (if applicable).
Determining who is at risk. The data collected from patients should be used in an ongoing analysis to determine who is at greatest risk and to target specific interventions most effectively.
Prepare a graph showing the number of cases per day. This "epidemic curve" will indicate the point at which the outbreak first occurred, the magnitude of the outbreak, the incubation period, and possible modes of transmission.
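Tallying onset dates from the case register is all an epidemic curve requires. The following is a minimal text-based sketch in Python (the onset dates are invented):

```python
from collections import Counter
from datetime import date, timedelta

# Dates of symptom onset from the case register -- invented data.
onsets = [date(1992, 6, d) for d in (3, 4, 4, 5, 5, 5, 6, 6, 8)]

curve = Counter(onsets)
day = min(curve)
while day <= max(curve):
    print(day.isoformat(), "#" * curve[day])  # a crude text epidemic curve
    day += timedelta(days=1)
```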
Using a current map of the camp, mark the residence or section of the camp of each case as it is reported. This will allow investigators to identify clusters of patients and may help to pinpoint a common source of infection.
A breakdown of cases by age, gender, length of stay in camp, vaccination status, if pertinent, and perhaps ethnic group will enable investigators to identify those groups or persons who are at highest risk for infection.
Testing a hypothesis. As preliminary data are collected and analyzed, a hypothesis on the causative exposure should be developed and tested. A case-control study and analysis will help determine likely risk factors and sources of exposure. Laboratory analysis of environmental samples may be used to confirm a suspected source of infection.
Preparing a report. Meetings should be held regularly with camp administrative officials, UNHCR and NGO representatives, local health officials, and refugee community leaders to discuss the evolution of the outbreak and to stress current control measures. In some cases, a written report may be necessary before any control and prevention efforts are undertaken. The report should include an estimate of the magnitude and health impact of the outbreak in numbers of projected cases and deaths. It should also include an estimation of the need for outside assistance and supplies. A written report will also provide a valuable record for use in future investigations. Moreover, the written report can serve as a useful teaching tool.
# Control and prevention
As the epidemiologic investigation progresses, it is important that decision-makers be informed as to the findings so that appropriate control measures may be instituted. Continued disease surveillance will determine the effectiveness of control measures.
# Feeding programs
Discharge criteria. Discharge from a TFP (therapeutic feeding program) to a SFP (supplementary feeding program) should occur when the following criteria are met:
- The child has maintained 80% WFH (or a Z-score of -2) for a period of 2 weeks.
- Weight gain has occurred without edema.
- The child is active and free from obvious illness.
- The child exhibits a good appetite.
# Monitoring requirements
- A register should be maintained with the details of each patient.
- Each patient should be given a personal ration card and an identification bracelet.
- Each patient should be weighed daily at first, and then twice weekly to monitor progress.
- TFPs should aim for a weight gain of 10 g/kg body weight/day (a monitoring sketch follows this list).
- All absentees should be followed up at home and encouraged to resume attendance.
- Regular nutrition surveys should be conducted, and malnourished children who are not enrolled in a feeding program should be referred to either the SFP or the TFP.
Feeding programs should aim for at least 80% enrollment and 80% daily attendance.
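Progress against the 10 g/kg/day target is a simple rate calculation from successive weighings. A sketch (the example values are invented):

```python
def weight_gain_g_per_kg_per_day(start_kg, current_kg, days):
    """Average weight gain in g per kg body weight per day, for
    comparison with the 10 g/kg/day TFP target above."""
    return (current_kg - start_kg) * 1000 / (start_kg * days)

rate = weight_gain_g_per_kg_per_day(6.0, 6.42, 7)  # after 7 days in the TFP
print(f"{rate:.1f} g/kg/day")                      # 10.0 -- on target
```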
In addition, health workers should be involved in active case-finding in the community.
# Provision of micronutrients
Ideally, the recommended daily allowances for all essential nutrients should be provided in the general rations. However, specific measures may be necessary to provide certain micronutrients.
# Vitamin A
Risk factors for vitamin A deficiency. Provide vitamin A supplements whenever any of the following conditions are present:
- The refugee population originates from a geographic area at high risk for vitamin A deficiency.
- There is evidence of severe vitamin A deficiency in the population.
- The general ration provides inadequate quantities of vitamin A (less than 2,000-2,500 IU/person/day).
# Supplemental doses and schedule
- Children 12 months to 5 years of age should receive 200,000 IU every 3 months.
- Infants less than 12 months of age should receive a 400,000 IU total dose in the first year of life, administered as follows (a selection sketch follows below):
  - If a dose can be assured every 3 months: 100,000 IU to the infant every 3 months for 1 year.
  - If 3-month dosing is impractical but 6-month dosing is anticipated: 200,000 IU to the infant every 6 months for 1 year.
  - If any subsequent dosing is unlikely: 200,000 IU to the infant when examined.
In all cases, mothers should be administered 200,000 IU within 2 months of giving birth in order to provide adequate quantities of vitamin A in the breast milk. If it is not possible to provide supplements to the mother at or within 2 months of giving birth, then the mother should receive 100,000 IU during the third trimester of pregnancy.
- If xerophthalmia is observed in older children and adults, include the affected age groups in the standard 200,000 IU preventive vitamin A supplementation program administered to younger children.
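The infant options above form a simple decision ladder keyed to how often follow-up dosing can be assured. A sketch (ours, for illustration only):

```python
def infant_vitamin_a_plan(assured_dosing_interval_months):
    """Select an infant (< 12 months) dosing plan from the options
    above, based on the dosing interval that can realistically be
    assured; totals 400,000 IU in the first year where feasible."""
    if assured_dosing_interval_months <= 3:
        return "100,000 IU every 3 months for 1 year"
    if assured_dosing_interval_months <= 6:
        return "200,000 IU every 6 months for 1 year"
    return "200,000 IU once, when the infant is examined"

print(infant_vitamin_a_plan(6))  # 200,000 IU every 6 months for 1 year
```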
# Malaria
Knowledge of the epidemiology of transmission, including local vectors, is essential to a malaria control effort. Information regarding the local epidemiology may be available from the MOH, WHO, and regional health authorities. In certain instances, a vector survey may need to be done. The national malaria control program or WHO staff are often able to conduct such surveys.
Information on previous exposure can be obtained from the refugees themselves, or more detailed information on previous exposure to specific species can be obtained through international channels via WHO.
Within a camp, the proportion of fever illness attributable to malaria at a particular time can be determined by obtaining thick and thin blood smears from a sample of consecutive clinic patients with a history of recent fever (e.g., 50 children less than 5 years of age). The malaria infection prevalence rate among these patients can then be compared with a control group that is free of the signs and symptoms of malaria.
Laboratory examination will determine whether malaria illness is caused by Plasmodium falciparum or Plasmodium vivax.
# Control of Transmission
Control of malaria transmission may be achieved through a combination of the following strategies.
Personal protection. The use of protective clothing, insecticide-impregnated bed nets, and insect repellents will help limit human exposure to malaria-infected mosquitoes.
Residual insecticides. Periodic spraying of the inside surfaces of permanent dwellings may reduce transmission. The use of residual insecticides, however, may be toxic to those involved in spraying and can also be detrimental to the environment. Spraying can be expensive and time consuming. Careful consideration should be given to the technical aspects of spraying, local vector behavior and susceptibility, personnel training, safety, and community motivation before undertaking such a program.
# Source reduction.
The elimination of breeding sites by draining or filling may reduce the density of vectors in the area. Knowledge of the local vectors is essential to ensure that source reduction efforts are effectively targeted.
Ultra low-volume insecticide spraying. Adult mosquitoes may be killed through frequent fogging with nonresidual insecticides. Fogging is generally repeated on a daily basis.
# Gametocidal drug use.
Gametocidal drugs (e.g., primaquine) are not generally recommended for use in refugee camps.
Selection of control strategies will depend upon the local epidemiologic factors, availability of resources, and environmental and cultural factors.
# Case Management
Case definition. Malaria infection is defined as the presence of malaria parasites in the peripheral blood smear. Malaria illness is defined as the presence of "malaria signs and symptoms" in the presence of malaria infection. The signs and symptoms of malaria typically include fever, chills, body aches, and headache.
Diagnosis. If possible, a thick blood smear with Giemsa stain should be the basis for the diagnosis of malaria, as described under "Malaria diagnosis and treatment" above.
Management of severe malaria. The following guidelines for the management of severe malaria are based upon those prepared by the MOH in Malawi.
Outpatient setting. If severe malaria is diagnosed in an outpatient setting, the patient should be referred for hospitalization. However, treatment should begin immediately and not be delayed until the patient has been transferred.
If the patient can swallow, sulfadoxine-pyrimethamine (SP) tablets (500 mg/25 mg) should be administered orally in the following doses according to the patient's age (a lookup sketch follows the list).
- Less than 3 years old: 1/2 tablet.
- 4-8 years old: 1 tablet.
- 9-14 years old: 2 tablets.
- Greater than 14 years old: 3 tablets.
If the patient vomits within 30 minutes, the dose should be repeated.
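The age bands above reduce to a lookup. Note that the source table skips 3-year-olds, so grouping them with the 4-8-year band is our assumption:

```python
def sp_tablets_by_age(age_years):
    """Sulfadoxine-pyrimethamine (500 mg/25 mg) tablets per the
    pre-referral schedule above."""
    if age_years < 3:
        return 0.5
    if age_years <= 8:   # assumption: 3-year-olds grouped with 4-8 band
        return 1
    if age_years <= 14:
        return 2
    return 3

print(sp_tablets_by_age(6))   # 1 tablet
print(sp_tablets_by_age(16))  # 3 tablets
```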
If the patient cannot swallow or is vomiting repeatedly, an IM injection of quinine dihydrochloride (10 mg/kg) should be administered. This can be repeated every 4 hours for two additional doses, and every 8 hours thereafter if a long delay is anticipated for transport of the patient to a hospital. The patient's fever should be reduced by sponging with lukewarm water or by using paracetamol or aspirin. Patients should be given ORS. In a patient who cannot drink, administer 20 mL/kg ORS with one teaspoon of glucose powder via nasogastric tube every 4 hours.
If convulsions occur, administer 0.2 mL/kg paraldehyde by IM injection. If convulsions recur, repeat the treatment. If convulsions persist, give the patient phenobarbitone, 10 mg/kg, by IM injection.
In a child with altered consciousness or repeated convulsions, the physician should perform a lumbar puncture if possible. If the CSF is cloudy, treatment for meningococcal meningitis is indicated and anti-malarial treatment should be discontinued. If a lumbar puncture cannot be performed, treatment for meningitis should be administered while continuing treatment for malaria.
Inpatient setting. The following tests should be performed immediately upon admission: thick blood film, hemoglobin, blood glucose, and lumbar puncture. If hemoglobin is below 4 g/dL, blood grouping and cross-matching should be done.
If the patient can swallow, give oral SP as described above. If the patient cannot swallow or has persistent vomiting, give IV-administered quinine as follows:
- An initial dose of 20 mg (salt)/kg body weight is injected into 10 mL/kg 5% dextrose (half-strength Darrow's solution) and infused during a 3-hour period. (If the patient has already received quinine before admission, the initial dose should be 10 mg/kg.)
- Subsequent doses of 10 mg/kg should be repeated as above every 12 hours. In between doses of quinine, the IV fluid (10 mL/kg during a 3-hour period) should be continued.
Patients should be switched to oral medications as soon as their conditions allow.
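For a given body weight, the quinine and dextrose volumes above follow directly. A sketch (illustrative; not a substitute for the written protocol):

```python
def iv_quinine_order(weight_kg, pre_treated=False):
    """Initial IV quinine per the inpatient schedule above: 20 mg
    (salt)/kg (10 mg/kg if quinine was already given) in 10 mL/kg of
    5% dextrose, infused over 3 hours; then 10 mg/kg every 12 hours."""
    loading_mg_per_kg = 10 if pre_treated else 20
    return {
        "loading_quinine_mg": loading_mg_per_kg * weight_kg,
        "dextrose_ml": 10 * weight_kg,
        "infusion_hours": 3,
        "maintenance_quinine_mg_q12h": 10 * weight_kg,
    }

print(iv_quinine_order(15))
# {'loading_quinine_mg': 300, 'dextrose_ml': 150,
#  'infusion_hours': 3, 'maintenance_quinine_mg_q12h': 150}
```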
# Tuberculosis
The TB control program should establish a policy covering areas of case definition, casefinding, treatment regimen, and the supervision of chemotherapy. This policy should be agreed upon and adhered to by all organizations and agencies providing health services to the refugees.
During the emergency phase of a refugee relief operation, TB activities should be limited to the treatment of patients who present themselves to the health-care system and in whom tubercle bacilli have been demonstrated by sputum smear examination.
# Control of transmission
Target population.
Because of the limited resources available, efforts to control transmission of TB within a refugee settlement should focus on the primary sources of infection, i.e., those patients for whom microscopic analysis of sputum smears demonstrates the presence of acid-fast bacilli (AFB). (Specimens should be stained using the Ziehl-Neelsen method with the results graded quantitatively.)
Case identification. Passive case-finding will be most efficient in the refugee setting. Patients with respiratory symptoms (chest pain, cough) of greater than 3 weeks' duration, hemoptysis of any duration, or significant weight loss should have a direct microscopic examination of their sputum for AFB. If the sputum smear is negative for AFB but pulmonary TB is still suspected, the patient should be given a 10-day course of antibiotics and then be reexamined after 2-4 weeks. Specific anti-TB chemotherapy should not begin unless the presence of AFB has been confirmed. Symptomatic family members of an identified patient should also have sputum specimens examined.
Children who show signs and symptoms compatible with TB and who are either: a) a close contact of a patient with a confirmed case of TB, or b) tuberculin skin-test positive (in the absence of a BCG vaccination scar) should undergo a full course of anti-TB treatment if they do not respond to an appropriate regimen of alternative antibiotics.
Case management. The selection of a first-line chemotherapy regimen should generally be consistent with the national policy set forth by the host country MOH. However, it should be recognized that the crowded conditions of a refugee camp may foster an abnormally high rate of transmission. Additionally, uncertainty exists regarding the duration of stay in the country of asylum, and it may be more difficult to maintain adherence to an extended therapy regimen. Short-course therapy (6 months) should be considered for use in a refugee camp even when the national policy prescribes a longer course of treatment, provided the additional expense is not prohibitive.
Before enrolling refugees in a TB treatment program, consideration should be given to the stability of the populations and the capacity of the health-care program to supervise therapy and to follow-up patients who do not adhere to treatment. Administration of anti-TB drugs to persons in whom adherence is likely to be sporadic will foster increased drug resistance in that population.
The following drugs are used for the treatment of TB with chemotherapy: isoniazid, rifampin, pyrazinamide, streptomycin, ethambutol, and thiacetazone. The selection of a particular treatment regimen must take into consideration organism susceptibility, cost, and duration of therapy. The decision regarding implementation of a specific therapeutic regimen will generally be made by the UNHCR in consultation with the MOH of the host government.
Case-holding. Whenever possible, chemotherapy should be observed by a health-care provider, especially during the first 2-3 months of treatment. Treatment efficacy should be assessed through a series of sputum smears. Patients participating in observed therapy who do not respond to treatment and whose sputum smears remain positive for AFB after 2 months should be reviewed by a physician and should begin a second-line treatment regimen.
Enrolling TB patients in a SFP may improve adherence to the treatment regimen and provide a point of contact for follow-up.
The success of a TB control program depends on good management and close supervision. The responsibilities of staff assigned to the program need to be clearly defined, adequate records of patient progress should be maintained, and a system to follow-up patients who do not adhere to treatment should be established. The cooperation of the community is essential for success. A community education program should be established to help ensure adherence.
# Prevention
Preventive chemotherapy for subclinical TB usually does not play a substantial role in TB control in a refugee camp. However, immediate family members of active TB patients should be examined for active TB and referred for treatment. This is particularly important for young children.
BCG vaccination should be administered as part of the comprehensive immunization schedule and not as a separate TB control activity. BCG vaccination is contraindicated for persons with symptomatic HIV infection, but can be administered to asymptomatic persons.
# Selected Reading
Allegra DT, Nieburg P, Grabe M, eds. Emergency refugee health care -- a chronicle of the Khmer refugee-assistance operation, 1979-1980. 1983:61-4.
# Epidemic Investigations
An epidemic is an unusually large or unexpected increase in the number of cases of a certain disease for a given place and time period. The general conditions of many refugee settlements (i.e., overcrowding, poor water and sanitation, inadequate rations) create an environment conducive to epidemics of infectious diseases. In the event of a suspected outbreak, an epidemiologic investigation should be conducted as quickly as possible.
# Purpose
Epidemiologic investigations are conducted in order to:
- Confirm the threat or existence of an epidemic and identify the causative agent, its source, and mode of transmission.
- Determine the geographic distribution and the public health impact of an epidemic, identifying those groups or persons who are at highest risk for disease.
- Assess local response capacity and identify the most effective control measures.
# Preparations
Each camp should have an established HIS with standardized reporting practices. This will allow for prompt recognition of and rapid response to an epidemic.
An accurate assessment of available laboratory facilities is necessary in order to identify appropriate sites for microbiologic confirmation of an epidemic and to address deficiencies that may hamper an investigation.
Appropriate specimen containers and transport media should be procured. Arrangements should be made to meet the need for additional technical support.
A recognized administrative and reporting structure should be established, with a clear chain of command and delegation of responsibility. Specific persons should be assigned responsibility for addressing the media and for acting as liaisons to the camp leaders and the refugee population.
Current maps showing settlements, water sources, transport routes, and health facilities should be made available to investigators.
# Conducting the investigation
Determining the existence of an epidemic.
An established HIS will allow for prompt recognition and confirmation of an epidemic. The need for routine health surveillance in a refugee camp cannot be overstated. Even when such a system is firmly in place, reports of an epidemic may result from artifactual causes, e.g., changes in reporting practices, an increased interest in a particular disease, a change in diagnostic methods, the arrival of new health staff, or an increase in the number of health facilities.
Confirming the diagnosis. The diagnosis of an epidemic disease should be confirmed using standard clinical or laboratory techniques. However, once the presence of an epidemic is established, it is not necessary to confirm the diagnosis for each person before treatment. Ongoing laboratory confirmation of a sample of cases is generally sufficient.
# Determining the number of cases.
A workable case definition must be established in order to determine the scope of the outbreak. The sensitivity and specificity of the case definition depend upon (a worked example follows the list):
- The usual apparent-to-inapparent case ratio.
- Whether pathognomonic signs and symptoms exist.
- The need for laboratory support for diagnosis.
- The accessibility of cases.
- The level of expertise of available health personnel.
- The amount of subjectivity involved in the diagnosis.
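Sensitivity and specificity can be estimated by comparing the working case definition against a confirmed (e.g., laboratory) diagnosis in a sample of patients. A worked sketch with invented counts:

```python
def sensitivity_specificity(tp, fn, fp, tn):
    """Sensitivity = true positives / all true cases;
    specificity = true negatives / all non-cases."""
    return tp / (tp + fn), tn / (tn + fp)

# Invented 2x2 example: 90 true cases detected, 10 missed,
# 15 false positives, 885 true negatives.
sens, spec = sensitivity_specificity(90, 10, 15, 885)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # 90%, 98%
```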
# Selected Reading
CDC monograph: Toole MJ, Foster S. Famines. In: Gregg MB, ed. The public health consequences of disasters 1989. Atlanta, GA: CDC, 1989.
"id": "4638b239ab20f756304a087e16e6c3c8505d2806",
"source": "cdc",
"title": "None",
"url": "None"
} |
Nothing will ever be attempted if all possible objections must be first overcome.
# A Strategic Plan for the Elimination of Tuberculosis in the United States
# Message to the Readers of Morbidity and Mortality Weekly Report
I am pleased to provide you with "A Strategic Plan for the Elimination of Tuberculosis in the United States" describing actions necessary to achieve the goal by the year 2010, with an interim target of a case rate of 3.5 per 100,000 population by the year 2000. At a national conference in 1984, Dr. James O. Mason, then Director of the Centers for Disease Control, challenged the public health community to develop a strategy to eliminate tuberculosis from the United States. This plan was developed by the Centers for Disease Control/Department of Health and Human Services' Advisory Committee for Elimination of Tuberculosis. Many experts from both within and outside the Department played a significant role in its development. We are grateful to all those who participated in the process.
I am pleased to report that the House of Delegates of the American Medical Association and the Governing Council of the American Public Health Association have passed resolutions in support of the plan, and the American Lung Association and the American College of Preventive Medicine have also endorsed the goal. We thank these organizations for their support and anticipate that other organizations will take similar actions in the near future.
We must commit ourselves to the objective of eliminating tuberculosis and making that objective widely known so others can join in this effort. The Centers for Disease Control is identifying activities for short- and long-term implementation. The plan is being distributed to a wide variety of public and private organizations with the recommendation that they take similar action.
# INTRODUCTION
In 1987, the Secretary of the U.S. Department of Health and Human Services established an Advisory Committee for Elimination of Tuberculosis to provide recommendations for developing new technology, applying prevention and control methods, and managing state and local tuberculosis programs targeted at eliminating tuberculosis as a public health problem. After review and feedback from numerous interested people and organizations, this plan was completed.
The committee urges the nation to establish the goal of tuberculosis elimination (a case rate of less than one per million population) by the year 2010, with an interim target of a case rate of 3.5 per 100,000 population by the year 2000. The U.S. case rate for 1987 was 9.3 per 100,000 (1).
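These targets imply a sharp acceleration in the rate of decline. A quick calculation (ours, not the committee's) makes the point: falling from 9.3 per 100,000 in 1987 to 3.5 per 100,000 by 2000 requires roughly a 7% average annual decline, and falling from 3.5 to 0.1 per 100,000 (one per million) between 2000 and 2010 requires roughly 30% per year:

```python
def required_annual_decline(rate_now, rate_target, years):
    """Constant annual fractional decline needed to fall from
    rate_now to rate_target over the given number of years."""
    return 1 - (rate_target / rate_now) ** (1 / years)

print(f"{required_annual_decline(9.3, 3.5, 13):.1%}")  # 1987 -> 2000: 7.2%
print(f"{required_annual_decline(3.5, 0.1, 10):.1%}")  # 2000 -> 2010: 29.9%
```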
Three factors make this a realistic goal: 1) tuberculosis is retreating into geographically and demographically defined pockets; 2) biotechnology now has the potential for generating better diagnostic, treatment, and prevention modalities; and 3) computer, telecommunications, and other technologies can enhance technology transfer. Therefore, a three-step plan of action was developed:
1. more effective use of existing prevention and control methods, especially in high-risk populations
2. the development and evaluation of new technologies for treatment, diagnosis, and prevention
3. the rapid assessment and transfer of newly developed technologies into clinical and public health practice
The committee brings this plan to the attention of the medical community and the public to stimulate positive, constructive discussion and action, to increase the public's level of awareness of tuberculosis, and to encourage a commitment to the elimination of tuberculosis. In the past, the United States has spent hundreds of millions of dollars annually on tuberculosis treatment and control activities. Unless alternative action is taken, large and unnecessary expenditures will continue indefinitely. Expenditures on tuberculosis treatment may increase over the next few years because of high morbidity rates among people infected with the human immunodeficiency virus (HIV), the homeless, the foreign-born, the elderly, and various minority groups. Furthermore, tuberculosis has the potential for spreading more widely in the community.
The mission of all tuberculosis control programs should now be to eliminate this disease by the year 2010. Strategies for eliminating tuberculosis are set out in this document. These strategies are based on the needs and responsibilities of the various groups of people involved in this effort (Appendix). The committee realizes the task will not be an easy one; it will require considerable commitment at all levels. The challenge to carry out this plan is a test of our willingness and ability as a society to respond to a very serious health problem that disproportionately affects its disenfranchised members. A great nation such as ours can carry out this plan. It is time to commit to a tuberculosis-free society!
# BACKGROUND INFORMATION
Tuberculosis is a communicable disease caused by bacteria (Mycobacterium tuberculosis complex) that are usually spread from person to person through the air. When people with tuberculosis of the respiratory tract cough, airborne infectious particles are produced. If these bacteria are inhaled by other people, they cause an infection that spreads throughout the body. Most individuals who become infected do not develop a clinical illness because the body's immune system brings the infection under control; however, infected people do develop a positive reaction to a tuberculin skin test. The infection can persist for years, perhaps for life, and infected persons remain at risk of developing disease at any time, especially if the immune system becomes impaired. Although the disease usually affects the lung, it can occur at virtually any site in the body.
Despite the great strides that have been made in the control of tuberculosis, the disease continues to be a public health problem in the United States. There remain isolated and potentially dangerous enclaves of this preventable, but frequently severe and occasionally fatal, disease. If more aggressive action is not taken, thousands of preventable cases and deaths will continue to occur in the United States each year until well into the next century. This document has been prepared to identify the steps necessary to eliminate tuberculosis.
Ongoing analyses of tuberculosis morbidity data continue to identify the magnitude and extent of the problem. These data have important implications for the control and elimination of tuberculosis in the United States.
From 1953 through 1984, the number of tuberculosis cases reported decreased an average of 5% annually (7). However, in 1985, the number of tuberculosis cases remained stable and, in 1986, cases increased by 2.6% (2,3). This increase was, at least in part, caused by the occurrence of tuberculosis among persons infected with HIV (2-4). HIV infection appears to have increased the incidence of tuberculosis by causing immunosuppression, which allows latent tuberculous infection to progress to clinically apparent disease. Therefore, tuberculosis screening and prevention efforts will need to be targeted to persons with, or at risk for, HIV infection.
Although tuberculosis case rates have progressively declined for all races over the past two decades, the decrease has been much less among nonwhites than among whites. Nearly two-thirds of cases now occur among blacks, Hispanics, Asians, and Native Americans (5-10). Although specific data are not available, the higher risk in these minority populations may be related primarily to socioeconomic conditions, such as poor housing and nutrition. Thus, prevention and control strategies should be formulated in consultation with, and targeted toward, these high-risk minority populations.
Tuberculosis is also common among immigrants, refugees, and migrant workers from countries where the disease is prevalent (10). In these patients, organisms responsible for disease are frequently resistant to commonly used antituberculosis drugs, especially isoniazid (INH). If not recognized and managed appropriately, drug-resistant disease and infection may lead to failure of treatment or preventive measures. Almost half the cases among immigrant Asians occur within the first 2 years of arrival in the United States. Specific control efforts should thus be directed at recent immigrants before or shortly after their arrival. Over two-thirds of cases in foreign-born persons occur in those who are less than 35 years old at the time of arrival in the United States (10). These cases are potentially preventable.
More than 80% of childhood cases occur in minority groups. Childhood cases are geographically focal. Less than 12% (363) of U.S. counties reported one or more tuberculosis cases among children in 1986 (CDC unpublished data). Using childhood cases as sentinel health events, health departments can target certain populations for preventive intervention.
Among all racial and ethnic groups and both sexes, tuberculosis case rates are highest among the elderly. Although case rates are higher among the 5% of the elderly living in nursing homes, the majority of cases occur among the 95% of the elderly who live in the community.
Good epidemiologic surveillance data are essential for an effective tuberculosis-elimination effort. These data target the populations and geographic areas experiencing the problem and provide clues as to how to deal with it. Additional data are needed to define the extent to which correctional institution populations, homeless people, lower socioeconomic groups, and others are at increased risk. While analyses of national data are useful, analyses of state and local data will be even more important for targeting elimination efforts.
# STEP 1 -- MORE EFFECTIVE USE OF EXISTING PREVENTION AND CONTROL METHODS
The emphasis in this section is on currently available prevention and control strategies not being fully utilized and on new strategies using existing technology. Although new technologies will be needed to eliminate tuberculosis (see Step 2), much can be achieved through efforts to improve existing tuberculosis control programs. Detailed recommendations for diagnosis, prevention, treatment, and program management are included in various American Thoracic Society and CDC statements and in CDC's Tuberculosis Policy Manual. Our recommendations are quite general and will need to be adapted at the state, community, and individual level. They are meant to serve as guidelines and not to establish rigid standards of practice or to discourage creativity and innovation.
Priorities for Step 1 include adequate and appropriate treatment for all persons with tuberculosis, identification of high-risk population groups within each geographic area, and the use of preventive treatment in the appropriate members of these groups. Strategies are organized in terms of surveillance, prevention of disease and infection, containment of disease, and program evaluation and assessment.
# Improving Surveillance
The identification and reporting of tuberculosis cases, suspected cases, and contacts is often slow or incomplete, thus delaying treatment and preventive intervention. Some cases are not diagnosed or reported, and contact investigation is not done. This is more likely to occur among the poor, the elderly, the homeless, drug users, and prisoners.
By January 1, 1991, systems should be in place to assure that: 1) all persons with signs and symptoms suggestive of tuberculosis receive an appropriate diagnostic evaluation within 2 weeks of initial contact with a health-care provider; 2) suspected or diagnosed cases are reported to health departments within 3 days of the time the diagnosis is made or suspected, or a positive laboratory result is obtained, so that contacts can be identified and examined; 3) active population-specific casefinding, screening, and preventive intervention programs are established and maintained by health departments; and 4) achievement of the above objectives is measured and assessed.
# Methods
1. Health departments, medical and nursing schools, schools of public health, volunteer agencies, professional societies, and minority advocacy groups should educate health-care providers and high-risk groups in the community about the signs and symptoms of tuberculosis and the methods of diagnosis, treatment, and prevention.
2. To speed up case reporting and make it easier for health-care providers to report, health departments should initiate telephone reporting systems for reportable infectious diseases, including tuberculosis, as a replacement or supplement to written notification. This system should include a telephone answering machine to record off-hour reports. Computer-to-computer reporting using telecommunications systems should be developed to further improve surveillance.
3. Physician, laboratory, and hospital reporting of cases to health departments should continue. Pharmacy reporting of persons who receive a supply of antituberculosis drugs should be undertaken on a pilot basis to determine whether additional cases are found.
4. Health department staffs should routinely monitor the time between the diagnosis of tuberculosis and the date the case is reported to the health department. Delays of more than 3 days should be investigated and action taken to prevent similar delays in the future (see the sketch following this list).
5. Health department staff should conduct periodic reviews of selected records systems (e.g., laboratory reports, pharmacy reports, AIDS registries, and death certificates) to validate the surveillance system and to detect any failures to report cases.
6. Clinicians and public health officials should identify groups of people in the community among whom tuberculosis and transmission of infection are occurring. This may require collection and analysis of data (e.g., residence, occupation, socioeconomic-status indicators, and HIV-antibody status) not now included on the individual case-report form. These data are necessary to identify high-risk populations and areas in which active casefinding and preventive intervention programs should be conducted. Members of high-risk groups and their health-care providers should be apprised of the problem and involved in the design, implementation, and evaluation of casefinding and prevention programs.
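The delay monitoring described in item 4 is simple to automate. The sketch below is a hypothetical illustration, not part of the plan itself: the record layout and case data are invented, and the only fixed input is the 3-day reporting standard stated above.

```python
from datetime import date

# Hypothetical case records: (case id, date of diagnosis, date reported to the health department)
reports = [
    ("case-01", date(1990, 5, 1), date(1990, 5, 2)),
    ("case-02", date(1990, 5, 3), date(1990, 5, 10)),
]

MAX_DELAY_DAYS = 3  # reporting standard stated in this plan


def overdue_reports(records, max_delay=MAX_DELAY_DAYS):
    """Return (case id, delay in days) for reports exceeding the standard."""
    flagged = []
    for case_id, diagnosed, reported in records:
        delay = (reported - diagnosed).days
        if delay > max_delay:
            flagged.append((case_id, delay))
    return flagged


for case_id, delay in overdue_reports(reports):
    print(f"{case_id}: reported {delay} days after diagnosis -- investigate")
```

In this invented example, only case-02 (a 7-day interval) would be flagged for investigation.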
# Improving Case Prevention
Preventable tuberculosis cases continue to occur in the United States. By definition, preventable cases include all those for whom one or more of the currently recommended interventions should have been utilized but were not. These interventions include contact identification and examination, preventive therapy for infection, prompt diagnosis of disease, prompt reporting, isolation of persons with suspected and diagnosed tuberculosis, adequate ventilation of buildings, the use of ultraviolet lights in high-risk areas of buildings, chemotherapy for disease, and directly observed therapy. Some of these interventions (e.g., isolation) are designed to prevent transmission of infection among residents and staff of high-risk institutions such as hospitals, nursing homes, correctional institutions, and shelters for the homeless. Other interventions (e.g., preventive therapy) are designed to prevent disease among those already infected. INH preventive therapy reduces the risk of tuberculosis by more than 90% among persons who complete a full course of treatment.
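The yield of preventive therapy can be illustrated with a back-of-the-envelope calculation. In the sketch below, the cohort size, lifetime risk of disease, and completion rate are assumed values chosen only for illustration; the roughly 90% risk reduction among completers is the figure given above.

```python
# Illustrative estimate of cases averted by INH preventive therapy.
# All inputs except efficacy are assumptions chosen for the example.
infected_persons = 10_000      # hypothetical infected cohort offered therapy
lifetime_risk = 0.10           # assumed lifetime risk of disease if untreated
completion_rate = 0.70         # assumed fraction completing a full course
efficacy = 0.90                # risk reduction among completers (text: >90%)

expected_cases_untreated = infected_persons * lifetime_risk
cases_averted = expected_cases_untreated * completion_rate * efficacy

print(f"Expected cases without therapy: {expected_cases_untreated:.0f}")
print(f"Cases averted by preventive therapy: {cases_averted:.0f}")
# Under these assumptions: 1,000 expected cases, about 630 averted.
```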
To prevent infection among persons potentially exposed to an infectious case of tuberculosis and to prevent tuberculosis among contacts and other infected persons for whom preventive therapy is recommended, the following methods should be implemented by January 1, 1991.
# Methods
1. All U.S. residents should have the results of at least one tuberculin skin test in their medical records, and those whose test result is positive should be evaluated and counseled regarding their risk of developing tuberculosis.
2. Each health department should assess the prevalence, incidence, and sociodemographic characteristics of cases and infected persons in the community. On the basis of these data, health departments should initiate tuberculin screening programs specifically targeted to each community's high-risk groups. Screening may be done to identify suspects, undiagnosed cases, diagnosed cases lost to follow-up, and infected persons in need of preventive therapy. At a minimum, health departments should ensure that such programs are extended to persons with symptoms compatible with tuberculosis, all foreign-born persons (and their families) from high-prevalence areas, high-risk minority groups, the homeless, migrant workers, persons being admitted to nursing homes, people entering correctional institutions, and people known to be infected with HIV. In addition to their prevention value, screening programs have the potential for providing publicity and generating goodwill for the tuberculosis program.
3. Tuberculin skin-testing programs should be conducted annually among the staffs of tuberculosis clinics, mycobacteriology laboratories, shelters for the homeless, nursing homes, substance-abuse treatment centers, dialysis units, and correctional institutions. The staffs of hospitals, mental institutions, and home health-care agencies should be tested annually if the prevalence of infection exceeds 5% (a sketch of this rule follows this list).
4. Consideration should be given to installing and properly maintaining ultraviolet lights in high-risk facilities such as jails, prisons, and shelters for the homeless (10-13).
5. Hospitals that admit untreated tuberculosis patients or persons suspected of having tuberculosis should have proper facilities and procedures for instituting respiratory isolation.
6. Consideration should be given to routinely obtaining sputum for mycobacterial smear and culture from symptomatic nursing home residents thought to have a lower respiratory infection.
7. Tuberculosis patients and persons suspected of having tuberculosis should be interviewed within 3 days after the health department receives the case report by a person who has had specific training in contact interviewing.
8. Close contacts should be examined within 7 days after the index case is diagnosed.
9. Infected contacts should be placed on preventive therapy if there is no evidence of clinical disease.
10. A child whose skin test shows no evidence of infection and who is a close contact of someone with infectious tuberculosis should be placed on preventive therapy until repeat skin testing (3 months after contact is broken) confirms the absence of infection.
11. All persons identified with HIV infection should be tuberculin tested. Those with positive tuberculin reactions or a history of a positive tuberculin reaction (without active disease) should be considered for preventive therapy, regardless of age.
12. All other recognized high-risk, infected persons should be considered for preventive therapy regardless of age. This includes recently infected persons (i.e., persons who had negative skin-test results who have converted to positive test results), persons with chest radiographic findings consistent with past tuberculosis, and those with medical risk factors known to substantially increase the risk of tuberculosis (e.g., silicosis, below ideal body weight, gastrectomy, immunosuppressive therapy).
13. Screening of refugees, immigrants, and entrants from high-incidence countries should be continued and infectious persons with tuberculosis excluded until they become noninfectious. These groups should also be screened for tuberculous infection (without disease). Unless contraindicated, those with infection (without disease) should be started on preventive therapy either before, or within 2 months after, their entry into the United States.
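The 5% infection-prevalence trigger in item 3 reduces to a one-line rule. A minimal sketch, assuming invented facility screening counts:

```python
PREVALENCE_THRESHOLD = 0.05  # annual testing trigger for hospital-type facilities


def annual_testing_indicated(positive_tests: int, staff_tested: int) -> bool:
    """Apply the 5% infection-prevalence trigger described in item 3."""
    if staff_tested <= 0:
        raise ValueError("no staff tested")
    return positive_tests / staff_tested > PREVALENCE_THRESHOLD


# Invented screening results: facility -> (positive tuberculin tests, staff tested)
facilities = {"Hospital A": (18, 300), "Home health-care agency B": (3, 150)}
for name, (positive, tested) in facilities.items():
    status = ("annual testing indicated"
              if annual_testing_indicated(positive, tested)
              else "below threshold")
    print(f"{name}: {positive}/{tested} positive -- {status}")
```

With these invented counts, Hospital A (6% prevalence) crosses the threshold and agency B (2%) does not.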
# Improving Disease Containment
Many tuberculosis patients do not complete a recommended course of therapy. More than 25% of sputum-positive patients are not known to have converted from positive to negative sputum culture within 6 months. In addition, almost 12% of patients are not known to be currently receiving therapy, and more than 17% of tuberculosis patients do not take their medication continuously (14).
Beginning immediately, all patients with tuberculosis should complete treatment with an appropriate regimen.
# Methods
1. For each new case of tuberculosis in the United States, a specific health department employee should be assigned the responsibility and held accountable for ensuring the education of the patient about tuberculosis and its treatment, ensuring continuity of therapy, and ensuring that contacts are examined. The health-care worker should visit the patient within 3 days of diagnosis to identify contacts and possible problems related to compliance with therapy.
2. For each new infectious case, a specific treatment and monitoring plan should be developed within 4 days of diagnosis. This plan should include drugs to be used (doses, duration, and frequency of administration), assessment of toxicity, and methods to be used to assess and ensure compliance.
3. Appropriate antituberculosis drugs, laboratory services, contact investigations, contact examinations, and other necessary services should be provided to patients by health departments without regard to the patients' ability to pay.
4. Incentives may be necessary to enhance compliance. To be most effective, incentives should be tailored to the individual needs and desires of the patients. An incentive may be as simple as offering a cup of coffee and talking with a patient while he or she is waiting in the clinic, or as complex as providing food and housing for a homeless patient. Particular attention must be given to ensuring that patients have transportation to the clinic.
5. Twice-weekly, directly observed therapy should be used whenever needed. Specific funding for outreach staff should be encouraged at the federal, state, and local level. Alternatives to federally funded outreach staff might include, for example, appropriately instructed home health-care workers or maternal and child health staff to supervise therapy.
6. Quarantine measures, including temporary institutionalization, should be used in those instances when an infectious patient refuses to comply with self-administered or directly observed therapy. State and local laws and regulations should be modernized to facilitate the cure of persons with infectious tuberculosis. For example, court-ordered compliance with directly observed therapy should be available.
# Program Assessment and Evaluation
In many areas, there is incomplete assessment of community tuberculosis control problems and inadequate evaluation of community prevention and control efforts. As a result, programs do not function as effectively and efficiently as they should.
By January 1, 1991, a system should be in place to achieve an ongoing, effective assessment of the tuberculosis problem and evaluation of the activities being performed at all levels for the control and elimination of tuberculosis.
# Methods
1. The Federal Government and state and large metropolitan health departments should annually evaluate their progress toward the elimination of tuberculosis. This evaluation should include an analysis of morbidity and mortality data, case reporting, casefinding, treatment, and prevention activities. Annual evaluations could be done in collaboration with interested constituencies such as lung associations, minority organizations, and professional societies. Regional meetings to share information among states are encouraged (a sketch of the morbidity-trend computation follows this list).
2. Expert assessment should be conducted annually for local health departments by the state, CDC, or the American Lung Association/American Thoracic Society and for state health departments by CDC or the American Lung Association/American Thoracic Society. A similar assessment of federal tuberculosis prevention and control activities should be conducted by the CDC Advisory Committee for Elimination of Tuberculosis or other outside consultants.
3. Priority for continued federal funding of state and local programs should be, at least in part, contingent upon improved program performance and productive activities in high-risk populations.
4. A prototype computerized record system should be developed by CDC for use by local programs for case reporting, patient management, and program assessment. CDC should provide microcomputers, with appropriate software and training, to state and major city health department tuberculosis control programs in high-incidence areas.
5. Each state and major metropolitan area should develop and publish an annual community tuberculosis summary and program plan (including objectives, methods, a discussion of program progress or failure, and corrective action needed).
6. Health departments should review each new tuberculosis case and each death from tuberculosis to determine if the case or death could have been prevented had the American Thoracic Society/CDC recommendations been followed. Based on these reviews, new policies should be developed and implemented to reduce the number of preventable cases.
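The morbidity-trend analysis in item 1 is, at its core, a year-over-year percent-change computation. A minimal sketch, with invented annual case counts for a single jurisdiction:

```python
# Hypothetical reported TB case counts by year for one jurisdiction
cases_by_year = {1986: 410, 1987: 398, 1988: 405, 1989: 376}


def annual_percent_change(counts):
    """Yield (year, percent change from the prior year) for an annual evaluation."""
    years = sorted(counts)
    for prev, curr in zip(years, years[1:]):
        change = 100.0 * (counts[curr] - counts[prev]) / counts[prev]
        yield curr, change


for year, change in annual_percent_change(cases_by_year):
    trend = "decrease" if change < 0 else "increase"
    print(f"{year}: {change:+.1f}% ({trend})")
```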
# Conclusion
Implementation of Step 1 of this plan will require strong commitment at the national, state, and community levels. State and local tuberculosis advisory groups, with broad representation from public, private, and voluntary medical groups, should develop and help implement Step 1 strategies appropriate for the state or community.
# STEP 2 - DEVELOPMENT AND EVALUATION OF NEW PREVENTION, DIAGNOSTIC, AND TREATMENT TECHNOLOGIES
In June 1985, CDC, the National Institutes of Health, the American Thoracic Society, and the Pittsfield Antituberculosis Association cosponsored a conference in Pittsfield, Massachusetts. The objective of this conference was to identify areas for research that would lead to improved technologies for eliminating tuberculosis from the United States. The complete report of this conference was published in the August 1986 issue of the American Review of Respiratory Disease (15). Consequently, only selected priority projects are mentioned here.
The five headings under which research projects and activities are listed represent critical objectives for eliminating tuberculosis. They are presented in priority order. These projects should also be regarded in terms of type of research, i.e., basic, applied, and epidemiologic/operational (Table 1). Basic research is intended to obtain a better understanding of structure, processes, and mechanisms of tuberculosis. While the findings from this research can often be applied for some clinical or public health purpose, basic research does not proceed with a specific application in mind. As already implied, applied research has as its goal the application of knowledge to the solution of a particular clinical or public health problem. Epidemiologic studies are designed to assess the magnitude, distribution, and determinants of disease in a population. Operational research studies assess the actual impact of interventions on health outcomes in the population.
# Improving Methods for Preventing Disease in Infected Persons
The vast majority of new cases of tuberculosis arise in persons who have had a latent period of infection. The most critical element in tuberculosis elimination is the detection and treatment of infected persons before disease emerges. At present, INH is usually administered for 6-12 months for preventive therapy. However, this approach has major deficiencies. These include the expense of treating and monitoring patients for such a long time, noncompliance with preventive therapy of long duration, and the occurrence of drug toxicity, especially hepatotoxicity.
The first-priority objective of Step 2 is to develop shorter, safer, more effective, and more economical means of preventing the emergence of clinical disease from the infected state.
# Methods
Areas of research to be pursued are as follows:
1. A drug that is more effective and less toxic than INH should be identified. This will require research at several levels.
   a. The mechanisms by which current antituberculosis drugs act are poorly understood. Consequently, studies of microbial metabolism, with identification of target sites for drug activity, should be conducted to develop new approaches to preventive therapy.
   b. In vitro models of drug efficacy, especially those allowing assessment of the interactions of drug, phagocyte, and organism, should be developed to help select drugs for in vivo studies.
4. High priority should be given to the development of a postinfection vaccine to boost the specific immune response.
5. Modification of the host immune response to M. tuberculosis infection should be evaluated as a supplement to chemoprophylaxis to boost the immune response and increase destruction of intracellular parasites.
# Improving Methods for Identifying Infected Persons at Risk of Disease
The use of the Mantoux test with tuberculin, purified protein derivative (PPD), to identify persons infected with M. tuberculosis is well established. However, the test suffers from a lack of sensitivity and specificity. In addition, the test is difficult to interpret in serial testing programs because of the "booster effect." In conjunction with the skin test, epidemiologic factors and patient characteristics have been used to identify persons at high risk of developing disease. The risk of developing clinically apparent tuberculosis is increased by recently acquired infection, young adulthood, leanness, and disease states in which T-lymphocyte function is disturbed (16). Despite this knowledge, we currently have a very limited ability to predict which infected persons are most likely to develop disease.
Preventive therapy would be a much more efficient intervention for reducing tuberculosis morbidity if there were better methods for identifying persons infected with M. tuberculosis (i.e., those who are at risk of developing disease). Ideally, a test is needed that is capable of specifically differentiating those who harbor living tubercle bacilli from those who do not.
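The consequence of imperfect sensitivity and specificity can be quantified with Bayes' rule: at low infection prevalence, most positive results are false positives, which is one reason screening is targeted to high-risk groups. The sensitivity and specificity figures below are assumptions chosen only to illustrate the effect, not measured properties of the PPD test.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: probability of true infection given a positive skin test."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)


# Assumed test characteristics, for illustration only
SENS, SPEC = 0.95, 0.95

for prevalence in (0.30, 0.05, 0.005):  # high-risk group vs. general population
    ppv = positive_predictive_value(SENS, SPEC, prevalence)
    print(f"prevalence {prevalence:5.1%}: PPV = {ppv:.1%}")
```

Under these assumptions, the predictive value of a positive test falls from about 89% at 30% prevalence to under 10% at 0.5% prevalence.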
# Methods
1. Genes that code for species-specific epitopes of mycobacteria should be identified and characterized. This should permit amino acid analysis and synthesis of epitopes which may be useful for serologic or skin-test diagnosis.
2. T-lymphocytes are the key responding cellular elements of the immune response to M. tuberculosis. These cells should be studied to identify subpopulations of T-lymphocytes that influence progression or containment of infection.
3. Genetic differences, such as those related to histocompatibility complex antigens, between infected persons who develop disease and those who do not should be sought.
4. The absolute and relative risk of disease among persons infected with both the tubercle bacillus and HIV should be determined.
5. Socioeconomic status, immunodeficiency status, stress (both physical and psychological), body weight, nutrition, and environmental factors (e.g., incarceration, sunlight, ventilation) should be investigated as risk factors in prospective or case-control studies.
# Improving Methods for Preventing Infection or the Establishment of Infection in Various Body Sites
Transmission of M. tuberculosis infection usually occurs via the airborne route. Current methods for preventing the transmission of infection to uninfected persons include early identification, isolation, and treatment of infectious source cases; environmental control to reduce the number of airborne infectious particles through the use of nonrecirculated ventilation to the outdoors and ultraviolet light; and drug therapy of uninfected persons who are exposed to infection sources. Vaccination with BCG does not prevent infection but may limit the spread and complications of the disease (17). Each of these measures has certain drawbacks, and transmission of infection is still occurring at unacceptable levels. Therefore, more reliable methods are needed for preventing infection and for limiting its spread within the body.
# Methods
1. High priority should be placed on the search for genus- and species-specific epitopes.
2. DNA probes specific for M. tuberculosis should be produced and evaluated.
3. Similarly, genus- and species-specific monoclonal antibodies should be produced and evaluated for their ability to rapidly detect and identify M. tuberculosis and other medically significant mycobacteria.
4. Studies to detect free mycobacterial antigens with appropriate antibodies or probes in clinical specimens should be undertaken.
5. Systems that would assay material produced by the diseased host's lymphocytes or macrophages might be useful for diagnosis. A related area to be explored is the study of the T-lymphocyte antigen receptor.
6. Studies should be done to determine if persons or groups differ with regard to infectibility and whether this could be altered in some way.
# Conclusion
There is an urgent need for competent investigators to submit well-designed proposals for all of the above studies. CDC should continue to work with the National Institutes of Health, the Food and Drug Administration, state and local health departments, private industry, academia, volunteer agencies, foundations, and other groups to encourage the funding and conduct of this research. A report from federal and private research funding agencies should be submitted annually to the Secretary's Advisory Committee for Elimination of Tuberculosis detailing progress made in achieving these research objectives.
# STEP 3 - TECHNOLOGY ASSESSMENT AND TRANSFER
This step of the elimination plan focuses on the actions necessary to facilitate the adoption of new tools, procedures, and ideas into clinical and public health practice. (This has been called technology transfer or translation.) Because we are generally discussing the assessment and transfer of technologies not yet developed, it is not possible to be as specific in this section as in the preceding two sections.
# Technology Assessment and Transfer in General
Impediments to the technology assessment and transfer process must be identified and strategies devised to resolve them. Problems arise if the new technology requires retraining, additional resources, or a change in habits. Problems also arise if cost-effectiveness data on a new technology are not available or if cost savings from a new procedure do not accrue to those who spend the resources to adopt the new technology.
It is important to ensure the widespread, rapid, and efficient use of new technol ogies for tuberculosis control in field operations.
# Methods
Technologies for transfer should be chosen on the basis of their potential impact on the tuberculosis-elimination effort. Before a program is initiated to "sell" a new technology, it is crucial to develop a consensus on its appropriateness, identify persons who will be using the new technology, enlist their aid, identify potential resistance points to the introduction of new technologies and innovations, and develop strategies for overcoming the resistance.
# Special Technology Transfer Issues
This section concerns specific recommendations about methods and strategy for the transfer and implementation of specific new technologies that have been developed, but not adopted, or that are likely to be developed in the near future. The examples are divided into the same categories under which research efforts were outlined in Step 2 of this plan.
# Transferring Technologies for Preventing Disease in Infected Persons
1. Because the largest number of cases arise from the pool of persons infected in the past, highest priority must be given to the rapid evaluation and implementation of new technology in this area. It is likely that shorter (e.g., 2- or 3-month) preventive treatment regimens can be developed using currently available drugs.
2. To have a major impact on tuberculosis morbidity, newly developed short-course preventive therapy must be widely implemented by the private medical sector. This will require comprehensive education programs aimed at all physicians likely to see patients with tuberculosis. Professional organizations should assist in this effort. Increased emphasis on the prevention of tuberculosis in the curricula of medical schools and schools of public health should be undertaken. Cooperative agreement, grant, or contract funds might be used to stimulate these educational efforts.
3. Health departments should begin now to develop registers containing the names and addresses of untreated infected persons so that they can be readily identified, contacted, and treated when improved prevention methods are available.
4. Public education is important for creating a demand for this service. Consumer groups and organizations representing populations experiencing high rates of tuberculosis should be involved.
# Transferring Technologies for Defining Infected Persons at Risk of Disease
New diagnostic tests will probably be developed in the near future. Before widespread adoption of the new tests, evaluation studies will be needed to determine whether they have significant advantages over the PPD-tuberculin skin test.
# Transferring Technologies for Preventing Infection
A reduction in the number of tubercle bacilli in the environment may be achieved by the proper use of ultraviolet lights. Field studies of this technology in selected high-risk settings (e.g., hospital emergency rooms, correctional institutions, nursing homes, and shelters) should be undertaken as soon as possible.
# Transferring New Technologies for Treating Disease
1. Recent experiences with the slow diffusion of short-course, directly observed, and intermittent therapy into clinical and public health practice suggest that new approaches will be required in the future to promote more rapid and widespread adoption of new treatments.
2. A strong recommendation for a new therapy from the American Thoracic Society and the Advisory Committee for Elimination of Tuberculosis will be essential. Demonstration projects in selected states and communities showing feasibility, patient acceptability, and improvement in program performance will speed acceptance of new approaches to treatment.
3. Federal support will be necessary for the wider application of new therapy regimens in the public sector. For the private sector, incorporation of instruction on new tuberculosis therapies into continuing medical education programs will be helpful. Such programs could be funded by pharmaceutical companies that market the drugs used in the recommended treatment regimens.
# Transferring and Assessing New Technologies for Diagnosis of Tuberculosis
With the use of new DNA probes, it may soon be possible to make a specific diagnosis of tuberculosis in several hours. Before the use of these new diagnostic tests is recommended, they should be carefully assessed. Support for technology assessments might come from the commercial firm developing the product, other private groups, or the public sector.
# Transfer of Communication Technologies
1. Improved communication is essential for transfer of state-of-the-art technology. The principal means by which this will be accomplished is frequent and timely telephone communication and on-line electronic transfer of information. It will be essential to develop and maintain an electronic network in which the public and private sector can submit and rapidly obtain current information about tuberculosis. CDC should standardize hardware and software requirements for this communication network and, to the extent possible, assist state and local health departments in obtaining hardware and software. The American Medical Association, the National Medical Association, and other professional groups should be approached regarding communication with the private sector.
2. Computerization of health department data bases will enhance the actions advocated in Step 1 of this plan.
# Conclusion
Technology assessment and transfer in our society is a complex process involving many participants. Successful achievement will require federal coordination and some federal funding but will depend heavily on the active participation of state and local officials, private practitioners, private industry, and volunteer groups. The Advisory Committee can function as a focal point for these interactions.
# Improving Methods for Treating Disease
Current drug regimens are effective and well tolerated, and they can be given with minimal effects on the patient's mode of living. Nevertheless, major problems remain. With current drugs, a minimum of 6 months of multi-drug therapy is necessary to achieve a high probability of cure (18). There are at least four obstacles to achieving a cure: 1) the failure of patients to comply with regimens of long duration, 2) drug-resistant organisms, 3) adverse reactions that require interruption and modification of the original, and usually optimal, drug regimen, and 4) the cost of the most effective regimens.
The following steps should be taken to develop more effective approaches to therapy for tuberculosis.
# Methods
1. All new antimicrobial agents should be routinely evaluated in vitro for antimycobacterial activity. In addition, drug combinations should be examined for synergy.
2. Classes of compounds that should receive high priority in the search for better treatment regimens are beta-lactam compounds, long-acting sulfones and sulfonamides, rifamycins, and 4-quinolones.
3. Improved delivery systems, including longer-acting drug-release systems such as injectable suspensions, implantable rods, and membrane-enclosed drugs, should be sought.
4. Research to define enzyme targets for antituberculosis drugs should be carried out. This information will facilitate the design of new drugs with higher affinity for these target enzymes or indicate rational modifications of existing drugs for the same purpose. High priority should be placed on finding rapidly effective, bactericidal agents that affect mycobacteria in all metabolic states.
5. Because mycobacteria within macrophages are not readily accessible to many drugs, a knowledge of drug transport mechanisms should be acquired to allow modification of drugs to facilitate uptake.
6. Pharmacologic agents that activate T-lymphocytes and macrophages should be identified and evaluated in patients with mycobacterial diseases.
7. Manipulation of the microenvironment in the phagosome should be attempted to improve the efficiency of the microbicidal activity of phagocytic cells.
8. Effective compliance enhancers or incentives should be identified, and psychometric and other tests should be developed to identify persons for whom various enhancers are likely to be most effective.
# Improving Methods for Diagnosing Disease
Current techniques for diagnosing tuberculosis are beset by a number of serious limitations and problems. Available techniques are slow, resource intensive, and not ideally sensitive and specific.
One objective of this program is to develop better diagnostic techniques that will rapidly identify persons with current disease and distinguish them from persons with past disease or infection without disease.
# Appendix: Planning Assumptions
A. The public needs:
1. current and accurate information about tuberculosis and progress toward its elimination;
2. to be protected from infection with M. tuberculosis; and
3. if already infected, to be protected from disability and death from tuberculosis.
The public has a responsibility to:
1. insist that adequate resources be made available for tuberculosis control and that those resources be used efficiently and effectively.
# B. Tuberculosis patients need:
1. quality diagnostic, preventive, and curative medical care that is available, accessible, and acceptable;
2. accurate and current information about the nature and risks of the disease and about the risks and benefits associated with treatment; and
3. confidentiality to the extent possible.
# Tuberculosis patients have a responsibility to:
1. prevent transmission of their infection to others;
2. assist in the identification of contacts;
3. take medicine as prescribed and to cooperate with necessary clinical, radiographic, and sputum examinations; and
4. report problems with taking prescribed treatment, improvement or deterioration of symptoms, and symptoms suggestive of adverse drug effects.
C. Health-care providers need the following services from local health departments:
1. contact identification and evaluations;
2. regular follow-up of patients on treatment;
3. up-to-date patient-management guidelines and expert medical consultation; and
4. antituberculosis drugs, laboratory services, and directly observed therapy for their patients when needed.
Health-care providers have a responsibility to:
1. maintain a high index of suspicion for tuberculosis, especially for persons from high-risk populations;
2. report cases, suspected cases, and laboratory results to the health department within 72 hours;
3. treat patients and monitor their compliance and response to therapy, and monitor for adverse drug reactions within accepted medical guidelines;
4. promptly notify health departments when patients under their care do not take treatment as prescribed or do not return for necessary follow-up examinations;
5. update health departments at specified intervals about current treatment (including degree of compliance), laboratory results, and disease status of each patient; and
6. cooperate with the health department in:
   a. contact identification and examination;
   b. the screening of other high-risk groups for infection and disease; and
   c. the application of preventive therapy in those groups.
D. State and local health departments need:
1. information from health-care providers to ensure that persons with diagnosed tuberculosis do not continue to spread the disease within the community and that persons with tuberculosis, suspected tuberculosis, or tuberculous infection are receiving appropriate examinations and treatment; and
2. the resources and authority to carry out their responsibilities.
# State and local health departments have a responsibility to:
1. establish guidelines for the identification, reporting, treatment, and prevention of tuberculosis in the community;
2. ensure that patients with tuberculosis do not continue to transmit infection in the community;
3. provide rapid follow-up for persons diagnosed or suspected of having tuberculosis;
4. ensure that laboratory services, drugs, and the staff needed to provide follow-up, contact examination, and directly observed therapy are available;
5. ensure that high-risk groups, including those under the supervision of other state agencies (e.g., prisoners), are identified and screened for tuberculous infection and tuberculosis;
6. provide health education to the public;
7. appropriately use preventive therapy;
8. provide quality tuberculosis medical consultation for medical-care providers;
9. provide outpatient and inpatient medical care for tuberculosis at no cost to the patient when such care is not available from other sources;
10. rapidly transfer patient information from one jurisdiction to another when a patient moves;
11. establish and maintain a records system (register) that is effective for monitoring and evaluating the tuberculosis problem in the community; and
12. coordinate all state and local tuberculosis control activities.
# E. The Federal Government needs:
1. the resources required to carry out its responsibilities; and
2. accurate and timely reports from state and local programs (e.g., case reports, reports of outbreaks).
The Federal Government has a responsibility to:
1. establish goals and priorities and publicize guidelines and standards;
2. provide expert medical and program consultation, program evaluation, public health education and training, and national morbidity and mortality surveillance;
3. conduct or support the basic and applied research studies, developmental work, demonstration projects, and technology assessments necessary for the development of new technologies for prevention and control;
4. ensure that aliens entering the United States are free of infectious or potentially infectious tuberculosis;
5. supplement, as necessary, local and state resources to carry out the elimination plan set forth in this document;
6. publicize progress made toward the elimination of tuberculosis in the United States;
7. assure that federal programs that provide direct or indirect clinical services to high-risk populations (e.g., Bureau of Prisons, Indian Health Service) appropriately screen for and treat tuberculosis infections and tuberculosis; and
8. direct health-care providers to coordinate patient care with state and local public health authorities.
F. Academic centers need to:
1. compete fairly for federal, state, and foundation funds for tuberculosis education and research projects.
# Academic centers have a responsibility to:
1. provide high-quality educational and tuberculosis control programs for their students, staff, and affiliated hospitals;
2. provide high-quality care to tuberculosis patients; and
3. use grant and contract funds to carry out well-designed and well-executed research studies and disseminate the results.
G. Private enterprise needs: 1. adequate return for its investments in tuberculosis prevention and control.
Private enterprise has a responsibility to:
1. use its resources to develop more effective methods for prevention and control of tuberculosis.
# H. The American Lung Association and other voluntary/professional agencies should continue to:
1. be advocates for the appropriation of resources and the enactment of legislation to achieve elimination of tuberculosis;
2. support, and encourage the support of, medical research and pilot demonstration projects;
3. provide public, medical, and professional health-worker education;
4. participate in developing standards for prevention, diagnosis, and treatment; and
5. form coalitions with other organizations to achieve the above aims.
Finally, it is assumed that our society will continue to make progress in ensuring that all citizens have access to adequate nutrition, housing, and medical care.
"id": "60141a531dad106123ebdeaa222c252cbfb3cd30",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Introduction
Japanese encephalitis virus (JEV), a mosquito-borne flavivirus, is the most common vaccine-preventable cause of encephalitis in Asia (1,2). Japanese encephalitis (JE) occurs throughout most of Asia and parts of the western Pacific (1,3). Among an estimated 35,000-50,000 annual cases, approximately 20%-30% of patients die, and 30%-50% of survivors have neurologic or psychiatric sequelae (4,5). In endemic countries, JE is primarily a disease of children. However, travel-associated JE, although rare, can occur among persons of any age (6-9). For most travelers to Asia, the risk for JE is very low but varies based on destination, duration, season, and activities (9,10).
JEV is transmitted in an enzootic cycle between mosquitoes and amplifying vertebrate hosts, primarily pigs and wading birds (11,12). JEV is transmitted to humans through the bite of an infected mosquito, but disease develops in <1% of infected persons. JEV transmission occurs primarily in rural agricultural areas. In most temperate areas of Asia, JEV transmission is seasonal, and substantial epidemics can occur. In the subtropics and tropics, transmission can occur year-round, often intensifying during the rainy season.
This report provides recommendations for use of the two JE vaccines licensed in the United States for prevention of JE among travelers and laboratory workers. An inactivated mouse brain-derived JE vaccine (JE-MB) has been available since 1992 for use in travelers aged ≥1 year (13). In March 2009, the Food and Drug Administration (FDA) approved a second JE vaccine, an inactivated Vero cell culture-derived vaccine (JE-VC), for use in persons aged ≥17 years.
# Background
# JEV Transmission
JEV is a single-stranded RNA virus that belongs to the genus Flavivirus and is closely related to West Nile, St. Louis encephalitis, yellow fever, and dengue viruses (15,16). Four genotypes of JEV have been identified (17). JEV is an arthropod-borne virus (arbovirus) that is transmitted in an enzootic cycle between mosquitoes and amplifying vertebrate hosts, primarily pigs and wading birds (Figure 1) (12,18-22). Because of rapid population turnover with a large number of susceptible offspring and the development of high-titered JEV viremias, domestic pigs are the most important source of infection for mosquitoes that transmit JEV to humans (11,22-26).
JEV is transmitted to humans through the bites of infected mosquitoes. Humans usually do not develop a level or duration of viremia sufficient to infect mosquitoes (12,27). Therefore, humans are dead-end hosts, and human JE cases imported into nonendemic areas represent a minimal risk for subsequent transmission of the virus. Direct person-to-person spread of JEV does not occur except rarely through intrauterine transmission (1,28,29). On the basis of experience with similar flaviviruses, blood transfusion and organ transplantation also are considered potential modes of JEV transmission (30,31).
Culex mosquitoes, especially Cx. tritaeniorhynchus, are the principal vectors for both zoonotic and human JEV transmission throughout Asia (11,12,18,21,32-39). Cx. tritaeniorhynchus is an evening- and nighttime-biting mosquito that feeds preferentially on large domestic animals and birds and only infrequently on humans. Cx. tritaeniorhynchus feeds most often outdoors, with peak feeding activity occurring after sunset (20). Larvae are found in flooded rice fields, marshes, and other small stagnant collections of water (37,38). In temperate zones, this vector is present in greatest density from June through November; it is inactive during winter months (20,40,41). In certain parts of Asia, other mosquito species also might be important JEV vectors (12,36,38,40).
# Epidemiology of JE
# Geographic Distribution and Spread
JE occurs throughout most of Asia and parts of the western Pacific (Figure 2). During the first half of the 20th century, the disease was recognized principally in temperate areas of Asia including Japan, Korea, Taiwan, and mainland China (42-50). Over the past few decades, the disease appears to have spread south and west, with increased JEV transmission reported in Southeast Asia, India, Bangladesh, Sri Lanka, and Nepal (35,44,48,51-62). In the 1990s, JEV spread east and was recognized for the first time in Saipan and then Australia, initially in the outer Torres Strait islands and subsequently on the northern mainland (63-65). The reasons for this increased geographic distribution are uncertain but might include population shifts or changes in climate, ecology, agricultural practices, animal husbandry, or migratory bird patterns (38,48,65). These factors could contribute to further spread, including potentially beyond Asia and the western Pacific.
# Incidence and Burden of Disease
In the early 1970s, more than 100,000 cases of JE were reported each year, with the vast majority from China (49,66). Because of vaccine use, increased urbanization, changes in agricultural practices, and mosquito control, annual JE case counts have declined substantially during the past 30 years (66). Up to 30,000 cases of JE still are reported each year (5). However, as a result of poor diagnostic and surveillance capacity in many endemic countries, this number likely represents an underestimate of the true burden of disease (5,66). Among children in endemic countries, the incidence of laboratory-confirmed JE varies widely from year to year and area to area (range: 5-50 cases per 100,000 children per year) (12,41,42,47,66-70).
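Rates such as the 5-50 cases per 100,000 children cited above follow directly from surveillance counts and population denominators. A minimal sketch, with invented inputs:

```python
def incidence_per_100k(cases: int, population: int) -> float:
    """Annual incidence expressed per 100,000 population."""
    return 100_000 * cases / population


# Hypothetical district surveillance data for one year
confirmed_cases = 120
children_at_risk = 800_000

rate = incidence_per_100k(confirmed_cases, children_at_risk)
print(f"{rate:.1f} laboratory-confirmed JE cases per 100,000 children per year")
# -> 15.0 under these invented inputs, within the 5-50 range described above
```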
# Ecologic and Seasonal Patterns
The risk for JE varies by local ecology and season. JEV transmission occurs primarily in rural agricultural areas, often associated with rice production and flooding irrigation, where large numbers of vector mosquitoes breed in close proximity to animal reservoirs (18,21). In some areas of Asia, these ecologic conditions might occur near, or rarely within, urban centers (71).
In temperate areas of Asia, JEV transmission is seasonal, and human disease usually peaks in the summer and fall (41,42,46,47,50,60,62,72). Seasonal epidemics can be explosive, with thousands of JE cases occurring over a period of several months. In the subtropics and tropics, transmission can occur year-round, often with a peak during the rainy season (48,51,56,70).
# Age-Specific Patterns
In endemic areas, JE is primarily a disease of childhood, with the vast majority of cases occurring among children aged <15 years (41,46,47,50-52,60,62,70,73-75). However, in areas with childhood JE immunization programs, the overall incidence of JE decreases, and similar numbers of cases are observed among children and adults (42,43,50). In both Japan in 2002 and northern China in 2006, outbreaks were reported in which the majority of cases occurred among older adults (76,77). Because unvaccinated travelers from nonendemic countries usually are immunologically naïve, travel-associated JE can occur in persons of any age.
# FIGURE 1. Transmission cycle of Japanese encephalitis virus (JEV)*
*JEV is transmitted in an enzootic cycle between Culex mosquitoes and amplifying vertebrate hosts, primarily pigs and wading birds. Humans are a dead-end host in the JEV transmission cycle with brief and low levels of viremia. Humans play no role in the maintenance or amplification of JEV, and the virus is not transmitted directly from person to person.
# Clinical Manifestations and Diagnosis
# Signs and Symptoms
The majority of human infections with JEV are asymptomatic; <1% of people infected with JEV develop clinical disease (12,67,68,73,77-79). Acute encephalitis is the most commonly identified clinical syndrome with JEV infection (12,72,80-82). Milder forms of disease (e.g., aseptic meningitis or undifferentiated febrile illness) also can occur but have been reported more commonly among adults (72,83,84).
Among patients who develop clinical symptoms, the incubation period is 5-15 days. Illness usually begins with acute onset of fever, headache, and vomiting (55,85,86). Mental status changes, focal neurologic deficits, generalized weakness, and movement disorders might occur over the next few days (55,82,85-90). Seizures are common, especially among children (85-87,90-92). The classical description of JE includes a parkinsonian syndrome with mask-like facies, tremor, cogwheel rigidity, and choreoathetoid movements (82,93). Acute flaccid paralysis, with clinical and pathological features similar to poliomyelitis, also has been associated with JEV infection (93,94). Status epilepticus, brain hypoxia, increased intracranial pressure, brainstem herniation, and aspiration pneumonia are the most common complications associated with poor outcome and death (82,85,91,95).
Although information on the burden of JEV infection in pregnancy is limited, miscarriages and an intrauterine infection following maternal JE have been reported. In India, four miscarriages were reported among nine infected pregnant women; all of the women who miscarried were in the first or second trimester of pregnancy (28,29). JEV was isolated in one of the four aborted fetuses, suggesting that intrauterine transmission of JEV can occur (28).
# Clinical Laboratory Findings and Neuroimaging
Clinical laboratory findings with JE are nonspecific and might include moderately elevated white blood cell count, mild anemia, and hyponatremia (72,82,85,86,90). Thrombocytopenia and elevated hepatic enzymes have been noted (86). Cerebrospinal fluid (CSF) usually shows a lymphocytic pleocytosis with moderately elevated protein (52,55,72,74,82,85,87,90).
Magnetic resonance imaging (MRI) of the brain is better than computed tomography (CT) for detecting JEV-associated abnormalities such as changes in the thalamus, basal ganglia, midbrain, pons, and medulla (96,97). Thalamic lesions are the most commonly described abnormality; although these can be highly specific for JE in the appropriate clinical context, they are not a very sensitive marker of JE (98).
# Laboratory Diagnosis
JEV infections are confirmed most frequently by detection of virus-specific antibody in CSF or serum (12,99-103). Because humans have low or undetectable levels of viremia by the time distinctive clinical symptoms are recognized, virus isolation and nucleic acid amplification tests (NAAT) are insensitive and should not be used for ruling out a diagnosis of JE (104,105). In one study in Thailand, of 30 nonfatal cases involving persons with JEV infection of the central nervous system (CNS), none had virus isolated from plasma or CSF (100). By contrast, JEV was isolated from CSF in five (33%) of 15 fatal cases, and from brain tissue in 8 (73%) of 11 fatal cases. More recent studies have shown the utility of NAAT for diagnosing JE in some patients with encephalitis or aseptic meningitis, but this method still lacks the sensitivity needed for routine diagnosis (84,106).
Acute-phase specimens should be tested for JEV-specific immunoglobulin (Ig) M antibodies using a capture enzyme-linked immunosorbent assay (MAC ELISA) (12,99-103). JEV-specific IgM antibodies can be measured in the CSF of most patients by 4 days after onset of symptoms and in serum by 7 days after onset (99,100). JEV-specific IgM antibodies in CSF indicate recent CNS infection and can help distinguish clinical disease attributed to JEV from previous vaccination (99). With clinical and epidemiologic correlation, a positive IgM test has good diagnostic predictive value, but cross-reaction with arboviruses from the same family can occur. Plaque reduction neutralization tests (PRNT) can be performed to measure virus-specific neutralizing antibodies. A fourfold or greater rise in virus-specific neutralizing antibodies between acute- and convalescent-phase serum specimens collected 2-3 weeks apart may be used to confirm recent infection or to discriminate between cross-reacting antibodies attributed to other primary flaviviral infections. In patients who have been infected previously by another flavivirus or vaccinated with a flaviviral vaccine (e.g., JE or yellow fever vaccine) and who then acquire a secondary flaviviral infection, cross-reactive antibodies in both the ELISA and neutralization assays might make identifying a specific etiologic agent difficult. Vaccination history, date of onset of symptoms, and information regarding other arboviruses known to circulate in the geographic area that might cross-react in serologic assays should be considered when interpreting results. Diagnostic testing for JE is available in select state public health laboratories and at CDC's Division of Vector-Borne Infectious Diseases in Colorado (telephone: 970-221-6400).
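The paired-specimen criterion described above, a fourfold or greater rise in neutralizing antibody titer, reduces to a single comparison; interpretation still requires the clinical, vaccination, and cross-reactivity caveats noted in the text. The titer values in this sketch are invented.

```python
def fourfold_rise(acute_titer: int, convalescent_titer: int) -> bool:
    """True if paired PRNT titers show a >=4-fold rise, consistent with recent infection."""
    return convalescent_titer >= 4 * acute_titer


# Hypothetical paired serum specimens collected 2-3 weeks apart
pairs = [("patient A", 10, 80), ("patient B", 40, 80)]
for name, acute, conv in pairs:
    verdict = ("fourfold rise: supports recent infection"
               if fourfold_rise(acute, conv)
               else "no fourfold rise: not confirmatory")
    print(f"{name} (acute {acute}, convalescent {conv}): {verdict}")
```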
# Treatment and Management
JE therapy consists of supportive care and management of complications. No specific antiviral agent or other medication to mitigate the effects of JEV infection is available (107). In controlled clinical trials, corticosteroids, interferon alpha-2a, or ribavirin did not improve clinical outcome (108-110).
# Outcome and Sequelae
JE has a case-fatality ratio of approximately 20%-30% (46,47,53,56,62,74,75,82,85-87,111,112). Although some motor deficits and movement disorders improve after the acute illness, 30%-50% of JE survivors have neurologic or psychiatric sequelae even years later (66,74,82,85,87,93,109,111,113-117). These include seizures, upper and lower motor neuron weakness, cerebellar and extrapyramidal signs, flexion deformities of the arms, hyperextension of the legs, cognitive deficits, language impairment, learning difficulties, and behavioral problems (82). Because of the lack of specific antiviral therapy, high case fatality, and substantial morbidity, prevention of JE through vaccination and mosquito precautions is important.
# JE Among Travelers
For most travelers to Asia, the risk for JE is very low but varies on the basis of destination, duration, season, and activities (3,9,10,13,118). The overall incidence of JE among persons from nonendemic countries traveling to Asia is estimated to be less than one case per 1 million travelers. However, the risk for JE among expatriates and travelers who stay for prolonged periods in rural areas with active JEV transmission is likely similar to that among the susceptible resident population. Recurrent travelers or travelers on brief trips might be at increased risk if they have extensive outdoor or nighttime exposure in rural areas during periods of active transmission (119-121). Short-term (<1 month) travelers whose visits are restricted to major urban areas are at minimal risk for JE. Because JEV is maintained in an enzootic cycle between animals and mosquitoes, in endemic areas where few human cases occur among residents as a result of vaccination or natural immunity, susceptible visitors might be at risk for infection. JE should be suspected in any patient with evidence of a neurologic infection (e.g., encephalitis, meningitis, or acute flaccid paralysis) who has recently returned from a country in Asia or the western Pacific in which JE is endemic.
During 1973-2008, a total of 55 cases of travel-associated JE among persons from nonendemic countries were reported in the literature (6-8). A small increase in the number of reported cases occurred in each of the three most recent 10-year periods: 1999-2008 (n = 20), 1989-1998 (n = 17), and 1979-1988 (n = 14). Two cases were reported during 1973-1978, and for two additional published cases, the date of onset was unknown but occurred before 1993. Overall, 33 (60%) cases occurred in tourists, nine (16%) were in expatriates, and six (11%) were in soldiers; the type of travel was unknown in seven (13%) cases. The tourist category included three case-patients who were traveling to visit friends and relatives and two students on study-abroad programs. The case-patients were citizens of 17 different countries. The countries where infection was most commonly acquired were Thailand (n = 19), Indonesia (n = 8), China (n = 7), the Philippines (n = 5), Japan (n = 4), and Vietnam (n = 3). Among the 46 cases for which age was recorded, patients ranged in age from 1 to 91 years (median: 34 years); five (9%) of the 55 cases occurred among children aged ≤10 years and 10 (18%) among adults aged ≥60 years. Overall, 29 (53%) of the 55 case-patients were male, and 22 (40%) were female; sex was unknown for four cases (7%). Ten (18%) of the reported cases were fatal, 24 (44%) patients survived but had sequelae, and 12 (22%) patients recovered fully; outcome was unknown for nine (16%) patients. None of the patients was known to have received JE vaccine.
For 37 (67%) of the travel-associated cases, more complete information on itineraries and activities was available (8). Many reports documented exposures that likely increased risk for infection, including travel to rural areas, living in proximity to farms or in the jungle, staying in unscreened accommodations, or participating in outdoor activities such as trekking. Duration of travel for these cases ranged from 10 days to 34 years and was ≥1 month for 24 (65%) travelers. Of the 13 travelers staying <1 month, 10 (77%) had trip duration of 2-<4 weeks, and three (23%) traveled for 10-12 days. Among these shorter-term travelers, three (23%) travelers spent the majority of their time in rural areas, six (46%) stayed in coastal or nonrural areas but took day trips to rural areas or national parks, and one (8%) stayed in a coastal area and took day trips to unspecified destinations; no exposure-related information was provided for three (23%) travelers. No cases occurred among business or other short-term travelers who visited only urban areas.
Before 1973, >300 cases of JE were reported among U.S. military personnel or their family members (72,79,140-143). Of 15 JE cases reported among travelers from the United States during 1973-2008, only four were reported after 1992, when JE-MB was first licensed in the United States; none of these patients had received JE vaccine (7,122).
In 2004, an estimated 5.5 million entries of U.S. travelers occurred into JE-endemic countries (144). The proportion of these travelers who received JE vaccine or for whom JE vaccine should have been recommended is unknown. However, one survey of U.S. travelers to Asia identified a group of travelers who planned to spend ≥30 days in a JE-endemic country and another 85 (5%) shorter-term travelers who planned to spend a substantial proportion of their time in endemic rural areas (145). Of these at-risk travelers, only 47 (11%) reported receiving ≥1 dose of JE vaccine. Among 164 unvaccinated at-risk travelers who had visited a health-care provider to prepare for their trip, 113 (69%) indicated that their health-care provider had not offered or recommended JE vaccine. In Europe, an assessment based on the number of JE vaccine doses distributed suggested that <1% of travelers to endemic countries received JE vaccine (118), underscoring the need for health-care providers to understand the risks for JE disease among travelers and the measures available to prevent it.
# JE Vaccines JE Vaccines Licensed in the United States
Two JE vaccines are licensed in the United States: an inactivated mouse brain-derived vaccine (JE-VAX [JE-MB]) and an inactivated Vero cell culture-derived vaccine (IXIARO [JE-VC]) (Table 1). JE-MB has been licensed in the United States since 1992 for use in travelers aged ≥1 year (13,146). JE-VC was licensed in 2009 for use in persons aged ≥17 years (14).
# Correlates of Protection
Because several effective JE vaccines are available in Asia, randomized, controlled efficacy trials to evaluate new JE vaccines would be logistically difficult and potentially unethical. JE-VC was licensed on the basis of its ability to induce JEV-specific neutralizing antibodies, which is thought to be a reliable surrogate of efficacy (149,150). Observations from the 1930s indicated that laboratory workers who had been exposed accidentally to JEV were protected from disease when they had measurable neutralizing antibodies (151). These observations are further supported by passive antibody transfer and active immunization studies in animals using both licensed and experimental JE vaccines. Subsequent studies in mice indicated that passive transfer of neutralizing antibodies protected animals against JEV challenge and established a dose-response relationship between antibody titer and level of protection (69,151-153). These studies also indicated that animals that have been actively primed to respond to JEV antigen but that have no detectable neutralizing antibodies are protected from lethal challenge, demonstrating an effective anamnestic immune response (153).

A more recent study indicated that hyperimmune ascitic fluid raised against two JE vaccines derived from genotype III JEV strains (i.e., JE-MB derived from the Nakayama-NIH strain and a developmental chimeric vaccine derived from the SA14-14-2 strain) protected mice against intracerebral challenge with JEV strains of all four genotypes. These data demonstrate that neutralizing antibodies provide protection against heterologous JEV genotypes (154). In another study, mice were passively immunized with pooled sera with JEV neutralizing antibody titers (range: 20-200) from humans immunized with JE-VC. The mice were challenged 18 hours later with a lethal dose of either a genotype I (KE093) or genotype III (SA14) JEV strain (155). Mice with ex vivo JE neutralizing antibody titers of ≥10 had survival rates of 86% (6/7) and 100% (10/10) after challenge with the genotype I and III JEV strains, respectively. In mice receiving lower-titer sera, protection against JEV challenge correlated with the anti-JEV neutralizing antibody titer of the immunizing sera. Mice actively vaccinated with varying doses of JE-VC and JE-MB also had dose-dependent protection against intraperitoneal challenge with the JEV SA14 strain (155). Finally, in a study designed to develop a JE animal model in nonhuman primates, 16 rhesus macaques were challenged intranasally with a 90% effective dose of JEV (i.e., a dose that when administered via intranasal challenge would be expected to cause encephalitis in 90% of the animals); the animals included four monkeys immunized with 4 doses of inactivated mouse brain-derived JE vaccine, eight monkeys immunized with one of two developmental JE poxvirus vaccines, and four JEV-naïve control monkeys (156,157). The minimum neutralizing antibody titer required to protect the monkeys from lethal challenge was between 30 and 46. The higher titers required for protection in this study might have been caused by the high challenge dose used to develop the model.

The PRNT is the most commonly accepted test to measure functional antibody able to inactivate or neutralize virus. A World Health Organization (WHO) expert panel accepted a 50% PRNT (PRNT50) titer of ≥10 as an immunologic correlate of protection from JE in humans (150).
The PRNT50 titer is the reciprocal of the endpoint serum dilution that reduces the challenge virus plaque count by 50%. PRNT is a functional assay that can be performed using various protocols, and the validity and comparability of PRNT results depend on detailed components of the selected assay (e.g., endpoint neutralization, incubation conditions, cell substrate, and target virus) (150,158). JEV PRNTs are performed only at selected reference laboratories, and careful attention must be paid to the characteristics and validation of a PRNT assay that is used to measure and compare JEV neutralizing titers as a surrogate for efficacy.
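To make the endpoint calculation concrete, the sketch below estimates a PRNT50 titer from hypothetical plaque counts by log-linear interpolation between the two serum dilutions that bracket 50% plaque reduction. The counts, dilution series, and function name are illustrative only and do not represent any validated assay protocol.

```python
import math

def prnt50_titer(control_count, counts_by_dilution):
    """Estimate the PRNT50 endpoint titer: the reciprocal of the serum
    dilution that reduces the challenge-virus plaque count by 50%,
    interpolated on a log scale between the bracketing dilutions."""
    points = sorted(counts_by_dilution.items())  # ascending reciprocal dilution
    for (d_lo, n_lo), (d_hi, n_hi) in zip(points, points[1:]):
        pct_lo = 100 * (1 - n_lo / control_count)  # reduction at stronger serum
        pct_hi = 100 * (1 - n_hi / control_count)  # reduction at weaker serum
        if pct_lo >= 50 > pct_hi:
            frac = (pct_lo - 50) / (pct_lo - pct_hi)
            log_titer = math.log10(d_lo) + frac * (math.log10(d_hi) - math.log10(d_lo))
            return round(10 ** log_titer)
    return None  # 50% reduction not bracketed by the tested dilutions

# Hypothetical plate: control wells average 50 plaques
print(prnt50_titer(50, {10: 5, 20: 12, 40: 28, 80: 41}))  # PRNT50 titer ~35
```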
# Inactivated Vero Cell Culture-Derived JE Vaccine (IXIARO)

# Vaccine Composition, Storage, and Handling
JE-VC is an inactivated vaccine derived from the attenuated SA14-14-2 JEV strain propagated in Vero cells (Table 1) (14,159,160). Each 0.5-mL dose contains approximately 6 μg of purified, inactivated JEV proteins and 0.1% aluminum hydroxide as an adjuvant. The finished product does not include gelatin stabilizers, antibiotics, or thimerosal.
The vaccine should be stored at 35°-46°F (2°-8°C); it should not be frozen. The vaccine should be protected from light. During storage, the vaccine might appear as a clear liquid with a white precipitate. After agitation, it forms a cloudy white suspension.
# Immunogenicity of JE-VC

# Noninferiority of Immunogenicity Compared with JE-MB
No efficacy data exist for JE-VC. The vaccine was licensed on the basis of its ability to induce JEV neutralizing antibodies as a surrogate for protection and on safety evaluations in approximately 5,000 adults. The pivotal noninferiority immunogenicity study compared 2 doses of JE-VC given on days 0 and 28 with 3 doses of JE-MB given on days 0, 7, and 28 in adults aged ≥18 years in the United States, Austria, and Germany (161). In the "per protocol" analysis, 352 (96%) of 365 JE-VC recipients developed a PRNT50 ≥10 compared with 347 (94%) of 370 JE-MB recipients at 28 days after the last dose (Table 2) (14,161). The proportion of recipients who seroconverted differs slightly between the package insert and the published manuscript because of the inclusion of four additional JE-VC recipients and six additional JE-MB recipients in the per protocol analysis provided in the package insert (14,161). Most subjects in each group had a PRNT50 >40 (Figure 3). The PRNT50 geometric mean titer (GMT) for JE-VC recipients was 244 compared with 102 for JE-MB recipients. However, the target JEV strain in the neutralizing antibody assay was SA14-14-2, which is the JEV strain used in JE-VC, whereas JE-MB is produced from the Nakayama JEV strain. In a subset of these specimens that were evaluated using Nakayama as the target JEV strain, the PRNT50 GMT for JE-VC recipients (n = 88) was 240 compared with 1,219 for the JE-MB recipients (n = 89) (Intercell Biomedical, unpublished data, 2007). In another, smaller subset of specimens, PRNT50 GMTs against other JEV strains were variable (Figure 4) (Intercell Biomedical, unpublished data, 2007).
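The geometric mean titers quoted in these comparisons are calculated on a log scale rather than as simple arithmetic averages. A minimal sketch with made-up titers (not the trial data):

```python
import math

def geometric_mean_titer(titers):
    """GMT: exponentiate the arithmetic mean of the log-transformed titers."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

# Five hypothetical PRNT50 titers
print(round(geometric_mean_titer([80, 160, 320, 640, 160])))  # ~211
```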
# Immunogenicity Data Supporting the 2-Dose Primary Series
The licensed vaccine schedule was based in part on a study that compared two 6-μg doses of vaccine administered 28 days apart with a single dose of either 6 μg or 12 μg (162). At 28 days after receiving 1 dose of the standard 6-μg regimen, only 95 (41%) of 230 JE-VC recipients had seroconverted with a PRNT50 ≥10 (Figure 5). At 56 days after receiving their first dose of vaccine, 97% (110/113) of the subjects who had received 2 doses had a PRNT50 ≥10 compared with only 26% (30/117) and 41% (47/114) of the subjects who received a single 6-μg or 12-μg dose, respectively. All of the 2-dose recipients who seroconverted had protective antibodies as early as 7 days after receiving the second dose of vaccine.
# Immunogenicity in Persons with Preexisting Flavivirus Antibodies
A study that evaluated the effect of preexisting antibodies against tick-borne encephalitis virus (TBEV), another flavivirus, determined that TBEV antibodies enhanced the response to JE-VC after the first dose but had no effect following the 2-dose primary series (163). Following 1 dose of JE-VC, 62 (77%) of 81 subjects with preexisting TBEV IgG antibodies developed protective antibodies against JEV compared with only 166 (49%) of 339 JE-VC recipients with no preexisting TBEV antibodies (Table 3). However, after the second dose of JE-VC, subjects with and without TBEV antibodies had similarly high rates of seroconversion against JEV at 96% (78/81) and 91% (310/339), respectively. JEV PRNT50 GMTs were also similar between the groups after 2 doses of JE-VC (207 and 187, respectively).
# Duration of Neutralizing Antibodies
In a study performed in central Europe to evaluate the duration of neutralizing antibodies, 95% (172/181) of the subjects who received 2 doses of JE-VC maintained protective neutralizing antibodies (PRNT50 ≥10) at 6 months after receiving the first dose, and 83% (151/181) still had protective antibodies at 12 months after the first dose (164). However, another study that used similar methods but was performed in western and northern Europe concluded that only 83% (96/116) of adults receiving 2 doses of JE-VC had protective antibodies at 6 months after their first vaccination, and the seroprotection rate had dropped to 58% (67/116) and 48% (56/116) at 12 and 24 months, respectively (165). Among 44 subjects who no longer had protective antibodies (17 subjects at 6 months after their first dose and 27 subjects at 12 months after their first dose), all developed a PRNT50 ≥10 after receiving a booster dose.
# Concomitant Administration of Hepatitis A Vaccine
A clinical trial in which the first dose of JE-VC was administered concomitantly with hepatitis A vaccine (HAVRIX) indicated no interference with the immune response to JE-VC or hepatitis A vaccine (166). Among the 58 subjects who received both JE-VC and hepatitis A vaccine in the per-protocol analysis, all had protective neutralizing antibodies against the SA14-14-2 JEV strain compared with 98% (57/58) of subjects who received JE-VC alone (Table 4). PRNT50 GMTs also were similar at 203 and 192, respectively. Subjects receiving JE-VC and hepatitis A vaccine also had similar seroconversion rates for anti-hepatitis A virus (anti-HAV) antibody (100%; 58/58) compared with subjects receiving hepatitis A vaccine alone (96%; 50/52). However, some differences were noted between men and women in the levels of anti-HAV antibody achieved, and both seroconversion rates and antibody titers varied depending on which anti-HAV assay was used. Whether these observations have any clinical significance is not known.
# Immunogenicity in Children
A phase 2 trial investigated the safety and immunogenicity of JE-VC in healthy children aged 1 and 2 years in India, using a standard (6-μg) or half (3-μg) dose (167). Children in both groups received 2 doses of JE-VC administered 28 days apart. A third group of children received 3 doses of an inactivated mouse brain-derived JE vaccine (JenceVac) on days 0, 7, and 28. JenceVac is produced by the Korean Green Cross Vaccine Corporation and is not licensed in the United States. At 28 days after the vaccination series was complete, seroconversion rates in the 6-μg (n = 21) and 3-μg (n = 23) JE-VC recipient groups and the inactivated mouse brain-derived group (n = 11) were 95%, 96%, and 91%, and PRNT50 GMTs were 218 (95% confidence interval = 121-395), 201 (CI = 106-380), and 230 (CI = 68-784), respectively. None of the differences in seroconversion rates or GMTs was statistically significant. Further pediatric clinical trials are planned.
# Adverse Events with JE-VC
Local and systemic adverse events caused by JE-VC are similar to those reported for JE-MB or placebo adjuvant alone. No serious hypersensitivity reactions or neurologic adverse events were identified among JE-VC recipients enrolled in the clinical trials. However, because JE-VC was studied in <5,000 adults, the possibility of rare serious adverse events cannot be excluded. Additional postlicensure studies and monitoring of surveillance data are planned to evaluate the safety of JE-VC in a larger population.
# Local and Systemic Adverse Events of JE-VC Compared with Placebo Adjuvant
The pivotal safety study comparing 1,993 subjects who received 2 doses of JE-VC with 657 subjects who received 2 doses of placebo adjuvant (phosphate-buffered saline with 0.1% aluminum hydroxide) indicated similar reactogenicity and adverse events (Tables 5 and 6) (168). The most common local reactions after JE-VC administration were pain and tenderness. Two cases of urticaria were noted during the study: one case of urticaria localized to both inner thighs that occurred 6 days after the second vaccination in the placebo group, and one case of generalized urticaria (affecting the face, chest, arms, and abdomen) that occurred 8 days after the second vaccination in the JE-VC group. The case of urticaria in the JE-VC group was described as being of "moderate intensity"; it was treated with cetirizine hydrochloride and resolved after 3 days. Angioedema was not observed. The event was considered by the investigator to be unlikely to be related to study vaccine, and the subject completed the study. A total of 17 subjects, 12 (0.6%) in the JE-VC group and five (0.8%) in the placebo group, terminated the study prematurely because of adverse events. Two of the events (gastroenteritis and rash) in the JE-VC group were severe, and eight of them (headache, influenza-like illness, allergic dermatitis, injection site pain, nausea, fatigue, and rash) were considered to be at least possibly related to study treatment.
No serious neurologic events were identified.
# Local and Systemic Adverse Events of JE-VC Compared with JE-MB
In the noninferiority immunogenicity trial, the frequency of adverse events reported following JE-VC vaccination (428 subjects) was similar to that reported by persons receiving JE-MB (435 subjects) (161). Severe redness, swelling, tenderness, and pain at the injection site were each reported by ≤1% of JE-VC recipients (Table 7). Reported systemic adverse events following JE-VC vaccination generally were mild; the most commonly reported adverse events in the 7 days after each dose were headache (26%), myalgia (21%), influenza-like illness (13%), and fatigue (13%). One serious adverse event was reported in the JE-VC group; a male aged 50 years had a nonfatal myocardial infarction 3 weeks after the second vaccination. The event was considered by the investigator to be unlikely to be related to study vaccine.
# Pooled Safety Data
In a pooled analysis of 6-month safety data from seven studies, severe injection-site reactions were reported by 3% of the 3,558 JE-VC subjects, which was comparable to the 3% among 657 placebo adjuvant recipients but lower than the 14% among 435 JE-MB recipients (169). Systemic symptoms were reported with similar frequency among subjects who received JE-VC (40%), JE-MB (36%), or placebo (40%). Serious adverse events were reported by 1% of the subjects in the JE-VC group. Serious allergic reactions were not observed in any study group, including JE-VC, JE-MB, or placebo recipients.
# Vaccination of Women During Pregnancy and Breastfeeding
FDA classifies JE-VC as a "Pregnancy Category B" medication (14). No controlled studies have assessed the safety, immunogenicity, or efficacy of JE-VC in pregnant women. Preclinical studies of JE-VC in pregnant rats did not show evidence of harm to the mother or fetus. No data exist on the safety or efficacy of JE-VC in breastfeeding women.
# Inactivated Mouse Brain-Derived JE Vaccine (JE-VAX)

# Vaccine Composition, Storage, and Handling
JE-MB is an inactivated vaccine prepared by inoculating mice intracerebrally with the JEV Nakayama-NIH strain (Table 1) (146). Thimerosal is added as a preservative to a final concentration of 0.007%. Each 1.0-mL dose also contains approximately 500 μg of gelatin, <100 μg of formaldehyde, <0.0007% v/v Polysorbate 80, and <50 ng of mouse serum protein. No myelin basic protein can be detected at the detection threshold of the assay (<2 ng/mL).
The lyophilized vaccine should be stored at 35°-46°F (2°-8°C); it should not be frozen. The vaccine should be reconstituted according to the package insert only with the supplied diluent. After reconstitution the vaccine should be stored at 35°-46°F (2°-8°C) and used within 8 hours. Reconstituted vaccine should not be frozen.
# Efficacy of Inactivated Mouse Brain-Derived JE Vaccine
An inactivated mouse brain-derived JE vaccine was first licensed in Japan in 1954 and then modified in the 1960s and 1980s. Similar vaccines are produced in several Asian countries using the Nakayama or Beijing-1 JEV strains. Inactivated mouse brain-derived vaccines have been used effectively to control disease in several JE-endemic countries, including Japan, South Korea, Taiwan, and Thailand. They also have been used for several decades to prevent infection in tourists and military personnel from nonendemic countries traveling to JE-endemic regions.
Efficacy of inactivated mouse brain-derived JE vaccine has been demonstrated in two large controlled trials. A precursor of the current mouse brain-derived JE vaccine was studied in Taiwan in 1965 (170). Approximately 265,000 children were enrolled and received 1 dose of JE vaccine (22,194 children), 2 doses of JE vaccine (111,749 children), or tetanus toxoid (131,865 children). Another 140,514 unvaccinated children also were observed. After 1 year of follow-up, JE incidence among children who received 2 doses of JE vaccine was 3.6 per 100,000 population compared with 18.2 per 100,000 in recipients of tetanus toxoid, for a vaccine efficacy of approximately 80%. A single dose of JE vaccine was not efficacious.
Efficacy of JE-MB was demonstrated in a placebo-controlled, randomized trial conducted among 65,000 children aged 1-14 years in Thailand (68). Study participants were randomized to receive 2 doses of monovalent vaccine prepared with the Nakayama-NIH JEV strain administered 7 days apart (21,628 children), 2 doses of a bivalent JE vaccine that contained both the Nakayama-NIH and Beijing-1 JEV strains (22,080 children), or tetanus toxoid (21,516 children). After 2 years, one JE case was identified in each of the two study vaccine groups (five cases per 100,000) compared with 11 JE cases (51 cases per 100,000) in the children who received tetanus toxoid. The efficacy in both JE vaccine groups combined was 91% (CI = 70%-97%) with no difference observed between the monovalent and bivalent vaccines.
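Both efficacy estimates follow from the standard incidence-based formula, vaccine efficacy = 1 - (attack rate in vaccinees ÷ attack rate in controls). The sketch below reproduces the two published figures; the function name is ours, not part of either trial report.

```python
def vaccine_efficacy(cases_vax, n_vax, cases_ctrl, n_ctrl):
    """VE = 1 - (attack rate among vaccinated / attack rate among controls)."""
    return 1 - (cases_vax / n_vax) / (cases_ctrl / n_ctrl)

# Taiwan, 1965: incidence of 3.6 vs 18.2 per 100,000 after 2 doses
print(round(100 * vaccine_efficacy(3.6, 100_000, 18.2, 100_000)))  # 80 (%)

# Thailand: 2 cases among 43,708 JE vaccine recipients (both vaccine groups
# combined) vs 11 cases among 21,516 tetanus toxoid recipients
print(round(100 * vaccine_efficacy(2, 43_708, 11, 21_516)))  # 91 (%)
```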
# Immunogenicity of Inactivated Mouse Brain-Derived JE Vaccine
# Immunogenicity of 2 Versus 3 Doses for Travelers
For travelers from nonendemic countries, the recommended primary vaccination series for JE-MB is 3 doses administered on days 0, 7, and 30. Children in JE-endemic countries usually receive 2 doses of inactivated mouse brain-derived JE vaccine separated by 1-4 weeks and followed by a booster dose 1 year later (147). However, immunogenicity studies performed in adults from nonendemic countries indicated that ≤80% of participants seroconverted following 2 doses of vaccine, and only about 30% still had measurable neutralizing titers after 6-12 months. By contrast, between 87% and 100% of adults from nonendemic settings developed neutralizing antibodies after receiving 3 doses of vaccine (Table 8) (13,138,171-173). The vaccine's better efficacy and immunogenicity after 2 doses in Asian subjects might be a result of prior immunity or subsequent exposures to flaviviruses present in Asia, including JEV, West Nile virus, and dengue virus (174). Although exposure to flaviviruses is almost universal at an early age in most countries in Asia, flaviviral infections are much less common in North America and Europe.
# Immunogenicity of Two 3-Dose Regimens
The immunogenicity of two different 3-dose vaccination regimens was evaluated in 528 U.S. military personnel in 1990 (173). Vaccine was given on days 0, 7, and either 14 or 30. All vaccine recipients demonstrated neutralizing antibodies at 2 months and 6 months after initiation of vaccination. The longer schedule of days 0, 7, and 30 produced higher antibody titers than the days 0, 7, and 14 schedule. However, when 273 of the original study participants were tested at 12 months after vaccination, there was no statistically significant difference in GMTs between the two groups (146).
# Neutralizing Antibodies Following a Booster Dose
Marked anamnestic responses with ≥10-fold increases in neutralizing antibody titers have been demonstrated after a booster dose of JE-MB (147,175,176). In a study in Japan, among 152 subjects who had received either a 1-or 2-dose primary vaccination schedule, a booster dose at 1 year increased neutralizing antibody GMTs from ≤20 immediately prior to the booster to ≥1,360 at 4 weeks (175).
# Duration of Neutralizing Antibodies
Only a few studies have measured the duration of protection after primary or booster vaccinations in populations from nonendemic or low-endemicity areas. Studies conducted in most parts of Asia are complicated by the boosting effect of naturally acquired flaviviral infections. In U.S. military personnel who received a 3-dose primary vaccination course, 100% (21/21) and 94% (16/17) still had protective neutralizing antibody titers at 2 years and 3 years, respectively (177). In a study in a nonendemic area of Japan, 38 (92%) of 41 recipients maintained protective antibody titers 2 years after a booster dose (175). Another study in an area of low JE endemicity in Japan applied a random coefficient model to data from 17 children and estimated that 82% would have protective antibodies at 5 years after a booster dose (4th dose) of vaccine and that 53% would still be protected 10 years following the booster dose (178). In contrast to these studies, which indicated persistence of immunity for several years, a study conducted on Badu Island in the Torres Strait, Australia, indicated that only 70 (32%) of 219 persons had protective antibodies 30-36 months after either primary vaccination or receipt of a booster dose (179). The reason for the unusually low level of immunity in this study was not clear, although high prevalence of chronic medical conditions among the population studied was proposed as a contributing factor.
# Immunogenicity in Persons with Preexisting Flavivirus Antibodies
Immunogenicity studies with another flaviviral vaccine (inactivated tick-borne encephalitis vaccine) have indicated that previous yellow fever vaccination augmented the antibody response to TBE vaccine (180). This effect has not been observed among JE-MB recipients (135,173).
# Adverse Events with Inactivated Mouse Brain-Derived JE Vaccine

# Local and Systemic Adverse Events
Inactivated mouse brain-derived JE vaccine has been associated with localized erythema, tenderness, and swelling at the injection site in about 20% of recipients. Mild systemic side effects (e.g., fever, chills, headache, rash, myalgia, and gastrointestinal symptoms) have been reported in approximately 10% of vaccinees (138,146,173).
# Allergic Hypersensitivity Reactions
JE-MB has been associated with serious, but rare, allergic and neurologic adverse events. Allergic hypersensitivity reactions, including generalized urticaria and angioedema of the extremities, face, and oropharynx, have been reported primarily among adult travelers and military personnel (13,66,147,181-188). Accompanying bronchospasm, respiratory distress, and hypotension were observed in some patients. Although most of these reactions occurred within 24-48 hours after the first dose, when they occurred following a subsequent dose, symptom onset often was delayed (median: 3 days; range: 1-14 days) (185). Most of these reactions were treated with antihistamines or corticosteroids on an outpatient basis; however, up to 10% of vaccinees with these reactions have been hospitalized. Several deaths attributed to anaphylactic shock have been associated temporally with receipt of this vaccine, but none of these patients had evidence of urticaria or angioedema, and two had received other vaccines simultaneously (43,185,189). Estimates of the frequency of severe hypersensitivity reactions range from 10 to 260 cases per 100,000 vaccinees and vary by country, year, case definition, surveillance method, and vaccine lot (Table 9) (13,181-188). Persons with a history of anaphylaxis, urticaria, or other allergies are 2-11 times more likely to develop a hypersensitivity reaction following receipt of JE vaccine (185,190). Gelatin, which is used as a vaccine stabilizer, might be responsible for some of these allergic reactions (191-193). In one study from Japan, all 10 children who developed an immediate hypersensitivity reaction within 1 hour after receiving inactivated mouse brain-derived JE vaccine had measurable IgE antibodies against gelatin (193). By contrast, only one (4%) of 28 children who developed a delayed hypersensitivity reaction at 1-48 hours after vaccination, and none of 15 controls, had evidence of anti-gelatin IgE antibodies.
# Neurologic Adverse Events
JE-MB contains no myelin basic protein at the detection threshold of the assay. However, the use of mouse brains as the substrate for virus growth has raised concerns about the possibility of neurologic side effects associated with the JE vaccine. Moderate to severe neurologic symptoms, including encephalitis, seizures, gait disturbances, and parkinsonism, have been reported at a rate of 0.1-2 cases per 100,000 vaccinees, with variation by country, case definition, and surveillance method (Table 9) (13,186,188,194). In addition, cases of severe or fatal acute disseminated encephalomyelitis (ADEM) temporally associated with JE vaccination of children in Japan and Korea have been reported (43,186,195-199). In 2005, in response to these cases, Japan suspended routine vaccination with mouse brain-derived JE vaccines (5,200). In reviewing this decision, the WHO Global Advisory Committee on Vaccine Safety determined that no evidence existed of an increased risk for ADEM associated with mouse brain-derived JE vaccine and that a causal link had not been demonstrated. The committee recommended that, although current use and policies should not be changed, the inactivated mouse brain-derived vaccine should be replaced gradually by new-generation JE vaccines (5,200).
# Vaccination of Women During Pregnancy and Breastfeeding
FDA classifies JE-MB as a "Pregnancy Category C" medication (146). No specific information is available on the safety of JE-MB in pregnant women, and animal reproductive studies have not been conducted with JE-MB. In addition, no data exist on the safety or efficacy of JE-MB in breastfeeding women.
# Cost Effectiveness of JE Vaccines
Several studies have demonstrated that using JE vaccine to immunize children in JE-endemic countries is cost saving (201-203). However, given the large numbers of travelers to Asia (5.5 million entries of U.S. travelers into JE-endemic countries in 2004), the very low risk for JE for most travelers to Asia (less than one case per 1 million travelers), and the high cost of JE vaccine ($390 per 2-dose primary series for JE-VC in 2009) (204), providing JE vaccine to all travelers to Asia would not be cost-effective. In addition, for some travelers, even a low risk for serious adverse events attributable to JE vaccine might be higher than the risk for disease. Therefore, JE vaccine should be targeted to travelers who, on the basis of their planned travel itinerary and activities, are at increased risk for disease. Because travel vaccines typically are not covered by insurance plans and travelers usually must pay for vaccine administration themselves, travelers should be counseled about their individual risk on the basis of their planned itinerary and activities.
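The mismatch between vaccine cost and disease risk can be illustrated with back-of-the-envelope arithmetic using the figures cited above. This sketch deliberately ignores vaccine efficacy, adverse-event costs, and the costs of JE illness itself, so it understates the complexity of a formal cost-effectiveness analysis:

```python
# Figures cited in the text
risk_per_traveler = 1 / 1_000_000   # <1 JE case per 1 million travelers
cost_per_series = 390               # USD, 2-dose JE-VC primary series (2009)

travelers = 1_000_000
print(f"Cost to vaccinate all: ${travelers * cost_per_series:,}")  # $390,000,000
print(f"Cases expected in this group: <= {travelers * risk_per_traveler:.0f}")  # <= 1
```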
# Summary of Rationale for JE Vaccine Recommendations
When making recommendations regarding the use of JE vaccine for travelers, health-care providers should weigh the overall low risk for travel-associated JEV disease, the high morbidity and mortality when JE does occur, the low probability of serious adverse events following vaccination, and the cost of the vaccine. Evaluation of an individual traveler's risk should take into account their planned itinerary including travel location, duration, season, and activities (Box 1).
The risk for JE for most travelers to Asia is very low but varies based on destination, duration, season, and activities (3,8-10,13,118). Since 1992, when a vaccine was first licensed for use in the United States, only four cases of JE have been reported among travelers from the United States; none of the patients had received JE vaccine. The overall incidence of JE among people from nonendemic countries traveling to Asia is estimated to be less than one case per 1 million travelers. However, the risk for JE among expatriates and travelers who stay for prolonged periods in rural areas with active JEV transmission is likely similar to the risk among the susceptible resident population (6,9). Recurrent travelers or travelers on brief trips might be at increased risk if they have extensive outdoor or nighttime exposure in rural areas during periods of active transmission (119-121). Short-term (<1 month) travelers whose visits are restricted to major urban areas are at minimal risk for JE.
# Duration of travel
- Most reported travel-associated JE cases have occurred among expatriates or long-term travelers (i.e., ≥1 month).
- Although no specific duration of travel puts a traveler at risk for JE, a longer itinerary increases the likelihood that a traveler might be exposed to a JEV-infected mosquito.
# Season
- In most temperate areas of Asia, JEV transmission is seasonal, and human disease usually peaks in summer and fall.
- In the subtropics and tropics, JEV transmission patterns vary, and human disease can be sporadic or occur year-round.
# Activities
- The mosquitoes that transmit JEV feed on humans most often in the outdoors, with peak feeding times after sunset and again after midnight.
- Extensive outdoor activities (e.g., camping, hiking, trekking, biking, fishing, hunting, or farming), especially during the evening or night, increase the risk of being exposed to a JEV-infected mosquito.
- Accommodations with no air conditioning, screens, or bed nets increase the risk of exposure to mosquitoes that transmit JEV and other vector-borne diseases (e.g., dengue and malaria).
# Additional information
- Information on expected JEV transmission by country is available from CDC at http://wwwnc.cdc.gov/travel/yellowbook/2010/chapter-2/japaneseencephalitis.aspx.
The highest risk for JEV exposure occurs in rural agricultural areas, often those associated with rice production and flooding irrigation. In most temperate areas of Asia, JEV transmission is seasonal, and human disease usually peaks in summer and fall. In the subtropics and tropics, transmission patterns vary, and human disease can be sporadic or occur year-round.
Although no minimum duration of travel eliminates a traveler's risk for JE, a longer itinerary increases the likelihood that a traveler will spend time in an area with active JEV transmission. The mosquitoes that transmit JEV feed most often in the outdoors with peaks after sunset and again after midnight. Outdoor activities, especially during the evening or night, increase the risk for being exposed to a JEV-infected mosquito.
# Recommendations for the Prevention of JE Among Travelers
Travelers to JE-endemic countries should be advised of the risks of JEV disease and the importance of personal protective measures to reduce the risk for mosquito bites. For some travelers who will be in a high-risk setting based on season, location, duration, and activities, JE vaccine can further reduce the risk for infection.
# Personal Protective Measures
All travelers should take precautions to avoid mosquito bites to reduce the risk for JE and other vector-borne infectious diseases (Box 2). These precautions include using insect repellent, permethrin-impregnated clothing, and bed nets, and staying in accommodations with screened or air-conditioned rooms. Additional information on protection against mosquitoes and other arthropods is available at http://wwwnc.cdc.gov/travel/yellowbook/2010/chapter-2/protection-against-mosquitoes-ticks-insects-arthropods.aspx.
# Recommendations for the Use of JE Vaccine
JE vaccine is recommended for travelers who plan to spend a month or longer in endemic areas during the JEV transmission season (Box 3). This includes long-term travelers, recurrent travelers, or expatriates who will be based in urban areas but are likely to visit endemic rural or agricultural areas during a high-risk period of JEV transmission. JE vaccine should be considered for the following persons:
- Short-term (<1 month) travelers to endemic areas during the JEV transmission season if they plan to travel outside of an urban area and have an increased risk for JEV exposure. Examples of higher-risk activities or itineraries include 1) spending substantial time outdoors in rural or agricultural areas, especially during the evening or night; 2) participating in extensive outdoor activities (e.g., camping, hiking, trekking, biking, fishing, hunting, or farming); and 3) staying in accommodations without air conditioning, screens, or bed nets.
- Travelers to an area with an ongoing JE outbreak.
- Travelers to endemic areas who are uncertain of specific destinations, activities, or duration of travel.

JE vaccine is not recommended for short-term travelers whose visits will be restricted to urban areas or times outside of a well-defined JEV transmission season.
Information on expected JEV transmission by country can be obtained from CDC at http://wwwnc.cdc.gov/travel/yellowbook/2010/chapter-2/japaneseencephalitis.aspx. These data should be interpreted cautiously because JEV transmission activity varies within countries and from year to year.
# Recommendations for the Use of JE Vaccines in Laboratory Workers
At least 22 laboratory-acquired JEV infections have been reported in the literature (205). Although work with JEV is restricted to Biosafety Level 3 (BSL-3) facilities and practices, JEV might be transmitted in a laboratory setting through needlesticks and, theoretically, through mucosal or inhalational accidental exposures. Vaccine-induced immunity presumably protects against exposure through a percutaneous route, but exposure to aerosolized JEV, and particularly to high concentrations of virus, theoretically could result in infection through mucosal or inhalational routes. JE vaccine is recommended for laboratory workers with a potential for exposure to infectious JEV.

# BOX 2. Personal protective measures to reduce the risk for Japanese encephalitis and other vector-borne infectious diseases

All travelers should take precautions to avoid mosquito bites to reduce the risk for JE and other vector-borne infectious diseases (see Personal Protective Measures, above).
# Administration of JE Vaccines

# Dosage and Administration
# JE-VC
The primary vaccination series for JE-VC is 2 doses administered 28 days apart (Table 1). Each 0.5-mL dose is given by the intramuscular route; this route is different from that of JE-MB, which is administered subcutaneously. JE-VC is supplied in 0.5-mL single-dose syringes. The 2-dose series should be completed at least 1 week before potential exposure to JEV. The dose is the same for all persons aged ≥17 years. The vaccine is not licensed for use in persons aged <17 years.
# JE-MB
For travelers, the recommended primary vaccination series for JE-MB is 3 doses administered subcutaneously on days 0, 7, and 30 (Table 1). An abbreviated schedule (days 0, 7, and 14) can be used when the longer schedule is impractical. Both regimens produce similar rates of seroconversion among recipients, but neutralizing antibody titers at 2 and 6 months are lower following the abbreviated schedule. Two doses administered 1 week apart will confer short-term immunity in approximately 80% of vaccinees; however, this schedule should be used only under unusual circumstances and is not recommended routinely. The last dose should be administered at least 10 days before travel begins to ensure an adequate immune response and access to medical care in the event of a delayed adverse reaction. The dose is 1.0 mL for persons aged ≥3 years. For children aged 1-2 years, the dose is 0.5 mL. The vaccine is not licensed for infants aged <1 year.
Recipients should be observed for a minimum of 30 minutes after vaccination and warned about the possibility of delayed allergic reactions, in particular angioedema of the extremities, face, or oropharynx, or generalized urticaria. Vaccinees should be advised to remain in areas with access to medical care for 10 days after receiving each dose of JE-MB because of the possibility of delayed hypersensitivity. The full course of vaccination should be completed at least 10 days before travel.
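Because the final JE-MB dose must be given at least 10 days before travel, the latest acceptable first-dose date can be back-calculated from the schedule. A minimal sketch of that date arithmetic (illustrative only, not a clinical scheduling tool):

```python
from datetime import date, timedelta

def latest_first_dose(departure, schedule_days=(0, 7, 30), buffer_days=10):
    """Latest first-dose date so the final dose falls >= buffer_days before travel."""
    return departure - timedelta(days=schedule_days[-1] + buffer_days)

# Standard JE-MB schedule (days 0, 7, and 30) before a July 1 departure
print(latest_first_dose(date(2010, 7, 1)))              # 2010-05-22
# Abbreviated schedule (days 0, 7, and 14) when time is short
print(latest_first_dose(date(2010, 7, 1), (0, 7, 14)))  # 2010-06-07
```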
# Booster Doses
# JE-VC
The need for and timing of booster doses following a 2-dose primary series with JE-VC have not been determined, and further study is needed. The full duration of protection following primary vaccination with JE-VC is unknown. One immunogenicity study indicated that 95% (172/181) of subjects maintained protective neutralizing antibodies 6 months after receiving the first dose and 83% (151/181) still had protective antibodies 12 months after primary vaccination (164). However, a subsequent study determined that only 83% (96/116), 58% (67/116), and 48% (56/116) of subjects had protective antibodies at 6, 12, and 24 months after their first vaccination, respectively (165).
# JE-MB
The full duration of protection following primary vaccination with JE-MB also is unknown. However, immunogenicity studies indicate that neutralizing antibodies likely persist for at least 2 years (175,177-179). A booster dose of 1.0 mL (0.5 mL for children aged <3 years) of JE-MB may be administered 2 years after the primary series.
# Interchangeability of JE Vaccines
No data exist on the interchangeability of JE-VC and JE-MB for use in the primary series or as a booster dose.
# Simultaneous Administration of Other Vaccines or Drugs

# JE-VC
A clinical trial in which the first dose of JE-VC was administered concomitantly with hepatitis A vaccine (HAVRIX ® ) indicated no interference with the immune response to JE-VC or hepatitis A vaccine (166). Subjects who received concomitant vaccination with JE-VC and hepatitis A vaccine were more likely to report pain, redness, and swelling than subjects who received either vaccine alone. No other differences were reported in safety or reactogenicity with concomitant administration of JE-VC and hepatitis A vaccine compared with administration of each vaccine alone.
No data exist on administration of JE-VC with other vaccines or medications. If JE-VC is administered concomitantly with other vaccines, they should be given with separate syringes at different sites.
# JE-MB
Limited data suggest that immunogenicity and safety are not compromised when inactivated mouse brain-derived JE vaccines, including JE-MB, are administered simultaneously with measles-mumps-rubella, diphtheria-tetanus-pertussis, or oral polio vaccines (206,207). No data exist on the effect of concurrent administration of medications or other biologicals on the safety and immunogenicity of JE-MB.
# Contraindications and Precautions for the Use of JE Vaccines

# Allergy to Vaccine Components

# JE-VC
A severe allergic reaction (e.g., anaphylaxis) after a previous dose of JE-VC is a contraindication to administration of subsequent doses. JE-VC contains protamine sulfate, a compound known to cause hypersensitivity reactions in some persons (14).
# JE-MB
A history of an allergic or hypersensitivity reaction (i.e., generalized urticaria and angioedema) to a previous dose of JE-MB is a contraindication to receiving additional doses (146). Hypersensitivity to thimerosal is a contraindication to vaccination, and persons with a proven or suspected hypersensitivity to proteins of rodent or neural origin should not receive JE-MB.
Persons with a history of previous allergic reactions or urticaria attributed to any cause (e.g., medications, other vaccinations, or insect bite) might be at higher risk for allergic complications from JE-MB (185,190). This history should be considered as a precaution when weighing the risks and benefits of the vaccine for an individual patient. When patients with such a history are offered JE vaccine, they should be alerted to their increased risk for reaction and monitored appropriately. No data exist that support the efficacy of prophylactic antihistamines or steroids in preventing JE-MB-related allergic reactions.
# Age

# JE-VC
The safety and effectiveness of JE-VC among children have not been established; studies are in progress. Until data are available, JE vaccination of children aged 1-16 years should be performed with JE-MB.
# JE-MB
JE-MB is licensed for use in persons aged ≥1 year. No data are available on the safety and efficacy of JE-MB among infants aged <1 year. Although other inactivated mouse brain-derived JE vaccines have been administered to infants as young as age 6 months in Japan and Thailand (5,147), vaccination of infants traveling to JE-endemic countries should be deferred until they are aged ≥1 year.
# Pregnancy
Practitioners should use caution when considering the use of JE vaccine in pregnant women. Vaccination with JE vaccines usually should be deferred because of a theoretic risk to the developing fetus. However, pregnant women who must travel to an area in which risk for JE is high should be vaccinated if the benefits outweigh the risks of vaccination to the mother and developing fetus.
# Special Populations
# Age

# JE-VC

JE-VC is approved for use in persons aged ≥17 years. Data are limited on the use of the vaccine in persons aged ≥65 years (n = 24) but suggest that safety and immunogenicity are similar to those among younger subjects (14). However, further trials are needed to determine whether older adults respond differently than younger subjects. (For information on children aged <17 years, see Age under Contraindications and Precautions for the Use of JE Vaccines.)
# JE-MB
# Pregnancy

# JE-VC
FDA classifies JE-VC as a "Pregnancy Category B" medication. No studies of JE-VC in pregnant women have been conducted (14). See the section on pregnancy under Contraindications and Precautions for the Use of JE Vaccines for more information.
# JE-MB
FDA classifies JE-MB as a "Pregnancy Category C" medication. No specific information is available on the safety of JE-MB in pregnancy (146). See the section on pregnancy under Contraindications and Precautions for the Use of JE Vaccines for more information.
# Breastfeeding Women
Breastfeeding is not a contraindication to vaccination. However, whether JE-VC or JE-MB is excreted in human milk is not known. Because many drugs are excreted in human milk, practitioners should use caution when considering the use of JE vaccine in breastfeeding women.
# Altered Immune States
# JE-VC
No data exist on the use of JE-VC in immunocompromised persons or patients receiving immunosuppressive therapies.
# JE-MB
In limited studies in children infected with HIV or with underlying medical conditions including neoplastic disease, the safety profile of JE-MB was similar to that in healthy children (208-210). A reduced immune response was seen in HIV-infected children. However, most children with immune recovery after highly active antiretroviral therapy developed a protective antibody response (209,210).
# Postlicensure Surveillance for Vaccine Adverse Events

JE-VC is a promising JE vaccine for travelers given its favorable immunogenicity and reactogenicity profile after a 2-dose primary series. In addition, because JE-VC does not contain gelatin or murine proteins, it might be associated with fewer hypersensitivity or neurologic adverse events than the mouse brain-derived vaccine. However, the actual cause of these reactions following mouse brain-derived vaccine is unknown. Because JE-VC has been studied in <5,000 recipients, the possibility of these or other rare adverse events cannot be excluded. Postlicensure studies and surveillance data from the United States, Europe, and Australia will be used to evaluate the safety profile of JE-VC in larger populations.
# Reporting of Vaccine Adverse Events
As with any newly licensed vaccine, surveillance for rare adverse events associated with administration of JE vaccine is important for assessing its safety in widespread use. Even if a causal relation to vaccination is not certain, all clinically significant adverse events should be reported to the Vaccine Adverse Event Reporting System (VAERS) at http://vaers.hhs.gov or by telephone at 800-822-7967.
"id": "5f70b62c23cad26e67ea16615647de5a6318ff95",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Laboratory Testing for the Diagnosis of HIV Infection: Updated Recommendations
# A. Executive Summary
This document updates recommendations for HIV testing by laboratories in the United States and offers approaches for reporting test results to persons ordering HIV tests and to public health authorities. The recommended algorithm is a sequence of tests used in combination to improve the accuracy of the laboratory diagnosis of HIV based on testing of serum or plasma specimens.
The Centers for Disease Control and Prevention (CDC) previously published guidelines for the serodiagnosis of HIV Type 1 infections in 1989, guidelines for testing for antibodies to HIV Type 2 in 1992, and protocols for confirmation of reactive rapid antibody test results in 2004. These previous guidelines employed only tests for HIV antibodies. The updated recommendations also include tests for HIV antigens and HIV nucleic acid because studies from populations at high risk for HIV demonstrate that antibody testing alone might miss a considerable percentage of HIV infections detectable by virologic tests. CDC and the Association of Public Health Laboratories (APHL) have issued these recommendations based on HIV tests approved by the Food and Drug Administration (FDA) as of December 2012 and on scientific evidence, laboratory experience, and expert opinion collected from 2007 through December 2013. These recommendations do not include the rapid HIV-1/HIV-2 antigen/antibody combination test approved by the FDA in August 2013 (for which evidence of performance in the algorithm was insufficient) or HIV-2 nucleic acid tests (NATs), which lack FDA approval.
These updated recommendations for HIV testing are necessary because of
- FDA approval of improved HIV assays that allow detection of HIV sooner after infection than previous immunoassays;
- evidence that relying on Western blot or indirect immunofluorescence assay (IFA) for confirmation of reactive initial immunoassay results can produce false-negative or indeterminate results early in the course of HIV infection;
- recognition that risk of HIV transmission from persons with acute and early infection is much higher than that from persons with established infection;
- recent indications for the clinical benefits from antiretroviral treatment (ART) of all persons with HIV infection, including those with acute infection; and
- demonstration that the majority of HIV-2 infections detected by available HIV antibody immunoassays are misclassified as HIV-1 by the HIV-1 Western blot.

This report provides recommendations to laboratory personnel on the use of FDA-approved assays for the diagnosis of HIV infection in adults and children >24 months of age (Box 1). In brief, testing begins with a combination immunoassay that detects HIV-1 and HIV-2 antibodies and HIV-1 p24 antigen. All specimens reactive on this initial assay undergo supplemental testing with an immunoassay that differentiates HIV-1 from HIV-2 antibodies. Specimens that are reactive on the initial immunoassay and nonreactive or indeterminate on the antibody differentiation assay proceed to HIV-1 nucleic acid testing for resolution. The results of this algorithm may be used to identify persons likely to benefit from treatment, to reassure persons who are uninfected, and for reporting evidence of HIV infection to public health authorities.
The recommended algorithm has several advantages over previous recommendations, including
- more accurate laboratory diagnosis of acute HIV-1 infection,
- equally accurate laboratory diagnosis of established HIV-1 infection,
- more accurate laboratory diagnosis of HIV-2 infection,
- fewer indeterminate results, and
- faster turnaround time for most test results.
The HIV-1 Western blot and HIV-1 IFA, previously recommended to make a laboratory diagnosis of HIV-1 infection, are no longer part of the recommended algorithm. Positive results from the recommended algorithm indicate the need for HIV medical care and an initial evaluation that includes additional laboratory tests (such as HIV-1 viral load, CD4+ T-lymphocyte determination, and an antiretroviral resistance assay) to confirm the presence of HIV-1 infection, to stage HIV disease, and to assist in the selection of an initial antiretroviral drug regimen. 23 Because no diagnostic test or algorithm can be completely accurate in all cases of HIV infection, inconsistent or conflicting test results obtained during the clinical evaluation may warrant additional testing of follow-up specimens.
Anticipating continued improvements in laboratory diagnostic techniques, CDC and APHL will monitor the introduction and FDA approval of diagnostic assays for HIV infection and update these recommendations when necessary. CDC and APHL will continue to monitor the performance of the laboratory testing algorithm and review the performance of the recommended algorithm at least every five years.
# Box 1. Recommended Laboratory HIV Testing Algorithm for Serum or Plasma Specimens
1. Laboratories should conduct initial testing for HIV with an FDA-approved antigen/antibody combination immunoassay a that detects HIV-1 and HIV-2 antibodies and HIV-1 p24 antigen to screen for established infection with HIV-1 or HIV-2 and for acute HIV-1 infection. No further testing is required for specimens that are nonreactive on the initial immunoassay.
2. Specimens with a reactive antigen/antibody combination immunoassay result (or repeatedly reactive, if repeat testing is recommended by the manufacturer or required by regulatory authorities) should be tested with an FDA-approved antibody immunoassay that differentiates HIV-1 antibodies from HIV-2 antibodies. Reactive results on the initial antigen/antibody combination immunoassay and the HIV-1/HIV-2 antibody differentiation immunoassay should be interpreted as positive for HIV-1 antibodies, HIV-2 antibodies, or HIV antibodies, undifferentiated.
3. Specimens that are reactive on the initial antigen/antibody combination immunoassay and nonreactive or indeterminate on the HIV-1/HIV-2 antibody differentiation immunoassay should be tested with an FDA-approved HIV-1 nucleic acid test (NAT).
- A reactive HIV-1 NAT result and nonreactive HIV-1/HIV-2 antibody differentiation immunoassay result indicates laboratory evidence for acute HIV-1 infection.
- A reactive HIV-1 NAT result and indeterminate HIV-1/HIV-2 antibody differentiation immunoassay result indicates the presence of HIV-1 infection confirmed by HIV-1 NAT.
- A negative HIV-1 NAT result and nonreactive or indeterminate HIV-1/HIV-2 antibody differentiation immunoassay result indicates a false-positive result on the initial immunoassay. b

4. Laboratories should use this same testing algorithm, beginning with an antigen/antibody combination immunoassay, with serum or plasma specimens submitted for testing after a reactive (preliminary positive) result from any rapid HIV test.
a Exception: As of April 2014, data are insufficient to recommend use of the FDA-approved single-use rapid HIV-1/HIV-2 antigen/antibody combination immunoassay as the initial assay in the algorithm. b See Section M, Additional Considerations, for a discussion of issues related to acute HIV-2 infection.
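The decision logic of the algorithm in Box 1 can be expressed compactly in code. The sketch below mirrors steps 1-3 for a single specimen; the result strings and function name are illustrative, not part of the recommendations:

```python
def interpret_hiv_algorithm(ag_ab_combo, differentiation=None, hiv1_nat=None):
    """Interpret serum/plasma results per the recommended testing algorithm.

    ag_ab_combo:     'reactive' or 'nonreactive' (initial Ag/Ab combination assay)
    differentiation: 'HIV-1', 'HIV-2', 'undifferentiated', 'nonreactive', or
                     'indeterminate' (HIV-1/HIV-2 antibody differentiation assay)
    hiv1_nat:        'reactive' or 'negative' (HIV-1 nucleic acid test)
    """
    if ag_ab_combo == 'nonreactive':
        return 'Negative; no further testing required'
    if differentiation in ('HIV-1', 'HIV-2', 'undifferentiated'):
        return f'Positive for {differentiation} antibodies'
    # Reactive initial assay with nonreactive/indeterminate differentiation -> NAT
    if hiv1_nat == 'reactive':
        return ('Laboratory evidence of acute HIV-1 infection'
                if differentiation == 'nonreactive'
                else 'HIV-1 infection confirmed by NAT')
    return 'False-positive result on the initial immunoassay'

print(interpret_hiv_algorithm('reactive', 'nonreactive', 'reactive'))
# Laboratory evidence of acute HIV-1 infection
```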
# B. Introduction
As of 2010, an estimated 1.1 million persons in the United States were living with human immunodeficiency virus (HIV) infection, of whom an estimated 181,000 were unaware of their infection. 30 Approximately 49,000 new HIV diagnoses are reported to CDC each year, and the estimated number of new infections has remained stable at approximately 50,000 annually from 2008 to 2010. 31,32 As of 2009, an estimated 83 million adults aged 18 to 64 years reported they had been tested for HIV. 33 Accurate laboratory diagnosis of HIV is essential to identify persons who could benefit from treatment, to reassure persons who are uninfected, and to reduce HIV transmission. 34
# C. Audience
These recommendations describe the types and sequence of laboratory assays used to make the laboratory diagnosis of acute HIV-1 infection, established HIV-1 infection, and HIV-2 infection. They are intended for use by laboratories authorized to conduct testing on serum or plasma specimens with assays categorized as moderate or high complexity under the Clinical Laboratory Improvement Amendments of 1988 (CLIA). 35
# D. Scope
These updated recommendations are intended only for testing of serum or plasma specimens from adults and children aged 2 years or older. Because maternal antibodies against HIV might be present in uninfected infants born to HIV-infected mothers, 36,37 specific recommendations to establish the presence or absence of the diagnosis of HIV infection in infants are described elsewhere. 38 These updated recommendations do not address methods or strategies for screening blood or organ donors for HIV infection; the Food and Drug Administration (FDA) and U.S. Public Health Service (USPHS) have issued separate guidance and recommendations on this topic.
# E. Background and Rationale
Accurate laboratory diagnosis of HIV infection relies on testing algorithms that maximize overall sensitivity and specificity by employing a sequence of tests in combination and applying decision rules for resolving discordant test results. 42 Since 1989, the diagnostic algorithm for HIV testing in the United States recommended by CDC and the Association of Public Health Laboratories (APHL) initiated testing with a sensitive HIV-1 antibody immunoassay. Specimens with repeatedly reactive initial immunoassays were then tested with a more specific HIV-1 antibody test, either the HIV-1 Western blot or HIV-1 indirect immunofluorescence assay (IFA), to validate those results. 1 In 1992, CDC recommended specific testing for both HIV-1 and HIV-2 antibodies if demographic or behavioral information suggested that HIV-2 infection might be present, if there was clinical evidence or suspicion of HIV disease in the absence of a positive test for antibodies to HIV-1, and in cases in which the HIV-1 Western blot exhibited an unusual indeterminate pattern. At that time, CDC also recommended that laboratories initiating testing with an HIV-1/HIV-2 antibody immunoassay conduct additional, more specific tests to detect the presence of antibodies against HIV-2 if the HIV-1 Western blot was negative or indeterminate. 2 In 2004, CDC recommended confirmation of all reactive rapid HIV test results with either HIV-1 Western blot or HIV-1 IFA, irrespective of results of intermediate immunoassays that may have been conducted. 3 Since those recommendations were issued, improved immunoassays (Box 2), an HIV-1 NAT, and a differentiation immunoassay that distinguishes HIV-1 from HIV-2 antibodies received FDA approval for use in the diagnosis of HIV infections. 43,44 These developments prompted reevaluation of recommendations for HIV diagnostic testing.
# Box 2. Evolution of HIV Immunoassay Technology
HIV immunoassays based on different design principles are generally grouped into "generations":
- 1st generation: All antigens used to bind HIV antibodies are from a lysate of HIV-1 viruses grown in cell culture. An indirect immunoassay format employs labeled antihuman IgG for detection of IgG antibodies. Significant specimen dilution is required to overcome cross-reactivity with cellular protein contaminants. Examples commercially available in the United States as of May 2014 include the HIV-1 Western blot and the HIV-1 IFA.
- 2nd generation: Synthetic peptide or recombinant protein antigens alone or combined with viral lysates are used to bind HIV antibodies. An indirect immunoassay format employs labeled antihuman IgG or protein A (which binds to IgG with high affinity 45) for detection of IgG antibodies. Design of the specific antigenic epitopes improves sensitivity for HIV-1 group O and HIV-2; eliminating cellular antigens that contaminate viral lysates improves specificity by eliminating cross-reactivity with cellular proteins. Examples commercially available in the United States as of May 2014 include one HIV-1 enzyme immunoassay and six rapid HIV antibody tests.
- 3rd generation: Synthetic peptide or recombinant protein antigens are used to bind HIV antibodies in an immunometric antigen sandwich format (HIV antibodies in the specimen bind to HIV antigens on the assay substrate and to antigens conjugated to indicator molecules). This allows detection of IgM and IgG antibodies. Lower sample dilutions and the ability to detect IgM antibodies (which are expressed before IgG antibodies) increase sensitivity during early seroconversion. Examples commercially available in the United States as of May 2014 include one HIV-1/HIV-2 enzyme immunoassay and two HIV-1/HIV-2 chemiluminescent immunoassays.
- 4th generation: Synthetic peptide or recombinant protein antigens are used in the same antigen sandwich format as 3rd generation assays to detect IgM and IgG antibodies, and monoclonal antibodies are also included to detect p24 antigen. Inclusion of p24 antigen capture allows detection of HIV-1 infection before seroconversion. 10,12,46,47 These assays (termed "combo assays") usually do not distinguish antibody reactivity from antigen reactivity. Examples commercially available in the United States as of May 2014 include one HIV-1/HIV-2 enzyme immunoassay, one HIV-1/HIV-2 chemiluminescent immunoassay, and one HIV-1/HIV-2 rapid test that uses separate indicators for antigen and antibody reactivity.
# Laboratory markers of HIV infection and their detection by diagnostic tests
Analyses of specimens from seroconversion panels have established the dynamics of HIV-1 viremia after infection and the sequential appearance of different laboratory markers. The approximate times at which different markers appear, estimated from different data sources, are outlined schematically in Figure 1. Immediately after HIV infection, low levels of HIV-1 RNA (ribonucleic acid) might be present intermittently, but no viral markers are consistently detectable in plasma. 51 Approximately 10 days after infection, HIV-1 RNA becomes detectable by NAT in plasma, and quantities increase to very high levels. Next, HIV-1 p24 antigen is expressed, and quantities rise to levels that can be detected by 4th generation immunoassays within 4 to 10 days after the initial detection of HIV-1 RNA. 46,48 However, p24 antigen detection is transient because, as antibodies begin to develop, they bind to the p24 antigen and form immune complexes that interfere with p24 assay detection unless the assay includes steps to disrupt the antigen-antibody complexes. Next, immunoglobulin (Ig) M antibodies are expressed, which can be detected by 3rd and 4th generation immunoassays 3 to 5 days after p24 antigen is first detectable, 10 to 13 days after the appearance of viral RNA. 46,48,49,61 Finally, IgG antibodies emerge and persist throughout the course of HIV infection. First and second generation immunoassays, which are designed to detect only IgG antibodies, exhibit considerable variability in their sensitivity during early infection, becoming reactive 18 to 38 days or more after the initial detection of viral RNA. 46,48,49,62,63 The pattern of emergence of laboratory markers is highly consistent and allows classification of HIV infection into distinct laboratory stages 48,64 :
- The eclipse period is the initial interval after infection with HIV when no laboratory markers are consistently detectable.
- The seroconversion window period is the interval between infection with HIV and the first detection of antibodies. Its duration depends on the design of the antibody immunoassay and the sensitivity of the immunoassay during seroconversion.
- Acute HIV infection is the interval between the appearance of detectable HIV RNA and the first detection of antibodies. Its duration also depends on the design of the antibody immunoassay and the sensitivity of the immunoassay during seroconversion.
- Established HIV infection is the stage characterized by a fully developed IgG antibody response sufficient to meet the interpretive criteria for a positive Western blot or IFA. 1,61,65 (A sketch of this staging logic follows this list.)
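For illustration, the staging logic above can be summarized as a decision rule over which markers are detectable in a single specimen. The following is a minimal sketch, not part of the recommendations: the marker flags and stage labels are assumptions chosen for clarity, a single specimen cannot identify the seroconversion window period (which spans the eclipse and acute stages), and actual classification depends on assay-specific interpretive criteria.

```python
# Illustrative sketch of the staging logic described above; not part of the
# recommendations. Marker flags and labels are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class MarkerPanel:
    rna_detected: bool         # HIV-1 RNA by NAT
    p24_detected: bool         # HIV-1 p24 antigen (4th generation assays)
    antibody_detected: bool    # IgM/IgG antibodies (3rd/4th generation assays)
    igg_fully_developed: bool  # meets positive Western blot/IFA criteria

def laboratory_stage(panel: MarkerPanel) -> str:
    if not (panel.rna_detected or panel.p24_detected or panel.antibody_detected):
        return "eclipse period"            # no markers consistently detectable
    if not panel.antibody_detected:
        return "acute HIV infection"       # RNA and/or p24 without antibodies
    if not panel.igg_fully_developed:
        return "seroconversion in progress"
    return "established HIV infection"     # fully developed IgG response

print(laboratory_stage(MarkerPanel(True, True, False, False)))  # acute HIV infection
```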
# Figure 1. Sequence of appearance of laboratory markers for HIV-1 infection
Note. Units for the vertical axis are not noted because their magnitude differs for RNA, p24 antigen, and antibody. Modified from MP Busch, GA Satten (1997) 50 with updated data from Fiebig (2003), 48 Owen (2008), 49 and Masciotra (2011, 2013). 46,66
# Need for updated recommendations for the laboratory diagnosis of HIV-1 and HIV-2 infection
The previous algorithm, consisting of a repeatedly reactive immunoassay for HIV antibodies and positive HIV-1 Western blot or HIV-1 IFA, has been the gold standard for laboratory diagnosis of HIV-1 infection in the United States since 1989. False-positive results from this combination are rare. 67 HIV-2 infection remains uncommon in the United States, and no definitive criteria have been recommended for HIV-2 diagnosis. 27 Developments and observations in five areas led CDC to update recommendations for the laboratory diagnosis of HIV-1 and HIV-2 infections:
# The previous testing algorithm for HIV-1 fails to identify acute HIV-1 infections
Since 1999, blood screening centers in the United States have used pooled HIV-1 NAT to identify acute HIV infection in donors who had nonreactive 3rd generation immunoassay results. 68 (To reduce costs, multiple specimens are pooled for screening with a single NAT; specimens from reactive pools undergo individual NAT to identify the specimen with HIV-1 RNA.) Among persons seeking HIV testing, programs that used pooled NAT after a nonreactive initial antibody immunoassay result have demonstrated detectable HIV-1 RNA in 2 per 10,000 to 2 per 1,000 persons, depending on the population tested and the generation of the initial immunoassay. 8,16,18 Specimens with nonreactive antibody immunoassay results and reactive NAT results that represent acute HIV infection have been described in 4% to 32% of all new HIV diagnoses at the time of testing in some populations, especially men who have sex with men. 4,6,8,10 Retrospective testing of specimens from high-risk persons demonstrated that 3rd generation immunoassays were reactive in 20% to 37% of specimens that were HIV-1 Western blot-negative but NAT-reactive, and that 4th generation immunoassays were reactive in 62% to 83% of specimens that were NAT-reactive but nonreactive with earlier generation immunoassays.
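The pooled-NAT strategy described above reduces the number of NAT runs needed when prevalence is low. The sketch below is illustrative only: `pooled_nat_screen` and `nat_is_reactive` are hypothetical stand-ins for laboratory procedures, and real programs use validated pool sizes and master/sub-pool schemes.

```python
# Illustrative sketch of pooled NAT screening: pools are tested first, and only
# reactive pools are resolved by individual NAT. `nat_is_reactive` is a
# hypothetical stand-in for an actual assay result.

def pooled_nat_screen(specimens, pool_size, nat_is_reactive):
    """Return (indices of RNA-positive specimens, number of NAT runs used)."""
    positives, runs = [], 0
    for start in range(0, len(specimens), pool_size):
        pool = specimens[start:start + pool_size]
        runs += 1                           # one NAT on the pooled aliquot
        if nat_is_reactive(pool):           # reactive pool: test each specimen
            for offset, specimen in enumerate(pool):
                runs += 1
                if nat_is_reactive([specimen]):
                    positives.append(start + offset)
    return positives, runs

# Toy example: 1 RNA-positive specimen among 100, screened in pools of 10.
specimens = [False] * 100
specimens[37] = True
found, used = pooled_nat_screen(specimens, 10, any)
print(found, used)  # [37] 20 -> 20 NAT runs instead of 100 individual tests
```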
# Assays that detect HIV-1 infection earlier are now widely available
New generations of immunoassays with improved sensitivity for detecting early HIV-1 infection can narrow the interval between the time of infection and initial immunoassay reactivity (Box 2, Figure 1). In 2006, 74% of U.S. public health laboratories used a 1st or 2nd generation immunoassay as the initial test in the previous algorithm. 74 In 2012, 92% of public health laboratories used a 3rd or 4th generation immunoassay as the initial test in the previous algorithm. 75 However, these immunoassays become reactive days to weeks before the HIV-1 Western blot becomes positive. 46,49 Using the HIV-1 Western blot for confirmation of these immunoassays can produce false-negative results during seroconversion. 10,76
# The risk of HIV-1 transmission from persons with acute and early infection is much higher than that from persons with established infection
Extremely high levels of infectious virus become detectable in serum and genital secretions during acute HIV-1 infection and persist for 10-12 weeks. Models based on data from cohort studies suggest that the rate of sexual transmission during acute infection is 26 times as high as that during established HIV-1 infection. 20 Acute HIV-1 infection, despite its short duration, can account for 10%-50% of all new HIV-1 transmissions, especially in persons who have multiple concurrent sex partners or high rates of partner change. 19,21,22,80
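A back-of-envelope calculation shows why so short a phase can account for so large a share of transmissions. The 26-fold rate ratio and the roughly 12-week acute phase come from the text above; the 8-year duration assumed for established infection is an illustrative assumption, not a figure from the cited studies.

```python
# Back-of-envelope illustration only. The 26-fold rate ratio and ~12-week acute
# phase are from the text above; the 8-year duration of established infection
# is an assumed, illustrative value.
acute_weeks = 12
established_weeks = 8 * 52
rate_ratio = 26  # per-week transmission rate, acute vs. established infection

acute_weight = rate_ratio * acute_weeks        # 312 "transmission-weeks"
established_weight = 1 * established_weeks     # 416 "transmission-weeks"
fraction_acute = acute_weight / (acute_weight + established_weight)
print(f"{fraction_acute:.0%}")  # ~43%, within the 10%-50% range cited above
```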
# Initiation of antiretroviral therapy (ART) during the early stage of HIV-1 infection can benefit patients and reduce HIV transmission
Treatment of acute and early HIV-1 infection with combination ART improves laboratory markers of disease progression. 81,82 Limited data also suggest that treatment of acute HIV-1 infection might decrease the severity of acute disease, lower the viral set point, slow disease progression rates in the event therapy is stopped, reduce the size of the viral reservoir, and decrease the rate of viral mutation by suppressing viral replication and preserving immune function. 83 Because very high levels of virus in blood and genital secretions increase infectiousness during and immediately after acute HIV infection, initiating treatment during acute infection can also reduce the risk of HIV-1 transmission substantially. 23,77,84 In March 2012, the U.S. Department of Health and Human Services Panel on Antiretroviral Guidelines for Adults and Adolescents recommended initiation of ART for all persons with HIV-1 infection to reduce the risk of disease progression and to prevent HIV transmission. 23
# The use of HIV-1 Western blot in the previous algorithm misclassifies the majority of HIV-2 infections
Correct identification of HIV-2 infections is challenging, but accurate diagnosis of HIV-2 is clinically important because some antiretroviral agents effective against HIV-1 (including nonnucleoside reverse transcriptase inhibitors and some protease inhibitors) are not effective against HIV-2. 85,86 Considerable serologic cross-reaction occurs between HIV-1 and HIV-2, but screening exclusively with tests for HIV-1 antibodies failed to detect 15% to 53% of HIV-2 infections. 49 As of May 2014, all FDA-approved 3rd and 4th generation immunoassays incorporate specific antigens to detect antibodies directed against both HIV-1 and HIV-2. 87 When HIV-1/HIV-2 immunoassays are repeatedly reactive, CDC's previous recommendations advised specific testing for HIV-2 for specimens with negative or indeterminate HIV-1 Western blot results. 2 However, studies published in 2010 and 2011 showed that the HIV-1 Western blot was interpreted as positive for HIV-1 in 46% to 85% of specimens from persons found to be infected with HIV-2, resulting in incorrect or delayed diagnosis. The rapid immunoassay approved by the FDA in 2013 for use in algorithms to differentiate HIV-1 from HIV-2 antibodies correctly classifies the majority of both HIV-1 and HIV-2 infections in antibody-positive specimens, including the subset misclassified as HIV-1 by the HIV-1 Western blot. 28,29,47,76,88
# F. Process for Developing Updated Recommendations
These updated recommendations are the product of a lengthy, multistep process. In 2004, CDC and APHL established an HIV Steering Committee --composed of CDC and public health laboratory scientists with expertise in HIV diagnostics--to monitor HIV testing practices, investigate reports of problems with the performance or availability of HIV testing reagents, and assess potential implications of new assays as they received FDA approval. When the shortcomings of previous HIV testing recommendations became evident, 5,9,16,18 the HIV Steering Committee organized a working group in August 2006 with representatives from CDC, APHL, FDA, the National Alliance of State and Territorial AIDS Directors (NASTAD), HIV testing program managers, and scientists from academic, hospital, and commercial laboratories and blood donor screening programs who had expertise in HIV, immunology, laboratory medicine, and evaluation of diagnostic tests (Appendix 1). The Steering Committee asked the working group to examine the evidence for the performance of HIV assays and the previous algorithm for laboratory HIV diagnosis and to propose new algorithms for HIV diagnosis that maximized accuracy, relied on FDA-approved tests, and considered testing costs and cost-effectiveness. A subset of this working group served as the writing group that drafted these recommendations (Appendix 1).
The working group sought assistance from CDC laboratory scientists, who evaluated the performance of available FDA-approved assays on panels of plasma specimens from HIV-infected and uninfected persons and on sequential specimens from persons early in seroconversion; analyzed test combinations in two-test and three-test algorithms; and compared these results to results of the 1989 algorithm for HIV-1 diagnosis. The working group conducted a nonsystematic literature review on the performance characteristics of HIV tests and their use in combinations for HIV-1 diagnosis and examined unpublished data generated by studies at CDC and other public health laboratories. Based on the information from the literature review, unpublished data, and expert opinion, the working group proposed several candidate HIV diagnostic algorithms, disseminated descriptions of the candidate algorithms, and solicited data evaluating the algorithms in the call for abstracts for the 2007 HIV Diagnostics Conference. 89 New research findings were presented and discussed at the conference, and the working group obtained oral comments during the closing session of the conference about the feasibility, benefits, harms, and costs of new testing strategies from conference attendees, who included managers and staff members from public health department HIV testing programs and scientists from clinical, commercial, and public health laboratories, blood donation programs, and manufacturers of HIV tests and testing equipment.
Based on the literature review, expert opinion, and new research findings presented at the 2007 HIV Diagnostics Conference, 89 including CDC's analysis of the relative sensitivity during seroconversion of FDA-approved immunoassays compared with the HIV-1 Western blot, 49 the working group developed a synopsis, HIV Testing Algorithms: A Status Report, 90 issued in April 2009, that described the candidate algorithms and their limitations. The report outlined the key elements of each candidate algorithm, available performance data, potential benefits and drawbacks, and additional data needed to substantiate and refine the algorithm. In that report, the working group acknowledged that none of the candidate algorithms offered a distinct advantage over previous recommendations. For example, performing NAT after all nonreactive antibody test results could detect acute HIV-1 infection, but its routine use would be impractical and costly. Additionally, most algorithms still included the HIV-1 Western blot and could not consistently detect acute HIV-1 infections or HIV-2 infections without the collection of demographic, behavioral, or clinical information that might suggest the need for additional testing. Moreover, new tests such as 4th generation assays were nearing commercialization, and their routine use could render the candidate algorithms obsolete.
In July 2009, the HIV Steering Committee solicited additional data on the performance of candidate algorithms and 4th generation immunoassays in the call for abstracts for the 2010 HIV Diagnostics Conference. 91 At the March 2010 conference, representatives from the American Society for Microbiology, the College of American Pathologists, the Department of Defense, FDA, NASTAD, the Pan American Society for Clinical Virology, public health department HIV testing programs, and scientists from clinical, commercial, and public health laboratories, blood donation programs, and the diagnostics industry reviewed and discussed the research findings and their implications for new testing algorithms. (Manuscripts from conference presentations were submitted for peer review and published in the December 2011 supplement to the Journal of Clinical Virology. 92) Based on expert opinion, new data presented at the conference (including evidence for misclassification of HIV-2 infections by the HIV-1 Western blot), and anticipation of commercialization of 4th generation immunoassays in the United States, CDC and APHL laboratory experts proposed a new diagnostic algorithm. The algorithm included 4th generation HIV-1/HIV-2 antigen/antibody combination immunoassays (approved by FDA in 2010 and 2011) and an HIV-1/HIV-2 antibody differentiation assay. The proposed algorithm was intended to improve the accurate diagnosis of acute HIV-1 infection and HIV-2 infection without relying on clinical, behavioral, or demographic information, which is not routinely available to laboratories. 93 To validate the proposed algorithm for supplemental testing, CDC and public health laboratories retrospectively applied available existing test results in the sequence specified by the proposed algorithm 29,70,76,94 and evaluated the 4th generation immunoassays and proposed algorithm on the same specimen collections that had been tested previously. 46,66 The HIV Steering Committee then used the call for abstracts for the 2012 HIV Diagnostics Conference to solicit additional data on the performance of new tests and the proposed algorithm. Three CDC writing group members (Branson, Owen, Wesolowski) developed a figure and draft statements for consideration during the conference describing the proposed algorithm and possible variations if different assays were substituted for those in the proposed algorithm. 95 CDC writing group members solicited oral comments on the proposed algorithm from stakeholders who attended the December 2012 HIV Diagnostics Conference, representing commercial and public health laboratories that conduct HIV testing, HIV testing programs, manufacturers of HIV tests and testing equipment, providers of HIV clinical and preventive services, and persons with HIV. 96 Their input on the proposed algorithm was informed by conference presentations that compared the performance, cost, and cost-effectiveness of the proposed algorithm with the previous algorithm and alternatives. Manuscripts from conference presentations were submitted for peer review and published in the December 2013 supplement to the Journal of Clinical Virology. 100 CDC writing group members also solicited oral comments on the proposed algorithm from other stakeholders at meetings of the CDC-HRSA Advisory Committee, American Association for Clinical Chemistry, Association of Medical Laboratory Immunologists, College of American Pathologists, and the Pan American Society for Clinical Virology.
After stakeholders expressed support for the proposed recommendations, the writing group finalized the recommendations. The draft recommendations and their underlying evidence were then reviewed by three independent HIV testing experts not involved in development of the recommendations (in accordance with Office of Management and Budget Regulations for peer review of influential scientific information from the federal government 101 ) and by officials at CDC, the FDA, and the Department of Health and Human Services.
# G. Literature Reviews and Key Questions
The CDC/APHL working group members conducted a nonsystematic review of the literature, unpublished data, meeting abstracts and presentations, and manufacturers' package inserts in 2009 to assess the performance of FDA-approved HIV diagnostic assays and their use in combination for the laboratory diagnosis of acute and established HIV-1 infection. Three CDC writing group members (Branson, Owen, Wesolowski) defined the principal outcome measure used to evaluate the testing algorithms:

- Accuracy of algorithms: the number or percentage of all specimens from a given algorithm that, based on all available test results and follow-up information, yielded a correct laboratory diagnosis of HIV-1 infection, HIV-2 infection, or the absence of HIV infection. True-positive and true-negative results were classified as correct laboratory diagnoses. False-negative, false-positive, and indeterminate results, and HIV-2 infections misclassified as HIV-1, were classified as incorrect laboratory diagnoses. (A worked example of this measure follows.)
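As a worked example of the accuracy measure defined above, consider the following sketch. The counts are invented for illustration and are not data from the evidence review; indeterminate results and HIV-2 infections misclassified as HIV-1 count as incorrect, per the definition.

```python
# Worked example of the accuracy outcome measure defined above. Counts are
# invented for illustration; indeterminate results and HIV-2 infections
# misclassified as HIV-1 count as incorrect, per the definition in the text.
CORRECT = {"true-positive", "true-negative"}

def algorithm_accuracy(classifications):
    correct = sum(1 for c in classifications if c in CORRECT)
    return correct / len(classifications)

classifications = (["true-positive"] * 95 + ["true-negative"] * 900 +
                   ["false-positive"] * 2 + ["indeterminate"] * 2 +
                   ["HIV-2 misclassified as HIV-1"] * 1)
print(f"{algorithm_accuracy(classifications):.3f}")  # 0.995
```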
Appendix 2 provides details of the search strategy and a detailed summary and tables of evidence for the published studies relevant to the key questions.
# H. Recommendations for Laboratory Testing for the Diagnosis of HIV Infection
CDC and APHL recommend that laboratories conduct the following sequence of assays with serum or plasma specimens for the accurate diagnosis of HIV infection. Each recommendation lists the rationale for the recommendation and refers to additional evidence and limitations in the corresponding summary and tables of evidence in Appendix 2. These updated recommendations for testing of serum or plasma specimens supersede the 1989 recommendations for interpretation and use of the HIV-1 Western blot in the serologic diagnosis of HIV Type 1 infections, 1 the 1992 recommendations for testing for antibodies to HIV Type 2 in the United States, 2 and the 2004 recommended protocol for confirmation of rapid HIV tests. 3 Because none of the assays in the recommended algorithm are FDA-approved for use with oral fluid or dried blood spot specimens, these updated recommendations do not supersede previous recommendations for testing of dried blood spots or oral fluid for HIV-1 using the FDA-approved immunoassay and HIV-1 Western blot for these specimen types. 1

1. Laboratories should conduct initial testing for HIV with an FDA-approved antigen/antibody combination (4th generation) immunoassay that detects HIV-1 and HIV-2 antibodies and HIV-1 p24 antigen to screen for established infection with HIV-1 or HIV-2 and for acute HIV-1 infection. No further testing is required for specimens that are nonreactive on the initial immunoassay.
Rationale: Initial testing with a 4th generation antigen/antibody combination immunoassay detects more acute HIV-1 infections than initial testing with a 3rd generation antibody immunoassay and identifies comparable numbers of established HIV-1 and HIV-2 infections, with comparable specificity.

2. Specimens with a reactive antigen/antibody combination immunoassay result (or repeatedly reactive, if repeat testing is recommended by the manufacturer or required by regulatory authorities) should be tested with an FDA-approved antibody immunoassay that differentiates HIV-1 antibodies from HIV-2 antibodies. Reactive results on the initial antigen/antibody combination immunoassay and the HIV-1/HIV-2 antibody differentiation immunoassay should be interpreted as positive for HIV-1 antibodies, HIV-2 antibodies, or HIV-1 and HIV-2 antibodies, undifferentiated.
Evidence basis (Appendix 2).
Rationale: Use of the HIV-1/HIV-2 antibody differentiation assay after a reactive initial 4th generation HIV-1/HIV-2 antibody immunoassay detects HIV-1 antibodies earlier than the HIV-1 Western blot, reduces indeterminate results, and identifies HIV-2 infections. Turnaround time for test results is shorter and the cost is lower for the HIV-1/HIV-2 antibody differentiation assay compared with the HIV-1 Western blot. Available evidence is insufficient to recommend specific additional testing, without clinical follow-up, for specimens that are dually reactive for HIV-1 and HIV-2 antibodies on the differentiation immunoassay (see Section J, Limitations of the Recommended Laboratory Testing Algorithm).

3. Specimens that are reactive on the initial antigen/antibody combination immunoassay and nonreactive or indeterminate on the HIV-1/HIV-2 antibody differentiation immunoassay should be tested with an FDA-approved HIV-1 NAT. (The decision sequence of recommendations 1 through 3 is sketched in code at the end of this section.)
Evidence basis (Appendix 2).
- A reactive HIV-1 NAT result and nonreactive HIV-1/HIV-2 antibody differentiation immunoassay result indicates laboratory evidence for acute HIV-1 infection.
- A reactive HIV-1 NAT result and indeterminate HIV-1/HIV-2 antibody differentiation immunoassay result indicates the presence of HIV-1 antibodies confirmed by HIV-1 NAT.
- A negative HIV-1 NAT result and nonreactive or indeterminate HIV-1/HIV-2 antibody differentiation assay result indicates a false-positive result on the initial immunoassay. †

Rationale: HIV-1 NAT results can distinguish acute HIV-1 infection from false-positive initial immunoassay results in specimens with a reactive antigen/antibody immunoassay and a nonreactive HIV-1/HIV-2 antibody differentiation assay result.
HIV-1 NAT does not detect HIV-2, and no HIV-2 NAT is FDA-approved. Available evidence is insufficient to recommend testing for acute HIV-2 infection after a nonreactive HIV-1 NAT result (see Section K, Limitations of the Evidence Supporting These Recommendations).

4. Laboratories should use this same testing algorithm, beginning with a laboratory-based antigen/antibody combination immunoassay, with serum or plasma specimens submitted for testing after a reactive (preliminary positive) result from any rapid HIV test.
Rationale: Previously, supplemental testing (HIV-1 Western blot or HIV-1 IFA) was recommended after a reactive rapid HIV test result regardless of the result of the initial laboratory immunoassay. This was based on observations of some false-negative results from earlier generations of immunoassays (no longer commercially available in the United States) that became reactive later during seroconversion than rapid HIV antibody tests. 3 With the recommended algorithm, the FDA-approved laboratory-based antigen/antibody combination immunoassays detect HIV infection earlier during seroconversion than any of the rapid HIV tests available in the United States as of May 2014, including the rapid HIV-1/HIV-2 antigen/antibody combination test. Therefore, no supplemental testing is required for specimens that are nonreactive on the initial immunoassay in the recommended algorithm.
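For illustration, the decision logic of recommendations 1 through 3 can be condensed as follows. This is a minimal sketch, not an implementation of the recommendations: the result values and function name are assumptions, and the abbreviated interpretations omit the full reporting language given above.

```python
# Condensed, illustrative sketch of recommendations 1-3. Result values and the
# function name are assumptions; interpretations are abbreviated.

def interpret(ag_ab_reactive, differentiation=None, nat_reactive=None):
    """
    ag_ab_reactive: result of the initial 4th generation antigen/antibody assay.
    differentiation: 'HIV-1', 'HIV-2', 'both', 'nonreactive', or 'indeterminate'
                     from the HIV-1/HIV-2 antibody differentiation immunoassay.
    nat_reactive: HIV-1 NAT result, needed only when differentiation is
                  nonreactive or indeterminate.
    """
    if not ag_ab_reactive:
        return "negative; no further testing required"        # step 1
    if differentiation in ("HIV-1", "HIV-2"):                  # step 2
        return f"positive for {differentiation} antibodies"
    if differentiation == "both":
        return "positive for HIV antibodies, undifferentiated"
    if nat_reactive and differentiation == "nonreactive":      # step 3
        return "laboratory evidence for acute HIV-1 infection"
    if nat_reactive and differentiation == "indeterminate":
        return "HIV-1 antibodies confirmed by HIV-1 NAT"
    return "false-positive initial immunoassay result"

print(interpret(True, "nonreactive", nat_reactive=True))
# laboratory evidence for acute HIV-1 infection
```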
# I. Alternative Testing Sequences When Tests in the Recommended Algorithm Cannot Be Used
During their review and comment on these recommendations, stakeholders described circumstances that might delay or prevent implementation of some of the assays in the recommended algorithm. Based on the evidence review and expert opinion from stakeholders and the working group, CDC members of the writing group identified testing sequences that might be used to improve the laboratory diagnosis of HIV infection if an alternative FDA-approved assay is substituted for one of the classes of assays specified in the recommended algorithm. Replacing a recommended assay has limitations, described below, that may reduce the accuracy of the testing algorithm.
- Use of a 3rd generation HIV-1/HIV-2 antibody immunoassay instead of a 4th generation antigen/antibody combination immunoassay as the initial test: perform subsequent testing as specified in the recommended algorithm.

Limitations: This alternative will miss some acute HIV-1 infections in antibody-negative persons that would be detected by 4th generation antigen/antibody combination immunoassays.

- Use of the HIV-1 Western blot or HIV-1 IFA as the second test in the algorithm instead of an HIV-1/HIV-2 antibody differentiation immunoassay: if test results are negative or indeterminate, perform HIV-1 NAT; if HIV-1 NAT is negative, perform an HIV-2 antibody immunoassay (a sketch of this sequence follows this list).

Limitations: This alternative might misclassify some HIV-2 infections as HIV-1, requires a larger number of tests, and increases turnaround time for test results.

Supporting evidence (Appendix 2): 1.a.3, 1.b.2, 2.c, 3.b, 3.c, 3.d, 4.a, 4.c, 5, 6

- Use of HIV-1 NAT as the second test instead of an HIV-1/HIV-2 antibody differentiation immunoassay: if the HIV-1 NAT result is negative, perform an HIV-1/HIV-2 antibody differentiation immunoassay or other FDA-approved HIV-1 supplemental antibody test. If the result of the HIV-1 supplemental antibody test is nonreactive or indeterminate, perform an HIV-2 antibody test.

Limitations: This alternative fails to distinguish acute HIV-1 infection from established HIV-1 infection, increases turnaround time for test results, and incurs additional costs.

Supporting evidence (Appendix 2): 1.a.2, 2.e, 3.b, 4.b, 5, 6
- Use of HIV-1 NAT (or pooled HIV-1 NAT) after a nonreactive 3rd or 4th generation immunoassay result: a reactive NAT result provides evidence of acute HIV-1 infection, but false-positive results occur. Follow-up testing to document seroconversion should be conducted if a laboratory HIV diagnosis is based on the result of HIV-1 NAT only.
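The Western blot alternative above adds an extra branch that the recommended algorithm avoids. A minimal sketch, with hypothetical result values and function name:

```python
# Illustrative sketch of the alternative that uses HIV-1 Western blot (or IFA)
# as the second test. Result values are hypothetical; note the extra HIV-2
# antibody step required when both the Western blot and HIV-1 NAT are negative.

def alternative_western_blot(initial_reactive, wb_result=None,
                             nat_reactive=None, hiv2_ab_reactive=None):
    if not initial_reactive:
        return "negative; no further testing required"
    if wb_result == "positive":
        return "positive for HIV-1 antibodies"
    # Western blot negative or indeterminate -> HIV-1 NAT
    if nat_reactive:
        return "HIV-1 infection (acute if the Western blot was negative)"
    # HIV-1 NAT negative -> HIV-2 antibody immunoassay
    if hiv2_ab_reactive:
        return "HIV-2 antibodies detected; supplemental HIV-2 testing indicated"
    return "no laboratory evidence of HIV infection"

print(alternative_western_blot(True, "indeterminate", nat_reactive=False,
                               hiv2_ab_reactive=False))
```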
# J. Limitations of the Recommended Laboratory Testing Algorithm
1. No diagnostic test or algorithm can be completely accurate in all cases of HIV infection. Rare instances have been reported of persons who remained persistently negative for antibodies despite detectable HIV RNA. 102 False-positive HIV test results have been attributed to specimen mix-up, mislabeling, and autoimmune disorders. Inconsistent or conflicting test results should be investigated with follow-up testing on a newly collected specimen.
2. A small percentage of specimens produce results that are undifferentiated (dually reactive for HIV-1 and HIV-2 antibodies) on the HIV-1/HIV-2 antibody differentiation assay after completing all testing procedures recommended by the manufacturer. The frequency of dually reactive results on the HIV-1/HIV-2 antibody differentiation assay in the United States is unknown and follow-up data are limited. One study reported 5 (0.50%) of 993 specimens with repeatedly reactive immunoassay results were dually reactive with the HIV-1/HIV-2 antibody differentiation assay approved by the FDA as of March 2013. 107 Three specimens with strong HIV-1 reactivity and weak HIV-2 reactivity were negative by HIV-2 immunoblot and positive for all HIV-1 Western blot bands. One specimen with strong reactivity for both the HIV-1 and HIV-2 indicators was positive by HIV-2 immunoblot; HIV-2 RNA was detected and HIV-1 RNA was undetectable. The fifth specimen, with weak reactivity for both the HIV-1 and HIV-2 indicators, lacked sufficient volume for definitive resolution. The authors concluded that strong reactivity at the indicator for HIV-2 suggested the need for more specific HIV-2 testing. 107 Published data and genotypic analyses from West Africa, where HIV-2 infection is endemic and where the largest number of dual HIV-1/HIV-2 infections have been reported, indicate that most specimens dually reactive for HIV-1 and HIV-2 antibodies represent HIV-1 infections with cross-reactivity to HIV-2 antigens. 108,109 A single case of dual HIV-1/HIV-2 infection has been reported in the United States in a patient who reported sexual contact with a person from Gambia. 110 No other published studies of U.S. populations provide evidence to recommend specific additional tests for specimens with dually reactive antibody test results. Based on expert opinion, the low prevalence of HIV-2 infection in the United States, and the lack of an FDA-approved NAT for HIV-2, laboratories should report dually reactive test results as positive for HIV antibodies that could not be differentiated as HIV-1 or HIV-2, and recommend further investigation for HIV-2 if clinically indicated (for example, if HIV-1 RNA is undetectable on the viral load assay conducted as part of the initial medical workup). See Section M, Additional Considerations, for information related to HIV-2 infection.
3. None of the assays in the updated recommended algorithm are FDA-approved for use with oral fluid or dried blood spot specimens. Laboratories should follow the 1989 recommendations 1 for using the HIV-1 immunoassay and HIV-1 Western blot approved by the FDA for these specimen types.
4. The recommended algorithm has not been evaluated in persons taking ART for preexposure or postexposure prophylaxis. Occurrences of delayed seroconversion have been reported in persons taking ART for preexposure 111 and postexposure prophylaxis. 112 As of May 2014, data are insufficient to determine whether additional follow-up testing might be indicated for persons taking ART.
5. The recommended algorithm has not been evaluated in specimens from persons with long-term HIV suppression from antiretroviral therapy. Studies document that antibody levels diminish, some immunoassays become nonreactive, and the HIV-1 Western blot reverts from positive to indeterminate in a small percentage of patients who maintain undetectable levels of HIV, especially after antiretroviral therapy initiated early during the acute phase of infection. It has been postulated that rapid and effective virologic suppression due to potent, early antiretroviral therapy may result in levels of antigenic stimulation that are inadequate for developing and maintaining HIV-1-specific antibody responses. 115 Although one study demonstrated that the FDA-approved HIV-1/HIV-2 antibody differentiation assay remained reactive in patients with various levels of exposure to antiretroviral therapy, 117 as of May 2014, data are insufficient to determine whether the recommended algorithm might produce false-negative results with specimens from persons taking antiretroviral therapy who have maintained long-term viral suppression.
6. The recommended algorithm increases the ability to detect acute HIV-1 infection, but no laboratory assay can detect HIV infection immediately after it is acquired. The duration of the eclipse period between infection and the appearance of HIV RNA is not well defined from clinical studies and likely varies with the infection route, inoculum size, and sensitivity of the NAT used to detect HIV-1 nucleic acids.
# K. Limitations of the Evidence Supporting These Recommendations
1. Evaluations of test performance were based on comparison with a composite standard for the presence of HIV infection that consisted of a positive HIV-1 Western blot, the presence of HIV-1 RNA, or both. Clinical evidence from follow-up evaluations was rarely available to document true HIV status (for example, antibody seroconversion after a specimen reactive only for HIV-1 NAT, or the subsequent detection of HIV-1 RNA in specimens that were classified as positive for HIV-1 antibody by the HIV-1/HIV-2 differentiation assay). Therefore, it is possible that some false-positive test results might have gone undetected in the performance evaluations.
2. Only two HIV-1/HIV-2 antigen/antibody combination immunoassays, one HIV-1/HIV-2 antibody differentiation immunoassay, and one HIV-1 NAT were approved by the FDA for HIV diagnosis as of December 2012 (Appendix 2, Table 2). All performance evaluations of the recommended diagnostic testing algorithm were conducted with these assays. Additional evaluations will be required with new assays as they are introduced and receive FDA approval.
3. Published studies document indeterminate results (reactivity to only the synthetic gp41 peptide or recombinant gp41 protein, but not both) in 0.8% to 1.4% of specimens with reactivity on the FDA-approved HIV-1/HIV-2 antibody differentiation immunoassay; 11% to 15% of specimens with indeterminate results proved to be HIV-negative. 76,118 No published data from follow-up testing are available to definitively determine whether indeterminate results in specimens with detectable HIV-1 RNA represent an evolving antibody response consistent with antibody development during acute infection or whether indeterminate results might persist during established infection in some persons.
4. Little evidence exists for the timing of antibody development after infection with HIV-2 or the occurrence of acute HIV-2 infection in the United States, and no HIV-2 NAT is FDA-approved. The 4th generation antigen/antibody combination assays detect IgM and IgG antibodies against both HIV-1 and HIV-2 and also p24 antigen, which is specific for HIV-1, but not p26/27, the counterpart core protein in HIV-2. 66 The HIV-1/HIV-2 antibody differentiation assay detects only IgG antibodies against HIV-1 and HIV-2. Therefore, it is possible that IgM antibodies against HIV-2 might have been present in a small number of specimens with a repeatedly reactive 4th generation immunoassay result that were classified as HIV-negative on the basis of a nonreactive HIV-1/HIV-2 antibody differentiation assay result and negative HIV-1 NAT. See Section M, Additional Considerations, for a discussion of issues related to acute HIV-2 infection.
5. Rare cases of a "second window" during early seroconversion, in which antigen/antibody combination tests transiently revert to nonreactive, have been reported outside the United States with versions of 4th generation tests older than those that are FDA-approved. One case of a "second window" of 8 days' duration was observed in a study of 28 patients in Africa and Thailand with acute, non-B subtype HIV infections identified by frequent RNA testing. 122 Subsequent testing was conducted with immunoassays and viral load assays at frequent intervals. In this case, an FDA-approved 4th generation assay became reactive for antigen at day 9 after RNA detection, was subsequently nonreactive between days 17-25, and became reactive again at day 29, when antibody levels, detected by a 3rd generation assay, began to rise. Presumably, this phenomenon was due to the short interval when antibodies begin to appear, during which antigen bound to antibody inhibits detection of either antigen or antibody by the assays. Experience with the FDA-approved 4th generation immunoassays is too limited to predict whether or how often transient seroreversion might occur in patients with subtype B infections.
# L. How These Updated Recommendations Differ From Previous Recommendations
Compared with previous testing recommendations, the updated algorithm increases sensitivity for acute HIV-1 infection by including an initial immunoassay that detects antibodies to both HIV-1 and HIV-2 and also HIV-1 p24 antigen, which can be detected before antibodies develop. The updated algorithm identifies acute HIV-1 infection by using HIV-1 NAT for specimens that are reactive on the initial immunoassay but negative for antibodies on the second immunoassay.
The previously recommended HIV testing algorithms were predicated on screening for HIV-1 antibodies; 1 specific testing for HIV-2 antibodies was confined to only limited circumstances. 2 The updated algorithm screens for both HIV-1 and HIV-2 antibodies, and distinguishes HIV-1 from HIV-2 antibodies using a single supplemental antibody differentiation immunoassay. This diagnostic approach is simpler and more accurate than CDC's 1992 recommendations because it no longer depends on laboratory access to clinical, demographic, or behavioral information suggestive of possible HIV-2 exposure. Because the recommended algorithm no longer relies on HIV-1 Western blot or HIV-1 IFA as a supplemental test, it yields fewer specimens with indeterminate results that require resolution by a follow-up test conducted several months later.
Previously, supplemental testing with HIV-1 Western blot or HIV-1 IFA was recommended for all specimens submitted for testing after a reactive rapid HIV test result even if the initial laboratory immunoassay was nonreactive. With the updated recommendation, specimens submitted after any reactive rapid HIV test result (including the HIV-1/HIV-2 antibody differentiation assay, when it is used as an initial rapid test, and the HIV-1/HIV-2 antigen/antibody combination rapid test) are tested according to the same algorithm as all other specimens. No further supplemental testing is required if the result of the initial antigen/antibody combination immunoassay is nonreactive.
# M. Additional Considerations
# Medical evaluation and follow-up testing

A laboratory diagnosis of HIV infection identifies the need for HIV medical care. Rarely, specimen mix-up or unexplained cross-reactivity may result in an incorrect laboratory diagnosis with either the previous or the recommended algorithm. 76,123 The Department of Health and Human Services Panel on Antiretroviral Guidelines for Adults and Adolescents recommends a baseline evaluation for every HIV-infected patient entering into care, with a complete medical history, physical examination, and laboratory evaluation, including plasma HIV-1 RNA viral load, CD4 determination, and an antiretroviral resistance assay, to confirm the presence of HIV infection, stage HIV disease, and assist in the selection of an initial antiretroviral drug regimen. 23 If HIV-1 RNA is below the assay's limit of detection, repeat or additional testing is indicated to verify the diagnosis of HIV infection.
# Participants in HIV vaccine trials

Recipients of HIV vaccines might have vaccine-induced antibodies that produce reactive HIV antibody test results. 124 Laboratories should advise persons who order HIV testing that vaccine recipients with reactive immunoassay results should be encouraged to contact a vaccine trial site for the specialized testing necessary to determine their HIV infection status. 125

# HIV-2 infection

Fewer than 200 cases of HIV-2 in the United States had been reported to CDC through 2009, 27 although criteria for an HIV-2 case definition were not specified until 2014. 126 The majority of HIV-2 diagnoses in the United States have been made in persons born in Africa, especially West Africa, but 12% of HIV-2 diagnoses were made in persons whose birthplace was India, North America, or Europe. 27,28 It is theoretically possible that acute HIV-2 infection might produce a repeatedly reactive HIV-1/HIV-2 antigen/antibody combination immunoassay result with nonreactive HIV-1/HIV-2 antibody differentiation assay and HIV-1 NAT results. Based on expert opinion and the low prevalence of HIV-2 infection in the United States, this sequence of test results most likely indicates a false-positive initial immunoassay result. Laboratories should report that such test results indicate no laboratory evidence for HIV-1 infection and suggest follow-up testing for HIV-2 if clinically indicated. Although accurate diagnosis of HIV-2 is clinically important because HIV-2 strains are naturally resistant to several antiretroviral drugs developed to suppress HIV-1, 127 diagnosis of HIV-2 can be problematic. Only two reports of acute HIV-2 infection have been published; both occurred in West Africa and were documented by seroconversion. 128,129 The reliability of HIV-2 NAT during acute HIV-2 infection is unknown.

In specimens obtained 5 to 6 months after seroconversion, plasma viral load was 28 times lower in HIV-2 seroconverters than in comparable HIV-1 seroconverters. 130 Because HIV-2 RNA is undetectable in at least half of HIV-2-infected patients, testing for proviral DNA may be required for definitive diagnosis. 131,132 No tests for HIV-2 RNA or DNA have received FDA approval. If additional testing for HIV-2 is requested, HIV-2 NATs for which the analytic performance characteristics have been determined may be available from commercial laboratories, city or state public health laboratories, or CDC. 28,133,134

# Specimen collection and storage requirements

Freshly collected serum or plasma specimens yield the most accurate HIV test results. Specimen volume also influences the ability to perform the recommended algorithm because a larger volume is needed for specimens that require testing by both the HIV-1/HIV-2 differentiation immunoassay and HIV-1 NAT. A venipuncture specimen of at least 5 ml of whole blood is necessary to yield at least 2 ml of serum or plasma, the amount needed to conduct all assays in the recommended algorithm (a simple volume check is sketched after the list below). Some laboratories require a separate specimen for NAT. Specific assays that can be used in the recommended algorithm have different requirements for specimen collection, storage temperatures, and the need for or timing of separating cells from serum or plasma. These requirements are specified by the manufacturer and sometimes change. Ensuring accurate results requires that laboratories
- carefully review the manufacturer's instructions in each assay's package insert to determine requirements for acceptable specimen types (serum, plasma), volume, collection tubes, anticoagulants, cell separation, storage and shipping requirements, and timeframes (keeping in mind that these include shipping periods)
- communicate these specific requirements to the persons who submit specimens for testing
- request collection of another specimen from a second venipuncture if all tests cannot be performed on the same specimen
- confirm handling, storage, and shipping requirements before sending specimens to reference laboratories for additional testing
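The volume guidance above can be expressed as a simple pre-analytic check. The thresholds mirror the text; the function name and the idea of encoding the check in software are illustrative assumptions, and real acceptance criteria also cover tube type, anticoagulant, storage, and shipping per each package insert.

```python
# Illustrative pre-analytic volume check based on the guidance above
# (>= 5 ml whole blood to yield >= 2 ml serum/plasma). Names and the check
# itself are a sketch; real acceptance criteria come from each package insert.
MIN_WHOLE_BLOOD_ML = 5.0
MIN_SERUM_PLASMA_ML = 2.0

def volume_sufficient(whole_blood_ml=None, serum_plasma_ml=None):
    """True if the specimen likely suffices for all assays in the algorithm."""
    if serum_plasma_ml is not None:
        return serum_plasma_ml >= MIN_SERUM_PLASMA_ML
    if whole_blood_ml is not None:
        return whole_blood_ml >= MIN_WHOLE_BLOOD_ML
    return False

print(volume_sufficient(whole_blood_ml=4.0))  # False -> request second venipuncture
```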
# N. Reporting Results of the Recommended Algorithm for the Laboratory Diagnosis of HIV
Laboratory practices for reporting test results and providing interpretation of results to the persons who ordered the HIV test are influenced by
- the assay manufacturer's instructions (specified in product inserts),
- guidance or recommendations from regulatory or scientific agencies and professional associations,
- local clinical practice needs (communicated through requests from institutional clinical practice committees),
- requirements of the laboratory information system, and
- requirements of the electronic health record.

Reports of results from the recommended algorithm should include
- all assays that were used
- the results of each assay
- interpretation of the results
- any additional testing that is recommended using existing specimens or new specimens that should be submitted
- if alternatives to the recommended assays or algorithm sequence were used, the assays that were used and limitations of these tests or sequence compared with the recommended algorithm
Laboratories should report final results when all testing is complete, but can also report test results of individual assays used in the algorithm as they become available. If results of all tests are not reported at one time, the report should specify which test results are pending.
Laboratories should establish methods to expedite reporting of test results consistent with acute HIV infection to the person who ordered HIV testing and to public health authorities to facilitate prompt notification and provision of services for acutely infected persons and their sex and drug injection partners.
# Reporting HIV test results to public health authorities
All states, the District of Columbia, and United States territories and dependent areas require that laboratories report test results indicative of HIV infection to public health authorities in the patient's jurisdiction of residence. 137 Table 1 lists suggested elements that should be reported to public health authorities for each potential outcome of the recommended algorithm. However, specific requirements of state or local health departments might differ. The following reporting principles will facilitate accurate case reporting (an illustrative sketch follows the list):
1. Results from the recommended laboratory testing algorithm with a negative overall conclusion (i.e., indicating that the patient is uninfected) should not be reported.
2. If the conclusion of the laboratory diagnostic testing algorithm is positive, indicating the presence of HIV infection, laboratories should report, in the same data transmission to the public health authorities:
a. the overall result or conclusion of the laboratory algorithm, and
b. results from each test performed (including negative/nonreactive or indeterminate results)
3. If the recommended laboratory diagnostic testing algorithm was not completed and the overall conclusion was not determined (indicating possible HIV infection that requires additional testing or follow-up), the laboratory should follow local requirements for reporting incomplete or inconclusive results.
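A minimal sketch of reporting principles 1 through 3, for illustration only; the field names and report structure are assumptions, and the actual reportable elements and formats are set by each jurisdiction.

```python
# Illustrative sketch of reporting principles 1-3 above. Field names are
# assumptions; jurisdictions define the actual reportable elements and formats.

def public_health_report(overall_conclusion, assay_results):
    """Return a report payload, or None when no report should be sent."""
    if overall_conclusion == "negative":
        return None  # principle 1: negative overall conclusions are not reported
    if overall_conclusion == "positive":
        return {     # principle 2: one transmission with conclusion and all results
            "conclusion": "positive",
            "assay_results": assay_results,  # includes nonreactive/indeterminate
        }
    # principle 3: incomplete or inconclusive -> follow local requirements
    return {"conclusion": "inconclusive",
            "assay_results": assay_results,
            "note": "report per local requirements; additional testing needed"}
```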
# O. Plans for Updating These Recommendations
Anticipating continued improvements in laboratory diagnostic techniques, CDC will monitor the introduction and FDA approval of diagnostic assays for HIV infection and update these recommendations when necessary. CDC's Division of HIV/AIDS Prevention in the National Center for HIV, Viral Hepatitis, STD, and TB Prevention, with APHL, will continue to monitor the performance of the recommended laboratory testing algorithm and will review it at least every five years.

# Appendix 2. Analytic Framework, Search Strategy, and Summary of Evidence
# A. Analytic Framework
Three CDC writing group members (Branson, Owen, Wesolowski) identified key questions and conducted a systematic literature review to compare outcomes of the previous and proposed algorithm recommendations based on the premise that an accurate HIV test result constituted an outcome important to patients. 138,139 The CDC working group members evaluated the evidence using criteria adapted to the types of study designs necessary to answer the key questions and an analytic framework that links the key questions to outcomes related to the performance of individual assays and their use in combinations (Figure 2).
The evidence synthesis focused on the relative effects of the previous and recommended testing algorithms on diagnostic accuracy and yield. The CDC reviewers considered tests or algorithms that classified specimens as true-positive or true-negative for HIV-1 or HIV-2 as benefits to patients, and being classified as false-positive or false-negative or receiving indeterminate or inconclusive results as harms. They considered reduced turnaround time as a benefit, and delayed diagnosis due to the need for additional specimens or follow-up testing as a harm. 42,139,140 Services after diagnosis for persons with HIV and for those identified as uninfected would be needed regardless of which testing algorithm is used. Only studies that evaluated FDA-approved assays (Appendix 2, Table 2) were included in the evidence synthesis.
The literature search identified 1,858 abstracts of potentially relevant articles. Of these, 1,778 were excluded because they were background articles, did not contain assay performance data, or evaluated assays that were not FDA-approved. Of the remaining 80, 39 articles contained data relevant to the key questions for evaluating individual assays or diagnostic algorithms for HIV; 4 studies related to costs or cost-effectiveness; 2 studies related to potential harms from indeterminate HIV test results; 14 studies described viral dynamics of HIV and generic laboratory markers without identifying specific assays; 6 studies described HIV-2 distribution and diagnosis with assays that are not FDA-approved; 3 studies evaluated HIV-1 diagnosis in infants; 7 studies modeled transmission attributable to acute HIV-1 infection; and 5 studies evaluated the potential benefits of antiretroviral therapy for acute HIV-1 infection.
Each of the three CDC writing group members, all experienced with HIV diagnostic testing studies, reviewed the studies independently. For each study, one member abstracted details about the study design, source of specimens, assays evaluated, and study results. A second member then reviewed the data abstraction for accuracy. Discrepancies regarding the applicability of the evidence or limitations of the studies were resolved by consensus.
# C. Quality of Evidence
The quality of available studies comparing the performance of individual HIV tests or algorithms was inherently limited. No randomized controlled trials comparing individual assays or algorithms were conducted with specimens from populations with unknown infection status. Limitations affected many of the studies identified during the literature review and are identified for each study in the tables of evidence (Appendix 2, Section E).
- Nearly all studies identified during the literature review were cross-sectional analyses that used previously tested specimens and compared test results with a reference standard in different studies and not directly with each other in the same study. Because the prevalence of acute HIV-1 infection and HIV-2 infection is extremely low in the United States, testing specimens from a population representative of persons screened for HIV infection in the United States would yield very few cases of these infections. Studies therefore used specimen collections enriched with specimens from known cases of acute HIV-1 infection or HIV-2 infection to allow performance evaluations using a smaller number of specimens and a shorter time frame. These were the only study designs that could feasibly answer questions of accuracy, but such studies might be subject to selection bias.
- Available studies evaluated tests and algorithms against different reference standards for a laboratory diagnosis of HIV-1 or HIV-2 infection. Some of the HIV-1 NAT or HIV-2 assays that were used are not FDA-approved for HIV diagnosis. Differences in reference standards can reduce the comparability of results for index tests evaluated in different studies.
- No studies conducted all assays on all specimens. Studies initiated testing with different immunoassays, and selected specimens for additional testing based on repeatedly reactive immunoassay results or conducted pooled HIV-1 NAT on immunoassay-negative specimens. Some studies conducted only HIV-1 Western blot or HIV-1 IFA; others also performed HIV-1 NAT. These procedures might result in selection bias and reduce the comparability of results for index tests.
- Studies of previously tested specimen sets did not specify whether testing personnel were unaware of the results of the previous tests ("blinded") before conducting the index test. This might have resulted in biased interpretation of test results, especially for assays that require subjective interpretation of results, such as the HIV-1 Western blot and HIV-1/HIV-2 antibody differentiation assay.
- Studies that present longitudinal data on the natural history of HIV-1 infection, such as seroconversion panels, typically have small numbers of subjects because subjects must provide multiple blood specimens over a period of weeks or months. The small numbers may not be representative of all persons in populations at risk for HIV infection.
- The 4th generation antigen/antibody combination immunoassays were only recently FDA-approved for clinical use. This limited the number and size of studies that evaluated these tests, which resulted in wide confidence intervals for point estimates in some studies (illustrated in the sketch after this list).
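To illustrate the last point, a Wilson score interval shows how sample size drives interval width even when observed sensitivity is 100%. This example is not from the cited studies; the specimen counts (34 versus 207) echo figures that appear later in this appendix, and the choice of interval method is itself an illustrative assumption.

```python
# Illustration of why small studies yield wide confidence intervals: Wilson
# score interval for a proportion, in pure Python. Example sizes (34 vs. 207
# specimens, both with 100% observed sensitivity) echo figures cited later in
# this appendix; the interval method itself is an illustrative choice.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(wilson_ci(34, 34))    # ~(0.90, 1.00): small study, wide interval
print(wilson_ci(207, 207))  # ~(0.98, 1.00): larger study, narrower interval
```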
Studies that systematically assessed sensitivity, specificity, and accuracy (key questions 1, 2, and 3) are identified in the tables of evidence. No studies systematically assessed the minimum number of assays necessary to obtain an accurate laboratory diagnosis (key question 4) or compared the harms and benefits of the previous and recommended algorithms (key question 6). Three models examined cost and cost-effectiveness (key question 5), but no observational studies directly compared costs of the previous and recommended algorithms. Available data related to key questions 4, 5, and 6 are included and cited in the summary of evidence.
Twelve studies directly compared two or more different immunoassays on the same specimens. 18,46,47,49,63,66,70,71,76,94,123,141 Four of these tested the same specimens from seroconversion panels at different times. 46,47,49,66 Fourteen studies evaluated the performance of different immunoassays and algorithms with different specimen collections and compared results to a reference standard. 18,29,47,71,72,76,88,107 Four studies of assay performance for acute HIV-1 infection used specimens that had been identified by pooled HIV-1 RNA screening programs and collected and stored consistent with requirements specified for the assay by the manufacturer. 18,62,71,72 Two studies of previously tested specimens with negative or indeterminate Western blot results conducted retrospective testing with the HIV-1/HIV-2 differentiation assay and HIV-1 NAT, but had no information about whether specimens were stored and handled consistent with requirements for the HIV-1 NAT assays. 148,149 CDC writing group members did not conduct pooled data analyses because the studies were conducted with different assays of the same or different classes (that is, three 3rd generation and two 4th generation immunoassays) using specimen collections from different populations with different pre-test probabilities of infection, or enriched with pedigreed specimens with known laboratory diagnoses of HIV-1 or HIV-2 infection. The numbers of significant digits reported for values in the evidence summary and tables are those published in the original studies. The writing group did not conduct recalculations or rounding.
Inferring that an accurate test result improves outcomes important to patients requires availability of effective treatment, improved well-being through prognostic information, and, by excluding an ominous diagnosis, reduction of anxiety. 138 The workgroup relied on other systematic reviews and recommendations for documentation of benefits and harms associated with screening and diagnostic testing for HIV in different populations, effectiveness of treatment for persons with HIV infection, and interventions for HIV-negative persons. 23,

1.a.1. The sensitivity of immunoassays for established HIV-1 and HIV-2 infections is very high and comparable for all FDA-approved 3rd generation and 4th generation immunoassays. Sensitivities of the 3 FDA-approved 3rd generation assays for established HIV-1 infection ranged from 99.80% to 100% (4 studies and 3 product inserts), 46,47,49,94, and of the 2 FDA-approved 4th generation assays, 99.76% to 100% (4 studies and 2 product inserts). 46,47,72,143,160,161 Few independent studies have examined sensitivity for HIV-2. One study that evaluated a 3rd generation HIV-1/HIV-2 immunoassay and one with a 4th generation HIV-1/HIV-2 combination immunoassay found 100% sensitivity for HIV-2. 49,144 Data from manufacturers indicate FDA-approved 3rd and 4th generation HIV-1/HIV-2 immunoassays are 100% sensitive for HIV-2. 143,

1.a.2. Only NATs for HIV-1 RNA, which do not detect HIV-2, 49,162 are FDA-approved. Sensitivity of HIV-1 NAT for established HIV-1 infection is lower than that of immunoassays. In 2 cross-sectional and 2 prospective studies, HIV-1 RNA NAT produced negative results in 2% to 4% of specimens that were reactive on 3rd generation immunoassays and positive on HIV-1 Western blot. 18,49,142,145 Some specimens with undetectable RNA might have come from persons taking ART, but one study documented NAT-negative specimens from antibody-positive persons who were not receiving ART. 18 Some NAT-negative specimens might have come from persons who suppress HIV replication without ART (so-called elite controllers). 163,164 However, this phenomenon is estimated to occur in only 1 of 300 persons with HIV infection. 165

1.a.3. Only one assay is FDA-approved for differentiating HIV-1 from HIV-2 antibodies. Criteria for a positive interpretation, as revised in March 2013, require the presence of both HIV-1 indicators (synthetic gp41 peptide and recombinant gp41 protein) when the assay is used as a supplemental test; presence of only one indicator is interpreted as an indeterminate result. 166 Sensitivity of the differentiation assay for established HIV-1 infection in 9 studies ranged from 98.5% to 100%, but not all studies reported whether their diagnostic criteria required one or both of the HIV-1 indicators. 29,47,63,76,88,107,118,146,147 In 2 studies that required both indicators, 11 (85%) of 13 specimens and 8 (89%) of 9 specimens with only one HIV-1 indicator had either a positive HIV-1 Western blot result or detectable HIV-1 RNA. 76,118 Four studies reported the results of parallel testing of specimens with the HIV-1 Western blot and the HIV-1/HIV-2 antibody differentiation assay. One prospective study of 993 specimens repeatedly reactive by 3rd or 4th generation immunoassays identified 882 specimens reactive for HIV-1 only on the antibody differentiation assay. 107 HIV-1 Western blot was positive in 871 and indeterminate in 11 of these specimens.
Six of the 11 patients with indeterminate results were eventually traced and found to have serologically confirmed HIV-1 infection. In this same study, 3 specimens were reactive for HIV-2 and 5 specimens were reactive for both HIV-1 and HIV-2 on the antibody differentiation assay (1 with strong reactivity for both HIV-1 and HIV-2, 3 with strong HIV-1 and weak HIV-2 indicators, and 1 with weak reactivity for both HIV-1 and HIV-2). 107 The 3 specimens reactive for HIV-2 only and the specimen with strong dual reactivity were positive by HIV-2 immunoblot and negative by HIV-1 NAT; the dually reactive specimen also had detectable HIV-2 RNA. 107 All 4 HIV-2 specimens showed 3 or 4 bands on the HIV-1 Western blot, sufficient to classify 3 as HIV-1 positive and 1 as indeterminate. The 3 specimens with strong HIV-1 reactivity and weak HIV-2 reactivity were reactive for all bands on the HIV-1 Western blot and negative on the HIV-2 immunoblot. The specimen with weak reactivity for both HIV-1 and HIV-2 was reactive only to the gp160 band on HIV-1 Western blot and negative by HIV-2 immunoblot; volume was insufficient for HIV-1 NAT. 107 In a study of 8,760 specimens repeatedly reactive by 3rd generation immunoassay, the HIV-1/HIV-2 antibody differentiation assay classified 26 (0.3%) of 8,678 specimens with positive HIV-1 Western blot results and 12 (19%) of 63 specimens with indeterminate HIV-1 Western blot results as HIV-2, but no other HIV-2 tests were performed to validate these results. 88 In a second study, the HIV-1/HIV-2 differentiation assay was positive for HIV-1 in 491 and for HIV-2 infection in 2 (0.4%) of 493 specimens that had positive HIV-1 Western blot results; these 2 were also positive by HIV-2 Western blot. 47 One retrospective study of 34 specimens and the product insert description of 207 specimens demonstrated 100% sensitivity of the HIV-1/HIV-2 differentiation assay for HIV-2. 49,166
b. What is the sensitivity of individual assays for acute HIV-1 infection?
1.b.1. Third generation HIV-1/HIV-2 assays, fourth generation HIV-1/HIV-2 antigen/antibody combination assays, and HIV-1 RNA assays each confer incremental improvements in sensitivity for acute HIV-1 infection. Retrospective testing of specimens from high-risk persons demonstrated that 3rd generation immunoassays were reactive in 20% to 37% and 4th generation assays were reactive in 62% to 83% of specimens that were negative by HIV-1 Western blot but reactive by HIV-1 NAT. 69,70,72,73 In a study of 228 specimens from 26 commercial HIV-1 seroconversion panels, HIV-1 Western blot was positive in 56 specimens (25%); 3 different 3rd generation assays were reactive in 102 (44.7%), 108 (47.4%), and 111 (48.6%) specimens, respectively, and 2 different 4th generation assays were reactive in 131 (57.5%) and 135 (59.2%) specimens. 46,47 In 3 retrospective studies of 42 specimens with detectable HIV-1 RNA but nonreactive 3rd generation antibody immunoassays, 62% to 77% were reactive by 4th generation HIV-1/HIV-2 antigen/antibody combination immunoassays. 18,73,167 In a study involving retrospective testing of 74 specimens reactive by HIV-1 NAT after negative 1st or 2nd generation immunoassays, Western blot was positive in 12.5%, a 3rd generation immunoassay was reactive in 42.2%, and a 4th generation immunoassay was reactive in 89.1%. 71 In one study of 99,111 specimens screened with immunoassays, HIV-1 Western blot, and pooled NAT, 1,186 specimens were positive by either HIV-1 Western blot or HIV-1 NAT. 18 Pooled HIV-1 NAT increased the yield of new HIV diagnoses by 2.2% in specimens with a nonreactive 3rd generation immunoassay and by 0.7% in specimens with a nonreactive 4th generation immunoassay. 18

1.b.2. The HIV-1/HIV-2 antibody differentiation assay detects HIV-1 infection earlier than the HIV-1 Western blot. In one prospective study, Multispot was reactive for HIV-1 in 11 specimens that were indeterminate by HIV-1 Western blot; 6 of the 11 patients were eventually traced and serologically confirmed to have HIV-1 infection. 107 Two studies of 183 specimens from 15 seroconverters demonstrated the HIV-1/HIV-2 differentiation assay became reactive 7 days before the HIV-1 Western blot. 46,49 Among 8,760 specimens repeatedly reactive by 3rd generation immunoassay, the HIV-1/HIV-2 differentiation assay was reactive for HIV-1 in 3 (15.8%) of 19 specimens that were negative by HIV-1 Western blot and in 11 (17.5%) of 63 specimens indeterminate by HIV-1 Western blot. 88 Five studies documented reactive HIV-1/HIV-2 antibody differentiation assay results in specimens repeatedly reactive by 3rd or 4th generation immunoassay, reactive by HIV-1 NAT, but negative or indeterminate by HIV-1 Western blot. 47,76,107,148,149

2. What is the specificity of individual assays in specimens from uninfected persons?
Most studies compared immunoassays with specimens defined as negative for HIV by negative immunoassay or HIV-1 Western blot results. Thus, specificity estimates for immunoassays from such studies might be artificially lowered by specimens from persons with acute or recent HIV infection that were misclassified as negative by the Western blot. 63,70
a. Specificities of the 3 FDA-approved 3rd generation assays ranged from 99.13% to 100% (6 studies, 3 product inserts). 46,47,49,94,141,168
b. Specificities of the 2 FDA-approved 4th generation assays ranged from 99.50% to 100% (6 studies, 2 product inserts). 46,47,72,141,143,160,161,168
c. Specificity of the HIV-1/HIV-2 differentiation assay ranged from 99.03% to 99.93% (3 studies, product insert). 46,63,166,169
d. No recent studies examined Western blot results in specimens without previous repeatedly reactive immunoassay results. A 1990 study demonstrated indeterminate Western blot results in 32% of healthy adults. 170
e. Specificity of HIV-1 NAT for HIV-1 infection was 99.6% in one retrospective study of 513 reference-negative specimens 49 and 99.9% in 2 studies of pooled NAT screening of HIV-1 antibody-negative specimens. 16,18 The 2 studies of pooled NAT screening reported follow-up test results on persons with NAT-reactive specimens. In one, 1 of 5 persons with NAT-reactive results from 8,505 antibody-negative specimens did not seroconvert; 16 in the second, 3 of 15 persons with NAT-reactive results from 54,000 specimens screened did not seroconvert. 18
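To make the practical consequence of these specificity figures concrete, the following sketch computes the positive predictive value of a single reactive immunoassay with Bayes' rule. The sensitivity and specificity values sit within the ranges reported above, but the prevalence values and function name are illustrative assumptions, not figures from the cited studies.

```python
# Hypothetical illustration: positive predictive value (PPV) of one
# screening immunoassay applied once. Prevalences are assumed values.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(infected | reactive result) for a single test."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.10, 0.01, 0.001):  # high-risk clinic vs. general screening
    print(f"prevalence {prev:>6.1%}: PPV = {ppv(0.998, 0.995, prev):.1%}")
# prevalence 10.0%: PPV ~ 95.7%
# prevalence  1.0%: PPV ~ 66.8%
# prevalence  0.1%: PPV ~ 16.7%
```

Even with specificity above 99%, most reactive screening results in very-low-prevalence populations are false positives, which is the arithmetic rationale for the supplemental testing evaluated in the next key question.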
# 3. Accuracy of algorithms based on combinations of assays
a. What is the accuracy of algorithms based on combinations of assays in specimens from persons with established HIV-1 infection?
Six studies indicate that the accuracy of the recommended algorithm is equivalent to or better than that of the previous algorithm for correctly classifying established HIV-1 infections; it also produces fewer indeterminate results. The New York State laboratory's routine testing algorithm allowed direct comparison of the recommended and previous algorithms on 38,257 specimens that were tested prospectively with a 3rd generation immunoassay, followed by both Western blot and HIV-1/HIV-2 antibody differentiation immunoassay on repeatedly reactive specimens, and HIV-1 RNA NAT on HIV-2-negative, Western blot-negative, or indeterminate specimens. The recommended algorithm correctly classified 1,578 (100%) specimens as HIV-1 positive; the previous algorithm classified 1,546 (98%) as HIV-1 positive, 4 as negative, and 28 as inconclusive. 76 In 4 retrospective studies of more than 3,200 HIV-1 Western blot-positive specimens, correct results when the recommended algorithm was initiated with 3rd generation immunoassays ranged from 99.8% to 100%. 47,70,94,123 Three similar studies that initiated retrospective testing with 4th generation immunoassays on 4,200 HIV-1 Western blot-positive specimens did not demonstrate a statistically significant difference between the results of the recommended and the previous algorithms. 46,47,123
b. What is the accuracy of algorithms based on combinations of assays in specimens from persons with acute HIV-1 infection?
In two studies using 230 specimens from the same seroconversion panels, the new algorithm with two 3rd generation assays as the initial test correctly identified acute HIV-1 infections in 14 (6.1%) and 12 (5.2%) specimens, respectively, that were negative with the previous algorithm. 46,47 With the two 4th generation assays as the initial test, the recommended algorithm identified acute HIV-1 infections in 36 (15.7%) and 41 (17.8%) specimens, respectively, that were negative with the previous algorithm. 46,47 One ongoing prospective study identified 654 specimens that were repeatedly reactive by 4th generation antigen/antibody immunoassay. 10 HIV-1/HIV-2 antibody differentiation assay results were reactive in 555 (84.9%) specimens and negative or indeterminate in 99 (15.1%). HIV-1 RNA NAT was reactive in 55 (55.6%) of these 99: 47 (52.2%) of 90 specimens with negative and 8 (88.9%) of 9 specimens with indeterminate HIV-1/HIV-2 antibody differentiation immunoassay results. 118 One study of 37 patients with repeatedly reactive 4th generation antigen/antibody combination immunoassay results identified detectable HIV-1 RNA in 11 (29.7%) specimens with nonreactive HIV-1/HIV-2 antibody differentiation assay results. 10 Two studies retrospectively tested specimens that were Western blot-negative or indeterminate after repeatedly reactive 3rd generation immunoassays with the HIV-1/HIV-2 differentiation immunoassay and HIV-1 NAT. In one, application of these two supplemental tests identified HIV-1 infections in 184 (5.6%) of 3,273 specimens; 96 (2.9%) were acute HIV-1 infections. 148 In the second study, among 570 specimens obtained from public health laboratories, application of the two supplemental tests identified HIV-1 infection in 55 (9.6%); 19 (3.3%) were acute HIV-1 infections. 149 In a prospective analysis of 51,000 specimens during the first 5 months of using the recommended testing algorithm in the Florida Bureau of Public Health Laboratories, the recommended algorithm detected 922 HIV-1 infections, of which 4 (0.4%) were acute. 123 In a prospective analysis after implementation of the recommended algorithm with 7,984 specimens from Massachusetts, 8 (3.1%) of 258 HIV-1 infections were acute. 167
c. What is the accuracy of algorithms based on combinations of assays in specimens from persons with established HIV-2 infection?
This question could not be answered directly because there is no FDA-approved NAT for HIV-2 and different diagnostic criteria were used in studies of HIV-2 infections.
According to the previous recommendations, specimens with a positive HIV-1 Western blot would not undergo specific testing for HIV-2. 2 Most studies evaluating algorithms involved specimens with evidence of HIV-2 infection that had been misclassified as HIV-1 infection. In a prospective study, parallel testing with both the differentiation assay and HIV-1 Western blot classified 26 (0.29%) of 8,678 HIV-1 Western blot-positive specimens as HIV-2. 88 In a retrospective study, the recommended algorithm reclassified 2 (0.4%) of 493 specimens with positive HIV-1 Western blot results as HIV-2. 47
d. What is the accuracy of algorithms based on combinations of assays in specimens from persons not infected with HIV?
The recommended algorithm accurately classifies more persons who are not infected with HIV than the previous algorithm by reducing the number of specimens with indeterminate test results. In the New York State analysis that directly compared the recommended and previous algorithms after an initial 3rd generation immunoassay on the same specimens, the recommended algorithm classified 36,661 (99.98%) specimens as negative compared with 36,649 (99.95%) by the previous algorithm. 76 In this study, 48 (2.9%) specimens had indeterminate HIV-1 Western blot results with the previous algorithm. Applying test results from the recommended algorithm, only 9 (0.5%) of these 48 specimens would have been classified as inconclusive, because the specimens submitted were unsuitable for NAT testing. 76 A study of 10,014 specimens from life insurance applicants with low HIV prevalence identified 13 (0.1%) specimens repeatedly reactive on at least one 3rd or 4th generation immunoassay. 47 Two were classified as HIV-1 positive by both the previous and recommended algorithms.
One specimen repeatedly reactive only by 3rd generation and 8 repeatedly reactive only by 4th generation immunoassays were negative by both algorithms. One specimen repeatedly reactive only by 3rd generation and one specimen repeatedly reactive only by 4th generation immunoassay were indeterminate by the previous algorithm and negative by the recommended algorithm. 47
4. What algorithm(s) required the fewest assays to maximize accuracy and minimize indeterminate or inconclusive test results?
a. Required supplemental assays
The recommended algorithm requires fewer assays than the previous algorithm to obtain accurate test results for HIV-1 and HIV-2 for most specimens. Both the previous and recommended algorithms include 1 initial assay and 1 or 2 supplemental assays. The recommended algorithm requires one supplemental assay if HIV-1 or HIV-2 antibodies are present; it eliminates the need to use a second supplemental assay to test for HIV-2 antibodies. If results of the first supplemental assay are negative or indeterminate, it requires a second supplemental assay, HIV-1 NAT, to test for acute HIV-1 infection and eliminate indeterminate results. Three studies show that only 0.1% to 0.2% of all specimens required NAT testing to identify acute HIV-1 infections or false-positive immunoassay results. 47,123,167 The previous algorithm requires 1 supplemental assay to test for HIV-1 antibodies. If the result of this first supplemental assay is negative or indeterminate, it requires a second supplemental assay to test for HIV-2 antibodies. The previous algorithm cannot identify acute HIV-1 infection, and indeterminate specimens that are negative for HIV-2 antibodies require testing of a follow-up specimen for HIV-1. Three studies indicate the previous algorithm produces indeterminate results in 1.9% to 4.5% of specimens with repeatedly reactive 3rd or 4th generation immunoassays. 29,76,118 With both algorithms, HIV-1 antibody-positive specimens are resolved with 2 assays; specimens that are false-positive on the initial assay require 3 tests. This decision flow is sketched below.
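The decision flow just described can be summarized in code. The sketch below is an illustrative rendering of the recommended algorithm as described in this section: the function names and result strings are invented for illustration, and it is not an implementation of any laboratory's information system or of vendor-specific result codes.

```python
# Illustrative sketch of the recommended algorithm's decision flow.
# Each argument is a callable returning this specimen's result string;
# names and result values are assumptions made for this example.

def recommended_algorithm(ag_ab_immunoassay, differentiation_assay, hiv1_nat):
    if ag_ab_immunoassay() == "nonreactive":
        return "HIV negative"           # antigen/antibody not detected
    diff = differentiation_assay()       # supplemental test 1
    if diff == "HIV-1 reactive":         # both gp41 indicators reactive
        return "HIV-1 positive"
    if diff == "HIV-2 reactive":
        return "HIV-2 positive"
    # Negative or indeterminate differentiation result -> supplemental test 2
    if hiv1_nat() == "RNA detected":
        return "acute HIV-1 infection"
    return "HIV negative (false-positive initial immunoassay)"
```

Under this flow, antibody-positive specimens resolve after two assays, and only the small fraction with discordant results (0.1% to 0.2% of all specimens in the studies cited above) reaches the NAT step.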
b. HIV-1 NAT screening of specimens negative for HIV antibodies
HIV-1 NAT screening after a negative 3rd or 4th generation immunoassay can identify more acute HIV infections than 4th generation immunoassays, but an antibody immunoassay must also be used on all specimens with a negative HIV-1 NAT because HIV-1 NAT is negative in 2% to 4% of specimens with established HIV-1 infection and in all specimens with HIV-2 infection. 18,49,145 If used as the second test in the algorithm, HIV-1 NAT would fail to distinguish acute from established infections, and tests for HIV-1 and HIV-2 antibody would still be required for specimens with negative HIV-1 NAT results. Using HIV-1 NAT as the first or second test in the algorithm would increase the number of tests necessary for accurate diagnosis, increase turnaround time for final results, and increase costs. 142,171
c. HIV-1/HIV-2 differentiation assay
Using an HIV-1/HIV-2 antibody differentiation assay as the supplemental test in the recommended algorithm can reduce the number of assays because it can identify and differentiate HIV-1 and HIV-2 antibodies in a single step. The differentiation assay also incorporates two different antigenic determinants for HIV-1 (synthetic gp41 peptide and recombinant gp41 protein), so a laboratory diagnosis of HIV-1 is based on 3 concordant reactive test results (the initial immunoassay and both HIV-1 determinants in the differentiation assay). 76,88,149,166 CLIA classifies the FDA-approved differentiation assay as "moderate complexity," 166 so it can be performed by laboratories that are not certified to conduct high-complexity assays such as the Western blot, which increases the number of laboratories that can perform both the initial and supplemental assays in the recommended algorithm.
d. Testing specimens submitted after reactive single-use rapid HIV test results
CDC's 2004 recommendation to confirm all reactive rapid tests by HIV-1 Western blot or HIV-1 IFA, regardless of results of initial laboratory immunoassays, was based on observations of false-negative results from the laboratory immunoassay then in predominant use, 74 which was discontinued by the manufacturer in 2007. Three studies that conducted all HIV tests approved by the FDA as of December 2013 on the same 166 plasma specimens from 16 seroconverters indicated that, for tests FDA-approved as of May 2014, 4th generation laboratory-based combination antigen/antibody immunoassays become reactive earlier during early infection than any of the single-use rapid HIV tests, including the 4th generation antigen/antibody combination rapid HIV test. 46,49,63 One prospective evaluation of 428 specimens submitted after reactive rapid test results demonstrated that all were correctly classified by the recommended algorithm. 123 A nonreactive antigen/antibody combination immunoassay result after a reactive rapid HIV test result indicates a false-positive rapid HIV test result and averts the need to conduct additional supplemental testing.
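The follow-up rule in the last sentence is small enough to state directly in code. This is a minimal sketch of that rule only; the function name and result strings are invented for illustration.

```python
# Sketch of the rule stated above for specimens submitted after a
# reactive single-use rapid HIV test; names are illustrative assumptions.

def interpret_after_reactive_rapid(lab_ag_ab_result: str) -> str:
    if lab_ag_ab_result == "nonreactive":
        return "false-positive rapid test; no supplemental testing needed"
    return "reactive; proceed with the recommended algorithm's supplemental tests"
```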
5. Do the costs and cost-effectiveness of the recommended algorithm for the diagnosis of HIV infection differ from the costs and cost-effectiveness of the previous algorithm?
Comparing laboratory costs for testing algorithms is difficult because assay costs vary over time, in different laboratories, and with different testing volumes. Testing costs also depend on the prevalence of established and acute HIV infections in tested specimens (and thus the number of supplemental tests required). 99 Investigators collected cost information from 17 clinical and public health laboratories and used the median cost in a model to compare the cost of the previous algorithm with that of the recommended algorithm. (The model did not include costs or effectiveness for the laboratory diagnosis of HIV-2 infection.) The recommended algorithm identified more specimens with HIV-1 infection. It was less costly than the previous algorithm for specimens positive for HIV antibody, but more costly for the subset of specimens that required HIV-1 NAT to evaluate acute infection or false-positive initial immunoassay results. 99 Estimates of both the number of HIV infections detected and overall laboratory testing costs were higher with the recommended algorithm than with the previous algorithm. 99 In specimens with 1% prevalence of established HIV-1 infection and 0.1% prevalence of acute HIV-1 infection (characteristic of specimens from high-risk populations), the model estimated that, compared with the previous algorithm, the incremental cost per additional HIV-1 infection detected ranged from $5,027 to $14,400. In contrast, for specimens in which the prevalence of established and acute HIV-1 infections is very low (0.01% and 0.001%, respectively), the incremental cost-effectiveness of the recommended algorithm exceeds $100,000 per additional infection detected compared with the previous algorithm. A different cost-effectiveness model that included as an outcome the costs of cases averted by early detection of HIV infection concluded that HIV testing remained cost saving until costs per new HIV diagnosis exceeded $22,903. 172

Two other U.S. models evaluated the cost-effectiveness of alternative algorithms in which pooled HIV-1 NAT would directly follow an initial nonreactive 3rd generation immunoassay. Both used cost per quality-adjusted life year as the outcome. One found that the incremental cost-effectiveness of pooled HIV-1 NAT exceeded $100,000 per quality-adjusted life year unless prevalence of acute HIV infection in tested specimens exceeded 0.4%. 173 The second model found that screening with a 4th generation immunoassay was more economical than pooled NAT screening after a negative 3rd generation immunoassay. 171 In both studies, the cost-effectiveness of each strategy varied considerably with the prevalence of undiagnosed HIV infection and the frequency of re-testing (which influences the proportion of specimens with acute HIV-1 infection).
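The incremental cost-effectiveness figures above follow from a simple ratio. The sketch below reproduces only that arithmetic; the per-run costs and detection counts are hypothetical values invented for illustration and are not taken from the cited models.

```python
# Incremental cost-effectiveness ratio (ICER): extra dollars spent per
# additional HIV-1 infection detected by the recommended algorithm.
# All inputs are hypothetical; the cited models used laboratory-survey
# median costs that are not reproduced here.

def icer(cost_new: float, cost_old: float,
         detected_new: int, detected_old: int) -> float:
    return (cost_new - cost_old) / (detected_new - detected_old)

# Hypothetical run of 100,000 specimens:
extra = icer(cost_new=1_250_000.0, cost_old=1_175_000.0,
             detected_new=1_010, detected_old=1_000)
print(f"${extra:,.0f} per additional infection detected")  # $7,500
```

Because the denominator shrinks as prevalence falls while much of the numerator is fixed, the ratio climbs steeply in low-prevalence populations, which is exactly the pattern the models report.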
6. Do benefits and harms for patients associated with the proposed diagnostic algorithm differ from benefits and harms associated with the previous diagnostic algorithm?
The recommended algorithm is associated with additional benefits and fewer harms for patients than the previous algorithm. By reducing the number of false-negative and indeterminate results and misclassified HIV-2 infections, the recommended algorithm is more accurate. The previous algorithm produces indeterminate results in 2% to 4.5% of persons with repeatedly reactive 3rd or 4th generation immunoassays, which require testing of additional follow-up specimens. 29,76,118 By reducing indeterminate test results, the recommended algorithm reduces delays in HIV diagnosis, anxiety for tested persons, and the inconvenience and cost of collecting additional specimens for more testing. 174,175 The recommended algorithm only rarely requires additional specimens; for example, when an HIV-1 NAT is required and the original specimen is unsuitable. This may be inconvenient or provoke anxiety in tested persons. However, one testing program that initiated a new shipping service to speed delivery of specimens found that the laboratory considered only 1.6% of specimens submitted unsuitable for testing. 167 The recommended algorithm can also reduce turnaround time for test results compared with the previous algorithm. One public health laboratory using the recommended algorithm was able to report 96% of antibody-positive test results in 2 workdays or less, compared with 22% when specimens were tested with the previous algorithm. 123 Another testing program that replaced the previous algorithm with the recommended algorithm was able to shorten the interval between specimen collection and routine notification of test results by 1 week. 167 Turnaround time is longer for specimens that require HIV-1 NAT testing as part of the algorithm, but no studies documented the time necessary to obtain the NAT results.
# E. Tables of Evidence
The following tables identify the specific studies and their limitations and strengths for sensitivity, specificity, and accuracy (key questions 1a, 1b, 2, 3a, 3b, 3c, and 3d). The tables of evidence denote the limitations and strengths that influenced the quality of evidence provided by each study using the following key:
# Limitations:
A. Possible selection bias: Selection of specimens was not consecutive or random. Specimen collections comprising previously tested specimens known to have a laboratory diagnosis of established HIV-1 infection, acute HIV-1 infection, or HIV-2 infection do not reflect the pre-test probability of these infections in the different populations in the United States that undergo HIV testing.
B. Incomplete information or lack of peer review: Evidence quality is reduced when the format of information in manufacturers' product inserts does not provide details needed for interpretation, such as the source of specimens or the identity of all tests used as reference standards, or when summaries of unpublished data are based on studies that were not subject to independent peer review (other than by FDA experts).
C. Indirect comparisons of test performance: Evidence quality is reduced when a study evaluates only one index test against a reference standard, and not directly against other index tests on the same specimens in the same study. Indirect comparisons can be prone to selection bias.
D. Comparison with different, composite, or unspecified reference standards: Evidence quality is reduced if different reference standards were employed (for example, in the absence of a published standard for HIV-2 infection), if the same standard was not used for all specimens (for example, HIV-1 NAT applied to specimens with negative or indeterminate but not positive HIV-1 Western blot results), or if the study did not define the reference standard used for all specimens.
E. Uncertainty about specimen integrity: Evidence quality is reduced if uncertainty exists about whether handling or storage of previously tested specimens was inconsistent with the requirements for reliable test results from subsequent testing by index or reference tests.
F. Small size of specimen collections: Testing of <100 specimens yields very wide confidence intervals around point estimates and might be more affected by selection bias.
# Strengths:
AA. Specimens prospectively collected for diagnostic testing: Studies that conducted testing on specimen sets obtained as part of routine diagnostic testing are more representative of the populations that will be tested with the recommended algorithm and less likely to be affected by selection bias.
BB. All studies evaluated, except manufacturers' package inserts, were subject to peer review. Therefore, peer review was not denoted as a strength for any study, and BB is not an entry in the Strengths column of the evidence tables.
CC. Direct comparisons of different tests on the same specimen sets: Direct comparisons improve the validity of performance comparisons of index tests.
DD. Comparison with the same reference standard: Evidence quality is improved if all specimens in the study were compared to the same reference standard, and results were compared to other studies that applied the same reference standard.
EE. Although specimen handling, when uncertain, is noted as a weakness, appropriate handling and storage of specimens consistent with obtaining reliable test results were not specifically denoted as strengths. Therefore, EE is not an entry in the Strengths column of the evidence tables.
FF. Large sample size: Testing of >1,000 specimens yields narrow confidence intervals around point estimates.
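As a purely hypothetical illustration of how these letter codes might be applied in practice (the actual tables of evidence are not reproduced here, and the study names and code assignments below are invented), studies can be annotated with their limitation and strength codes and then filtered on them.

```python
# Invented example of tagging studies with the limitation/strength codes
# defined above; no assignment here reflects a real study in the tables.

LIMITATIONS = {"A": "possible selection bias", "B": "incomplete info / no peer review",
               "C": "indirect comparisons", "D": "different or unspecified reference standard",
               "E": "uncertain specimen integrity", "F": "<100 specimens"}
STRENGTHS = {"AA": "prospective diagnostic specimens", "CC": "direct test comparisons",
             "DD": "same reference standard", "FF": ">1,000 specimens"}

studies = {
    "Study 1": {"limitations": {"A", "F"}, "strengths": {"CC"}},
    "Study 2": {"limitations": {"E"}, "strengths": {"AA", "DD", "FF"}},
}

# List studies whose small size (code F) warrants cautious interpretation:
small = [name for name, tags in studies.items() if "F" in tags["limitations"]]
print(small)  # ['Study 1']
```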
"id": "0e10cb29f06067e66ff5bfe93a94ad90e190cf37",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Foreword

We congratulate the Council to Improve Foodborne Outbreak Response (CIFOR) for issuing this second edition of the Guidelines to Improve Foodborne Outbreak Response. This new edition incorporates lessons learned over the past few years, along with new and improved techniques for surveillance, detection, investigation, and response to foodborne disease outbreaks. CIFOR was conceived in 2005 by a small group of forward-looking leaders at the Council of State and Territorial Epidemiologists, the Association of Public Health Laboratories, and the Centers for Disease Control and Prevention. They recognized the need to improve the way local, state, and federal government agencies coordinate their respective roles in the surveillance, detection, and investigation of and response to outbreaks of foodborne illness.

One of the first projects the newly formed CIFOR took on was to develop guidelines to help government agencies improve their foodborne disease outbreak response activities. The first edition of the Guidelines to Improve Foodborne Outbreak Response, published in 2009, and the Guidelines to Improve Foodborne Outbreak Response Toolkit, published in 2011, made a major contribution to improving government's response to foodborne disease outbreaks. The Guidelines and Toolkit are now referenced in many documents that address foodborne disease response. They have been used as criteria for measuring the effectiveness of programs, as key references in training courses, and as tools by government workgroups that have been meeting to identify and prioritize tasks for improving joint responses to foodborne disease outbreaks. A 2013 RAND Health survey of intended users found that 80% of respondents reported being familiar with the Guidelines and 65% with the Toolkit.

Since their issuance, the CIFOR Guidelines have influenced the way multiagency outbreak responses are conducted by epidemiologists, laboratorians, and environmental health/food regulatory agencies at the local, state, and federal levels. Agencies are communicating better, they better understand their respective roles and responsibilities, and they are responding more quickly and effectively because of the implementation of the CIFOR Guidelines. CIFOR also has established a close working relationship with the food industry, the sector that bears primary responsibility for the safety of the food supply. The standing CIFOR Industry Workgroup led to development of the recently published Foodborne Illness Response Guidelines for Owners, Operators and Managers of Food Establishments to facilitate the food industry's response to outbreaks of foodborne illness. We are confident that this second edition will lead to further improvement in response to foodborne disease outbreaks by the thousands of dedicated local, state, and federal employees who are working together toward that goal.
# Dedication

This second edition of the Guidelines is dedicated to the memory of Dr. William (Bill) E. Keene, who passed away unexpectedly on December 1, 2013, after a short illness. Bill was a charter member of CIFOR in 2006 and a driving force in the organization until his death. He played a monumental role in writing and editing the original as well as this new (second) edition of the CIFOR Guidelines.
Bill joined the Oregon Public Health Division in 1990 and worked there for 23 years as a foodborne disease epidemiologist. During that time he became known nationally and internationally as a leading expert on foodborne disease surveillance and outbreak investigation. Bill was well known for his passion for public health and dogged determination in solving foodborne outbreaks, often working around the clock to do so. He was a strong, vocal leader during investigations of multistate foodborne outbreaks and solved many, frequently documenting new outbreak vehicles or pathogen-vehicle associations.
Bill profoundly influenced virtually all recent national efforts to improve response to foodborne disease outbreaks in this country (such as CIFOR). His innovations were at the cutting edge of new surveillance and outbreak investigation methods. Bill's passion for foodborne outbreak investigations was reflected in his office's additional role as a national museum of foodborne illness outbreaks. Bill's office memorialized famous outbreaks from the last 2 decades with shelves containing the packages of the implicated food vehicles. His personal license plate was Oregon O157H7.
The following words or phrases have been used to describe Bill: energetic, zealous, dedicated, diligent, food safety hero, public health jewel, superior intelligence, brilliant, hard-working, dry wit, uncompromising candor, innovative, pioneer, inimitable, passionate, high standards, exemplified determination and stamina when investigating outbreaks, tireless, tremendously personable, freely shared his expertise, ever-available, warm, gregarious, and generous. We agree, but to many of his colleagues on CIFOR and throughout the public health community, Bill was most of all an admired, respected, and cherished colleague and friend. He will be missed terribly. Those of you who are familiar with Bill's work will recognize some of the outbreak investigation examples in these Guidelines as his investigations. In addition, the conversational tone, dry humor, and almost poetic nature of the writing in innumerable places throughout these Guidelines can unmistakably be recognized as Bill's work. It follows, then, that these Guidelines serve as one small way to memorialize Bill's incredible contributions to the field of foodborne disease epidemiology.
"id": "b8b58d31a89fd513daaf98b4300e350c9e85cddd",
"source": "cdc",
"title": "None",
"url": "None"
} |
# INTRODUCTION
Furthermore, product substitution should be a paramount consideration wherever ethylene dichloride is identified or its presence suspected, and it should be replaced wherever feasible with less harmful substitutes.
The recommended standard would apply to the processing, manufacture, and use of ethylene dichloride and products containing ethylene dichloride.
# I. BASIS FOR A REVISED ETHYLENE DICHLORIDE STANDARD
No epidemiologic studies that were designed to investigate the carcinogenicity of ethylene dichloride in humans have been found in the literature.
The seven epidemiologic studies referenced in the criteria document refer to physiological alterations and morbidity (1).
Studies at the National Cancer Institute have been completed on an animal bioassay of ethylene dichloride (EDC) given by gastric intubation. The results are summarized in Table I.
There is a statistically significant positive association between the dosage of ethylene dichloride and the incidence of squamous-cell carcinomas of the forestomach and hemangiosarcomas of the circulatory system in male rats. In female rats, there was a statistically significant increased incidence of adenocarcinomas of the mammary gland.
Ethylene dichloride was carcinogenic in mice also, causing mammary adenocarcinomas and endometrial tumors in female mice, as well as producing lung adenomas in mice of both sexes.
There was a dose-response relationship for total tumors in both mice and rats, as well as a dose-response relationship for most specific types of tumors.
(2) Comprehensive physical examination.
(5) Emergency telephone numbers shall be prominently posted.
# (b) Engineering Controls
Engineering controls shall be used to limit the inhalation of, and to minimize skin contact with, ethylene dichloride by controlling the amount of ethylene dichloride that is emitted into the air. The most effective control measure is enclosure of unit operations and processes.
(1) Access to the regulated area shall be limited to employees having assigned duties there.
(2) A daily entry roster shall be kept of all employees entering the regulated area and of their length of stay.
(3) Employees working in regulated areas shall wash their face, neck, hands, and forearms each time they leave the regulated area. Washing facilities shall be provided at each exit. Employees working in regulated areas shall wash their hands and forearms before using the toilet.
(4) Employees engaged in operations in which ethylene dichloride is transferred, charged, or discharged, or which involve using a laboratory-type hood, opening a closed system, or repackaging, shall be provided with gloves and aprons or coveralls or with full-body protective suits resistant to penetration by ethylene dichloride.
(5) As a backup precaution, employees using glove boxes to handle ethylene dichloride shall wash their hands and arms on completion of the assigned task.
(6) When employees use protective clothing and equipment, they shall remove it and leave it at the exit before they leave the regulated area; the employees shall then wash their hands, forearms, face, and neck to remove accumulated ethylene dichloride before they enter nonregulated areas.
# (d) Clean Work Clothing Room
A clean work clothing room shall be established and maintained that is free of ethylene dichloride contamination and that contains locker facilities.
(1) Shower facilities shall separate the clean work clothing room from the regulated area.
(2) The clean work clothing room shall be kept under positive pressure relative to the regulated area.
(3) ... and that prescribed procedures will be followed.
(1) All lines shall be disconnected or blocked while a vessel is being cleaned. All valves or pumps leading to and from the vessel shall be closed down.
(2) The vessel shall have all liquid ethylene dichloride removed and be purged completely with air.
(1) A program of personal monitoring shall be instituted to identify and measure, or permit calculation of, the exposure of each employee. Source and area monitoring may be used to supplement personal monitoring.
(2) In all personal monitoring, samples representative of exposure in the breathing zone of the employee shall be collected.
(3) For each ethylene dichloride determination, a sufficient number of samples shall be taken to characterize the employee's work and production schedules, location, or duties. Changes in production schedules shall be considered in deciding when samples are to be collected.
(4) Each operation in each regulated area shall be sampled at least once every 6 months while ethylene dichloride is produced or handled. For intermittent operations, ie, those lasting less than 6 months, at least one monitoring regimen shall be conducted during each operation period, and monitoring should coincide with the periods of maximum potential exposure to ethylene dichloride.
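Personal monitoring results like these are typically reduced to a time-weighted average (TWA) over the work shift. The sketch below shows the standard TWA arithmetic; the sample values and function name are hypothetical and are not drawn from this document.

```python
# Time-weighted average (TWA) exposure from consecutive personal samples.
# Sample concentrations (ppm) and durations (hours) are hypothetical.

def twa(samples: list[tuple[float, float]]) -> float:
    """samples: (concentration_ppm, duration_hours) pairs covering the shift."""
    total_exposure = sum(c * t for c, t in samples)
    total_time = sum(t for _, t in samples)
    return total_exposure / total_time

shift = [(3.0, 2.0), (8.0, 1.0), (1.0, 5.0)]  # 8-hour shift in three periods
print(f"8-hour TWA = {twa(shift):.2f} ppm")    # (6 + 8 + 5) / 8 = 2.38 ppm
```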
If an employee is found to be exposed to ethylene dichloride at concentrations exceeding the recommended occupational exposure limit, the exposure of that employee shall be measured at least once every
"id": "8323b3c1d2e54759e1fa5ae49ce614316b27604b",
"source": "cdc",
"title": "None",
"url": "None"
} |
Sexually transmitted diseases (STDs) constitute an epidemic of tremendous magnitude, with an estimated 15 million persons acquiring a new STD each year. Effective clinical management of STDs represents a strategic element in prevention of HIV infection and in efforts to improve reproductive and sexual health. Clinicians who evaluate persons with STDs or those at risk for STDs should be aware of the current national guidelines for STD treatment. The 2002 Centers for Disease Control and Prevention (CDC) guidelines for the treatment of STDs provide clinical guidance in the appropriate assessment and management of STDs. The scope and content of these guidelines continue to evolve, reflecting changes not only in clinical experience and epidemiology but also in the health care environment and the circumstances under which clinical services are delivered. The 2002 guidelines for the treatment of sexually transmitted diseases were developed in consultation with public- and private-sector professionals knowledgeable in the management of STDs, using an evidence-based approach. A systematic literature review was performed that focused on peer-reviewed journal articles and published abstracts that have become available since publication of the 1998 guidelines. On the basis of this review process, background papers were developed, and the available evidence was evaluated during a meeting of consultants in September 2000. A draft report was then circulated to professional associations, STD treatment experts, and other agencies, organizations, and individuals representing diverse perspectives on issues related to STD treatment. The present supplement describes advances in the diagnosis, management, and treatment of STDs and has implications for current clinical practice.
Chlamydia trachomatis infection is the most common bacterial STD in the United States, with an estimated 3 million cases occurring annually. Reported rates of chlamydial infections have increased dramatically over the past decade, which reflects expansion of chlamydial screening activities and the advent of a new generation of highly sensitive nucleic acid amplification tests. However, many women who are at risk for this infection are still not being screened appropriately, which reflects a lack of awareness among some providers and the limited resources available for screening. Efficacious therapeutic regimens for chlamydial treatment include azithromycin or doxycycline. In many settings, azithromycin, which can be administered as a single dose, permitting therapy to be directly observed, may be the more cost-effective treatment, especially for individuals who are unlikely to complete the 7-day doxycycline regimen. Because a high prevalence of chlamydia has been found in women in whom the disease has been diagnosed and treated during the preceding several months, the treatment guidelines suggest that women with chlamydia be rescreened 3-4 months after treatment.
Infections due to Neisseria gonorrhoeae, like those resulting from C. trachomatis, are a cause of cervicitis, urethritis, proctitis, and pelvic inflammatory disease. Several antibiotics, including cefixime, ceftriaxone, ciprofloxacin, and ofloxacin, are effective in single-dose regimens for the treatment of gonorrhea. Recently, quinolone-resistant gonorrhea has been reported from Southeast Asia, Hawaii, and California. Persons with gonorrhea who have recently traveled to Asia or the Pacific, Hawaii, or California or whose partner(s) recently traveled to these areas should receive a nonquinolone regimen. Because the prevalence of quinolone-resistant gonorrhea is expected to spread, local data addressing antimicrobial resistance are crucial for guiding therapy recommendations. Therefore, state and local public health officials must maintain the capacity to detect and monitor the prevalence of resistant strains, because prevalence can vary greatly by location. Culture and susceptibility testing should be performed in persons with apparent treatment failure.
Genital herpes simplex virus (HSV)-2 infection remains the most common infectious etiology of genital ulcers in the United States and is a remarkably common infection; HSV-2 seroprevalence is 22% among adults and has increased 32% over the past decade. Most genital herpes infections are transmitted by persons who are unaware that they have the infection or are asymptomatic when transmission occurs. Because of the high proportion of unrecognized infection, the diagnosis of genital herpes should be confirmed by sensitive diagnostic tests such as viral culture or HSV type-specific serological tests. Accurate type-specific assays for HSV rely on the detection of antibodies to HSV-specific glycoprotein G1 and G2. These new type-specific assays may be useful in the diagnosis of unrecognized infection and the evaluation of sexual partners of persons with genital herpes. The optimal management of genital herpes includes antiviral therapy, counseling regarding the natural history of infection, the risk of sexual and perinatal transmission, and the use of methods to prevent further transmission. Systemic antiviral drugs control the symptoms and signs of infection; however, these drugs neither eradicate latent virus nor affect the risk, frequency, or severity of recurrences after the drug is discontinued.
Syphilis continues to be one of the most important STDs, both because of its biological effect on HIV acquisition and transmission, increasing risk of HIV infection 3-5-fold, and because of its impact on infant health. Currently, syphilis remains an important problem in the South and in some urban areas of the country. In addition, the recent occurrence of syphilis outbreaks in numerous US cities among men who have sex with men is of particular concern, reflecting, in part, increased risk-taking behavior among that population. Long-acting preparations of penicillin remain the treatment of choice for all stages of syphilis. HIV-infected persons who have early syphilis should be managed according to standard treatment recommendations; however, they may be at increased risk for neurological complications and may have higher rates of treatment failure. Despite limited data to support the use of alternatives to penicillin in the treatment of early syphilis, several new alternative therapies appear to be promising for nonpregnant, penicillin-allergic patients who have primary or secondary syphilis. In the management of neurosyphilis in the penicillin-allergic person, ceftriaxone is an alternative treatment regimen. However, the use of alternative regimens to penicillin in the treatment of syphilis among those with HIV infection has not been well studied.
Most human papillomavirus infections are asymptomatic, unrecognized, or subclinical. The primary goal in the treatment of exophytic anogenital warts is the removal of warts, which may be pruritic, painful, and friable or may interfere with normal function. There are no clearly defined guidelines for the treatment of genital warts; thus, specific treatment recommendations should be guided by the experience of the clinician, availability of therapeutic agents, and patient preference. No single treatment has been found to be ideal for all patients, and most treatment modalities appear to have comparable efficacy. Currently available therapies for genital warts may reduce but probably do not eradicate either infection or infectivity. Whether the reduction in viral DNA that results from current treatment regimens affects future transmission remains unclear. Recognition of the etiologic role of specific HPV types in cervical cancer and the advent of type-specific HPV tests have stimulated a focus on the use of HPV diagnostic tests in cervical cancer prevention. HPV testing has been recently proposed as a management strategy to determine which women with low-grade cervical cytological abnormalities require colposcopic evaluation. Studies to clarify the role of HPV testing in the evaluation of low-grade cervical abnormalities have indicated that HPV testing can be useful and cost effective in the management of Papanicolaou tests that reveal atypical squamous cells of undetermined significance.
Bacterial vaginosis (BV), a sexually associated infection, has been associated with adverse pregnancy outcomes, including chorioamnionitis, premature rupture of membranes, prematurity, and postpartum endometritis. Oral and vaginal metronidazole regimens are similarly efficacious in the treatment of BV and appear to be more effective than clindamycin cream. Several studies have suggested that the treatment of BV in pregnant women with a previous history of preterm birth may reduce their subsequent risk for prematurity. However, no randomized trial has yet demonstrated a reduction in adverse outcomes of pregnancy among asymptomatic women without a history of previous preterm birth; thus, current evidence does not support universal screening for BV in pregnancy.
Ectoparasites are a common cause of skin rash and pruritus throughout the world. Ivermectin represents a new oral therapy for scabies and may hold particular promise in the treatment of severe infestation. Combination therapy with ivermectin and topical scabicides may prove to be the best treatment for crusted scabies. Rising rates of drug resistance in head lice may affect the efficacy of topical agents for pediculosis pubis in the future.
The effective clinical management of STDs represents a strategic common element in prevention of HIV infection and in efforts to improve reproductive and sexual health. Recommendations for STD treatment will continue to evolve in response to clinical research, emerging antimicrobial resistance, and evolving sexual and health-care behaviors. The use of new, more effective treatment regimens, highly sensitive tests for screening for asymptomatic infection, improvements in counseling of patients and their sexual partners, and new vaccines for sexually transmitted pathogens are crucial to improving sexual and reproductive health.
"id": "a189c5c399debdeca578676d7f8f2c94511e0543",
"source": "cdc",
"title": "None",
"url": "None"
} |
The National Institute for Occupational Safety and Health (NIOSH) recommends that employee exposure to acrylonitrile in the workplace be controlled so that no worker will be exposed to acrylonitrile in excess of 4 ppm (8.7 mg/cu m) in air as determined by a 4-hour sample collected at 0.2 liter/minute. Because it is not possible at present to establish a safe exposure level for a carcinogen, the NIOSH recommendation is to restrict exposure to very low levels that can still be reliably measured in the workplace. The recommended exposure limit of 4 ppm is the lowest level at which a reliable estimate of occupational exposure to acrylonitrile can be determined at this time because of a limitation in the air measurement technique. Short-term studies are presently underway within the Institute to resolve those problems causing the limitation. If the studies bear positive results, the Institute will forward a recommendation to reduce the permissible exposure limit to some value less than 4 ppm. The present Federal Standard is 20 ppm, determined as a time-weighted average (TWA) concentration for up to an 8-hour work shift in a 40-hour workweek. In addition, the standard contains recommendations for medical surveillance, informing employees of hazards, sanitation, work practices, labeling and posting, personal protective clothing and equipment, monitoring, and recordkeeping.
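The 4 ppm / 8.7 mg/cu m equivalence follows from the standard gas-concentration conversion at 25 C and 1 atm, where one mole of gas occupies about 24.45 liters. The sketch below is an illustrative calculation, not part of the standard; the molecular weight of acrylonitrile (about 53.06 g/mol) is the only chemical datum assumed.

```python
# Convert a gas concentration from ppm to mg/m^3 at 25 C and 1 atm.

MOLAR_VOLUME_L = 24.45      # liters per mole of gas at 25 C, 1 atm
ACRYLONITRILE_MW = 53.06    # g/mol for CH2=CH-CN (assumed reference value)

def ppm_to_mg_per_m3(ppm: float, molecular_weight: float) -> float:
    return ppm * molecular_weight / MOLAR_VOLUME_L

print(f"{ppm_to_mg_per_m3(4, ACRYLONITRILE_MW):.1f} mg/m^3")
# ~8.7 mg/m^3, matching the recommended exposure limit stated above
```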
The recommended standard is designed to protect the health and provide for the safety of employees for up to a 10-hour work shift, 40-hour workweek, over a working lifetime. Compliance with all sections of the standard should, at the minimum, substantially reduce the risk of acrylonitrile-induced cancer and prevent other adverse effects of exposure to acrylonitrile in the workplace. The employer should regard the recommended workplace environmental limit as the upper boundary for exposure and make every effort to keep the exposure as low as possible.
The acute toxic effects of acrylonitrile are similar to those from cyanide poisoning. Toxic effects of acrylonitrile inhalation which have been noted in animals include damage to the central nervous system, lungs, liver, and kidneys. Embryotoxic effects in mice have also been reported. However, the more recent evidence of acrylonitrile's chronic toxicity has caused a reassessment of the hazard to workers of acrylonitrile. In April 1977, the Manufacturing Chemists Association reported interim results of two-year feeding and inhalation studies (conducted by the Dow Chemical Company) of acrylonitrile in laboratory rats. By both routes of administration, acrylonitrile caused the development of central nervous system tumors and Zymbal gland carcinomas; no such tumors were seen in control animals. Exposure to 80 ppm of acrylonitrile also revealed an increased incidence of mammary region masses.

A preliminary epidemiologic study conducted by the E.I. DuPont de Nemours & Company, Inc. of a cohort of 470 acrylonitrile polymerization workers from the company's Camden, South Carolina, textile fibers plant indicates an excess risk of lung and colon cancer among workers with potential acrylonitrile exposure. A total of 16 cancer cases occurred between 1969 and 1975 among the cohort first exposed between 1950 and 1955; only 5.8 cancer cases would have been expected, based on DuPont Company rates (excluding the cohort). Although the epidemiologic findings are preliminary in nature and may not alone provide definitive evidence of the carcinogenicity of acrylonitrile in man, when considered in light of the laboratory experiments demonstrating carcinogenicity in rats, a serious suspicion is raised that acrylonitrile is a human carcinogen. Thus, NIOSH believes that acrylonitrile must be handled in the workplace as a suspect human carcinogen.
Acrylonitrile is an explosive, flammable liquid having a normal boiling point of 77 C and a vapor pressure of 80 mm Hg (20 C). Synonyms for acrylonitrile include acrylon, carbacryl, cyanoethylene, fumigrain, 2-propenenitrile, VCN, ventox, and vinyl cyanide.

Approximately 1.5 billion pounds of acrylonitrile are manufactured each year in the United States by the reaction of propylene with ammonia and oxygen in the presence of a catalyst. The major use of acrylonitrile is in the production of acrylic and modacrylic fibers by copolymerization with methyl acrylate, methyl methacrylate, vinyl acetate, vinyl chloride, or vinylidene chloride. Acrylic fibers are used in the manufacture of apparel, carpeting, blankets, draperies, and upholstery. Some applications of modacrylic fibers are synthetic furs and hair wigs.

Other major uses of acrylonitrile include the manufacture of acrylonitrile-butadiene-styrene (ABS) and styrene-acrylonitrile (SAN) resins (used to produce a variety of plastic products), nitrile elastomers and latexes, and other chemicals, such as adiponitrile and acrylamide. Acrylonitrile is also used as a fumigant. NIOSH estimates that approximately 125,000 persons are potentially exposed to acrylonitrile in the workplace.

The recommended standard is part of a continuing series of recommendations developed by NIOSH in accordance with the Occupational Safety and Health Act of 1970. The recommended standard is being transmitted to the Department of Labor September 29, 1977, for review and consideration in the standard setting process. If research by NIOSH results in the development of improved methods for sampling and analysis of acrylonitrile in air from the occupational environment, information regarding the new methods will be forwarded to the Department of Labor.
# I. RECOMMENDATIONS FOR AN ACRYLONITRILE STANDARD
The National Institute for Occupational Safety and Health (NIOSH) recommends that employee exposure to acrylonitrile in the workplace be controlled by adherence to the following sections. The standard is designed to protect the health and provide for the safety of employees for up to a 10-hour work shift, 40-hour workweek, over a working lifetime. Compliance with all sections of the standard should, at the minimum, substantially reduce the risk of acrylonitrile-induced cancer and prevent other adverse effects of exposure to acrylonitrile in the workplace. The employer should regard the recommended workplace environmental limit as the upper boundary for exposure and make every effort to keep the exposure as low as possible. The criteria and standard will be subject to review and revision as necessary.
"Occupational exposure to acrylonitrile" refers to any workplace situation in which acrylonitrile is manufactured, polymerized, used, handled, or stored. All sections of the standard shall apply where there is occupational exposure to acrylonitrile.
# Section 1 - Environmental (Workplace Air)
(a) Concentration. Acrylonitrile shall be controlled in the workplace so that the concentration of airborne acrylonitrile, sampled and analyzed according to the procedures in Appendix I, is not greater than 4 ppm (approximately 8.7 mg/cu m) of breathing zone air.
(b) Sampling and Analytical Methods. The environmental limit represents the lowest reliably detectable concentration of acrylonitrile measurable by the recommended sampling and analytical methods. Procedures for the collection and analysis of acrylonitrile in air shall be as provided in Appendix I or by any methods shown to be equivalent in accuracy, precision, and sensitivity to the methods specified.
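The stated equivalence of 4 ppm and approximately 8.7 mg/cu m follows from the usual ideal-gas conversion; a minimal check, assuming 25 C, 760 mm Hg (molar volume 24.45 liters/mole), and a molecular weight of 53.06 for acrylonitrile:

```python
# Unit-conversion check for the environmental limit (illustration only).
MW = 53.06            # g/mol, acrylonitrile
MOLAR_VOLUME = 24.45  # L/mol for an ideal gas at 25 C and 760 mm Hg

def ppm_to_mg_per_cu_m(ppm: float) -> float:
    """Convert a vapor concentration in ppm to mg per cubic meter."""
    return ppm * MW / MOLAR_VOLUME

print(ppm_to_mg_per_cu_m(4.0))  # ~8.7, matching the limit stated above
```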
# Section 2 - Medical
Medical surveillance shall be made available as specified below for all workers occupationally exposed to acrylonitrile.
(a) Preplacement, initial, and annual medical examinations shall include:
(1) An initial or interim medical and work history with special attention to skin, respiratory and gastrointestinal systems, and those nonspecific symptoms, such as headache, nausea, vomiting, dizziness, weakness, or other central nervous system dysfunctions that may be associated with chronic exposure.
(2) A physical examination giving particular attention to the skin, thyroid, respiratory system, and central nervous system.
(3) A 14" x 17" posteroanterior chest X-ray.
(4) Further tests of the intestinal tract, such as proctosigmoidoscopy, on all workers over the age of 40 and all other workers who, in the opinion of the responsible physician, show appropriate indications.
(5) A judgment of the worker's ability to use positive pressure respirators.
(b) Initial medical examinations shall be made available to presently employed workers as soon as possible after the promulgation of a standard based on these recommendations.
(c) The employer shall ensure that employees trained in first-aid measures are on duty whenever there is occupational exposure to acrylonitrile.
(d) Two physician's-treatment kits shall be immediately available to trained medical personnel at each plant where there is a potential for the release of, or for contact with, acrylonitrile. These kits should contain, as a minimum:
1. Two (2) boxes (2 dozen) ampules; each ampule containing 0.3 ml of amyl nitrite. Ampules shall be replaced biannually or sooner if needed to ensure their potency.
2. Two (2) ampules of sterile sodium nitrite solution (10 ml of a 3% solution in each).
3. Two (2) ampules of sterile sodium thiosulfate solution (50 ml of a 25% solution in each).
4. 2 sterile 10-ml syringes with intravenous needles.
5. 1 sterile 50-ml syringe with intravenous needle.
6. 1 tourniquet.
7. 1 gastric tube (rubber).
8. 1 non-sterile 100-ml syringe.
One kit should be portable in order that it may be carried by medical personnel while accompanying a patient to the hospital. The other kit should be kept under lock and key to assure that it is intact and available when and if needed. The key should be readily available at all times to the work supervisor on duty, and the storage place should be of such construction as to allow access in the event of loss of the key.
(e) First-aid kits shall be immediately available at workplaces where there is a potential for the release, accidental or otherwise, of acrylonitrile. This kit shall contain as a minimum two (2) boxes of ampules (2 dozen), each containing 0.3 ml of amyl nitrite. Ampules shall be replaced biannually or sooner if needed to ensure their potency. The amyl nitrite ampules should be protected from high temperatures. In all cases, the contents of the physician's-treatment and first-aid kits shall be replaced before the manufacturer's assigned expiration dates.
(f) Appropriate medical services shall be made available to any employee with adverse health effects from acrylonitrile in the workplace.
(g) Medical records shall be maintained for all workers occupationally exposed to acrylonitrile. Pertinent medical records shall be maintained for 30 years following the last occupational exposure to acrylonitrile. These records shall be made available to the designated medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employer, and of the employee or former employee.
# Section 3 - Labeling and Posting
A label shall be placed on each shipping and storage container of acrylonitrile, and all areas where there is occupational exposure to acrylonitrile shall be posted.
All warning signs shall be printed both in English and in the predominant language of non-English-reading workers. Illiterate workers and workers reading languages other than those used on labels and posted signs shall receive verbally disseminated information regarding hazardous areas and shall be informed of the instructions printed on labels and signs.
(a) Labeling
If respiratory protection is required in accordance with Section 4, the following statement in large letters shall be added to the required sign:
# Section 4 - Personal Protective Clothing and Equipment
(a) Protective Clothing
(1) Employers shall provide and ensure that employees use gloves, face shields (eight-inch minimum), and other appropriate protective clothing or equipment necessary to prevent skin contact with liquid acrylonitrile. Face shields shall comply with 29 CFR 1910.133 (a)(2), (a)(4), (a)(5), and (a)(6).
(2) Where exposure of an employee's body to liquid acrylonitrile may occur, employers shall provide facilities for quick drenching of the body within the immediate work area for emergency use.
(3) Employers shall ensure that any pervious clothing which becomes wet with, or impervious clothing which becomes grossly contaminated with, acrylonitrile is removed immediately and not reworn until the acrylonitrile has been removed from the clothing.
(4) Employers shall ensure that clothing wet with liquid acrylonitrile is placed in closed containers for storage until it can be discarded or until acrylonitrile is removed from the clothing. If the clothing is to be laundered or otherwise cleaned to remove the acrylonitrile, the employer shall inform the person(s) performing the operation of the hazardous properties of acrylonitrile. Work clothing shall not be taken home by employees.
(5) Eye protection shall be provided by the employer and used by the employees where eye contact with liquid acrylonitrile is likely. Selection, use, and maintenance of eye protective equipment shall be in accordance with the provisions of the American National Standard Practice for Occupational and Educational Eye and Face Protection, ANSI Z87.1-1968. Unless eye protection is afforded by a respirator hood or facepiece, protective goggles or a face shield shall be worn at operations where there is danger of contact of the eyes with liquid acrylonitrile because of spills or splashes. If there is danger of liquid acrylonitrile striking the eyes from underneath or around the sides of the face shield, safety goggles shall be worn as added protection.
(6) The employer shall ensure that all personal protective devices are inspected regularly and maintained in clean and satisfactory working condition.
(b) Respiratory Protection
(1) Engineering controls shall be used when feasible to keep concentrations of airborne acrylonitrile at or below the recommended environmental limit. Respiratory protective equipment may only be used in the following circumstances:
(A) During the time necessary to install or test the required engineering controls.
(B) For operations such as nonroutine maintenance and repair activities in which brief exposure at concentrations in excess of the recommended environmental limit may occur.
(C) During emergencies when concentrations of airborne acrylonitrile might exceed the recommended environmental limit.
(2) When a respirator is permitted by paragraph (b)(1) of this section, it shall be selected and used in accordance with the following requirements:
(A) The employer shall establish and enforce a respiratory protective program. The requirements for such a program are listed in 29 CFR 1910.134.
(B) The employer shall provide respirators in accordance with Table I-1 and shall ensure that employees use the respirators in a proper manner when the concentration of airborne acrylonitrile exceeds the recommended environmental limit. The respirators shall be those approved by NIOSH or the Mining Enforcement and Safety Administration; the standard for approval is specified in 30 CFR 11. The employer shall ensure that respirators are properly cleaned, maintained, and stored when not in use.
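Paragraph (b)(1) reduces to a short decision rule; the sketch below is one possible encoding (function and parameter names are ours, not NIOSH's).

```python
# Illustrative encoding of Section 4(b): engineering controls come first,
# and respirators are permitted only in three listed circumstances.
def respirator_use_permitted(installing_or_testing_controls: bool,
                             brief_nonroutine_work: bool,
                             emergency: bool) -> bool:
    """True only in the circumstances of paragraphs (b)(1)(A)-(C)."""
    return installing_or_testing_controls or brief_nonroutine_work or emergency

def respirator_must_be_worn(airborne_ppm: float, limit_ppm: float = 4.0) -> bool:
    """Per (b)(2)(B): approved respirators are required above the limit."""
    return airborne_ppm > limit_ppm
```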
# Section 5 - Informing Employees of Hazards from Acrylonitrile
(a) The employer shall ensure that each employee assigned to work in an area where there is occupational exposure to acrylonitrile is informed of the hazards and relevant symptoms of exposure to acrylonitrile, and of proper conditions and precautions for the handling and use of acrylonitrile. Workers shall be advised that skin exposure to liquid acrylonitrile may cause blister formation and burns, and that exposure to airborne acrylonitrile may have an immediate effect of causing unconsciousness or respiratory failure and, on a long-term basis, may increase the risk of developing cancer, particularly of the lungs and large intestine. Information shall be given to employees at the beginning of employment and at least twice a year thereafter.
(b) The employer shall institute a continuing education program, conducted by instructors qualified by experience or training, to ensure that all employees have current knowledge of job hazards, proper maintenance and cleanup methods, and proper respirator use. The instructional program shall include a description of the environmental and medical surveillance procedures and of the advantages to the employee of participating in these procedures. Instruction shall include the information specified in the "Material Safety Data Sheet," which shall be kept on file and readily accessible to employees at all places of employment where there is occupational exposure to acrylonitrile. Workers engaged in maintenance and repair shall be included in these training programs.
(c) Required information shall be recorded on the "Material Safety Data Sheet" or on a similar form approved by the Occupational Safety and Health Administration, U.S. Department of Labor.
# Section 6 - Work Practices
(a) Control of Airborne Acrylonitrile
Engineering controls, such as process enclosure or local exhaust ventilation, shall be used to keep concentrations of airborne acrylonitrile at or below the recommended environmental limit. If such ventilation is used, ventilation systems shall be so designed and operated as to prevent accumulation or recirculation of airborne acrylonitrile in the workplace environment and to effectively remove acrylonitrile from the breathing zone of employees. Exhaust ventilation systems discharging to outside air must conform to applicable local, state, and federal regulations and must not constitute hazards to employees or to the general population. Before maintenance work on control equipment begins, sources of airborne acrylonitrile shall be eliminated to the extent feasible. Enclosures, exhaust hoods, and ductwork shall be kept in good repair so that design airflows are maintained. Airflow at each hood shall be measured at least semiannually and preferably monthly. Continuous airflow indicators are recommended, such as water or oil manometers properly mounted at the juncture of fume hood and duct throat (marked to indicate acceptable airflow). A log shall be kept showing design airflow and results of semiannual or monthly inspections.

[Excerpt from Table I-1, respirator selection. Condition: greater than 2 mg/cu m, or emergency (entry into area of unknown concentration for emergency purposes). Permissible respirators: (1) self-contained breathing apparatus with full facepiece operated in pressure-demand or other positive pressure mode; (2) combination Type C supplied-air respirator with full facepiece operated in pressure-demand mode and auxiliary self-contained air supply.]
Whenever feasible, operations involving acrylonitrile should be placed in an isolated area, in combination with other engineering controls, to reduce exposure of employees not directly concerned with the acrylonitrile operations.
(b) Regulated Areas
Regulated areas shall be established and maintained where there is occupational exposure to acrylonitrile, and access to these areas shall be limited to authorized persons who have been properly informed of the potential hazards of acrylonitrile and proper control measures. A daily roster shall be made of all persons who enter regulated areas.
(5) Where a fan is located in ductwork and where acrylonitrile is present in the ductwork in concentrations greater than 7,500 ppm (approximately 25 percent of the lower flammable limit), the rotating element of the fan shall be of nonsparking material, or the casing shall consist of, or be lined with, nonsparking material. There shall be sufficient clearance between the rotating element of the fan and the fan casing to prevent contact between the rotating element and the casing.
(6) Sources of ignition, such as smoking or open flames, are prohibited where acrylonitrile presents a fire or explosion hazard.
(f) Disposal of Waste
Waste material shall be disposed of in a manner that is not hazardous to employees or to the general population. Spills of acrylonitrile and flushing of such spills shall be channeled for appropriate treatment or collection for disposal. They shall not be channeled directly into the sanitary sewer system. Acrylonitrile wastes shall be appropriately marked, and any operations generating airborne acrylonitrile shall be enclosed. In selecting the method of waste disposal, applicable local, state, and federal regulations should be consulted.
(g) Storage
Containers of acrylonitrile shall be kept tightly closed when not in use. Containers shall be stored in a safe manner to minimize the possibility of accidental breakage or spills.
(h) General Work Practices
(1) Good housekeeping practices shall be observed to prevent contamination of areas and equipment with liquid acrylonitrile and to prevent buildup of such contamination.
(2) Good personal hygiene practices should be encouraged. Employees shall be required to wash all exposed areas of the body upon exiting from regulated areas. Employees occupationally exposed to acrylonitrile shall be required to shower at the end of the workshift.
(i) Work Clothing/Protective Clothing
(1) Coveralls or similar full-body protective clothing and head, leg, and shoe coverings shall be worn by each employee entering a regulated area. Upon exiting from a regulated area, the protective clothing shall be left at the point of exit. With the last exit of the day, the protective clothing shall be placed in a suitably marked and closed container for disposal or laundering.
(2) Such clothing shall be changed as soon as possible if accidentally contaminated with liquid acrylonitrile.
(3) The employer shall provide for the laundering of this clothing and shall ensure that soiled work clothing is not taken home by the employee. Precautions shall be taken to protect personnel who handle and launder soiled clothing. These workers shall be advised of the hazards of, and means of preventing exposure to, acrylonitrile.
# Section 7 - Sanitation
(a) Emergency showers and eye-flushing fountains with adequate pressure of cool water shall be provided and be quickly accessible in areas where there is potential of skin or eye contact with acrylonitrile. This equipment shall be frequently inspected and maintained in good working condition.
(b) Locker-room facilities, including showers and washbasins, located in nonexposure areas, shall be provided for employees required to change clothes before and after each work shift. The facilities shall provide for storage of street clothing and clean work clothing separately from soiled work clothing. Covered containers shall be provided for work clothing removed at the end of the work shift or after a contamination incident. The clothing shall be held in these containers until it is removed for decontamination or disposal.
(c) Food preparation, dispensing (including vending machines), and eating shall be prohibited in areas where there is occupational exposure to acrylonitrile. Eating facilities provided for employees shall be located in nonexposure areas. Washing facilities should be accessible nearby.
(d) Smoking and carrying smoking or chewing materials shall be prohibited in work areas where there is occupational exposure to acrylonitrile.
# Section 8 - Monitoring and Recordkeeping Requirements
(a) Monitoring
(1) As soon as possible after promulgation of a standard based on these recommendations, each employer who has a place of employment in which acrylonitrile is manufactured, polymerized, handled, stored, or otherwise used shall determine by an industrial hygiene survey the extent of exposure to acrylonitrile. Surveys shall be repeated at least once every year and within 30 days of any process change likely to result in occupational exposure to acrylonitrile. Records of these surveys, including the basis for any conclusion that there is no occupational exposure to acrylonitrile, shall be retained until the next survey has been completed.
(2) If there is occupational exposure to acrylonitrile, a program of personal monitoring shall be instituted to measure or permit calculation of the exposure of all employees.
(A) In all personal monitoring, samples representative of the breathing zones of the employees shall be collected.
(B) For each environmental determination, a sufficient number of samples shall be taken to characterize the employees' exposures during each work shift. Variations in work and production schedules and in employees' locations and job functions shall be considered in choosing sampling times, locations, and frequencies.
(C) Each operation in each work area shall be sampled at least once every 3 months.
(3) If an employee is found to be exposed to acrylonitrile in excess of the recommended environmental limit, the exposure of that employee shall be measured at least once a week, control measures shall be initiated, and the employee shall be notified of the extent of the exposure and of the control measures being implemented. Such monitoring shall continue until two consecutive determinations, 1 week apart, indicate that the employee's exposure no longer exceeds the recommended environmental limit. Routine monitoring may then be resumed.
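The escalation rule in paragraph (a)(3) is effectively a small state machine: weekly measurement continues until two consecutive weekly determinations fall at or below the limit. A minimal sketch, with names of our own choosing:

```python
# Illustrative sketch of the (a)(3) monitoring escalation rule.
def next_monitoring_step(weekly_results_ppm, limit_ppm=4.0):
    """weekly_results_ppm: determinations for an over-exposed employee,
    oldest first. Weekly monitoring continues until the two most recent
    consecutive determinations are at or below the limit."""
    if (len(weekly_results_ppm) >= 2
            and all(c <= limit_ppm for c in weekly_results_ppm[-2:])):
        return "resume routine monitoring"
    return "continue weekly monitoring and control measures"

print(next_monitoring_step([6.2, 3.9]))       # one clean week: stay weekly
print(next_monitoring_step([6.2, 3.9, 3.1]))  # two clean weeks: release
```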
(b) Recordkeeping
Environmental monitoring records shall be maintained for at least 30 years after the employee's last occupational exposure to acrylonitrile. These records shall include the dates and times of measurements, job function and location of employees within the worksite, methods of sampling and analysis used, types of respiratory protection in use at the time of sampling, environmental concentrations found, and identification of exposed employees. Each employee shall be able to obtain information on that employee's own environmental exposures. Daily rosters of authorized persons who enter regulated areas shall be retained for 30 years. Environmental monitoring records and entry rosters shall be made available to designated representatives of the Secretary of Labor and of the Secretary of Health, Education, and Welfare.
Pertinent medical records for each employee shall be retained for 30 years after the employee's last occupational exposure to acrylonitrile. Records of environmental exposures applicable to an employee should be included in that employee's medical records. These medical records shall be made available to the designated medical representatives of the Secretary of Labor, of the Secretary of Health, Education, and Welfare, of the employer, and of the employee or former employee.
(c) Employee Observation of Measurement
(1) The employer shall give affected employees or their representatives an opportunity to observe any measurement made pursuant to the standard of employee exposure to acrylonitrile.
(2) When observation of measurement of employee exposure to acrylonitrile requires entry into an area where the use of personal protective devices, including respirators, is required, the observer shall be provided with and required to use such equipment and comply with all other applicable safety procedures.
(3) Without interfering with the measurement, observers shall be entitled to:
(i) Receive an explanation of the measurement procedure;
(ii) Observe all steps related to the measurement of the concentration of airborne acrylonitrile that are being performed at the place of exposure; and
(iii) Record the results obtained.
# APPENDIX I. PREFACE TO NIOSH METHOD NO. S156 (ACRYLONITRILE)
NIOSH Method No. S156 (copy attached), with slight modifications, is recommended for sampling and analysis of acrylonitrile in air. Method S156 utilizes a charcoal tube for collection of the sample and analysis by gas chromatography.
Data from the Standards Completion Program (SCP) indicate that at loadings lower than about 400 micrograms on the charcoal tube, the desorption efficiency may not be adequate; thus, the analysis would be questionable.
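This loading floor, combined with the 48-liter breakthrough ceiling described in the next paragraph, is what fixes 4 ppm as the limit of reliable measurement: a full 4-hour sample at 4 ppm deposits only slightly more than 400 micrograms. A back-of-envelope check (our arithmetic, using the ~8.7 mg/cu m equivalent of 4 ppm):

```python
# Why 4 ppm is the floor: mass collected by the recommended sample.
flow_l_per_min = 0.2
duration_min = 4 * 60
volume_cu_m = flow_l_per_min * duration_min / 1000.0  # 48 L = 0.048 cu m

conc_mg_per_cu_m = 8.7                                # ~4 ppm acrylonitrile
mass_ug = conc_mg_per_cu_m * volume_cu_m * 1000.0     # micrograms on tube

print(f"{mass_ug:.0f} micrograms")  # ~418 ug, just above the ~400 ug floor
```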
In addition, since the breakthrough studies of the SCP were not carried out beyond 48 liters [0.2 liter/minute for 4 hours at 92.0 mg acrylonitrile/cu m (42.4 ppm)] and since the effects of humidity on breakthrough are unknown, it cannot presently be assumed that sampling for 8 hours can be accomplished without breakthrough.

Therefore, based upon the best available data, the Institute has determined that 4 ppm of acrylonitrile is the lowest concentration measurable at the present time by a reliable sampling/analytical method. To determine the workplace air concentration of acrylonitrile, collect a 4-hour sample at a flow rate of 0.2 liter/minute.

1.2 The charcoal in the tube is transferred to a small, stoppered sample container, and the analyte is desorbed with methanol.
1.3 An aliquot of the desorbed sample is injected into a gas chromatograph.
1.4 The area of the resulting peak is determined and compared with areas obtained for standards.
2. Range and Sensitivity
2.1 This method was validated over the range of 17.5-70.0 mg/cu m at an atmospheric temperature and pressure of 22°C and 760 mm Hg, using a 20-liter sample. Under the conditions of sample size (20 liters), the probable useful range of this method is 4.5-135 mg/cu m. The method is capable of measuring much smaller amounts if the desorption efficiency is adequate. Desorption efficiency must be determined over the range used.
2.2 The upper limit of the range of the method is dependent on the adsorptive capacity of the charcoal tube. This capacity varies with the concentrations of acrylonitrile and other substances in the air. The first section of the charcoal tube was found to hold at least 3.97 mg of acrylonitrile when a test atmosphere containing 92.0 mg/cu m of acrylonitrile in air was sampled at 0.18 liter per minute for 240 minutes; at that time the concentration of acrylonitrile in the effluent was less than 5% of that in the influent. (The charcoal tube consists of two sections of activated charcoal separated by a section of urethane foam. See Section 6.2.) If a particular atmosphere is suspected of containing a large amount of contaminant, a smaller sampling volume should be taken.
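As a consistency check (our arithmetic, not part of the method), the mass delivered during that breakthrough test matches the stated capacity:

```python
# Mass of acrylonitrile delivered to the tube in the breakthrough test.
volume_cu_m = 0.18 * 240 / 1000.0   # 0.18 L/min for 240 min = 43.2 L
mass_mg = 92.0 * volume_cu_m        # test concentration x sampled volume
print(f"{mass_mg:.2f} mg")          # ~3.97 mg, the capacity quoted above
```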
# Interferences
3.1 When the amount of water in the air is so great that condensation actually occurs in the tube, organic vapors will not be trapped efficiently. Preliminary experiments using toluene indicate that high humidity severely decreases the breakthrough volume.
3.2 When interfering compounds are known or suspected to be present in the air, such information, including their suspected identities, should be transmitted with the sample.
3.3 It must be emphasized that any compound which has the same retention time as the analyte at the operating conditions described in this method is an interference. Retention time data on a single column cannot be considered proof of chemical identity.
3.4 If the possibility of interference exists, separation conditions (column packing, temperature, etc.) must be changed to circumvent the problem.
4. Precision and Accuracy
4.1 The Coefficient of Variation (CVT) for the total analytical and sampling method in the range of 17.5-70.0 mg/cu m was 0.073. This value corresponds to a 3.3 mg/cu m standard deviation at the OSHA standard level. Statistical information and details of the validation and experimental test procedures can be found in Reference 11.2.
4.2 On the average, the concentrations obtained at the OSHA standard level using the overall sampling and analytical method were 6.0% lower than the "true" concentrations for a limited number of laboratory experiments. Any difference between the "found" and "true" concentrations may not represent a bias in the sampling and analytical method, but rather a random variation from the experimentally determined "true" concentration. Therefore, no recovery correction should be applied to the final result in Section 10.5.
5. Advantages and Disadvantages of the Method
5.1 The sampling device is small, portable, and involves no liquids. Interferences are minimal, and most of those which do occur can be eliminated by altering chromatographic conditions. The tubes are analyzed by means of a quick, instrumental method. The method can also be used for the simultaneous analysis of two or more substances suspected to be present in the same sample by simply changing gas chromatographic conditions.
5.2 One disadvantage of the method is that the amount of sample which can be taken is limited by the number of milligrams that the tube will hold before overloading. When the sample value obtained for the backup section of the charcoal tube exceeds 25% of that found on the front section, the possibility of sample loss exists.
5.3 Furthermore, the precision of the method is limited by the reproducibility of the pressure drop across the tubes. This drop will affect the flow rate and cause the volume to be imprecise, because the pump is usually calibrated for one tube only.
6. Apparatus
6.1 A calibrated personal sampling pump whose flow can be determined within ±5% at the recommended flow rate (Reference 11.3).
6.2 Charcoal tubes: glass tube with both ends flame sealed, 7 cm long with a 6-mm O.D. and a 4-mm I.D., containing 2 sections of 20/40 mesh activated charcoal separated by a 2-mm portion of urethane foam. The activated charcoal is prepared from coconut shells and is fired at 600°C prior to packing. The adsorbing section contains 100 mg of charcoal, the backup section 50 mg. A 3-mm portion of urethane foam is placed between the outlet end of the tube and the backup section. A plug of silylated glass wool is placed in front of the adsorbing section. The pressure drop across the tube must be less than one inch of mercury at a flow rate of 1 liter per minute.
6.3 Gas chromatograph equipped with a flame ionization detector.
6.4 Column (4-ft x 1/4-in stainless steel) packed with 50/80 mesh Porapak, Type Q.
6.5 An electronic integrator or some other suitable method for measuring peak areas.
6.6 Two-milliliter sample containers with glass stoppers or Teflon-lined caps. If an automatic sample injector is used, the associated vials may be used.
6.7 Microliter syringes: 10-microliter, and other convenient sizes for making standards.
6.8 Pipets: 1.0-ml delivery pipets.
6.9 Volumetric flasks: 10-ml or convenient sizes for making standard solutions.
7. Reagents
7.1 Chromatographic quality methanol.
7.2 Acrylonitrile, reagent grade.
8.3.6 The temperature and pressure of the atmosphere being sampled should be recorded. If pressure reading is not available, record the elevation.
8.3.7 The charcoal tubes should be capped with the supplied plastic caps immediately after sampling. Under no circumstances should rubber caps be used.
8.3.8 With each batch of ten samples, submit one tube from the same lot of tubes which was used for sample collection and which is subjected to exactly the same handling as the samples except that no air is drawn through it. Label this as a blank.
8.3.9 Capped tubes should be packed tightly and padded before they are shipped to minimize tube breakage during shipping.
8.3.10 A sample of the bulk material should be submitted to the laboratory in a glass container with a Teflon-lined cap. This sample should not be transported in the same container as the charcoal tubes.
[GC operating conditions list truncated; item 6: 155°C column temperature.]
8.4.4 Injection. The first step in the analysis is the injection of the sample into the gas chromatograph. To eliminate difficulties arising from blowback or distillation within the syringe needle, one should employ the solvent flush injection technique. The 10-microliter syringe is first flushed with solvent several times to wet the barrel and plunger. Three microliters of solvent are drawn into the syringe to increase the accuracy and reproducibility of the injected sample volume. The needle is removed from the solvent, and the plunger is pulled back about 0.2 microliter to separate the solvent flush from the sample with a pocket of air to be used as a marker. The needle is then immersed in the sample, and a 5-microliter aliquot is withdrawn, taking into consideration the volume of the needle, since the sample in the needle will be completely injected. After the needle is removed from the sample and prior to injection, the plunger is pulled back 1.2 microliters to minimize evaporation of the sample from the tip of the needle. Observe that the sample occupies 4.9-5.0 microliters in the barrel of the syringe. Duplicate injections of each sample and standard should be made. No more than a 3% difference in area is to be expected. An automatic sample injector can be used if it is shown to give reproducibility at least as good as the solvent flush method.
8.4.5 Measurement of area. The area of the sample peak is measured by an electronic integrator or some other suitable form of area measurement, and preliminary results are read from a standard curve prepared as discussed below.
8.5 Determination of Desorption Efficiency
8.5.1 Importance of determination. The desorption efficiency of a particular compound can vary from one laboratory to another and also from one batch of charcoal to another. Thus, it is necessary to determine at least once the percentage of the specific compound that is removed in the desorption process, provided the same batch of charcoal is used.
8.5.2 Procedure for determining desorption efficiency. Activated charcoal equivalent to the amount in the first section of the sampling tube (100 mg) is measured into a 2.5-in, 4-mm I.D. glass tube, flame sealed at one end. This charcoal must be from the same batch as that used in obtaining the samples and can be obtained from unused charcoal tubes. The open end is capped with Parafilm. A known amount of hexane solution of acrylonitrile containing 0.239 g/ml is injected directly into the activated charcoal with a microliter syringe, and the tube is capped with more Parafilm. When using an automatic sample injector, the sample injector vials, capped with Teflon-faced septa, may be used in place of the glass tubes.
The amount injected is equivalent to that present in a 20-liter air sample at the selected level. Six tubes at each of three levels (0.5X, 1X, and 2X of the standard) are prepared in this manner and allowed to stand for at least overnight to assure complete adsorption of the analyte onto the charcoal. These tubes are referred to as the samples. A parallel blank tube should be treated in the same manner except that no sample is added to it. The sample and blank tubes are desorbed and analyzed in exactly the same manner as the sampling tube described in Section 8.4.
Two or three standards are prepared by injecting the same volume of compound into 1.0 ml of methanol with the same syringe used in the preparation of the samples. These are analyzed with the samples.
The desorption efficiency (D.E.) equals the average weight in mg recovered from the tube divided by the weight in mg added to the tube, or

D.E. = average weight recovered (mg) / weight added (mg)

The desorption efficiency is dependent on the amount of analyte collected on the charcoal. Plot the desorption efficiency versus weight of analyte found. This curve is used in Section 10.4 to correct for adsorption losses.
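In code form, the bookkeeping of Section 8.5 is a one-line ratio averaged over the six spiked tubes at each level; the sketch below uses hypothetical recovered weights.

```python
# Minimal sketch of the desorption-efficiency calculation (names ours).
def desorption_efficiency(recovered_mg, added_mg):
    """D.E. = average weight recovered (mg) / weight added (mg)."""
    return (sum(recovered_mg) / len(recovered_mg)) / added_mg

# Six tubes spiked with 0.90 mg each (hypothetical recoveries):
print(desorption_efficiency([0.81, 0.83, 0.80, 0.84, 0.82, 0.81], 0.90))
```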
# Calibration and Standards
It is convenient to express concentration of standards in terms of mg/1.0 ml methanol. The density of the analyte is used to convert mg into microliters for easy measurement with a microliter syringe. A series of standards, varying in concentration over the range of interest, is prepared and analyzed under the same GC conditions and during the same time period as the unknown samples. Curves are established by plotting concentration in mg/1.0 ml versus peak area.

Note: Since no internal standard is used in the method, standard solutions must be analyzed at the same time that the sample analysis is done. This will minimize the effect of known day-to-day variations and variations during the same day of the FID response.
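One way to realize the external-standard calibration just described is a least-squares line of peak area against concentration, inverted to read unknowns; all numbers below are hypothetical.

```python
# Sketch of the calibration-curve step (synthetic data, illustration only).
import numpy as np

std_mg_per_ml = np.array([0.25, 0.50, 1.00, 2.00])      # standards, mg/1.0 ml
peak_area = np.array([1210.0, 2445.0, 4890.0, 9800.0])  # integrator counts

slope, intercept = np.polyfit(std_mg_per_ml, peak_area, 1)

def mg_from_area(area: float) -> float:
    """Invert the calibration line to get mg per 1.0 ml of desorbate."""
    return (area - intercept) / slope

print(f"{mg_from_area(3700.0):.2f} mg/1.0 ml")
```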
# Calculations
Read the weight, in mg, corresponding to each peak area from the standard curve. No volume corrections are needed, because the standard curve is based on mg/1.0 ml methanol and the volume of sample injected is identical to the volume of the standards injected.
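The remaining calculation steps of the method (blank correction, desorption-efficiency correction, and conversion to an air concentration) are not reproduced in this excerpt; the sketch below follows the customary NIOSH charcoal-tube arithmetic and should be read as an assumption, to be checked against the full method text.

```python
# Hedged sketch of the final calculation (customary charcoal-tube form).
def concentration_mg_per_cu_m(front_mg, back_mg, blank_front_mg,
                              blank_back_mg, desorption_eff,
                              air_volume_liters):
    # Blank-correct each section, sum front and backup, correct for
    # desorption losses, then divide by the sampled air volume.
    corrected = (front_mg - blank_front_mg) + (back_mg - blank_back_mg)
    corrected /= desorption_eff
    return corrected / (air_volume_liters / 1000.0)

# 48-liter sample with hypothetical weights:
print(concentration_mg_per_cu_m(0.38, 0.02, 0.01, 0.00, 0.90, 48.0))
```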
# Testing for HCV Infection: An Update of Guidance for Clinicians and Laboratorians
In the United States, an estimated 4.1 million persons have been infected with hepatitis C virus (HCV), of whom an estimated 3.2 (95% confidence interval = 2.7-3.9) million are living with the infection (1). New infections continue to be reported, particularly among persons who inject drugs and persons exposed to HCV-contaminated blood in health-care settings with inadequate infection control (2).
Since 1998, CDC has recommended HCV testing for persons with risks for HCV infection (3). In 2003, CDC published guidelines for the laboratory testing and result reporting of antibody to HCV (4). In 2012, CDC amended testing recommendations to include one-time HCV testing for all persons born during 1945-1965 regardless of other risk factors (1).
CDC is issuing this update in guidance because of 1) changes in the availability of certain commercial HCV antibody tests, 2) evidence that many persons who are identified as reactive by an HCV antibody test might not subsequently be evaluated to determine if they have current HCV infection (5), and 3) significant advances in the development of antiviral agents with improved efficacy against HCV (6). Although previous guidance has focused on strategies to detect and confirm HCV antibody (3,4), reactive results from HCV antibody testing cannot distinguish between persons whose past HCV infection has resolved and those who are currently HCV infected. Persons with current infection who are not identified as currently infected will not receive appropriate preventive services, clinical evaluation, and medical treatment. Testing strategies must ensure the identification of those persons with current HCV infection.
This guidance was written by a workgroup convened by CDC and the Association of Public Health Laboratories (APHL), comprising experts from CDC, APHL, state and local public health departments, and academic and independent diagnostic testing laboratories, in consultation with experts from the Veterans Health Administration and the Food and Drug Administration (FDA). The workgroup reviewed laboratory capacities and practices relating to HCV testing, data presented at the CDC 2011 symposium on identification, screening and surveillance of HCV infection (7), and data from published scientific literature on HCV testing. Unpublished data from the American Red Cross on validation of HCV antibody testing also were reviewed.
# Changes in HCV Testing Technologies
Since the 2003 guidance was published (4), there have been two developments with important implications for HCV testing.
# Identifying Current HCV Infections
In 2011, FDA approved boceprevir (Victrelis, Merck & Co.) and telaprevir (Incivek, Vertex Pharmaceuticals) for treatment of chronic hepatitis C genotype 1 infection, in combination with pegylated interferon and ribavirin, in adult patients with compensated liver disease. Boceprevir and telaprevir interfere directly with HCV replication. Persons who complete treatment using either of these drugs combined with pegylated interferon and ribavirin are more likely to clear virus (i.e., have virologic cure), compared to those given standard therapy based on pegylated interferon and ribavirin (9). Viral clearance, when sustained, stops further spread of HCV and is associated with reduced risk for hepatocellular carcinoma (10) and all-cause mortality (11). Other compounds under study in clinical trials hold promise for even more effective therapies (6).
Because antiviral treatment is intended for persons with current HCV infection, these persons need to be distinguished from persons whose infection has resolved. HCV RNA in blood, by nucleic acid testing (NAT), is a marker for HCV viremia and is detected only in persons who are currently infected. Persons with reactive results after HCV antibody testing should be evaluated for the presence of HCV RNA in their blood.
On May 7, 2013, this report was posted as an MMWR Early Release on the MMWR website.
# Benefits of Testing for Current HCV Infection
Accurate testing to identify current infection is important to 1) help clinicians and other providers correctly identify persons infected with HCV, so that preventive services, care and treatment can be offered; 2) notify tested persons of their infection status, enabling them to make informed decisions about medical care and options for HCV treatment, take measures to limit HCV-associated disease progression (e.g., avoidance or reduction of alcohol intake, and vaccination against hepatitis A and B), and minimize risk for transmitting HCV to others; and 3) inform persons who are not currently infected of their status and the fact that they are not infectious.
# Recommended Testing Sequence
The testing sequence in this guidance is intended for use by primary care and public health providers seeking to implement CDC recommendations for HCV testing (1,3,4). In most cases, persons identified with HCV viremia have chronic HCV infection. This testing sequence is not intended for diagnosis of acute hepatitis C or clinical evaluation of persons receiving specialist medical care, for which specific guidance is available (12).
Testing for HCV infection begins with either a rapid or a laboratory-conducted assay for HCV antibody in blood (Figure). A nonreactive HCV antibody result indicates no HCV antibody detected. A reactive result indicates one of the following: 1) current HCV infection, 2) past HCV infection that has resolved, or 3) false positivity. A reactive result should be followed by NAT for HCV RNA. If HCV RNA is detected, that indicates current HCV infection. If HCV RNA is not detected, that indicates either past, resolved HCV infection or false HCV antibody positivity.
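The figure's decision logic can be written out compactly; the sketch below is our own encoding of the sequence, not CDC text, with invented function and message names.

```python
# Illustrative encoding of the recommended HCV testing sequence.
def hcv_testing_sequence(antibody_reactive: bool, rna_detected=None) -> str:
    if not antibody_reactive:
        return "No HCV antibody detected: not currently infected."
    if rna_detected is None:
        return "Reactive antibody: perform NAT for HCV RNA."
    if rna_detected:
        return "HCV RNA detected: current HCV infection; link to care."
    return ("No HCV RNA detected: past, resolved infection or biologic "
            "false antibody positivity; supplemental testing optional.")

print(hcv_testing_sequence(True, rna_detected=False))
```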
Initial Testing for HCV Antibody. An FDA-approved test for HCV antibody should be used. If the OraQuick HCV Rapid Antibody Test is used, the outcome is reported as reactive or nonreactive. If a laboratory-based assay is used, the outcome is reported as reactive or nonreactive without necessarily specifying signal-to-cutoff ratios.
Testing for HCV RNA. An FDA-approved NAT assay intended for detection of HCV RNA in serum or plasma from blood of at-risk patients who test reactive for HCV antibody should be used. There are several possible operational steps toward NAT after initial testing for HCV antibody:
1. Blood from a subsequent venipuncture is submitted for HCV NAT if the blood sample collected is reactive for HCV antibody during initial testing.
2. From a single venipuncture, two specimens are collected in separate tubes: one tube for initial HCV antibody testing, and a second tube for HCV NAT if the HCV antibody test is reactive.
# FIGURE. Recommended testing sequence for identifying current hepatitis C virus (HCV) infection
# Supplemental Testing for HCV Antibody
If testing is desired to distinguish between true positivity and biologic false positivity for HCV antibody, then testing may be done with a second HCV antibody assay approved by FDA for diagnosis of HCV infection that is different from the assay used for initial antibody testing. HCV antibody assays vary according to their antigens, test platforms, and performance characteristics, so biologic false positivity is unlikely to be exhibited by more than one test when multiple tests are used on a single specimen (14).
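The intuition can be made quantitative under an independence idealization (real assays can share cross-reactive mechanisms, so this is an assumption): the chance that two different assays are both falsely positive on one specimen is roughly the product of their individual false-positivity rates. The rates below are hypothetical.

```python
# Idealized illustration: independent false-positive mechanisms.
p_fp_assay_1 = 0.005   # hypothetical biologic false-positivity rate
p_fp_assay_2 = 0.005   # hypothetical rate for a different assay design
print(p_fp_assay_1 * p_fp_assay_2)   # ~2.5e-05 for both to be false positive
```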
# Test Interpretation and Further Action
See Table.
# Laboratory Reporting
"Acute hepatitis C" and "hepatitis C (past or present)" are nationally notifiable conditions, and are subject to mandated reporting to health departments by clinicians and laboratorians, as determined by local, state or territorial law and regulation. Surveillance case definitions are developed by the Council of State and Territorial Epidemiologists in collaboration with CDC (15). In all but a few jurisdictions, positive results from HCV antibody and HCV RNA testing that are indicative of acute, or past or present HCV infection, are reportable. Specific policies for laboratory reporting are found at health department websites (16).
# Future Studies
Research, development, validation, and cost-effectiveness studies are ongoing to inform the best practices for detecting HCV viremia and for distinguishing between resolved HCV infection and biologic false positivity for HCV antibody in persons in whom HCV RNA is not detected. Outcomes of these studies will provide comprehensive guidance on testing, reporting, and clinical management, and will improve case definitions for disease notification and surveillance.
These recommendations of the Advisory Committee on Immunization Practices (ACIP) for poliomyelitis prevention replace those issued in 1997. As of January 1, 2000, ACIP recommends exclusive use of inactivated poliovirus vaccine (IPV) for routine childhood polio vaccination in the United States. All children should receive four doses of IPV at ages 2, 4, and 6-18 months and 4-6 years. Oral poliovirus vaccine (OPV) should be used only in certain circumstances, which are detailed in these recommendations. Since 1979, the only indigenous cases of polio reported in the United States have been associated with the use of the live OPV. Until recently, the benefits of OPV use (i.e., intestinal immunity, secondary spread) outweighed the risk for vaccine-associated paralytic poliomyelitis (VAPP) (i.e., one case among 2.4 million vaccine doses distributed). In 1997, to decrease the risk for VAPP but maintain the benefits of OPV, ACIP recommended replacing the all-OPV schedule with a sequential schedule of IPV followed by OPV. Since 1997, the global polio eradication initiative has progressed rapidly, and the likelihood of poliovirus importation into the United States has decreased substantially. In addition, the sequential schedule has been well accepted. No declines in childhood immunization coverage were observed, despite the need for additional injections. On the basis of these data, ACIP recommended on June 17, 1999, an all-IPV schedule for routine childhood polio vaccination in the United States to eliminate the risk for VAPP. ACIP reaffirms its support for the global polio eradication initiative and the use of OPV as the only vaccine recommended to eradicate polio from the remaining countries where polio is endemic.

*African Region (AFR), Region of the Americas (AMR), Eastern Mediterranean Region (EMR), European Region (EUR), South East Asia Region (SEAR), and Western Pacific Region (WPR).

Children who have initiated the poliovirus vaccination series with one or more doses of OPV should receive IPV to complete the series. If the vaccines are administered ac-

# INTRODUCTION
As a result of the introduction of inactivated poliovirus vaccine (IPV) in the 1950s, followed by oral poliovirus vaccine (OPV) in the 1960s, poliomyelitis control has been achieved in numerous countries worldwide, including the entire Western Hemisphere (1,2). In the United States, the last indigenously acquired cases of polio caused by wild poliovirus were detected in 1979 (3). In 1985, the countries of the Americas* established a goal of regional elimination of wild poliovirus by 1990 (4).

*Anguilla, Antigua and Barbuda, Argentina, Aruba, Bahamas, Barbados, Belize, Bermuda, Bolivia, Brazil, Canada, Cayman Islands, Chile, Colombia, Costa Rica, Cuba, Dominica, Dominican Republic, Ecuador, El Salvador, French Guiana, Grenada, Guadeloupe, Guatemala, Guyana, Haiti, Honduras, Jamaica, Martinique, Mexico, Montserrat, Netherlands Antilles, Nicaragua, Panama, Paraguay, Peru, Puerto Rico, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Suriname, Trinidad and Tobago, Turks and Caicos Islands, United States of America, Uruguay, Venezuela, United Kingdom Virgin Islands, and United States Virgin Islands.
In 1988, the World Health Assembly (WHA), which is the directing council of the World Health Organization (WHO), adopted the goal of global polio eradication by the end of 2000 (5). In the Americas, the last case of polio associated with isolation of wild poliovirus was detected in Peru in 1991 (6). The Western Hemisphere was certified as free from indigenous wild poliovirus in 1994, an accomplishment achieved by the exclusive use of OPV (7). The global polio eradication initiative has reduced the number of reported polio cases worldwide by >80% since the mid-1980s, and worldwide eradication of the disease by the end of 2000 or soon after appears feasible (8).
# Summary of Recent Polio Vaccination Policy in the United States
Based on the continued occurrence of vaccine-associated paralytic poliomyelitis (VAPP) in the United States, the absence of indigenous disease, and the sharply decreased risk for wild poliovirus importation into the United States, the Advisory Committee on Immunization Practices (ACIP) recommended in June 1996 a change from an all-OPV schedule for routine childhood poliovirus vaccination to a sequential IPV-OPV vaccination schedule (i.e., two doses of IPV at ages 2 and 4 months, followed by two doses of OPV at ages 12-18 months and 4-6 years). These recommendations were officially accepted by CDC and published in January 1997 (9). The sequential schedule was intended to be a transition policy in place for 3-5 years until eventual adoption of an all-IPV schedule. At the same time that ACIP recommended a sequential schedule, the American Academy of Pediatrics (AAP) and the American Academy of Family Physicians (AAFP) recommended expanded use of IPV, with all-OPV, all-IPV, and sequential IPV-OPV as equally acceptable options (10,11).
After the successful implementation of expanded IPV use without any observed declines in childhood immunization coverage (12,13), AAP and AAFP joined ACIP in January 1999 in recommending that the first two doses of polio vaccine for routine vaccination be IPV in most circumstances (14,15). However, an all-IPV schedule was still needed to eliminate the risk for VAPP while maintaining population immunity. Thus, ACIP recommended in June 1999 that the all-IPV schedule begin January 1, 2000 (16). Although AAFP concurred with this recommendation, AAP recommended only that the all-IPV schedule begin during the first 6 months of 2000 (17,18).
The United States can remain free of polio only by maintaining high levels of population immunity and reducing or eliminating the risk for poliovirus importation. ACIP strongly reaffirms its support for the global polio eradication initiative, which relies on OPV in countries where the disease has recently been endemic. This report provides the scientific and programmatic background for transition to an all-IPV schedule, presents the current recommendations for polio prevention in the United States, and summarizes recommendations for OPV use if the U.S. vaccine stockpile is needed for outbreak control.
# BACKGROUND
# Characteristics of Poliomyelitis
# Acute Poliomyelitis
Poliomyelitis is a highly contagious infectious disease caused by poliovirus, an enterovirus. Most poliovirus infections are asymptomatic. Symptomatic cases are typically characterized by two phases -the first, a nonspecific febrile illness, is followed (in a small percentage of cases) by aseptic meningitis or paralytic disease. The ratio of cases of inapparent infection to paralytic disease ranges from 100:1 to 1,000:1.
After a person is exposed to poliovirus, the virus replicates in the oropharynx and the intestinal tract. Viremia follows, which can result in infection of the central nervous system. Replication of poliovirus in motor neurons of the anterior horn and brain stem results in cell destruction and causes the typical clinical manifestations of paralytic polio. Depending on the sites of paralysis, polio can be classified as spinal, bulbar, or spino-bulbar disease. Progression to maximum paralysis is rapid (2-4 days), is usually associated with fever and muscle pain, and rarely continues after the patient's temperature has returned to normal. Spinal paralysis is typically asymmetric and more severe proximally than distally. Deep tendon reflexes are absent or diminished. Bulbar paralysis can compromise respiration and swallowing. Paralytic polio is fatal in 2%-10% of cases. After the acute episode, many patients recover at least some muscle function and prognosis for recovery can usually be established within 6 months after onset of paralytic manifestations.
# Post-Polio Syndrome
After 30-40 years, 25%-40% of persons who contracted paralytic polio during childhood can experience muscle pain and exacerbation of existing weakness or develop new weakness or paralysis. This disease entity, called post-polio syndrome, has been reported only in persons infected during the era of wild poliovirus circulation. Risk factors for post-polio syndrome include a) the passage of more time since acute poliovirus infection, b) the presence of permanent residual impairment after recovery from the acute illness, and c) being female (19).
# Epidemiology
Polio is caused by three serotypes of poliovirus -types 1, 2, and 3. In countries where poliovirus is still endemic, paralytic disease is most often caused by poliovirus type 1, less frequently by poliovirus type 3, and least frequently by poliovirus type 2. The virus is transmitted from person to person primarily by direct fecal-oral contact. However, the virus also can be transmitted by indirect contact with infectious saliva or feces, or by contaminated sewage or water.
The first paralytic manifestations of polio usually occur 7-21 days from the time of initial infection (range: 4-30 days). The period of communicability begins after the virus replicates and is excreted in the oral secretions and feces. This period ends with the termination of viral replication and excretion, usually 4-6 weeks after infection. After household exposure to wild poliovirus, >90% of susceptible contacts become infected. Poliovirus infection results in lifelong immunity specific to the infecting viral serotype.
Humans are the only reservoir for poliovirus. Long-term carrier states (i.e., excretion of virus by asymptomatic persons >6 months after infection) are rare and have been reported only in immunodeficient persons (20,21). Risk factors for paralytic disease include larger inocula of poliovirus, increasing age, pregnancy, strenuous exercise, tonsillectomy, and intramuscular injections administered while the patient is infected with poliovirus (22-24).
# Secular Trends in Disease and Vaccination Coverage in the United States
In the United States, poliovirus vaccines have eliminated polio caused by wild poliovirus. The annual number of reported cases of paralytic disease declined from >20,000 in 1952 to an average of 8-9 cases annually during 1980-1994 (Figure) (3,25,26). During 1980-1998, a total of 152 cases of paralytic polio were reported, including 144 cases of VAPP, six imported cases, and two indeterminate cases (16). Until worldwide polio eradication is achieved, epidemics caused by importation of wild virus to the United States remain a possibility unless population immunity is maintained by vaccinating children early in their first year of life. In the United States, outbreaks of polio occurred in 1970, 1972, and 1979 after wild poliovirus was introduced into susceptible populations that had low levels of vaccination coverage. Vaccination coverage among children in the United States is at the highest level in history because of ongoing immunization initiatives. Assessments of the vaccination status of children entering kindergarten and first grade indicated that 95% had completed primary vaccination against polio during the 1980-81 school year, and rates continue to be above that level.
Coverage levels among preschool-aged children are lower than the levels at school entry, but have increased substantially in recent years. Nationally representative vaccination coverage rates among children aged 19-35 months are derived from the National Immunization Survey (NIS). Vaccination coverage with at least three doses of poliovirus vaccine among children in this age group increased from 88% in 1995 to 91% in 1996 and remained >90% in 1997 and 1998 (13).
# FIGURE. Total number of reported paralytic poliomyelitis cases and total number of reported vaccine-associated paralytic polio (VAPP) cases - United States, 1960-1998*

*Updated June 16, 1999. [Figure not reproduced; it plots the number of cases by year, 1960-1996.]
Serosurveys have identified high levels of population immunity consistent with these high coverage rates. Based on data from selected surveys, >90% of children, adolescents, and young adults had detectable antibodies to poliovirus types 1 and 2, and >85% had antibody to type 3 (27,28 ). Data from seroprevalence surveys conducted in two inner-city areas of the United States during 1990-1991 documented that >80% of all children aged 12-47 months had antibodies to all three poliovirus serotypes. Of the children who had received at least three doses of OPV, 90% had antibodies to all three serotypes (29 ). A serosurvey conducted during 1997-1998 among low-income children aged 19-35 months living in four U.S. cities reported that 96.8%, 99.8%, and 94.5% were seropositive to poliovirus types 1, 2, and 3, respectively (30 ).
Both laboratory surveillance for enteroviruses and surveillance for polio cases suggest that endemic circulation of indigenous wild polioviruses ceased in the United States in the 1960s. During the 1970s, genotypic testing (e.g., molecular sequencing or oligonucleotide fingerprinting) of poliovirus isolates obtained from indigenous cases (both sporadically occurring and outbreak-associated) in the United States indicated that these viruses were imported (31 ). During the 1980s, five cases of polio were classified as imported. The last imported case, reported in 1993, occurred in a child aged 2 years who was a resident of Nigeria; the child had been brought to New York for treatment of paralytic disease acquired in her home country. Laboratory investigations failed to isolate poliovirus among samples taken from this child after she arrived in the United States.
Recent experience in Canada illustrates the continuing potential for importation of wild poliovirus into the United States until global eradication is achieved. In 1993 and 1996, health officials in Canada isolated wild poliovirus in stool samples from residents of Alberta and Ontario. No cases of paralytic polio occurred as a result of these wild virus importations. The strain isolated in 1993 was linked epidemiologically and by genomic sequencing to a 1992 polio outbreak in the Netherlands (32 ). The isolate obtained in 1996 was from a child who had recently visited India (33 ).
Inapparent infection with wild poliovirus no longer contributes to either the establishment or maintenance of poliovirus immunity in the United States because these viruses no longer circulate in the population. Thus, universal vaccination of infants and children is the only way to establish and maintain population immunity against polio.
# Polio Eradication
After the widespread use of poliovirus vaccine in the mid-1950s, the incidence of polio declined rapidly in many industrialized countries. In the United States, the number of cases of paralytic polio reported each year declined from >20,000 cases in 1952 to <100 cases in the mid-1960s (3). In 1988, the WHA resolved to eradicate polio globally by 2000 (5). This global resolution followed the regional goal to eliminate polio by 1990, set in 1985 by the countries of the Western Hemisphere. The last case of polio associated with wild poliovirus isolation was reported from Peru in 1991, and the entire Western Hemisphere was certified as free from indigenous wild poliovirus by an International Certification Commission in 1994 (7). The following polio eradication strategies, which were developed for the Americas, were adopted for worldwide implementation in all polio-endemic countries (34):
- Achieve and maintain high vaccination coverage with at least three doses of OPV among infants aged <1 year.
- Develop sensitive systems of epidemiologic and laboratory surveillance, including acute flaccid paralysis (AFP) surveillance.
- Administer supplemental doses of OPV to all young children (usually those aged <5 years) during National Immunization Days (NIDs) to rapidly decrease widespread poliovirus circulation.
- Conduct mopping-up vaccination campaigns (i.e., localized campaigns that include home-to-home administration of OPV) in areas at high risk to eliminate the last remaining chains of poliovirus transmission.
In 1998, global coverage with at least three doses of OPV among infants aged <1 year remained >80% in all regions, except the African Region (AFR), where coverage improved from 32% in 1988 to 53% in 1998 (8). Also in 1998, a total of 90 countries conducted either NIDs (74 countries) or Sub-National Immunization Days (16 countries). These 90 countries provided supplemental doses of OPV to approximately 470 million children aged <5 years (i.e., approximately three quarters of the world's children aged <5 years) (8). In 1999, NIDs were conducted in all 50 polio-endemic countries. NIDs in the AFR targeted approximately 88 million children aged <5 years (35). Synchronized NIDs were conducted in 18 countries of the European Region (EUR) and Eastern Mediterranean Region (EMR), vaccinating 58 million children aged <5 years. Another 257 million children aged <5 years were vaccinated in December 1998 and January 1999 in countries of the EMR (Pakistan), South East Asia Region (SEAR) (Bangladesh, Bhutan, India, Myanmar, Nepal, and Thailand), and Western Pacific Region (WPR) (China and Vietnam) (36-40). NIDs in India reached 134 million children, representing the largest mass campaigns conducted to date. Each round of NIDs in India was conducted in only one day (41).
These supplemental immunization activities have been successful in decreasing the number of reported polio cases globally from 35,251 in 1988 (when the polio eradication target was adopted) to 6,227 in 1998, a decrease of 82% (8). This decrease in incidence is even more remarkable considering the progress in implementing sensitive systems for AFP surveillance, which substantially increased the completeness of reporting of suspected or confirmed polio cases. To conduct virological surveillance, a global laboratory network has been established that processes stool specimens in WHO-accredited laboratories, with both quality and performance monitored closely (42).
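The 82% figure follows directly from the reported counts:

$$\frac{35{,}251 - 6{,}227}{35{,}251} = \frac{29{,}024}{35{,}251} \approx 0.823 \approx 82\%$$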
Concurrent with the decline in polio incidence, the number of polio-endemic countries has decreased from >120 in 1988 to approximately 50 in 1998. Approximately 50% of the world's population resides in areas now considered polio-free, including the Western Hemisphere, WPR (which encompasses China), and EUR. Two large endemic areas of continued poliovirus transmission exist in South Asia and Sub-Saharan Africa. Priority countries targeted for accelerated implementation of polio eradication strategies include seven reservoir countries (Bangladesh, Democratic Republic of the Congo, Ethiopia, India, Nepal, Nigeria, and Pakistan) and eight countries in conflict (Afghanistan, Angola, Democratic Republic of the Congo, Liberia, Sierra Leone, Somalia, Sudan, and Tajikistan) (8 ). Progress in these countries will be essential to achieve the goal of global polio eradication by the end of 2000.
# Vaccine-Associated Paralytic Poliomyelitis (VAPP)
Cases of VAPP were observed almost immediately after the introduction of live, attenuated poliovirus vaccines (43,44). Before the sequential IPV-OPV schedule was introduced, 132 cases of VAPP were reported during 1980-1995 (Figure) (26; CDC, unpublished data, 2000). Fifty-two cases of paralysis occurred among otherwise healthy vaccine recipients, 41 cases occurred among healthy close contacts of vaccine recipients, and 7 cases occurred among persons classified as community contacts (i.e., persons from whom vaccine-related poliovirus was isolated but who had not been vaccinated recently or been in direct contact with vaccine recipients). An additional 32 cases occurred among persons with immune system abnormalities who received OPV or who had direct contact with an OPV recipient (Table).
The overall risk for VAPP is approximately one case in 2.4 million doses of OPV distributed, with a first-dose risk of one case in 750,000 first doses distributed (Table). Among immunocompetent persons, 83% of cases among vaccine recipients and 63% of cases among contacts occurred after administration of the first dose (Table) (3,25,36). Among persons who are not immunodeficient, the risk for VAPP associated with the first dose of OPV is sevenfold to 21-fold higher than the risk associated with subsequent doses (25). Immunodeficient persons, particularly those who have B-lymphocyte disorders that inhibit synthesis of immune globulins (i.e., agammaglobulinemia and hypogammaglobulinemia), are at greatest risk for VAPP (i.e., 3,200-fold to 6,800-fold greater risk than immunocompetent OPV recipients) (45).
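To illustrate how these per-dose risks translate into expected annual case counts, the sketch below multiplies dose totals by the quoted risks. The dose figures in the example are hypothetical inputs, not data from this report.

```python
OVERALL_RISK = 1 / 2_400_000    # ~1 VAPP case per 2.4 million doses distributed
FIRST_DOSE_RISK = 1 / 750_000   # ~1 VAPP case per 750,000 first doses distributed

def expected_vapp(total_doses: float, first_doses: float) -> tuple[float, float]:
    """Expected VAPP cases (overall, among first doses) for given dose counts."""
    return total_doses * OVERALL_RISK, first_doses * FIRST_DOSE_RISK

# Hypothetical year: ~20 million OPV doses distributed, ~3 million of them first doses.
overall, first = expected_vapp(20e6, 3e6)
print(f"expected overall: {overall:.1f}; expected among first doses: {first:.1f}")
# -> roughly 8 cases overall, consistent with the 8-9 cases reported annually
```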
Since implementation of the sequential IPV-OPV schedule in 1997, five cases of VAPP with onset in 1997 and two cases with onset in 1998 were confirmed. Three of these cases were associated with administration of the first or second dose of OPV to children
who had not previously received IPV, and one of the 1998 cases was associated with administration of the third dose. Although these data suggest a decline in VAPP after introduction of the sequential schedule, continued monitoring with additional observation time is required to confirm these preliminary findings because of potential delays in reporting (25,46 ).
# Transition to an All-IPV Schedule
Adopting an all-IPV schedule for routine childhood polio vaccination in the United States is intended to eliminate the risk for VAPP. However, this schedule requires two additional injections at ages 6-18 months and 4-6 years because no combination vaccine that includes IPV as a component is licensed in the United States. Because of concerns regarding potential declines in childhood immunization coverage after introduction of the sequential IPV-OPV schedule (which required two additional injections at ages 2 and 4 months), several evaluations were conducted during this transition period. No evidence exists that childhood vaccination coverage declined because of these additional injections. In two West Coast health maintenance organizations (HMOs) with automated recording and tracking systems for vaccination, researchers assessed the up-to-date vaccination status of infants at age 12 months (i.e., two doses of poliovirus vaccine, three doses of diphtheria and tetanus toxoids and acellular pertussis vaccine [DTaP], two doses of Haemophilus influenzae type b vaccine [Hib], and two doses of hepatitis B vaccine [HepB]). The proportion of children who started the routine vaccination schedule with IPV ranged from 36% to 98% across the HMOs by the third quarter of 1997. Infants starting with IPV were as likely to be up-to-date as were infants starting with OPV (12).
Available data from other public-sector clinics showed similar results. In one inner-city clinic in Philadelphia, 152 children due for their first dose of polio vaccine received IPV. Of the 145 children who returned to the clinic, 144 received a second dose of IPV. More than 99% of children due for their third and fourth injections (including IPV) during a single visit received them as indicated (47). An evaluation conducted at six public health clinics in one Georgia county also concluded that, of 567 infants who received their first dose of polio vaccine by age 3 months, 534 (94%) received IPV. Among these infants, 99.6% were also up-to-date for their first doses of diphtheria and tetanus toxoids vaccine (DTP), DTaP, Hib, and HepB (48). More detailed data on compliance with the recommended vaccination schedules are available from state immunization registries.
Another study reviewed immunization data from children born in Oklahoma during January 1, 1996-June 30, 1997 (i.e., 36,391 children seen at one of 290 facilities). The percentage of children who received IPV as their first dose of polio vaccine increased from <2% of children born in 1996 to 15% of children born in the first quarter of 1997 and to 30% of children born in the second quarter of 1997. However, receipt of IPV did not impact overall vaccination coverage; 80% of children receiving IPV for their first dose were up-to-date, as were 80% of children receiving OPV (49 ).
In 1995, a total of 448,030 doses of IPV were distributed (i.e., approximately 2% of total poliovirus vaccine doses) in the United States. IPV use increased from 6% of all polio doses distributed in 1996 to 29% in 1997 and 34% in 1998. Through August 31, 1999, a total of 69% of doses purchased were IPV, indicating increased acceptance of IPV (18 ).
# INVESTIGATION AND REPORTING OF SUSPECTED POLIOMYELITIS CASES
# Case Investigation
Each suspected case of polio should prompt an immediate epidemiologic investigation with collection of laboratory specimens as appropriate (see Laboratory Methods). If evidence suggests the transmission of wild poliovirus, an active search for other cases that could have been misdiagnosed initially (e.g., as Guillain-Barré syndrome [GBS], polyneuritis, or transverse myelitis) should be conducted. Control measures (including an OPV vaccination campaign to contain further transmission) should be instituted immediately. If evidence suggests vaccine-related poliovirus, no vaccination plan should be developed because no outbreaks associated with live, attenuated vaccine-related poliovirus strains have been documented.
The two most recent outbreaks of polio reported in the United States affected members of religious groups who object to vaccination (i.e., outbreaks occurred in 1972 among Christian Scientists and in 1979 among members of an Amish community). Polio should be suspected in any case of acute flaccid paralysis that affects an unvaccinated member of such a religious group. All such cases should be investigated promptly (see Surveillance).
# Surveillance
CDC conducts national surveillance for polio in collaboration with state and local health departments. Suspected cases of polio must be reported immediately to local or state health departments. CDC compiles and summarizes clinical, epidemiologic, and laboratory data concerning suspected cases. Three independent experts review the data and determine whether a suspected case meets the clinical case definition of paralytic polio (i.e., a paralytic illness clinically and epidemiologically compatible with polio in which a neurologic deficit is present 60 days after onset of symptoms). CDC classifies confirmed cases of paralytic polio as a) associated with either vaccine administration or wild virus exposure, based on epidemiologic and laboratory criteria, and b) occurring in either a vaccine recipient or the contact of a recipient, based on OPV exposure data (25). For the recommended control measures to be undertaken quickly, a preliminary assessment must ascertain as soon as possible whether a suspected case is likely vaccine-associated or caused by wild virus (see Case Investigation and Laboratory Methods).
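A minimal sketch of this two-axis classification, assuming simplified boolean inputs; the field names are illustrative, and the full surveillance definition includes provisions (e.g., death or loss to follow-up before 60 days) that the sketch omits.

```python
from dataclasses import dataclass

@dataclass
class SuspectedCase:
    compatible_illness: bool     # clinically and epidemiologically compatible with polio
    deficit_at_60_days: bool     # neurologic deficit present 60 days after onset
    vaccine_related_virus: bool  # epidemiologic/laboratory evidence of vaccine virus
    opv_recipient: bool          # vaccine recipient (False = contact of a recipient)

def classify(case: SuspectedCase) -> str:
    """Apply the clinical case definition, then the two classification axes."""
    if not (case.compatible_illness and case.deficit_at_60_days):
        return "does not meet the clinical case definition"
    source = "vaccine-associated" if case.vaccine_related_virus else "wild-virus"
    exposure = "recipient" if case.opv_recipient else "contact"
    return f"paralytic polio, {source}, {exposure}"

print(classify(SuspectedCase(True, True, True, False)))
# -> "paralytic polio, vaccine-associated, contact"
```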
# Laboratory Methods
Specimens for virus isolation (e.g., stool, throat swab, and cerebrospinal fluid [CSF]) and serologic testing must be obtained in a timely manner. The greatest yield for poliovirus is from stool culture, and timely collection of stool specimens increases the likelihood of case confirmation. At least two stool specimens and two throat swab specimens should be obtained from patients who are suspected to have polio. Specimens should be obtained at least 24 hours apart as early in the course of illness as possible, ideally within 14 days of onset. Stool specimens collected >2 months after the onset of paralytic manifestations are unlikely to yield poliovirus. Throat swabs are less often positive than stool samples, and virus is rarely detected in CSF. In addition, an acute-phase serologic specimen should be obtained as early in the course of illness as possible, and a convalescent-phase specimen should be obtained at least 3 weeks later.
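The stool-collection timing rules above reduce to a few checks. The following sketch assumes collection times are expressed as days since onset of paralytic symptoms; the function name and message strings are illustrative.

```python
def check_stool_timing(collection_days: list[int]) -> list[str]:
    """Flag departures from the specimen-collection guidance above."""
    issues = []
    if len(collection_days) < 2:
        issues.append("collect at least two stool specimens")
    days = sorted(collection_days)
    if any(later - earlier < 1 for earlier, later in zip(days, days[1:])):
        issues.append("specimens should be collected at least 24 hours apart")
    if any(d > 14 for d in days):
        issues.append("collection beyond 14 days of onset lowers the diagnostic yield")
    if any(d > 60 for d in days):
        issues.append("specimens collected >2 months after onset are unlikely to yield poliovirus")
    return issues or ["timing is consistent with the guidance"]

print(check_stool_timing([3, 5]))  # -> ["timing is consistent with the guidance"]
```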
The following tests should be performed on appropriate specimens collected from persons who have suspected cases of polio: a) isolation of poliovirus in tissue culture; b) serotyping of a poliovirus isolate as serotype 1, 2, or 3; and c) intratypic differentiation using DNA/RNA probe hybridization or polymerase chain reaction to determine whether a poliovirus isolate is associated with a vaccine or wild virus.
Acute-phase and convalescent-phase serum specimens should be tested for neutralizing antibody to each of the three poliovirus serotypes. A fourfold rise in antibody titer between appropriately timed acute-phase and convalescent-phase serum specimens is diagnostic for poliovirus infection. The recently revised standard protocol for poliovirus serology should be used (50). Commercial laboratories usually perform complement fixation and other tests. However, assays other than neutralization are difficult to interpret because of inadequate standardization and relative insensitivity. The CDC Enterovirus Laboratory is available for consultation and will test specimens from patients who have suspected polio (i.e., patients with acute paralytic manifestations). The telephone number for this lab is (404) 639-2749.
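Interpreting paired sera is a simple ratio test; a sketch assuming titers are entered as reciprocal dilutions (e.g., 8 for 1:8):

```python
def fourfold_rise(acute_titer: int, convalescent_titer: int) -> bool:
    """True when the convalescent reciprocal titer is at least four times the
    acute reciprocal titer -- diagnostic for poliovirus infection when the
    specimens are appropriately timed (convalescent drawn >=3 weeks later)."""
    return convalescent_titer >= 4 * acute_titer

print(fourfold_rise(8, 64))   # True: 1:8 -> 1:64 is an eightfold rise
print(fourfold_rise(16, 32))  # False: only a twofold rise
```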
# INACTIVATED POLIOVIRUS VACCINE (IPV)
# Background
IPV was introduced in the United States in 1955 and was used widely until OPV became available in the early 1960s. Thereafter, the use of IPV rapidly declined to <2% of all poliovirus vaccine distributed annually in the United States. A method of producing a more potent IPV with greater antigenic content was developed in 1978 and is the only type of IPV in use today (51 ). The first of these more immunogenic vaccines was licensed in the United States in 1987. Results of studies from several countries have indicated that the enhanced-potency IPV is more immunogenic for both children and adults than previous formulations of IPV (52 ).
# Vaccine Composition
Two IPV vaccine products are licensed in the United States, although only one (IPOL®) is both licensed and distributed in the United States. These products and their descriptions are as follows:

- IPOL®. One dose (0.5 mL administered subcutaneously) consists of a sterile suspension of three types of poliovirus: type 1 (Mahoney), type 2 (MEF-1), and type 3 (Saukett). The viruses are grown on Vero (monkey kidney) cell cultures, concentrated, purified, and formaldehyde inactivated. Each dose of vaccine contains 40 D antigen units of type 1 poliovirus, 8 D antigen units of type 2, and 32 D antigen units of type 3. Each dose also contains 0.5% of 2-phenoxyethanol and up to 200 ppm of formaldehyde as preservatives, as well as trace amounts of neomycin, streptomycin, and polymyxin B used in vaccine production. This vaccine does not contain thimerosal.
- POLIOVAX®. One dose (0.5 mL administered subcutaneously) consists of the sterile suspension of three types of poliovirus: type 1 (Mahoney), type 2 (MEF-1), and type 3 (Saukett). The viruses are grown on human diploid (MRC-5) cell cultures, concentrated, purified, and formaldehyde inactivated. Each dose of vaccine contains 40 D antigen units of type 1 poliovirus, 8 D antigen units of type 2, and 32 D antigen units of type 3, as well as 27 ppm formaldehyde, 0.5% of 2-phenoxyethanol, 0.5% of albumin (human), 20 ppm of Tween 80™, and <1 ppm of bovine serum. Trace amounts of neomycin and streptomycin can be present as a result of the production process. This vaccine does not contain thimerosal.
# Immunogenicity
A clinical trial of two preparations of enhanced-potency IPV was completed in the United States in 1984 (53 ). Among children who received three doses of one of the enhanced-potency IPVs at ages 2, 4, and 18 months, 99%-100% had developed serum antibodies to all three poliovirus types at age 6 months, which was 2 months after administration of the second dose. The percentage of children who had antibodies to all three poliovirus serotypes did not increase or decrease during the 14-month period after the second dose, confirming that seroconversion had occurred in most of the children. Furthermore, geometric mean antibody titers increased fivefold to tenfold after both the second and third doses.
Data from subsequent studies have confirmed that 90%-100% of children develop protective antibodies to all three types of poliovirus after administration of two doses of the currently available IPV, and 99%-100% develop protective antibodies after three doses (53-55). Results of studies showing long-term antibody persistence after three doses of enhanced-potency IPV are not yet available in the United States. However, data from one study indicated that antibody persisted throughout a 4-year follow-up period (56). In Sweden, studies of persons who received four doses of an IPV with lower antigen content than the IPVs licensed in the United States indicated that >90% of vaccinated persons had serum antibodies to poliovirus 25 years after the fourth dose (57). One dose of IPV administered to persons during an outbreak of poliovirus type 1 in Senegal during 1986-1987 was 36% effective; the effectiveness of two doses was 89% (58).
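The effectiveness figures from Senegal are conventionally computed as one minus the relative risk of paralysis in vaccinated versus unvaccinated persons; the summary above does not spell the formula out, but the standard field definition is:

$$VE = \left(1 - \frac{\text{attack rate in vaccinated persons}}{\text{attack rate in unvaccinated persons}}\right) \times 100\%$$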
Several European countries (e.g., Finland, Netherlands, Sweden, and Iceland) have relied exclusively on enhanced-potency IPV for routine poliovirus vaccination to eliminate the disease. More recently, all Canadian provinces have adopted vaccination schedules relying exclusively on IPV (i.e., five doses at ages 2, 4, 6, and 18 months and 4-6 years), and Ontario has used an all-IPV schedule since 1988 (59 ). In addition, France has used only IPV since 1983 (60 ).
# Safety
In countries relying on all-IPV schedules, no increased risk for serious adverse events has been observed. An extensive review by the Institute of Medicine (IOM) of adverse events associated with vaccination suggested that no serious adverse events have been associated with the use of IPV in these countries (61 ). Since expanded use of IPV in the United States in 1996, no serious adverse events have been linked to use of IPV (CDC, unpublished data, 1999).
# RECOMMENDATIONS FOR IPV VACCINATION
# Recommendations for IPV Vaccination of Children Routine Vaccination
All children should receive four doses of IPV at ages 2, 4, and 6-18 months and 4-6 years. The first and second doses of IPV are necessary to induce a primary immune response, and the third and fourth doses ensure "boosting" of antibody titers to high levels. If accelerated protection is needed, the minimum interval between doses is 4 weeks, although the preferred interval between the second and third doses is 2 months (see Recommendations for IPV Vaccination of Adults). All children who have received three doses of IPV before age 4 years should receive a fourth dose before or at school entry. The fourth dose is not needed if the third dose is administered on or after the fourth birthday.
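A minimal sketch of this dosing logic, with ages expressed in months; the function name is illustrative, and the sketch encodes only the dose-count and fourth-birthday rules stated above.

```python
def ipv_doses_still_needed(dose_ages_months: list[float]) -> int:
    """Doses remaining under the routine schedule (2, 4, 6-18 months, 4-6 years).
    A fourth dose is unnecessary if the third was given at or after 48 months."""
    n = len(dose_ages_months)
    if n >= 4:
        return 0
    if n == 3 and dose_ages_months[2] >= 48:
        return 0  # third dose on or after the fourth birthday completes the series
    return 4 - n

print(ipv_doses_still_needed([2, 4, 12]))  # 1: booster still due at 4-6 years
print(ipv_doses_still_needed([2, 4, 50]))  # 0: third dose after the fourth birthday
```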
# Incompletely Vaccinated Children
The poliovirus vaccination status of children should be evaluated periodically. Those who are inadequately protected should complete the recommended vaccination series. No additional doses are needed if more time than recommended elapses between doses (e.g., more than 4-8 weeks between the first two doses or more than 2-14 months between the second and third doses).
# Scheduling IPV Administration
Until appropriate combination vaccines are available, the administration of IPV will require additional injections at ages 2 and 4 months. When scheduling IPV administration, the following options should be considered to decrease the number of injections at the 2- and 4-month patient visits:
- Administer HepB at birth and ages 1 and 6 months.
- Schedule additional visits if there is reasonable certainty that the child will be brought back for subsequent vaccination at the recommended ages.
- Use available combination vaccines.
When administered according to their licensed indications for minimum ages and intervals between doses, four doses of OPV or IPV in any combination by age 4-6 years constitute a complete series, regardless of age at the time of the third dose. A minimum interval of 4 weeks should elapse if IPV is administered after OPV. Available evidence indicates that persons primed with OPV exhibit a strong mucosal immunoglobulin A response after boosting with IPV (62).
# Administration with Other Vaccines
IPV can be administered simultaneously with other routinely recommended childhood vaccines. These include DTP, DTaP, Hib, HepB, varicella (chickenpox) vaccine, and measles-mumps-rubella vaccine.
# Recommendations for IPV Vaccination of Adults
Routine poliovirus vaccination of adults (i.e., persons aged ≥18 years) residing in the United States is not necessary. Most adults have a minimal risk for exposure to polioviruses in the United States and most are immune as a result of vaccination during childhood. Vaccination is recommended for certain adults who are at greater risk for exposure to polioviruses than the general population, including the following persons:
- Travelers to areas or countries where polio is epidemic or endemic.
- Members of communities or specific population groups with disease caused by wild polioviruses.
- Laboratory workers who handle specimens that might contain polioviruses.
- Health-care workers who have close contact with patients who might be excreting wild polioviruses.
- Unvaccinated adults whose children will be receiving oral poliovirus vaccine.

Unvaccinated adults who are at increased risk should receive a primary vaccination series with IPV. Adults without documentation of vaccination status should be considered unvaccinated. Two doses of IPV should be administered at intervals of 4-8 weeks; a third dose should be administered 6-12 months after the second. If three doses of IPV cannot be administered within the recommended intervals before protection is needed, the following alternatives are recommended (this decision logic is sketched in code below):
- If more than 8 weeks are available before protection is needed, three doses of IPV should be administered at least 4 weeks apart.
- If fewer than 8 weeks but more than 4 weeks are available before protection is needed, two doses of IPV should be administered at least 4 weeks apart.
- If fewer than 4 weeks are available before protection is needed, a single dose of IPV is recommended.
The remaining doses of vaccine should be administered later, at the recommended intervals, if the person remains at increased risk for exposure to poliovirus. Adults who have had a primary series of OPV or IPV and who are at increased risk can receive another dose of IPV. Available data do not indicate the need for more than a single lifetime booster dose with IPV for adults.
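The accelerated alternatives reduce to a simple decision on available time; a sketch assuming the interval until protection is needed is known in weeks:

```python
def accelerated_ipv_doses(weeks_available: float) -> int:
    """Number of IPV doses to give before protection is needed, per the
    alternatives above; remaining doses follow later at recommended intervals."""
    if weeks_available > 8:
        return 3  # three doses, at least 4 weeks apart
    if weeks_available > 4:
        return 2  # two doses, at least 4 weeks apart
    return 1      # a single dose

for weeks in (12, 6, 2):
    print(f"{weeks} weeks available -> {accelerated_ipv_doses(weeks)} dose(s)")
```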
# Pregnancy
Although no adverse effects of IPV have been documented among pregnant women or their fetuses, vaccination of pregnant women should be avoided on theoretical grounds. However, if a pregnant woman is at increased risk for infection and requires immediate protection against polio, IPV can be administered in accordance with the recommended schedules for adults (see Recommendations for IPV Vaccination of Adults).
# Immunodeficiency
IPV is the only vaccine recommended for vaccination of immunodeficient persons and their household contacts. Many immunodeficient persons are immune to polioviruses as a result of previous vaccination or exposure to wild virus when they were immunocompetent. Administration of IPV to immunodeficient persons is safe. Although a protective immune response in these persons cannot be ensured, IPV might confer some protection.
# False Contraindications
Breastfeeding does not interfere with successful immunization against polio. A dose of IPV can be administered to a child who has diarrhea. Minor upper respiratory illnesses with or without fever, mild to moderate local reactions to a previous dose of vaccine, current antimicrobial therapy, and the convalescent phase of an acute illness are not contraindications for vaccination (63 ).
# ORAL POLIOVIRUS VACCINE (OPV)

# Background
Routine production of OPV in the United States has been discontinued. However, an emergency stockpile of OPV for polio outbreak control is maintained. Because OPV is the only vaccine recommended to control outbreaks of polio, this section describes OPV and indications for its use.
# Vaccine Composition
Trivalent OPV contains live attenuated strains of all three poliovirus serotypes. The viruses are propagated in monkey kidney cell culture. Until introduction of the sequential IPV-OPV schedule in 1997, OPV was the nation's primary poliovirus vaccine, after its licensing in the United States in 1963. One dose of OPV (0.5 mL administered orally from a single-dose dispenser) is required to contain a minimum of 10^6 TCID50 (tissue culture infectious doses) of the Sabin strain of poliovirus type 1 (LSc 2ab), 10^5.1 TCID50 of the Sabin strain of poliovirus type 2 (P712 Ch 2ab), and 10^5.8 TCID50 of the Sabin strain of poliovirus type 3 (Leon 12a1b), balanced in a formulation of 10:1:3, respectively. The OPV formerly manufactured in the United States* contained approximately threefold to tenfold the minimum dose of virus necessary to meet these requirements consistently (64). Each 0.5-mL dose also contained <25 µg each of streptomycin and neomycin.
# Immunogenicity
After complete primary vaccination with three doses of OPV, >95% of recipients develop long-lasting (probably lifelong) immunity to all three poliovirus types. Approximately 50% of vaccine recipients develop antibodies to all three serotypes after a single dose of OPV (53 ). OPV consistently induces immunity of the gastrointestinal tract that provides a substantial degree of resistance to reinfection with poliovirus. OPV interferes with subsequent infection by wild poliovirus, a property that is important in vaccination campaigns to control polio epidemics. Both IPV and OPV induce immunity of the mucosa of the gastrointestinal tract, but the mucosal immunity induced by OPV is superior (65,66 ). Both IPV and OPV are effective in reducing pharyngeal replication and subsequent transmission of poliovirus by the oral-oral route.
# RECOMMENDATIONS FOR OPV VACCINATION
# Recommendations for OPV Vaccination for Outbreak Control Rationale
As affirmed by ACIP, OPV remains the vaccine of choice for mass vaccination to control polio outbreaks (16 ). Data from clinical trials and empirical evidence support the effectiveness of OPV for outbreak control. The preference for OPV in an outbreak setting is supported by a) higher seroconversion rates after a single dose of OPV compared with a single dose of IPV; b) a greater degree of intestinal immunity, which limits community spread of wild poliovirus; and c) beneficial secondary spread (intestinal shedding) of vaccine virus, which improves overall protection in the community.
As a live attenuated virus, OPV replicates in the intestinal tract and induces antibodies in more recipients after a single dose. Thus, OPV can protect more persons who are susceptible in a population, making it the preferred vaccine for rapid intervention during an outbreak (53,67). Among persons previously vaccinated with three doses of IPV or OPV, excretion of poliovirus from the pharynx and the intestine appears most closely correlated with titers of homologous humoral antibody (68). Three doses of either IPV or OPV induce protective antibody levels (neutralizing antibody titers >1:8) to all three serotypes of poliovirus in >95% of infant recipients (9). Therefore, boosting of immunity with a single dose of OPV or IPV is likely to reduce both pharyngeal and intestinal excretion of poliovirus, effectively stopping epidemic transmission of wild poliovirus.

*Official name: Orimune® (poliovirus vaccine, live, oral, trivalent, types 1, 2, 3), manufactured by Lederle Laboratories, Division of American Cyanamid Company, Pearl River, New York.
# Use of OPV for Outbreak Control
OPV has been the vaccine of choice for polio outbreak control. During a polio outbreak in Albania in 1996, the number of cases decreased 90% within 2 weeks after administration of a single dose of OPV to >80% of the population aged 0-50 years. Two weeks after a second round of vaccination with OPV, no additional cases were observed (69). Rapidly implemented mass vaccination campaigns resulting in high coverage appear to have been similarly effective in interrupting wild poliovirus outbreaks in other countries (70).
European countries that rely solely on IPV for routine poliovirus vaccination (e.g., the Netherlands and Finland) have also used OPV for primary control of outbreaks. During the 1992-93 polio outbreak in the Netherlands, OPV was offered to members of a religious community affected by the outbreak (who were largely unvaccinated before the outbreak) and other persons living in areas affected by the outbreak. IPV was given to immunized persons outside the outbreak areas to ensure protection in this population (71 ). During a 1984-85 polio outbreak in Finland, 1.5 million doses of IPV initially were administered to children <18 years for immediate boosting of protection (72 ). Later, approximately 4.8 million doses of OPV were administered to 95% of the population. In contrast, mass vaccination with IPV exclusively has had little impact on outbreaks and has rarely been used since OPV became available (70,73 ).
# Recommendations for Other Uses of OPV
For the remaining nonemergency supplies of OPV, only the following indications are acceptable for OPV administration:
- Unvaccinated children who will be traveling in fewer than 4 weeks to areas where polio is endemic. If OPV is not available, IPV should be administered.
- Children of parents who do not accept the recommended number of vaccine injections. These children can receive OPV only for the third or fourth dose or both. In this situation, health-care providers should administer OPV only after discussing the risk for VAPP with parents or caregivers.
# Precautions and Contraindications
# Hypersensitivity or Anaphylactic Reactions to OPV
OPV should not be administered to persons who have experienced an anaphylactic reaction to a previous dose of OPV. Because OPV also contains trace amounts of neomycin and streptomycin, hypersensitivity reactions can occur in persons sensitive to these antibiotics.
# Pregnancy
Although no adverse effects of OPV have been documented among pregnant women or their fetuses, vaccination of pregnant women should be avoided. However, if a pregnant woman requires immediate protection against polio, she can receive OPV in accordance with the recommended schedules for adults (see Use of OPV for Outbreak Control).
# Immunodeficiency
OPV should not be administered to persons who have immunodeficiency disorders (e.g., severe combined immunodeficiency syndrome, agammaglobulinemia, or hypogammaglobulinemia) (74-76) because these persons are at substantially increased risk for VAPP. Similarly, OPV should not be administered to persons with altered immune systems resulting from malignant disease (e.g., leukemia, lymphoma, or generalized malignancy) or to persons whose immune systems have been compromised (e.g., by therapy with corticosteroids, alkylating drugs, antimetabolites, or radiation or by infection with human immunodeficiency virus [HIV]). OPV should not be used to vaccinate household contacts of immunodeficient patients; IPV is recommended. Many immunodeficient persons are immune to polioviruses as a result of previous vaccination or exposure to wild virus when they were immunocompetent. Although their risk for paralytic disease could be lower than for persons with congenital or acquired immunodeficiency disorders, these persons should not receive OPV.
# Inadvertent Administration of OPV to Household Contacts of Immunodeficient Persons
If OPV is inadvertently administered to a household contact of an immunodeficient person, the OPV recipient should avoid close contact with the immunodeficient person for approximately 4-6 weeks after vaccination. If this is not feasible, rigorous hygiene and hand washing after contact with feces (e.g., after diaper changing) and avoidance of contact with saliva (e.g., sharing food or utensils) can be an acceptable but probably less effective alternative. Maximum excretion of vaccine virus occurs within 4 weeks after oral vaccination.
# False Contraindications
Breastfeeding does not interfere with successful immunization against polio. A dose of OPV can be administered to a child who has mild diarrhea. Minor upper respiratory illnesses with or without fever, mild to moderate local reactions to a previous dose of vaccine, current antimicrobial therapy, and the convalescent phase of an acute illness are not contraindications for vaccination (63 ).
# Adverse Reactions

# Vaccine-Associated Paralytic Poliomyelitis (VAPP)
In rare instances, administration of OPV has been associated with paralysis in healthy recipients and their contacts. No procedures are available for identifying persons (other than those with immunodeficiency) who are at risk for such adverse reactions. Although the risk for VAPP is minimal, vaccinees (or their parents) and their susceptible, close, personal contacts should be informed of this risk (Table). Administration of OPV can cause VAPP that results in death, although this is rare (3,45).
# Guillain-Barré Syndrome (GBS)
Available evidence indicates that administration of OPV does not measurably increase the risk for GBS, a type of ascending inflammatory polyneuritis. Preliminary findings from two studies in Finland led to a contrary conclusion in a review conducted by IOM in 1993 (77,78 ). Investigators in Finland reported an apparent increase in GBS incidence that was temporally associated with a mass vaccination campaign during which OPV was administered to children and adults who had previously been vaccinated with IPV. However, after the IOM review, these data were reanalyzed, and an observational study was completed in the United States. Neither the reanalysis nor the new study provided evidence of a causal relationship between OPV administration and GBS (79 ).
# REPORTING OF ADVERSE EVENTS AFTER VACCINATION
The National Childhood Vaccine Injury Act of 1986 requires health-care providers to report serious adverse events after poliovirus vaccination (80 ). Events that must be reported are detailed in the Reportable Events Table of this act and include paralytic polio and any acute complications or sequelae of paralytic polio. Adverse events should be reported to the Vaccine Adverse Events Reporting System (VAERS). VAERS reporting forms and information are available 24 hours a day by calling (800) 822-7967.
# Vaccine Injury Compensation Program
The National Vaccine Injury Compensation Program, established by the National Childhood Vaccine Injury Act of 1986, provides a mechanism through which compensation can be paid on behalf of a person who died or was injured as a result of receiving vaccine. A Vaccine Injury Table lists the vaccines covered by this program and the injuries, disabilities, illnesses, and conditions (including death) for which compensation can be paid (81). This program provides potential compensation after development or onset of VAPP in a) an OPV recipient (within 30 days), b) a person in contact with an OPV vaccinee (no time frame specified), or c) an immunodeficient person (within 6 months). Additional information is available from the National Vaccine Injury Compensation Program at (800) 338-2382 or from CDC's National Immunization Program Internet site.
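The three onset windows lend themselves to a small lookup; in this sketch the category labels are illustrative shorthand, and 6 months is approximated as 183 days.

```python
# Onset windows for potential VAPP compensation, in days after OPV exposure.
# None means the Vaccine Injury Table specifies no time frame for that category.
VAPP_ONSET_WINDOWS_DAYS = {
    "opv_recipient": 30,
    "contact_of_recipient": None,
    "immunodeficient_person": 183,  # approximately 6 months
}

def onset_within_window(category: str, days_to_onset: int) -> bool:
    limit = VAPP_ONSET_WINDOWS_DAYS[category]
    return limit is None or days_to_onset <= limit

print(onset_within_window("opv_recipient", 21))            # True
print(onset_within_window("immunodeficient_person", 200))  # False
```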
# CONCLUSION
In 1997, ACIP recommended using a sequential schedule of IPV followed by OPV for routine childhood polio vaccination in the United States, replacing the previous all-OPV vaccination schedule. This change was intended to reduce the risk for VAPP. Since 1997, the global polio eradication initiative has progressed rapidly, and the likelihood of poliovirus importation into the United States has decreased substantially. The sequential schedule has been well accepted, and no declines in childhood immunization coverage have been observed. On the basis of these data, ACIP recommended on June 17, 1999, an all-IPV schedule for routine childhood polio vaccination in the United States to eliminate the risk for VAPP. ACIP also reaffirms its support for the global polio eradication initiative and the use of OPV as the only vaccine recommended to eradicate polio from the remaining countries where polio is endemic.
# ACCREDITATION

Continuing Medical Education (CME). CDC is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 1 hour in category 1 credit towards the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.

Continuing Education Unit (CEU). CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training and awards 0.1 hour Continuing Education Units (CEUs).

Continuing Nursing Education (CNE). This activity for 1.2 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.
# GOALS and OBJECTIVES
This MMWR provides recommendations regarding the prevention of poliomyelitis in the United States. These recommendations were developed by the Advisory Committee on Immunization Practices (ACIP). The goal of this report is to provide guidance on the use of poliovirus vaccine in the United States. Upon completion of this educational activity, the reader should be able to a) describe the epidemiology of polio in the United States, b) describe the current recommendations for routine poliovirus vaccination in the United States, c) recognize contraindications and precautions to the use of inactivated poliovirus vaccine, and d) list the major components of the global polio eradication program.
To receive continuing education credit, please answer all of the following questions.
Centers for Disease Control and Prevention. MMWR Morbidity and Mortality Weekly Report. Recommendations and Reports / Vol. 62 / No. 3

# Introduction
Q fever, first described in 1937, is a worldwide zoonosis that has long been considered an underreported and underdiagnosed illness because symptoms frequently are nonspecific, making diagnosis challenging (1-3). The causative organism, Coxiella burnetii, is an intracellular bacterium that tends to infect mononuclear phagocytes but can infect other cell types as well. Infection in humans usually occurs by inhalation of bacteria from air that is contaminated by excreta of infected animals. Other modes of transmission to humans, including tick bites, ingestion of unpasteurized milk or dairy products, and human-to-human transmission, are rare (1). Laboratory diagnosis relies mainly on serology, and doxycycline is the most effective treatment for acute illness. No vaccine is available commercially in the United States.
Q fever was designated a nationally notifiable disease in the United States in 1999. Since then, reports of Q fever have increased, with 167 cases reported in 2008, an increase greater than ninefold compared with 2000, in which 17 cases were reported (4). The national seroprevalence of Q fever is estimated to be 3.1% based on data from the National Health and Nutrition Examination Survey (2003-2004), and human infections have been reported from every state in the United States (5). Q fever infections in humans and animals have been reported from every world region except Antarctica (6).
Q fever has acute and chronic stages that correspond to two distinct antigenic phases of antibody response. During an acute infection, an antibody response to C. burnetii phase II antigen is predominant and is higher than the response to the phase I antigen, whereas a chronic infection is associated with a rising phase I immunoglobulin G (IgG) titer. Although acute Q fever symptoms in humans vary, the condition typically is characterized by a nonspecific febrile illness, hepatitis, or pneumonia. Asymptomatic infections followed by seroconversion have been reported in up to 60% of cases identified during outbreak investigations (6-8). Onset of symptoms usually occurs within 2-3 weeks of exposure, and symptomatic patients might be ill for weeks or months if untreated.
Chronic Q fever can manifest within a few months or several years after acute infection and can follow symptomatic or asymptomatic infections. Chronic disease is rare (<5% of patients with acute infections) and typically is characterized by endocarditis in patients with preexisting risk factors such as valvular or vascular defects (9). Unlike acute Q fever, which has a low mortality rate (<2%), chronic Q fever endocarditis is always fatal if untreated (10). Routine blood cultures are negative in patients with chronic Q fever endocarditis. Diagnosis of chronic Q fever endocarditis can be extremely difficult because vegetative lesions are visualized by echocardiography in approximately 12% of patients (6).
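The phase-discrimination logic above lends itself to a small heuristic sketch. The function name and the 1:1024 phase I IgG cutoff below are assumptions for illustration, not the diagnostic criteria of these guidelines; real case classification depends on paired sera, timing, and clinical findings.

```python
def q_fever_stage_pattern(phase1_igg: int, phase2_igg: int) -> str:
    """Heuristic read of paired reciprocal IgG titers: acute infection shows a
    dominant phase II response; chronic infection shows a high phase I response."""
    if phase2_igg > phase1_igg:
        return "pattern consistent with acute Q fever"
    if phase1_igg >= 1024:  # illustrative cutoff for a markedly elevated phase I titer
        return "pattern concerning for chronic Q fever"
    return "indeterminate; repeat serology and correlate clinically"

print(q_fever_stage_pattern(phase1_igg=64, phase2_igg=512))    # acute pattern
print(q_fever_stage_pattern(phase1_igg=2048, phase2_igg=512))  # chronic pattern
```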
Q fever is an occupational disease in persons whose work involves contact with animals, such as slaughterhouse workers, veterinarians, and farmers, although infection is not limited to these groups. Urban outbreaks and cases with no known exposure or close proximity to livestock have been reported, as have nonoccupational exposures such as through a hobby farm (a small farm that is not a primary source of income) (11). Data collected from Q fever case report forms submitted to CDC during 2000-2010 indicate that, among patients who reported occupational status, 320 of 405 (79%) were not in previously defined high-risk occupations, and 243 of 405 (60%) did not report livestock contact (CDC, unpublished data, 2010). These findings underscore the need for health-care professionals to consider Q fever in the differential diagnosis in patients with a compatible illness, even in the absence of occupational risk or history of direct contact with animal reservoirs. Approximately 200 cases of acute Q fever were reported in U.S. military personnel who had been deployed to Iraq since 2003. Investigations of these cases linked illness to tick bites, sleeping in barns, and living near helicopter zones with environmental exposure resulting from helicopter-generated aerosols (12,13).
The largest reported Q fever outbreak involved approximately 4,000 human cases and occurred during 2007-2010 in the Netherlands. This outbreak was linked to dairy goat farms near densely populated areas and presumably involved human exposure via a windborne route (14).
Prompt diagnosis and appropriate treatment shortens the illness and reduces the risk for severe complications (15,16). In patients with chronic Q fever illness, early treatment might be lifesaving. Physician awareness of the epidemiologic and clinical characteristics of Q fever is required to make a prompt and correct diagnosis. Information in this report is designed to assist U.S. clinicians with the following:
- Recognize common epidemiologic features and clinical manifestations of Q fever.
# Methods
This report provides the first national guidelines for the diagnosis and management of Q fever in the United States. The recommendations were prepared by the Q Fever Working Group, which includes CDC scientists, infectious disease specialists, laboratorians, epidemiologists, and clinical practitioners with expertise in the diagnosis and management of Q fever. These recommendations were developed through expert consultation and consensus and represent the best judgment of Q fever subject-matter experts, many of whom are international experts because of the low number of Q fever clinical subject-matter experts in the United States. In 2009, CDC created the first draft using previously published guidelines, review articles, and multiple search strategies of medical and professional computerized databases. During 2010-2012, each member of the Q Fever Working Group reviewed, revised, and refined the recommendations. In 2012, CDC's National Institute for Occupational Safety and Health reviewed the recommendations. When possible, recommendations were based on existing recommendations or guidelines (referenced within the text), with emphasis on U.S. populations. Published guidelines and the peer-reviewed literature were reviewed to ensure the relevance, completeness, and accuracy of the recommendations. If no adequate guidelines existed, the guidelines and recommendations were based on the experience and expertise of the Q Fever Working Group members.
# Epidemiology

# Overview
Cattle, sheep, and goats are the primary reservoirs for C. burnetii. However, infection has been confirmed in multiple vertebrate species, including wildlife, marine mammals, domestic mammals, birds, and reptiles (17). C. burnetii has been isolated from approximately 40 species of ticks, and possible tickborne transmission of C. burnetii to humans has been reported (18-20). Any infected animal has the potential to transmit the pathogen via bacterial shedding in its body secretions. Human outbreaks and cases have been epidemiologically linked to exposure to multiple species including pigeons, dogs, and rabbits (21-23). Human cases and outbreaks linked to exposure to infected parturient cats also have been reported (24-26).
The majority of animal infections are asymptomatic. Clinical illness in ruminants is primarily characterized by reproductive disorders such as abortion, stillbirth, endometritis, mastitis, and infertility (27). The highest numbers of organisms are shed in birth products, although viable organisms also might be shed in the urine, milk, and feces of infected animals (28,29). A positive antibody titer in an infected animal does not correlate with active shedding of organisms because some seronegative animals might actively shed bacteria (30,31). Conversely, animals might seroconvert after exposure to C. burnetii but not shed the bacteria (31-34). Thus, serologic testing is not a reliable method to determine whether specific animals are a potential source of transmission of C. burnetii to humans or other animals. Polymerase chain reaction (PCR) testing of body fluids (e.g., feces, milk, and vaginal mucus) is a more reliable method to detect shedding, which might be intermittent.
The most common mode of transmission in humans is inhalation of infectious aerosols directly from birth fluids of infected animals or via inhalation of dust contaminated with dried birth fluids or excreta. C. burnetii is extremely resistant to physical stresses, including heat and desiccation, and can survive in the environment for months to years. The bacteria can become airborne, traveling on wind currents for miles, resulting in outbreaks (35,36). In one outbreak, Q fever cases were documented in persons who lived 10 miles from the farm that was the source of the outbreak. In a recent outbreak in the Netherlands, living within 2 km of an infected farm was a significant risk factor for infection (36-38). Less common routes of transmission include ingestion of raw milk and dairy products or contact with contaminated clothing (39,40).
Person-to-person transmission of Q fever is possible but rarely reported. Persistent infection of the genital tract has been documented both in animals and humans, and sexual transmission and transplacental transmission of disease have been reported (41-44). Rare cases of transmission caused by blood transfusion or bone marrow transplantation from infected human donors have been reported (45,46). C. burnetii has been isolated from human breast milk, and lactogenic transmission is possible, although no cases have been documented via this route of transmission (47). Sporadic cases of nosocomial transmission associated with autopsies and obstetrical procedures of infected women have been reported (48,49).
The reported incidence and seroprevalence of acute Q fever is higher among persons aged >40 years than among younger persons, and disease severity increases with age (5,50). Persons aged 60-64 years have the highest age-related risk of Q fever in the United States (4). In addition, males have a higher risk for symptomatic Q fever illness than females (6), which might be partly explained by sex-associated occupational exposures or the protective effects of 17β-estradiol in females, which has been validated in animal studies (51,52).
Although infections occur year round, acute Q fever cases in the United States peak in the spring. Seasonal incidence of acute Q fever likely correlates with livestock birthing times or farm management practices such as manure spreading (4,53,54). In addition to host factors, other elements that might influence disease susceptibility and clinical manifestations include route of infection and size of inoculum (6,55,56).
# Epidemiologic Factors
# Assessment of Clinical Signs and Symptoms
# Acute Q Fever

# Adults
Symptomatic acute Q fever, which occurs in approximately half of infected persons, is characterized by a wide variety of clinical signs and symptoms. After an incubation period of 2-3 weeks, the most common clinical manifestation is a nonspecific febrile illness that might occur in conjunction with pneumonia or hepatitis (6). The most frequently reported symptoms include fever, fatigue, chills, and myalgia (Table 1). In a study of deployed U.S. military personnel, the three most common International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes assigned to patients who were later identified as having Q fever were as follows: fever, not otherwise specified (780.6); pneumonia, organism unspecified (486); and viral infection, not otherwise specified (079.99) (12).
Severe, debilitating headaches also are a frequent symptom, and lumbar punctures have been performed on patients for suspected meningitis who were later shown to have Q fever (J. Hartzell, MD, Walter Reed National Military Medical Center, 2012, personal communication). The headache might be retroorbital and associated with photophobia (6). In patients with acute Q fever illness, the headache has been misclassified as a new-onset migraine or attributed to a potentially infected tooth because the pain radiates to the jaw (11).
Pneumonia is an important clinical manifestation of acute Q fever, and C. burnetii might be an underrecognized cause of community-acquired pneumonia. In a North American series from the 1980s, acute Q fever accounted for 30 (2.3%) of 1,306 cases of community-acquired pneumonia among hospitalized patients (57). Features of Q fever pneumonia are similar to those of other etiologies of community-acquired pneumonia and cannot be distinguished clinically, radiologically, or by routine laboratory evaluation. Q fever pneumonia can range from mild to severe, and many patients have extrapulmonary manifestations (including severe headache, myalgia, and arthralgia). Cough is often present and is nonproductive in 50% of patients; upper respiratory signs are less likely to be reported in persons with Q fever pneumonia (58-62).
Fever lasts a median of 10 days in untreated patients (range: 5-57 days); the majority of patients defervesce within 72 hours of doxycycline administration (63,64). The duration of fever increases with age; one study demonstrated that 60% of patients aged >40 years had a fever duration of >14 days, compared with 29% of patients aged <40 years (64). In another study, 5%-21% of patients with acute Q fever had a maculopapular or purpuric rash (53). Onset of symptoms can be gradual or abrupt, with variable severity. Although mortality is <2% in patients with acute Q fever, in the Netherlands outbreak, which included approximately 4,000 reported cases, up to 50% of acute Q fever patients were hospitalized (6,38). Less frequently described clinical manifestations include pericarditis, myocarditis, aseptic meningitis, encephalitis, and cholecystitis (53,65).
# Children
Children with Q fever are less likely to have symptoms than adults and might have a milder illness. Acute Q fever in symptomatic children typically is characterized by a febrile illness, often accompanied by headache, weakness, cough, and other nonspecific systemic symptoms. Illness frequently is self-limited, although a relapsing febrile illness lasting for several months has been documented in some children (66). Gastrointestinal symptoms such as diarrhea, vomiting, abdominal pain, and anorexia are reported in 50%-80% of pediatric cases (66-68). Skin rash is more common in children than adults, with a prevalence as high as 50% among children with diagnosed cases (66-69). Q fever pneumonia is usually moderate with mild cough; respiratory distress and chest pain might occur. Severe manifestations of acute disease are rare in children and include hepatitis, hemolytic uremic syndrome, myocarditis, pericarditis, encephalitis, meningitis, hemophagocytosis, lymphadenitis, acalculous cholecystitis, and rhabdomyolysis (70-74).

# Pregnant Women

Q fever infections in women that occur shortly before conception or during pregnancy might result in miscarriage, stillbirth, premature birth, intrauterine growth retardation, or low birthweight (75). Adverse pregnancy outcomes are likely to be caused by vasculitis or vascular thrombosis resulting in placental insufficiency, although direct infection of the fetus has been documented (76). Of the reports that describe outcomes of infected pregnant women, none have documented an increased risk for congenital malformations because of infection (75,76).
Pregnant women might be less likely than other adults to have symptoms of Q fever (e.g., a febrile illness), although they remain at risk for adverse pregnancy outcomes (50). As a result, if a pregnant woman with no history of clinical illness has only a single increased antibody titer, it is difficult for the health-care provider to determine whether the increase reflects a previous or current infection. Serosurveys of pregnant women evaluating a possible association between a single elevated C. burnetii antibody titer (which cannot differentiate between previous and current infection) and adverse pregnancy outcomes have reported mixed findings (77-80). A woman with a previous infection (>30 days before conception) and no evidence of progression to chronic disease does not require treatment during pregnancy. However, a Q fever infection during pregnancy requires antibiotic treatment (Table 2), and health-care providers should consider several factors to determine the best treatment approach. Careful assessment of serologic results is useful because the phase II antibody response is increased in patients with an acute infection but decreases during convalescence as the phase I antibody response increases. Factors to consider before initiating treatment include contact with infected livestock, occupational animal exposure, and any epidemiologic link to another person with Q fever.
The risk for adverse effects on the fetus and the risk that the mother will develop chronic Q fever are highest when an acute infection occurs during the first trimester (81,82). Untreated infection in the first trimester is more likely to result in miscarriage, whereas infection later in pregnancy is more likely to cause premature delivery (75). Women infected with acute Q fever during pregnancy, including those who were asymptomatic or experienced no adverse pregnancy outcomes, might be at risk for recrudescent infection during subsequent pregnancies (83). Therefore, pregnant women with a history of Q fever infection during a previous pregnancy should be monitored closely for recrudescent infection in all subsequent pregnancies.
Health-care providers should educate women of child-bearing age who receive a diagnosis of acute Q fever about potential risks to the fetus. These women should be advised to avoid pregnancy for at least 1 month after diagnosis and treatment and should receive a pregnancy test to determine whether long-term antibiotic treatment is needed.
# Radiologic Evaluation
Pneumonia is one of the primary clinical manifestations of acute Q fever (6). Chest radiograph abnormalities are seen in the majority of patients with acute Q fever, although patients in the early stages of disease might have normal radiographic findings. Radiographic evaluation of acute Q fever patients during the Netherlands outbreak showed infiltrates in >96% of patients (84). Radiographic patterns for acute Q fever pneumonia are nonspecific; the most common abnormalities are segmental or lobar consolidation, which might be unilateral or bilateral, involve upper or lower lobes, or feature multiple or single opacities. Patchy infiltrations are an uncommon feature of Q fever pneumonia (58,85,86). Acute respiratory distress syndrome is a rare manifestation of Q fever but has occurred (87,88). It is not possible to differentiate Q fever pneumonia from other causes of community-acquired pneumonia solely on the basis of radiographic findings.

TABLE 2. (Footnotes)
† Prophylactic treatment after a potential Q fever exposure is not recommended; treatment is not recommended for asymptomatic infections or after symptoms have resolved, although it might be considered in persons at high risk for development of chronic Q fever.
§ Patients may take doxycycline with food to avoid stomach upset but should have no dairy products within 2 hours (before or after) of taking medication. Doxycycline should not be taken with antacids or bismuth-containing products, and patients should avoid taking it immediately before going to bed or lying down. Doxycycline might cause photosensitivity and can decrease the efficacy of hormonal contraceptives.
¶ Doxycycline is the drug of choice for treatment of Q fever in adults and patients of any age with severe illness. Short courses (14 days without resolution of symptoms.
†† Limited data are available on treatment of Q fever during pregnancy. Consultation with an expert in infectious diseases is recommended.
§§ The target serum level for optimal efficacy during chronic Q fever treatment is >5 µg/mL.
¶¶ Take with food or milk. Should not be used by persons with glucose-6-phosphate dehydrogenase deficiency. Monitor for retinal toxicity. The target serum level for optimal efficacy is 1.0±0.2 µg/mL. The safety of long-term treatment in children has not been evaluated.
*** Limited data are available on treatment of chronic Q fever in children. Consultation with an expert in pediatric infectious diseases is recommended.
††† The safety of long-term doxycycline or hydroxychloroquine treatment in pregnant women, and the associated fetal risk, has not been evaluated. Consultation with an expert in infectious diseases and obstetrics is recommended.
§§§ Limited reports exist of treatment for chronic Q fever unrelated to endocarditis or vascular infection (e.g., osteoarticular infections or chronic hepatitis); duration of treatment is dependent on serologic response. Consultation with an expert in infectious diseases is recommended.
¶¶¶ Women should be treated postpartum only if serologic titers remain elevated >12 months after delivery (immunoglobulin G phase I titer >1:1024). Women treated during pregnancy for acute Q fever should be monitored similarly to other patients who are at high risk for progression to chronic disease (e.g., serologic monitoring at 3, 6, 12, 18, and 24 months after delivery). Reports of treatment studies are rare. Although limited success has occurred with long-term or pulsed tetracycline-class antibiotics, evidence to guide patient management is weak.
# Laboratory Findings
Although up to 25% of patients with acute Q fever have an increased leukocyte count, most patients have normal white blood cell counts. Mild thrombocytopenia in early illness, which occurs in approximately one third of patients, might be followed by subsequent thrombocytosis. Increased erythrocyte sedimentation rate, hyponatremia, hematuria, increased creatine kinase, and increased C-reactive protein levels have been reported.
The most common laboratory abnormalities are increased liver enzyme levels, which are observed in up to 85% of cases (89). Hyperbilirubinemia occurs in one in four patients (90-93). Hepatomegaly or splenomegaly (unrelated to thrombocytopenia) also might be present, although jaundice is rare (90). Q fever causes significant immune activation that might result in cross-reactivity with other laboratory tests for autoimmune or infectious processes or agents, including tests for antineutrophil cytoplasmic antibodies, human immunodeficiency virus (HIV), brucellosis, or rapid plasma reagin (94).
# Summary of Acute Q Fever
- Prolonged fever (>10 days) with a normal leukocyte count, thrombocytopenia, and increased liver enzymes is suggestive of acute Q fever infection.
- Children with Q fever generally have a milder acute illness than adults.
- Children are more likely to have a rash than adults. Rash has been reported in up to 50% of children with acute Q fever.
- Women infected with Q fever during pregnancy are at increased risk for miscarriage and preterm delivery.
- Women of child-bearing age who receive a diagnosis of Q fever can benefit from pregnancy screening and counseling to guide health-care management decisions.

# Chronic Q Fever
# Adults
Chronic Q fever is rare, occurring in <5% of persons with acute infection, and might develop within months, years, or even decades of the initial acute infection (6).
Chronic disease can occur after symptomatic or asymptomatic infections. Potential manifestations include endocarditis, chronic hepatitis, chronic vascular infections, osteomyelitis, osteoarthritis, and chronic pulmonary infections (6). Although patients likely have lifelong immunity to reinfection, disease recrudescence might occur and has been well documented (95).
The patients at highest risk for chronic Q fever are those with valvular heart disease, a vascular graft, or an arterial aneurysm. Acute infection in immunosuppressed persons and pregnant women also has been linked to later development of chronic disease (6,96). Since Q fever was categorized as a notifiable disease in the United States in 1999, CDC has received 49 reports of chronic Q fever, of which 24 manifested as endocarditis (CDC, unpublished data, 2012). Endocarditis is the major form of chronic Q fever, comprising 60%-78% of all cases worldwide. Endocarditis is a severe condition that is invariably fatal due to heart failure if untreated and has a 10-year mortality rate of 19% in treated patients (97,98). The second most common form of chronic Q fever is infection of aneurysms or vascular prostheses, followed by chronic Q fever infections after pregnancy (98). However, during the Q fever outbreak in the Netherlands, vascular infections were the most common form of chronic disease reported (15).
A clinical assessment of patients with acute Q fever should be performed to determine whether they are at high risk for subsequent chronic infection. Approximately 40% of persons with a known valvulopathy and a diagnosis of acute Q fever subsequently develop infective endocarditis (99). Patients with endocarditis are predominantly men aged >40 years (97). As with other etiologies of infective endocarditis, patients at highest risk for development of Q fever endocarditis after acute infection are those with a prosthetic valve, followed by patients with aortic bicuspid valves, mitral valve prolapse, and moderate mitral insufficiency (99,100).
# Pregnant Women
Women infected with Q fever during pregnancy are at high risk for developing chronic Q fever, possibly because of a failure to mount an appropriate immune response to acute infection or the ability of C. burnetii to use placental trophoblasts as a replicative niche (106,107). The earlier during pregnancy a woman is infected, the greater her risk for development of chronic disease (75). After a diagnosis of new onset acute infection, treatment throughout pregnancy is recommended to decrease the risk for an adverse birth outcome as well as the risk for future development of chronic Q fever (75).
Chronic infection might be evidenced by increased phase I IgG C. burnetii titers that do not decrease after pregnancy and can lead to adverse outcomes during subsequent pregnancies (81).
# Children
Chronic Q fever is rarely reported in children. Pediatric chronic Q fever manifests most frequently as chronic relapsing or multifocal osteomyelitis, blood-culture-negative endocarditis, or chronic hepatitis (108). Children with Q fever osteomyelitis often experience a prolonged course with recurrent episodes affecting multiple bones before diagnosis (71,109). Like adults, children who are immunocompromised or have underlying heart valve disease might be at higher risk for chronic Q fever.
# Post-Q Fever Fatigue Syndrome
Post-Q fever fatigue syndrome has been reported in up to 20% of patients with acute Q fever and is the most common chronic outcome after acute infection (110-122). However, the data related to post-Q fever fatigue syndrome are limited, and its rate of occurrence in the United States is unknown. This syndrome is distinct from other sequelae of acute infection such as chronic Q fever manifesting as endocarditis or osteomyelitis. The majority of patients with post-Q fever fatigue syndrome are previously healthy persons with no underlying medical or psychological problems who develop a complex of symptoms dominated by a debilitating fatigue after symptomatic acute Q fever infection. Other accompanying symptoms might include nausea, headache, night sweats, myalgia, intermittent muscle fasciculations, enlarged lymph nodes, arthralgia, difficulty sleeping, alcohol intolerance, photophobia, irrational and out-of-proportion irritability, depression, decreased concentration, and impaired short-term memory. There is no apparent organ involvement (111).
Although any or all of these symptoms might occur in patients with an acute infection and last up to a year before full recovery, post-Q fever fatigue syndrome is characterized by fatigue and other related symptoms that persist beyond a year and, for many patients, for several years or for life. No consensus has been reached on the pathogenesis of post-Q fever fatigue syndrome, although genetic predisposition and the severity of acute illness have been suggested to play a role in its development (117,123).
Diagnosis of post-Q fever fatigue syndrome relies on persistence of characteristic symptoms >1 year after a symptomatic acute Q fever infection, elevated antibody titers against C. burnetii antigen, and a lack of clinical and laboratory evidence of chronic Q fever with organ involvement. All other causes of similar symptoms should be excluded, and a thorough search for organ involvement or nidus of infection is imperative before making a diagnosis of post-Q fever fatigue syndrome because Q fever with organ involvement is responsive to antibiotic treatment. Management strategies for post-Q fever fatigue syndrome might reflect those used for chronic fatigue syndrome, such as graded exercise therapy and cognitive behavioral therapy. There are anecdotal reports of limited success using antibiotic therapy; however, no evidence-based recommendations exist for treatment of post-Q fever fatigue syndrome (124,125).
# Diagnosis
# Acute Q Fever
Because most persons with acute Q fever have nonspecific symptoms, health-care providers typically do not suspect Q fever during the acute stage of the disease. Although a laboratory diagnosis of acute Q fever can be made on the basis of serologic results, the requirement of a fourfold rise in phase II IgG antibody titer between acute and convalescent samples for definitive diagnosis makes this primarily a retrospective diagnosis (Table 3). For a definitive diagnosis in the early stages of acute Q fever illness, serologic testing in combination with polymerase chain reaction (PCR) is recommended. PCR of whole blood or serum can be positive very early after symptom onset but becomes negative as the antibody titer increases and after administration of antibiotics (Table 4).
When interpreting serologic and PCR data, particularly if appropriately timed acute and convalescent titers were not obtained, empiric treatment should be based on the presence of a clinically compatible syndrome. Treatment should never be withheld pending receipt of diagnostic test results or discontinued because of a negative acute serologic or PCR result. Conversely, because antibodies might remain detectable for months to years after infection, treatment should not be provided based solely on elevated titers (such as those detected through routine screening or baseline occupational assessments) without clinical manifestation of acute illness (e.g., fever, pneumonia, hepatitis, or other acute symptoms).
# Serologic Testing
For serologic testing, the indirect immunofluorescence assay (IFA) is commercially available and is the most commonly used method for serologic diagnosis of Q fever in the United States. Other methods described for Q fever serologic diagnosis include complement fixation, radioimmunoassay, enzyme-linked immunosorbent assay, and Western immunoblotting, although assay kits for these tests are not readily available in the United States.
The interpretation of serologic results for possible Q fever must include differential reactivity to Coxiella antigens. C. burnetii exists in two antigenic phases, phase I and phase II. Phase I is the virulent, highly infectious form that undergoes a transition to phase II, the avirulent form, during serial laboratory passages in embryonated eggs or cell cultures. In acute infection, the phase II antibody response to C. burnetii appears first and is higher than the phase I antibody response (6).
The most commonly used means of confirming the diagnosis of acute Q fever is demonstration of a fourfold rise in phase II IgG by IFA between serum samples from the acute and convalescent phases taken 3-6 weeks apart. Ideally, the first serum specimen should be taken during the first week of illness. Although this specimen can be tested immediately, results often are negative early in illness, before measurable antibodies have been produced. Therefore, serum samples from the acute phase are not helpful for guiding immediate treatment decisions. The cutoff values used to categorize patients as seropositive or seronegative vary by laboratory.
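For laboratories or surveillance programs that encode this criterion electronically, the following minimal Python sketch illustrates the fourfold-rise rule on a standard twofold dilution series. It is illustrative only; the function name, example titers, and the handling of undetectable acute titers are assumptions, not part of this guidance.

```python
# Illustrative sketch only; names and example values are hypothetical.
# IFA titers are reported as reciprocal twofold dilutions (e.g., 64, 128,
# 256, ...), so a fourfold rise corresponds to an increase of at least
# two dilution steps between the acute and convalescent specimens.

def meets_fourfold_rise(acute_titer: int, convalescent_titer: int) -> bool:
    """Return True if paired phase II IgG titers show a >=4-fold rise."""
    if acute_titer <= 0:
        # An undetectable acute titer is common early in illness; a
        # measurable convalescent titer then represents seroconversion
        # and should be interpreted with clinical correlation.
        return convalescent_titer > 0
    return convalescent_titer >= 4 * acute_titer

# Acute 1:64 rising to convalescent 1:256 meets the criterion;
# a twofold rise (1:128 to 1:256) does not.
assert meets_fourfold_rise(64, 256)
assert not meets_fourfold_rise(128, 256)
```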
Alternatively, the serum specimen from the acute phase can be stored appropriately (refrigerated at 4°C [39°F]) and tested in parallel with the convalescent-phase specimen. Baseline antibodies acquired as a result of previous exposure to Q fever might exist, especially in patients with rural or farming backgrounds. U.S. laboratories use a twofold dilution scheme that does not result in a titer equaling 800; in this document, a titer of 1024 is used as the replacement.
Certain specimens might show a higher cross-reactivity; cross-reactions between Coxiella, Legionella, and Bartonella species have been reported (128,129). However, the cross-reacting antibodies generally have low titers and should not result in misdiagnosis.
Because early doxycycline treatment (within the first 3 days of symptoms) is most effective, treatment of a patient suspected of having Q fever should be based on clinical findings and should not be delayed while awaiting laboratory confirmation (16). No evidence indicates that early administration of doxycycline blunts the antibody response or prevents seroconversion (130,131).
# Nucleic Acid Detection
Rapid, sensitive, and quantitative PCR techniques have been developed for Q fever testing. Multiple gene targets have been used, and physicians should be aware that assays can differ in sensitivity and specificity (132).
Either whole blood collected in anticoagulant-treated tubes or serum can be used for PCR testing. Whole blood might have a higher concentration of C. burnetii DNA than serum but is also likely to have more PCR inhibitors. For PCR results to be useful, the clinical sample must be obtained in the acute phase of infection (optimally during the first 2 weeks of symptom onset) and either before or shortly after (within 24-48 hours) antibiotic administration. When appropriate samples are drawn (i.e., during the acute phase and before or shortly after antibiotic administration), PCR results are positive in almost all patients with early acute Q fever before the antibody response develops (133).
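The timing constraints in the preceding paragraph can be summarized as a simple decision rule. The short Python sketch below is a hypothetical illustration; the function and parameter names, and the choice of 48 hours from the stated 24-48 hour range, are assumptions rather than part of this guidance.

```python
# Illustrative sketch; hypothetical names, not a validated clinical tool.
from typing import Optional

def pcr_sample_likely_informative(days_since_onset: float,
                                  hours_since_first_antibiotic: Optional[float]) -> bool:
    """Timing windows described in the text: collect during the acute phase
    (optimally within the first 2 weeks of symptom onset) and either before
    antibiotics or shortly after (within 24-48 hours of) the first dose."""
    within_acute_window = 0 <= days_since_onset <= 14
    if hours_since_first_antibiotic is None:  # no antibiotics given yet
        return within_acute_window
    return within_acute_window and hours_since_first_antibiotic <= 48
```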
# Chronic Q Fever
The Duke criteria, a set of validated diagnostic criteria for infective endocarditis, were revised in 2000 to include redefined Q fever serologic parameters (134). That revision defined a phase I IgG antibody titer >1:800 or a single positive blood culture for C. burnetii as a major criterion for infective endocarditis. The Duke Endocarditis Service also advocated for use of transesophageal echocardiograms (TEEs) as the initial diagnostic test of choice in patients categorized as having possible infective endocarditis, those with suspected complicated infective endocarditis, and those with suspected prosthetic valve infective endocarditis (Appendix B). A patient with a phase I IgG antibody titer >1:800 or a single positive blood culture for C. burnetii and one of the following minor criteria would be classified as having possible infective endocarditis, thereby warranting an initial TEE: predisposition (predisposing heart condition or injection drug use), fever, vascular phenomena, immunologic phenomena, or microbiologic evidence.
# Serologic Testing
Chronic Q fever is diagnosed primarily by serologic testing. Establishing an identifiable nidus of chronic infection (e.g., endocarditis, vascular infection, or osteomyelitis) is required, as is laboratory confirmation. The distinct antigenic phases to which humans develop antibodies play an important role in the diagnosis. In contrast to acute Q fever infection, chronic infection is associated with continued, increasing phase I IgG titers (typically >1:1024) that might be higher than the phase II IgG titer. However, there are reports of chronic Q fever patients who retain extremely high phase II IgG antibody titers that equal or exceed their phase I IgG titers (135,136). If an acute Q fever case progresses to chronic disease, the phase I IgG titer will continue to rise to levels >1:1024 and might exceed the phase II titer. It is possible for a patient with previously diagnosed acute Q fever who no longer has clinical symptoms to have increased phase I IgG titers for several months that subsequently decrease or stabilize without ever progressing to chronic disease (135,137).
# Nucleic Acid Detection
Patients with suspected chronic Q fever should have whole blood or serum PCR performed because they can experience a recurrent bacteremia similar to early acute infection. Reported rates of PCR positivity in blood or serum of patients with Q fever endocarditis have ranged from 33% to 64% (97,135,136). PCR assays also can be performed on excised heart valve tissue from the site of active infection, even if frozen or embedded in paraffin. Infected heart valves, procured fresh or as formalin-fixed, paraffin-embedded specimens, are excellent for laboratory diagnosis because they typically contain abundant numbers of bacteria. PCR can be performed on cerebrospinal fluid, pleural fluid, bone marrow, bone biopsies, liver biopsies, milk, placenta, and fetal tissue.
# Immunohistochemistry
Immunohistochemistry can be used to detect the presence of C. burnetii antigens in formalin-fixed, paraffin-embedded tissues and is particularly valuable for examining cardiac valve specimens excised from patients with culture-negative endocarditis for whom chronic Q fever is suspected (138).
This assay is particularly useful because it can stain C. burnetii bacteria in tissues from patients even after they have received antibiotic therapy. The assay also can provide a crucial retrospective diagnosis in patients who relapse after valve replacement surgery for unrecognized or undiagnosed Q fever endocarditis. In the United States, specimens for this test can be referred to CDC through state public health laboratories.
# Isolation
Cultivation of C. burnetii is not recommended for routine diagnosis because the process is difficult, time consuming, and dangerous; culture requires a biosafety level 3 (BSL-3) laboratory because bacteria are highly infective and can be hazardous for laboratory workers. Often, patients with chronic Q fever have already received antibiotics, which further complicates isolation attempts; a negative culture does not rule out a C. burnetii infection. Specimens can be referred to CDC through state public health laboratories for culture.
# Collection and Storage of Specimens
Clinical specimens for evaluation of C. burnetii can be tested at some state public health laboratories or private referral laboratories. Health-care providers should contact their state health department for assistance with specimen submission and reporting of infected patients. CDC accepts samples and performs testing at no charge if the samples have been submitted with the approval of or through a state health department. In 2011, the Food and Drug Administration approved a PCR test that includes a Department of Defense assay for the diagnosis of Q fever, for use by deployed military health-care providers (139).
Serum. Using a red-top or serum separator tube, the acute-phase specimen should be collected as soon as possible after symptom onset (within the first 2 weeks), with a convalescent-phase specimen collected 3-6 weeks later. Sera should be refrigerated and shipped by express shipping on frozen gel packs separated from the specimen by packing material. Samples can be frozen in a non-frost-free freezer and shipped on dry ice to the laboratory.
Blood. Whole blood for PCR testing should be collected before antibiotic administration in EDTA-treated anticoagulant tubes and shipped refrigerated on frozen gel packs by overnight shipping. If samples are to be prepared for other laboratory tests, the buffy coat can be saved for DNA amplification and stored frozen in a non-frost-free freezer.
Tissue. Heart valve tissue is the most commonly evaluated specimen used for confirmation of chronic Q fever. Fresh tissue specimens, which permit the widest range of diagnostic techniques, should be refrigerated if they are being transported within 24 hours and shipped on frozen gel packs. If transport does not occur within 24 hours, specimens should be frozen in a non-frost-free freezer and shipped on dry ice for either culture or PCR analysis. In preparation for transport, fresh tissue should not be immersed in saline but should be placed on a gauze pad moistened with sterile saline and placed in a sterile collection cup. PCR, immunohistochemistry staining, and culture isolation for C. burnetii can be attempted on fresh tissue. If culture is attempted, biopsy specimens should be kept at -80°C (-112°F) before shipping and shipped on dry ice.
Formalin-fixed, paraffin-embedded blocks for PCR and immunohistochemistry can be stored and shipped at room temperature and should never be frozen. During warmer months, the blocks should be shipped refrigerated with a frozen gel pack to prevent melting. Formalin-fixed wet tissue should be stored and shipped at room temperature. Length of time in formalin might adversely affect assay results. If sending glass slides with sections from paraffin-embedded blocks, 10-12 treated (e.g., with silane or poly-L-lysine) glass slides with sections of affected tissue cut at a thickness of 3 µm (no greater than 5 µm) should be submitted. These may be shipped at room temperature or refrigerated on cold packs and should never be frozen.
# Summary of Q Fever Diagnosis
# Treatment and Management
# Acute Q Fever in Adults
The majority of acute Q fever cases resolve spontaneously within 2-3 weeks, even without treatment. Nevertheless, symptomatic patients with confirmed or suspected acute Q fever, including children with severe infections, should be treated with doxycycline (Table 2). Doxycycline is the most effective treatment for Q fever; it is most effective when given within the first 3 days of symptoms, shortens the illness, and reduces the risk for severe complications (15,16). Other antibiotic regimens that can be used if doxycycline is contraindicated because of allergies include moxifloxacin, clarithromycin, trimethoprim/sulfamethoxazole, and rifampin (75,140,141). Treatment for acute Q fever is not routinely recommended for asymptomatic persons or for those whose symptoms have resolved, although it might be considered in those at high risk for developing chronic Q fever. In one study of acute Q fever patients who were monitored over time for progression to chronic disease, those who eventually developed chronic Q fever were more likely to have not received appropriate doxycycline treatment during their acute illness because their symptoms were mild or absent (15).
Patients with acute Q fever should undergo a careful clinical assessment to determine whether they might be at risk for progression to chronic Q fever because patients at high risk require closer observation during the convalescent period. A thorough clinical assessment should include review of possible immunosuppressive conditions, pregnancy testing when appropriate, and assessment for vascular and heart valve defects because certain valvular lesions might not be detectable by auscultation (142). A medical history and clinical examination alone might not be sufficient to identify patients with existing heart valve defects (143,144); health-care providers should use their clinical judgment to determine the most appropriate tools for assessment of risk (Figure).
# Chronic Q Fever in Adults
Management of chronic Q fever requires both serologic and clinical monitoring. The same laboratory and testing procedures should be used for serial serologic monitoring because variations among laboratories might give an inaccurate appearance of significant titer decreases or increases. Patients should be advised to seek medical care immediately should symptoms occur at any time throughout their lives, because those with valvular defects or vascular abnormalities remain at high risk for chronic Q fever for life. In addition, patients who have been infected with acute Q fever and develop valvular disease later in life from any cause are at risk for a recrudescent infection that can result in chronic Q fever endocarditis.
Patients who are healthy and have no identified risk factor for chronic illness should receive a clinical and serologic evaluation approximately 6 months after diagnosis of acute infection to identify potential progression to chronic disease. Phase I and phase II antibody titers should be measured as part of this evaluation.
It is not uncommon for patients with an acute Q fever infection to develop serologic profiles of chronic Q fever that eventually regress. Clinical evidence of chronic Q fever must accompany increased phase I IgG antibody titers to confirm a chronic diagnosis, and treatment should not be given based on increased titers alone. In all monitored patients, diagnosis of chronic Q fever is based on a rising or elevated phase I IgG titer (typically >1:1024) and an identifiable nidus of infection (e.g., endocarditis, vascular infection, or osteomyelitis). Any symptomatic patient with serologic evidence of chronic Q fever (phase I IgG antibody titer >1:1024) should be given a thorough clinical assessment to identify potential organ infection. The phase I IgG antibody titer might be higher than the phase II IgG titer; however, this is not a diagnostic criterion because patients with chronic Q fever might retain extremely high phase II IgG titers that equal or exceed phase I IgG titers (136).
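Where these criteria are captured in a surveillance or records system, the rule reduces to a conjunction of the serologic threshold and an identified nidus. The following Python sketch is illustrative only; the names are hypothetical, and the logic deliberately does not treat an elevated titer alone as diagnostic, consistent with the text.

```python
# Illustrative sketch; hypothetical names, not a diagnostic instrument.
CHRONIC_PHASE_I_IGG_TITER = 1024  # reciprocal titer threshold used here

def consistent_with_chronic_q_fever(phase_i_igg_titer: int,
                                    nidus_identified: bool) -> bool:
    """Chronic Q fever diagnosis requires BOTH a rising or elevated phase I
    IgG titer (typically >1:1024) AND an identifiable nidus of infection
    (e.g., endocarditis, vascular infection, or osteomyelitis)."""
    return phase_i_igg_titer >= CHRONIC_PHASE_I_IGG_TITER and nidus_identified

# An elevated titer without an identified nidus is NOT diagnostic.
assert not consistent_with_chronic_q_fever(2048, nidus_identified=False)
```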
Adults who receive a diagnosis of chronic Q fever should receive a treatment regimen of doxycycline and hydroxychloroquine (100 mg of doxycycline twice daily with 200 mg of hydroxychloroquine three times daily); duration of treatment might vary by the site of infection (Table 2) (6). A combination regimen is necessary to eradicate the organism because hydroxychloroquine raises the pH in the acidified phagosomal compartment and, in combination with doxycycline, has been shown to have in vitro bactericidal activity against C. burnetii. Because of potential retinal toxicity from long-term use of hydroxychloroquine, a baseline ophthalmic examination should be performed before treatment and every 6 months thereafter. Both doxycycline and hydroxychloroquine can cause photohypersensitivity, and hypersensitivity to sunlight is a potential complication with acute and chronic treatment regimens. Hydroxychloroquine is contraindicated in persons with glucose-6-phosphate dehydrogenase deficiency and persons with retinal or visual field deficits.
During treatment for chronic Q fever, patients should receive monthly serologic testing for C. burnetii phase I and II IgG and IgM antibodies and monthly clinical evaluations. If an appropriate treatment response is not achieved, monthly monitoring of hydroxychloroquine plasma levels (which should be maintained at 0.8-1.2 µg/mL) and doxycycline plasma levels (which should be maintained at >5 µg/mL) should also be performed during treatment (145,146).
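As a trivial worked example of the plasma-level targets above, the following Python sketch flags out-of-range results; the thresholds are copied from the text, and the function and parameter names are hypothetical assumptions.

```python
# Illustrative only; target ranges copied from the text, names hypothetical.
def out_of_range_drug_levels(hydroxychloroquine_ug_ml: float,
                             doxycycline_ug_ml: float) -> list[str]:
    """Return the monitored drugs whose plasma levels fall outside the
    stated targets (hydroxychloroquine 0.8-1.2 ug/mL; doxycycline >5 ug/mL)."""
    flags = []
    if not 0.8 <= hydroxychloroquine_ug_ml <= 1.2:
        flags.append("hydroxychloroquine")
    if not doxycycline_ug_ml > 5.0:
        flags.append("doxycycline")
    return flags

assert out_of_range_drug_levels(1.0, 6.0) == []
assert out_of_range_drug_levels(0.5, 4.0) == ["hydroxychloroquine", "doxycycline"]
```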
Treatment should continue for at least 18 months for native valve infections and at least 24 months for prosthetic valve infections (97). Although treatment of vascular infections, such as infected aneurysms or grafts, is less clearly defined because of the smaller patient group, the duration of antibiotic therapy reported in recovered patients is similar (18-24 months) (104). Early surgical intervention improves patient survival and might be necessary to remove an infected graft if the patient does not respond to antibiotic therapy (104,105). Treatment and management of rarer manifestations of chronic disease (e.g., osteoarticular infections) depend on clinical and serologic response.
# Acute and Chronic Q Fever in Pregnant Women
For pregnant women who receive an acute Q fever diagnosis during pregnancy, treatment with trimethoprim/sulfamethoxazole throughout pregnancy has been shown to significantly decrease the risk for adverse consequences for the fetus (75). Up to 81% of untreated infected pregnant women might have adverse pregnancy outcomes (75).
Although approximately 40% of pregnant women who receive long-term trimethoprim/sulfamethoxazole treatment might still experience adverse outcomes, complications are more likely to be limited to intrauterine growth retardation and premature delivery instead of stillbirth or miscarriage (75). Long-term trimethoprim/sulfamethoxazole treatment during pregnancy has decreased the risk for conversion to chronic Q fever in the mother and prevented adverse pregnancy events in subsequent pregnancies (75).
Doxycycline is classified as a category D drug because of demonstrated concerns about the effects of tetracyclines on the bone structure and dentition of the developing fetus (see drug categories for pregnancy at . nih.gov/pregnancycategories.htm). An effective alternative, trimethoprim/sulfamethoxazole, has been used as a treatment in pregnant women who received an acute Q fever diagnosis, although the drug is classified as a category C drug. The use of trimethoprim/sulfamethoxazole during pregnancy might increase the risk for congenital abnormalities (primarily urinary tract and cardiovascular abnormalities) because of antifolate effects (148), and concomitant use of folic acid is recommended. Research to assess the potential fetal risk from trimethoprim/sulfamethoxazole during pregnancy has been inconclusive (149).
Because pregnant women with acute Q fever are considered to be at high risk for chronic Q fever infection or recrudescent infections activated during subsequent pregnancies, patients should be monitored after delivery for postpartum progression to chronic disease and during subsequent pregnancies. Although rare, the development of Q fever endocarditis in a pregnant woman presents a difficult clinical dilemma because the safety of the treatment of choice (doxycycline and hydroxychloroquine) has not been evaluated during pregnancy. Health-care providers who are treating chronic Q fever endocarditis during pregnancy should consult with an expert in infectious diseases.
Women who are treated for acute Q fever during pregnancy should be monitored similarly to other patients at high risk for progression to chronic disease (e.g., serologic monitoring at 3, 6, 12, 18, and 24 months after delivery). Women should be advised of potential risks to the fetus should they become pregnant during the monitoring or treatment period. In one study, seven women treated for chronic Q fever with doxycycline and hydroxychloroquine for at least 1 year had normal subsequent pregnancies with no recurrent miscarriages (81). Q fever serologic testing should be resumed for women previously treated during pregnancy who become pregnant again during this 2-year period; reinitiation of long-term trimethoprim/sulfamethoxazole is indicated when IgG titers demonstrate a fourfold rise indicating a recrudescent infection, even if other signs or a definite nidus of infection cannot be identified. In these women, the nidus of infection is assumed to be the reproductive system, and the only clinical sign might be an adverse pregnancy event in a subsequent pregnancy.
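To make the follow-up schedule concrete (serologic monitoring at 3, 6, 12, 18, and 24 months after delivery), here is a minimal Python sketch of the date arithmetic; the helper names and the month-end clamping behavior are illustrative assumptions, not part of the guidance.

```python
# Illustrative sketch; hypothetical helpers, not a clinical scheduling tool.
import calendar
from datetime import date

MONITORING_MONTHS = (3, 6, 12, 18, 24)  # months after delivery, per the text

def add_months(d: date, months: int) -> date:
    """Advance a date by calendar months, clamping to the month's last day."""
    years_carry, month0 = divmod(d.month - 1 + months, 12)
    year = d.year + years_carry
    day = min(d.day, calendar.monthrange(year, month0 + 1)[1])
    return date(year, month0 + 1, day)

def serologic_monitoring_dates(delivery: date) -> list[date]:
    return [add_months(delivery, m) for m in MONITORING_MONTHS]

# Example: a delivery on 2013-01-31 yields a first draw on 2013-04-30.
print(serologic_monitoring_dates(date(2013, 1, 31)))
```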
# Acute and Chronic Q Fever in Children
# Occupational Exposure and Prevention
# Overview
Certain occupations, and their associated institutions and businesses, are associated with increased risk for exposure to C. burnetii. Multiple Q fever outbreaks have been reported among workers in slaughterhouses, farms, animal research facilities, military units, and, rarely, hospitals and diagnostic laboratories (12,151-156). Employees in high-risk occupations should be educated about the risk for exposure and the clinical presentation of Q fever. Educational efforts should describe groups vulnerable to development of chronic Q fever, such as workers who have a preexisting valvulopathy, prosthetic heart valve, vascular prosthesis, or aneurysm; who are pregnant or might become pregnant; or who are immunosuppressed, because these employees have a higher risk for a severe outcome or death if infected. Although protection for at-risk workers can be provided by Q fever vaccination, a licensed vaccine for humans is commercially available only in Australia (157). Therefore, most workers in high-risk occupations in the United States are not vaccinated.
Transmission of C. burnetii to health-care personnel has been rarely reported (158,159). One obstetrician was infected through contact with the birth fluids of an infected parturient woman (48). Hospital personnel have become infected after autopsies of patients with Q fever, although the infection control precautions used, if any, are unknown (49,160).
Adherence to standard precautions is recommended to prevent Q fever infection in health-care personnel during routine care (161). During autopsies of patients who have died of Q fever, health-care personnel should use either a BSL-3 facility or the barrier precautions of BSL-2 combined with the negative airflow and respiratory precautions of BSL-3, as recommended by the CDC Guidelines for Safe Work Practices in Human and Animal Medical Diagnostic Laboratories (162). During procedures that put health-care personnel at risk for infection from splashing of infected material, such as the delivery of an infant from an infected woman, standard precautions including the use of a face mask and eye protection or a face shield are recommended. Care should be used when handling soiled laundry (e.g., bedding, towels, and personal clothing) of Q fever patients. Soiled laundry should not be shaken or otherwise handled in a way that might aerosolize infectious particles.
During any procedure that might generate aerosols of infectious materials (e.g., a procedure involving use of a surgical power instrument such as an oscillating bone saw) in a patient with suspected or confirmed Q fever, health-care personnel should take additional precautions.

No recent data support the use of postexposure prophylaxis outside of an experimental setting, and the use of prophylactic antimicrobial agents is not routinely recommended for workers after a known or potential exposure and before symptom onset. In a 1956 U.S. military study, five men who received tetracycline prophylaxis 8-12 days after a C. burnetii challenge exposure had asymptomatic infection; five equally exposed men administered tetracycline prophylaxis 1 day after exposure had delayed symptom onset (130). One report theorized that providing postexposure prophylaxis after a bioterrorism release of C. burnetii would be useful if the timing of exposure were known (171). However, the recommendation was based on the limited data provided by the 1956 study. Prophylaxis prevented symptomatic illness in this study but did not prevent infection. In persons at high risk, the risk for chronic Q fever development is still present regardless of whether symptomatic or asymptomatic infection occurs. Because of the limited treatment duration in the 1956 study, the lack of additional studies verifying its findings, and the use of oxytetracycline instead of doxycycline, the benefit of prophylactic antimicrobial agents is questionable, and they are therefore not recommended.
A daily fever monitoring log should be kept for a minimum of 3 weeks after exposure to C. burnetii. The incubation period for Q fever is dose dependent: the majority of infected persons have symptom onset 2-3 weeks after exposure, although onset can occur up to 6 weeks after exposure. If fever occurs during the monitoring period, immediate treatment with doxycycline should be administered and diagnostic testing should be performed. Treatment within 24 hours of fever onset is extremely effective in shortening illness duration and reducing symptom severity (16,130). Baseline serologic testing can be performed to evaluate previous infection status, with a convalescent sample drawn 6 weeks later to determine whether asymptomatic seroconversion has occurred. Although asymptomatic infections do not routinely require treatment, even asymptomatic infection carries a risk for progression to chronic disease in groups at high risk; therefore, treatment for asymptomatic infection might be considered in these groups. Determination of infection status might provide useful data to guide future health management.
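As a small illustration of the post-exposure timeline described above, the Python sketch below computes the key dates; the assumption that baseline serology is drawn when the exposure is recognized, and all names, are hypothetical.

```python
# Illustrative sketch; hypothetical names, not an occupational health protocol.
from datetime import date, timedelta

def post_exposure_timeline(exposure_recognized: date) -> dict[str, date]:
    """Key dates from the text: daily fever log for a minimum of 3 weeks
    after exposure, and a convalescent serum drawn 6 weeks after baseline."""
    return {
        "fever_log_through": exposure_recognized + timedelta(weeks=3),
        "baseline_serology": exposure_recognized,  # assumed drawn at recognition
        "convalescent_serology": exposure_recognized + timedelta(weeks=6),
    }

print(post_exposure_timeline(date(2013, 3, 1)))
```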
# Summary of Occupational Exposure to Q Fever
# Surveillance and Reporting
Q fever is a nationally notifiable disease in the United States. Health-care providers should report suspected or confirmed cases through local or state reporting mechanisms in place for notifiable disease conditions. Many state laboratories have systems in place that automatically report specific diseases, although this varies by state. The most recent human case definition, revised in 2009, provides separate reporting categories for acute and chronic Q fever (Table 3). The CDC case definition is used for national reporting as a public health surveillance tool and is not intended for clinical diagnosis. A medical diagnosis is made to treat a patient and should consider all aspects of the illness. Surveillance case definitions are used for standardization, not patient care.
National surveillance for Q fever in the United States relies on accurate and timely reporting of cases by health-care providers. When health-care providers identify a potential case of acute or chronic Q fever, they should notify the local or state health department, which can assist with obtaining appropriate diagnostic testing. Epidemiologic and clinical patient information is reported through state health departments to CDC on a standard, confidential case report form (Appendix D). When reporting cases of Q fever, clinical signs and symptoms should be included as well as laboratory results. CDC compiles reports of Q fever from state health departments including diagnosis, date of onset, and basic demographic and geographic data, and reports the summary data.
Although Q fever is a zoonotic disease, infection in animals is not considered reportable to national agricultural authorities. However, many states consider the disease reportable when it is diagnosed in animals. Veterinary health authorities should follow state-specific reporting guidelines.
# Summary of Q Fever Surveillance and Reporting
# Appendix A
# Q Fever Key Point Summaries
# Acute Clinical Features
- Prolonged fever (>10 days) with a normal leukocyte count, thrombocytopenia, and increased liver enzymes is suggestive of acute Q fever infection.
- Children with Q fever generally have a milder acute illness than adults.
- Children are more likely to have a rash than adults. Rash has been reported in up to 50% of children with acute Q fever.
- Women infected with Q fever during pregnancy are at increased risk for miscarriage and preterm delivery.
- Women of child-bearing age who receive a diagnosis of Q fever can benefit from pregnancy screening and counseling to guide health-care management decisions.
# Chronic Clinical Features
- Persons at high risk for development of chronic Q fever include those with preexisting valvular heart disease, vascular grafts, or arterial aneurysms.
- Infection during pregnancy and immunosuppression (e.g., from chemotherapy) have both been linked to chronic Q fever development.
- Endocarditis and infections of aneurysms or vascular prostheses are the most common forms of chronic Q fever and generally are fatal if untreated.
- Chronic Q fever is rarely reported in children. In contrast with adults, osteomyelitis is one of the most common findings in children with chronic Q fever.
# Diagnosis
- Polymerase chain reaction (PCR) of whole blood or serum provides rapid results and can be used to diagnose acute Q fever in approximately the first 2 weeks after symptom onset but before antibiotic administration.
- A fourfold increase in phase II immunoglobulin G (IgG) antibody titer by immunofluorescent assay (IFA) of paired acute and convalescent specimens is the diagnostic gold standard to confirm a diagnosis of acute Q fever.
- A negative acute titer does not rule out Q fever because an IFA is negative during the first stages of acute illness. Most patients seroconvert by the third week of illness.
- A single convalescent sample can be tested using IFA in patients past the acute stage of illness; however, a demonstrated fourfold rise between acute and convalescent samples has much higher sensitivity and specificity than a single elevated convalescent titer.
- Diagnosis of chronic Q fever requires demonstration of an increased phase I IgG antibody titer (>1:1024) and an identifiable persistent infection (e.g., endocarditis).
- PCR, immunohistochemistry, or culture of affected tissue can provide definitive confirmation of infection by Coxiella burnetii.
- Test specimens can be referred to CDC through state public health laboratories.
# Treatment and Management
- Because of the delay in seroconversion often necessary to confirm diagnosis, antibiotic treatment should never be withheld pending laboratory tests or discontinued on the basis of a negative acute specimen.
- In contrast, treatment of chronic Q fever should be initiated only after diagnostic confirmation.
- Treatment for acute or chronic Q fever should be given only in clinically compatible cases and not based on elevated serologic titers alone (see Pregnancy section for exception).
- Doxycycline is the drug of choice; 2 weeks of treatment is recommended for adults, for children aged ≥8 years, and for severe infections in patients of any age.
- Children aged <8 years with uncomplicated illness may be treated with trimethoprim/sulfamethoxazole or a shorter duration (5 days) of doxycycline.
- Women who are pregnant when acute Q fever is diagnosed should be treated with trimethoprim/sulfamethoxazole throughout the duration of pregnancy.
- Serologic monitoring is recommended following acute Q fever infection to assess possible progression to chronic infection. The recommended schedule for monitoring is based on the patient's risk for chronic infection.
# Occupational Exposure
"id": "225f31934ee47044b4125cb0027c10ad2027d2da",
"source": "cdc",
"title": "None",
"url": "None"
} |
The U.S. national civilian vulnerability to the deliberate use of biological and chemical agents has been highlighted by recognition of substantial biological weapons development programs and arsenals in foreign countries, attempts to acquire or possess biological agents by militants, and high-profile terrorist attacks. Evaluation of this vulnerability has focused on the role public health will have in detecting and managing the probable covert biological terrorist incident, with the realization that the U.S. local, state, and federal infrastructure is already strained as a result of other important public health problems. In partnership with representatives of local and state health departments, other federal agencies, and medical and public health professional associations, CDC has developed a strategic plan to address the deliberate dissemination of biological or chemical agents. The plan contains recommendations to reduce U.S. vulnerability to biological and chemical terrorism: preparedness planning, detection and surveillance, laboratory analysis, emergency response, and communication systems. Training and research are integral components for achieving these recommendations. Success of the plan hinges on strengthening the relationships between medical and public health professionals and on building new partnerships with emergency management, the military, and law enforcement professionals.

# INTRODUCTION
An act of biological or chemical terrorism might range from dissemination of aerosolized anthrax spores to food product contamination, and predicting when and how such an attack might occur is not possible. However, the possibility of biological or chemical terrorism should not be ignored, especially in light of events during the past 10 years (e.g., the sarin gas attack in the Tokyo subway and the discovery of military bioweapons programs in Iraq and the former Soviet Union). Preparing the nation to address this threat is a formidable challenge, but the consequences of being unprepared could be devastating.
The public health infrastructure must be prepared to prevent illness and injury that would result from biological and chemical terrorism, especially a covert terrorist attack. As with emerging infectious diseases, early detection and control of biological or chemical attacks depends on a strong and flexible public health system at the local, state, and federal levels. In addition, primary health-care providers throughout the United States must be vigilant because they will probably be the first to observe and report unusual illnesses or injuries.
This report is a summary of the recommendations made by CDC's Strategic Planning Workgroup in Preparedness and Response to Biological and Chemical Terrorism: A Strategic Plan (CDC, unpublished report, 2000), which outlines steps for strengthening public health and health-care capacity to protect the United States against these dangers. This strategic plan marks the first time that CDC has joined with law enforcement, intelligence, and defense agencies in addition to traditional CDC partners to address a national security threat.
As a reflection of the need for broad-based public health involvement in terrorism preparedness and planning, staff from CDC's centers, institute, and offices participated in developing the strategic plan. The Agency for Toxic Substances and Disease Registry (ATSDR) is also participating with CDC in this effort and will provide expertise in the area of industrial chemical terrorism. In this report, the term CDC includes ATSDR when activities related to chemical terrorism are discussed. In addition, colleagues from local, state, and federal agencies; emergency medical services (EMS); professional societies; universities and medical centers; and private industry provided suggestions and constructive criticism.
Combating biological and chemical terrorism will require capitalizing on advances in technology, information systems, and medical sciences. Preparedness will also require a re-examination of core public health activities (e.g., disease surveillance) in light of these advances. Preparedness efforts by public health agencies and primary health-care providers to detect and respond to biological and chemical terrorism will have the added benefit of strengthening the U.S. capacity for identifying and controlling injuries and emerging infectious diseases.
# U.S. VULNERABILITY TO BIOLOGICAL AND CHEMICAL TERRORISM
Terrorist incidents in the United States and elsewhere involving bacterial pathogens (3), nerve gas (1), and a lethal plant toxin (i.e., ricin) (4) have demonstrated that the United States is vulnerable to biological and chemical threats as well as explosives. Recipes for preparing "homemade" agents are readily available (5), and reports of arsenals of military bioweapons (2) raise the possibility that terrorists might have access to highly dangerous agents, which have been engineered for mass dissemination as small-particle aerosols. Such agents as the variola virus, the causative agent of smallpox, are highly contagious and often fatal. Responding to large-scale outbreaks caused by these agents will require the rapid mobilization of public health workers, emergency responders, and private health-care providers. Large-scale outbreaks will also require rapid procurement and distribution of large quantities of drugs and vaccines, which must be available quickly.
# OVERT VERSUS COVERT TERRORIST ATTACKS
In the past, most planning for emergency response to terrorism has been concerned with overt attacks (e.g., bombings). Chemical terrorism acts are likely to be overt because the effects of chemical agents, absorbed through inhalation or through the skin or mucous membranes, are usually immediate and obvious. Such attacks elicit an immediate response from police, fire, and EMS personnel.
In contrast, attacks with biological agents are more likely to be covert. They present different challenges and require an additional dimension of emergency planning that involves the public health infrastructure (Box 1). Covert dissemination of a biological agent in a public place will not have an immediate impact because of the delay between exposure and onset of illness (i.e., the incubation period). Consequently, the first casualties of a covert attack probably will be identified by physicians or other primary healthcare providers. For example, in the event of a covert release of the contagious variola virus, patients will appear in doctors' offices, clinics, and emergency rooms during the first or second week, complaining of fever, back pain, headache, nausea, and other symptoms of what initially might appear to be an ordinary viral infection. As the disease progresses, these persons will develop the papular rash characteristic of early-stage smallpox, a rash that physicians might not recognize immediately. By the time the rash becomes pustular and patients begin to die, the terrorists would be far away and the disease disseminated through the population by person-to-person contact. Only a short window of opportunity will exist between the time the first cases are identified and a second wave of the population becomes ill. During that brief period, public health officials will need to determine that an attack has occurred, identify the organism, and prevent more casualties through prevention strategies (e.g., mass vaccination or prophylactic treatment). As person-to-person contact continues, successive waves of transmission could carry infection to other worldwide localities. These issues might also be relevant for other person-to-person transmissible etiologic agents (e.g., plague or certain viral hemorrhagic fevers).
# BOX 1. Local public health agency preparedness

- Because the initial detection of a covert biological or chemical attack will probably occur at the local level, disease surveillance systems at state and local health agencies must be capable of detecting unusual patterns of disease or injury, including those caused by unusual or unknown threat agents.
- Because the initial response to a covert biological or chemical attack will probably be made at the local level, epidemiologists at state and local health agencies must have expertise and resources for responding to reports of clusters of rare, unusual, or unexplained illnesses.
Certain chemical agents can also be delivered covertly through contaminated food or water. In 1999, the vulnerability of the food supply was illustrated in Belgium, when chickens were unintentionally exposed to dioxin-contaminated fat used to make animal feed (6 ). Because the contamination was not discovered for months, the dioxin, a cancer-causing chemical that does not cause immediate symptoms in humans, was probably present in chicken meat and eggs sold in Europe during early 1999. This incident underscores the need for prompt diagnoses of unusual or suspicious health problems in animals as well as humans, a lesson that was also demonstrated by the recent outbreak of mosquitoborne West Nile virus in birds and humans in New York City in 1999. The dioxin episode also demonstrates how a covert act of foodborne biological or chemical terrorism could affect commerce and human or animal health.
# FOCUSING PREPAREDNESS ACTIVITIES
Early detection of and response to biological or chemical terrorism are crucial. Without special preparation at the local and state levels, a large-scale attack with variola virus, aerosolized anthrax spores, a nerve gas, or a foodborne biological or chemical agent could overwhelm the local and perhaps national public health infrastructure. Large numbers of patients, including both infected persons and the "worried well," would seek medical attention, with a corresponding need for medical supplies, diagnostic tests, and hospital beds. Emergency responders, health-care workers, and public health officials could be at special risk, and everyday life would be disrupted as a result of widespread fear of contagion.
Preparedness for terrorist-caused outbreaks and injuries is an essential component of the U.S. public health surveillance and response system, which is designed to protect the population against any unusual public health event (e.g., influenza pandemics, contaminated municipal water supplies, or intentional dissemination of Yersinia pestis, the causative agent of plague). The epidemiologic skills, surveillance methods, diagnostic techniques, and physical resources required to detect and investigate unusual or unknown diseases, as well as syndromes or injuries caused by chemical accidents, are similar to those needed to identify and respond to an attack with a biological or chemical agent. However, public health agencies must also prepare for the special features a terrorist attack probably would have (e.g., mass casualties or the use of rare agents) (Boxes 2-5). Terrorists might use combinations of these agents, attack in more than one location simultaneously, use new agents, or use organisms that are not on the critical list (e.g., common, drug-resistant, or genetically engineered pathogens). Lists of critical biological and chemical agents will need to be modified as new information becomes available. In addition, each state and locality will need to adapt the lists to local conditions and preparedness needs by using the criteria provided in CDC's strategic plan.
Potential biological and chemical agents are numerous, and the public health infrastructure must be equipped to quickly resolve crises that would arise from a biological or chemical attack. However, to best protect the public, the preparedness efforts must be focused on agents that might have the greatest impact on U.S. health and security, especially agents that are highly contagious or that can be engineered for widespread dissemination via small-particle aerosols. Preparing the nation to address these dangers is a major challenge to U.S. public health systems and health-care providers. Early detection requires increased biological and chemical terrorism awareness among frontline health-care providers because they are in the best position to report suspicious illnesses and injuries. Also, early detection will require improved communication systems between those providers and public health officials. In addition, state and local
# Category C

Third-highest-priority agents include emerging pathogens that could be engineered for mass dissemination in the future because of
- availability;
- ease of production and dissemination; and
- potential for high morbidity and mortality and major health impact.
# Preparedness and Prevention
Detection, diagnosis, and mitigation of illness and injury caused by biological and chemical terrorism constitute a complex process that involves numerous partners and activities. Meeting this challenge will require special emergency preparedness in all cities and states. CDC will provide public health guidelines, support, and technical assistance to local and state public health agencies as they develop coordinated preparedness plans and response protocols. CDC also will provide self-assessment tools for terrorism preparedness, including performance standards, attack simulations, and other exercises. In addition, CDC will encourage and support applied research to develop innovative tools and strategies to prevent or mitigate illness and injury caused by biological and chemical terrorism.

# BOX 5. (Continued) Chemical agents

- Pulmonary agents: phosgene, chlorine, vinyl chloride
- Incapacitating agents: BZ (3-quinuclidinyl benzilate)
- Pesticides, persistent and nonpersistent
- Dioxins, furans, and polychlorinated biphenyls (PCBs)
- Explosive nitro compounds and oxidizers: ammonium nitrate combined with fuel oil
- Flammable industrial gases and liquids: gasoline, propane
- Poison industrial gases, liquids, and solids: cyanides, nitriles
- Corrosive industrial acids and bases: nitric acid, sulfuric acid

Because of the hundreds of new chemicals introduced internationally each month, treating exposed persons by clinical syndrome rather than by specific agent is more useful for public health planning and emergency medical response purposes. Public health agencies and first responders might render the most aggressive, timely, and clinically relevant treatment possible by using treatment modalities based on syndromic categories (e.g., burns and trauma, cardiorespiratory failure, neurologic damage, and shock). These activities must be linked with authorities responsible for environmental sampling and decontamination.
# Detection and Surveillance
Early detection is essential for ensuring a prompt response to a biological or chemical attack, including the provision of prophylactic medicines, chemical antidotes, or vaccines. CDC will integrate surveillance for illness and injury resulting from biological and chemical terrorism into the U.S. disease surveillance systems, while developing new mechanisms for detecting, evaluating, and reporting suspicious events that might represent covert terrorist acts. As part of this effort, CDC and state and local health agencies will form partnerships with front-line medical personnel in hospital emergency departments, hospital care facilities, poison control centers, and other offices to enhance detection and reporting of unexplained injuries and illnesses as part of routine surveillance mechanisms for biological and chemical terrorism.
# Diagnosis and Characterization of Biological and Chemical Agents
CDC and its partners will create a multilevel laboratory response network for bioterrorism (LRNB). That network will link clinical labs to public health agencies in all states, districts, territories, and selected cities and counties and to state-of-the-art facilities that can analyze biological agents (Figure 1). As part of this effort, CDC will transfer diagnostic technology to state health laboratories and others who will perform initial testing. CDC will also create an in-house rapid-response and advanced technology (RRAT) laboratory. This laboratory will provide around-the-clock diagnostic confirmatory and reference support for terrorism response teams. This network will include the regional chemical laboratories for diagnosing human exposure to chemical agents and provide links with other departments (e.g., the U.S. Environmental Protection Agency, which is responsible for environmental sampling).
# Response
A comprehensive public health response to a biological or chemical terrorist event involves epidemiologic investigation, medical treatment and prophylaxis for affected persons, and the initiation of disease prevention or environmental decontamination measures. CDC will assist state and local health agencies in developing resources and expertise for investigating unusual events and unexplained illnesses. In the event of a confirmed terrorist attack, CDC will coordinate with other federal agencies in accord with Presidential Decision Directive (PDD) 39. PDD 39 designates the Federal Bureau of Investigation as the lead agency for the crisis plan and charges the Federal Emergency Management Agency with ensuring that the federal response management is adequate to respond to the consequences of terrorism (8). If requested by a state health agency, CDC will deploy response teams to investigate unexplained or suspicious illnesses or unusual etiologic agents and provide on-site consultation regarding medical management and disease control. To ensure the availability, procurement, and delivery of medical supplies, devices, and equipment that might be needed to respond to terrorist-caused illness or injury, CDC will maintain a national pharmaceutical stockpile.
[FIGURE 1. Schematic of the multilevel Laboratory Response Network for Bioterrorism: Level A, B, C, and D laboratories linked through specimen testing and referral and through training and consultation, with agent-specific laboratories and a rapid-response and advanced technology laboratory at the highest level.]

# Functional Levels of the Laboratory Response Network for Bioterrorism
Level A: Early detection of intentional dissemination of biological agents -Level A laboratories will be public health and hospital laboratories with low-level biosafety facilities. Level A laboratories will use clinical data and standard microbiological tests to decide which specimens and isolates should be forwarded to higher level biocontainment laboratories. Level A laboratory staff will be trained in the safe collection, packaging, labeling, and shipping of samples that might contain dangerous pathogens.
Level B: Core capacity for agent isolation and presumptive-level testing of suspect specimens -Level B laboratories will be state and local public health agency laboratories that can test for specific agents and forward organisms or specimens to higher level biocontainment laboratories. Level B laboratories will minimize false positives and protect Level C laboratories from overload. Ultimately, Level B laboratories will maintain capacity to perform confirmatory testing and characterize drug susceptibility.
Level C: Advanced capacity for rapid identification -Level C laboratories, which could be located at state health agencies, academic research centers, or federal facilities, will perform advanced and specialized testing. Ultimately, Level C laboratories will have the capacity to perform toxicity testing and employ advanced diagnostic technologies (e.g., nucleic acid amplification and molecular fingerprinting). Level C laboratories will participate in the evaluation of new tests and reagents and determine which assays could be transferred to Level B laboratories.
Level D: Highest level containment and expertise in the diagnosis of rare and dangerous biological agents -Level D laboratories will be specialized federal laboratories with unique experience in diagnosis of rare diseases (e.g., smallpox and Ebola). Level D laboratories also will develop or evaluate new tests and methods and have the resources to maintain a strain bank of biological agents. Level D laboratories will maintain the highest biocontainment facilities and will be able to conduct all tests performed in Level A, B, and C laboratories, as well as additional confirmatory testing and characterization, as needed. They will also have the capacity to detect genetically engineered agents.
# Communication Systems
U.S. preparedness to mitigate the public health consequences of biological and chemical terrorism depends on the coordinated activities of well-trained health-care and public health personnel throughout the United States who have access to up-to-the-minute emergency information. Effective communication with the public through the news media will also be essential to limit terrorists' ability to induce public panic and disrupt daily life. During the next 5 years, CDC will work with state and local health agencies to develop a) a state-of-the-art communication system that will support disease surveillance; b) rapid notification and information exchange regarding disease outbreaks that are possibly related to bioterrorism; c) dissemination of diagnostic results and emergency health information; and d) coordination of emergency response activities. Through this network and similar mechanisms, CDC will provide terrorism-related training to epidemiologists and laboratorians, emergency responders, emergency department personnel and other front-line health-care providers, and health and safety personnel.
# PARTNERSHIPS AND IMPLEMENTATION
# Preparedness and Prevention
- Maintain a public health preparedness and response cooperative agreement that provides support to state health agencies that are working with local agencies in developing coordinated bioterrorism plans and protocols.
- Establish a national public health distance-learning system that provides biological and chemical terrorism preparedness training to health-care workers and to state and local public health workers.
- Disseminate public health guidelines and performance standards on biological and chemical terrorism preparedness planning for use by state and local health agencies.
# Detection and Surveillance
- Strengthen state and local surveillance systems for illness and injury resulting from pathogens and chemical substances that are on CDC's critical agents list.
- Develop new algorithms and statistical methods for searching medical databases on a real-time basis for evidence of suspicious events.
- Establish criteria for investigating and evaluating suspicious clusters of human or animal disease or injury and triggers for notifying law enforcement of suspected acts of biological or chemical terrorism.
# Diagnosis and Characterization of Biological and Chemical Agents
- Establish a multilevel laboratory response network for bioterrorism that links public health agencies to advanced-capacity facilities for the identification and reporting of critical biological agents.
- Establish regional chemical terrorism laboratories that will provide diagnostic capacity during terrorist attacks involving chemical agents.
- Establish a rapid-response and advanced technology laboratory within CDC to provide around-the-clock diagnostic support to bioterrorism response teams and expedite molecular characterization of critical biological agents.
# Response
- Assist state and local health agencies in organizing response capacities to rapidly deploy in the event of an overt attack or a suspicious outbreak that might be the result of a covert attack.
- Ensure that procedures are in place for rapid mobilization of CDC terrorism response teams that will provide on-site assistance to local health workers, security agents, and law enforcement officers.
- Establish a national pharmaceutical stockpile to provide medical supplies in the event of a terrorist attack that involves biological or chemical agents.

# Communication Systems

- Establish a national electronic infrastructure to improve exchange of emergency health information among local, state, and federal health agencies.
- Implement an emergency communication plan that ensures rapid dissemination of health information to the public during actual, threatened, or suspected acts of biological or chemical terrorism.
- Create a website that disseminates bioterrorism preparedness and training information, as well as other bioterrorism-related emergency information, to public health and health-care workers and the public.
# RECOMMENDATIONS
Implementing CDC's strategic preparedness and response plan by 2004 will ensure the following outcomes:
- U.S. public health agencies and health-care providers will be prepared to mitigate illness and injuries that result from acts of biological and chemical terrorism.
- Public health surveillance for infectious diseases and injuries -including events that might indicate terrorist activity -will be timely and complete, and reporting of suspected terrorist events will be integrated with the evolving, comprehensive networks of the national public health surveillance system.
- The national laboratory response network for bioterrorism will be extended to include facilities in all 50 states. The network will include CDC's environmental health laboratory for chemical terrorism and four regional facilities.
- State and federal public health departments will be equipped with state-of-the-art tools for rapid epidemiological investigation and control of suspected or confirmed acts of biological or chemical terrorism, and a designated stock of terrorism-related medical supplies will be available through a national pharmaceutical stockpile.
- A cadre of well-trained health-care and public health workers will be available in every state. Their terrorism-related activities will be coordinated through a rapid and efficient communication system that links U.S. public health agencies and their partners.
# CONCLUSION
Recent threats and use of biological and chemical agents against civilians have exposed U.S. vulnerability and highlighted the need to enhance our capacity to detect and control terrorist acts. The U.S. must be protected from an extensive range of critical biological and chemical agents, including some that have been developed and stockpiled for military use. Even without the threat of war, investment in national defense ensures preparedness and acts as a deterrent against hostile acts. Similarly, investment in the public health system provides the best civil defense against bioterrorism. Tools developed in response to terrorist threats serve a dual purpose. They help detect rare or unusual disease outbreaks and respond to health emergencies, including naturally occurring outbreaks or industrial injuries that might resemble terrorist events in their unpredictability and ability to cause mass casualties (e.g., a pandemic influenza outbreak or a large-scale chemical spill). Terrorism-preparedness activities described in CDC's plan, including the development of a public health communication infrastructure, a multilevel network of diagnostic laboratories, and an integrated disease surveillance system, will improve our ability to rapidly investigate and control public health threats that emerge in the twenty-first century.
# Recommendations and Reports
# Continuing Education Activity
Sponsored by CDC
# ACCREDITATION

Continuing Medical Education (CME). CDC is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 1.0 hour in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.
# Continuing Nursing Education (CNE).
This activity for 1.2 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.
"id": "84fc26bb87975dbd785755de8a8a3252be5f7c08",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Compendium of Animal Rabies Control, 1994
# National Association of State Public Health Veterinarians, Inc.*
The purpose of this Compendium is to provide rabies information to veterinarians, public health officials, and others concerned with rabies control. These recommendations serve as the basis for animal rabies control programs throughout the United States and facilitate standardization of procedures among jurisdictions, thereby contributing to an effective national rabies control program. This document is reviewed annually and revised as necessary. Immunization procedure recommendations are contained in Part I; all animal rabies vaccines licensed by the United States Department of Agriculture (USDA) and marketed in the United States are listed in Part II; Part III details the principles of rabies control.
# Part I: Recommendations for Immunization Procedures
# A. Vaccine Administration
All animal rabies vaccines should be restricted to use by, or under the direct supervision of, a veterinarian.
# B. Vaccine Selection
In comprehensive rabies control programs, only vaccines with a 3-year duration of immunity should be used. This constitutes the most effective method of increasing the proportion of immunized dogs and cats in any population. (See Part II.)
# C. Route of Inoculation
All vaccines must be administered in accordance with the specifications of the product label or package insert. If administered intramuscularly, it must be at one site in the thigh.
# D. Wildlife Vaccination
Parenteral vaccination of captive wildlife is not recommended because the efficacy of rabies vaccines in such animals has not been established and no vaccine is licensed for wildlife. For this reason, and because virus shedding periods are unknown, wild or exotic carnivores and bats should not be kept as pets. Hybrids (offspring of wild species bred with domestic dogs or cats) are considered wildlife. Zoos and research institutions may establish vaccination programs which attempt to protect valuable animals, but not in lieu of appropriate public health activities that protect humans. When they become available, the use of licensed oral vaccines for the mass immunization of wildlife may be considered in selected situations with state government approval.
# E. Accidental Human Exposure to Vaccine
Accidental inoculation may occur during administration of animal rabies vaccine. Such exposure to inactivated vaccines constitutes no rabies hazard.
# F. Identification of Vaccinated Animals
All agencies and veterinarians should adopt the standard tag system. This practice will aid the administration of local, state, national, and international control procedures. Animal license tags should be distinguishable in shape and color from rabies tags. Anodized aluminum rabies tags should be no less than 0.064 inches in thickness.

# Rabies Tags

a. Licensure. Registration or licensure of all dogs and cats may be used to aid in rabies control. A fee is frequently charged for such licensure, and revenues collected are used to maintain rabies or animal control programs. Vaccination is an essential prerequisite to licensure.
b. Canvassing of Area. House-to-house canvassing by animal control personnel facilitates enforcement of vaccination and licensure requirements.
c. Citations. Citations are legal summonses issued to owners for violations, including the failure to vaccinate or license their animals. The authority for officers to issue citations should be an integral part of each animal control program.
d. Animal Control. All communities should incorporate stray animal control, leash laws, and training of personnel in their programs.
5. Postexposure Management. Any animal bitten or scratched by a wild, carnivorous mammal (or a bat) not available for testing should be regarded as having been exposed to rabies.
a. Dogs and Cats. Unvaccinated dogs and cats exposed to a rabid animal should be euthanized immediately. If the owner is unwilling to have this done, the animal should be placed in strict isolation for 6 months and vaccinated 1 month before being released. Dogs and cats that are currently vaccinated should be revaccinated immediately, kept under the owner's control, and observed for 45 days.
b. Livestock. All species of livestock are susceptible to rabies; cattle and horses are among the most frequently infected of all domestic animals. Livestock exposed to a rabid animal and currently vaccinated with a vaccine approved by USDA for that species should be revaccinated immediately and observed for 45 days. Unvaccinated livestock should be slaughtered immediately. If the owner is unwilling to have this done, the animal should be kept under very close observation for 6 months.
The following are recommendations for owners of unvaccinated livestock exposed to rabid animals:
1) If the animal is slaughtered within 7 days of being bitten, its tissues may be eaten without risk of infection, provided liberal portions of the exposed area are discarded. Federal meat inspectors must reject for slaughter any animal known to have been exposed to rabies within 8 months.
2) Neither tissues nor milk from a rabid animal should be used for human or animal consumption. However, because pasteurization temperatures will inactivate rabies virus, drinking pasteurized milk or eating cooked meat does not constitute a rabies exposure.
3) It is rare to have more than one rabid animal in a herd, and herbivore-to-herbivore transmission is uncommon; therefore, it may not be necessary to restrict the rest of the herd if a single animal has been exposed to or infected by rabies.
c. Other Animals. Other animals bitten by a rabid animal should be euthanized immediately. Such animals currently vaccinated with a vaccine approved by USDA for that species may be revaccinated immediately and placed in strict isolation for at least 90 days.
6. Management of Animals that Bite Humans. A healthy dog or cat that bites a person should be confined and observed for 10 days; it is recommended that rabies vaccine not be administered during the observation period. Such animals should be evaluated by a veterinarian at the first sign of illness during confinement. Any illness in the animal should be reported immediately to the local health department. If signs suggestive of rabies develop, the animal should be humanely killed, its head removed, and the head shipped under refrigeration for examination by a qualified laboratory designated by the local or state health department. Any stray or unwanted dog or cat that bites a person may be humanely killed immediately and the head submitted as described above for rabies examination. Other biting animals that might have exposed a person to rabies should be reported immediately to the local health department. Prior vaccination of an animal may not preclude the necessity for euthanasia and testing if the period of virus shedding is unknown for that species. Management of animals other than dogs and cats depends on the species, the circumstances of the bite, and the epidemiology of rabies in the area.
# C. Control Methods in Wildlife
The public should be warned not to handle wildlife. Wild carnivorous mammals and bats (as well as the offspring of wild species cross-bred with domestic dogs and cats) that bite or otherwise expose people, pets, or livestock should be humanely killed and the head submitted for rabies examination. A person bitten by any wild mammal should immediately report the incident to a physician who can evaluate the need for antirabies treatment. (See current rabies prophylaxis recommendations of the ACIP.)
1. Terrestrial Mammals. Continuous and persistent government-funded programs for trapping or poisoning wildlife are not cost-effective in reducing wildlife rabies reservoirs on a statewide basis. However, limited control in high-contact areas (e.g., picnic grounds, camps, and suburban areas) may be indicated for the removal of selected high-risk species of wildlife.
"id": "141e2d6e5dba4b03d58db1895d22719157b0f235",
"source": "cdc",
"title": "None",
"url": "None"
} |
assess and manage risk in drinking water systems. A Water Safety Plan (WSP) aims to identify hazards to drinking water quality that can be introduced at multiple points from the source to the tap. The WSP does not, however, traditionally provide for identifying hazards that could compromise drinking water quality after it reaches the household tap, such as contamination associated with water collection, storage, and treatment practices within the home. The aim of this manual is to provide guidance on conducting a household survey as part of a Water Safety Plan for organized piped water supply systems in resource-limited settings.

Specific examples intended to guide the planner in designing the survey are provided in the appendices. A summary checklist for survey planning and completion is provided as Appendix A.

# Before You Start

Successful implementation of a household survey for a Water Safety Plan requires good background knowledge of the water delivery system, the survey area, and the population in the service area. Before initiating a survey, it will be helpful to gather the information below to help guide questionnaire development and provide supporting information to the report. Some of this data, particularly the data relating to water quality, may already be collected as part of the Water Safety Plan system description (WSP Module 2). Other information may be obtained from health department or surveillance personnel, laboratory personnel, or community health workers.
# Population data

- Detailed maps by community or district (some dividable cluster or area with defined boundaries and known population size)
- Most recent census data (for population estimates, preferably by cluster)
- Alternative water sources used (private wells, rivers, springs, bottled water, rain water, etc.), and the uses of alternative sources (e.g., for drinking vs. other uses)

# Health department

- How many hospitals and clinics (public and private) serve the area?
- Is there a surveillance system in place?
- Is there a regional or national estimate for diarrhea incidence or gastrointestinal (GI) disease? Have there been any outbreaks or seasonal trends (especially GI-related)?
- Is there a known, suspected, or perceived health problem associated with the water (this may be anecdotal)?

Time must also be allotted for data cleaning, analysis, and report writing, which could take several weeks, and for presenting and reviewing the data with the WSP team. Because the information from the household survey contributes to Step 2 of the WSP, it should be initiated early in the process, preferably concurrently with other System Assessment activities.
# Budget Planning

Below are some guidelines for estimating the household survey budget and some tips for handling payments. Appendix B provides an example of calculating survey costs.

# Interview personnel

Interviewers may be paid on a daily basis or per questionnaire. If payment is made on a per-questionnaire basis, a small daily base rate should be paid, as well as a small rate for household visits where the interview was not completed, either because of the occupants' not being home or their refusal to participate. This practice will improve adherence to household visitation protocol. Interviewers should be paid a daily rate equivalent to the rate earned on an average fieldwork day for training days.

# Field manager

The field manager should be paid at a daily rate slightly higher than the average daily rate of the interviewers.

# Transportation

Drivers should be paid a daily rate for a full-time commitment during fieldwork days. A driver should include the cost of gas and phone calls in the quoted rate.

# Photocopying costs

# Phone cards or credit

It is advisable to supply interviewers and the field coordinator with phone cards or some other form of credit to encourage communication regarding protocol issues or security concerns with the field or study coordinator. To avoid overuse, consider either using a daily call log or supplying credit on a daily basis or every 2 days.

# Water quality testing

Costs include on-site test kits and the associated reagents to test for chlorine or other desired parameters (e.g., turbidity, pH). If microbiology testing is done, include the cost of processing or providing additional supplies to a local laboratory, or of obtaining field test kits, such as DelAgua. Collection bottles, labels, and a cooler will also be needed.

# Data entry

Data entry should be paid on a daily or per-questionnaire basis. Quality should be spot-checked frequently to ensure careful data entry. If sufficient personnel and financial resources are available, double data entry should be done, where data are entered twice and the completed databases are then compared to identify and correct errors in data entry.
# Other costs

Additional items may be needed, such as clipboards or binders for interviewers to use in the field (consider closable plastic binders for wet climates), stationery supplies for posting training materials, pens and pencils, identification badges, etc.

Once an appropriate sample size has been determined, an additional ten percent of households should be added to the sample size to allow for refusals, unusable data, or other limitations. If the non-response rate is expected to be higher than usual, then this percentage should be increased.
In some cases it may be of interest to obtain information about specific areas or sub-populations; examples are people not connected to the distribution network, those who live farthest along the distribution line, people in areas in close proximity to a known source of contaminants, those living in areas with frequent water outages, etc. If such data are desired, selected households should be additional to and not included in the calculated survey size. It is important to recognize that the sample size will not be sufficient to allow identification of significant differences between populations (unless the survey is designed to do so from the start), but the sub-sample can be considered a pilot study to identify trends or suspected problems, and it may inform the need for future study.
Appropriate sample sizes always need to be calculated by use of procedures like those described above; there is no single number for sample size that will always work for any survey. It is also important to recognize that sample sizes will be different for different surveys, and a sample size should never be selected simply because it was used for another project. In SRS, all households are numbered and then a number between one and the total number of households (N) is selected through a random number generator (one is available in Microsoft Excel: RAND()*N generates a random number between 0 and N). Continue to select households by use of random numbers until the desired sample size is reached.
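For planners who prefer scripting to a spreadsheet, the SRS procedure above takes only a few lines of code. Below is a minimal sketch in Python rather than Excel; the household total and sample size are hypothetical values for illustration.

```python
import random

def simple_random_sample(total_households: int, sample_size: int, seed: int = 1) -> list[int]:
    """Select `sample_size` distinct household numbers from 1..total_households (SRS)."""
    rng = random.Random(seed)  # fixed seed so the selection can be audited and reproduced
    return sorted(rng.sample(range(1, total_households + 1), sample_size))

# Hypothetical example: 5,000 numbered households, 422 to be surveyed
print(simple_random_sample(5000, 422)[:10])  # first 10 selected household numbers
```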
In most cases, such a list will not be available, and Stratified Systematic Sampling is recommended. For this methodology, you will need to identify the smallest sub-areas within the survey area for which population data are available, such as communities, districts, or voting blocks. These estimates may be obtained from a statistical bureau (census data), the water utility (customer data), or other sources. If the number of households in each sub-area is not known, it can be estimated by dividing the population by the average number of people per household. The number of households to be included from each sub-area is allocated proportional to size, where, for example, a sub-area containing 6% of the households would be assigned 6% of the sample size. A sample selected in this way should be broadly representative of the service area (provided that the areas are relatively homogeneous and the starting point for each sub-area is selected at random).
To reduce bias of household selection, the first house to be visited should be determined by a random method. One way to do so is to assign a number (1-10) to each interviewer and let the interviewer select the first house on the basis of counting that number from the closest house to the drop-off point. The sampling interval is then used to select subsequent households.
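The proportional allocation, sampling interval, and random start described above can likewise be scripted. The sketch below is illustrative only; the sub-area names and household counts are hypothetical, and rounding may shift the total allocation by a household or two.

```python
import random

def allocate_and_space(subarea_households: dict[str, int], sample_size: int, seed: int = 1):
    """Allocate the sample proportionally to sub-area size, then compute each
    sub-area's sampling interval (every k-th household after a random start)."""
    rng = random.Random(seed)
    total = sum(subarea_households.values())
    plan = {}
    for area, n_households in subarea_households.items():
        n_sample = max(1, round(sample_size * n_households / total))  # proportional allocation
        interval = max(1, n_households // n_sample)                   # sampling interval k
        start = rng.randint(1, interval)                              # random starting household
        plan[area] = {"sample": n_sample, "interval": interval, "start": start}
    return plan

# Hypothetical sub-areas with estimated household counts
print(allocate_and_space({"District A": 3000, "District B": 1500, "District C": 500}, 422))
```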
Every effort should be made to include all households selected. If a house is unoccupied at the time of a visit, that house should be revisited later that day or on another day. If the house is permanently vacant, if the occupants decline to participate, or if an adult is not available for interview after multiple attempts, then the next closest household should be visited. The number of non-respondent households should be recorded on a Household Visitation Log Sheet, to be submitted daily to the Survey Coordinator. A sample Household Visitation Log Sheet is provided as Appendix F.
# Recruitment and Training of the Survey Team

# Composition of the survey team

# Data entry

One or two detail-oriented individuals should be hired to enter completed questionnaires into a survey database. It is preferable that data be entered on a daily basis in order to allow for prompt addressing of any ambiguities that may arise.

Depending upon the availability of time, personnel, and funds, data should ideally be double-entered (entered by two separate persons), and any differences should be examined in order to minimize entry errors. If double-entry is not possible, then data cleaning prior to analysis will need to be more extensive.
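Where double entry is used, comparing the two databases can be automated. The sketch below assumes both entry clerks saved CSV files containing the same questionnaires and columns, keyed by a unique questionnaire ID; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical file names; both files share a unique questionnaire ID column "qid"
entry1 = pd.read_csv("entry_clerk1.csv").set_index("qid").sort_index()
entry2 = pd.read_csv("entry_clerk2.csv").set_index("qid").sort_index()

# compare() requires identical rows/columns and returns only the cells that disagree
mismatches = entry1.compare(entry2)
print(f"{len(mismatches)} questionnaires with at least one discrepancy")
print(mismatches)  # review these against the paper forms and correct the database
```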
# Transportation service

A driver with good familiarity with the survey area will be needed to transport interviewers to and from survey sites. Depending on laboratory arrangements, the driver may also need to transport samples from the field to the laboratory.

Information that will be appropriate for most surveys includes (but is not limited to) the following areas:
- Household characteristics/demographics (age, education, socioeconomic status, household size)
- Water sources (for drinking and other uses)
- Consistency and quality of piped water service (pressure problems, breaks in service)
- Household water storage practices (tanks, buckets, pitchers, etc.)
- Household water treatment (bleach, boiling, filters, solar, etc.)
- Perceptions (about water quality, customer satisfaction, health risks)
- Costs of water service (payment system, meters)
- Sanitation and hygiene (toilet type, handwashing, etc.)
- Health problems of household members (diarrhea, skin infections, access to care, etc.)
- Water quality: chlorine residual in delivered and stored household water

Below are some suggestions for analyses that may be of interest to a WSP survey report. These analyses may not be relevant in all contexts, and there will likely be additional parameters of interest to each WSP survey report.
- Basic description of the water utility, service, and consumer population
- Description of target area (rural vs. urban, socioeconomic status, specific challenges related to water acquisition, industry, etc.)
- Demographic characteristics of survey respondents: age, gender, family size, education, etc.
- Proportion of population connected to municipal water system
- Water sources (tap, well, rainwater collection, etc.)
- Consistency of water service; breaks in service or pressure
- Alternative water sources used when system is not functioning
# Reporting and presentation of results

Reporting of survey results should roughly follow the survey instrument, describing demographics, water sources and service (including alternative water sources), household storage and treatment practices, sanitation and hygiene, costs, community concerns, and health. Water quality results (chlorine residual, microbiology, or other) should be reported separately for samples taken directly from the tap, from household storage tanks, and from drinking water containers, and the results should be compared to identify differences in chlorine residual or coliform contamination.
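As one illustration of that comparison, chlorine residual results can be summarized by sample point in a few lines. The sketch below uses hypothetical data and column names, and a commonly cited free-chlorine target of 0.2 mg/L; substitute the threshold applicable to the local system.

```python
import pandas as pd

# Hypothetical survey results: one row per water sample
df = pd.DataFrame({
    "sample_point": ["tap", "tap", "storage_tank", "storage_tank", "drinking_container"],
    "free_chlorine_mg_L": [0.5, 0.4, 0.2, 0.1, 0.0],
})

# Median residual and share of samples meeting the target, by sample point
summary = df.groupby("sample_point")["free_chlorine_mg_L"].agg(
    median="median",
    pct_adequate=lambda x: 100 * (x >= 0.2).mean(),
)
print(summary)
```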
# Appendix A - Section-by-Section Summary Survey Planning Checklist

# Appendix B - Sample Budget Estimate for WSP Household Survey

NOTE: This is an example from a low-resource setting provided for illustration only; estimates should be based on actual costs in the area where the survey will be conducted.
# Appendix C - Sample Informed Consent for WSP Household Survey

Ask to speak with the female head of household (if not available, the male head of household is OK).

"Hello, my name is __________ and I am working with the (agency/ies carrying out the survey). We are conducting a survey to get a better understanding of water use practices, the consistency of water service, community concerns, and health in (city/town location of WSP). Your house was selected at random. The survey is anonymous and we will not collect any names or addresses. The questions in the interview do not ask anything private, and you can choose not to answer any question. Participation in the survey is completely voluntary. You are under no obligation to participate, but your responses will help us to understand the potential issues relating to water service in (city/town). The survey should take about 20 minutes. We will also collect samples of your tap and drinking water for testing. Would you like to participate? If you have any questions later, you can contact the (appropriate contact, e.g., health department, water utility) at (provide correct local phone #)."

If "no," thank the respondent and go to the next house (check the "chose not to participate" box on the log sheet). If "yes," begin the questionnaire below.
# Appendix D - Method for Calculating Sample Size for a WSP Household Survey

One relatively simple method for calculating survey size is to use the "sample size and power" calculator for a "population survey" in the StatCalc function of Epi Info (this free software can be downloaded from the CDC website: http://www.cdc.gov/epiinfo/). The StatCalc option is found under the "Utilities" heading. The program will prompt for entry of (1) the population size, (2) the expected frequency of the outcome variable you select, and (3) the "worst acceptable result."

Population size will usually be known, or it can be obtained from the census bureau, the Ministry of Health, or another governmental or non-governmental agency.

For estimation of the expected frequency of a health-based outcome variable (like diarrhea), clinic, hospital, or surveillance data should be used, where available. Published studies or country estimates can also be used if they relate to a population comparable to the survey population.

The "worst acceptable result" defines the limit within which a difference can be determined. For example, for a household survey in a developing country with diarrhea prevalence estimated at 15%, it is reasonable to define a difference between the observed and expected values as a value 5 percentage points higher or lower than the expected value; in this case, a prevalence of 20% or 10%. The standard confidence limit used is 95% (alpha = 0.05). In other words, if a "worst acceptable result" of 10% is put into the model, assuming a 15% expected prevalence of diarrhea, we are asking how many subjects would need to be included in the survey in order to detect a difference in diarrheal prevalence in the surveyed population of at least 5 percentage points from the expected prevalence based on country data, with 95% confidence and 80% power.

The Epi Info screens below provide an example of calculating sample size for a population of 155,555, with an expected diarrhea frequency of 50% (this assumes a worst-case scenario), in order to detect a difference of five percentage points from the expected value (entering 45% or 55% would give the same value). The sample size, given these values and a 95% confidence limit from the chart below, is 383 households. Adding approximately 10% to account for non-respondents (increase if a greater number of non-respondents is expected), the target should be 422 households.
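The same figure can be reproduced without Epi Info by using the standard finite-population formula for a proportion. The sketch below is offered as a cross-check only; Epi Info's exact rounding may differ slightly for other inputs.

```python
import math

def survey_sample_size(N: int, p: float, d: float, z: float = 1.96) -> int:
    """Sample size for estimating a proportion p within +/-d in a population of N,
    at the confidence implied by z (1.96 ~ 95%), with finite-population correction."""
    n0 = z**2 * p * (1 - p) / d**2          # infinite-population sample size
    return round(n0 / (1 + (n0 - 1) / N))   # finite-population correction

n = survey_sample_size(N=155_555, p=0.50, d=0.05)  # worst-case p = 50%
target = math.ceil(n * 1.10)                       # add ~10% for non-response
print(n, target)                                   # 383 households; target 422
```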
[Epi Info StatCalc output (reconstructed excerpt): a sample size of 383 households at the 95% confidence level, 661 at 99%, and 1,075 at 99.9%.]

- Pilot study: half-day field practicum (these data will not be included in the actual survey)
- Review of practicum, troubleshooting, questions
- Final edits to questionnaire

The following sample questionnaire was designed based upon the conditions and issues of potential concern that were revealed through pre-survey planning. It was used as a template for various household surveys for WSPs in resource-limited settings with an organized piped water supply system in the Caribbean and Latin America. Population size of the surveyed areas ranged from 30,000 to 120,000. Most households received water directly to their homes from a piped water supply system. Others had yard or shared taps, or they used water from rivers or rain. For each target area, there were areas that were not connected to the municipal water system or that had unauthorized connections. Storage in household storage tanks and secondary treatment within the home were common as a result of frequent interruptions in water service and pressure.
Some questions contained in this sample may not be relevant to a given setting, or there may be other pertinent information that is not included here. Questions that will not contribute to the report should not be included. Notes on survey questions are embedded in the questionnaire in blue print. Alternatives for some questions are also provided. If, for example, surveyors are typically invited into the home during the survey, some questions may be replaced by direct observation. These alternative questions are also embedded in the text in blue print.

Ask the following questions if there is heavy reliance on household storage tanks in the study community.

Thank you very much for taking part in this interview.
"id": "51d657d19b623051dba340a75bbeed666e5b648c",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Guidelines for Performing Single-Platform Absolute CD4+ T-Cell Determinations with CD45 Gating for Persons Infected with Human Immunodeficiency Virus

# Introduction
Obtaining accurate and reliable measures of CD4+ T lymphocytes (CD4+ T cells) is essential to assessing the immune system and managing the health care of persons infected with human immunodeficiency virus (HIV) (1-4). The pathogenesis of acquired immunodeficiency syndrome (AIDS) is largely attributable to the decrease in the number of T cells that bear the CD4 receptor (5-9). Progressive depletion of CD4+ T cells is associated with an increased likelihood of severe HIV disease and an unfavorable prognosis (10-13). Accordingly, the U.S. Public Health Service (PHS) has recommended that CD4+ T-cell levels be monitored every 3-6 months in all HIV-infected persons (14). Measurement of CD4+ T-cell levels has been used to establish decision points for initiating prophylaxis for Pneumocystis carinii pneumonia and other opportunistic infections and for initiating and monitoring antiretroviral therapy (15-20). CD4+ T-cell levels are also a criterion for categorizing HIV-related clinical conditions according to CDC's classification system for HIV infection and surveillance case definition of AIDS among adults and adolescents (21).

Single-platform technology (SPT) is designed to enable determinations of both absolute and percentage lymphocyte subset values using a single tube. Until recently, most absolute T-cell numbers were derived from three measurements determined with two different instruments, a hematology analyzer and a flow cytometer (dual-platform technology [DPT]). Hence, the CD4+ T-cell number is the product of three laboratory measurements: the white blood cell count, the percentage of white blood cells that are lymphocytes (differential), and the percentage of lymphocytes that are CD4+ T cells (determined by flow cytometry). In 1997, CDC published guidelines addressing concerns related to DPT (22); those guidelines remain appropriate for laboratories performing CD4+ T-cell counts with this technology.
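For illustration, the dual-platform arithmetic is a simple product of the three measurements; the sketch below uses hypothetical values.

```python
# Dual-platform CD4+ T-cell count: the product of three measurements.
# Hypothetical values for illustration only.
wbc_per_ul = 5_000        # white blood cell count (hematology analyzer), cells/uL
pct_lymphocytes = 0.30    # lymphocyte fraction of WBC (differential)
pct_cd4_of_lymphs = 0.40  # CD4+ fraction of lymphocytes (flow cytometry)

absolute_cd4 = wbc_per_ul * pct_lymphocytes * pct_cd4_of_lymphs
print(f"Absolute CD4+ count: {absolute_cd4:.0f} cells/uL")  # 600 cells/uL
```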
In November 2001, a third national conference on CD4+ immunophenotyping was held in Orlando, Florida, to discuss scientific and technologic advances in the development and production of reagents, instrumentation, and software that have occurred since publication of the 1997 guidelines. The conference was attended by representatives from public health, private, and academic laboratories as well as product manufacturers. These guidelines reflect a consensus of that conference, were reviewed by attendees, and relate specifically to the performance of SPT. Development of new guidelines was driven by advances in knowledge and experience with new approaches to enumerate CD4+ T cells. First, a gating strategy for identifying lymphocytes using CD45 fluorescence and side-scattering characteristics is now the preferred method for identifying lymphocytes accurately and reproducibly. Second, three- or four-color flow cytometry has been demonstrated to be superior to two-color methods for measuring CD4+ and CD8+ T-cell counts (23). Finally, the availability of Food and Drug Administration (FDA)-approved commercial microfluorosphere counting reagents for SPT has resulted in decreased interlaboratory variability (24,25). Consequently, SPT is the preferred method in an increasing number of laboratories (4).
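Although the exact formula is specified by each reagent manufacturer's package insert, single-platform bead-based counting generally derives the absolute count from the ratio of cell events to bead events. The sketch below illustrates the usual calculation with hypothetical values; the beads-per-tube figure is lot-specific, so the package insert remains authoritative.

```python
# Typical single-platform bead calculation (hypothetical values; the reagent's
# package insert gives the authoritative formula and the beads-per-tube value).
cd4_events = 2_400        # CD4+ T-cell events acquired by the flow cytometer
bead_events = 10_000      # microfluorosphere (bead) events acquired
beads_per_tube = 50_000   # known number of beads added to the tube (lot-specific)
blood_volume_ul = 50      # volume of blood stained in the tube, uL

absolute_cd4 = (cd4_events / bead_events) * (beads_per_tube / blood_volume_ul)
print(f"Absolute CD4+ count: {absolute_cd4:.0f} cells/uL")  # 240 cells/uL
```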
# Recommendations
# I. Laboratory Safety
A. Use universal precautions with all specimens (26).
B. Adhere to the following safety practices (27-29):
1. Wear laboratory coats and gloves when processing and analyzing specimens, including reading specimens on the flow cytometer.
2. Never pipette by mouth. Use safety pipetting devices.
3. Never recap needles. Dispose of needles and syringes in puncture-proof containers designed for this purpose.
4. Handle and manipulate specimens (e.g., aliquot, add reagents, vortex, and aspirate) in a class I or II biological safety cabinet.
5. Centrifuge specimens in safety carriers.
6. After working with specimens, remove gloves and wash hands with soap and water.
7. For stream-in-air flow cytometers, follow the manufacturer's recommended procedures to eliminate the operator's exposure to any aerosols or droplets of sample material.
8. Disinfect flow cytometer wastes. Before adding waste materials to the waste container, add a sufficient volume of undiluted household bleach (5% sodium hypochlorite) so that the final concentration of bleach will be 10% (0.5% sodium hypochlorite) when the container is full (e.g., add 100 mL of undiluted bleach to an empty 1,000-mL container; see the worked sketch at the end of this section).
9. Disinfect the flow cytometer as recommended by the manufacturer. One method is to flush the flow cytometer fluidic chambers with a 10% bleach solution for 5-10 minutes at the end of the day and then flush with water or saline for at least 10 minutes to remove excess bleach, which is corrosive.
10. Disinfect spills with household bleach or an appropriate dilution of mycobactericidal disinfectant. Note: Organic matter will reduce the ability of bleach to disinfect infectious agents. NCCLS recommendations regarding how to disinfect specific areas should be followed (30). For use on smooth, hard surfaces, a 1% solution of bleach is usually adequate for disinfection; for porous surfaces, a 10% solution is needed (30).
11. Ensure that all samples have been properly fixed after staining and lysing but before analysis. Note: Some commercial reagents employ a single-step, lyse and fix method that reduces the infectious activity of cell-associated HIV by 3-5 logs (31,32); however, these reagents have not been evaluated for their effectiveness against other agents (e.g., hepatitis virus). Cell-free HIV can be inactivated with 1% paraformaldehyde within 30 minutes (33-35).

Note: Use overnight carriers with an established record of consistent overnight delivery to ensure arrival the following day. Check with these carriers for their specific packaging requirements.

E. Obtain specific protocols and arrange appropriate times of collection and transport from the facility collecting the specimen.
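The waste-container arithmetic in item 8 is a simple proportion: the undiluted bleach added up front should equal 10% of the container's full volume. A minimal sketch, assuming 5% sodium hypochlorite stock as above:

```python
def bleach_to_add(container_volume_ml: float,
                  final_bleach_fraction: float = 0.10) -> float:
    """Volume of undiluted household bleach (5% sodium hypochlorite) to add to an
    empty waste container so that bleach makes up `final_bleach_fraction` of the
    contents (0.5% sodium hypochlorite for 5% stock) when the container is full."""
    return container_volume_ml * final_bleach_fraction

print(bleach_to_add(1_000))  # 100.0 mL for a 1,000-mL container, as in item 8
```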
# II. Specimen Collection for Single-Platform Technology
# IV. Specimen Integrity
A. Inspect the tube and its contents immediately upon arrival.
B. Take corrective actions if any of the following occur:
1. If the specimen is hot or cold to the touch but not obviously hemolyzed or frozen, process it but note the temperature condition on the worksheet and report form. Do not rapidly warm or chill specimens to bring them to room temperature because this may adversely affect the immunophenotyping results (39). Abnormalities in light-scattering patterns may reveal a compromised specimen.
2. If blood is hemolyzed or frozen, reject the specimen and request another.
3. If clots are visible, reject the specimen and request another.
4. If the specimen is received >72 hours after collection, reject it and request another.
# V. Specimen Processing
A. Perform the test within 48 hours (preferred), but no later than 72 hours after drawing the blood specimen (44).
B. Place the samples on a gentle blood rocker for 5 minutes to ensure that the samples are uniformly distributed.
C. Pipette blood volumes accurately and in a reproducible manner. A reverse pipetting technique is recommended (Box).
D. Vortex sample tubes to mix the blood and reagents and break up cell aggregates. In addition, vortex samples immediately after the lyse/fixation step and before analysis to disperse cells optimally.
E. Incubate all tubes in the dark during the staining procedure.
F. A lyse/no-wash method is required for SPT. Follow directions provided by the manufacturer.
G. Immediately after processing the specimens, cap the tubes and store all stained samples in the dark and under refrigeration (39°-50°F) until flow cytometric analysis. These specimens should not be stored for longer than 24 hours unless the laboratory can demonstrate that scatter and fluorescence patterns do not change for specimens stored for longer periods.

Use of SPT to obtain absolute CD4 counts requires accurate and precise measurement of blood and beads. Reverse pipetting technique is recommended for dispensing these products.
# VI. Monoclonal Antibody Panels
# Testing Pipetting Precision
The precision of pipetting should be evaluated periodically (e.g., monthly) to ensure the accuracy of results. Retain all records of this evaluation procedure for quality assurance purposes.
- Using the reverse pipetting technique, pipette 10 replicates of blood and record the weights. Select a volume normally used in the performance of the assay.
- Using the reverse pipetting technique, pipette 10 replicates of bead suspension and record the weights (this applies to methods in which the beads must be pipetted into the tubes).
- Calculate the mean, standard deviation, and coefficient of variation (CV). The CV for replicates should be <2% (Table 2).
# Testing Pipetting Accuracy
The following procedure can be used to test how accurately the pipette measures volume. Water is used because 1 µL of water weighs 1 mg.
- Using the reverse pipetting technique, pipette 10 replicates of distilled water and record the weights. (100 µL of water should weigh 0.1000 g.) (Table 2)
- Calculate the mean, standard deviation, and CV. The CV must be <2%, and the mean weight of the 100-µL replicates must fall within 0.098-0.102 g. (Both checks are implemented in the sketch below.)
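Both the precision and the accuracy checks rest on the same three statistics for 10 replicate weights: the mean, the standard deviation, and the CV (SD / mean × 100). A minimal sketch follows; the acceptance limits are those stated above, and the 0.098-0.102 g window applies to the mean weight of 100-µL water replicates.

```python
import statistics

def replicate_stats(weights_g: list[float]) -> dict:
    """Mean, sample SD, and CV (%) for replicate weights in grams."""
    mean = statistics.mean(weights_g)
    sd = statistics.stdev(weights_g)
    return {"mean": mean, "sd": sd, "cv_pct": 100 * sd / mean}

def passes_precision(weights_g: list[float]) -> bool:
    """Precision check: CV for replicates must be <2%."""
    return replicate_stats(weights_g)["cv_pct"] < 2.0

def passes_accuracy_100ul_water(weights_g: list[float]) -> bool:
    """Accuracy check for 100-uL water replicates (mean 0.098-0.102 g)."""
    s = replicate_stats(weights_g)
    return s["cv_pct"] < 2.0 and 0.098 <= s["mean"] <= 0.102
```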
# Procedures
The following information is consolidated from operational instruction manuals from several pipette manufacturers. Complete information and more detailed instructions are contained in specific pipette instruction manuals; some of these are available online. Read the manufacturer's manual carefully before beginning the pipetting procedure.
- Select the desired volume (with manual pipettes, higher volumes should be set first; if adjusting from a lower to a higher volume, first surpass the desired volume and then slowly decrease the volume until the required setting is reached).
- If applicable, select the desired mode (e.g., reverse pipette). This is recommended for optimal precision and reproducibility.
- Reverse pipetting can be done with a manual pipette by pressing the control button slightly past the first stop when aspirating, taking up more liquid than will be dispensed, then pressing the control button only to the first stop when dispensing. A small volume will remain in the tip after dispensing.
- Select an appropriate tip (its color usually matches the color of the control button).
# Prerinsing
The following procedures will help ensure optimal precision and accuracy.
- Volumes >10 µL: Prerinse pipette 2-3 times for each new tip (this involves aspirating and dispensing liquid several times).
Reasons for prerinsing include the following:
- to compensate for system pressure, for slight differences in temperature between pipette and liquid, and for properties of the liquid;
- to clear the thin film formed by the liquid on the inside of the pipette. Without prerinsing, retention of a thin film on the inside wall of the tip would cause the first volume to be too small. The thickness and nature of this film, and therefore the potential source of error, will vary depending on the nature of the liquid being pipetted.
- Volumes <10 µL: Do not prerinse the pipette, but rinse the tip after dispensing to ensure that the whole volume was dispensed. For smaller volumes, prerinsing is not recommended because the dispensed volume would be too great.
# Filling
- Make sure tip is securely attached.
- Hold pipette upright.
- When aspirating, try to keep the tip at a constant depth below the surface of the liquid.
- Glide the control button slowly and smoothly (electronic pipettes perform this step automatically).
- When pipetting viscous liquids (e.g., whole blood), leave the tip in the liquid for 1-2 seconds after aspirating before withdrawing it.
- After liquid is in the tip, never lay the pipette on its side.
# Dispensing
- Hold the tip at an angle, against the inside wall of the vessel/tube if possible.
- Glide the control button slowly and smoothly (electronic pipettes perform this step automatically).
# Other Recommendations
- To ensure optimal performance, the temperature of the pipetted solution and of the pipette and tips should be the same (volume errors may occur because of changes in air displacement and viscosity of the liquid). Do not pipette liquids with temperatures >70°C.
- Volume errors may also occur with liquids that have a high vapor pressure or a density/viscosity that differs greatly from water. Water is most commonly used to calibrate pipettes and to check inaccuracy and imprecision. A pipette could be recalibrated for liquids with densities that vary greatly from that of water.
- Pipettes should be checked regularly for precision and accuracy.
- Regular maintenance (e.g., cleaning) should be performed either by the user or a service technician according to the manufacturer's instructions.
In the logbook, record instrument settings, peak channels, and coefficient of variation (CV) values for materials used to monitor or verify optical alignment, standardization, fluorescence resolution, and spectral compensation. Reestablish target fluorescence levels for each quality control procedure when lot numbers of beads are changed or the instrument has been serviced.
# IX. Sample Analyses
A. With single-platform absolute count determination, use of the lyse/no-wash sample processing is mandatory. The lymphocyte population is identified as having bright CD45 fluorescence and low side-scattering properties (Figure 2). Set the threshold or discriminator as recommended by the manufacturer.
Adjust side scatter so that all leukocyte populations are visible. Draw a gate on the bright CD45+ cell population and analyze the cells in that population (47).
B. Count at least 2,500 gated lymphocytes in each sample to ensure that enough cells and beads have been counted to provide an accurate absolute lymphocyte value (see the sketch below).
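Single-platform absolute counts are derived from the ratio of gated cell events to bead events, scaled by the known bead content of the tube. The sketch below shows the generic bead-ratio arithmetic; it is not any manufacturer's algorithm, and the example values are hypothetical (the beads-per-tube figure would come from the reagent lot's assayed value).

```python
def absolute_count(cell_events: int, bead_events: int,
                   beads_per_tube: float, sample_volume_ul: float) -> float:
    """Absolute cells/uL from a lyse/no-wash single-platform acquisition.

    cells/uL = (cell events / bead events) * (beads per tube / sample uL)
    """
    if bead_events <= 0:
        raise ValueError("no bead events acquired")
    return (cell_events / bead_events) * (beads_per_tube / sample_volume_ul)

# Hypothetical example: 2,600 gated lymphocyte events, 10,000 bead events,
# 50,000 beads per tube, 50 uL of blood stained.
cells_per_ul = absolute_count(2600, 10_000, 50_000, 50)  # -> 260.0
```

Because the result is a ratio, pipetting error in either the blood or the bead volume propagates directly into the count, which is why reverse pipetting is emphasized above.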
# X. Data Analysis
# Evaluation and Validation of a Newly Adopted SPT in the Laboratory
When a laboratory adopts a new SPT, specimens should be tested in parallel by using both the current and the new method to characterize any systematic differences between the methods. Laboratorians should use statistical tools that provide useful information for the comparison studies. Linear least squares regression analyses are helpful in establishing good correlations between the new and established methods. If no error is detected with the new method, the r² value will approach 1.0. However, regression-type scatter plots provide inadequate resolution when the errors are small in comparison with the analytical range and do not characterize the relationship between the two methods (50-52).
A bias scatter plot provides laboratorians with a more useful tool for determining bias. These simple, high-resolution graphs plot the difference between the individual measurements of each method (result of old method minus result of new method) against the measurements obtained with one of the methods (result of old method) (50). Such graphs provide an easy means of determining whether bias is present and distinguishing whether bias is systematic, proportional, or random/nonconstant. The laboratorian can visually determine the magnitude of these differences over the entire range of values. When sufficient values are plotted, outliers or samples containing interfering substances can be identified. The laboratorian can then divide the data into ranges relevant to medical decisions and calculate the systematic error (mean of the bias) and the random error (standard deviation of the bias) to gain insight into analytical performance at the specified decision points (50-52).
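The bias statistics described above are simple to compute: for each specimen tested by both methods, bias = old result minus new result; the systematic error is the mean of those biases and the random error is their standard deviation. A minimal sketch, with illustrative names only:

```python
import statistics

def bias_summary(old_results: list[float], new_results: list[float]) -> dict:
    """Per-specimen bias (old - new) with systematic and random error."""
    if len(old_results) != len(new_results):
        raise ValueError("both methods must be run on the same specimens")
    biases = [o - n for o, n in zip(old_results, new_results)]
    return {
        "bias": biases,  # y-values of the bias scatter plot
        "systematic_error": statistics.mean(biases),  # mean of the bias
        "random_error": statistics.stdev(biases),     # SD of the bias
    }

# Plot `bias` against old_results to obtain the bias scatter plot, then
# repeat the summary within each medically relevant decision range.
```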
Several detailed guidelines and texts provide additional information regarding quality goals, method evaluation, estimation of bias, and bias scatter plots (50-54). Once a new method is accepted and implemented, the laboratory will need to confirm or redefine its normal range and should continue to monitor the correlation between the results and the patient's clinical disease data to ensure that no problems have gone undetected by the relatively few samples typically tested during method evaluations.
# Discussion
More than 1.6 million CD4+ T-cell measurements are performed yearly by the approximately 600 testing laboratories in the United States (55). This figure is based on the reported number of tests performed annually by laboratories participating in CDC's Model Performance Evaluation Program (MPEP) for T-lymphocyte immunophenotyping in 1996. These measurements are performed with flow cytometers using either multiplatform technology or SPT. SPT was introduced for clinical application in 1996, and its wide-scale implementation is relatively new. In 2000, results of two independent multicenter studies of SPT were reported (24,25). Those and subsequent reports on SPT and CD45 gating (56-60) have increasingly encouraged adoption of these improved testing practices (61,62). The outcomes associated with SPT and CD45 gating include a) increased confidence in results, b) more reproducible results, c) increased ability to resolve discrepant problems, d) a decreased proportion of unacceptable specimens received for testing, e) a decreased proportion of specimens requiring reanalysis, and f) fewer incidents that could pose biohazard risks (61).
Although these guidelines for SPT use might foster improved laboratory practices, developing comprehensive guidelines for every aspect of CD4+ T-cell testing (including some laboratory-specific practices) is not possible. Moreover, measuring the outcomes associated with the adoption of these guidelines is inherently difficult. First, the guidelines lack evaluation protocols that can adequately account for the interactions among the recommendations. No weight of importance has been assigned to the individual recommendations that address unique steps in the testing process; hence, the consequences of incompletely following the entire set of recommendations are uncertain. Second, because published data are not available for every aspect of the guidelines, certain recommendations are based on the experience and opinion of knowledgeable persons. Recommendations made on this basis, in the absence of data, may be biased and inaccurate. Finally, variations in testing practices and interactions among the practices (e.g., how specimens are obtained and processed, skill of laboratory personnel, testing methods used, test-result reporting practices, and compliance with other voluntary standards and laboratory regulations) complicate both the development of guidelines that will fit every laboratory's unique circumstances and the assessment of the value of implementing the guidelines.
The first CDC recommendations for laboratory performance of CD4+ T-cell testing (63) were written so as not to impede development of new technology or investigations into better ways to assess the status of the immune system in HIV-infected persons. Developments in the technology have resulted in an assay that is technically less complicated and more accurate. These single-platform methods are now being implemented in as many as one fourth of the laboratories in the United States (MPEP data). In addition, other T-cell phenotypic markers are being investigated as prognostic indicators or markers of treatment efficacy, alone and in combination with other cellular markers (64,65).
These guidelines for SPT are intended for domestic implementation. Several alternative methods are available that require fewer reagents and involve more cost-effective gating algorithms. Some of these alternative methods may be compatible with current U.S. clinical laboratory methods; however, to date they have not been validated for domestic applications. As published validation data accumulate from multisite studies for methods such as PanLeucogating (66) and primary CD4 gating (67,68), these potentially more cost-effective options will be considered as alternative or substitute methods. In the future, guidelines should be harmonized to include all methods that meet domestic performance standards to ensure consistent high quality.
# Introduction
In 2001, the QuantiFERON®-TB test (QFT) (manufactured by Cellestis Limited, Carnegie, Victoria, Australia) was approved by the Food and Drug Administration (FDA) as an aid for detecting latent Mycobacterium tuberculosis infection (1). This test is an in vitro diagnostic aid that measures a component of cell-mediated immune reactivity to M. tuberculosis. The test is based on the quantification of interferon-gamma (IFN-γ) released from sensitized lymphocytes in whole blood incubated overnight with purified protein derivative (PPD) from M. tuberculosis and control antigens.
Tuberculin skin testing (TST) has been used for years as an aid in diagnosing latent tuberculosis infection (LTBI) and includes measurement of the delayed-type hypersensitivity response 48-72 hours after intradermal injection of PPD. TST and QFT do not measure the same components of the immunologic response and are not interchangeable. Assessment of the accuracy of these tests is limited by the lack of a standard for confirming LTBI.
As a diagnostic test, QFT 1) requires phlebotomy, 2) can be accomplished after a single patient visit, 3) assesses responses to multiple antigens simultaneously, and 4) does not boost anamnestic immune responses. Compared with TST, QFT results are less subject to reader bias and error. In a CDC-sponsored multicenter trial, QFT and TST results were moderately concordant (overall kappa value = 0.60). The level of concordance was adversely affected by prior bacille Calmette-Guérin (BCG) vaccination, immune reactivity to nontuberculous mycobacteria (NTM), and a prior positive TST (2). In addition to the multicenter study, two other published studies have demonstrated moderate concordance between TST and QFT (3,4). However, one of the five sites involved in the CDC study reported less agreement (5).
Limitations of QFT include the need to draw blood and process it within 12 hours after collection and limited laboratory and clinical experience with the assay. The utility of QFT in predicting the progression to active tuberculosis has not been evaluated.
This report provides interim recommendations for using and interpreting QFT results based on available data. As with TST, interpretation and indicated applications of QFT differ between those persons at low risk and those at increased risk for LTBI. This report should assist public health officials, health-care providers, and laboratorians who are responsible for TB control activities in the United States in their efforts to incorporate QFT testing for detecting and treating LTBI.
# QFT Performance, Interpretation, and Use
Tuberculin testing is performed for persons who are 1) suspected of having active TB; 2) at increased risk for progression to active TB; 3) at increased risk for LTBI; or 4) at low risk for LTBI but tested for other reasons (Table 1).
# QFT Performance
Aliquots of heparinized whole blood are incubated with the test antigens for 16-24 hours.* The antigens included in the test kits are PPD from M. tuberculosis (tuberculin)† and PPD from Mycobacterium avium (avian sensitin). The kits also include phytohemagglutinin (a mitogen used as a positive assay control) and saline (negative control, or nil). After incubation, the concentration of IFN-γ in the separated plasma is determined by enzyme-linked immunosorbent assay (ELISA).
QFT results are based on the proportion of IFN-γ released in response to tuberculin as compared with mitogen: (tuberculin - nil) / (mitogen - nil) × 100 = percentage tuberculin response.§ The difference in the amount of IFN-γ released in response to tuberculin as compared with avian sensitin is expressed as [(avian - nil) - (tuberculin - nil)] / (tuberculin - nil) × 100 = percentage avian difference. A computer program is available from the test manufacturer that performs these calculations and interprets the test results.¶
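With the brackets made explicit, both quantities are ratios of background-corrected IFN-γ values. The sketch below simply transcribes the formulas above; variable names are illustrative, no guard against a nonreactive (zero) mitogen or tuberculin control is shown, and the manufacturer's software remains the official calculator.

```python
def percent_tuberculin_response(tuberculin: float, mitogen: float,
                                nil: float) -> float:
    """(tuberculin - nil) / (mitogen - nil) x 100."""
    return (tuberculin - nil) / (mitogen - nil) * 100

def percent_avian_difference(avian: float, tuberculin: float,
                             nil: float) -> float:
    """[(avian - nil) - (tuberculin - nil)] / (tuberculin - nil) x 100."""
    return ((avian - nil) - (tuberculin - nil)) / (tuberculin - nil) * 100
```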
# QFT Interpretation
Interpretation of QFT results (Table 2) is stratified by estimated risk for infection with M. tuberculosis, in a manner similar to the risk-stratified interpretation of TST results.
* Additional technical information is available from the manufacturer at http://www.cellestis.com.
† PPD from M. tuberculosis is referred to by the manufacturer and in FDA documents as human PPD.
# Using QFT for Persons at Increased Risk for LTBI
QFT can aid in detecting M. tuberculosis infections among certain populations who are at increased risk for LTBI (6). These populations include recent immigrants (i.e., immigrated within the previous 5 years) from high-prevalence countries where tuberculosis case rates are >30/100,000, injection-drug users, residents and employees of prisons and jails, and healthcare workers who, after their preemployment assessment, are considered at increased risk for exposure to tuberculosis. For these populations, a percentage tuberculin response of >15 should be considered a positive QFT result.
# Using QFT for Persons at Low Risk for LTBI
CDC discourages use of diagnostic tests for LTBI among populations at low risk for infection with M. tuberculosis (6). However, initial testing is occasionally performed among certain population groups for surveillance purposes or where cases of active, infectious tuberculosis might result in extensive transmission to highly susceptible populations. These populations include military personnel, hospital staff and health-care workers whose risk of prior exposure to TB was low, and U.S.-born students at higher education institutions (e.g., as a requirement for admission to U.S. colleges and universities). For these populations, a percentage tuberculin response of >30 should be considered a positive QFT result.
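The two cutoffs above can be collapsed into one risk-stratified check. The sketch below is a deliberate simplification for illustration: the full interpretation (Table 2, not reproduced in this excerpt) also depends on the nil and mitogen controls and on the percentage avian difference.

```python
def qft_positive(pct_tuberculin_response: float, increased_risk: bool) -> bool:
    """Risk-stratified cutoff only: >15 for populations at increased risk
    for LTBI, >30 for low-risk populations (simplified; see Table 2 for
    the full interpretation criteria)."""
    cutoff = 15 if increased_risk else 30
    return pct_tuberculin_response > cutoff
```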
# Recommendations
The highest priority of targeted tuberculin testing programs remains one that identifies persons at increased risk for TB who will benefit from treatment for LTBI. Following that principle, targeted tuberculin testing should be conducted among groups at risk for recent infection with M. tuberculosis and those who, regardless of duration of infection, are at increased risk for progression to active TB. The role of QFT in targeted testing has not yet been defined, but QFT can be considered for LTBI screening as follows:
- initial and serial testing of persons with an increased risk for LTBI (e.g., recent immigrants, injection-drug users, and residents and employees of prisons and jails);
- initial and serial testing of persons who are, by history, at low risk for LTBI but whose future activity might place them at increased risk for exposure, and others eligible for LTBI surveillance programs (e.g., health-care workers and military personnel); or
- testing of persons for whom LTBI screening is performed but who are not considered to have an increased probability of infection (e.g., entrance requirements for certain schools and workplaces).
Before QFT testing is contemplated, arrangements should be made with a qualified laboratory. Those arrangements should include quality assurance and collection and transport of blood within the required 12 hours.
Confirmation of QFT results with TST is possible because performance of QFT does not affect subsequent QFT or TST results. The probability of LTBI is greatest when both the QFT and TST are positive. Considerations for confirmation are as follows:
- When the probability of LTBI is low, confirmation of a positive QFT result with TST is recommended before initiation of LTBI treatment. LTBI therapy is not recommended for persons at low risk who are QFT-negative or who are QFT-positive but TST-negative.
- TST can also be used to confirm a positive QFT for persons at increased risk for LTBI. However, the need for LTBI treatment when QFT is positive and the subsequent TST is negative should be based on clinical judgment and perceived risk.
- Negative QFT results do not require confirmation, but results can be confirmed with either a repeat QFT or TST if the accuracy of the initial test is in question.
# Contraindications
Because of insufficient data on which to base recommendations, QFT is not recommended for
- evaluation of persons with suspected tuberculosis. Active tuberculosis is associated with suppressed IFN-γ responses, and in prior studies, fewer persons with active TB had positive QFT results than TST results. The degree of suppression appears to be related to the severity of disease and the duration of therapy. Studies are under way that compare the sensitivity of QFT and TST among persons with untreated active TB.
- assessment of contacts of persons with infectious tuberculosis, because rates of conversion of QFT and TST after a known exposure to M. tuberculosis have not been compared, and concordance of QFT and TST after exposure and with serial LTBI screening has not been studied.
- screening of children aged <17 years, pregnant women, or persons with clinical conditions that increase the risk for progression of LTBI to active TB (e.g., human immunodeficiency virus infection). Further studies are needed to define the appropriate use of QFT among these persons.
- detection of LTBI after suspected exposure (i.e., contact investigation after a resident or employee is diagnosed with active TB, or a laboratory spill of M. tuberculosis) of persons participating in longitudinal LTBI surveillance programs. The approach of using QFT for initial screening, followed by QFT and TST 3 months after the end of the suspected exposure, has not been evaluated.
- confirmation of TST results, because injection of PPD for TST might affect subsequent QFT results. Although QFT is not recommended for confirmation of TST results, QFT can be used for surveillance <12 months after a negative TST if the initial QFT is negative.
- diagnosis of M. avium complex disease.
# Conclusions
These interim recommendations are intended to achieve a high rate of acceptance and completion of testing for LTBI among groups who have been identified for targeted testing. Testing programs using TST or QFT should only be implemented if plans are also in place for the necessary follow-up medical evaluation and treatment (e.g., chest radiograph or LTBI treatment) of persons who are diagnosed with LTBI and quality laboratory services are ensured.
"id": "e1049a7e476731db0d8e20114d218d4ab39a0fb1",
"source": "cdc",
"title": "None",
"url": "None"
} |
the use of influenza vaccine). Changes include statements about a) the influenza strains in the trivalent vaccine for 1990-1991 and b) revised recommendations for the use of antiviral agents for controlling outbreaks of influenza.
# INTRODUCTION
Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, H3) and two subtypes of neuraminidase (N1, N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens, especially to the hemagglutinin, reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of strains currently circulating provide the basis for selecting virus strains to include in each year's vaccine.
Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory infections, influenza can cause severe malaise lasting several days. More severe illness can result if primary influenza pneumonia or secondary bacterial pneumonia occurs. During influenza epidemics, high attack rates of acute illness result in increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower-respiratory-tract complications.
Elderly persons and persons with underlying health problems are at increased risk for complications of influenza infection. If infected, such high-risk persons are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for high-risk adults may increase two- to fivefold, depending on the age group. Previously healthy children and younger adults may also require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups.
An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza infection. At least 10,000 excess deaths have been documented in each of 19 different U.S. epidemics; greater than 40,000 excess deaths occurred in each of three of these epidemics. Approximately 80%-90% of the excess deaths attributed to pneumonia and influenza were among persons greater than or equal to 65 years of age.
Because the proportion of elderly persons in the U.S. population is increasing and because age and its associated chronic diseases are risk factors for severe influenza illness, the toll from influenza can be expected to increase unless control measures are used more vigorously. The number of younger persons at increased risk for influenza-related complications is also increasing for various reasons, such as the success of neonatal intensive care units, better management of diseases such as cystic fibrosis and acquired immunodeficiency syndrome (AIDS), and better survival rates for organ-transplant recipients.
# OPTIONS FOR THE CONTROL OF INFLUENZA
Two measures available in the United States that can reduce the impact of influenza are immunoprophylaxis with inactivated (killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (e.g., amantadine). Vaccination of high-risk persons each year before the influenza season is currently the most effective measure for reducing the impact of influenza. Vaccination can be highly cost-effective when a) it is directed at persons who are most likely to experience complications or who are at increased risk for exposure, and b) it is administered to high-risk persons during hospitalization or a routine health-care visit before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. Recent reports indicate that, when vaccine and epidemic strains of virus are well matched, achieving high vaccination rates among closed populations can reduce the risk of outbreaks by inducing herd immunity.
Other indications for vaccination include the strong desire of any person to avoid influenza infection, reduce the severity of disease, or reduce the chance of transmitting influenza to highrisk persons with whom the individual has frequent contact.
The antiviral agent available for use at this time (amantadine hydrochloride) is effective only against influenza A and, for maximum effectiveness as prophylaxis, must be used throughout the period of risk. When used as either prophylaxis or therapy, the potential effectiveness of amantadine must be balanced against potential side effects. Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk of severe illness and complications if infected with an influenza A virus. Use of amantadine may be considered a) as a control measure when influenza A outbreaks occur in institutions housing high-risk persons, both for treatment of ill individuals and as prophylaxis for others; b) as short-term prophylaxis after late vaccination of high-risk individuals (i.e., when influenza A infections are already occurring in the community) during the period when immunity is developing in response to vaccination; c) as seasonal prophylaxis for individuals for whom vaccination is contraindicated; d) as seasonal prophylaxis for immunocompromised individuals who may not produce protective levels of antibody in response to vaccination; and e) as prophylaxis for unvaccinated health-care workers and household contacts who care for high-risk individuals, either for the duration of influenza activity in the community or until immunity develops after vaccination. Amantadine is also approved for use by any person who wishes to reduce his or her chances of becoming ill with influenza A.
# INACTIVATED VACCINE FOR INFLUENZA A AND B
Influenza vaccine is made from highly purified, egg-grown viruses that have been rendered noninfectious (inactivated). Therefore, the vaccine cannot cause influenza. Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing influenza viruses believed likely to circulate in the United States in the upcoming winter. The composition of the vaccine is such that it rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available. To minimize febrile reactions, only subvirion or purified-surface-antigen preparations should be used for children; any of the preparations may be used for adults. Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers that are protective against infection by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults, and thus may remain susceptible to influenza upper-respiratory-tract infection. Nevertheless, even if such persons develop influenza illness, the vaccine has been shown to be effective in preventing lower-respiratory-tract involvement or other complications, thereby reducing the risk of hospitalization and death.
# RECOMMENDATIONS FOR USE OF INFLUENZA VACCINE
Influenza vaccine is strongly recommended for any person greater than or equal to 6 months of age who, because of age or underlying medical condition, is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with high-risk persons should also be vaccinated. In addition, influenza vaccine may be given to any person who wishes to reduce the chance of becoming infected with influenza.
The trivalent influenza vaccine prepared for the 1990-1991 season will include A/Taiwan/1/86-like (H1N1), A/Shanghai/16/89-like (H3N2), and B/Yamagata/16/88-like hemagglutinin antigens. Recommended doses are listed in Table 1. Guidelines for the use of vaccine in different groups follow.
Although the current influenza vaccine can contain one or more antigens used in previous years, annual vaccination using the current vaccine is necessary because immunity for an individual declines in the year following vaccination. Because the 1990-1991 vaccine differs from the 1989-1990 vaccine, supplies of 1989-1990 vaccine should not be used to provide protection for the 1990-1991 influenza season. Two doses may be required for a satisfactory antibody response in previously unvaccinated children less than 9 years of age; however, studies with vaccines similar to those in current use have shown little or no improvement in antibody responses when a second dose is given to adults during the same season.
During the past decade, data on influenza vaccine immunogenicity and side effects have been obtained when vaccine has been administered intramuscularly. Because there has been no adequate evaluation of recent influenza vaccines administered by other routes, the intramuscular route is the one recommended for use. Adults and older children should be vaccinated in the deltoid muscle, and infants and young children in the anterolateral aspect of the thigh.
# TARGET GROUPS FOR SPECIAL VACCINATION PROGRAMS
To maximize protection of high-risk persons, they and their close contacts should be targeted for organized vaccination programs.
Groups at Increased Risk for Influenza-Related Complications:
1. Persons greater than or equal to 65 years of age.
2. Residents of nursing homes and other chronic-care facilities housing persons of any age with chronic medical conditions.
3. Adults and children with chronic disorders of the pulmonary or cardiovascular systems, including children with asthma.
4. Adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications).
5. Children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy, and therefore may be at risk of developing Reye syndrome after influenza.
Groups That Can Transmit Influenza to High-Risk Persons:
Persons who are clinically or subclinically infected and who attend or live with high-risk persons can transmit influenza virus to them. Some high-risk persons (e.g., the elderly, transplant recipients, or persons with AIDS) can have low antibody responses to influenza vaccine. Efforts to protect these high-risk persons against influenza may be improved by reducing the chances of exposure to influenza from their care providers. Therefore, the following groups should be vaccinated:
1. Physicians, nurses, and other personnel in both hospital and outpatient-care settings who have contact with high-risk persons in all age groups, including infants.
2. Employees of nursing homes and chronic-care facilities who have contact with patients or residents.
3. Providers of home care to high-risk persons (e.g., visiting nurses, volunteer workers).
4. Household members (including children) of high-risk persons.
# VACCINATION OF OTHER GROUPS
General Population
Physicians should administer influenza vaccine to any person who wishes to reduce the chance of acquiring influenza infection. Persons who provide essential community services and students or other persons in institutional settings (e.g., schools and colleges) may be considered for vaccination to minimize the disruption of routine activities during outbreaks.
Pregnant Women
Influenza-associated excess mortality among pregnant women has not been documented except in the pandemics of 1918-1919 and 1957-1958. However, pregnant women who have other medical conditions that increase their risks for complications from influenza should be vaccinated, as the vaccine is considered safe for pregnant women. Administering the vaccine after the first trimester is a reasonable precaution to minimize any concern over the theoretical risk of teratogenicity. However, it is undesirable to delay vaccination of pregnant women who have high-risk conditions and who will still be in the first trimester of pregnancy when the influenza season begins.
Persons Infected with HIV
Little information exists regarding the frequency and severity of influenza illness in human immunodeficiency virus (HIV)-infected persons, but recent reports suggest that symptoms may be prolonged and the risk of complications increased for this high-risk group. Because influenza may result in serious illness and complications, vaccination is a prudent precaution and will result in protective antibody levels in many recipients. However, the antibody response to vaccine may be low in persons with advanced HIV-related illnesses; a booster dose of vaccine has not improved the immune response for these individuals.
Foreign Travelers
Increasingly, the elderly and persons with high-risk medical conditions are embarking on international travel. The risk of exposure to influenza during foreign travel varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the southern hemisphere, the season of greatest activity is April-September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the southern hemisphere during April-September should review their influenza vaccination histories. If they were not vaccinated the previous fall/winter, they should consider influenza vaccination before travel. Persons in the high-risk categories should be especially encouraged to receive the most currently available vaccine. High-risk persons given the previous season's vaccine before travel should be revaccinated in the fall/winter with current vaccine.
# PERSONS WHO SHOULD NOT BE VACCINATED
Inactivated influenza vaccine should not be given to persons known to have anaphylactic hypersensitivity to eggs (see "Side Effects and Adverse Reactions").
Persons with acute febrile illnesses usually should not be vaccinated until their symptoms have abated.
# SIDE EFFECTS AND ADVERSE REACTIONS
Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination is soreness at the vaccination site that lasts for up to 2 days; this is reported for less than one-third of vaccinees.
In addition, two types of systemic reactions have occurred:
1. Fever, malaise, myalgia, and other systemic symptoms occur infrequently and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days.
2. Immediate, presumably allergic, reactions (such as hives, angioedema, allergic asthma, or systemic anaphylaxis) occur rarely after influenza vaccination. These reactions probably result from hypersensitivity to some vaccine component, most likely residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein presumably induces immediate hypersensitivity reactions in persons with severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should not be given the influenza vaccine. Persons with documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses from exposure to egg protein, may also be at increased risk for reactions from influenza vaccine. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza infection or its complications (see Murphy and Strunk, 1985).
Unlike the 1976 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been associated with an increased frequency of Guillain-Barré syndrome. Although influenza vaccination can inhibit the clearance of warfarin and theophylline, studies have failed to show any adverse clinical effects attributable to these drugs in patients receiving influenza vaccine.
# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES
The target groups for influenza and pneumococcal vaccination overlap considerably. Both vaccines can be given at the same time at different sites without increasing side effects. However, influenza vaccine must be given each year; with few exceptions, pneumococcal vaccine should be given only once.
High-risk children may receive influenza vaccine at the same time as measles-mumps-rubella, Haemophilus b, pneumococcal, and oral polio vaccines. Vaccines should be given at different sites. Influenza vaccine should not be given within 3 days of vaccination with pertussis vaccine.
# TIMING OF INFLUENZA VACCINATION ACTIVITIES
Beginning each September, when vaccine for the upcoming influenza season becomes available, high-risk persons who are hospitalized or who are seen by health-care providers for routine care should be offered influenza vaccine. Except in years of pandemic influenza (e.g., 1957 and 1968), high levels of influenza activity rarely occur in the contiguous 48 states before December. Therefore, November is the optimal time for organized vaccination campaigns for high-risk persons. In facilities such as nursing homes, it is particularly important to avoid administering vaccine too far in advance of the influenza season because antibody levels begin declining within a few months. Vaccination programs may be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December.
Children less than 9 years of age who have not previously been vaccinated should receive two doses of vaccine at least a month apart to maximize the chance of a satisfactory antibody response to all three vaccine antigens. The second dose should be given before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community, as late as April in some years. Despite the recognition that optimum medical care for both adults and children includes regular review of immunization records and administration of vaccines as appropriate, less than 30% of persons in high-risk groups receive influenza vaccine each year. More effective strategies are needed for delivering vaccine to high-risk persons, their health-care providers, and their household contacts.
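The pediatric dosing rule above reduces to a single condition. A sketch, assuming only age and prior-vaccination status are known; the scheduling details (doses at least 1 month apart, second dose before December when possible) are noted in the docstring.

```python
def influenza_doses_needed(age_years: float,
                           previously_vaccinated: bool) -> int:
    """Children <9 years with no prior influenza vaccination need 2 doses,
    given at least 1 month apart (second dose ideally before December);
    all others need a single dose each season."""
    return 2 if age_years < 9 and not previously_vaccinated else 1
```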
In general, successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying (usually by medical-record review) persons at high-risk, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine.
Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described below. All patients should be offered vaccine in a single time period shortly before the beginning of the influenza season. Patients admitted to such programs during the winter months, after the earlier vaccination program has been conducted, should be vaccinated at the time of admission.
Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine.
Visiting Nurses and Others Providing Home Care to High-Risk Persons
Nursing-care plans should identify high-risk patients, and vaccine should be provided in the home if necessary. Caregivers and others in the household (including children) should be referred for vaccination.
Facilities Providing Services to Persons greater than or equal to 65 Years of Age (e.g., retirement communities, recreation centers)
All unvaccinated residents/attendees should be offered vaccine on site during a single time period before the influenza season; alternatively, education/publicity programs should emphasize the need for influenza vaccine and should provide specific information on how, where, and when to obtain it.
# Clinics and Others Providing Health Care for Travelers
Indications for influenza vaccination should be reviewed before travel and vaccine offered if appropriate (see "Foreign Travelers").
Health-Care Workers
Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine, with particular emphasis on vaccination of persons who care for high-risk patients (e.g., staff of intensive-care units, including newborn intensive-care units; staff of medical/surgical units; and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts may enhance compliance, as may a follow-up campaign if an outbreak occurs in the community.
# ANTIVIRAL AGENTS FOR INFLUENZA A
The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. Only amantadine is licensed for use in the United States. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses, although the specific mechanisms of their antiviral activity are not completely understood. When given prophylactically to healthy young adults or children in advance of and throughout the epidemic period, amantadine is approximately 70%-90% effective in preventing illnesses caused by naturally occurring strains of type A influenza viruses. When administered to otherwise healthy young adults and children for symptomatic treatment within 48 hours after the onset of influenza illness, amantadine has been shown to reduce the duration of fever and other systemic symptoms and may permit a more rapid return to routine daily activities. Since antiviral agents taken prophylactically may prevent illness but not subclinical infection, some persons who take these drugs may still develop immune responses that will protect them when exposed to antigenically related viruses in later years.
As with all drugs, side effects of amantadine may occur in a small proportion of persons. Such symptoms are rarely severe, but may be important for some categories of patients.
# RECOMMENDATIONS FOR THE USE OF AMANTADINE
Outbreak Control in Institutions
When outbreaks of influenza A occur in institutions that house high-risk persons, chemoprophylaxis should begin as early as possible to reduce the spread of the infection. Contingency planning is needed to ensure rapid administration of amantadine to residents and employees. This should include preapproved medication orders or plans to obtain physicians' orders on short notice. When amantadine is used for outbreak control, it should be administered to all residents of the affected institution regardless of whether they received influenza vaccine the previous fall. The dose for each resident should be determined after consulting the dosage recommendations and precautions that follow in this document and those listed in the manufacturer's package insert. To reduce spread of virus and to minimize disruption of patient care, chemoprophylaxis should also be offered to unvaccinated staff who provide care to high-risk patients. To be fully effective as prophylaxis, the antiviral drug must be taken each day for the duration of influenza activity in the community.
Use as Prophylaxis
High-risk individuals vaccinated after influenza A activity has begun: High-risk individuals can still be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination usually takes 2 weeks, during which time amantadine should be given. Children who receive influenza vaccine for the first time may require up to 6 weeks of prophylaxis, or until 2 weeks after the second dose of vaccine has been received. Amantadine does not interfere with the antibody response to the vaccine.
Persons providing care to high-risk persons:
Use as Therapy
Although amantadine can reduce the severity and shorten the duration of influenza A illness in healthy adults, there are no data on the efficacy of amantadine therapy in preventing complications of influenza A in high-risk persons. Therefore, no specific recommendations can be made regarding the therapeutic use of amantadine for these patients. This does not preclude physicians from using amantadine in high-risk patients who develop illness compatible with influenza during a period of known or suspected influenza A activity in the community.
When amantadine is administered to healthy young adults at a dose of 200 mg/day, minor central-nervous-system (CNS) side effects (nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) and/or gastrointestinal side effects (anorexia and nausea) occur in approximately 5%-10% of patients. Side effects diminish or cease soon after discontinuing use of the drug. With prolonged use, side effects may also diminish or disappear after the first week of use. More serious but less frequent CNS-related side effects (seizures, confusion) associated with use of amantadine have usually affected only elderly persons, those with renal disease, and those with seizure disorders or other altered mental/behavioral conditions. Reducing the dosage to less than or equal to 100 mg/day appears to reduce the frequency of these side effects in such persons without compromising the prophylactic effectiveness of amantadine.
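The dose-reduction logic in the preceding paragraph can be summarized as follows. This is an illustration of the text only, not dosing guidance: the package insert governs, and weight, renal function, and other conditions require individual adjustment.

```python
def amantadine_prophylaxis_mg_per_day(elderly: bool,
                                      renal_disease: bool,
                                      seizure_or_altered_mental_state: bool) -> int:
    """Illustrative starting points from the text: 200 mg/day for healthy
    young adults; reduce to <=100 mg/day when CNS side effects are more
    likely (elderly persons, renal disease, seizure disorders, or other
    altered mental/behavioral conditions)."""
    if elderly or renal_disease or seizure_or_altered_mental_state:
        return 100
    return 200
```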
The package insert should be consulted before use of amantadine for any patient. The patient's age, weight, renal function, presence of other medical conditions, and indications for use of amantadine (prophylaxis or therapy) must be considered, and the dosage and duration of treatment adjusted appropriately.
"id": "55c4cd3115959a316dc7ce812dedde206e6bde54",
"source": "cdc",
"title": "None",
"url": "None"
} |
This report updates CDC's 2002 recommendations regarding screening tests to detect Chlamydia trachomatis and Neisseria gonorrhoeae infections (CDC. Screening tests to detect Chlamydia trachomatis and Neisseria gonorrhoeae infections-2002. MMWR 2002;51) and provides new recommendations regarding optimal specimen types, the use of tests to detect rectal and oropharyngeal C. trachomatis and N. gonorrhoeae infections, and circumstances when supplemental testing is indicated. The recommendations in this report are intended for use by clinical laboratory directors, laboratory staff, clinicians, and disease control personnel who must choose among the multiple available tests, establish standard operating procedures for collecting and processing specimens, interpret test results for laboratory reporting, and counsel and treat patients. The performance of nucleic acid amplification tests (NAATs) with respect to overall sensitivity, specificity, and ease of specimen transport is better than that of any of the other tests available for the diagnosis of chlamydial and gonococcal infections. Laboratories should use NAATs to detect chlamydia and gonorrhea except in cases of child sexual assault involving boys, rectal and oropharyngeal infections in prepubescent girls, and when evaluating a potential gonorrhea treatment failure, in which cases culture and susceptibility testing might be required. NAATs that have been cleared by the Food and Drug Administration (FDA) for the detection of C. trachomatis and N. gonorrhoeae infections are recommended as screening or diagnostic tests because they have been evaluated in patients with and without symptoms. Maintaining the capability to culture for both N. gonorrhoeae and C. trachomatis in laboratories throughout the country is important because data are insufficient to recommend nonculture tests in cases of sexual assault in prepubescent boys and extragenital anatomic site exposure in prepubescent girls. N. gonorrhoeae culture is required to evaluate suspected cases of gonorrhea treatment failure and to monitor developing resistance to current treatment regimens. Chlamydia culture also should be maintained in some laboratories to monitor future changes in antibiotic susceptibility and to support surveillance and research activities such as detection of lymphogranuloma venereum or rare infections caused by variant or mutated C. trachomatis.
# Introduction
# Chlamydia trachomatis
Chlamydia trachomatis infection is the most common notifiable disease in the United States with >1.3 million infections reported to CDC in 2010 (1). The majority of persons with C. trachomatis infection are not aware of their infection because they do not have symptoms that would prompt them to seek medical care (2). Consequently, screening is necessary to identify and treat this infection. Direct medical costs of C. trachomatis infections were estimated at $516.7 million in 2008 (2012 dollars) (3). The direct costs associated with C. trachomatis infections and their sequelae have decreased in recent years as sequelae have been managed in a less-costly manner and ectopic pregnancies have been increasingly managed medically (4). Also of importance are the tangible costs, including the lost productivity, and the intangible costs, including psychological and emotional injury caused by infertility and ectopic pregnancy (5).
Untreated C. trachomatis infections can lead to serious complications. In older observational treatment studies, up to 30% of women with untreated C. trachomatis infections developed pelvic inflammatory disease (PID) (6,7). Of these women, the majority had symptoms that were too mild or nonspecific to cause them to seek medical treatment. Regardless of symptom intensity, the consequences of PID are severe. Left untreated, 20% of those with symptomatic PID might become infertile; 18% will experience debilitating, chronic pelvic pain; and 9% will have a life-threatening tubal pregnancy (8). The importance of subclinical PID became apparent with observations that most of the women with tubal factor infertility or ectopic pregnancy who had serologic evidence of chlamydial infection apparently had no history of PID (9,10). C. trachomatis infection during pregnancy might lead to infant conjunctivitis and pneumonia and maternal postpartum endometritis (11). Among men, urethritis is the most common illness resulting from C. trachomatis infection. Complications (e.g., epididymitis) affect a minority of infected men and rarely result in reproductive health sequelae (12).
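To make these proportions concrete, the following back-of-the-envelope calculation applies them to a hypothetical cohort of 100 women with untreated infection; the figures are illustrative upper bounds derived from the citations above, not surveillance estimates.

```python
cohort = 100                    # hypothetical untreated infections
pid = cohort * 0.30             # up to ~30 develop PID (6,7)
infertile = pid * 0.20          # ~6 become infertile (8)
chronic_pain = pid * 0.18       # ~5.4 develop chronic pelvic pain (8)
tubal_pregnancy = pid * 0.09    # ~2.7 have a tubal pregnancy (8)
```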
C. trachomatis infections of the rectum can result from unprotected anal intercourse and are typically asymptomatic but might progress to proctocolitis (13,14). Ocular infections can result in conjunctivitis in neonates and adults (15). Sexually acquired reactive arthritis also has been reported as a possible consequence of C. trachomatis infection (16).
# Neisseria gonorrhoeae
N. gonorrhoeae infections are the second most common notifiable communicable disease in the United States with 309,341 cases being reported to CDC in 2010 (1). Gonococcal infections tend to cause a stronger inflammatory response than C. trachomatis but are typically asymptomatic in women until complications such as PID develop (17). In men, the majority of urethral infections cause urethritis with painful urination and, less commonly, epididymitis or disseminated gonococcal infection (17).
Subsets of men who have sex with men (MSM) engage in risky sexual behaviors (e.g., having sex with multiple, anonymous partners and unprotected oral and rectal sex), often leading to infections at these extragenital sites. MSM might have a high prevalence of sexually transmitted infections especially at these extragenital sites, which can be a public health problem because of the potential for enhancing HIV transmission (18). CDC recommends routine laboratory screening of genital and extragenital sites for all sexually active MSM at risk for infection (19).
# Purpose of This Report
This report updates CDC's 2002 recommendations regarding screening tests to detect C. trachomatis and N. gonorrhoeae infections (20) and provides new recommendations regarding optimal specimen types, the use of tests to detect rectal and oropharyngeal C. trachomatis and N. gonorrhoeae infections, and information regarding when supplemental testing is indicated (Box 1). These recommendations are intended for use by clinical laboratory directors, laboratory staff, clinicians, and disease control personnel who must choose among the multiple available tests, establish standard operating procedures for collecting and processing specimens, interpret test results for laboratory reporting, and counsel and treat patients.
# BOX 1. Summary of recommendations
- Nucleic acid amplification tests (NAATs) that are cleared by the Food and Drug Administration (FDA) are recommended for detection of genital tract infections caused by Chlamydia trachomatis and Neisseria gonorrhoeae infections in men and women with and without symptoms. For detecting these infections of the genital tract, optimal specimen types for NAATs are vaginal swabs from women and first catch urine from men. Older nonculture tests and non-NAATs have inferior sensitivity and specificity characteristics and no longer are recommended.
- NAATs have not been cleared by FDA for the detection of rectal and oropharyngeal infections caused by C. trachomatis and N. gonorrhoeae. CDC is recommending NAATs to test for these extragenital infections based on their increased sensitivity and ease of specimen transport and processing. Because these specimen types have not been cleared by FDA for use with NAATs, laboratories must establish performance specifications when using these specimens to meet Clinical Laboratory Improvement Amendments (CLIA) regulatory requirements and local or state regulations as applicable prior to reporting results for patient management. Positive reactions with nongonococcal Neisseria species have been reported with some NAATs, particularly with oropharyngeal specimens. Alternate target testing using NAATs without reported cross-reactivity might be needed to avoid false positive gonorrhea results when using these tests with these specimens.
- Routine repeat testing of NAAT-positive genital tract specimens is not recommended because the practice does not improve the positive predictive value of the test.
- Laboratory interpretation of test results should be consistent with product inserts for FDA-cleared tests or have met all federal and state regulations for a modified procedure if the laboratory has changed the cutoff values or testing algorithm. This approach provides the most appropriate information to the clinician, who is ultimately responsible for assessing test results to guide patient and partner management.
- N. gonorrhoeae culture capacity is still needed for evaluating suspected cases of treatment failure and monitoring antimicrobial susceptibility.
- C. trachomatis and N. gonorrhoeae culture capacity might still be needed in instances of child sexual assault in boys and extragenital infections in girls.
# Methods
In 2008, CDC and the Association of Public Health Laboratories (APHL) convened an independent work group to evaluate available information and make suggestions for CDC to consider in developing recommendations for the laboratory diagnosis of C. trachomatis and N. gonorrhoeae in the United States. Members of the work group were selected on the basis of published expertise in the field of C. trachomatis and N. gonorrhoeae diagnostics and included public health laboratory directors, sexually transmitted diseases (STD) clinicians, staff of CDC's Division of STD Prevention, and representatives of the Food and Drug Administration (FDA) and the Centers for Medicare & Medicaid Services (CMS). Four members of the work group, including three who served as co-authors, previously had published papers in which they acknowledged receiving financial support from diagnostic test manufacturers for test evaluations. These potential conflicts of interest were disclosed and managed in accordance with the editorial standards of the journals that published the scientific reports. In addition, in August 2013, to maintain objectivity and to confirm that the recommendations were evidence based, a second independent panel of microbiologic, statistical, and clinical experts reviewed the draft recommendations.

Approximately 6 months before a meeting held at CDC during January 13-15, 2009, work group members were asked by conference call to identify key questions regarding chlamydia and gonorrhea laboratory diagnostics that emerged from literature reviews and to discuss the information available to answer those questions. Work group members were assigned key questions to research (see Appendix) and, with the assistance of CDC and APHL staff, conducted an extensive Medline search of peer-reviewed literature published during January 1, 2000-January 1, 2009. The Medline database was searched using the terms "Chlamydia trachomatis" or "chlamydia" or "Neisseria gonorrhoeae" or "gonorrhea" or "lymphogranuloma venereum" or "LGV" and "infection" or "reproductive tract" or "specimen" or "urogenital specimen" or "urine" or "rectum" or "pharynx" or "oropharynx" or "culture" or "nucleic acid amplification" or "nucleic acid probe" or "enzyme immunoassay" or "detection" or "performance" or "screening" or "adolescent" or "prevalence" or "confirmation" or "repeat testing" or "pediatric" or "sexual assault" or "sexual abuse" or "point of care" or "serology." The key questions were categorized into three principal areas of laboratory diagnostics: 1) performance characteristics of tests, 2) screening applications, and 3) laboratory confirmation of test results. Monthly conference calls or e-mail exchanges were conducted with work group members researching key questions in each principal area to ensure progress and adequate support in obtaining relevant publications. Work group members assigned to key questions developed tables of evidence from peer-reviewed publications and presented these tables at the in-person meeting held in January 2009. Each key question was introduced, and publications were discussed in terms of strengths, weaknesses, and overall relevance of the data to the key questions. Scientific publications with findings derived from studies with an analytic plan involving a patient's infected status were included in developing these recommendations. Studies using discrepant analysis were excluded.
All work group members agreed with these inclusion and exclusion criteria because they approximate the design characteristics used by FDA when evaluating diagnostic tests for marketing in the United States. During the meeting, each topic was presented by the assigned work group member, and an open forum followed to allow all work group members and ad hoc attendees to discuss the merits of the publications used to address the key questions. At the end of each discussion, a recommendation was proposed and adopted for consideration by CDC if there were no objections from the work group members. Following the in-person meeting, the same database was searched periodically for subsequently published articles for the work group to consider by e-mail or teleconference. A writing team was formed to draft the recommendations generated from these discussions, and the senior CDC author was responsible for the overall content.
# Testing Technologies
Multiple laboratory test options can be used to detect chlamydia and gonorrhea, although some are not recommended for routine use because of their performance. Direct detection of the pathogen using culture or nonculture methods is possible. Of the nonculture tests available, only nucleic acid amplification testing (NAAT) is recommended for routine use; other tests (e.g., enzyme immunoassays, nucleic acid probe tests, and genetic transformation tests) are not recommended. Serologic tests, which detect a systemic immune response to infection, are not recommended because they lack precision for the detection of an active infection.
Since 2002, improvements in chlamydia and gonorrhea NAAT technologies have enabled significant implementation and expansion of screening programs using less invasive specimen collection. Although these changes have created opportunities for more rapid and accurate chlamydia and gonorrhea diagnosis and a broader understanding of key populations at risk, they also might have created challenges (e.g., increased laboratory costs and physical design constraints requiring unidirectional specimen processing to minimize contamination when laboratories attempt to incorporate new technologies into their existing test repertoire). The performance of NAATs with respect to overall sensitivity, specificity, and ease of specimen transport is better than that of any of the other tests available for the diagnosis of chlamydial and gonococcal infections (21)(22)(23)(24)(25)(26)(27)(28)(29)(30). Culture for C. trachomatis and N. gonorrhoeae was long the reference standard against which all other diagnostic tests were compared. However, better tests have been needed because of difficulties in maintaining the viability of organisms during transport and storage in the diverse settings in which testing is indicated. In addition, the tissue culture methods for C. trachomatis isolation are difficult to standardize, technically demanding, expensive, and relatively insensitive. Thus, diagnostic test manufacturers developed nonculture tests. The first nonculture tests for C. trachomatis and N. gonorrhoeae included enzyme immunoassays (EIAs), which detect specific chlamydial or gonococcal antigens, and direct fluorescent antibody (DFA) tests for C. trachomatis, which use fluorescein-conjugated monoclonal antibodies that bind specifically to bacterial antigen in smears. These antigen-detection tests were followed by nucleic acid hybridization tests, which detect C. trachomatis-specific or N. gonorrhoeae-specific deoxyribonucleic acid (DNA) or ribonucleic acid (RNA) sequences. With the availability of these nonculture tests, some of which could be automated, screening programs for C. trachomatis were initiated, and screening programs for N. gonorrhoeae began to change from culture to the more convenient and, for remote settings, more reliable nonculture methods. The primary drawback of these tests, especially for C. trachomatis, was that they failed to detect a substantial proportion of infections (30)(31)(32)(33)(34)(35)(36)(37)(38)(39). This changed with the introduction of NAATs, which amplify and detect C. trachomatis-specific or N. gonorrhoeae-specific DNA or RNA sequences. These tests are approximately 20%-35% more sensitive than the earlier nonculture tests (30)(31)(32)(33)(34)(35)(36)(37)(38)(39).
This report emphasizes the importance of maintaining the capability to culture both N. gonorrhoeae and C. trachomatis in laboratories throughout the country because there are insufficient data to recommend nonculture tests in cases of sexual assault involving boys and extragenital site exposure in girls. N. gonorrhoeae culture is required as a test of cure to evaluate suspected cases of gonorrhea treatment failure and to monitor developing resistance to current treatment regimens. A test of cure should be performed only when clinically indicated (i.e., it is not part of routine care). Chlamydia culture capability also should be maintained in some laboratories to monitor future changes in antibiotic susceptibility and to support surveillance and research activities such as detection of lymphogranuloma venereum (LGV) or rare infections caused by variant or mutated C. trachomatis such as the strain recently described in Sweden (40,41).
# Recommendations

# Tests to Detect C. trachomatis and N. gonorrhoeae
Isolation and identification of Chlamydia trachomatis. Specimen collection swabs for C. trachomatis culture must have a plastic or wire shaft and a rayon, Dacron, or cytobrush tip; other materials might inhibit isolation. Specimen collection for C. trachomatis culture is invasive, requiring insertion of a swab 2-3 cm into the male urethra or 1-2 cm into the endocervical canal followed by two or three rotations to collect sufficient columnar or cuboidal epithelial cells. Following collection, culture samples should be stored in an appropriate transport medium such as sucrose phosphate glutamate buffer or M4 medium (Thermo Scientific, Lenexa, Kansas) and transported at ≤4°C to the laboratory within 24 hours of collection to maximize recovery of viable organisms. If transport is delayed >24 hours, the transport medium containing the specimen should be stored at -70°C. The specimen is inoculated by centrifugation onto a confluent monolayer of McCoy, HeLa 229, or Buffalo green monkey kidney cells that support growth of C. trachomatis (42)(43)(44)(45)(46). Once the specimen has been inoculated, 2 µg/mL of cycloheximide should be added to the growth medium to suppress protein synthesis by the host eukaryotic cells (47). Inoculated cells are harvested after 48-72 hours of growth; infected cells develop characteristic intracytoplasmic inclusions that contain substantial numbers of C. trachomatis elementary and reticulate bodies.
The cell monolayers are reacted with either genus-specific or species-specific fluorescein-conjugated monoclonal antibodies to allow specific visualization of the chlamydial inclusions with an epifluorescence microscope. Cell culture detection of C. trachomatis is highly specific if a C. trachomatis major outer membrane protein (MOMP)-specific stain is used. Monoclonal antibodies directed against the family-specific lipopolysaccharide (LPS) of Chlamydiaceae cost less but might stain other bacteria that share LPS antigens. LPS stains might be suitable for routine use, but a species-specific (MOMP) stain is recommended in situations requiring increased specificity (48)(49)(50). Less specific inclusion-detection methods using iodine or Giemsa stain are not recommended (48)(49)(50).
Cell culture methods vary among laboratories, resulting in substantial interlaboratory variation in performance (51). The shell vial method of culture uses a larger inoculum with a reduced risk for cross-contamination and therefore provides better accuracy than the 96-well microtiter plate method (42,43). In certain laboratories, higher sensitivities are obtained by performing a blind pass, in which an inoculated cell monolayer is allowed to incubate for 48-72 hours, after which the monolayer is disrupted and used to inoculate a fresh monolayer that is stained after another 48-72 hours of incubation to allow for a second cycle of growth (49).
Despite the technical difficulties, cell culture, when performed by an experienced analyst, was the most sensitive diagnostic test for chlamydial infection until the introduction of NAATs (28,52). The relatively low sensitivity, extended turnaround time, difficulties in standardization, labor intensity, technical complexity, stringent specimen collection and transport requirements, and relatively high cost are disadvantages of cell culture isolation of C. trachomatis. Recommended procedures for C. trachomatis isolation and culture detection using a species-specific stain must be followed when using this test in cases of suspected child sexual assault involving boys and extragenital infections in girls.
Isolation and identification of N. gonorrhoeae. Because of its high specificity (>99%) and sensitivity (>95%), a Gram stain of a male urethral specimen that demonstrates polymorphonuclear leukocytes with intracellular Gram-negative diplococci can be considered diagnostic for infection with N. gonorrhoeae in symptomatic men. However, because of lower sensitivity, a negative Gram stain should not be considered sufficient for ruling out infection in asymptomatic men. In addition, Gram stains of endocervical, pharyngeal, or rectal specimens are not sufficient to detect infection and therefore are not recommended. Specific testing for N. gonorrhoeae is recommended because of the increased utility and availability of highly sensitive and specific testing methods and because a specific diagnosis might enhance partner notification.
If multiple specimens are being collected from an anatomic site, the N. gonorrhoeae culture specimen should be obtained first; this sequence maximizes the organism load collected, which increases the likelihood of a successful culture (39). Specimens collected for gonorrhea culture should be obtained using swabs with plastic or wire shafts and rayon, Dacron, or calcium alginate tips. Other swab materials, such as wooden shafts and cotton tips, might be inhibitory or toxic to the organism and should be avoided. Although collection of epithelial cells is less important for culture detection of N. gonorrhoeae, swabs should be inserted 2-3 cm into the male urethra or 1-2 cm into the endocervical canal followed by two or three rotations. In cases of urethritis, collection of the exudate is sufficient for N. gonorrhoeae culture.
Several nonnutritive swab transport systems are available, and some studies suggest that these systems might maintain gonococcal viability for up to 48 hours at ambient temperatures (53)(54)(55). However, environmental conditions might vary by location and season, which could affect the viability of gonococci in these transport systems; thus, additional local validation of transport conditions might be needed. Culture medium transport systems are preferred because they offer advantages over swab transport systems (e.g., extended shelf life and better recovery, because cultivated isolates rather than clinical specimens are being transported) (39). Culture medium is inoculated with the swab specimen and then placed immediately into a CO2-enriched atmosphere for transportation to the laboratory. Because N. gonorrhoeae has demanding nutritional and environmental growth requirements, optimal recovery rates are achieved when specimens are inoculated directly and when the growth medium is incubated in an increased CO2 environment as soon as possible.
Methods of gonococcal culture have been described elsewhere (39,56). Specimens from normally nonsterile sites are streaked on a selective medium (e.g., Thayer-Martin or Martin-Lewis), and specimens from sterile sites are streaked on a nonselective medium (e.g., chocolate agar). Culture media for N. gonorrhoeae isolation include a base medium supplemented with chocolatized (heated) equine or bovine blood to support the growth of the gonococcus. Commercially prepared chocolate agar containing synthetic hemin and growth factors for N. gonorrhoeae is available from various vendors. Selective media differ from routine culture media in that they contain antimicrobial agents (i.e., vancomycin, colistin, and nystatin or another antifungal agent) that inhibit the growth of other bacteria and fungi. Using selective media might improve isolation if the anatomic source of the specimen normally contains other bacterial species, although some strains of N. gonorrhoeae have been demonstrated to be inhibited on selective media (57). Inoculated media are incubated at 35°C-36.5°C in an atmosphere supplemented with 5% CO2 and examined at 24 and 48 hours postcollection. Supplemental CO2 can be supplied by a CO2 incubator, a candle-extinction jar using unscented candles (e.g., votive candles), or CO2-generating tablets.
Isolates recovered from a genital specimen on selective medium that are Gram-negative, oxidase-positive diplococci might be presumptively identified as N. gonorrhoeae (39). A presumptive identification indicates only that a Gram-negative, oxidase-positive diplococcus (e.g., any Neisseria species or Branhamella catarrhalis) has been isolated from the specimen. Certain coccobacilli, including Kingella denitrificans, might appear to be Gram-negative diplococci in Gram-stained smears. A confirmed laboratory diagnosis of N. gonorrhoeae requires additional biochemical tests (Table 1). A presumptive test result is sufficient to initiate antimicrobial therapy, but additional tests must be performed to confirm the identity of an isolate as N. gonorrhoeae (39).
Culture for N. gonorrhoeae is inexpensive to perform on genital specimens and is specific and sensitive if the specimen is collected and transported properly to the laboratory. However, culture is less than ideal for routine diagnostics because of stringent collection and transport requirements, and confirmation might take several days from the time of specimen collection. The primary advantage of isolating N. gonorrhoeae by culture is the ability to characterize the isolate further by antimicrobial susceptibility testing and genetic analysis if necessary. Cephalosporins are the sole class of antibiotics recommended for the treatment of N. gonorrhoeae infections in CDC's 2010 STD treatment guidelines (available at http://www.cdc.gov/std/treatment/2010/default.htm) (19), and the availability of gonococcal culture capacity at the local level is an important consideration if a patient fails therapy (58).
Antibiotic susceptibility testing. Gonorrhea treatment is complicated by the ability of N. gonorrhoeae to develop resistance to antimicrobial therapies. Genetic mutations and/or acquisition of genetic material from closely related bacterial species might result in antibiotic-resistant N. gonorrhoeae. Plasmid-mediated resistance to penicillin can be conferred by extrachromosomal genes encoding β-lactamase, which destroys penicillin (59,60). Resistance to tetracycline also might occur when the organism acquires tetM, an extrachromosomal gene from Streptococcus that allows ribosomal protein synthesis to proceed despite tetracycline (61). Testing for these plasmid genes provides limited information because genetic changes in the chromosome also might confer resistance to penicillin and tetracycline, in addition to spectinomycin and fluoroquinolones (62,63). Testing specimens for genetic alterations in the chromosome requires a complete understanding of the complex and multiple mechanisms associated with resistance. For example, chromosomally mediated resistance to penicillin can alter penicillin binding, penetration, or efflux (64). Resistance to fluoroquinolones results from mutations in DNA gyrase (gyrA) or topoisomerase (parC), resulting in decreased drug penetration and increased efflux (65,66). Penicillin-, tetracycline-, and fluoroquinolone-resistant N. gonorrhoeae isolates now are disseminated widely throughout the United States and globally (67). These antimicrobial agents no longer are recommended regimens for N. gonorrhoeae treatment, and thus susceptibility testing to them is not needed to guide clinical management. Laboratory capacity for N. gonorrhoeae culture and antibiotic susceptibility testing is critical to monitor for emerging resistance. Updated information regarding N. gonorrhoeae antibiotic susceptibility testing is available from CDC.
Assessing N. gonorrhoeae isolates for antibiotic susceptibility requires viable isolates because accurate genetic markers of antibiotic resistance to recommended therapies have not been documented. Agar plate dilution testing, which provides minimum inhibitory concentration values of tested antibiotics, is the preferred method for testing the susceptibility of N. gonorrhoeae but might be too difficult to perform in laboratories with limited capacity and low testing volumes. Disk diffusion and E-test are simpler methods for determining susceptibilities of gonococcal isolates, although cefixime E-test strips are not FDA-cleared for use in the United States. Isolates that appear to be less susceptible than the current Clinical and Laboratory Standards Institute (CLSI) interpretive criteria for susceptible organisms (68) (available at http://www.cdc.gov/std/gonorrhea/arg/criteria.htm) should be submitted to CDC for reference testing using the agar plate dilution method because there are no CLSI interpretive criteria for resistance to CDC-recommended therapeutic agents. Procedures for agar dilution and disk diffusion testing are available at http://www.cdc.gov/std/gonorrhea/lab/testing.htm. Clinicians who diagnose N. gonorrhoeae infection in a patient with suspected treatment failure should contact their local or state public health laboratory or local clinical laboratory for guidance on submitting specimens for culture and susceptibility testing. Local and state public health laboratory directors are encouraged to maintain culture and antimicrobial susceptibility testing capabilities for N. gonorrhoeae or, if they do not perform such testing, to identify public health or private laboratories in their area with such capacity. No standard procedures exist to assess in vitro susceptibility of C. trachomatis to antibiotics (69). Further research is required to determine the relationship between in vitro data and treatment outcome.
Nucleic acid amplification tests (NAATs). As of May 2013, five manufacturers had commercially available, FDA-cleared NAAT platforms for the detection of C. trachomatis and N. gonorrhoeae in the United States. NAATs are recommended for detection of urogenital infections caused by C. trachomatis and N. gonorrhoeae in women and men with and without symptoms, and these tests have been shown to be cost-effective in preventing sequelae of these infections (70)(71)(72). A list of FDA-cleared specimen types and transport and storage requirements is provided (Table 2). The platforms include the Abbott RealTime m2000 CT/NG (Abbott Molecular Inc., Des Plaines, Illinois); the Amplicor and cobas CT/NG tests (Roche Molecular Diagnostics, Branchburg, New Jersey); Aptima (Hologic/Gen-Probe, San Diego, California); the BD ProbeTec ET and Qx (Becton Dickinson, Sparks, Maryland); and the Xpert CT/NG Assay (Cepheid, Sunnyvale, California) (Table 2). NAATs are designed to amplify and detect nucleic acid sequences that are specific for the organism being detected. Like other nonculture tests, NAATs do not require viable organisms. The increased sensitivity of NAATs is attributable to their theoretic ability to produce a positive signal from as little as a single copy of the target DNA or RNA. This high sensitivity has allowed the use of less invasively collected specimens, such as first catch urine and vaginal swabs, to detect shed organisms. Use of such specimens greatly facilitates screening.
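The single-copy sensitivity described above follows from the exponential nature of amplification. The following Python sketch is purely illustrative (it is not part of the original recommendations) and assumes an idealized PCR-like doubling model; SDA and TMA kinetics differ, but the order-of-magnitude point is the same.

```python
# Illustrative only: idealized exponential amplification.
# Real assay chemistries (PCR, SDA, TMA) have lower, variable efficiency.
def amplicons(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Expected amplicon count after a given number of doubling cycles."""
    return initial_copies * (1 + efficiency) ** cycles

# One target copy after 30 ideal cycles yields ~1e9 amplicons, which is
# why NAATs can produce a positive signal from minute amounts of target.
print(f"{amplicons(1, 30):.2e}")       # 1.07e+09
print(f"{amplicons(1, 30, 0.9):.2e}")  # ~2.3e+08 at 90% per-cycle efficiency
```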
Commercial tests differ in their amplification methods and their target nucleic acid sequences (Table 2). The two Roche tests and the Abbott RealTime CT/NG use polymerase chain reaction (PCR), and both Becton Dickinson tests use strand displacement amplification (SDA), to amplify C. trachomatis DNA sequences in the cryptic plasmid that is found in >99% of strains of C. trachomatis. The Hologic/Gen-Probe Aptima Combo 2 assay for C. trachomatis uses transcription-mediated amplification (TMA) to detect a specific 23S ribosomal RNA target. The Roche cobas CT/NG, Abbott, Becton Dickinson, and Hologic/Gen-Probe tests detect the new variant C. trachomatis (nvCT) strain. These nucleic acid amplification methods also are used to detect N. gonorrhoeae, and each manufacturer has marketed a duplex assay that allows simultaneous detection of both organisms. The nucleic acid primers used by commercial NAATs for C. trachomatis are not known to detect DNA from other bacteria found in humans. However, the primers employed by the Becton Dickinson N. gonorrhoeae NAATs might detect nongonococcal Neisseria species (73-76) (Table 3). Most commercial NAATs have been cleared by FDA to detect C. trachomatis and N. gonorrhoeae in vaginal and endocervical swabs from women, urethral swabs from men, and first catch urine from both men and women (Table 2).
Because NAATs are so sensitive, efforts are warranted to prevent contamination of specimens in the clinic and spread of environmental amplicon in the laboratory. Laboratories should follow standard molecular method techniques, clean workspaces and equipment frequently, include multiple negative controls in each run, and monitor the rates of indeterminate and positive results, because a change in monthly trends might indicate a need to investigate the accuracy of results. Environmental monitoring might be required as recommended by the manufacturer. If environmental amplicons are found, thorough cleaning of the laboratory is needed until negative results are obtained. Steps to prevent cross-contamination include proper laboratory workflow design and strict adherence to testing and quality assurance protocols.
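One simple way a laboratory might operationalize the monthly trend monitoring described above is a control-limit check of each month's positivity against an established baseline. The sketch below is a hypothetical illustration rather than a CDC procedure; the baseline rate, monthly counts, and three-sigma threshold are example values a laboratory would set for itself.

```python
# Minimal sketch of positivity-rate trend monitoring (a simple p-chart).
# The baseline rate and monthly counts are hypothetical examples.
import math

def monthly_flag(positives: int, tests: int, baseline_rate: float, z: float = 3.0) -> bool:
    """Flag a month whose positivity falls outside z-sigma binomial limits
    around the laboratory's established baseline positivity rate."""
    p = positives / tests
    sigma = math.sqrt(baseline_rate * (1 - baseline_rate) / tests)
    return abs(p - baseline_rate) > z * sigma

# Example: baseline positivity 8%; a month with 160/1,200 positive (13.3%)
# is flagged for investigation (possible contamination or a true outbreak).
print(monthly_flag(160, 1200, 0.08))  # True
print(monthly_flag(100, 1200, 0.08))  # False (8.3%, within limits)
```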
# Performance of Tests to Detect C. trachomatis and N. gonorrhoeae
Studies assessing the performance of NAATs might include test algorithms that use multiple NAATs or nonculture and culture tests as reference standards. Regardless of the analytic study design, the performance characteristics are relative to the standards used at the time of evaluation. When less sensitive methods are used as the reference standard, the specificity of the test under evaluation is likely to be underestimated because true infections missed by the reference standard are scored as false positives. Conversely, the sensitivity of older assays was likely overestimated because of the relatively poor performance of the assays used as standards at the time.
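A worked example makes this bias concrete. The numbers below are hypothetical: a flawless new test is evaluated against a reference standard with 80% sensitivity in a population with 5% prevalence, and its measured specificity falls below 100% even though the new test makes no errors.

```python
# Worked example with hypothetical numbers: a perfect new test evaluated
# against an imperfect reference standard appears imperfect.
def apparent_specificity(n: int, prevalence: float, ref_sensitivity: float) -> float:
    """Apparent specificity of a flawless test judged against a reference
    standard with the given sensitivity (reference specificity assumed
    to be 100% for simplicity)."""
    infected = n * prevalence
    uninfected = n - infected
    missed = infected * (1 - ref_sensitivity)   # true infections the reference calls negative
    reference_negatives = uninfected + missed
    # The perfect test flags the missed infections, which are then scored
    # as "false positives" relative to the reference standard.
    return uninfected / reference_negatives

# 5% prevalence, 80% reference sensitivity: the perfect test's specificity
# is *measured* as ~99%, not 100%.
print(f"{apparent_specificity(10_000, 0.05, 0.80):.4f}")  # 0.9896
```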
Because no gold standard exists, researchers compared two versions of the patient-infected-status algorithm (PISA) to assess the performance of NAATs. Using simulations with latent-class models, these researchers concluded that PISA-based methods can produce biased estimates of sensitivity and specificity that change markedly as the true prevalence changes (77). However, there is no consensus on the optimal approach to evaluating the performance of NAATs, and better methods are needed (78). Until better methods become available, these recommendations support continued reliance on NAATs based on their approval by FDA for indicated clinical use.
Simply quoting sensitivity and specificity data from package inserts or published studies is not useful because the numbers are estimates and are valid only within the context of the particular evaluation. Variables that can affect these estimates include which comparison tests were used, the population in which the evaluation was performed, and whether calculations were made on the basis of an infected-patient standard or a direct comparison of specimens.

[TABLE 2. FDA-cleared nucleic acid amplification tests: approved specimen types and specimen transport and storage conditions]

Nevertheless, despite the absence of a criterion standard, valid generalizations can be made. All diagnostic tests, including NAATs, can generate inaccurate results, and it is important for laboratorians and clinicians to understand test limitations. Certain false positives and false negatives can occur as a consequence of specimen collection, test operation, and laboratory environment. However, NAATs are far superior in overall performance to other C. trachomatis and N. gonorrhoeae culture and nonculture diagnostic methods. NAATs offer greatly expanded sensitivities of detection, usually well above 90%, while maintaining very high specificity, usually ≥99%. NAATs typically detect 20%-50% more chlamydial infections than could be detected by culture or earlier nonculture tests (20). The increment for detection of gonococcal infections is somewhat less.
# Detection of Genitourinary C. trachomatis and N. gonorrhoeae Infections in Women
Screening programs have been demonstrated to reduce both the prevalence of C. trachomatis infection and rates of PID in women (79,80). Sexually active women aged ≤25 years and women aged >25 years with risk factors (e.g., those who have a new sex partner or multiple partners) should be screened annually for chlamydial infections (81).
The prevalence of gonorrhea varies widely among communities and populations; health-care providers should consider local gonorrhea epidemiology when making screening decisions. Although widespread screening is not recommended, targeted screening of young women (i.e., those aged ≤25 years) at increased risk for infection is a primary component of gonorrhea control in the United States because gonococcal infections among women are frequently asymptomatic. For sexually active women, including those who are pregnant, the U.S. Preventive Services Task Force (USPSTF) recommends that clinicians provide gonorrhea screening only to those at increased risk for infection (e.g., women with previous gonorrhea infection, other STDs, new or multiple sex partners, and inconsistent condom use; those who engage in commercial sex work and drug use; women in certain demographic groups; and those living in communities with a high prevalence of disease). USPSTF does not recommend screening for gonorrhea in women who are at low risk for infection (81).
For screening women, a vaginal swab is the preferred specimen type. Vaginal swab specimens are as sensitive as cervical swab specimens, and there is no difference in specificity (82)(83)(84)(85)(86)(87). Self-collected vaginal swabs are equivalent in sensitivity and specificity to those collected by a clinician (83,88). Cervical samples are acceptable when pelvic examinations are performed, but vaginal swab specimens remain an appropriate sample type even when a full pelvic examination is being performed. Cervical specimens collected into a liquid cytology medium for Pap screening are acceptable for NAATs that have been cleared by FDA for such specimen types (Table 2). However, following Pap screening, there should be a clinical indication for reflex additional testing of liquid cytology specimens for chlamydia and gonorrhea because these specimen types are used more widely in older populations at low risk for infection. First catch urine from women, although acceptable for screening, might detect up to 10% fewer infections than vaginal and endocervical swab samples (82,87,89) (Box 2).
# Detection of Genitourinary C. trachomatis and N. gonorrhoeae Infections in Men
C. trachomatis and N. gonorrhoeae control efforts in men differ substantially from those recommended for women. Although chlamydia prevalence data have provided a basis for setting age guidelines for routine annual screening and behavioral guidelines for targeted screening in women (11), no such consensus has been reached regarding control program definitions for men who have sex with women (12). Although there are no recommendations for routine screening of heterosexual men, USPSTF suggests testing sexually active heterosexual men in clinical settings with a high prevalence of C. trachomatis (e.g., STD clinics, adolescent clinics, and detention and correctional facilities) and among persons entering the Armed Forces or the National Job Training Program (81).
The prevalence of N. gonorrhoeae varies widely among communities and populations; health-care providers should consider the local gonorrhea epidemiology when making screening recommendations. There is insufficient evidence for or against routine screening for gonorrhea in sexually active heterosexual men at increased risk for infection (81). However, screening for gonorrhea is not recommended in men at low risk for infection (81).
Overwhelming evidence supports the performance of male first catch urine samples as equivalent to, and in some situations superior to, urethral swabs (23,90). Use of urine samples is highly acceptable to patients and might improve the likelihood of uptake of routine screening in men (Box 3).
# Detection of Extragenital C. trachomatis and N. gonorrhoeae Infections in Men and Women
Infections with C. trachomatis and N. gonorrhoeae are common in extragenital sites in certain populations such as MSM. Because extragenital infections are common in MSM, and most infections are asymptomatic (91), routine annual screening of extragenital sites in MSM is recommended. No recommendations exist regarding routine extragenital screening in women because studies have focused on genitourinary screening, but rectal and oropharyngeal infections are not uncommon.
A 2003 study that assessed NAATs for diagnosing C. trachomatis and N. gonorrhoeae infections at multiple anatomic sites in MSM (91) used Becton Dickinson's ProbeTec NAAT, which had been validated previously for such use. Among 6,434 MSM attending an STD clinic or a gay men's clinic, the prevalence by site for C. trachomatis was 7.9% rectal, 5.2% urethral, and 1.4% pharyngeal; the prevalence by site for N. gonorrhoeae was 6.9% rectal, 6.0% urethral, and 9.2% pharyngeal.
# BOX 2. Chlamydia trachomatis and Neisseria gonorrhoeae testing in women
- Nucleic acid amplification tests (NAATs) are the recommended test method.
- A self- or clinician-collected vaginal swab is the recommended sample type. Self-collected vaginal swab specimens are an option for screening women when a pelvic examination is not otherwise indicated.
- An endocervical swab is acceptable when a pelvic examination is indicated.
- A first catch urine specimen is acceptable but might detect up to 10% fewer infections than vaginal and endocervical swab samples.
- An endocervical swab specimen for N. gonorrhoeae culture should be obtained and evaluated for antibiotic susceptibility in patients who received a CDC-recommended antimicrobial regimen as treatment, subsequently had a positive N. gonorrhoeae test result (positive NAAT ≥7 days after treatment), and did not engage in sexual activity after treatment.
The great majority (84%) of the gonococcal and chlamydial rectal infections were asymptomatic. More than half (53%) of C. trachomatis infections and 64% of N. gonorrhoeae infections were at nonurethral sites and would have been missed under the traditional approach of screening men by testing only urethral specimens. The scope of the problem of extragenital infection in MSM is not known at the national level. In 2007, CDC coordinated an evaluation of MSM attending several community-based organizations and public or STD clinics; of approximately 30,000 tests performed, 353 (5.4%) MSM were positive for rectal N. gonorrhoeae infection, and 468 (8.9%) were positive for rectal C. trachomatis. Pharyngeal tests were positive for N. gonorrhoeae in 759 MSM (5.3%) and for C. trachomatis in 54 (1.6%) (92).
In the United Kingdom, some studies on screening MSM have been performed using NAATs (93,94); in one study of 3,076 MSM attending an STD clinic, the prevalence of C. trachomatis infection was 8.2% in the rectum and 5.4% in the urethra. The majority (69%) of the men with C. trachomatis were asymptomatic, underscoring the need for screening (94).
A study that compared culture with two NAATs (Hologic/Gen-Probe's Aptima Combo 2 [AC2] and Becton Dickinson's ProbeTec [SDA]) for the detection of C. trachomatis and N. gonorrhoeae in pharyngeal and rectal specimens collected from 1,110 MSM attending an STD clinic confirmed all NAAT-positive results when either the original test or a test using alternate primers was positive (95). For oropharyngeal N. gonorrhoeae, sensitivities were 41% for culture, 72% for SDA, and 84% for AC2; for rectal N. gonorrhoeae, sensitivities were 43% for culture, 78% for SDA, and 93% for AC2. For oropharyngeal infections with C. trachomatis (for which only nine infections were detected), sensitivities were 44% for culture, 67% for SDA, and 100% for AC2; for rectal C. trachomatis, sensitivities were 27% for culture, 63% for SDA, and 93% for AC2. Specificities were >99.4% for all specimens, tests, and anatomic sites. The number of infections detected more than doubled when the more sensitive NAAT was used instead of standard culture. Other researchers also have demonstrated the superiority of NAATs over culture for diagnosing C. trachomatis and N. gonorrhoeae at rectal and oropharyngeal sites (36,75,76).
Although commercially available NAATs are recommended for testing genital tract specimens, they have not been cleared by FDA for the detection of C. trachomatis and N. gonorrhoeae infections of the rectum and oropharynx (Box 4). Results from commercially available NAATs can be used for patient management if the laboratory has established specifications for the performance characteristics according to CLIA regulations (96). If a moderate-complexity test such as the GeneXpert is modified in any manner, the test defaults to high complexity, and the laboratory must meet all high-complexity CLIA requirements, including those for personnel. Certain NAATs that have been demonstrated to detect commensal Neisseria species in urogenital specimens might have comparably low specificity when testing oropharyngeal specimens for N. gonorrhoeae. Thus, an N. gonorrhoeae NAAT that does not react with nongonococcal commensal Neisseria species is recommended when testing oropharyngeal specimens (Table 3).
# BOX 3. Chlamydia trachomatis and Neisseria gonorrhoeae testing in men
- Nucleic acid amplification tests (NAATs) are the recommended test method.
- A first catch urine specimen is the recommended sample type and is equivalent to a urethral swab in detecting infection.
- A urethral swab specimen for N. gonorrhoeae culture should be obtained and evaluated for antibiotic susceptibility in patients who received a CDC-recommended antimicrobial regimen as treatment, subsequently had a positive N. gonorrhoeae test result (positive NAAT ≥7 days after treatment), and did not engage in sexual activity after treatment.

# BOX 4. Chlamydia trachomatis and Neisseria gonorrhoeae testing of rectal and oropharyngeal specimens

- Nucleic acid amplification tests (NAATs) are the recommended test method for rectal and oropharyngeal specimens.
- Laboratories must be in compliance with CLIA requirements for test modifications because these tests have not been cleared by FDA for these specimen types.
- Commensal Neisseria species commonly found in the oropharynx might cause false positive reactions in some NAATs, and further testing might be required for accuracy.
- A rectal or oropharyngeal swab specimen for N. gonorrhoeae culture should be obtained and evaluated for antibiotic susceptibility in patients who received a CDC-recommended antimicrobial regimen as treatment, subsequently had a positive N. gonorrhoeae test result (positive NAAT ≥7 days after treatment), and did not engage in sexual activity after treatment.
# Detection of Genitourinary and Extragenital C. trachomatis and N. gonorrhoeae Infections in Cases of Sexual Assault
Detailed information about evaluation and treatment of suspected victims of sexual assault can be obtained from the 2010 STD treatment guidelines (19). General recommendations pertaining only to C. trachomatis and N. gonorrhoeae testing are presented here. Examination of victims is required for two purposes: 1) to determine if an infection is present so that it can be successfully treated and 2) to acquire evidence for potential use in a legal investigation. Testing to satisfy the first purpose requires a method that is highly sensitive, whereas satisfying the second purpose requires a method that is highly specific. Although NAATs meet these criteria, acceptance of any test results is determined by local legal authorities. Local legal requirements and guidance also should be sought for maintaining and documenting a chain of custody for specimens and results that might be used in a legal investigation and for which test results are accepted as evidence.
NAATs for C. trachomatis and N. gonorrhoeae are preferred for the diagnostic evaluation of adult sexual assault victims, from any site of penetration or attempted penetration (97,98). Data on the use of NAATs for detection of N. gonorrhoeae in children are limited. Consultation with an expert is necessary before using NAATs for this indication in children to minimize the possibility of positive reactions with nongonococcal Neisseria species and other commensals. NAATs can be used as an alternative to culture with vaginal or urine specimens from girls. Culture remains the preferred method for urethral specimens from boys and for extragenital specimens (pharynx and rectum) from boys and girls.
Using highly specific tests is critical for prepubescent children, for whom the diagnosis of a sexually transmitted infection might lead to initiation of an investigation for child abuse. Specimen collection for N. gonorrhoeae culture includes the pharynx and rectum in boys and girls, the vagina in girls, and the urethra in boys. Cervical specimens are not recommended for prepubertal girls. For boys with a urethral discharge, a meatal specimen of the discharge is an adequate substitute for an intraurethral swab specimen. Standard culture procedures must be followed. Gram stains are inadequate for evaluating prepubertal children for N. gonorrhoeae and should not be used to diagnose or exclude infection. Specimens from the vagina, urethra, pharynx, or rectum should be streaked onto selective media for isolation of N. gonorrhoeae, and all presumptive isolates of N. gonorrhoeae should be identified definitively by at least two tests that involve different principles (e.g., biochemical, enzyme substrate, or serologic). Isolates should be preserved to enable additional or repeated testing.
Cultures for C. trachomatis can be collected from the rectum in both boys and girls and from the vagina in girls. The likelihood of recovering C. trachomatis from the urethra of prepubertal boys is too low to justify the trauma involved in obtaining an intraurethral specimen; however, a meatal specimen should be obtained if urethral discharge is present. Pharyngeal specimens for C. trachomatis are not recommended for children of either sex because the yield is low, perinatally acquired infection might persist beyond infancy, and culture systems in some laboratories use antibody stains that do not distinguish between C. trachomatis and C. pneumoniae. All specimens must be retained for additional testing, if necessary, regardless of whether the test result is positive or negative.
Only standard culture systems for the isolation of C. trachomatis should be used. The isolation of C. trachomatis should be confirmed by microscopic identification of inclusions stained with a fluorescein-conjugated monoclonal antibody specific for C. trachomatis MOMP; stains using monoclonal antibodies directed against LPS should not be used. EIAs are not acceptable confirmatory methods. Isolates should be preserved. Nonculture tests for C. trachomatis, such as EIAs and DFA, are not sufficiently specific for use in circumstances involving possible child abuse or sexual assault. NAATs can be used for detection of C. trachomatis in vaginal specimens or urine from girls. No data exist on the use of NAATs in boys or with extragenital specimens (rectum) from boys and girls; culture remains the preferred method for extragenital sites in these circumstances.
# Detection of Lymphogranuloma Venereum Infections
Serologic testing for LGV is not widely available in the United States. The chlamydial complement fixation test (CFT), which measures antibody against the group-specific lipopolysaccharide antigen, has been used as an aid in the diagnosis of LGV. A CFT titer ≥1:64 typically can be measured in the serum of patients with bubonic LGV (99). The microimmunofluorescence (MIF) test was initially developed for serotyping strains of C. trachomatis isolated from the eye and genital tract but was soon adapted to measure antibody responses in patients with chlamydial infections. Although the original MIF method was complicated, involving the titration of sera against numerous antigens, it was found to have many advantages over the CFT (99). The MIF test can be used to detect type-specific antibody and different immunoglobulin classes. The MIF test is more sensitive than the CFT, with a larger proportion of patients developing an antibody response, and at higher titer. Patients with LGV tend to have broadly cross-reactive MIF titers that are often greater than 1:256 (99). Microtiter plate format enzyme immunoassays have been developed, but comparative performance data are lacking. Serologic test interpretation for LGV is not standardized, tests have not been validated for clinical proctitis presentations, and C. trachomatis serovar-specific serologic tests are not widely available. More detailed information concerning the diagnosis and treatment of LGV has been published (19).
Genital and lymph node specimens (i.e., lesion swab or bubo aspirate) can be tested for C. trachomatis by culture, direct immunofluorescence, or nucleic acid detection. Commercially available NAATs for C. trachomatis detect both LGV and non-LGV C. trachomatis but cannot distinguish between them. Additional molecular procedures (e.g., PCR-based genotyping) can be used to differentiate LGV from non-LGV C. trachomatis, but these are not widely available (100)(101)(102)(103)(104).
For patients presenting with proctitis, C. trachomatis NAAT testing of a rectal specimen is recommended. Although a positive result is not a definitive diagnosis of LGV, it might aid in a presumptive clinical diagnosis of LGV proctitis.
# Additional Considerations
Supplemental testing of NAAT-positive specimens. In 2002, CDC recommended that consideration be given to routinely performing an additional test after a positive NAAT screening test for C. trachomatis and N. gonorrhoeae (20). This approach was advised to improve the positive predictive value (PPV) of a NAAT screening test, which was particularly important when the test was used in a population with a low prevalence of infection. However, studies since 2002 addressing the utility of routine repeat testing of positive specimens demonstrated >90% concurrence with the initial test for either C. trachomatis or N. gonorrhoeae (105)(106)(107). Therefore, routine additional testing following a positive NAAT screening test no longer is recommended by CDC unless otherwise indicated in the product insert. Some NAATs might detect nongonococcal Neisseria species (Table 3); when these NAATs are used, consideration should be given to retesting specimens with an alternate target assay if the anatomic site from which the specimen was collected is typically colonized with these commensal organisms (e.g., oropharyngeal specimens). As with any diagnostic test, if there is a clinical or laboratory reason to question a test result, a repeat test should be considered.
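The prevalence dependence of PPV that motivated the 2002 advice can be illustrated with Bayes' rule. The sensitivity, specificity, and prevalence values below are illustrative examples consistent with the ranges cited in this report, not assay-specific claims.

```python
# Worked example (illustrative values): positive predictive value of a
# screening NAAT as a function of prevalence, via Bayes' rule.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# With sensitivity 92% and specificity 99%, PPV falls as prevalence drops,
# which is why confirmation of positives mattered most in low-prevalence
# screening populations.
for prev in (0.10, 0.03, 0.005):
    print(f"prevalence {prev:>5.1%}: PPV = {ppv(0.92, 0.99, prev):.1%}")
# prevalence 10.0%: PPV = 91.1%
# prevalence  3.0%: PPV = 74.0%
# prevalence  0.5%: PPV = 31.6%
```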
Test interpretation. The laboratory should interpret and report results according to the manufacturer's package insert instructions. In the event of discordant results from multiple tests, the report should indicate the results of both the initial and any additional tests; an interpretation of "inconclusive," "equivocal," or "indeterminate" would be most appropriate, and a new specimen should be requested for testing. All test results should be interpreted by clinicians within the context of patient-specific information to determine appropriate patient management.
Test of cure. Culture is the only method that can be used to properly assess the efficacy of antibiotic therapy because commercial NAATs are not FDA-cleared for use as a test of cure. Residual nucleic acid from bacteria rendered noninfective by antibiotics might still yield a positive C. trachomatis NAAT result up to 3 weeks after therapy (108,109). Detection of N. gonorrhoeae nucleic acid has been observed for up to 2 weeks following therapy, although the vast majority of patients who were treated effectively for gonorrhea had a negative NAAT 1 week after treatment (110). However, data from these studies were derived from older NAATs, and the studies should be repeated with current NAATs.
Pooling of specimens. The superior performance characteristics of NAATs for detection of C. trachomatis and N. gonorrhoeae have led some researchers to pool urine specimens in an attempt to reduce the higher material costs associated with their use (111)(112)(113). Aliquots of individual specimens are first combined into a pool, which is then tested by a NAAT. If the pool is negative, all specimens forming the pool are reported as negative. If the pool is positive, a second aliquot of each specimen that contributed to the pool is tested individually. The potential cost savings with pooling increase with decreasing prevalence of infection because more specimens can be included in a pool without increasing the probability of a pool testing positive. The number of specimens pooled to achieve the greatest cost savings for a particular prevalence can be calculated (111), as illustrated in the sketch below. Available evidence indicates that pooled aliquots from up to 10 urine specimens can be a cost-effective alternative to testing individual specimens without any loss of sensitivity or specificity (111). Savings from reduced reagent costs have ranged from 40% to 60% (111). However, the increased complexity of the pooling protocol might require more personnel time to deconstruct positive pools for individual specimen testing. The use of pooled specimens is not cleared by FDA; therefore, the CLIA requirements applicable to modifying a test procedure must be met before implementation and before reporting results intended to guide patient care. Laboratories must be aware that pooling requires extensive handling of samples, which increases the potential for cross-contamination. Studies of pooling clinical specimens other than urine are required before this recommendation can be extended.
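The expected-tests calculation cited above (111) follows the classic two-stage (Dorfman) pooling arithmetic. The sketch below is illustrative only; it assumes independent specimens and no loss of sensitivity from dilution, and the prevalence values are examples.

```python
# Minimal sketch of two-stage (Dorfman) pooling arithmetic, assuming
# independent specimens and a test unaffected by dilution.
def expected_tests_per_specimen(pool_size: int, prevalence: float) -> float:
    """Expected tests per specimen: one pooled test per k specimens, plus
    k individual retests whenever at least one specimen is positive."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

def optimal_pool_size(prevalence: float, max_pool: int = 10) -> int:
    """Pool size (2..max_pool) minimizing expected tests per specimen."""
    return min(range(2, max_pool + 1),
               key=lambda k: expected_tests_per_specimen(k, prevalence))

# At 2% prevalence, pools of ~8 cut reagent use by roughly 70%; at 10%
# prevalence the savings shrink, illustrating why pooling pays off
# mainly in low-prevalence screening populations.
for prev in (0.02, 0.10):
    k = optimal_pool_size(prev)
    print(prev, k, round(expected_tests_per_specimen(k, prev), 2))
# 0.02 8 0.27
# 0.1 4 0.59
```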
# Tests Not Recommended for Routine Use
Direct fluorescent antibody (DFA) tests. These assays should not be used for routine testing of genital tract specimens; rather, DFA tests are the only FDA-cleared tests for ocular C. trachomatis infections. Depending on the commercial product used, the antigen detected by the antibody in the C. trachomatis DFA procedure is either the MOMP or the LPS molecule. Specimen material is obtained with a swab or endocervical brush, which is then rolled over the specimen well of a slide. After the slide has dried and the fixative has been applied, the slide can be stored or shipped at ambient temperature. The laboratory should process the slide within 7 days after the specimen has been obtained. Staining consists of flooding the smear with fluorescein-labeled monoclonal antibody that binds to C. trachomatis elementary bodies; stained elementary bodies are then identified by fluorescence microscopy. Only C. trachomatis organisms will stain with the anti-MOMP antibodies used in commercial kits. The anti-LPS monoclonal antibodies react with family-specific epitopes within the LPS of Chlamydiaceae and might cross-react with the LPS of other bacteria. The procedure requires an experienced microscopist and is labor-intensive and time-consuming. No DFA tests exist for the direct detection of N. gonorrhoeae in clinical specimens.
Nucleic acid hybridization/probe tests. Two nucleic acid hybridization assays are FDA-cleared to detect C. trachomatis or N. gonorrhoeae: the Hologic/Gen-Probe PACE 2 and the Digene Hybrid Capture II assays. Both the PACE and Hybrid Capture assays can detect C. trachomatis or N. gonorrhoeae in a single specimen. The Hybrid Capture assay is not widely available, and the PACE 2C test was discontinued December 31, 2012.
Nucleic acid genetic transformation tests. The Gonostat test (Sierra Diagnostics, Incorporated, Sonora, California) uses a gonococcal mutant that grows when genetically altered by DNA extracted from a swab specimen containing N. gonorrhoeae. N. meningitidis causes false-positive results (114). The test has received limited evaluation in published studies (115)(116)(117)(118), which included an evaluation of its use with mailed specimens (117). Amplified and hybridization tests that detect N. gonorrhoeae nucleic acid have better performance characteristics than the Gonostat test. The gonorrhea nucleic acid genetic transformation test might have some utility in settings that cannot meet the stringent requirements for gonorrhea culture. However, it is not recommended as an alternative to N. gonorrhoeae NAATs. A genetic transformation test is not available for detection of C. trachomatis infection.
Enzyme immunoassay (EIA) tests. A substantial number of EIA tests have been marketed for detecting C. trachomatis infection. The performance and cost characteristics of EIA tests for N. gonorrhoeae infection have not made them competitive with other available tests (56). C. trachomatis EIA tests detect chlamydial LPS, creating the potential for false-positive results caused by cross-reaction with the LPS of other microorganisms. Manufacturers have developed blocking assays that verify positive EIA test results to improve specificity. None of the EIAs is as sensitive or specific as NAATs, and their use is discouraged.
Serology tests. Serology has little, if any, value in testing for uncomplicated genital C. trachomatis infection and should not be used for screening because previous chlamydial infection might or might not elicit a systemic antibody response. Infections caused by LGV serovars of C. trachomatis tend to invade the draining lymph nodes, resulting in a greater likelihood of a detectable systemic antibody response; serology therefore might aid in the diagnosis of inguinal (but not rectal) disease (99). The complement fixation test was classically used for this purpose but has been replaced by the more sensitive species-specific microimmunofluorescence test. No serologic screening or diagnostic assay is available for N. gonorrhoeae.
# Conclusion
Clinical laboratory diagnostics have advanced considerably through the direct molecular detection of a pathogen in a clinical specimen rather than reliance on isolation and cultivation. This approach has decreased the time required to identify a pathogen because the laboratory is no longer limited by the growth kinetics of the organism. Patients therefore can be evaluated and, if infected, treated promptly, diminishing progression to disease and disrupting transmission. As with all changes in laboratory technology, a synthesis of scientific evidence is required for an informed decision regarding implementation of a new or improved test platform. Previous CDC recommendations to use NAATs as the standard laboratory test for the detection of chlamydia and gonorrhea remain in effect. These updated recommendations now specify that vaginal swabs are the preferred specimen for screening women and include the use of rectal and oropharyngeal specimens among populations at risk for extragenital tract infections. FDA clearance is important for widespread use of a test, and it is important that clearance be obtained for NAAT use with rectal and oropharyngeal specimens and with vaginal swabs collected outside clinic settings.
Future revisions to these recommendations will be influenced by the development and marketing of new laboratory tests, or new indications for existing tests, for chlamydia and gonorrhea.
Improvements in molecular tests that continue to decrease detection time and test complexity might facilitate the use of NAATs in nontraditional laboratory settings such as physician offices, health fairs, school clinics, or other outreach venues. Shifting chlamydia and gonorrhea diagnostics out of laboratories might require new recommendations on test application or on reporting positive cases of reportable diseases. Periodic updates to these recommendations will be available on the CDC Division of STD Prevention website (http://www.cdc.gov/std).
"id": "1b93ab9bea8b541c62f558a12718fd710c3ea466",
"source": "cdc",
"title": "None",
"url": "None"
} |
These revised Immunization Practices Advisory Committee (ACIP) recommendations on yellow fever vaccine update previous recommendations (MMWR 1984;32:679-88). Changes have been made to clarify 1) the risks of acquiring yellow fever associated with travel to endemic areas, 2) the precautions necessary for vaccination of special groups (immunosuppressed individuals, infants, pregnant women), and 3) simultaneous administration of cholera vaccine and other vaccines.
# INTRODUCTION
Yellow fever presently occurs only in Africa and South America. Two forms of yellow fever--urban and jungle--are epidemiologically distinguishable. Clinically and etiologically they are identical (1,2). Urban yellow fever is an epidemic viral disease of humans transmitted from infected to susceptible persons by Aedes aegypti mosquitoes, which breed in domestic and peridomestic containers (e.g., water jars, barrels, drums, tires, tin cans) and thus in close association with humans. In areas where Ae. aegypti has been eliminated or suppressed, urban yellow fever has disappeared. In the early 1900s, eradication of Ae. aegypti in a number of countries, notably Panama,
Urban yellow fever can be prevented by eradicating Ae. aegypti mosquitoes or by suppressing their numbers to the point that they no longer perpetuate infection. Jungle yellow fever can most effectively be prevented by vaccination of human populations at risk of exposure.
# YELLOW FEVER VACCINE
Yellow fever vaccine is a live, attenuated virus preparation made from the 17D yellow fever virus strain (4). The 17D vaccine is safe and effective (5). The virus is grown in chick embryos inoculated with a seed virus of a fixed-passage level. The vaccine is a freeze-dried supernate of centrifuged embryo homogenate, packaged in 1-dose and 5-dose vials for domestic use.
Vaccine should be stored at temperatures between 5 C (41 F) and -30 C (-22 F)--preferably frozen, below 0 C (32 F)--until it is reconstituted by the addition of diluent (sterile physiologic saline) supplied by the manufacturer. Multiple-dose vials of reconstituted vaccine should be held at 5 C-10 C (41 F-50 F); unused vaccine should be discarded within 1 hour after reconstitution.
# VACCINE USAGE
A. Persons living or traveling in endemic areas
1. Persons greater than or equal to 9 months of age traveling to or living in areas of South America and Africa where yellow fever infection is officially reported should be vaccinated. These areas are listed in the "Bi-Weekly Summary of Countries with Areas Infected with Quarantinable Diseases," available in state and local health departments. Information on known or probably infected areas is also available from the World Health Organization (WHO) and Pan American Health Organization offices or the Division of Vector-Borne Infectious Diseases, Center for Infectious Diseases, CDC, Fort Collins, Colorado, telephone (303) 221-6400. Vaccination is also recommended for travel outside the urban areas of countries that do not officially report the disease but that lie in the yellow fever endemic zone (shaded area, Figure 1). The actual areas of yellow fever virus activity far exceed the infected zones officially reported; in recent years, fatal cases of yellow fever have occurred among unvaccinated tourists visiting rural areas within the yellow fever endemic zone (6).
2. Infants less than 9 months of age and pregnant women should be considered for vaccination if traveling to areas experiencing ongoing epidemic yellow fever when travel cannot be postponed and a high level of prevention against mosquito exposure is not feasible. However, in no instance should infants less than 4 months of age receive yellow fever vaccine because of the risk of encephalitis (see Precautions and Contraindications).
3. Laboratory personnel who might be exposed to virulent yellow fever virus by direct or indirect contact or by aerosols should also be vaccinated.
B. Vaccination for international travel.
For purposes of international travel, yellow fever vaccines produced by different manufacturers worldwide must be approved by WHO and administered at an approved Yellow Fever Vaccination Center. State and territorial health departments have the authority to designate nonfederal vaccination centers; these can be identified by contacting state or local health departments. Vaccinees should receive an International Certificate of Vaccination completed, signed, and validated with the center's stamp where the vaccine is given. Vaccination for international travel may be required under circumstances other than those specified herein. Some countries in Africa require evidence of vaccination from all entering travelers. Some countries may waive the requirements for travelers coming from noninfected areas and staying less than 2 weeks. Because requirements may change, all travelers should seek current information from health departments. Travel agencies, international airlines, and/or shipping lines should also have up-to-date information. Some countries require an individual, even if only in transit, to have a valid International Certificate of Vaccination if s/he has been in countries either known or thought to harbor yellow fever virus. Such requirements may be strictly enforced, particularly for persons traveling from Africa or South America to Asia. Travelers should consult Health Information for International Travel 1989 (7) to determine requirements and regulations for vaccination.
C. Primary vaccination.
For persons of all ages, a single subcutaneous injection of 0.5 ml of reconstituted vaccine is used.
D. Booster doses.
The International Health Regulations require revaccination at intervals of 10 years. Revaccination boosts antibody titer; however, evidence from several studies (8-10) suggests that yellow fever vaccine immunity persists for at least 30-35 years and probably for life.
# REACTIONS
Reactions to 17D yellow fever vaccine are generally mild. After vaccination, 2%-5% of vaccinees have mild headaches, myalgia, low-grade fevers, or other minor symptoms for 5-10 days. Fewer than 0.2% of the vaccinees curtail regular activities. Immediate hypersensitivity reactions, characterized by rash, urticaria, and/or asthma, are uncommon (incidence less than 1/1,000,000) and occur principally among persons with histories of egg allergy. Although greater than 34 million doses of vaccine have been distributed, only two cases of encephalitis temporally associated with vaccinations have been reported in the United States; in one fatal case, 17D virus was isolated from the brain.
# PRECAUTIONS AND CONTRAINDICATIONS
A. Age. Infants less than 4 months of age are more susceptible to serious adverse reactions (encephalitis) than older children. The risk of this complication appears to be age-related; whenever possible, vaccination should be delayed until age 9 months.
B. Pregnancy. Although specific information is not available concerning adverse effects of yellow fever vaccine on the developing fetus, pregnant women theoretically should not be vaccinated, and travel to areas where yellow fever is present should be postponed until after delivery. If international travel requirements constitute the only reason to vaccinate a pregnant woman, rather than an increased risk of infection, efforts should be made to obtain a waiver letter from the traveler's physician (see section D. Hypersensitivity). Pregnant women who must travel to areas where the risk of yellow fever is high should be vaccinated. Under these circumstances, for both mother and fetus, the small theoretical risk from vaccination is far outweighed by the risk of yellow fever infection.
C. Altered immune states. Infection with yellow fever vaccine virus poses a theoretical risk of encephalitis to patients with immunosuppression in association with acquired immunodeficiency syndrome (AIDS) or other manifestations of human immunodeficiency virus (HIV) infection, leukemia, lymphoma, generalized malignancy, or to those whose immunologic responses are suppressed by corticosteroids, alkylating drugs, antimetabolites, or radiation. Such patients should not be vaccinated. If travel to a yellow fever-infected zone is necessary, patients should be advised of the risk, instructed in methods for avoiding vector mosquitoes, and supplied with vaccination waiver letters by their physicians. Low-dose (10 mg prednisone or equivalent) or short-term (less than 2 weeks) corticosteroid therapy or intra-articular, bursal, or tendon injections with corticosteroids should not be immunosuppressive and constitute no increased hazard to recipients of yellow fever vaccine. Persons who have had previously diagnosed asymptomatic HIV infections and who cannot avoid potential exposure to yellow fever virus should be offered the choice of vaccination. Vaccinees should be monitored for possible adverse effects. Since the vaccination of such persons may be less effective than that for non-HIV-infected persons, measurement of their neutralizing antibody response to vaccination may be desired before travel. For such determinations, the appropriate state health department or CDC ((303) 221-6400) may be contacted. Family members of immunosuppressed persons, who themselves have no contraindications, may receive yellow fever vaccine.
D. Hypersensitivity. Live yellow fever vaccine is produced in chick embryos and should not be given to persons hypersensitive to eggs; generally, persons who are able to eat eggs or egg products may receive the vaccine. If international travel regulations are the only reason to vaccinate a patient hypersensitive to eggs, efforts should be made to obtain a waiver. A physician's letter stating the contraindication to vaccination has been acceptable to some governments. (Ideally, it should be written on letterhead stationery and bear the stamp used by health department and official immunization centers to validate the International Certificate of Vaccination.) Under these conditions, the traveler should also obtain specific and authoritative advice from the embassy or consulate of the country or countries s/he plans to visit. Waivers of requirements obtained from embassies or consulates should be documented by appropriate letters and retained for presentation with the International Health Certificate. If vaccination of an individual with a questionable history of egg hypersensitivity is considered essential because of a high risk of exposure, an intradermal test dose may be administered under close medical supervision. Specific directions for skin testing are found in the package insert.
# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES
Determination of whether to administer yellow fever vaccine and other immunobiologics simultaneously should be made on the basis of convenience to the traveler in completing the desired vaccinations before travel and on information regarding possible interference. The following will help guide these decisions.
Studies have shown that the serologic response to yellow fever vaccine is not inhibited by the administration of certain other vaccines concurrently or at various intervals of a few days to 1 month. Measles and yellow fever vaccines have been administered in combination with full efficacy of each of the components; Bacillus Calmette-Guérin (BCG) and yellow fever vaccines have been administered simultaneously without interference. Additionally, severity of reactions to vaccination has not been amplified by the concurrent administration of yellow fever and other live virus vaccines (11). If live virus vaccines are not given concurrently, 4 weeks should elapse between sequential vaccinations.
Some data have indicated that persons given yellow fever and cholera vaccines simultaneously or 1-3 weeks apart had lower than normal antibody responses to both vaccines (12,13). Unless there are time constraints, cholera and yellow fever vaccines should be administered at a minimal interval of 3 weeks. If the vaccines cannot be administered at least 3 weeks apart, the vaccines can be given simultaneously or at any time within the 3-week interval.
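The interval rules in this and the preceding paragraph amount to simple date arithmetic. The following is a minimal sketch assuming only the intervals stated above (4 weeks between sequential live-virus vaccines; a preferred minimum of 3 weeks between cholera and yellow fever vaccines); it is an illustration, not a clinical decision tool.

```python
# Minimal sketch of the interval rules stated above; an illustration,
# not a clinical decision tool.
from datetime import date, timedelta

LIVE_VIRUS_INTERVAL = timedelta(weeks=4)  # sequential live-virus vaccines
CHOLERA_YF_INTERVAL = timedelta(weeks=3)  # preferred cholera/yellow fever gap

def earliest_sequential_live_vaccine(prior_dose: date) -> date:
    """Earliest date for a live-virus vaccine not given concurrently."""
    return prior_dose + LIVE_VIRUS_INTERVAL

def cholera_yf_spacing_preferred(yf_dose: date, cholera_dose: date) -> bool:
    """True if the doses are simultaneous or >=3 weeks apart; shorter gaps
    are permissible under time constraints but may blunt the response."""
    gap = abs(cholera_dose - yf_dose)
    return gap == timedelta(0) or gap >= CHOLERA_YF_INTERVAL

yf = date(1990, 6, 1)
print(earliest_sequential_live_vaccine(yf))                 # 1990-06-29
print(cholera_yf_spacing_preferred(yf, date(1990, 6, 10)))  # False (9 days)
```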
Hepatitis B and yellow fever vaccines may be given concurrently (14). No data exist on possible interference between yellow fever and typhoid, paratyphoid, typhus, plague, rabies, or Japanese encephalitis vaccines.
In a prospective study of persons given yellow fever vaccine and 5 cc of commercially available immune globulin, no alteration of the immunologic response to yellow fever vaccine was detected when compared with controls (15). Although chloroquine inhibits replication of yellow fever virus in vitro, it does not adversely affect antibody responses to yellow fever vaccine in humans receiving antimalaria prophylaxis (16).
"id": "8410f4823cb91e3be9e4e814e5cecfc3c9f78ea2",
"source": "cdc",
"title": "None",
"url": "None"
} |
During 1993-2003, incidence of tuberculosis (TB) in the United States decreased 44% and is now occurring at a historic low level (14,874 cases in 2003). The Advisory Council for the Elimination of Tuberculosis has called for a renewed commitment to eliminating TB in the United States, and the Institute of Medicine has published a detailed plan for achieving that goal. In this statement, the American Thoracic Society (ATS), CDC, and the Infectious Diseases Society of America (IDSA) propose recommendations to improve the control and prevention of TB in the United States and to progress toward its elimination. This statement is one in a series issued periodically by the sponsoring organizations to guide the diagnosis, treatment, control, and prevention of TB. This statement supersedes the previous statement by ATS and CDC, which was also supported by IDSA and the American Academy of Pediatrics (AAP). This statement was drafted, after an evidence-based review of the subject, by a panel of representatives of the three sponsoring organizations. AAP, the National Tuberculosis Controllers Association, and the Canadian Thoracic Society were also represented on the panel. This statement integrates recent scientific advances with current epidemiologic data, other recent guidelines from this series, and other sources into a coherent and practical approach to the control of TB in the United States. Although drafted to apply to TB control activities in the United States, this statement might be of use in other countries in which persons with TB generally have access to medical and public health services and resources necessary to make a precise diagnosis of the disease; achieve curative medical treatment; and otherwise provide substantial science-based protection of the population against TB. This statement is aimed at all persons who advocate, plan, and work at controlling and preventing TB in the United States, including persons who formulate public health policy and make decisions about allocation of resources for disease control and health maintenance and directors and staff members of state, county, and local public health agencies throughout the United States charged with control of TB. The audience also includes the full range of medical practitioners, organizations, and institutions involved in the health care of persons in the United States who are at risk for TB.
# Introduction
During 1993-2003, incidence of tuberculosis (TB) in the United States decreased 44% and is now occurring at a historic low level (14,874 cases in 2003). The Advisory Council for the Elimination of Tuberculosis (ACET) (1) has called for a renewed commitment to eliminating TB in the United States, and the Institute of Medicine (IOM) (2) has published a detailed plan for achieving that goal. In this statement, the American Thoracic Society (ATS), CDC, and the Infectious Diseases Society of America (IDSA) propose recommendations to improve the control and prevention of TB in the United States and to progress toward its elimination.
This statement is one in a series issued periodically by the sponsoring organizations to guide the diagnosis, treatment, control, and prevention of TB (3)(4)(5). This statement supersedes one published in 1992 by ATS and CDC, which also was supported by IDSA and the American Academy of Pediatrics (AAP) (6). This statement was drafted, after an evidence-based review of the subject, by a panel of representatives of the three sponsoring organizations. AAP, the National Tuberculosis Controllers Association (NTCA), and the Canadian Thoracic Society were also represented on the panel. The recommendations contained in this statement (see Graded Recommendations for the Control and Prevention of Tuberculosis) were rated for their strength by use of a letter grade and for the quality of the evidence on which they were based by use of a Roman numeral (Table 1) (7). No rating was assigned to recommendations that are considered to be standard practice (i.e., medical or administrative practices conducted routinely by qualified persons who are experienced in their fields).
This statement integrates recent scientific advances with current epidemiologic data, other recent guidelines from this series (3)(4)(5), and other sources (2,(8)(9)(10) into a coherent and practical approach to the control of TB in the United States.
Although drafted to apply to TB control activities in the United States, this statement might be of use in other countries in which persons with TB generally have access to medical and public health services and resources necessary to make a precise diagnosis of the disease; achieve curative medical treatment; and otherwise provide substantial science-based protection of the population against TB. This statement is aimed at all persons who advocate, plan, and work at controlling and preventing TB in the United States, including persons who formulate public health policy and make decisions about allocation of resources for disease control and health maintenance and directors and staff members of state, county, and local public health agencies throughout the United States charged with control of TB. The audience also includes the full range of medical practitioners, organizations, and institutions involved in the health care of persons in the United States who are at risk for TB.
Throughout this document, the terms latent TB infection (LTBI), TB, TB disease, and infectious TB disease are used. LTBI is used to designate a condition in which an individual is infected with Mycobacterium tuberculosis but does not currently have active disease. Such patients are at risk for progressing to tuberculosis disease. Treatment of LTBI (previously called preventive therapy or chemoprophylaxis) is indicated for those at increased risk for progression as described in the text. Persons with LTBI are asymptomatic and have a negative chest radiograph. TB, TB disease, and infectious TB indicate that the disease caused by M. tuberculosis is clinically active; patients with TB are generally symptomatic for disease. Positive culture results for M. tuberculosis complex are an indication of TB disease. Infectious TB refers to TB disease of the lungs or larynx; persons with infectious TB have the potential to transmit M. tuberculosis to other persons.
# Progress Toward TB Elimination
A strategic plan for the elimination of TB in the United States was published in 1989 (11), when the United States was experiencing a resurgence of TB (Figure 1). The TB resurgence was attributable to the expansion of HIV infection, nosocomial transmission of M. tuberculosis, multidrug-resistant TB, and increasing immigration from countries with a high incidence of TB. Decision makers also realized that the U.S. infrastructure for TB control had deteriorated (12); this problem was corrected by a substantial infusion of resources at the national, state, and local levels (13). As a result, the increasing incidence of TB was arrested; during 1993-2003, an uninterrupted 44% decline in incidence occurred, and in 2003, TB incidence reached a historic low level. This success in responding to the first resurgence of TB in decades indicates that a coherent national strategy; coordination of local, state, and federal action; and availability of adequate resources can result in dramatic declines in TB incidence. This success also raised again the possibility of eliminating TB, and in 1999, ACET reaffirmed the goal of tuberculosis elimination in the United States (1).
The prospect of eliminating tuberculosis was critically analyzed in an independent study published by IOM in 2000 (2). The IOM study concluded that TB could ultimately be eliminated but that at the present rate of decline, elimination would take >70 years. Calling for greater levels of effort and resources than were then available, the IOM report proposed a comprehensive plan to 1) adjust control measures to the declining incidence of disease; 2) accelerate the decline in incidence by increasing targeted testing and treatment of LTBI; 3) develop new tools for diagnosis, treatment, and prevention; 4) increase U.S. involvement in global control of TB; and 5) mobilize and sustain public support for TB elimination. The report also noted the cyclical nature of the U.S. response to TB and warned against allowing another "cycle of neglect" to occur, similar to that which caused the 1985-1992 resurgence.
As noted, the 44% decrease in incidence of TB in the United States during 1993-2003 (14,15) has been attributed to the development of effective interventions enabled by increased resources at the national, state, and local levels (1,2,16). Whereas institutional resources targeted specific problems such as transmission of TB in health-care facilities, public resources were earmarked largely for public health agencies, which used them to rebuild the TB-control infrastructure (13,17). A primary objective of these efforts was to increase the rate of completion of therapy among persons with TB, which was achieved by innovative case-management strategies, including greater use of directly observed therapy (DOT). During 1993-2000, the percentage of persons with reported TB who received DOT alone or in combination with self-supervised treatment increased from 38% to 78%, and the proportion of persons who completed therapy in <1 year after receiving a diagnosis increased from 63% to 80% (14). Continued progress in the control of TB in the United States will require consolidation of the gains made through improved cure rates and implementation of new strategies to further reduce incidence of TB.
# Challenges to Progress Toward TB Elimination
The development of optimal strategies to guide continuing efforts in TB control depends on understanding the challenges confronting the effort. The five most important challenges to successful control of TB in the United States are 1) prevalence of TB among foreign-born persons residing in the United States; 2) delays in detecting and reporting cases of pulmonary TB; 3) deficiencies in protecting contacts of persons with infectious TB and in preventing and responding to TB outbreaks; 4) persistence of a substantial population of persons living in the United States with LTBI who are at risk for progression to TB disease; and 5) maintaining clinical and public health expertise in an era of declining TB incidence. These five concerns (Box 1) serve as the focal point for the recommendations made in this statement to control and prevent TB in the United States.
# Prevalence of TB Among Foreign-Born Persons Residing in the United States
Once a disease that predominantly affected U.S.-born persons, TB now affects a comparable number of foreign-born persons who reside in the United States permanently or temporarily, although such persons make up only 11% of the U.S. population (14). During 1993-2003, as TB incidence in the United States declined sharply, incidence among foreign-born persons changed little (14). Lack of access to medical services because of cultural, linguistic, financial, or legal barriers results in delays in diagnosis and treatment of TB among foreign-born persons and in ongoing transmission of the disease (18)(19)(20)(21). Successful control of TB in the United States and progress toward its elimination depend on the development of effective strategies to control and prevent the disease among foreign-born persons.
# Delays in Detection and Reporting of Cases of Pulmonary TB
New cases of infectious TB should be diagnosed and reported as early as possible in the course of the illness so curative treatment can be initiated, transmission interrupted, and public health responses (e.g., contact investigation and case-management services) promptly arranged. However, delays in case detection and reporting continue to occur; these delays are attributed to medical errors (22)(23)(24)(25)(26) and to patient factors (e.g., lack of understanding about TB, fear of the authorities, and lack of access to medical services) (18)(19)(20). In addition, genotyping studies have revealed evidence of persistent transmission of M. tuberculosis in communities that have implemented highly successful control measures (27)(28)(29), suggesting that such transmission occurred before a diagnosis was received. Improvements in the detection of TB cases, leading to earlier diagnosis and treatment, would bring substantial benefits to affected patients and their contacts, decrease TB among children, and prevent outbreaks.
# Deficiencies in Protecting Contacts of Persons with Infectious TB and in Preventing and Responding to TB Outbreaks
Although following up contacts is among the highest public health priorities in responding to a case of TB, problems in conducting contact investigations have been reported (30)(31)(32). Approaches to contact investigations vary widely from program to program, and traditional investigative methods are not well adapted to certain populations at high risk. Only half of at-risk contacts complete a course of treatment for LTBI (32). Reducing the risk of TB among contacts through the development of better methods of identification, evaluation, and management would lead to substantial personal and public health benefits and facilitate progress toward eliminating TB in the United States.
Delayed detection of TB cases and suboptimal contact investigation can lead to TB outbreaks, which are increasingly reported (26,(33)(34)(35)(36)(37)(38). Persistent social problems such as crowding in homeless shelters and detention facilities are contributing factors to the upsurge in TB outbreaks. The majority of jurisdictions lack the expertise and resources needed to conduct surveillance for TB outbreaks and to respond effectively when they occur. Outbreaks have become an important element in the epidemiology of TB, and measures to detect, manage, and prevent them are needed.
# Persistence of a Substantial Population of Persons Living in the United States with LTBI Who Are at Risk for Progression to TB Disease
An estimated 9.6-14.9 million persons residing in the United States have LTBI (39). This pool of persons with latent infection is continually supplemented by immigration from areas of the world with a high incidence of TB and by ongoing person-to-person transmission among certain populations at high risk. For TB disease to be prevented among persons with LTBI, those at highest risk must be identified and receive curative treatment (4). Progress toward the elimination of TB in the United States requires the development of new cost-effective strategies for targeted testing and treatment of persons with LTBI (17,40).
# Maintaining Clinical and Public Health Expertise in an Era of Declining TB Incidence
Detecting a TB case, curing a person with TB, and protecting contacts of such persons requires that clinicians and the staff members of public health agencies responsible for TB have specific expertise. However, as TB becomes less common, maintaining such expertise throughout the loosely coordinated TB-control system is challenging. As noted previously, medical errors associated with the detection of TB cases are common, and deficiencies exist in important public health responsibilities such as contact investigations and outbreak response. Errors in the treatment and management of TB patients continue to occur (41,42). Innovative approaches to education of medical practitioners, new models for organizing TB services (2), and a clear understanding and acceptance of roles and responsibilities by an expanded group of participants in TB control will be needed to ensure that the clinical and public health expertise necessary to progress toward the elimination of TB are maintained.
# Meeting the Challenges to TB Elimination
Further improvements in the control and prevention of TB in the United States will require a continued strong public health infrastructure and involvement of a range of health professionals outside the public health sector. The traditional model of TB control in the United States, in which planning and execution reside almost exclusively with the public health sector (17), is no longer the optimal approach during a sustained drive toward the elimination of TB. This statement emphasizes that success in controlling TB and progressing toward its elimination in the United States will depend on the integrated activities of professionals from different fields in the health sciences. This statement proposes specific measures to enhance TB control so as to meet the most important challenges; affirms the essential role of the public health sector in planning, coordinating, and evaluating the effort (43); proposes roles and responsibilities for the full range of participants; and introduces new approaches to the detection of TB cases, contact investigations, and targeted testing and treatment of persons with LTBI.
The plan to reduce the incidence of TB in the United States must be viewed in the larger context of the global effort to control TB. The global TB burden is substantial and increasing. In 2000, an estimated 8.3 million (7.9-9.2 million) new cases of TB occurred, and 1.84 million (1.59-2.22 million) persons died from TB; during 1997-2000, the worldwide TB case rate increased 1.8%/year (44). TB is increasing worldwide as a result of inadequate local resources and the global epidemic of HIV infection. In sub-Saharan Africa, the rate of TB cases is increasing 6.4%/year (44). ACET (1), IOM (2), and other public health authorities (45,46) have acknowledged that TB will not be eliminated in the United States until the global epidemic is brought under control, and they have called for greater U.S. involvement in global control efforts. In response, CDC and ATS have become active participants in a multinational partnership (Stop TB Partnership) that was formed to guide the global efforts against TB. U.S. public and private entities also have provided assistance to countries with a high burden of TB and funding for research to develop new, improved tools for diagnosis, treatment, and prevention, including an effective vaccine.
Despite the global TB epidemic, substantial gains can be made toward elimination of TB in the United States by focusing on improvements in existing clinical and public health practices (47)(48)(49). However, the drive toward TB elimination in the United States will be resource-intensive (1,12). Public health agencies that plan and coordinate TB-control efforts in states and communities need sufficient strength in terms of personnel, facilities, and training to discharge their responsibilities successfully, and the growing number of nonpublic health contributors to TB control, all pursuing diverse individual and institutional goals, should receive value for their contributions. Continued progress toward TB elimination in the United States will require strengthening the nation's public health infrastructure rather than reducing it (1,50).
# Basic Principles of TB Control in the United States
Four prioritized strategies exist to prevent and control TB in the United States (17), as follows:
- The first strategy is to promptly detect and report persons who have contracted TB. Because the majority of persons with TB receive a diagnosis when they seek medical care for symptoms caused by progression of the disease, health-care providers, particularly those providing primary health care to populations at high risk, are key contributors to the detection of TB cases and to case reporting to the jurisdictional public health agency for surveillance purposes and for facilitating a treatment plan and case-management services.
- The second strategy is to protect close contacts of patients with contagious TB from contracting TB infection and disease. Contact evaluation not only identifies persons in the early stages of LTBI, when the risk for disease is greatest (30)(31)(32), but is also an important tool to detect further cases of TB disease.
- The third strategy is to take concerted action to prevent TB among the substantial population of U.S. residents with LTBI. This is accomplished by identifying those at highest risk for progression from latent infection to active TB through targeted testing and administration of a curative course of treatment (4). Two approaches exist for increasing targeted testing and treatment of LTBI. The first approach is to encourage clinic-based testing of persons who are under a clinician's care for a medical condition, such as human immunodeficiency virus (HIV) infection or diabetes mellitus, who are at risk for progressing from LTBI to active TB (4). The second approach is to establish specific programs to reach persons who have an increased prevalence of LTBI, an increased risk for developing active disease if LTBI is present, or both (51).
- The fourth strategy is to reduce the rising burden of TB from recent transmission of M. tuberculosis by identifying settings at high risk for transmission and applying effective infection-control measures to reduce the risk. This strategy was used during the 1985-1992 TB resurgence, when disease attributable to recent transmission was an important component of the increase in TB incidence (52)(53)(54). TB morbidity attributable to recent spread of M. tuberculosis continues to be a prominent part of the epidemiology of the disease in the United States. Data collected by CDC's National Tuberculosis Genotyping and Surveillance Network at seven sentinel surveillance sites indicate that 44% of M. tuberculosis isolates from persons with newly diagnosed cases of TB were clustered with at least one other intrasite isolate, often representing TB disease associated with recent spread of M. tuberculosis (55). TB outbreaks are also being reported with greater frequency in correctional facilities (37), homeless shelters (33), bars (27), and newly recognized social settings (e.g., among persons in an East Coast network of gay, transvestite, and transsexual HIV-infected men; persons frequenting an abandoned junkyard building used for illicit drug use and prostitution; and dancers in adult entertainment clubs and their contacts, including children). Institutional infection-control measures developed in the 1990s in response to the 1985-1992 resurgence in transmission of M. tuberculosis in the United States (10) have been highly successful in health-care facilities (56). However, newly recognized high-risk environments (26,27,33,34,37,38) present challenges to the implementation of effective infection-control measures. Further attention is required to control the transmission of M. tuberculosis in these environments.
# Structure of this Statement
This statement provides comprehensive guidelines for the full spectrum of activities involved in controlling and preventing TB in the United States. The remainder of this statement is structured in eight sections, as follows:
- Scientific Basis of TB Control. This section reviews the base of knowledge of how TB is transmitted and how the disease is distributed in the U.S. population, including new information based on genotyping studies. It provides basic background information as a review for current workers in the field and orients health-care professionals who become new participants in TB-control efforts.
- Principles and Practice of TB Control. This section makes the transition from the scientific knowledge base to clinical and public health practice by discussing the goal of TB control in the United States, which is to reduce the morbidity and mortality caused by TB by preventing transmission of M. tuberculosis from persons with contagious forms of the disease to uninfected persons and preventing progression from LTBI to TB disease among persons who have contracted M. tuberculosis infection. This section also provides basic background information as a review for current workers in the field and serves as an orientation for health-care professionals who become new participants in TB-control efforts.
- Recommended Roles and Responsibilities for TB Control. This section outlines roles and responsibilities for the spectrum of participants in the diverse clinical and public health activities that lead to the control and prevention of TB. The paramount role of the public health sector is reviewed, followed by proposed responsibilities for nine prominent nonpublic health partners in tuberculosis control: medical practitioners, civil surgeons, community health centers, hospitals, academic institutions, medical professional organizations, community-based organizations, correctional facilities, and the pharmaceutical and biotechnology industries. Because responsibilities for the nonpublic health sector have not been specified previously, this information also should be useful to policy makers and advocates for strengthened TB control.
- Essential Components of TB Control in the United States. This section gives detailed recommendations for enhancing the core elements of TB control: case detection and management, contact investigations, and targeted testing and treatment of LTBI. Recommendations are provided for targeted public education to neutralize the stigma of TB and facilitate earlier care-seeking behavior among patients and for education of health-care professionals from whom patients with TB seek care. A set of five clinical scenarios is presented in which a diagnosis of TB should be undertaken in primary medical practice, and guidelines are presented for activities among certain populations to detect TB among persons who have not sought medical care. Guidelines are provided for conducting a systematic, step-by-step contact investigation.
All jurisdictional TB-control programs are urged to develop written policies and procedures on the basis of these guidelines. Recommended procedures are also outlined for conducting surveillance for TB outbreaks and for developing an outbreak response plan. In addition, a framework is presented for identifying and prioritizing subpopulations and settings within a community that are at high risk for TB and that should receive targeted testing and treatment for LTBI. Priorities for high-risk populations should be established on the basis of the expected impact and efficacy of the intervention. Persons who are readily accessible and have preexisting access to health-care services (e.g., prisoners, patients receiving ongoing clinic-based care for HIV infection, and immigrants and refugees with abnormalities on preimmigration chest radiographs) should receive the highest priority. An approach is also presented to reach members of new immigrant and refugee communities, who often exist on the margin of U.S. society.
- Control of TB Among Populations at High Risk. On the basis of the epidemiology of TB in the United States, this section provides specific recommendations for controlling and preventing TB among five populations: 1) children; 2) foreign-born persons; 3) HIV-infected persons; 4) homeless persons; and 5) detainees and prisoners in correctional facilities. Each population is readily identifiable and has been demonstrated to be at risk for TB exposure or progression from exposure to disease, or both. Surveillance and surveys from throughout the United States indicate that certain epidemiologic patterns of TB are consistently observed among these populations, suggesting that the recommended control measures are generalizable.
- Control of TB in Health-Care Facilities and Other High-Risk Environments. This section recommends infection-control measures to prevent the transmission of M. tuberculosis in high-risk settings. The approach to control of TB that was developed for health-care facilities continues to be the most successful model and is discussed in detail. The recommendations in this section have been updated with respect to the assessment of institutional risk for TB. Three levels of risk (low, medium, and potential ongoing transmission) are outlined on the basis of community and institutional experience with TB. An associated recommendation is that the frequency of testing of employees for LTBI should be based on the institution's risk category. Recommendations also are provided for control of transmission of M. tuberculosis in correctional facilities, homeless shelters, and other newly identified high-risk environments.
- Research Needs To Enhance TB Control. This section defines gaps in knowledge and deficiencies in technology that limit current efforts to control and prevent TB. Additional research is needed in these areas to produce the evidence base and the tools for optimal diagnosis, treatment, and prevention of TB. This section should be useful to persons who formulate U.S. public health policy and research priorities and members of academic professions interested in contributing to enhanced TB control, both in the United States and throughout the world.
- Graded Recommendations for Control and Prevention of TB. This section groups detailed graded recommendations for each area discussed in this report.
# Scientific Basis of TB Control
# Transmission of TB
M. tuberculosis is nearly always transmitted through an airborne route, with the infecting organisms being carried in droplets of secretions (droplet nuclei) that are expelled into the surrounding air when a person with pulmonary TB coughs, talks, sings, or sneezes. Person-to-person transmission of M. tuberculosis is determined by certain characteristics of the source-case and of the person exposed to the source-person and by the environment in which the exposure takes place (Box 2). The virulence of the infecting strain of M. tuberculosis might also be a determining factor for transmission.
# Characteristics of the Source-Case
By the time persons with pulmonary TB come to medical attention, 30%-40% of persons identified as their close personal contacts have evidence of LTBI (30). The highest rate of infection among contacts follows intense exposure to patients whose sputum smears are positive for acid-fast bacilli (AFB) (31,57-59) (Figure 2). Because patients with cavitary pulmonary TB are more likely than those without pulmonary cavities to be sputum AFB smear-positive (60), patients with cavitary pulmonary disease have greater potential to transmit TB. Such persons also have a greater frequency of cough, so the triad of cavitary pulmonary disease, sputum AFB smear-positivity, and frequency of cough are likely related causal factors for infectivity. AFB smear-negative TB patients also transmit TB, but with lower potential than smear-positive patients. Patients with sputum AFB smear-negative pulmonary TB account for approximately 17% of TB transmission (61).
# Characteristics of the Exposed Person
A study of elderly nursing home residents indicated that persons with initially positive tuberculin skin test results during periods of endemic exposure to TB had a much lower risk for TB than those whose skin test results were initially negative (62,63). This finding suggests that preexisting LTBI confers protection against becoming infected upon subsequent exposure and progression to active disease. Similarly, having prior disease caused by M. tuberculosis had been assumed to confer protection against reinfection with a new strain of M. tuberculosis. However, molecular typing of paired isolates of M. tuberculosis from patients with recurrent episodes of TB disease has demonstrated that reinfection does occur among immunocompetent and immunocompromised persons (64,65). The classic means of protecting persons exposed to infectious diseases is vaccination. Because of its proven efficacy in protecting infants and young children from meningeal and miliary TB (66), vaccination against TB with Mycobacterium bovis bacillus Calmette-Guérin (BCG) is used worldwide (although not in the United States). This protective effect against the disseminated forms of TB in infants and children is likely based on the ability of BCG to prevent progression of the primary infection when administered at that stage of life (67). Epidemiologic evidence suggests that BCG immunization does not protect against the development of infection with M. tuberculosis upon exposure (68), and use of BCG has not had an impact on the global epidemiology of TB. One recent retrospective study found that BCG protective efficacy can persist for 50-60 years, indicating that a single dose might have a long duration of effect (69). A meta-analysis indicated that overall BCG reduced the risk for TB 50% (66); however, another meta-analysis that examined protection over time demonstrated a decrease in efficacy of 5%-14% in seven randomized controlled trials and an increase of 18% in three others (70). An effective vaccine against M. tuberculosis is needed for global TB control to be achieved.
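For orientation, the efficacy figures cited from these meta-analyses are conventionally computed as one minus the relative risk of disease among vaccinated versus unvaccinated persons. The counts in the sketch below are invented solely to illustrate the formula; none of the numbers come from the cited studies.

```python
# Vaccine efficacy = 1 - relative risk; the counts below are invented.

def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_controls, n_controls):
    risk_vaccinated = cases_vaccinated / n_vaccinated
    risk_controls = cases_controls / n_controls
    return 1 - (risk_vaccinated / risk_controls)

# Hypothetical trial: 50 TB cases among 10,000 vaccinees versus
# 100 cases among 10,000 unvaccinated controls.
print(f"Efficacy: {vaccine_efficacy(50, 10_000, 100, 10_000):.0%}")  # 50%
```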
Because only 30%-40% of persons with close exposure to a patient with pulmonary TB become infected (30,31), innate immunity might protect certain persons from infection (71). The innate mechanisms that protect against the development of infection are largely uncharacterized (71). Although immunocompromised persons (e.g., those with HIV infection) are at increased risk for progression to TB disease after infection with M. tuberculosis, no definitive evidence exists that immunocompromised persons, including those with HIV infection, have increased susceptibility to infection upon exposure.
Observational studies suggest that population-based variability in susceptibility to TB might be related to the length of time a population has lived in the presence of M. tuberculosis and has thus developed resistance to infection through natural selection (72)(73)(74). However, the genetic basis for susceptibility or resistance to TB is not well understood (72,75).
# Characteristics of the Exposure
Studies that have stratified contacts of persons with pulmonary TB according to time spent with the infected person indicate that the risk for becoming infected with M. tuberculosis is in part determined by the frequency and duration of exposure (60). In a given environment shared by a patient with pulmonary TB and a contact, the risk for transmitting the infection varies with the density of infectious droplet nuclei in the air and how long the air is inhaled. Indoors, tubercle bacilli are expelled into a finite volume of air, and, unless effective ventilation exists, droplet nuclei containing M. tuberculosis might remain suspended in ambient air (76). Exposures in confined air systems with little or no ventilation pose a major risk for transmission of TB; this has been demonstrated in homes, ships, trains, office buildings, and health-care institutions (77)(78)(79)(80). When contact occurs outdoors, TB bacilli expelled from the respiratory tract of an infectious person are rapidly dispersed and are quickly rendered nonviable by sunlight (77). The risk for transmission during such encounters is very limited.
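This statement describes the exposure-risk relationship qualitatively. In the airborne-infection literature the same relationship is often formalized with the Wells-Riley equation, P = 1 - exp(-Iqpt/Q); that model is not part of this statement, and all parameter values below are hypothetical, but the sketch illustrates how risk scales with exposure time and ventilation.

```python
# Wells-Riley model of airborne infection risk (illustration only; this
# equation and the parameter values below are not from this statement).
import math

def infection_probability(infectors, quanta_per_hr, breathing_m3_per_hr,
                          hours, ventilation_m3_per_hr):
    """P = 1 - exp(-I*q*p*t/Q): risk rises with exposure time and source
    strength, and falls with room ventilation."""
    exposure = (infectors * quanta_per_hr * breathing_m3_per_hr * hours
                / ventilation_m3_per_hr)
    return 1 - math.exp(-exposure)

# One infectious person, a hypothetical quanta rate of 10/hr, a resting
# breathing rate of ~0.5 m3/hr, and an 8-hour shared exposure.
poor_vent = infection_probability(1, 10, 0.5, 8, 50)    # ~0.55
good_vent = infection_probability(1, 10, 0.5, 8, 500)   # ~0.08
print(f"poorly ventilated room: {poor_vent:.0%}; well ventilated: {good_vent:.0%}")
```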
Considerable attention has been given to transmission of M. tuberculosis during air travel. Investigations have demonstrated that the risk for transmission from an infectious person to others on an airplane is greater on long flights (>8 hours) and that the risk for contracting M. tuberculosis infection is highest for passengers and flight crew members sitting or working near an infectious person (81,82). However, the overall public health importance of such events is negligible (77,81).
# Virulence of the Infecting Strain of M. tuberculosis
Although much is known about factors that contribute to the risk for transmission of M. tuberculosis from person to person, the role of the organism itself is only beginning to be understood (83). Genetic variability is believed to affect the capability of M. tuberculosis strains to be transmitted or to cause disease once transmitted, or both. The M. tuberculosis W-strain family, a member of the globally spread Beijing family (84), is a group of clonally related multidrug-resistant organisms of M. tuberculosis that caused nosocomial outbreaks involving HIV-infected persons in New York City (NYC) during 1991-1994 (85,86). W-family organisms, which have also been associated with TB outbreaks worldwide, are believed to have evolved from a single strain of M. tuberculosis that developed resistance-conferring mutations in multiple genes. The growth of W-family organisms in human macrophages is four- to eightfold higher than that of strains that cause few or no secondary cases of TB; this enhanced ability to replicate in human macrophages might contribute to the organism's potential for enhanced transmission (87).
Whether M. tuberculosis loses pathogenicity as it acquires resistance to drugs is not known. Isoniazid-resistant M. tuberculosis strains are less virulent than drug-susceptible isolates in guinea pigs (88), and genotyping studies from San Francisco, California, and from the Netherlands indicated that isoniazid-resistant strains are much less likely to be associated with clusters of TB cases than drug-susceptible strains (89,90). Nevertheless, because person-to-person spread has been demonstrated repeatedly, persons with TB with drug-resistant isolates should receive the same public health attention at the programmatic level as those with drug-susceptible isolates (91,92).
# Effect of Chemotherapy on Infectiousness
Patients with drug-susceptible pulmonary and other forms of infectious TB rapidly become noninfectious after institution of effective multiple-drug chemotherapy. This principle has been established by studies demonstrating that household contacts of persons with infectious pulmonary TB who were treated at home after a brief period of hospitalization for institution of therapy developed LTBI at a frequency no greater than that of persons with pulmonary TB who were hospitalized for 1 year (93) or until sputum cultures became negative (94). This potent effect of chemotherapy on infectiousness is likely attributable, at least in part, to the rapid elimination of viable M. tuberculosis from sputum (95) and to reduction in cough frequency (96). The ability of chemotherapy to eliminate infectivity is one reason why detecting infectious cases and promptly instituting multiple-drug therapy is the primary means of interrupting the spread of TB in the United States.
The effect of chemotherapy to eliminate infectiousness was once thought to occur rapidly, and patients on chemotherapy were thought not to be infectious (97,98). However, no ideal test exists to assess the infective potential of a TB patient on treatment, and infectivity is unlikely to disappear immediately after multidrug therapy is started. Quantitative bacteriologic studies indicate that the concentration of viable M. tuberculosis in sputum of persons with cavitary sputum AFB smear-positive pulmonary TB at the time of diagnosis, which averaged 10⁶-10⁷ organisms/ml, decreased >90% (10-fold) during the first 2 days of treatment, an effect attributable primarily to administration of isoniazid (99), and >99% (100-fold) by day 14-21, an effect attributable primarily to administration of rifampin and pyrazinamide (100). Thus, if no factor other than the elimination of viable M. tuberculosis from sputum were to account for the loss of infectivity during treatment, the majority of patients (at least those with infection attributable to isolates susceptible to isoniazid) who have received treatment for as few as 2 days with the standard regimen (i.e., isoniazid, rifampin, ethambutol, and pyrazinamide) could be assumed to have an infective potential that averages 10% of that at the time of diagnosis. After 14-21 days of treatment, infectiousness averages <1% of the pretreatment level.
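The percentages above are straightforward log-scale arithmetic on the bacillary load. A minimal sketch, taking the upper bound of the stated 10⁶-10⁷ organisms/ml range as the starting concentration:

```python
# Sketch of the bacillary-load arithmetic described above.
# Starting load taken from the stated 10^6-10^7 organisms/ml range.

initial_load = 1e7                  # organisms/ml at diagnosis (upper bound)

# >90% (10-fold) reduction in the first 2 days of treatment;
# >99% (100-fold) reduction by day 14-21.
load_day2 = initial_load / 10       # 1e6 organisms/ml
load_day14_21 = initial_load / 100  # 1e5 organisms/ml

print(f"day 2:     {load_day2:.0e} organisms/ml (~10% of pretreatment)")
print(f"day 14-21: {load_day14_21:.0e} organisms/ml (<1% of pretreatment)")
```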
This statement presents general guidelines on elimination of infectivity with treatment (Box 3). However, decisions about infectiousness of a person on treatment for TB should always be individualized on the basis of 1) the extent of illness; 2) the presence of cavitary pulmonary disease; 3) the degree of positivity of sputum AFB smear results; 4) the frequency and strength of cough; 5) the likelihood of infection with multidrug-resistant organisms; and 6) the nature and circumstances of the contact between the infected person and exposed contacts (101). Patients who remain in hospitals or reside either temporarily or permanently in congregate settings (e.g., shelters and correctional facilities) are subject to different criteria for infectiousness. In such congregate settings, identification and protection of close contacts is not possible during the early phase of treatment, and more stringent criteria for determining absence of infectivity (i.e., three consecutive AFB-negative sputum smears) should be followed (10). All patients with suspected or proven multidrug-resistant TB should be subjected to these more stringent criteria for absence of infectivity (10).
Note to Box 3: These criteria for absence of infectivity with treatment should be considered general guidelines. Decisions about infectivity of a person on treatment for TB should depend on the extent of illness and the specific nature and circumstances of the contact between the patient and exposed persons.
# Progression from LTBI to TB Disease
Although the human immune response is highly effective in controlling primary infection resulting from exposure to M. tuberculosis among the majority of immunocompetent persons, all viable organisms might not be eliminated. M. tuberculosis is thus able to establish latency, a period during which the infected person is asymptomatic but harbors M. tuberculosis organisms that might cause disease later (4,71). The mechanisms involved in latency and persistence are not completely understood (71,72).
For the majority of persons, the only evidence of LTBI is an immune response against mycobacterial antigens, which is demonstrated by a positive test result, either a tuberculin skin test (3) or, in certain circumstances, a whole blood antigen-stimulated interferon-γ release assay result (e.g., the QuantiFERON®-TB Gold test [QFT-G]). The tuberculin skin test measures delayed-type hypersensitivity; QFT-G, an ex vivo test for detecting latent M. tuberculosis infection, measures a component of cell-mediated immune response (102). QFT-G is approved by the Food and Drug Administration (FDA), and CDC will publish guidelines on its use. CDC had previously published guidelines for use of QuantiFERON®-TB, an earlier version of the test that is no longer available (103). T SPOT-TB®, an enzyme-linked immunospot assay for IFN-γ, is marketed in Europe along with QFT-G but is not FDA-approved for use in the United States. Although approved by FDA, the Tine Test® is not recommended for the diagnosis of M. tuberculosis infection. Tests available in other countries to diagnose M. tuberculosis infection (e.g., T SPOT-TB and Heaf test) are not recommended for clinical use in the United States.
Once a person has contracted LTBI, the risk for progression to TB disease varies. The greatest risk for progression to disease occurs within the first 2 years after infection, when approximately half of the 5%-10% lifetime risk occurs (4,104). Multiple clinical conditions also are associated with increased risk for progression from LTBI to TB disease. HIV infection is the strongest known risk factor (4). Other key risk factors because of their prevalence in the U.S. population are diabetes mellitus (105), acquisition of LTBI in infancy or early childhood, and apical fibro-nodular changes on chest radiograph (106).
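To make the stated risks concrete, the sketch below applies the 5%-10% lifetime risk, with approximately half occurring in the first 2 years after infection, to a hypothetical cohort of 1,000 newly infected persons (the cohort size is an assumption chosen only for illustration).

```python
# Applies the risk figures stated above to a hypothetical cohort.

cohort = 1_000                  # newly infected persons (hypothetical)
lifetime_risks = (0.05, 0.10)   # stated 5%-10% lifetime risk of TB disease

for risk in lifetime_risks:
    lifetime_cases = cohort * risk
    first_two_years = lifetime_cases / 2  # ~half the risk in the first 2 years
    print(f"lifetime risk {risk:.0%}: ~{lifetime_cases:.0f} cases, "
          f"~{first_two_years:.0f} within 2 years of infection")
```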
A recent addition to the known risk factors for progression from LTBI to TB disease is the use of therapeutic agents that antagonize the effect of the cytokine tumor necrosis factor-alpha (TNF-α) and that have proven highly effective in treating autoimmune-related conditions (e.g., Crohn's disease and rheumatoid arthritis) (107). Cases of TB have been reported among patients receiving all three licensed TNF-α antagonists (i.e., infliximab, etanercept, and adalimumab) (108). CDC has published interim guidelines for preventing TB when these agents are used (109).
# Epidemiology of TB in the United States
Surveillance (i.e., the systematic collection, analysis, and dissemination of data) is a critical component of successful TB control, providing essential information needed to 1) determine patterns and trends of the disease; 2) identify populations and settings at high risk; and 3) establish priorities for control and prevention activities. Surveillance is also essential for quality-assurance purposes, program evaluation, and measurement of progress toward TB elimination. In addition to providing the epidemiologic profile of TB in a given jurisdiction, state and local surveillance are essential to national TB surveillance.
CDC's national TB surveillance system publishes epidemiologic analyses of reported TB cases in the United States (110). Data for the national TB surveillance system are reported by state health departments in accordance with standard TB case-definition and case-report formats (110,111). The system tracked the reversal of the declining trend in TB incidence in the United States in the mid-1980s, the peak of the resurgence in 1992 (with a 20% increase in cases reported during 1985-1992), and the subsequent 44% decline to an all-time low number (14,871) and rate (5.1 cases/100,000 population) of TB cases in 2003 (14,15) (Figure 1).
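The relation between the case counts and rates cited above can be illustrated with a minimal arithmetic sketch (the functions below are hypothetical helpers, not part of the surveillance system):

```python
def incidence_rate(cases: int, population: int) -> float:
    """Express TB incidence as cases per 100,000 population."""
    return cases / population * 100_000

# Figures cited above for 2003.
cases_2003 = 14_871
rate_2003 = 5.1  # cases per 100,000 population

# The implied denominator is consistent with the 2003 U.S. population (~292 million).
implied_population = cases_2003 / rate_2003 * 100_000

def percent_change(old: float, new: float) -> float:
    """Percentage change of the kind used to describe trends (e.g., the 44% decline)."""
    return (new - old) / old * 100
```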
# Geographic Distribution of TB
Wide disparities exist in the geographic distribution of TB cases in the United States. In 2003, six states (California, Florida, Georgia, Illinois, New York, and Texas) each reported >500 cases and together accounted for 57% of the national total (14). These states, along with New Jersey, accounted for approximately 75% of the overall decrease in cases since 1992. The highest rates and numbers of TB cases are reported from urban areas; >75% of cases reported in 2003 were from areas with >500,000 population (14). In 2003, a total of 24 states (48%) had incidence of <3.5 cases of TB/100,000 population, the rate established as the year 2000 interim target for the United States in the 1989 strategic plan for eliminating TB (11).
# Demographic Distribution of TB
In 2003, adults aged 15-64 years accounted for 73.6% of reported TB cases. Incidence of TB was highest (8.4 cases/100,000 population) among adults aged ≥65 years, who accounted for 20.2% of cases. Children aged <15 years accounted for 6.2% of reported cases and had the lowest incidence of TB. Of reported cases, 61.3% occurred among men, and case rates among men were at least double those among women in the middle and older adult age groups. In 2003, the white, non-Hispanic population accounted for only 19% of reported cases of TB, and TB incidence among the four other racial/ethnic populations for which data were available was 5.7-21.0 times that of non-Hispanic whites (Table 2). Foreign-born persons accounted for 94% of TB cases among Asians and 74% of cases among Hispanics, whereas 74% of cases among non-Hispanic blacks occurred among persons born in the United States (15).
# Distribution of TB by Socioeconomic and Employment Status
# Socioeconomic status (SES).
Low SES is associated with an increased risk for TB. An analysis of national surveillance data that assigned socioeconomic indicator values on the basis of residence zip code indicated that the risk for TB increased with lower SES for six indicators (crowding, education, income, poverty, public assistance, and unemployment), with crowding having the greatest impact (112). Risk for TB increased uniformly across socioeconomic quartiles for each indicator, similar to socioeconomic health gradients for other chronic diseases, except for crowding, for which risk was concentrated in the lowest quartile. Adjusting for SES accounted for approximately half of the increased risk for TB associated with race/ethnicity among U.S.-born blacks, Hispanics, and American Indians (112).
Occupation. Increased incidence of TB among persons in certain occupations is attributable to exposure in the work environment and to an increased likelihood that workers will have other risk factors unrelated to occupation, such as foreign birth. A 29-state study of patients with clinically active TB reported during 1984-1985 indicated that increased incidence was largely independent of occupation itself; the study did, however, demonstrate an association between general SES groupings of occupations and risk for TB (113). Chronically unemployed persons had a high incidence of TB, a finding consistent with surveillance data indicating that >50% of TB patients were unemployed during the 2 years before diagnosis (14).
TB among health-care workers (HCWs). Because transmission of M. tuberculosis in health-care institutions was a contributing factor to the resurgence of TB during 1985-1992, recommendations were developed to prevent transmission in these settings (10). In 2003, persons reported to have been HCWs in the 2 years before receiving their diagnoses accounted for 3.1% of reported TB cases nationwide (14). However, the elevated risk among HCWs might be attributable to other factors (e.g., birth in a country with a high incidence of TB) (114). A multistate occupational survey indicated that the majority of HCWs did not have a higher risk for TB than the general population; respiratory therapists, however, did appear to be at greater risk (113).
# Identification of Populations at High Risk for TB
# Contacts of infectious persons.
A high prevalence of TB disease and LTBI has been documented among close contacts of persons with infectious pulmonary TB (31). A study of approximately 1,000 persons with AFB sputum smear-positive pulmonary TB at urban sites indicated that more than one third of their contacts had positive tuberculin skin test results and that 2% of all close contacts had active TB. Contacts identified with TB disease were more likely to be household members or children aged <6 years (31).
Foreign-born persons. The proportion of TB cases in the United States occurring among foreign-born persons increased progressively during the 1990s; in 2003, persons born outside the United States accounted for 53% of reported cases (14) (Figure 3). Although foreign-born persons who received a diagnosis of TB in 2002 were born in >150 countries worldwide, as in each of the 6 previous years, five countries of origin accounted for the greatest number of foreign-born persons with TB: China (5%), India (8%), Mexico (26%), the Philippines (12%), and Vietnam (8%). During 1992-2003, the number of states in which >50% of the total reported cases occurred among foreign-born persons increased from four (8%) in 1992 to 24 (48%) in 2003 (15). Among states and cities, however, this profile can change rapidly, reflecting changes in patterns of immigration and refugee settlement (21).
Surveillance data indicate that incidence of TB among foreign-born persons is approximately 23 cases/100,000 population (14). Incidence varied by country of origin, appearing to reflect the incidence of TB in the country of birth (21,115,116). In 2003, approximately 47% of foreign-born persons with TB received their diagnoses within 5 years of their arrival in the United States, and 19% received their diagnoses within 1 year of arrival. Among foreign-born persons, TB case rates decreased with longer duration of residence in the United States; rates among persons residing in the United States for <5 years were nearly four times higher than among those with longer residence (115,116).
HIV-infected persons. Because reporting of HIV infection among persons with TB is not complete, the exact prevalence of HIV infection among such persons is unknown. During 1993-2001, the prevalence of reported HIV infection occurring among persons also reported with TB decreased from 15% to 8% (14); this decrease has been attributed, in part, to reduced transmission of TB among HIV-infected persons (16). According to a recent worldwide epidemiologic assessment, however, 26% of adult TB cases in the United States are attributable to HIV infection (44).
Homeless persons. In 2003, persons known to have been homeless in the year before receiving a diagnosis accounted for 6.3% of cases of TB nationwide. On the basis of available population estimates (117), incidence of TB among homeless persons is approximately 30-40/100,000 population, more than five times the national case rate. However, a prospective study of a cohort of approximately 3,000 homeless persons in San Francisco documented an annual incidence of >250 cases/100,000 population (118). In addition, outbreaks of TB linked to overnight shelters continue to occur among homeless persons and likely contribute to the increased incidence of TB among that population (119,120).
Other populations at high risk. In 2003, persons known to have injected drugs in the year before receiving a diagnosis accounted for 2.2% of reported cases of TB, and noninjection drug use was reported by 7.3% of persons with TB. In certain U.S. communities, injection drug use is sufficiently prevalent to constitute a risk factor of epidemiologic importance rather than simply an individual risk factor, especially when injection drug use and HIV infection overlap (121,122).
# TB Among Detainees and Prisoners in Correctional Facilities
The proportion of cases of TB occurring among inmates of prisons and jails has remained stable at approximately 3%-4% since data began to be collected in 1993; it was 3.2% in 2003 (14). Inmates also have high incidence of TB, with rates often >200/100,000 population (123), and they have a disproportionately greater number of risk factors for TB (e.g., low SES, HIV infection, and substance abuse) compared with the general population (124,125). TB transmission in correctional facilities contributes to the greater risk among those populations, presumably because of the difficulties in detecting cases of infectious TB and in identifying, evaluating, and treating contacts in these settings (37,126).
TB outbreaks occur in both prison and jail settings. Dedicated housing units for prison inmates with HIV infection were sites of transmission in California in 1995 (126) and in South Carolina in 1999 (37). In the South Carolina outbreak, delayed diagnosis and isolation of an inmate who apparently had active TB after entering the facility led to >15 outbreak cases. Transmission leading to TB infection in the community also was documented in an outbreak in a jail in Tennessee during 1995-1997 that involved approximately 40 inmates (127,128); contact investigations were incomplete because of brief jail terms and frequent movement of inmates. During the same period, 43% of patients with TB in the surrounding community had previously been incarcerated in that jail (127), and, after 2 years, the jail outbreak strain was more prevalent in the community than it had been during the jail outbreak. Genotyping studies indicated that the outbreak strain accounted for approximately 25% of TB cases in the community, including cases among patients with no history of incarceration (128).
# Genotyping of M. tuberculosis

M. tuberculosis genotyping has been based on polymorphisms in the number and genomic location of mycobacterial repetitive elements. The most widely used genotyping test for M. tuberculosis is restriction fragment length polymorphism (RFLP) analysis of the distribution of the insertion sequence IS6110 (129). However, genotyping tests based on polymorphisms in three additional mycobacterial repetitive elements (i.e., polymorphic guanine-cytosine-rich repetitive sequences, direct repeats, and mycobacterial interspersed repetitive units) have also been developed (83). M. tuberculosis isolates with identical DNA patterns in an established genotyping test often have been linked through recent transmission among the persons from whom they were isolated.
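The basic logic of linking cases through matching genotypes can be sketched as follows (a minimal illustration; the isolate identifiers and single-letter patterns are invented, and real IS6110-based RFLP patterns are banding patterns compared with specialized software):

```python
from collections import defaultdict

# Hypothetical isolate records: (isolate_id, genotype_pattern).
isolates = [
    ("TB-001", "A"), ("TB-002", "B"), ("TB-003", "A"),
    ("TB-004", "C"), ("TB-005", "A"), ("TB-006", "B"),
]

# Group isolates that share an identical pattern.
by_pattern = defaultdict(list)
for isolate_id, pattern in isolates:
    by_pattern[pattern].append(isolate_id)

# By convention, a "cluster" is two or more isolates with matching patterns;
# clustered cases are candidates for epidemiologic links through recent transmission.
clusters = {p: ids for p, ids in by_pattern.items() if len(ids) >= 2}
print(clusters)  # {'A': ['TB-001', 'TB-003', 'TB-005'], 'B': ['TB-002', 'TB-006']}
```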
When coupled with traditional epidemiologic investigations, analyses of the genotype of M. tuberculosis strains have confirmed suspected transmission and identified unsuspected transmission of M. tuberculosis. These analyses have also identified risk factors for recent infection with rapid progression to disease, demonstrated exogenous reinfection with different strains, identified weaknesses in conventional contact investigations, and documented the existence of laboratory cross-contamination. Genotyping has become an increasingly useful tool for studying the pathogenesis, epidemiology, and transmission of TB.
# Epidemiology of TB Among Contacts in Outbreak Settings
Conventional contact investigations have used the concentric circles approach to collect information and screen household contacts, coworkers, and increasingly distant contacts for TB infection and disease (17). The concentric circles model has been described previously (130). However, this method might not always be adequate in out-of-household settings. In community-based studies from San Francisco (131), Zurich (132), and Amsterdam (133), only 5%-10% of persons with clustered IS6110-based genotyping patterns were identified as contacts by the source-person in the cluster. This finding indicates that either 1) transmission of M. tuberculosis occurs more commonly than suspected and is not easily detected by conventional contact investigations or 2) genotype clustering does not necessarily represent recent transmission (55). Because genotyping studies detect only missed or mismanaged contacts (i.e., those who subsequently receive a diagnosis of TB), such studies cannot measure the successes of the process or the number of cases that were prevented.
Certain populations (e.g., the urban homeless) present specific challenges to conducting conventional contact investigations. Genotyping studies have provided information about chains of transmission in these populations (118,119). In a prospective study of TB transmission in Los Angeles, the degree of homelessness and use of daytime services at three shelters were factors that were independently associated with genotype clustering (119). Additional studies support the idea that specific locations can be associated with recent or ongoing transmission of M. tuberculosis among homeless persons. Two studies among predominantly HIV-infected men have demonstrated evidence of transmission at specific bars in the community (134,135).
Genotyping techniques have confirmed TB transmission in HIV residential facilities (136), crack houses (i.e., settings in which crack cocaine is sold or used) (137), hospitals and clinics (54), and prisons (138,139). TB transmission also has been demonstrated among church choirs (140) and renal transplant patients (141) and in association with processing of contaminated medical waste (142) and with bronchoscopy (143,144).
# Communitywide Epidemiology of TB
TB might arise because of rapid progression from a recently acquired M. tuberculosis infection, from progression of LTBI to TB disease, or occasionally from exogenous reinfection (145). The majority of genotyping studies have assumed that clustered isolates in a population-based survey reflect recent transmission of M. tuberculosis. Certain studies have identified epidemiologic links between clustered TB cases, inferring that the clustered cases are part of a chain of transmission from a single common source or from multiple common sources (131,146).
The number and proportion of population-based cases of TB that occur in clusters representing recent or ongoing transmission of M. tuberculosis have varied from study to study; the frequency of clustering has ranged from 17%-18% (in Vancouver, Canada) to 30%-40% (in U.S. urban areas) (131,147,148). Young age, membership in a racial or ethnic minority population, homelessness, substance abuse, and HIV infection have been associated with clustering (131,133,148,149).
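Studies of this kind commonly estimate the fraction of cases attributable to recent transmission with the "n - 1" method, which assumes that each cluster contains one source-case. A minimal sketch follows (the cluster sizes are invented for illustration):

```python
def recent_transmission_fraction(cluster_sizes: list[int], total_cases: int) -> float:
    """'n - 1' estimate: each cluster of size n contributes n - 1 secondary cases."""
    secondary = sum(n - 1 for n in cluster_sizes if n >= 2)
    return secondary / total_cases

# Example: 100 genotyped cases that include clusters of sizes 5, 3, and 2.
print(recent_transmission_fraction([5, 3, 2], total_cases=100))  # 0.07, i.e., 7%
```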
The increasing incidence of TB among foreign-born persons underscores the need to understand transmission dynamics among this population. In San Francisco, two parallel TB epidemics have been described (150,151), one among foreign-born persons that was characterized by a low rate of genotype clustering and the other among U.S.-born persons that was characterized by a high rate of genotype clustering. In a recent study from NYC, being born outside the United States, being aged >60 years, and receiving a diagnosis after 1993 were factors independently associated with being infected with a strain not matched with any other, whereas homelessness was associated with genotype clustering and recent transmission (152). Among foreign-born persons, clustered strains were more likely to be found among patients with HIV infection (152).
# Other Contributions of Genotyping
Genotyping can determine whether a patient with a recurrent episode of TB has relapsed with the original strain of M. tuberculosis or has developed exogenous reinfection with a new strain (64,153). In Cape Town, South Africa, where incidence of TB is high and considerable ongoing transmission exists, 16 (2.3%) of 698 patients had more than one episode of TB disease. In 12 (75%) of the 16 recurrent cases, the pairs of M. tuberculosis isolates had different IS6110-based genotyping patterns, indicating exogenous reinfection (154). However, in areas with a low incidence of TB, episodes of exogenous reinfection are uncommon (153). Because TB incidence in the majority of areas of the United States is low and decreasing, reinfection is unlikely to be a major cause of TB recurrence.
Genotyping has greatly facilitated the identification of false-positive cultures for M. tuberculosis resulting from laboratory cross-contamination of specimens. Previously, false-positive cultures (which might lead to unnecessary treatment for patients, unnecessary work for public health programs in investigating cases and pseudo-outbreaks, and unnecessary costs to the health-care system) were difficult to substantiate (155). Because of its capability to determine clonality among M. tuberculosis strains, genotyping has been applied extensively to verify suspected false-positive cultures (156-158) and to study the causes and prevalence of laboratory cross-contamination (159,160).
# The Role of Genotyping of M. tuberculosis in TB-Control Programs
In 2004, CDC established the Tuberculosis Genotyping Program (TBGP) to enable rapid genotyping of isolates from every patient in the United States with culture-positive TB (161). State TB programs may submit one M. tuberculosis isolate from each culture-positive case within their jurisdictions to a contracted genotyping laboratory. A detailed manual describing this program, including information on how to interpret genotyping test results and how to integrate genotyping into TB-control activities, has been published (162).
Genotyping information is essential to optimal TB control in two settings. First, genotyping is integral to the detection and control of TB outbreaks, including ruling a suspected outbreak in or out and pinpointing involved cases and the site or sites of transmission (54,136-144). Second, genotyping is essential to detect errors in handling and processing of M. tuberculosis isolates (including laboratory cross-contamination) that lead to reports of false-positive cultures for M. tuberculosis (156,158-160,163).
More extensive use of M. tuberculosis genotyping for TB control depends on the availability of sufficient program resources to compare results with information from traditional epidemiologic investigative techniques. Time-framed genotyping surveys and good fieldwork can unravel uncertainties in the epidemiology of TB in problematic populations at high risk (150-152,164). Genotyping surveys and epidemiologic investigations also can be used as a program monitoring tool to determine the adequacy of contact investigations (29,119,132-134,164-166) and to evaluate the success of control measures designed to interrupt transmission of M. tuberculosis among certain populations or settings (167).
Programs that use genotyping for surveillance of all of the jurisdiction's M. tuberculosis isolates should work closely on an ongoing basis with the genotyping laboratory and commit sufficient resources to compare genotyping results with those of traditional epidemiologic investigations. Information from both sources is needed for optimum interpretation of the complex epidemiologic patterns of TB in the United States (84,168).
# Principles and Practice of TB Control

# Basic Principles of TB Control
The goal of TB control in the United States is to reduce morbidity and mortality caused by TB by 1) preventing transmission of M. tuberculosis from persons with contagious forms of the disease to uninfected persons and 2) preventing progression from LTBI to TB disease among persons who have acquired M. tuberculosis infection. Four fundamental strategies are used to achieve this goal (Box 4) (17,169).

Outbreaks of TB are also being reported with greater frequency (33,34,172,173). Institutional infection-control measures have been highly successful in health-care facilities (56), but other high-risk settings (e.g., correctional facilities, homeless shelters, and bars) and social settings that extend beyond single venues present challenges to effective infection control (172). Vaccination with BCG is not recommended as a means to control TB in the United States because of the unproved efficacy of the vaccine in the U.S. population (174,175) and its effect of confounding the results of tuberculin skin testing.
# Deficiencies in TB Control
Because TB control is a complex undertaking that involves multiple participants and processes, mistakes often occur, with adverse consequences. Common errors include 1) delays among persons with active TB obtaining health care; 2) delayed detection and diagnosis of active TB; 3) failed or delayed reporting of TB; 4) failure to complete an effective course of treatment for TB; 5) missed opportunities to prevent TB among children; and 6) deficiencies in conducting contact investigations and in recognizing and responding to outbreaks.
# Delays in Obtaining Health Care
Homeless patients with TB symptoms often delay seeking care or experience delays in gaining access to care (181), and fear of immigration authorities has been associated with patient delay among foreign-born persons (19). Patients who speak languages other than English or who are aged 55-64 years are more likely than others to delay seeking care (20).
Cultural factors that might affect health-seeking behavior by foreign-born persons include misinterpretation or minimization of symptoms, self-care by using over-the-counter or folk medicines, and the social stigma associated with TB (18). In certain societies, women with TB are less likely to take advantage of health-care services, perhaps because of stigma associated with the diagnosis, including a lower likelihood of marriage (182,183). Even in areas with open access to public health clinical services, persons at risk for TB might not seek evaluation and treatment because they are not aware that these resources are available for persons with limited financial means (118,184-186).
# Delayed Detection and Diagnosis of Active TB
Delayed detection of a case of TB and resulting delays in initiation of treatment can occur if the clinician does not suspect the diagnosis. A survey conducted in NYC in 1994 found that the median delay within the health-care system (defined as the time from first contact to initiation of treatment for active TB) was 15 days (range: 0-430 days) (20). Asians and homeless persons were more likely to encounter delays in receiving a diagnosis than non-Asians and persons with stable housing. Persons without cough who had AFB smear-negative TB or who did not have a chest radiograph at their initial visit also experienced delays. In London, England, delays in diagnosis occurred among whites and among women of all racial/ethnic populations (187).
Regardless of the reason, the consequences of delays in diagnosis and initiation of effective therapy can be serious. In Maine, a shipyard worker aged 32 years who was a TB contact and who was untreated despite having symptoms of active TB, repeated medical visits, and a chest radiograph consistent with active TB did not receive a diagnosis of TB until 8 months after he became ill (188), and 21 additional cases of TB occurred among his contacts. Of 9,898 persons investigated as contacts, 697 (7.0%) received diagnoses of new LTBI. A high school student in California was symptomatic for >1 year before TB was diagnosed (177). Subsequently, 12 additional TB cases among fellow students were linked to the source-case, and 292 (23%) of 1,263 students tested had positive tuberculin skin tests.
Other instances of delayed or missed diagnoses of TB have been reported that have resulted in extended periods of infectiousness and deaths (22,24,178). These problems reflect the increasing difficulty in maintaining clinical expertise in the recognition of TB in the face of declining disease incidence (41). Recognition of TB among patients with AFB-negative sputum smear results is a challenge for practitioners and has been associated with delays in reporting and treatment (22,189,190).
# Delayed Reporting of TB
Failure to promptly report a new TB case delays public health responses (e.g., institution of a treatment plan, case-management services, and protection of contacts). Although TB cases in the United States rarely remain unreported, timeliness of reporting varies (reported medians: 7-38 days) (190).
# Failure to Receive and Complete a Standard Course of Treatment for Active TB
Failure to receive and complete a standard course of treatment for TB has adverse consequences, including treatment failure, relapse, increased TB transmission, and the emergence of drug-resistant TB (191-193). At least two reasons exist for failure to complete standard treatment. Patients frequently fail to adhere to the lengthy course of treatment (188). Poor adherence to treatment regimens might result from difficulties with access to the health-care system, cultural factors, homelessness, substance abuse, lack of social support, rapid clearing of symptoms, or forgetfulness (18,194). Also, as TB has become less common, clinicians might fail to use current treatment regimens (48). These adverse outcomes are preventable by case-management strategies provided by TB-control programs, including use of DOT (13,195,196).
# Missed Opportunities To Prevent TB Among Children
The absence of TB infection and disease among children is a key indicator of a community's success in interrupting the transmission of TB (197). The 1985-1992 TB resurgence included a reversal of the long-term decline in the incidence of TB among children, which indicated a failure of the public health system to prevent disease transmission (197). A study of 165 children reported with TB in California in 1994 found that for 59 (37%), an adult source-case was identified (198). Factors that contributed to transmission to children included delayed reporting, delayed initiation of contact investigations, and poor management of adult source-cases. Improvements in contact investigations might have prevented 17 (10%) of those cases (198).
# Deficiencies in Conducting Contact Investigations and in Recognizing and Responding to Outbreaks
Deficiencies in contact investigations and failure to recognize and respond to TB outbreaks are among the most important challenges to optimal control of TB in the United States. These topics are discussed in detail in this statement along with the other essential components of TB control.
# Importance of TB Training and Education
The 1985-1992 TB resurgence led ACET to call for a renewed focus on training and education as an integral part of strategies for TB control, prevention, and elimination (1). Factors indicating a need for this focus include the following:
- Deficiencies in clinical knowledge and practice.
Errors have been documented on the part of medical practitioners and TB-control staff in the diagnosis, reporting, treatment, and follow-up of TB cases (3-5). However, the promulgation of guidelines alone does not necessarily improve provider practices (42,199). Guidelines are more effective when supplemented with targeted education (42,200,201). Education is essential to the future control of TB in the United States and globally (2), and creating interest in TB among students of the health professions is critical to generating the competent workforce needed to eliminate TB in the United States and to contribute human resources to fighting the global TB epidemic.
# Educating Patients and Communities at High Risk
Education of patients by clinicians, TB program staff, and trusted community members promotes acceptance and adherence to authoritative advice about controlling and preventing TB. Such education can influence patients' decision-making about whether to accept and complete treatment for LTBI (202).
Because cultural and health beliefs might act as barriers to effective control of TB (18,19), an increasing need exists for education targeted at populations at high risk (19). TB-control programs should enlist community-based organizations and other key informants to discover the health beliefs, norms, and values of communities at high risk in their jurisdictions (202,203). Professional associations and academic institutions (including schools of medicine, public health, and nursing) will be valuable partners in developing an understanding of the health perceptions of these populations. Education materials should be developed with input from the target audience to ensure that they are culturally and linguistically appropriate (203,204).
# The Strategic Plan for TB Training and Education
Professional societies and specialty boards are means for reaching private medical providers. Including TB as a subject in state medical society programs, hospital grand rounds, and medical specialty board examinations would be a valuable resource for providers serving populations at low risk. New linkages should be established to reach providers serving populations at high risk (e.g., foreign-born, homeless, and HIV-infected persons). For example, the AIDS Education and Training Centers funded by the Health Resources and Services Administration are a resource for reaching HIV/AIDS providers, and foreign physicians' associations and community-based organizations are potential partners for reaching international medical graduates and health-care providers of foreign-born persons.
# Laboratory Services for Optimal TB Control
The diagnosis of TB, management of patients with the disease, and public health control services rely on accurate laboratory tests. Laboratory services are an essential component of effective TB control, providing key information to clinicians (for patient care) and public health agencies (for control services).
Up to 80% of all initial TB-related laboratory work (e.g., smear and culture inoculation) is performed in hospitals, clinics, and independent laboratories outside the public health system, whereas >50% of species identification and drug susceptibility testing is performed in public health laboratories (205). Thus, effective TB control requires a network of public and private laboratories to optimize laboratory testing and the flow of information. Public health laboratorians, as a component of the public health sector with a mandate for TB control, should take a leadership role in developing laboratory networks and in facilitating communication among laboratorians, clinicians, and TB controllers.
# Role of Public Health Laboratories
Public health laboratories should ensure that clinicians and public health agencies within their jurisdictions have ready access to reliable laboratory tests for diagnosis and treatment of TB (206). Specific tasks to ensure the availability, accessibility, and quality of essential laboratory services are 1) assessment of the cost and availability of TB laboratory services and 2) development of strategic plans to implement and maintain a systems approach to TB testing (207). In this process, public health laboratories should assess and monitor the competence of laboratories that perform any testing related to the diagnosis, management, and control of TB within their jurisdictions; develop guidelines for reporting and tracking of laboratory results; and educate laboratory staff members, health-care providers, and public health officials about available laboratory tests, new technologies, and indications for their use. For example, public health laboratories should lead the discussion on the costs, logistics requirements (e.g., collection and transport of clinical specimens within the required time), and quality assurance issues associated with the use of QFT-G, the new test for latent M. tuberculosis infection (103). The process of coordinating TB laboratory services is usually best organized at the state level (208), and the Association of Public Health Laboratories has compiled descriptions of successful organizational models for integrated laboratory services (207).
# Role of Clinical Laboratories
Because the majority of initial TB laboratory work related to diagnosis of TB is conducted in hospitals, clinics, and independent laboratories (205), clinicians and public health agencies are increasingly dependent on the laboratory sector for the confirmation of reported cases, and public health laboratories are similarly dependent for referral of specimens for confirmatory testing and archiving. However, as a result of laboratory consolidation at the regional or national level (206), private laboratories are experiencing more difficulties in fulfilling this function. In certain instances, consolidation has resulted in poor communication among laboratory personnel, clinicians, and public health agencies (206,209). Problems also have been identified in specimen transport, test result reporting, and quality control (206,209,210). In response, certain states (e.g., Wisconsin) have adopted laws and regulations that mandate essential clinical laboratory services for TB control (e.g., drug susceptibility testing and reporting of the first M. tuberculosis isolate from each patient and submission of isolates to the state public health laboratory).
The clinical laboratory sector should accept the responsibilities that accompany its emergence as a provider of essential TB testing (209). This statement provides recommendations to guide turnaround times for essential tests, reporting to clinicians and jurisdictional public health agencies, and referral of specimens to public health laboratories or their designees.
# Essential Laboratory Tests
Six tests performed in clinical microbiologic laboratories are recommended for optimal TB control services (Table 3). These laboratory tests should be available to every clinician involved in TB diagnosis and management and to jurisdictional public health agencies charged with TB control. In addition, other tests that are useful in the diagnosis and management of selected patients and for specific TB control activities include M. tuberculosis genotyping, serum drug levels, tests used for monitoring for drug toxicity, and QFT-G for diagnosis of latent M. tuberculosis infection (5,103,162). Access to these specialized tests should be provided as needed.
For suspected cases of pulmonary TB, sputum smears for AFB provide a reliable indication of potential infectiousness, and for AFB smear-positive pulmonary cases, a nucleic acid amplification (NAA) assay provides rapid confirmation that the infecting mycobacteria are from the M. tuberculosis complex. These two tests, which should be available with rapid turnaround times from specimen collection, facilitate decisions about initiating treatment for TB or a non-TB pulmonary infection and, if TB is diagnosed, about reporting the case and assigning priority to the contact investigation.
Growth detection and identification of M. tuberculosis by culture of sputum and other affected tissue is essential for confirmation of the identity of the organism and for subsequent drug susceptibility testing, which is recommended for the initial isolate from each patient. Cultures also remain the cornerstone for the diagnosis of TB in smear-negative pulmonary and extrapulmonary cases and, along with sputum smears for AFB, provide the basis for monitoring a patient's response to treatment, for release from isolation, and for diagnosing treatment failure and relapse (5). Liquid media systems, which can provide results in less time than solid media (in certain cases, within 7 days), should be available in all laboratories that perform culture for mycobacteria. Detailed descriptions of these recommended laboratory tests, recommendations for their correct use, and methods for collecting, handling, and transporting specimens have been published (3,211).
# Recommended Roles and Responsibilities for TB Control
This section delineates organizational and operational responsibilities of the public health sector that are essential to achieve the goals of TB control in the United States. However, a central premise of this statement is that continuing progress toward elimination of TB in the United States will require the collaborative efforts of a broad range of persons, organizations, and institutions in addition to the public health sector, which has responsibility for the enterprise. For example, clinicians who provide primary health care and other specialized health services to patients at high risk for TB, academic medical centers that educate and train them, hospitals in which they practice, and professional organizations that serve their interests can all make meaningful contributions to improving the detection of TB cases, one of the most important challenges to continuing progress (Box 1). Similarly, important roles exist for such entities as community-based organizations representing populations at risk for TB and the pharmaceutical industry, which translates academic advances into tools for the diagnosis, treatment, and prevention of TB. This section discusses the importance to the TB elimination effort of participants outside the public health sector and proposes specific roles and responsibilities that each could fulfill toward that goal. The sponsoring organizations intend for these proposals to serve as the basis for discussion and consensus building on the important roles and responsibilities of the nonpublic health sector in continuing progress toward the elimination of TB in the United States.
# Public Health Sector
The infrastructure for TB control has been discussed extensively in recent years. An analysis of contributing factors to the rise in the number of TB cases during 1985-1992 concluded that the resurgence never would have occurred had the public health infrastructure been left in place and supported appropriately (212). The need to maintain the TB-control infrastructure has been expressed repeatedly (1,2,13,213,214).
Public health activities have been described as consisting of four interrelated components: mission/purpose, structural capacity, processes, and outcomes (215). Among these four components, structural capacity (i.e., persons who do the work of public health, their skills and capacities, the places where they work, the way they are organized, the equipment and systems available to them, and the fiscal resources they command) represents the public health infrastructure for TB control.
The responsibility for TB control and prevention in the United States rests with the public health system through federal, state, county, and local public health agencies. Programs conducted by these agencies were critical to the progress that has been made in TB control, and the deterioration of those programs following the loss of categoric federal funding contributed to the resurgence of TB in the United States during 1985-1992 (1,2,13,(212)(213)(214). Since 1992, as a result of increased funding for TB-control programs, national incidence of TB disease has declined. In 2004, $147 million in federal funds were dedicated to domestic TB control, compared with $6.6 million in 1989, during the resurgence. These funds have been used to rebuild public health-based TB-control systems, and the success achieved highlights the critical role of the public health system in TB control.
TB control in the United States has traditionally been conducted through categoric programs established to address the medical aspects of the disease and the specific interventions required for its successful prevention and management (17,216). CDC's Division of TB Elimination, in partnership with other CDC entities that conduct TB-related work, provides guidance and oversight to state and local jurisdictions by conducting nationwide surveillance; developing national policies, priorities, and guidelines; and providing funding, direct assistance, education, and program evaluation. Setting the national agenda for support of basic and clinical research is also a critical function of federal health agencies, including NIH and CDC, with support from nongovernment organizations such as ATS and IDSA.
To meet the priorities of basic TB control (Box 4), state and local public health agencies with responsibility for TB control should provide or ensure the provision of a core group of functions (Box 5). Jurisdictional public health agencies should ensure that competent services providing these core elements function adequately within their jurisdictions and are available with minimal barriers to all residents.
How the core components of TB control are organized differs among jurisdictions, depending on the local burden of disease, the overall approach to public health services within the jurisdiction, budgetary considerations, the availability of services within and outside the public health sector, and the relationships among potential participants. Certain jurisdictions provide core program components themselves, whereas other jurisdictions contract with others to provide them. In the majority of cases, the organization includes a mix in which the public health agency provides certain services, contracts for others, and works collaboratively with partners and stakeholders to accomplish the remainder (48). Sharing of direct services, including patient management, increases the importance of the public health sector, which retains responsibility for success of the process. This evolving role of the public health sector in TB control is consistent with the widely accepted concept of the three core functions of public health that IOM proposed in 1988: assessment, policy development, and assurance (43).
# Health Insurance Portability and Accountability Act
The Health Insurance Portability and Accountability Act (HIPAA) of 1996 included provisions to protect the privacy of individually identifiable health information. To implement these privacy protections, the U.S. Department of Health and Human Services has issued regulations governing how health-care providers may use and disclose personally identifiable health information about their patients; these regulations provide the first national standards for the privacy of health information (217).
HIPAA also recognizes the legitimate need for public health authorities and others responsible for ensuring the public's health and safety to have access to personal health information to conduct their missions, as well as the importance of public health disease reporting by health-care providers. HIPAA permits disclosure of personal health information to public health authorities legally authorized to collect and receive the information for specified public health purposes. Such information may be disclosed without written authorization from the patient. Disclosures required by state and local public health or other laws are also permitted. Thus, HIPAA should not be a barrier to the reporting of suspected and verified TB cases by health-care providers, including health-care institutions. Additional information about HIPAA is available from the U.S. Department of Health and Human Services.
# Roles and Responsibilities of Federal Public Health Agencies
# Roles and Responsibilities of Jurisdictional Public Health Agencies
Planning and policy development. The blueprint for TB control for a given area is a responsibility of the jurisdictional public health agency. Policies and plans should be based on a thorough understanding of local epidemiologic data and on the capabilities and capacities of clinical and support services for clients, the fiscal resources available for TB control, and ongoing indicators of program performance. Open collaboration is essential among public health officials and community stakeholders, experts in medical and nonmedical TB management, laboratory directors, and professional organizations, all of whom provide practical perspectives to the content of state and local TB-control policy. Policies and procedures should reflect national and local standards of care and should offer guidance in the management of TB disease and LTBI.
A written TB control plan that is updated regularly should be distributed widely to all interested and involved parties. The plan should assign specific roles and responsibilities; define essential pathways of communication between providers, laboratories, and the public health system; and assign sufficient resources, both human and financial, to ensure its implementation, including a responsible case manager for each suspected and verified case of TB. The plan should include the provision of expert consultation and oversight for TB-related matters to clinicians, institutions, and communities. It should provide special guidance to local laboratories that process TB-related samples, assist local authorities in conducting contact or outbreak investigations and DOT, and provide culturally appropriate information to the community. Systems to minimize or eliminate financial and cultural barriers to TB control should be integral to the plan, and persons with TB and persons at high risk with TB infection should receive culturally appropriate education about TB and clinical services, including treatment, with no consideration for their ability to pay. Finally, the plan should be consistent with current legal statutes related to TB control. Relevant laws and regulations should be reviewed periodically and updated as necessary to ensure consistency with currently recommended clinical and public health practice (e.g., mandatory reporting laws, institutional infection-control procedures, hospital and correctional system discharge planning, and involuntary confinement laws) (218).
Collection and analysis of epidemiologic and other data. The development of policies and plans for the control of TB within a jurisdiction requires a detailed understanding of the epidemiology of TB within the jurisdiction. Mandatory and timely case reporting from community sources (e.g., providers, laboratories, hospitals, and pharmacies) should be enforced and evaluated regularly. To facilitate the reporting process and data analyses, jurisdictions should modify systems as necessary to accommodate local needs and evolving technologies. State and local TB-control programs should have the capability to monitor trends in TB disease and LTBI in populations at high risk and to detect new patterns of disease and possible outbreaks. Populations at high risk should be identified and targeted for active surveillance and prevention, including targeted testing and treatment of LTBI (4). Timely and accurate reporting of suspected and confirmed TB cases is essential for public health planning and assessment at all levels. Analyses of these data should be performed at least annually to determine morbidity, demographic characteristics, and trends so that opportunities for targeted screening for disease or infection can be identified. Regular reviews of clinical data (e.g., collaborative formal case presentations and cohort analyses of treatment outcomes; completeness, timeliness and effectiveness of contact investigations; and treatment of LTBI) may be used as indicators of program performance.
Data should be collected and maintained in a secure, computerized data system that contains up-to-date clinical information on persons with suspected and confirmed cases and on other persons at high risk. Each case should be reviewed at least once monthly by the case manager and by field or outreach staff to identify problems that require attention. The TB-case registry should ensure that laboratory data, including data on sputum culture conversion and drug susceptibility testing of clinical isolates, are promptly reported, if applicable, to the health-care provider so any needed modifications in management can be made. This requires a communications protocol for case managers, providers, and the public health and private laboratory systems that will transmit information in a timely fashion. Aggregate program data should be available to the health-care community and to community groups and organizations with specific interests in public health to support education and advocacy and to facilitate their collaboration in the planning process.
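The data elements described above might be organized in a registry record along the following lines (a sketch only; the field names are illustrative assumptions, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class TBCaseRecord:
    """Illustrative TB-case registry entry mirroring the data elements described above."""
    case_id: str
    report_date: date
    case_manager: str
    treatment_regimen: list[str] = field(default_factory=list)
    sputum_culture_conversion_date: Optional[date] = None
    drug_susceptibility_results: dict[str, str] = field(default_factory=dict)  # drug -> "S"/"R"
    last_monthly_review: Optional[date] = None  # cases should be reviewed at least monthly
    contacts_identified: int = 0
```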
Clinical and diagnostic services for patients with TB and their contacts. TB-control programs should ensure that patients with suspected or confirmed TB have ready access to diagnostic and treatment services that meet national standards (3,5). These services are often provided by state- or city-supported TB specialty clinics and staffed by health department personnel or by contracted service providers; however, persons may seek medical care for TB infection or disease in the private medical sector. Regardless of where a person receives medical care, the primary responsibility for ensuring the quality and completeness of all TB-related services rests with the jurisdictional health agency, and health departments should develop and maintain close working relations with local laboratories, pharmacies, and health-care providers to ensure that standards of care, including those for reporting, are met.
Clinical services provided by the health department, contracted vendors, or private clinicians should be competent, accessible, and acceptable to members of the community served by the jurisdiction. Hours of clinic operation should be convenient, and waiting intervals between referral and appointments should be kept to a minimum. Persons with symptoms of TB should be accommodated immediately (i.e., on a walk-in basis). Staff, including providers, should reflect the cultural and ethnic composition of the community to the extent that this is possible, and competent clinical interpreter services should be available to those patients who do not speak English. All clinical services, including diagnostic evaluation, medications, clinical monitoring, and transportation, should be available without consideration of the patient's ability to pay and without placing undue stress on the patient that might impair completion of treatment.
Clinical facilities should provide diagnostic, monitoring, and screening tests, including radiology services. Health-care providers, including nurses, clinicians, pharmacists, laboratory staff members, and public health officials, should be educated about the use and interpretation of diagnostic tests for TB infection and disease. Clinics and providers should monitor patients receiving TB medications at least monthly for drug toxicity and for treatment response, according to prevailing standards of care (5). Counseling and voluntary testing for HIV infection should be offered to all persons with suspected and proven TB and to certain persons with LTBI, with referral for HIV treatment services when necessary. A case manager, usually a health department employee, should be assigned to each patient suspected or proven to have TB to ensure that adequate education is provided about TB and its management, standard therapy is administered continuously, and identified contacts are evaluated for infection and disease.
A treatment plan for persons with TB should be developed immediately on report of the case. This plan should be reviewed periodically by the case manager and the treating clinician and modified as necessary as new data become available (219). The treatment plan should include details about the medical regimen used, how and where treatment is to be administered, monitoring of adherence, drug toxicity, and clinical and bacteriologic responses. Social and behavioral factors that might interfere with successful completion of treatment also should be addressed.
Patient-specific strategies for promoting adherence to treatment should take into account each patient's clinical and social circumstances and needs (5). Such strategies might include the provision of incentives or enablers (e.g., monetary payment, public transportation passes, food, housing, child care, or transportation to the clinic for visits). Whether the patient's care is managed by a public health clinic or in the private sector, the initial strategy used should emphasize direct observation of medication ingestion by an HCW. Patient input into this process (e.g., regarding medications to be taken or the location of DOT) is often useful as it can minimize the burden of treatment and provide the patient a degree of control over an anticipated lengthy course of therapy. Expert medical consultation in TB should be available to the health-care community, especially for patients who have drug-resistant disease or medical diagnoses that might affect the course or the outcome of treatment. Consultants may be employees of the health department or clinicians with expertise who are under contract with the health department.
Inpatient care should be available to all persons with suspected or proven TB, regardless of the person's ability to pay. Hospitalized patients with suspected or proven TB should have access to expert medical and nursing care, essential diagnostic services, medications, and clinical monitoring to ensure that diagnostic and treatment standards are met. Inpatient facilities that manage persons who are at risk for TB should have infection-control policies and procedures in place to minimize the risk for nosocomial spread of infection. Facilities should report persons with suspected or confirmed TB to the health department and arrange for discharge planning as required by statute.
Public health agencies should have legal authority and adequate facilities to ensure that patients with infectious TB are isolated from the community until they are no longer infectious. This authority should include the ability to enforce legal confinement of patients who are unwilling or unable to adhere to medical advice (218,220). This authority also should apply to nonadherent patients who no longer are infectious but who are at risk for becoming infectious again or becoming drug resistant.
TB-control programs should serve as sources of information and expert consultation to the health-care community regarding airborne infection and appropriate infectioncontrol practice. A TB program's presence raises overall provider awareness of TB and facilitates timely diagnosis, reporting, and treatment. Collaboration with local health-care facilities to design and assist in periodic staff education and screening is often a health department function. Expertise in airborne infections by TB-control personnel may be shared with biologic terrorism programs to assist in the design and implementation of local protocols.
Contact investigation, including education and evaluation of contacts of persons with infectious TB, is a key component of the public health mandate for TB control. Often the primary responsibility of the case manager, contact investigation should proceed as quickly and as thoroughly as indicated by the characteristics of the specific case and by those of the exposed contact (e.g., young children or immunocompromised persons). This statement includes recommendations on organizing and conducting contact investigations. TB-control programs that are prepared to implement enhanced TB-control strategies should initiate or facilitate implementation by other medical providers of programs for targeted testing and treatment of persons with LTBI on the basis of local epidemiologic data that identify populations at high risk. A public health approach to this activity is presented in this statement (see Essential Components of TB Control in the United States).
Liaison with communities at high risk is critical to the success of TB control in any jurisdiction. TB-control programs should develop strong lines of communication with local community groups and organizations and their health-care providers to understand local priorities and beliefs about TB. Trusted community members can facilitate the design and implementation of strategies to improve TB diagnosis and prevention. Community-based clinical services that use local providers who are educated in TB treatment and prevention and who have a connection with the TB-control program can improve community acceptance of prevention and treatment of TB (221).
Training and education. TB-control programs should provide education and training in the clinical and public health aspects of TB to all program staff. Staff members should receive appropriate education at regular intervals on the basis of their particular responsibilities in the program and should demonstrate proficiency in those areas when tested. Public health TB programs also should educate health-care providers (both public and private), community members, public health officials, and policy makers on the basis of local epidemiology and needs. To ensure the availability of a competent workforce for TB that understands and meets the needs of its community, state TB programs should use resources from CDC-funded national TB centers, NIH-supported TB curriculum centers, NTCA, and other national and local agencies to create and implement education activities in coordination with schools of medicine, nursing, pharmacy, dentistry, and public health; community-based organizations and their constituents; local health-care providers; and health-care institutions (222).

Information technology. The use of information technology in TB control (224,225) should be prioritized by all TB-control programs. Information technology can improve care of patients with TB through standardized collection of data; tracking of test results and details of treatment, including administration of DOT; and prediction of interactions among medications. Information technology can also facilitate analysis and rapid distribution of epidemiologic data and the management of individualized treatment plans (5) and support ongoing program performance analyses. Barriers to successful implementation of information technology include costs and resistance to change.
Monitoring and evaluation. The systematic monitoring and analysis of program activities is a critical factor in enhancing program performance. Evaluation techniques provide TB programs with an evidence-based approach to assess and improve their TB-control strategies by understanding what causes good or bad program performance. Evaluation can also be used for program advocacy, assessing staffing needs, training and capacity building, directing limited resources to the most productive activities, accounting for available resources, generating additional resources, and recognizing achievement (226).
Each public health agency should develop its own priorities for program evaluation on the basis of the nature and dimensions of the TB problem in its jurisdiction and the way that services are organized. In general, the first priority for evaluation efforts should be to focus on those activities and outcomes that relate most directly to the key strategies of TB control: detecting patients with infectious TB and administering a complete course of treatment; finding contacts and other persons at high risk with LTBI and treating them; and interrupting transmission of M. tuberculosis in high-risk settings (Box 4).
Targets for program performance have been established by CDC (227) to assist public health agencies in treating TB patients, protecting their contacts, and improving the quality of case reporting for national surveillance (Table 4). These national objectives for program performance provide a starting point for state and local TB-control programs to use for program evaluation, but each TB-control program should establish methods to evaluate its performance.
TB case management has typically been evaluated through individual chart reviews and case conferences. However, cohort analysis, a systematic evaluation of the treatment outcomes of all TB cases during a stipulated period of time, is the preferred means of determining the number and percentage of cases that complete a course of treatment in <12 months. Cohort analyses should be a cornerstone of evaluation by all TB-control programs. A guide to cohort analysis and other evaluation tools has been published (228). National objectives have been set for completing treatment for LTBI among contacts of infectious cases of TB (Table 4). Other program areas that should be monitored through formal evaluation methods include timeliness and completeness of reporting of TB cases and suspected cases, frequency of use of a recommended treatment regimen for patients with TB and LTBI, and quality of the program's databases for surveillance and case management.
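As a simple illustration of the arithmetic behind the cohort analysis described above, the sketch below tallies the number and percentage of cases in a cohort that completed treatment in <12 months. It is a minimal example rather than an evaluation tool; the record layout, field names, and sample dates are hypothetical.

```python
from datetime import date

# Hypothetical treatment records for one cohort period; the field names
# ("start", "end", "outcome") are illustrative, not a standard format.
cohort = [
    {"start": date(2004, 1, 10), "end": date(2004, 9, 2),  "outcome": "completed"},
    {"start": date(2004, 2, 3),  "end": date(2005, 4, 20), "outcome": "completed"},
    {"start": date(2004, 3, 15), "end": None,              "outcome": "lost to follow-up"},
]

def completed_within_12_months(case):
    """True if the case completed treatment in <12 months (approximated as 365 days)."""
    if case["outcome"] != "completed" or case["end"] is None:
        return False
    return (case["end"] - case["start"]).days < 365

n_total = len(cohort)
n_timely = sum(completed_within_12_months(c) for c in cohort)
print(f"{n_timely} of {n_total} cases ({100 * n_timely / n_total:.0f}%) "
      "completed treatment in <12 months")
```

In practice the same tabulation would be run against the program's surveillance or case-management database rather than a hand-entered list.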
To respond to the need for improved and standardized program evaluation activities, CDC and six state TB-control programs have established an Evaluation Working Group whose goal is to improve the capacity of TB-control programs to routinely conduct self-evaluations and use the findings to improve and enhance their programs. The group is developing indicators for program performance and an inventory of evaluation tools, including data collection instruments, data analysis methods, and evaluation training materials. During the next 2 years, a draft set of these materials will be tested in three TB-control programs for utility, feasibility, and accuracy. Ultimately, this package of evaluation materials and resources will be made available to all TB-control programs.
# Public Health Workforce
No single model exists for staffing public health TB-control programs. Approaches to TB control should be flexible and adaptable to local needs and circumstances. Two components of the public health workforce, public health nurses and community outreach workers, merit specific attention.
Public health nurses. Public health nurses are registered nurses with a Bachelor of Science degree who are employed or whose services are contracted for by health departments. Certain states require certification in additional competencies before a nurse can be hired as a public health nurse. Public health nurses traditionally have played a prominent role in TB control in the United States. Their training, including that in nonmedical aspects of disease, has provided nurses with the special skills needed to manage or coordinate the medical and the social-behavioral concerns associated with the prevention and treatment of TB (229). These skills include 1) designing contact and source-case investigations; 2) educating patients, contacts, and families; 3) identifying ineffective drug therapy regimens and drug toxicities; 4) recognizing patient behaviors that might lead to poor adherence; and 5) developing strategies to encourage completion of therapy. As health departments adapt to changing health-care environments, the role of public health nurses working to control TB also is evolving to accommodate the varied mechanisms by which services are delivered. Standards of practice for TB nursing are being updated by the National Tuberculosis Nurse Consultant Coalition, a section of NTCA, to guide jurisdictions in creating and maintaining a specialized nursing resource for TB control and prevention.
Community outreach workers. Community outreach workers are staff members who provide services, such as DOT, to patients outside of the clinic. They may also be classified as disease investigation specialists or community health educators. Because TB has become concentrated in specific populations (e.g., foreign-born and homeless persons) in the United States, outreach workers have assumed a key role in TB control. Often members of the communities they serve, outreach workers connect the health-care system with populations at high risk, ensuring that the principles and processes of TB control are communicated to and understood by those populations. Outreach workers' functions include facilitating treatment for patients and contacts; providing DOT; educating patients, their families, workplace personnel, and communities; and participating in contact investigations. In each case, outreach workers form a bridge between patients and health-care providers to achieve common understandings and acceptance of plans for diagnoses and treatment. Clinicians with specialized expertise, including nurse-case managers, should supervise outreach workers.
# Clinicians
Clinicians in medical practice in the nonpublic health sector play a vital role in TB control throughout the United States. Hospital- or clinic-based medical practitioners, including those working in emergency departments (EDs), are usually the first source of medical care for persons with TB (230-232); they also may provide ongoing management for TB patients (48). The role of medical practitioners in TB control will increase as TB morbidity in the United States decreases and jurisdictions reduce or even eliminate public health clinical services for TB.
Medical practitioners are often not sufficiently knowledgeable about TB (233), and clinicians in private practice frequently do not follow recommended guidelines and make errors in prescribing anti-TB therapy (231,234,235). The failure of public health and private practitioners to interact effectively is a weak link in global TB control (236). Successful models exist for acknowledging and facilitating the work of private medical practitioners in the complex process of diagnosing and treating persons with TB. For example, for each reported TB case in New Mexico, a collaborative case-management strategy is used that includes treating clinicians and pharmacists from the private sector in addition to public health case managers (48). Another model of effective private-public partnerships was employed in NYC during the 1985-1992 TB resurgence, with health department case management and DOT for patients under private care (13).
As TB elimination efforts continue, the role of medical practitioners will further expand because they provide access to populations that have been targeted for testing and treatment of LTBI. Greater participation by the nonpublic health sector in preventive intervention has been advocated (2,51), and clinical standards have been published to guide medical practitioners in managing patients with TB disease and LTBI (8).
# Roles and Responsibilities of Clinicians
- Private medical practitioners should
  - understand prevalent medical conditions, including those with public health implications, of populations within their practice;
  - understand applicable state laws and regulations for reporting diseases and the need to report cases;
  - understand the range of responsibilities, statutory and otherwise, that arise when TB is suspected in a patient under medical evaluation, including 1) the need for prompt establishment of diagnosis; 2) use of consultants and hospitalization if indicated; 3) reporting the suspected case to the jurisdictional public health agency and cooperating with subsequent public health activities; and 4) developing, in partnership with the public health agency, a treatment plan that optimizes the likelihood that the patient will complete the recommended course of therapy; and
  - incorporate current recommendations for diagnosis (3), standard treatment of TB (5), and targeted testing and treatment of LTBI (4) into their practices.
# Civil Surgeons
Civil surgeons are licensed physicians who are certified by U.S. Citizenship and Immigration Services (CIS) to conduct a required health screening examination, including testing for LTBI and active TB disease, on foreign-born persons living in the United States who apply for permanent residency. In 2002, approximately 679,000 foreign-born persons applied for permanent residency and were screened by civil surgeons, compared with 245,000 such persons in 1995 (238). CDC has responsibility for providing guidance on screening and treatment but has no regulatory role in monitoring the quality or outcomes of these examinations.
Because of their access to foreign-born persons at high risk, civil surgeons are a critical component of TB control. U.S.-based immigration screening can identify foreign-born persons with LTBI for whom treatment is indicated (239). Although civil surgeons receive immigration-focused training, little information is available on the knowledge, attitudes, and practices of civil surgeons. A recent survey indicated that among 491 physicians serving as civil surgeons in California, Massachusetts, and New York, the majority were graduates of U.S. medical schools; 75% were primary care practitioners; and 47% were board certified in their specialty. Among 5,739 foreign-born applicants examined by these civil surgeons, 1,449 (25%) received nonstandard screening (240). As a result of these findings, efforts are under way to develop guidance documents and training materials for physicians who screen immigrants for TB infection and disease.
# Roles and Responsibilities of Civil Surgeons
- Civil surgeons should
  - understand current guidelines for the diagnosis (3) and treatment of TB (5) and LTBI (4),
  - establish a working relationship with the jurisdictional health agency and report suspected and confirmed cases of TB, and
  - develop a referral mechanism for evaluation for TB disease and LTBI of persons seeking adjustment of immigration status.
# Community Health Centers
Community health centers typically provide primary health-care services to populations that encounter barriers to receiving those services at other sites in the health-care system, such as low-income working persons and their families, immigrants and refugees, uninsured persons, homeless persons, the frail elderly, and poor women and children. Patients at high risk for TB often receive primary and emergency health care in community health centers (51). For example, community health centers in certain inner-city areas might serve primarily a clientele of homeless persons, whereas centers in neighborhoods in which certain racial and ethnic populations are concentrated might become predominant health-care providers for immigrants and refugees. Newly arriving refugee families are frequently directed to community health centers to receive federally supported health-screening services, which might include targeted testing and treatment for LTBI. Persons with symptoms of TB might go first for evaluation and care to a community health center. For these reasons, community health centers are a critical part of efforts to control and prevent TB.
# Roles and Responsibilities of Community Health Centers
- Community health centers should
  - provide their medical staff with the skills and knowledge needed to conduct a TB risk assessment of their clients, diagnose and initiate treatment for TB disease, and diagnose and treat LTBI (241);
  - develop close working relationships with consultant physicians, hospitals, and clinical laboratories and with the public health agency that serves their jurisdiction;
  - arrange for reporting patients with suspected TB, ensuring availability of diagnostic services (e.g., sputum smears for AFB, cultures for M. tuberculosis, and chest radiographs), and providing consultation and referral of patients for diagnosis, treatment, and hospitalization, as indicated;
  - understand federal and state programs that support screening, diagnostic, and treatment services for patients at high risk and make prevention, diagnosis, and treatment of TB a high priority;
  - work with public health agencies to educate patients about the personal and public health implications of TB and LTBI and motivate them to accept prevention services; and
  - establish recommended infection-control practices (10) to protect patients and staff.
# Hospitals
Hospitals provide multiple services that are instrumental to the diagnosis, treatment, and control of TB. Hospitals with active outpatient departments and EDs often serve as sites of acute and primary medical care for homeless persons, inner-city residents, immigrants and refugees, and other persons at high risk for TB. Also, hospital staff members often provide medical consultation services for the diagnosis and management of TB by public health and community clinicians. Laboratory services provided by hospitals for community-based medical care providers might include key diagnostic tests for TB.
TB cases often are detected during hospitalization at acute-care hospitals (230,242). In a prospective cohort study at 10 sites in the United States, 678 (45%) of 1,493 patients reported with TB received their diagnosis during hospitalization (230). Hospital-based health professionals evaluate patients for TB, establish the diagnosis, and initiate treatment regimens and reporting of cases to public health departments. Instances of delayed recognition, diagnosis, and treatment for TB among hospitalized patients subsequently found to have TB have been reported (24,178), indicating a need for more effective training and education of hospital medical staff members.
Because 25%-45% of patients with TB receive their diagnostic evaluation while in a hospital (230,242), hospitals have an opportunity to provide patient-based teaching on TB for their own staff members and for health professionals from the community served by the hospital. Venues such as staff conferences and medical grand rounds, conducted regularly by hospitals, can be sources of training and education on clinical, laboratory, and public health concerns that arise during evaluation and initial medical management of hospitalized patients with TB.
Hospitals should protect their patients, staff, and visitors from exposure to M. tuberculosis. The importance of effective TB infection control was emphasized during the 1985-1992 TB resurgence in the United States, when hospitals were identified as sites of transmission of multidrug-resistant TB (243). Implementation of effective infection-control guidelines has been effective in reducing transmission of TB in hospitals (56,244,245).
# Roles and Responsibilities of Hospitals
# Academic Institutions
Academic institutions (including schools of medicine, public health, and nursing) have an opportunity to contribute to TB control in the United States and worldwide. Students from diverse disciplines, including the clinical and laboratory sciences, nursing, epidemiology, and health services, should be introduced to applicable concepts of public health in general and, because TB is a major cause of preventable illness and death in developing countries (44), to TB in particular. During the resurgence of TB in the United States during 1985-1992, expertise in TB was limited. Federal programs (e.g., the NIH National Heart, Lung, and Blood Institute's Tuberculosis Academic Award program) provided funding to incorporate teaching of TB more fully into medical school curricula. Researchers at academic institutions are critical to efforts to improve the prevention, management, and control of TB because they develop new tools, including new diagnostic tests, new drugs, and better means of identifying and treating LTBI, and conduct basic research to create a vaccine for TB (180,246,247).
As with hospitals, academic institutions can provide benefits to other participants in TB control. Conferences, grand rounds, and other presentations are a source of continuing education for private medical practitioners and other community-based HCWs. As well-trained specialists, researchers at academic institutions can provide clinical, radiographic, and epidemiologic consultation to medical practitioners and public health agencies. A majority of academic institutions manage university-based hospitals, which often serve populations at high risk. University hospitals can become models for TB risk assessment of patients, inpatient care, and infection-control practice, and they can serve as tertiary care sites for an entire community or region.
Partnerships between academic institutions and public health agencies are mutually beneficial (248). In certain cases, health departments and public health TB clinics are staffed or managed by faculty physicians from academic institutions. This partnership facilitates use of these clinics for graduate medical training for physicians in subspecialty areas (e.g., pulmonary and infectious diseases), enhances training for clinic staff, and provides opportunities for clinical and operational research.
# Roles and Responsibilities of Academic Institutions
# Medical Professional Organizations
Because they are involved with medical practice, research, education, advocacy, and public health, medical professional organizations are critical partners in TB control efforts. Greater participation of the nonpublic health medical sector is needed to maintain clinical expertise in the diagnosis and management of TB in an era of declining incidence. Organizations whose memberships include primary care medical practitioners can make significant contributions to the control, prevention, and elimination of TB by including TB in their training and education agendas.
ATS and IDSA both support TB control efforts in the United States. With a membership of approximately 14,000 health professionals, including clinicians trained in pulmonary diseases, ATS conducts research, education, patient care, and advocacy to prevent respiratory diseases worldwide. IDSA promotes and recognizes excellence in patient care, education, research, public health, and the prevention of infectious diseases. In recent years, IDSA has joined ATS in focusing education and advocacy activities on TB through its annual meetings, publications, and sponsorship of this series of statements.
Other medical professional organizations also can support TB control efforts. Medical professional organizations can 1) provide TB education to their members through meetings, symposia, statements, and web sites; 2) serve as venues for better communication between the private medical and public health sectors; 3) promote the TB research agenda locally and nationally; and 4) advocate for resources for strong TB control globally and in the United States.
# Community-Based Organizations
Involvement of community groups in TB control has long been encouraged (17). The critical importance of such involvement is underscored by the trend in the United States for TB to be limited to certain populations at high risk (e.g., contacts of persons with active cases, persons born outside the United States, homeless persons, incarcerated persons, and persons with HIV infection). Programs for education and targeted testing and treatment of LTBI should be organized for these populations.
The public health sector frequently experiences difficulty in gaining access to persons in populations at high risk (51). Such persons might be socially marginalized, as in the case of new refugees, or they might be suspicious of persons representing government agencies, as in the case of undocumented aliens. Furthermore, the target population's own view of its health-care priorities, often best articulated by the community-based organizations that represent it, should be considered in the design of public health interventions (249). Social, political, religious, and health-related organizations that have arisen from grassroots efforts to meet community needs often can facilitate access to public health programs (221).
Community-based organizations can be particularly effective in providing information and education on TB to their constituencies. As part of the communities they serve, such organizations are often highly regarded, and their messages might be accepted more readily than those delivered by the jurisdictional health department.
# Roles and Responsibilities of Community-Based Organizations
- Community-based organizations should be aware of their constituents' health risks. Organizations providing services to populations at risk for TB should partner with the jurisdictional public health TB program and medical care providers from the community to facilitate access to diagnostic, treatment, and prevention services for the target population. As resources allow, organizations should provide assistance for treatment services to their constituency (e.g., DOT, incentives and enablers, and other outreach services).
- When serving a population at risk for TB, community-based organizations should become involved in advocacy initiatives, such as state and local TB advisory committees and coalitions.
- Community organizations serving populations at high risk should work with public health agencies and educational institutions to develop education materials that are tailored culturally and linguistically to their populations.
# Correctional Facilities
Correctional facilities are common sites of TB transmission and propagation (250,251). The incidence of TB and the prevalence of LTBI are substantially higher in prisons and jails than in the general population (252,253). TB is believed to be the leading cause of death for prisoners worldwide (254).
Targeted testing for and treatment of LTBI in correctional facilities have been demonstrated to have a substantial public health impact (124). Testing and treatment for LTBI are carried out more easily in prisons (255) because the length of stay is generally sufficient to permit completion of a course of treatment. Jails have proved convenient sites for targeted testing, but subsequent treatment of LTBI has proved challenging (256). Innovative methods for ensuring completion of treatment for LTBI in jail detainees have been proposed (257).
Because of their communal living arrangements, correctional facilities, like health-care facilities, have the responsibility to limit the transmission of TB within the institution and to protect their inhabitants and staff from exposure. This is a particular challenge in jails because of the short lengths of stay for the majority of detainees. Even in prison systems, abrupt and unexpected transfers of detainees among institutions might occur, with little consideration for health issues. Prisons and jails frequently house HIV-infected persons in separate facilities to ensure adequate health care. However, recent publications describing outbreaks of TB in such settings have emphasized the hazard of this strategy (35,126).
# Roles and Responsibilities of Correctional Facilities
# Pharmaceutical and Biotechnology Industries
Because of their essential role in developing new diagnostics, drugs, and vaccines, the pharmaceutical and biotechnology industries are partners in TB control. Although development of new tools for diagnosis, treatment, and prevention of TB has been deemed essential to the effort to combat the disease globally and to continue to make progress toward its elimination in the United States and other developed countries (1,2,45,259), progress in these fields has been slow. This slow progress has been attributed to private industry's perception that such products are not needed in developed countries and do not offer profit opportunities in resource-poor countries (246,260). However, new public-private partnerships are emerging to facilitate the development of essential new tools (261), including three partnerships established with support from the Bill and Melinda Gates Foundation: the Global Alliance for Tuberculosis Drug Development (http://www.tballiance.org), the Aeras Global Tuberculosis Vaccine Foundation, and the Foundation for Innovative New Diagnostics. These organizations have provided venues to identify and address obstacles to developing new tools for TB among private industry, public and academic researchers, and philanthropic organizations. These organizations also receive support from the private sector.
The pharmaceutical industry has also contributed to the global TB control effort by assisting in making drugs for TB, including second-line drugs for patients with multidrug-resistant TB, more affordable (262,263). Such actions can enable pharmaceutical companies to become leaders in efforts to improve TB control and prevention.
# Roles and Responsibilities of the Pharmaceutical and Biotechnology Industries
- The pharmaceutical and biotechnology industries should
  - understand the dimensions of the global TB epidemic and realize their key role in developing the necessary tools for diagnosis, treatment, and prevention of TB;
  - respond to the current surge of interest in TB globally by reexamining the costs of new product development and by considering potential new public and private funding and the markets for such products in developing countries;
  - contribute their perspectives and become involved in coalitions such as NCET, the Global Partnership to Stop Tuberculosis, the Global Alliance for Tuberculosis Drug Development, and the Foundation for Innovative New Diagnostics; and
  - work with other stakeholders to ensure access of essential products to those whose lives are at stake.
# Essential Components of TB Control in the United States

# Case Detection and Management
Case detection and case management include the range of activities that begin when a diagnosis of TB is first suspected and end with the completion of a course of treatment for the illness. TB case management describes the activities undertaken by the jurisdictional public health agency and its partners to ensure successful completion of TB treatment and cure of the patient. The rationale and methodology of TB case management have been described previously (5). Organizational aspects of case management from the viewpoint of the jurisdictional public health agency are also discussed in this statement.
Case detection includes the processes that lead to the presentation, evaluation, receipt of diagnosis, and reporting of persons with active TB. Case detection involves patients with active TB who seek medical care for symptoms associated with TB, their access to health care, their health-care providers, the consultants and clinical laboratories used by those health-care providers, and the responsible public health agency. Although steadily increasing treatment completion rates (14) indicate that progress has been made in the management of TB patients, TB case detection is still problematic. Delays in the diagnosis and reporting of TB cases continue to be common. Also, despite the 44% reduction in TB incidence in the United States since 1992, the proportion of pulmonary cases that are sputum smear-positive at diagnosis has changed little, accounting for >60% of all reported cases (14). The majority of pulmonary TB cases continue to be diagnosed at an advanced stage. Earlier diagnosis would result in less individual morbidity and death, greater success in treatment, less transmission to contacts, and fewer outbreaks of TB. Improvement in the detection of TB cases is essential to progress toward elimination of TB in the United States (Box 1).
The first step in improving TB case detection is to remove barriers in access to medical services that are often encountered by persons in high-risk categories. Such barriers might be patient-related, such as cultural stigmas associated with the diagnosis of TB, which might lead foreign-born persons to deny or hide symptoms (264,265), or fear of being reported to immigration authorities if medical care is accessed (19). Foreign-born persons, particularly recently arrived immigrants, refugees, and other persons of low SES might not have access to primary health services because they do not have health insurance or they are not familiar with the U.S. medical care system (20,118,266).
Removing patient-related barriers to health care is particularly difficult. Improved patient education about TB is needed (18). Continuing immigration from countries at high risk, often including persons with strong cultural views about TB, underscores the need for patient education. As with other interventions to enhance TB control and prevention, local public health action should be based on the local pattern of disease. In developing education messages and outreach strategies, public health authorities should work with organizations that serve communities at high risk to gain community input (203). This statement provides recommendations on working with community-based organizations, key informants, and academic institutions to gain ethnographic information, learn about the health beliefs and values of populations at high risk within the community, and develop targeted interventions that will be most effective.
The majority of TB cases are detected during the medical evaluation of symptomatic illnesses (19,267). Persons experiencing symptoms ultimately attributable to TB usually seek care not at a public health TB clinic but rather from other medical practitioners and health-care settings. In 18 California counties with the highest TB morbidity of persons during 1996-1997, initial points of entry into the health-care system for persons who received a diagnosis of TB were hospital inpatient evaluations (45%), private outpatient offices or clinic evaluations (32%), TB clinic evaluations (12%), and other sites (11%), including a non-TB clinic in a health department and correctional facilities (California Tuberculosis Controllers Association, unpublished data, 2003). A similar pattern was observed in Washington state. In Seattle and its suburban areas in 1997, primary care practitioners or clinics reported 48% of TB cases during evaluations of outpatients with symptoms and 32% during hospital evaluations; only 2% of cases were diagnosed during a public health TB clinic evaluation for a symptomatic illness (Seattle-King County Department of Public Health, unpublished data, 1998).
These data indicate that the professionals in the primary health-care sector, including hospital and ED clinicians, should be trained to recognize patients with symptoms consistent with TB. Dramatic reductions in TB were recorded in NYC (13) and Baltimore (195) in association with extensive education campaigns for health-care providers in the community. These studies indicate the need to maintain clinical expertise for the diagnosis and treatment of TB (24,41).
Because pulmonary disease among adults is most frequently associated with the spread of TB, the following discussion and recommendations regarding TB case detection are limited to considerations of pulmonary TB among adults. A classic set of historic features, signs, symptoms, and radiographic findings occurring among adults should raise a suspicion of pulmonary TB and prompt a diagnostic investigation (3,189,267-271). Historic features include exposure to TB, a positive test result for M. tuberculosis infection, and the presence of risk factors such as immigration from a high-prevalence area, HIV infection, homelessness, or previous incarceration. Signs and symptoms typical of TB include prolonged coughing with production of sputum that might be bloody, fever, night sweats, and weight loss. On a chest radiograph, the classical findings of TB in immunocompetent patients are upper-lobe infiltrates, frequently with evidence of contraction, fibrosis, and cavitation (270). However, these features are not specific for TB, and, for every person in whom pulmonary TB is diagnosed, an estimated 10-100 persons are suspected on the basis of clinical criteria and must be evaluated (272,273).
The clinical presentation of TB varies considerably as a result of the extent of disease and the host response. In addition, variation in clinical symptoms and signs of TB is associated with underlying illnesses (e.g., HIV infection, chronic renal failure, alcoholism, drug abuse, and diabetes mellitus). Variation in the signs of TB also is associated with race and ethnicity, for reasons that are unknown (3,267,270). The chest radiograph among persons with advanced HIV infection and pulmonary TB, for example, might show lower-lobe and lobar infiltrates, hilar adenopathy, or interstitial infiltrates (274). TB should be suspected in any patient who has persistent cough for >2-3 weeks or other compatible signs and symptoms as noted previously (10,267,275).
In the drive toward TB elimination in the United States, effective TB case detection is essential, and medical practitioners should recognize patients in their practice who are at increased risk for TB and be aware of the possibility of diagnosing TB if they observe compatible symptoms. Guidelines have been provided for the initial steps of TB case detection in five clinical scenarios encountered by providers of primary health care, including those serving in medical EDs (Table 5). In these settings, evidence exists to support proceeding with a diagnostic evaluation for pulmonary TB. The subsequent management of suspected cases in these scenarios depends on the judgment of the medical practitioner, in consultation with specialists in internal medicine, pulmonary diseases, or infectious diseases if necessary (5). These recommendations do not cover the spectrum of clinical presentations of pulmonary TB in adults and are not meant to substitute for sound clinical judgment.
Cases of pulmonary TB also are detected through directed public health activities designed to detect TB disease among certain persons who have not sought medical care. Compared with persons whose cases are detected passively by medical practitioners among patients who have sought medical care, persons whose cases are detected actively are usually in a less advanced stage of pulmonary disease, as manifested by the absence of symptoms and by negative sputum AFB smear results. Although no supporting literature exists, cases detected at that earlier stage might be easier to treat and cure.
# TABLE 5. Guidelines for the evaluation of pulmonary tuberculosis (TB) in adults in five clinical scenarios

| Patient and setting | Recommended evaluation |
| --- | --- |
| Any patient with a cough of >2-3 weeks' duration, with at least one additional symptom, including fever, night sweats, weight loss, or hemoptysis | Chest radiograph; if suggestive of TB,* collect three sputum specimens for acid-fast bacilli (AFB) smear microscopy and culture |
| Any patient at high risk for TB† with an unexplained illness, including respiratory symptoms, of >2-3 weeks' duration | Chest radiograph; if suggestive of TB, collect three sputum specimens for AFB smear microscopy and culture |
| Any patient with HIV infection and unexplained cough and fever | Chest radiograph, and collect three sputum specimens for AFB smear microscopy and culture |
| Any patient at high risk for TB with a diagnosis of community-acquired pneumonia who has not improved after 7 days of treatment | Chest radiograph, and collect three sputum specimens for AFB smear microscopy and culture |
| Any patient at high risk for TB with incidental findings on chest radiograph suggestive of TB, even if symptoms are minimal or absent§ | Review of previous chest radiographs if available, and collect three sputum specimens for AFB smear microscopy and culture |

* Infiltrates with or without cavitation in the upper lobes or the superior segments of the lower lobes. SOURCE: Daley CL, Gotway MB, Jasmer RM. Radiographic manifestations of tuberculosis: a primer for clinicians. San Francisco, CA: Francis J. Curry National Tuberculosis Center; 2003:1-30.
† Patients with one of the following characteristics: recent exposure to a person with a case of infectious TB; history of a positive test result for Mycobacterium tuberculosis infection; HIV infection; injection or noninjection drug use; foreign birth and immigration within 5 years from a high-incidence area; or medical conditions that increase the risk for TB (e.g., body weight >10% below ideal body weight, silicosis, gastrectomy, or jejunoileal bypass).
§ Chest radiograph performed for any reason, including targeted testing for latent TB infection and screening for TB disease.

Active efforts to detect cases of TB among persons who have not sought medical care are routinely made during evaluation of contacts of patients with pulmonary TB (30,31,276) and of other persons with newly diagnosed infection with M. tuberculosis (4). Screening for TB also is performed during evaluation of immigrants and refugees with Class B1 or Class B2 TB notification status (277-279), during evaluations of persons involved in TB outbreaks (34,35,136,172,280,281), and occasionally in working with populations with a known high incidence of TB (167,185). Screening for TB disease is indicated when the risk for TB in the population is high and when the consequences of an undiagnosed case of TB are severe (282), such as in jails and prisons (253,283).
Screening for TB disease (i.e., active case finding) might contribute substantially to overall TB case detection. A population-based study from Los Angeles indicated that 30% of reported TB cases during the period of study were detected through screening activities (267). During 1998-2001, of 356 TB cases reported by the Seattle-King County TB Program, 40 (11%) were detected through active case detection in contact investigation and evaluations of immigrants and refugees with Class B1 and B2 TB notification status.
The clinical settings in which TB has been effectively detected among persons without symptoms, the methodology of testing, and outcomes of the screening process have been described (Table 6). On the basis of its very high yield of detecting TB cases, domestic follow-up evaluation of immigrants and refugees with Class B1 and B2 TB notification status should be given highest priority by all TB-control programs. The yield of detecting TB cases during screening at homeless shelters increased sharply in an outbreak setting (Table 6). Although prevalence data from individual studies are not available, investigations undertaken to control TB outbreaks that involved diverse settings and groups of immunocompetent and immunocompromised persons have consistently been productive in detecting TB cases and high rates of LTBI among exposed persons (34,35,136,173,280,281). Outbreak investigations should be counted among the settings in which screening for active TB is recommended.
# Contact Investigation and Outbreak Control
Contact investigation is an essential function of TB control in the United States (Box 4) (1,17). The investigation of a case of TB results in identifying approximately 10 contacts (284). Among close contacts, approximately 30% have LTBI, and 1%-3% have progressed to TB disease (30,284). Without intervention, approximately 5% of contacts with newly acquired LTBI progress to TB disease within 2 years of the exposure (285). The prevalence of TB among close contacts is approximately 1,000/100,000 population (>100-fold higher than in the general population) (285). Examination of contacts is therefore one of the most important activities for identifying persons with disease and those with LTBI who have a high risk for acquiring TB disease.
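Applying the figures cited above to a hypothetical program that investigates 100 cases illustrates the expected yield. The calculation below simply restates the cited estimates and assumes, for simplicity, that all identified infections are newly acquired; it is not program data.

```latex
\begin{align*}
\text{contacts identified} &\approx 100 \times 10 = 1{,}000 \\
\text{contacts with LTBI} &\approx 0.30 \times 1{,}000 = 300 \\
\text{contacts with TB disease at evaluation} &\approx (0.01\ \text{to}\ 0.03) \times 1{,}000 = 10\ \text{to}\ 30 \\
\text{additional cases within 2 years if untreated} &\approx 0.05 \times 300 = 15
\end{align*}
```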
Transmission of M. tuberculosis has occurred in health-care facilities (286,287), bars (134,288), doctors' offices (289), airplanes (290), crack houses (291), respite facilities that provide care for HIV-infected persons (136), drug rehabilitation methadone centers (36), navy ships (292), homeless shelters (120), schools (173), church choirs (140), and renal transplant units (141). The utility and importance of contact investigations in those settings and among populations at high risk (e.g., foreign-born persons, children, and persons exposed to multidrug-resistant TB cases) have also been documented.
In the United States, state and local public health agencies perform 90% of contact investigations as part of the public health mandate for TB control (Box 5) (2). Public health TB-control programs are responsible for ensuring that contact investigations are conducted effectively and that all exposed contacts are identified, provided with access to adequate care, and followed to completion of therapy. For health agencies to fully discharge this responsibility, adequate funding and political commitment are required.
Health agencies use a general epidemiologic framework for contact investigations (299). However, this approach alone might have limited effectiveness because of factors such as initial diagnostic delays and failure to ensure completion of therapy for LTBI. Consequently, programs have recognized the necessity of widening traditional contact investigations beyond the household to include nonhousehold locations (e.g., homeless shelters, correctional facilities, nursing homes, and hospices that serve HIV-infected persons). Genotyping studies have documented that traditional contact investigation methods have failed to identify contacts or detect transmission of M. tuberculosis (28,33,34,151,172). As a result, IOM (2) and ACET (1) have called for the development and implementation of enhanced techniques for contact investigation.
The primary goal of a contact investigation is to identify persons who were exposed to infectious M. tuberculosis and ensure that they are tested for M. tuberculosis infection, screened for TB disease, followed up, and, if indicated, provided a complete standard course of treatment. Secondary goals are to stop transmission of M. tuberculosis by identifying undetected patients with infectious TB and to determine whether a TB outbreak has occurred. In that case, an expanded outbreak investigation should ensue.
# Steps of a Contact Investigation
State and local public health agencies, often represented by TB-control programs, are responsible for initiating and conducting contact investigations and evaluating their outcomes to ensure their effectiveness. A contact investigation has 14 steps, as follows:
1. Setting priorities. A contact investigation is considered once a suspected or confirmed case of TB comes to the attention of the jurisdictional TB-control program. At that time, a decision should be made about the priority of that investigation among other TB-control activities. Not all cases of TB require a contact investigation, and certain investigations will have greater priority than others. Priorities should be decided on the basis of the characteristics of the source-case, of the environment of the place(s) of exposure, and of the contacts. The three most important categories of information used to establish priorities for cases for contact investigations are 1) the site of disease, 2) the results of sputum AFB smears and NAA testing, and 3) the findings on the chest radiograph. In general, patients with pulmonary TB, positive sputum AFB smear results, and cavitation noted on a chest radiograph are more infectious and therefore have a higher priority for contact investigation. The use of an NAA test is helpful in rapidly differentiating between pulmonary disease caused by M. tuberculosis and nontuberculous mycobacteria, thus avoiding unnecessary contact investigations. Persons with pulmonary TB who have negative sputum AFB smear results tend to be less infectious, and their contacts should be investigated, but with lower priority. Contacts of patients with extrapulmonary TB should be evaluated if the patient has concurrent pulmonary or laryngeal disease, the contacts are at increased risk for acquiring TB disease (e.g., children aged <5 years and HIV-infected persons), or the patient has pleural TB (300). Pleural TB is a manifestation of primary TB and often occurs among persons who have been recently infected. In addition, persons with pleural TB can have positive sputum AFB smear results. Children aged <5 years with TB, regardless of the site of disease, should have a contact investigation to identify the source-case.
2. Defining the beginning and end of the period of infectiousness. Before a contact investigation can be started, the period of infectiousness of the index case should be determined. This period sets the limits for the investigation, allows for setting priorities for contacts within the designated timeframe, and determines the scheduling for follow-up tests. Exactly when a patient becomes infectious is unknown; the usual assumption is that the patient becomes infectious approximately 3 months before diagnosis; however, the period might be longer, depending on the history of signs and symptoms, particularly cough, and the extent of disease. The end of the period is defined as the time when contact with the index case is broken or when all of the criteria for determining when during therapy a patient with pulmonary TB has become noninfectious (Box 3) are met. Patients with multidrug-resistant TB who are on inadequate therapy or who have persistently positive sputum AFB smear or culture results might remain infectious for a prolonged period of time. Those patients, if not in effective isolation, should be reassessed for new contacts as long as they remain infectious.
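As a concrete illustration of the default assumption described above, the following sketch estimates the start of the infectious period as 3 months before diagnosis, moving it earlier when the reported cough onset precedes that date. The function name and the 90-day approximation of 3 months are illustrative choices, not prescribed values; clinical judgment governs the actual determination.

```python
from datetime import date, timedelta

def infectious_period_start(diagnosis_date, cough_onset=None):
    """Estimate the start of the infectious period under the usual assumption:
    ~3 months (approximated here as 90 days) before diagnosis, or the date of
    cough onset if that came earlier."""
    default_start = diagnosis_date - timedelta(days=90)
    if cough_onset is not None and cough_onset < default_start:
        return cough_onset
    return default_start

# Example: cough onset well before the 3-month default moves the start back.
print(infectious_period_start(date(2005, 6, 1), cough_onset=date(2005, 1, 15)))
# -> 2005-01-15
```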
3. Medical record review. For potential transmission risk and infectiousness of a case to be assessed, all currently available information about the reported case or suspect is obtained through case medical record reviews, conversations with the health-care provider or other reporting source, and laboratory report reviews. This information can be disclosed by covered entities for public health activities as provided by the Privacy Rule of HIPAA (217).
4. Case interview and reinterview. The patient interview may be conducted in the hospital, at the patient's home, or wherever convenient and conducive to establishing trust and rapport. The ability to conduct an effective interview might determine the success or failure of the contact investigation. All persons with whom the patient has been in close contact and the locations that the patient commonly frequents should be identified. Good interviewing skills can elicit vital information that otherwise might not be forthcoming. For different reasons (e.g., stigmatization, embarrassment, or involvement in illegal activities), patients might be reluctant to identify contacts or places they frequent. Developing an ability to interview patients effectively so as to elicit contacts requires training and periodic review by supervisors, and only trained personnel should interview patients. A patient should be interviewed as soon as possible after notification and reinterviewed 1-2 weeks later to clarify data or obtain missing data. When possible, the second interview should be conducted at the patient's primary residence. Also, all interviews should be conducted in the patient's primary language and with sensitivity to the patient's culture.
5. Field investigation. Field investigations enable investigators to 1) interview or reinterview identified contacts and obtain an adequate medical history to evaluate previous exposure to TB, existence of prior M. tuberculosis infection, existence of disease and treatment, risk factors for acquiring TB, and symptoms;
2) obtain locator information; 3) apply a tuberculin skin test to identified contacts (the role of QFT-G in the assessment of contacts has not been determined); 4) observe contacts for any signs or symptoms suggestive of TB; 5) schedule subsequent medical evaluations and collect sputum samples from any contact who is symptomatic; 6) identify sources of health care and make referrals; 7) identify additional contacts who might also need follow-up; 8) educate contacts about the purpose of the investigation and the basics of TB pathogenesis and transmission; 9) observe the contact's environment for possible transmission factors (e.g., crowding and poor ventilation); 10) assess contacts' psychosocial needs and other factors that might influence compliance with medical recommendations; and 11) reinforce confidentiality. Visits to the exposure site(s) should be conducted as soon as possible. Contacts at higher risk for disease progression and more severe disease (e.g., young children and HIV-infected persons) require the most rapid follow-up. Transmission sites might involve social networks not customarily considered in traditional contact investigations. For example, in certain TB cases reported separately in different communities, participation in a church choir was identified as a common factor (140). The contact investigation failed to identify the sourcepatient's choir contacts, resulting in secondary cases of TB. In an outbreak associated with a floating card game, the outbreak was propagated because a network of persons engaged in illegal activities was not identified (172). These examples demonstrate the importance of congregate activities beyond work and socially defined high-risk contacts. 6. Clinical evaluation of contacts. All close contacts of patients with pulmonary or laryngeal TB and a positive culture result for M. tuberculosis or a positive sputum AFB smear result should receive a tuberculin skin test unless they have documentation of a previously positive test. Highest priority for tuberculin skin testing and follow-up evaluation should be given to 1) contacts identified as being at highest risk for recent infection on the basis of their history of exposure to the case-patient and risk for transmission and 2) those at high risk for progression from M. tuberculosis infection to TB disease (e.g., infants, young children, HIV-infected persons, and other persons whose medical conditions predispose them to progress from infection to disease). Among children and infants, children aged <3 years are at greatest risk for rapid progression and should receive the highest priority for all preventive interventions for contacts. For the greatest level of protection of children exposed to TB to be ensured, all children aged <5 years should be considered to be high-risk contacts.
Regardless of where the tuberculin skin test is performed (e.g., field visit, TB clinic, or referral site), arrangements should be made to ensure that the skin test is read within 48-72 hours. Contacts who have tuberculin skin test reactions >5 mm and who have no history of a prior positive result are considered at risk for newly acquired M. tuberculosis infection. Those persons should receive a chest radiograph and medical evaluation for TB disease. Adults and children aged >5 years should receive a single posterior-anterior radiograph (4); children aged <5 years should receive both posterior-anterior and lateral radiographs. Voluntary HIV counseling and testing should be offered to contacts, particularly in settings in which HIV seroprevalence among TB patients is high (e.g., >1%). The local epidemiology of TB, HIV infection, and TB/HIV coinfection also may be used as a basis for the decision. If resources are limited, and if local data indicate that HIV infection contributes only minimally to the TB problem (i.e., the HIV seroprevalence of contacts is likely to approach the 0.1% seroprevalence of the general U.S. population), then the highest priority for voluntary HIV counseling and testing should be assigned to contacts of HIV-infected persons with TB and those who have identified risk factors for HIV (303).
Contacts who have a documented prior positive tuberculin skin test and who are not known or likely to be immunocompromised generally do not require further evaluation unless they have symptoms suggestive of TB disease. However, candidates for treatment of LTBI on the basis of other criteria (4) should first receive a medical evaluation, including a chest radiograph, to exclude TB. Contacts with a negative tuberculin skin test should be retested approximately 8-12 weeks after the first test unless the initial skin test was performed >8 weeks after the contact's last exposure to the index patient. Every 3 months, all contacts with negative skin test results who remain in close contact with an infectious patient should receive a repeat tuberculin skin test and, if symptoms of TB disease are present, a chest radiograph. A contact whose repeated test is positive (>5 mm) should receive a chest radiograph if one has not been taken recently. If the radiograph is normal, the contact should be evaluated for treatment of LTBI; if it is abnormal, the patient should be evaluated for TB disease or other cause of the abnormality.
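The retesting rule in the preceding paragraph reduces to a simple date calculation. The sketch below is one possible reading of that rule; the function names are invented for illustration, and clinical judgment, not a formula, governs actual follow-up.

```python
from datetime import date, timedelta

def retest_indicated(first_test, last_exposure):
    """A repeat skin test is indicated unless the initial (negative) test was
    placed >8 weeks after the contact's last exposure to the index patient."""
    return first_test - last_exposure <= timedelta(weeks=8)

def retest_window(first_test):
    """Approximate retest window stated in the text: 8-12 weeks after the first test."""
    return first_test + timedelta(weeks=8), first_test + timedelta(weeks=12)

# Example: first test placed ~1 week after last exposure, so a retest is due.
first = date(2005, 3, 1)
if retest_indicated(first, last_exposure=date(2005, 2, 20)):
    print("Retest between", *retest_window(first))
```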
TB-control programs should find and evaluate all persons who have had sufficient contact with an infectious TB patient to become infected. Contacts at high risk (e.g., infants, young children, and HIV-infected persons) should be identified and evaluated rapidly to prevent the onset of serious, potentially life-threatening complications (e.g., TB meningitis). In certain jurisdictions, legal measures have been put in place to ensure that contact evaluation and follow-up occur (304). The use of existing communicable disease laws should be considered for contacts who fail to comply with the examination requirements. All contacts should be assessed routinely for obstacles to their participation in the evaluation process. Any structural barrier that impedes the ability of the patient to access services (e.g., inconvenient clinic hours or location, work or family obligations, and lack of transportation) should be addressed. Consideration can then be given to expanding the investigation to include contacts at lower risk for infection. In general, the contact investigation need be expanded only if excessive transmission is detected, on the basis of the following criteria: 1) secondary cases of TB are identified in contacts; 2) documented skin test conversions exist; and 3) comparison of skin test positivity among contacts with available data on the baseline prevalence of skin test positivity in the population indicates the probability of transmission. When a contact investigation is expanded, resources should continue to be directed to persons identified as being at greatest risk. In any case, the total contact-tracing process should be completed <3 months after initiation of the investigation, unless evidence of transmission requires further expansion of testing.
# Data management and use in decision-making.
Maintenance of data is crucial to all aspects of the contact investigation. Protocols should be developed to maximize the efficiency of the process, given available resources. Data should be collected for cases and contacts by using standardized forms (paper or electronic) with standard definitions and formats, according to national guidelines (305). Data elements should mirror those collected by the states and CDC, but individual jurisdictions may elect to expand the data elements.

# Evaluation.

Contact investigation steps should be adequately documented, so the process can be monitored and evaluated. National performance measures for TB control stipulate that programs should complete treatment of LTBI among 61% of contacts of infectious TB cases (Table 4). Additional parameters should also be tracked and evaluated. Programs should determine whether the indications given previously for conducting a contact investigation are applied to all reported cases. In addition, for each TB case that is investigated, the number of contacts identified should be recorded. For each contact identified, outcomes to monitor include 1) whether the contact evaluation took place (including placing and reading the first and second tuberculin skin tests, if applicable) and was completed and 2) whether the recommended protective interventions (including screening for TB disease, treatment for LTBI, and prophylaxis within the window period) were offered, accepted, started, and completed. Results of the evaluation should be aggregated and recorded for stipulated intervals of time, as follows: 1) among identified contacts, the number and percentage that were referred for evaluation; 2) among those referred, the number and percentage that completed evaluation; 3) among those evaluated, the number and percentage eligible for treatment of LTBI; and 4) among those eligible, the number and percentage that started and completed treatment. Surveillance of individual contacts is not conducted routinely in the United States. However, CDC collects aggregate data on the outcomes of contact investigations from state and local TB control programs through the Aggregate Report for Program Evaluation. Routine collection and review of these data can provide the basis for evaluation of contact investigations for TB control programs.
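The aggregate measures stipulated above form a simple cascade. The following sketch shows one way a program might compute the required proportions from raw counts; the function and field names are hypothetical and do not correspond to the actual schema of the Aggregate Report for Program Evaluation.

```python
# Hypothetical sketch of the contact-evaluation cascade described above.
# Field names are illustrative, not the Aggregate Report's actual schema.

def cascade_report(identified: int, referred: int, evaluated: int,
                   eligible: int, started: int, completed: int) -> dict:
    """Compute the stipulated proportions for a reporting interval."""
    def pct(part: int, whole: int) -> float:
        return round(100 * part / whole, 1) if whole else 0.0

    return {
        "referred_of_identified_%": pct(referred, identified),
        "evaluated_of_referred_%": pct(evaluated, referred),
        "eligible_of_evaluated_%": pct(eligible, evaluated),
        "started_of_eligible_%": pct(started, eligible),
        "completed_of_eligible_%": pct(completed, eligible),
    }

# Example: 120 contacts identified, 110 referred, 95 evaluated,
# 40 eligible for treatment of LTBI, 35 started, and 26 completed.
print(cascade_report(120, 110, 95, 40, 35, 26))
```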
# Education and training for contact investigations.
The education needs for all aspects of the investigation process (including medical abstraction, patient interviewing, cultural competency, maintaining patient confidentiality, and how to perform tuberculin skin testing) should be continuously assessed. All involved staff should receive ongoing training. CDC-funded regional training centers offer training courses in contact investigation and interviewing skills.

# Confidentiality.

Maintaining confidentiality is a critical component of the contact investigation process. Guidelines for release of confidential information related to conducting contact investigations should be developed. An example of appropriate release of confidential medical information is the release of an index case patient's drug susceptibility test results to a clinician caring for a contact with LTBI or one who has progressed to active TB.
# Contact investigations among special populations.
Contact investigations often are conducted among special populations or in special settings (e.g., homeless shelters, correctional facilities, HIV residential facilities, schools, worksites, and health-care facilities) and among groups such as active drug users and persons living along the U.S.-Mexico border. Guidelines offering specific recommendations for contact investigations under these circumstances have been published (305).
# Outbreak Investigations
Failure to recognize an increase in the occurrence of TB (162) or to expand a contact investigation when needed can result in continued transmission of TB. Missed epidemiologic links among patients with TB can have severe consequences, as evidenced by an outbreak associated with a floating card game in the rural South (172) and an outbreak in Kansas among exotic dancers and their close contacts that occurred during a 7-year period (38).
When TB occurs with high incidence, clusters of cases that have epidemiologic links likely occur constantly but tend to blend into the generally high morbidity (306). In a low-incidence setting, however, clusters of linked TB cases can be identified more readily. Three criteria have been established to determine that a TB outbreak is occurring (162): 1) an increase has occurred above the expected number of TB cases; 2) transmission is continuing despite adequate control efforts by the TB-control program; and 3) contact investigations associated with the increased cases require additional outside help.
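Read together, the three criteria amount to a conjunctive test, as in the following illustrative sketch; the names and inputs are hypothetical, and an actual outbreak determination rests on epidemiologic judgment rather than a mechanical rule.

```python
# Illustrative encoding of the three outbreak criteria (162).
# Inputs are hypothetical; real determinations rest on judgment.

def meets_outbreak_criteria(observed_cases: int,
                            expected_cases: int,
                            transmission_continuing: bool,
                            outside_help_needed: bool) -> bool:
    """Return True only if all three criteria described above are met."""
    increase_above_expected = observed_cases > expected_cases
    return (increase_above_expected
            and transmission_continuing
            and outside_help_needed)
```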
TB outbreaks have occurred in low-incidence areas in which expertise and experience in dealing with such outbreaks might be lacking. Such outbreaks have occurred among different populations and settings, including a young foreign-born child in North Dakota (25); exotic dancers and their contacts in Kansas (38); homeless persons in Syracuse, New York (120); factory workers in Maine (188); and limited, seemingly unrelated clusters of cases that over time perpetuated transmission in Alabama (307).
Identifying an increase above the expected number of TB cases (the first criterion of an outbreak) requires an understanding of the local epidemiology of TB. Detection of a TB outbreak in an area in which prevalence is low might depend on a combination of factors, including recognition of sentinel events, routine genotype cluster analysis of surveillance data, and analysis of M. tuberculosis drug-resistance and genotyping patterns.
When an outbreak is identified, short-term investigation activities should follow the same principles as those for the epidemiologic part of the contact investigation (i.e., defining the infectious period, settings, risk groups, mode of transmission, contact identification, and follow-up). However, long-term activities require continued active surveillance, M. tuberculosis genotyping, additional contact investigations and related follow-up for additional cases, and continuing education of providers, staff, and patients. Consequently, a plan for long-term support should exist from the outset of the investigation.
A written protocol should be developed. At a minimum, the protocol should outline the outbreak response plan, including indications for initiating the plan, notification procedures, composition of the response team, sources of staffing, plan for follow-up and treatment of contacts, indications for requesting CDC assistance, and a process for evaluation of the outbreak response. The outbreak response plan should also include information on how to work strategically with the media during the public health emergency. CDC offers training packages to assist public health workers in media communications, including emergency and crisis communication.
This training emphasizes pre-event planning, event response activities, and post-event follow-up. Information on public health communication programs is available at http://www.cdc.gov/communication/cdcynergy.htm.
# Targeted Testing and Treatment of LTBI
An estimated 9.5-14.7 million persons in the United States have LTBI (39). Continued progress toward eliminating TB in the United States and reducing TB among foreign-born persons will be impossible without devising effective strategies to meet this challenge. Guidelines on targeted testing and treatment of LTBI have been published (4) and revised (308). Those guidelines include recommendations for diagnosing LTBI and treating infected persons, limiting the possibility of treatment-associated hepatotoxicity, and identifying persons and populations to target for testing. A new diagnostic test for LTBI, QFT-G, has been approved by FDA, and guidelines for its use will be published by CDC. This section outlines a recommended approach to planning and implementing programs for targeted testing and treatment of LTBI to create an effective public health tool for communitywide prevention of TB.
Targeted testing and treatment of persons with LTBI is not a new concept for the prevention of TB in the United States (309). The effectiveness of treating LTBI among populations at high risk has been established in clinical trials (285), but this intervention has not been proven to have an impact on the incidence of TB in the United States. Theoretically, the epidemiologic impact would be considerable if cases of TB in a population were largely the result of progression of LTBI and if all persons at high risk with latent infection could be identified and treated successfully. Practically, those circumstances rarely exist. In the United States, the effectiveness of targeted testing and treatment of LTBI as a public health measure has been limited by concern for the side effects of treatment (notably hepatotoxicity) (310), poor acceptance of the intervention among health professionals (311), and poor adherence among patients to the lengthy course of treatment (45,312).
Two approaches exist to increasing targeted testing and treatment of LTBI. One is to promote clinic-based testing of persons who are under a clinician's care for a medical condition (e.g., HIV infection or diabetes mellitus) that also confers a risk for acquiring TB. This approach, which depends on a person's risk profile for TB and not on the local epidemiology of the disease, requires education of health-care providers and depends ultimately on their initiative. Although difficulties exist in quantifying and evaluating its effectiveness, this approach could conceivably become a useful tool to reduce the incidence of TB among foreign-born and other persons at high risk because they can be accessed conveniently where they receive primary health-care services. The other approach is to establish specific programs that target a subpopulation of persons who have a high prevalence of LTBI or who are at high risk for acquiring TB disease if they have LTBI, or both. This approach presumes that the jurisdictional TB-control agency has identified the pockets of high TB risk within its jurisdiction through epidemiologic analysis and profiling (313)(314)(315)(316). Those high-risk pockets might consist of foreign-born, homeless, or HIV-infected persons, or they might be geographic regions (e.g., a neighborhood within a city or town) or specific sites (e.g., a homeless shelter or an HIV-housing facility).
The epidemiologic profile should include an assessment of the risk for TB in the population or at the site, the ease of access to the population or site, and the likelihood of acceptance of and adherence to targeted testing and treatment. To facilitate this assessment, populations at high risk may be separated into three tiers (Box 6). Assignment of groups to these three tiers is based on six criteria: 1) incidence of TB; 2) prevalence of LTBI; 3) risk for acquiring TB disease if the person is infected with M. tuberculosis; 4) likelihood of accepting treatment for LTBI and adhering to it; 5) ease of access to the population; and 6) in a congregate setting, the consequence of transmission of M. tuberculosis. Tier 1 is made up of well-defined populations at high risk that can also be conveniently accessed and followed, either in locations such as clinics or community health centers, prisons, or other congregate living sites or through mandatory registration. Persons in this tier often have a high prevalence of TB and LTBI (immigrants and refugees with Class B TB notification status), an increased risk for TB disease if infected with M. tuberculosis (persons with HIV infection), or both (certain homeless and detained populations). The consequences of the spread of TB in congregate settings increase the necessity of preventive action. Location-based, high-risk communities in Tier 1 are, for the most part, readily identifiable and easily accessible; often have their own resources; and generally offer access for a long enough period to permit completion of treatment for LTBI. These populations should be the first priority for targeted testing programs.
# BOX 6. Priority subpopulations and sites for targeted testing and treatment of latent tuberculosis (TB) infection
Persons enrolled in substance-abuse treatment centers may be considered transitional between Tier 1 and Tier 2, depending on local epidemiologic and demographic factors. Substance abusers might have a high prevalence of LTBI. Injection drug users also might have an increased risk for acquiring TB if they are infected with M. tuberculosis and are at increased risk for HIV infection (317). Access and factors related to acceptance and completion of therapy also might vary by location. Typically, substance abuse treatment centers that include long-term inpatient treatment or regularly scheduled appointments (e.g., methadone treatment centers) are the best choices for intervention because ease of ongoing access allows sufficient time for completion of therapy. Voluntary HIV counseling and testing should be offered routinely as part of any targeted testing program among this population.
Populations in Tier 2 also include identifiable and accessible populations made up of persons at high risk, but the distinguishing characteristic is that obtaining satisfactory rates of completion of treatment for LTBI might be difficult because of dispersal of the population throughout a larger community or a brief duration of residency in congregate settings. For example, in Atlanta, Georgia, after local epidemiology of TB was analyzed, community sites for targeted testing and treatment of LTBI of residents of high-risk inner-city areas were identified (184). Sites of access included outpatient areas of the public hospital, the city jail, clinics serving homeless persons, and neighborhoods frequented by substance abusers. Although 65% of the targeted population that had a tuberculin skin test placed returned to have the skin test read, only 20% of those with an indication for treatment of LTBI completed a course of therapy; this represented 1% of persons who underwent targeted testing.

Tier 3 consists of persons born in countries with a high incidence of TB or U.S.-born persons in racial/ethnic minority populations with high prevalence of LTBI who do not necessarily have an increased risk for progressing to TB disease. Eventually, the control of TB among foreign-born persons and progress toward elimination of TB in the United States depend on achieving greater success in preventing TB among populations at high risk by widespread targeted testing and treatment of LTBI in the public and private medical sectors. However, establishing successful targeted testing and treatment programs for foreign-born persons who are not found in Tier 1 or Tier 2 settings is challenging. Obstacles include the limitations of the tuberculin skin test to differentiate between reactions attributable to BCG or infection with M. tuberculosis, the prevalent belief among a substantial number of foreign-born persons that BCG vaccination is the cause of a positive test for M. tuberculosis infection and is also protective against TB disease, language and cultural barriers, barriers in access to medical care, and difficulties in providing outreach and education.
Typical Tier 3 populations are new refugee and immigrant groups that are not yet assimilated into U.S. society. Such populations might be unaware of their TB risk, usually lack ready access to health-care services, and might have strong cultural understandings about TB that are at variance with those that guide TB-control activities in the United States. TB-prevention activities in this kind of community are highly cost-intensive (221). Engaging such communities is a challenging task.
Community-based TB prevention for Tier 3 populations requires a partnership between the jurisdictional health department and the affected community. The community should gain an understanding of the TB problem as it relates to them and should participate in the design of the intervention. Community education is essential for this approach to succeed. The target population should be involved in the design and implementation phases of the intervention, interventions should be developed within the cultural context of the targeted population, and intermediate goals or benchmarks should show the population that program activities are achieving success. For example, in Los Angeles, California, the public health TB program contracted with community-based organizations to screen and provide treatment for LTBI to persons at risk in Latino and Asian neighborhoods and at schools teaching English as a second language (249). In Cambridge, Massachusetts, a coalition of Haitian community groups identified TB education as an issue for their community; strategies to achieve this goal included development of a videotape written and produced for viewing in Haitian barbershops and beauty salons in the community, a lottery, and measures for evaluation in terms of knowledge and future access to care (S. Etkind, Massachusetts Department of Health, personal communication, 2002).
For communities in Tier 3, TB is only one (and often not the most important) of multiple medical and public health needs. A broad approach should be adopted that combines TB prevention with other activities to improve health status. Certain Tier 3 populations have achieved sufficient self-identity and development to establish access to health care through a community health center, individual medical providers, or clinics. Those communities that have an already established route of access to health care have an infrastructure in place to establish programs for targeted testing and treatment of LTBI. Obstacles to overcome often include lack of medications and chest radiographs, the need for a system to track patients who do not return for monthly appointments, and lack of capacity to evaluate the program.
Programs for population-based targeted testing and treatment for LTBI often have been conducted by public health agencies through TB-control programs. However, recent studies have also described the establishment of such programs in nonpublic health venues. Promising results, in terms of access to persons at high risk and completion of treatment of LTBI, have been achieved from nontraditional sites, including syringe exchanges (318), jails (256), neighborhood health clinics (319), homeless shelters (320), and schools (321,322). This trend indicates a widening interest in this means of preventing TB and is possibly influenced by the emergence of community-oriented primary care (241,323), which places primacy on interventions for specific patients that help prevent disease and preserve the health of the entire population from which these patients are drawn.
As programs move from Tier 1 to Tier 2 and Tier 3 populations, the complexity of the effort and the cost of the program will increase. Also, because persons in Tier 3 populations generally have a lower risk for progression from LTBI to TB disease, the effectiveness and impact of a program will be lower than those of efforts directed toward Tier 1 and Tier 2 populations. Whatever population is selected or strategy is employed for the targeted testing project, programs should systematically evaluate the activity to ensure the efficient use of resources. Process, outcome, and impact indicators should be selected and routinely monitored by the program.
For purposes of monitoring and evaluation, activities associated with targeted testing and treatment for M. tuberculosis infection can be divided into three phases: the testing itself, the medical evaluation of persons with positive test results, and the treatment of those persons with LTBI. Performance indicators should be selected for each phase. For the testing phase, indicators include the number of persons at high risk identified and the number and proportion of those that were actually tested. Among those tested, the number and proportion that had a positive result for M. tuberculosis infection should be tracked. Useful indicators for the medical evaluation phase include the proportion of persons with a positive test result who completed a medical evaluation and the number and proportion that were determined to have TB disease. Indicators for the treatment phase include the proportion of eligible persons starting treatment for LTBI and the number and proportion that completed treatment. Reasons for failure to complete treatment (e.g., adverse drug effects, loss of interest, and loss to follow-up) should be monitored. Costs should be measured for each phase of the project. The cost per person with LTBI completing treatment provides a measure of the relative efficiency of the program. Finally, the impact of the program can be gauged by estimating the number of cases of TB prevented, which depends on the number of persons completing treatment and the estimated risk for progressing to TB disease.
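A minimal sketch of these phase indicators and the impact estimate follows. The progression-risk and treatment-efficacy parameters are hypothetical placeholders, not recommended values; a program would substitute locally appropriate estimates.

```python
# Minimal sketch of the three-phase indicators and impact estimate
# described above. The risk and efficacy defaults are hypothetical
# placeholders, not recommended values.

def program_summary(tested: int, positive: int, evaluated: int,
                    tb_disease: int, eligible: int, started: int,
                    completed: int, total_cost: float,
                    progression_risk: float = 0.05,   # assumed lifetime risk
                    tx_efficacy: float = 0.9) -> dict:  # assumed efficacy
    def pct(part: int, whole: int) -> float:
        return round(100 * part / whole, 1) if whole else 0.0

    return {
        # Testing phase
        "positive_of_tested_%": pct(positive, tested),
        # Medical evaluation phase
        "evaluated_of_positive_%": pct(evaluated, positive),
        "tb_disease_found": tb_disease,
        # Treatment phase
        "started_of_eligible_%": pct(started, eligible),
        "completed_of_started_%": pct(completed, started),
        # Efficiency and estimated impact
        "cost_per_completed_tx": round(total_cost / completed, 2)
                                 if completed else None,
        "est_cases_prevented": round(completed * progression_risk
                                     * tx_efficacy, 1),
    }
```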
Surveillance of persons with LTBI does not routinely occur in the United States. However, CDC has recently developed a national surveillance system to record serious adverse events (i.e., hospitalization or death) associated with treatment of LTBI. Surveillance of these events will provide data to evaluate the safety of treatment regimens recommended in current guidelines (4,324).
# Control of TB Among Populations at Risk
This section contains recommendations for measures to control and prevent TB in five populations (children, foreign-born persons, HIV-infected persons, homeless persons, and detainees and prisoners in correctional facilities). Each of these populations occupies an important niche in the epidemiology of TB in the United States. Individual members of each population have been demonstrated, on the basis of their membership in the population, to be at higher risk for exposure to M. tuberculosis or for progression from exposure to disease, or both. Furthermore, nationwide surveillance and surveys (27,118-120,127,136,139,150,198,295,315,325,326) indicate that the epidemiology of TB in these populations is similar from community to community, which suggests that the recommended control measures are subject to generalization and can be applied more or less uniformly throughout the United States.
Children, foreign-born persons, HIV-infected persons, homeless persons, and detainees and prisoners should not be assumed to be the only populations at high risk for TB, nor are homeless shelters and detention facilities the only settings in need of enhanced TB-control strategies. Local surveillance and surveys frequently have identified populations and settings of high TB risk and transmission that required the formulation of specific control measures (122,137,152,313,315,316,327,328). This is the primary reason why state and local surveillance should be conducted to develop a clear understanding of the epidemiology of TB at the jurisdictional level.
Most important, the concept of identifying and targeting populations and settings at high risk should be viewed as a dynamic rather than as a static process. Such populations emerge and recede in importance at the local, state, and national levels. For example, foreign-born persons received little attention in the 1992 edition of this statement (6). A population whose risk for TB is now being recognized and delineated is U.S.-born non-Hispanic blacks, who account for approximately 25% of TB morbidity in the United States and who have TB rates approximately eight times those of whites (329,330) (Table 2). CDC and collaborating public health agencies in Chicago, Illinois, and the states of Georgia and South Carolina are exploring new strategies to address this problem (331).
# Control of TB Among Children and Adolescents
The occurrence of TB among infants and young children indicates recent transmission of M. tuberculosis and often the presence in the community of an unidentified adult with infectious TB. Thus, a case of TB in a child is a sentinel health event that signals a public health breakdown (197). Also, certain features of TB among children mandate special considerations in case detection and case management, contact investigations, and targeted testing and treatment of LTBI. For example, if LTBI results from exposure to TB in infancy and early childhood, a substantial risk exists for rapid progression to TB disease, including the development of potentially lethal forms of TB (198,294,325). The recommendations in this statement for control of TB among children and adolescents should receive high priority in all state and community TB-control plans.
# Basis for Recommendations for TB Control Among Children and Adolescents
Case detection and primary prevention strategy: contact investigation of adults with pulmonary TB. The majority of infants and children who acquire TB disease do so within 3-12 months of contracting M. tuberculosis infection. Infants and toddlers aged <3 years are especially prone to the rapid progression from infection to disease, and they often acquire severe forms of TB, including meningitis and disseminated disease. The most important step to detect and prevent TB among children is the timely identification and effective treatment of adults with active TB. The cornerstone of TB prevention among children is high-quality contact investigations of suspected cases of pulmonary TB in adults, because 20%-40% of pediatric cases of TB could have been prevented if contact investigation had been more timely and thorough (198,293,325).
Contact investigation of adult pulmonary TB cases is crucial to the detection, control, and prevention of pediatric TB and its complications (332,333). The yield of detection of TB and LTBI is high, with an average of 50% of childhood household contacts having LTBI or TB disease (31,60). Because as many as 50% of children with TB are asymptomatic despite abnormal radiographic findings, contact investigation leads to earlier discovery of TB among children, better treatment outcomes, and fewer complications (326). Also, children with LTBI or TB disease identified through contact investigation are more likely to receive DOT at the same time as the source-case, which increases adherence to therapy.
Another benefit of contact investigations is the ability to identify and treat infants and young children who have been exposed to a person with a contagious case of TB and who might be infected but nevertheless have a negative tuberculin skin test (the role of QFT-G for diagnosis of LTBI in children aged <17 years has not been determined). A tuberculin skin test might take 2-3 months after infection to become positive in an infant or toddler. However, the incubation period for severe TB, including meningitis and disseminated disease, might be only 4-6 weeks. Failure to give empiric treatment for LTBI to exposed infants and young children with negative tuberculin skin test results, particularly those aged <3 years, might therefore result in rapid acquisition of disease (295,325).
Case management. The record for adherence to treatment for TB is no better for children than it is for adults (333). Children with TB might live in socially disorganized or disadvantaged homes and receive care from multiple adults. A chaotic environment can lead to a poor understanding of TB and its treatment and decreased adherence. DOT is effective in TB treatment for children and adolescents. However, almost 10% of children receiving DOT experience gaps in treatment that require extensions of therapy (326). Intensive case management, including use of incentives and enablers, is a crucial element of a TB-treatment plan for children.
Contact investigation of cases of TB among children and adolescents. Contact investigations for children with suspected TB are generally conducted to identify the adult source-case. Identifying a source-case serves to establish the diagnosis of TB in the majority of children and, if the sourcecase is culture-positive for M. tuberculosis, to determine the likely drug susceptibility pattern of the infecting strain of M. tuberculosis in the child.
Even with optimal medical evaluation, M. tuberculosis can be isolated from <50% of children with clinically suspected TB. While microbiologic testing determines the diagnosis of TB for the majority of adults, positive culture results often are lacking for children. In the majority of cases, the diagnosis of pediatric TB is established by the triad of 1) a positive tuberculin skin test result, 2) either an abnormal chest radiograph or physical examination or both, and 3) discovery of a link to a known or suspected case of contagious pulmonary TB. Because culture yields from children with TB are low, determining the drug susceptibility pattern from the source-case isolate often is the only way to determine optimal treatment for children with either LTBI or TB disease (334,335).
Because TB among infants and young children usually occurs within weeks to months of contracting infection with M. tuberculosis, having a child with disease is a marker of recent transmission from someone in the child's environment. The source-case, often a parent or other caregiver (336-338), might not have been identified as having TB by the time the child becomes ill. Consequently, parents and other adults who are close contacts of children hospitalized with TB should themselves be evaluated for TB disease as soon as possible; doing so serves both as a case-detection tool and as a means of preventing nosocomial transmission of M. tuberculosis (339). A chest radiograph should be performed on these family members to exclude pulmonary TB; certain centers have implemented this recommendation by requiring that adults who accompany a child have a chest radiograph performed and interpreted immediately while at the health-care facility (339). Other adult family members or friends also should be required to show evidence of a normal chest radiograph, performed by the health department or other provider, before being allowed to visit the child. Because TB in the child, not LTBI, is the reliable marker of recent infection, chest radiograph screening of accompanying adults is not necessary if the child has LTBI without TB disease.
Associate investigations (i.e., efforts to identify and evaluate household contacts of a child with LTBI to identify the infectious person responsible for the child's infection) are often performed as part of the evaluation of a child with LTBI (5,17,340-343). The usefulness of this approach depends on the criteria for placing skin tests on children. If testing of children at low risk is undertaken, associate investigations will be costly, have a low yield, and divert TB-control resources from more important activities. Associate investigations of children at high risk, however, usually detect a limited number of persons with TB but do identify substantial numbers of other persons with LTBI who are candidates for treatment (341-343).
Targeted testing and treatment of LTBI. In the 1950s and 1960s, child-centered TB control activities were based on periodic testing of all children for LTBI (344). However, as the number of TB cases dropped, the disease became concentrated among persons at high risk in particular subpopulations. Consequently, the majority of U.S. children have negligible risk for acquiring LTBI. Among children at low risk, the majority of positive tuberculin skin test results are false positives caused by nonspecific reactivity or exposure to nontuberculous mycobacteria in the environment (344). False-positive results lead to unnecessary health-care expenditures and anxiety for the child, family, school, and HCWs (345). Thus, while the testing of children with an expected high prevalence of LTBI is desirable, mass testing of children with a low prevalence of LTBI is counterproductive and should not be undertaken.
The optimal approach is to perform tuberculin skin testing only on those children with specific risk factors for LTBI. A questionnaire that assesses risk factors for TB can be used successfully in clinics and private offices to identify children at risk for LTBI (237,346-348); this approach can also be used to identify at-risk college students (349). The screening tool is the questionnaire; only those children whose answers indicate that they are at risk for LTBI should receive a tuberculin skin test. Use of a questionnaire can also address issues related to discrimination; all children in a setting such as a school or child-care center can be screened easily, but only those with identified risk factors for LTBI should receive a tuberculin skin test, thereby diminishing the number of false-positive results.
No single questionnaire has been validated for use in all settings and for all ages of children. Factors that have correlated highly with risk for LTBI among children in more than one study include 1) a previous positive tuberculin skin test result; 2) birth in a foreign country with a high prevalence of TB; 3) nontourist travel to a high-prevalence country for >1 week; 4) contact with a person with TB; and 5) presence in the household of another person with LTBI. Questions pertaining to a locally identified population with a high rate of TB should be included in a questionnaire, but validation of these questions is difficult.
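A minimal questionnaire-based screen built on the five factors listed above might look like the following sketch; the question wording and names are hypothetical, and, as noted, no single instrument has been validated for all settings and ages.

```python
# Hypothetical sketch of a questionnaire-based screen using the five
# risk factors listed above. Wording is illustrative only; no single
# questionnaire has been validated for all settings and ages.

RISK_QUESTIONS = [
    "Has the child ever had a positive tuberculin skin test result?",
    "Was the child born in a country with a high prevalence of TB?",
    "Has the child traveled (non-tourist) to a high-prevalence country "
    "for more than 1 week?",
    "Has the child had contact with a person with TB?",
    "Does another person in the household have latent TB infection?",
]

def should_place_skin_test(answers: list[bool]) -> bool:
    """Place a tuberculin skin test only if any risk factor is present."""
    return any(answers)

# Example: only the travel question is answered "yes".
print(should_place_skin_test([False, False, True, False, False]))  # True
```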
In certain treatment programs for LTBI among children in the United States, the completion rate associated with 6-9 months of self-supervised isoniazid therapy is only 30%-50%. Because LTBI among young children might progress rapidly to TB disease, DOT is recommended. Children with LTBI who are most likely to benefit from DOT because of their high risk for rapid progression of infection to disease include contacts of persons with recently diagnosed cases of pulmonary TB, infants and young children, and children with immunologic deficiencies, especially HIV infection.

# Control of TB Among Foreign-Born Persons

The increase in cases of TB among foreign-born persons has been attributed to at least three factors (350). First, the number of persons entering the United States from other countries in which TB occurs with high incidence (44) now accounts for >75% of the immigrant flow (116,278); during 1994-2003, an estimated 80%-86% of immigrants admitted to the United States came from high-incidence countries (351). Second, foreign-born persons are subject to cultural and linguistic barriers that might affect health-seeking behavior and access to medical care, resulting in delays in diagnosis and difficulty in understanding and completing treatment (18,19,194,325). Third, these barriers, which have implications for the treatment, control, and prevention of TB among foreign-born persons, have not been sufficiently appreciated and addressed in TB-control program planning in the United States.
Precise information is lacking to assist in the identification of foreign-born persons who have an elevated risk for acquiring TB during residence in the United States. Immigrants entering either Canada or the United States have a risk for TB during their early years of residence that approximates that of residents of the country of birth (115,352,353). Over time, the risk declines and approaches that of residents of the host country. Consequently, recent guidelines have designated immigrants from countries with a high prevalence of TB who have resided in the United States <5 years as foreign-born persons at high risk (4).
Criteria for characterizing countries as having a high prevalence of TB have not been developed, and no consensus exists on which countries should be designated as having a high prevalence of TB. In rank order, the 14 countries listed most frequently as the country of origin of foreign-born persons with reported TB in the United States are Mexico, the Philippines, Vietnam, India, China, Haiti, South Korea, Somalia, Guatemala, Ecuador, Ethiopia, Peru, El Salvador, and Honduras; these 14 countries accounted for 76% of cases among foreign-born persons during 1999-2002 (14). Estimated incidence rates of TB in these countries in 2002 ranged from 33/100,000 population (Mexico) to 406/100,000 population (Somalia) (354). However, the country of origin of foreign-born persons with TB can vary substantially among localities within a state and between states and regions across the United States.
State and local TB control programs should develop their own profiles of risk for TB among foreign-born persons as part of the jurisdiction's overall epidemiologic analysis of TB and then define which immigrant and foreign-born populations in their areas should be considered as being at high risk for TB. Data sources for TB programs to use in making this determination include 1) WHO data on the estimated incidence of TB in countries of origin (354); 2) local epidemiologic and surveillance data (151,152,313-316,355); 3) published guidelines (4,279) and other sources of data (115,116); 4) qualitative information on refugee and immigrant movement into the jurisdiction; and 5) availability of resources to establish control and prevention measures targeted toward the foreign-born population. The principles and priorities of TB control among foreign-born persons at high risk are not different from those for control of TB among U.S.-born persons (Box 4). However, for the reasons given previously, TB control among foreign-born persons at high risk might present challenges requiring targeted strategies specific to that population (152,356).
# How Foreign-Born Persons Enter the United States
Foreign-born persons enter the United States legally through different official channels (Table 7). As a condition of entry, persons migrating as immigrants, refugees, and asylees are required to be screened for diseases of public health significance, including TB. Persons entering in the nonimmigrant category do not require preentry screening. Persons who enter the country without legal documentation are referred to as unauthorized aliens.
During 1992-2002, an estimated 380,000-536,000 persons entered the United States annually as immigrants, refugees, or asylees (Table 8). In 2002, among the estimated 516,000 persons in those categories, 86.6% were from countries with high incidence of TB. Immigrants, refugees, and asylees constitute only a fraction of foreign-born persons who enter the United States each year; the majority (20-35 million persons) enter in one of the nonimmigrant subcategories (Table 8). The majority of entering nonimmigrants are tourists or business travelers who spend only a short time in the United States. However, an estimated 850,000-1.9 million workers, students, and other visitors and their families might reside in the United States for multiple years (Table 8).
A nonimmigrant, refugee, or asylee residing in the United States who meets the eligibility requirements and applies for a change in visa status to that of a lawful permanent resident should undergo required health screening assessment by a civil surgeon. During 2002, of the 679,305 persons who adjusted their immigration status under this program, 536,995 (79%) were from countries with high incidence of TB (238). In addition, an estimated 7 million unauthorized aliens resided in the United States in January 2000, and during 1990-1999, the unauthorized alien population increased annually by approximately 350,000 persons (357).
# Current Requirements for TB Screening of Immigrants
U.S. immigration law mandates screening outside the United States for applicants designated as immigrants who are applying for permanent residence status and for applicants designated as refugees or asylees (Table 7). The purpose of mandated screening is to deny entry to persons who have communicable diseases of public health significance or physical or mental disorders associated with harmful behavior, who abuse or are addicted to drugs, or who are likely to become wards of the state. The current list of infectious diseases of public health significance that are grounds for exclusion includes infectious TB, HIV infection, leprosy, and certain sexually transmitted diseases (358). Worldwide, approximately 400 licensed local physicians, designated as "panel physicians," perform these medical examinations. Panel physicians are appointed by U.S. embassies and consulates that issue visas. CDC is responsible for monitoring the quality of these examinations and for providing technical guidance and consultation for TB diagnosis and treatment.
The TB screening process is a program for active TB case detection designed to deny entry to persons with infectious pulmonary TB (identified by positive sputum AFB smear results). For persons aged >15 years, a brief medical history and a chest radiograph are obtained (Figure 4). If the chest radiograph is considered compatible with pulmonary TB, three sputum specimens are obtained and examined for AFB. Although procedures vary from site to site, smears are usually performed by Ziehl-Neelsen staining and examined with light microscopy. Cultures for M. tuberculosis are not required and are not routinely performed. Persons aged <15 years are evaluated only if they have symptoms that are consistent with TB or are contacts of a person with infectious TB. A test for M. tuberculosis infection is performed, and a chest radiograph is obtained if the test is positive or if the child is suspected to have TB.
Persons with abnormal chest radiographs suggestive of TB and with AFB-positive sputum smear results are classified as having Class A TB, which is an excludable condition for entry into the United States (358). Persons so designated have two options: 1) to complete a course of treatment for TB, including documented negative sputum AFB smears at the end of treatment, at which point they are classified according to their chest radiograph results and may enter the United States; or 2) to receive TB treatment until sputum smear results for AFB convert from positive to negative and then apply for an immigration waiver. A U.S. health-care provider who agrees to assume responsibility for the completion of TB treatment after a person's arrival in the United States should sign the waiver. The waiver is countersigned by a representative of the jurisdictional public health agency of the person's intended U.S. destination. An applicant whose chest radiograph is compatible with active TB but whose sputum AFB smear results are negative is classified as having Class B1 status and may enter the United States. If the chest radiograph is compatible with inactive TB, no sputum specimens are required, and the applicant enters the country with Class B2 status (358).
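The classification sequence described above reduces to a short decision rule, sketched below; this is a hypothetical simplification of the regulatory definitions (358), and the category strings and inputs are illustrative only.

```python
# Hypothetical simplification of the visa-applicant TB classification
# logic described above (358). Names and strings are illustrative only.

def classify_applicant(cxr_active_tb: bool,
                       cxr_inactive_tb: bool,
                       afb_smears_positive: bool) -> str:
    """Assign the screening classification from radiograph and smears."""
    if cxr_active_tb:
        # Radiograph compatible with active TB: three sputum specimens
        # are examined for AFB.
        if afb_smears_positive:
            # Excludable; enter only after treatment or with a waiver.
            return "Class A"
        return "Class B1"  # may enter; follow-up evaluation requested
    if cxr_inactive_tb:
        return "Class B2"  # may enter; no sputum specimens required
    return "no TB classification"
```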
Immigrants with a Class A waiver or with Class B1 or B2 status are identified by CIS at U.S. ports of entry and reported to CDC's Division of Global Migration and Quarantine (DGMQ). DGMQ notifies state and local health departments of refugees and immigrants with TB classifications who are moving to their jurisdiction and need follow-up evaluations. Persons with a Class A waiver are required to report to the jurisdictional public health agency for evaluation or risk deportation. For persons with Class B1 and B2 status, however, the stipulated evaluation visits to the health agency are voluntary.
# Persons Seeking Adjustment of Status After Arrival
Persons seeking to adjust their immigration status after residing in the United States with nonimmigrant visa status should undergo a medical evaluation by one of the approximately 3,000 U.S. medical practitioners designated by DGMQ as civil surgeons. TB screening by civil surgeons is based on tuberculin skin testing; QFT-G is also approved for detecting LTBI. If an applicant seeking adjustment of status has a tuberculin skin test reading of >5 mm, a chest radiograph is required. If the radiograph is compatible with active TB, the person is referred to the jurisdictional public health agency for further evaluation (358). Civil surgeons are also advised that persons with a positive tuberculin test result and no signs or symptoms of TB disease should be referred to public health agencies for evaluation for treatment of LTBI, following ATS/CDC/IDSA guidelines (4,324).
Because data on the outcomes of TB screening of persons seeking to adjust their immigration status are not aggregated or analyzed, only limited information is available. In an evaluation of the screening practices in five U.S. Immigration and Naturalization Service jurisdictions, among 5,739 applicants eligible for screening through tuberculin skin testing, 4,290 (75%) were considered to have been screened appropriately (240). In Denver, Colorado, where health department physicians serve as civil surgeons, 7,573 persons were evaluated for adjustment of status during May 1987-December 1988 (239). Applicants were screened with tuberculin skin testing, chest radiographs, or both. Among 4,840 applicants who had a tuberculin skin test placed, 2,039 (42%) had a reaction >10 mm. Sixteen persons (0.7%) were sputum culture-positive for M. tuberculosis. Therapy with isoniazid was recommended for 1,029 applicants, of whom 716 (70%) completed 6 months of treatment.
# Immigration Status of Foreign-Born Persons with TB
Studies have sought to identify the initial immigration status of foreign-born persons with reported TB. During 1992-1993 in Hawaii, 78% of TB cases occurred among immigrants, 4% among student nonimmigrants, and 4% among nonimmigrant tourists (350); in 14% of cases, the immigration status could not be determined. During 1992-1994 in Seattle, Washington, 58% of TB cases among foreign-born persons who had resided in the United States for <1 year occurred among immigrants or refugees (293); immigration status was not determined among the remaining foreign-born persons. During 1998-2000, a total of 59% of foreign-born persons with TB in Tarrant County, Texas, were immigrants or refugees, 24% were unauthorized immigrants, and 17% were nonimmigrant students and workers (316).
# Assessment of TB Screening Requirements for Immigrants
The priority for immigration screening efforts is to detect cases of pulmonary TB among persons applying for permanent residence in the United States and to prevent the most infectious persons from entering the United States. However, requirements for screening outside the United States do not apply to the majority of foreign-born persons entering the United States because those classified as nonimmigrants and unauthorized immigrants do not undergo screening (Table 7) (277).
Furthermore, a substantial proportion of immigrants with Class B1 (4%-14%) and B2 (0.4%-4%) status, who are allowed to enter the United States with abnormal chest radiographs because their sputum AFB smears were negative on screening outside the United States, are later discovered (on the basis of follow-up evaluations by U.S. public health agencies) to have had active TB at the time of entry (350). This finding has great importance for TB-control activities in certain U.S. jurisdictions.
IOM, NTCA, and CDC have suggested changes in the screening procedures for immigrants, as follows:
- IOM has recommended that testing for M. tuberculosis infection be added as a requirement to the medical evaluation for immigrant visa applicants from countries with high incidence of TB (2).
- IOM has recommended that 1) a Class B4 TB designation be created for persons with normal chest radiographs and positive tuberculin skin tests and that 2) immigrants with B4 status be required to undergo an evaluation for TB and, when indicated, complete an approved course of treatment for LTBI before receiving a permanent residency card.
- CDC has proposed enhancing training and oversight of panel physicians outside the United States and of civil surgeons in the United States to improve the quality of immigration screening (359). CDC is also working to develop an electronic system for notifying jurisdictional public health agencies about the arrival of Class B immigrants.
- NTCA has called for 1) clarification of legal and fiscal issues associated with domestic evaluation and treatment of immigrants; 2) efforts to educate immigrants with Class B1 and B2 status about their responsibilities for follow-up; and 3) operational research to address the cost effectiveness of screening additional categories of immigrants.
- Consideration also should be given to broadening the scope of medical evaluations for immigrants. The costs and benefits of extending the requirement for screening to all visa applicants planning to reside in the United States for >6 months should be examined. Consideration is being given to adding sputum cultures to the sputum AFB smear evaluation of visa applicants who, on the basis of an abnormal chest radiograph, are suspected to have pulmonary TB or who, at least for persons with smear-positive TB cases, are from countries with known high rates of drug resistance.
# TB Control at the U.S.-Mexican Border
The U.S.-Mexican border presents specific challenges to TB control. Four U.S. states (California, Arizona, New Mexico, and Texas) and six Mexican states (Baja California Norte, Sonora, Chihuahua, Coahuila, Nuevo León, and Tamaulipas) comprise the U.S.-Mexican border region, and an estimated 1 million persons cross the border daily. In the six Mexican border states, estimated annual TB incidence is 27.1 cases/100,000 population, compared with 5.1 cases/100,000 population in the United States (359). In 1999, Mexico was the country of origin for 23% of foreign-born persons in the United States with reported TB, and 75% of those cases were reported from the four U.S. border states. In 1996, those same states reported 83% of TB cases among foreign-born Hispanics (360). The high rate of TB at the border, the substantial number of border crossings, the substantial geographic area involved, and the prevalent cultural and linguistic barriers make TB control a challenge in this region.
Recommendations to improve TB control at the U.S.-Mexican border have been published (361). These recommendations include use of a binational case definition and development of a binational registry of TB cases, improvements in clinical care of binational TB patients and close contacts by cross-border case-management strategies, development of performance indicators for these activities, and setting research priorities (361).
# Basis for Recommendations on TB Control Among Foreign-Born Persons
Surveillance. The inability to distinguish imported TB present at the time of entry of foreign-born persons into the United States from domestically occurring disease obscures the progress that certain states and cities have made in TB control. Standardized reporting of new TB cases does not allow separating TB among foreign-born persons that is present at the time of entry from cases that arise during residence in the United States. This is more than a semantic distinction because cases of TB that occur among short-term visitors and workers, students, and unauthorized aliens are counted as U.S. incident cases even though a substantial number are imported (115). Surveys using sputum cultures indicate that 4%-13% of immigrants and refugees with Class B1 status have TB disease at the time of entry (279). TB present at the time of entry is likely to contribute to the higher incidence rates of TB noted among foreign-born persons in the first 2 years after arrival (115). The importance of imported cases and the need to distinguish them from domestic cases has also been demonstrated in the smallpox, polio, and measles eradication efforts in North America.
Case detection. Multiple factors common to the experience of foreign-born persons in the United States might lead to delays in the detection of TB. Preexisting culturally derived beliefs about TB might serve as a disincentive to seek health care when symptoms of TB are experienced (18,279). Also, foreign-born persons wishing to receive a medical evaluation might encounter financial, linguistic, or other barriers to access (19). Once medical services are sought, foreign-born persons are likely to receive their evaluation from certain kinds of health-care providers (e.g., foreign-born physicians or those working in community health centers or hospital EDs) rather than from TB clinics conducted by public health agencies. These challenges to optimal case detection among foreign-born persons will require 1) targeted public education for foreign-born populations at high risk to explain that TB is a treatable, curable disease; 2) better access to medical services, especially for recently arrived immigrants and refugees; and 3) maintenance of clinical expertise in the diagnosis and management of TB among medical practitioners (Box 1).
The TB-screening process for visa applicants (i.e., identification of persons with abnormal chest radiographs) has provided opportunities for active case detection in follow-up evaluations in the United States. Data derived from programs that have sought to identify active TB cases on the basis of positive sputum cultures for M. tuberculosis among immigrants with Class B notification status indicate that 3%-14% of the approximately 6,000 immigrants with Class B1 status who enter the United States each year and 0.4%-4.5% of the 12,000 immigrants with Class B2 status have TB at the time of entry (279). In San Francisco, California, during July 1992-December 1993, of 182 immigrants with Class B1 status who received follow-up evaluations, 27 (14.8%) had active TB, and 134 (73.3%) had inactive TB (362). Among 547 immigrants with Class B2 status, 24 (4.3%) had active TB, and 301 (54.5%) had inactive TB. In California, 3.5% of all persons with a Class B notification status who arrived during January 1992-September 1995 were reported to have active TB <1 year after arrival (277). Recent arrivals with Class B notification status accounted for 38% of all foreign-born persons with TB reported <1 year after arrival. Among 124 immigrants and refugees in Hawaii who were reported during 1992-1993 to have TB <1 year after arrival, 78 (63%) had been classified as having Class B1 status and 17 (14%) as having Class B2 status (350). However, a study from Los Angeles suggested that the visa application process was more effective in identifying cases among persons recently arrived from Southeast Asia than among those from Mexico and Central America (363).
An active Class B1/B2 follow-up program can be relatively cost effective. During October 1995-June 1996, in Santa Clara County, California, 87% of immigrants with Class B status responded to letters inviting them to receive a follow-up evaluation, resulting in a cost of $9.90 to locate one immigrant with Class B1/B2 status and $175.88 to locate one person with TB (364).
Case management. As with case detection, cultural and linguistic differences might impede successful treatment outcomes among foreign-born persons. Case management of persons whose primary language is not English depends on reliable and competent medical translation. Providers and agencies that work with foreign-born patients at high risk should ensure that adequate translation and interpretation services are available. In jurisdictions in which the majority of cases occur among foreign-born persons, providing these services can be costly. For example, in 2000, the Tarrant County Health Department TB Program (Fort Worth, Texas) spent approximately $24,000 on professional translation services (365). Ideally, professional services should be used for translation rather than relying on relatives or family friends (365).
Culturally derived attitudes and beliefs about TB and its treatment can also be impediments to the management of TB among foreign-born persons. Each culture has its own knowledge, attitudes, and beliefs about TB and how it should be treated. For example, in a study that used focus groups to evaluate attitudes regarding TB among Filipino immigrants, participants expressed a belief that TB was extremely contagious (264) and mentioned the associated social stigma and isolation. Although all participants agreed that medical therapy was necessary, participants also trusted the effectiveness of traditional treatments. As more of the burden of TB in the United States is borne by foreign-born persons, the need for health-care providers to understand cultural attitudes toward TB will increase.
Case management is particularly difficult at the U.S.-Mexico border where, until recently, tracking systems for persons who migrated between the two countries were not in place. A new binational system has been established to ensure continuity of care and completion of TB treatment for patients who migrate between the United States and Mexico and to coordinate the referral of patients between the health systems of both countries. The project is now being tested in four U.S.-Mexican jurisdictions (San Diego, California, and Tijuana, Baja California; El Paso, Texas-Las Cruces, New Mexico, and Ciudad Juarez, Chihuahua; Webb and Cameron Counties, Texas, and Matamoros, Tamaulipas; and Arizona and Sonora). If the pilot project proves successful, this binational TB patient referral and information system will likely be expanded to other parts of the United States and Mexico.
Contact investigation. Contact investigations have a particularly high yield when conducted for foreign-born patients. In Seattle, for example, contacts of foreign-born persons with TB were more numerous (6.0 versus 3.4 per case) and substantially more likely to have positive tuberculin skin test results (50% versus 18%) and to be started on treatment for LTBI (40% versus 23%) than were contacts of U.S.-born persons with TB (293). A multicenter survey from around the United States demonstrated that the tuberculin skin test was positive among 71% of foreign-born contacts compared with 32% of all close contacts (31). Although not all foreign-born contacts identified during a contact investigation are recently infected, the majority would nevertheless be considered candidates for treatment of LTBI under current guidelines (4). In addition, a Canadian study indicated that contact investigations were more cost effective than preimmigration screening and postarrival surveillance (276).
Targeted testing and treatment of LTBI. Surveys using molecular epidemiologic methods have consistently demonstrated that less clustering of M. tuberculosis isolates occurs from foreign-born patients than from U.S.-born patients; this has been interpreted as evidence that less person-to-person spread of TB occurs among foreign-born persons in the United States and that the majority of cases of TB among foreign-born persons occur as a result of activation of a latent infection (150-152,356). In fact, one reason for the lack of progress in reducing TB among foreign-born persons might be that insufficient attention has been given to targeted testing and treatment (152), which should be the most applicable prevention strategy for this population, in which TB disease occurs mainly by progression from LTBI.
The success of programs for targeted testing and treatment of LTBI among populations at high risk in the United States has been hampered by limited interest in the intervention on the part of medical practitioners and by poor adherence among patients (51). Among foreign-born persons, these problems are magnified by the lack of access to care and by cultural and linguistic obstacles. Successful models for administering targeted testing and treatment of LTBI among refugees have been published; these models are resource-intensive and require a commitment to working within the population's cultural contexts (202,221). In addition, the use of DOT increases treatment completion rates (366).
Other opportunities for convenient access to foreign-born persons for targeted testing programs include school-based testing of foreign-born students. The majority of foreign-born students remain in the United States long enough to receive targeted testing for LTBI and, if TB is diagnosed, to complete a course of treatment. Screening for TB is required by 61% of colleges and universities: for all students in 26%, for all international students in 8%, and for students in specific academic programs in 47% (367). School-based screening also has been evaluated among younger students (150,322,345). In California, widespread TB screening of kindergarten and high school students yielded a low prevalence of skin test reactors and a limited number of cases of TB, but foreign-born students were >30 times more likely than U.S.-born students to have the infection (345). In a cost-benefit analysis, screening all students would be expected to prevent 14.9 cases/10,000 children screened, whereas targeted testing would prevent 84.9 cases/10,000 screened and would be less costly (345).
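The trade-off reported in that cost-benefit analysis can be illustrated with a short calculation. In this sketch, the per-10,000 yields come from the figures quoted above; the cohort size is a hypothetical assumption.

```python
# Short comparison of the two school-screening strategies using the
# cases-prevented yields quoted from the cost-benefit analysis (345).
# The number of students screened is a hypothetical input.

universal_yield = 14.9 / 10_000   # cases prevented per student screened (all students)
targeted_yield = 84.9 / 10_000    # cases prevented per student screened (targeted)

students_screened = 50_000        # hypothetical cohort

print(f"Universal screening: {universal_yield * students_screened:.0f} cases prevented")
print(f"Targeted testing:    {targeted_yield * students_screened:.0f} cases prevented")
print(f"Yield ratio: {targeted_yield / universal_yield:.1f}x per student screened")
```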
# Control of TB Among Persons with HIV Infection
HIV and M. tuberculosis interact in ways that tend to worsen both diseases among coinfected persons (368).
When a person with HIV infection is exposed to a patient with infectious TB, the risk for acquiring TB disease soon after that exposure is markedly increased (369). In outbreaks in which the start of exposure could be determined, HIV-infected persons acquired active TB in as little as a month after exposure to a person with infectious TB (136). HIV coinfection is also a highly potent risk factor for progression from LTBI to TB (44,46,370). Persons with LTBI and HIV coinfection have a risk for progressing to TB disease of approximately 10%/year (317,371,372), which is 113-170 times greater than the risk for a person with LTBI who is HIV-seronegative and has no other risk factors (4,44).
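To illustrate what an approximately 10%/year progression risk implies over time, the following sketch compounds the annual risk over several horizons. It assumes, for simplicity, a constant annual risk applied independently each year; this is an approximation for illustration, not a published model.

```python
# Illustration of what a roughly 10%/year risk for progression from LTBI
# to TB disease implies cumulatively for an HIV-coinfected person.
# Assumes a constant annual risk compounded independently across years
# (a simplifying assumption).

annual_risk = 0.10

for years in (1, 2, 5, 10):
    cumulative = 1 - (1 - annual_risk) ** years
    print(f"{years:>2} years: cumulative risk of TB disease = {cumulative:.0%}")
```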
On a global level, HIV infection has had a substantial effect on the epidemiology of TB. Areas of the world most heavily affected by the global epidemic of HIV/AIDS (e.g., sub-Saharan Africa) have also sustained increases in the incidence of TB (44,46,373). TB is the most common infectious complication and the most common cause of death among persons with HIV/AIDS in places where the incidence of both diseases is high (374). In the United States, HIV infection has been associated with TB outbreaks in institutional settings, including health-care facilities (53), correctional facilities (37), and homeless shelters (33).
Before the advent of highly active antiretroviral therapy (HAART) in the mid-1990s, HIV infection caused a progressive decline in immune competence and death. However, HAART, combination antiretroviral therapy that includes protease inhibitors, has prolonged survival among persons with HIV infection (375-377). The introduction of HAART has also decreased the incidence of TB among HIV-infected persons: an 80% decrease in risk for TB has been demonstrated among HIV-infected persons receiving HAART (378).
With the declining incidence of TB in the United States since 1992, the incidence of HIV infection among persons with TB also has decreased. This decline is likely attributable to increased understanding of the biologic interactions between the two pathogens, which has led to more targeted TB-control efforts, and to the introduction of HAART. Another factor is improved TB infection control in health-care facilities, because HIV-infected persons were particularly affected by health-care-associated transmission of M. tuberculosis (53).
HIV infection was a prominent cause of the 1985-1992 TB resurgence in the United States, especially the emergence of health-care-associated TB (including multidrug-resistant disease). That fact, along with the knowledge that the global epidemics of HIV infection and TB are continuing unabated (44), dictates a high degree of vigilance for the adverse consequences that HIV infection could impose on the epidemiology of TB in the United States.
# Basis for Recommendations for Control of TB Among Persons with HIV Infection
HIV counseling and testing. Knowledge of the presence of HIV infection among patients with TB is useful for surveillance purposes, for ensuring that an optimal drug regimen is chosen for treatment (5), for referring persons for HIV primary care if the infection is newly detected, and for guiding decisions about contact investigations. TB is frequently the first illness that brings a person who has not previously received a diagnosis of HIV infection into the health-care system.
Voluntary counseling and testing for HIV is recommended for all patients with TB (5), but this recommendation has not been fully implemented, and reporting of HIV status among persons with TB is incomplete (14). In 2003, HIV testing was performed for <50% of patients reported with TB in the United States, and only 63% of persons in the age group at greatest risk (persons aged 25-44 years) were tested (14). HIV counseling and testing also has been recommended for contacts of persons with TB (302). However, recent data indicate that contacts of HIV-infected persons with TB have a high rate of HIV infection but that contacts of persons with TB without HIV infection do not (301). HIV testing for other persons with LTBI should be limited to those who have clinical or behavioral risk factors for HIV infection.
Case detection. HIV coinfection affects the clinical and radiographic manifestations of TB. HIV-infected patients are more likely than persons without HIV infection to have extrapulmonary and miliary TB (379,380), and those who have pulmonary TB tend to have atypical findings (e.g., they are less likely to have apical cavities and are more likely to have lower-lobe or interstitial infiltrates and mediastinal or paratracheal lymphadenopathy). These atypical features are heavily dependent on the patient's CD4 cell count; patients with CD4 cell counts >300 cells/µL usually have typical manifestations, such as upper-lobe cavitary infiltrates (274). Persons with HIV infection might also have pulmonary TB despite a normal chest radiograph (274,379).
HIV-infected patients are also vulnerable to other pulmonary and systemic infections, such as Pneumocystis carinii and pneumococcal pneumonias and disseminated M. avium complex disease. Although a trained clinician can usually distinguish the symptoms and signs of TB from those caused by other prevalent invasive pathogens (273,381), HIV coinfection often delays the diagnosis of TB as a result of altered clinical and radiographic manifestations (23).
Undetected transmission of M. tuberculosis to HIV-infected persons can have serious sequelae (136). A substantial outbreak of TB in a prison in South Carolina in 1999 demonstrated the widespread consequences of an unrecognized TB case in a congregate setting with a substantial number of HIV-infected persons (37). In that outbreak, 32 TB cases and 96 tuberculin skin test conversions resulted from a single unrecognized case. Similar outbreaks have occurred in hospitals (53,244), residential facilities and day-treatment programs for HIV-infected persons (136), and homeless shelters (33). Such outbreaks underscore the importance of aggressive TB screening and treatment in settings in which HIV-infected persons congregate. Screening for TB in those settings has been successfully conducted by using symptom checklists, tuberculin skin testing, and chest radiographs (37,118,136).
Case management. Management of TB among persons with HIV infection is complex. Drugs used to treat TB and those employed in combination antiretroviral therapy have overlapping toxicities and potentially dangerous drug interactions (382). Paradoxical responses to TB therapy are more common among HIV-infected persons (383). Use of multiple potentially toxic medications also poses a further challenge to adherence to TB treatment. Therefore, integration of the management of HIV infection and TB is critical to the success of both. Comprehensive case management, including DOT, is particularly important (5). Among HIV-infected TB patients, more favorable outcomes and survival have been associated with DOT than with self-administered therapy (384). ATS/CDC/IDSA guidelines should be consulted for recommendations on length and mode of treatment and selection of drug regimens (5). Finally, patients with HIV and TB bear the brunt of two conditions that are associated with clinical and social complexities that can be personally overwhelming. Both HIV infection and TB are associated with stigmatization, and patients with these concomitant conditions often suffer from isolation and a lack of social support.
Contact investigation. Despite controversy as to whether HIV-coinfected patients with TB are more or less infectious than HIV-seronegative patients (385,386), they are clearly capable of transmitting M. tuberculosis; contacts of the two populations of patients have comparable rates of LTBI (369,387). The higher risk for progressing rapidly from exposure to M. tuberculosis to TB disease means that all of the medical and public health interventions (case detection and reporting, initiation of an effective drug regimen, and identification and evaluation of contacts) are more urgent when working to control HIV-associated TB (388).
Although offering HIV counseling and testing to all contacts of persons with infectious TB has been recommended (302), this undertaking would be resource-intensive. Whereas prevalence of HIV infection among contacts of HIV-infected persons is high, prevalence among contacts of persons with TB without HIV infection or with undetermined status is negligible (301).
Targeted testing and treatment of LTBI. HIV coinfection is the most important known risk factor for progression from LTBI to active TB (317,371,372). Treatment of LTBI is effective in reducing the risk for progression to TB disease among HIV-coinfected persons (372,389). Thus, all possible efforts should be made to ensure that HIV-infected persons are tested for M. tuberculosis infection and that those found to have latent infection receive and complete a course of treatment. In addition, knowledge of the HIV status of persons being evaluated for LTBI is desirable 1) in interpreting the tuberculin skin test result (e.g., ≥5 mm of induration is considered a positive test among persons with HIV infection) and 2) in counseling persons with positive skin test results about the risks and benefits of treatment for LTBI (the role of QFT-G for testing persons with HIV infection for LTBI has not been determined). According to current guidelines (302), persons being evaluated for LTBI should also be screened for HIV infection by using self-reported clinical and behavioral risk factors.
Institutional infection control. Infection-control measures recommended to prevent transmission of M. tuberculosis have been effective in limiting exposure of HIV-infected persons, including patients, visitors, and staff members, to M. tuberculosis in hospitals, extended care facilities, and correctional facilities (9,244). Nevertheless, the risk for rapid progression from exposure to TB disease means that HIV-infected persons should continue to be advised of any potential sites of institutional exposure so that an informed choice regarding employment or volunteering can be made.
# Control of TB Among Homeless Persons
The persistence of TB among homeless persons in the United States is a major public health problem. The homeless population is substantial; in 1995, an estimated 5 million persons (2.5% of adult U.S. residents) either were or had recently been homeless, living in streets or shelters, or marginally housed (e.g., living on public support in residential hotels) (390). TB incidence is high among homeless persons, and evidence exists of considerable transmission of M. tuberculosis. Among 2,774 homeless persons enrolled during 1990-1994 in San Francisco, California, 25 incident cases were identified during 1992-1996, for an annual rate of 270 cases/100,000 population (118). Among 20 M. tuberculosis isolates from incident cases that were subjected to genotyping, 15 (75%) were clustered, indicating chains of transmission in the population. Other molecular epidemiology studies also have identified homelessness as an important risk factor for clustering of M. tuberculosis isolates (33,119,391,392).
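As a rough check on the quoted rate, the following sketch expresses the 25 incident cases from the San Francisco cohort as an annual rate per 100,000. The person-time denominator is a hypothetical assumption (follow-up varied by cohort member), chosen to reproduce the published figure.

```python
# Back-of-the-envelope check of the incidence rate quoted for the San
# Francisco homeless cohort: 25 incident cases expressed per 100,000 per
# year. The person-time denominator is a hypothetical assumption chosen
# to reproduce the published rate of 270/100,000.

incident_cases = 25
person_years = 9_260   # assumed total follow-up time (hypothetical)

rate = incident_cases / person_years * 100_000
print(f"Annual TB incidence: {rate:.0f} cases per 100,000 person-years")
```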
Shelters are key sites of TB transmission among homeless persons throughout the United States (27,33,118-120,166,391-393). In Los Angeles, California, during March 1994-May 1997, three homeless shelters were sites of TB transmission for 55 (70%) of 79 homeless patients (33). In Fort Worth, Texas, during 1995-1996, clusters of cases among homeless persons occurred simultaneously in four homeless shelters (27). In Alabama, genotyping of isolates from TB cases reported during 1994-1998 revealed an undetected statewide outbreak of TB that was traced to transmission in a correctional facility and in two homeless shelters (166). In an outbreak in a shelter in Syracuse, New York, during 1997-1998, a shelter resident was probably infectious for 10 months before receiving a diagnosis; ventilation in the shelter was poor, and the population included vulnerable persons with risk factors that included HIV infection, substance abuse, and malnutrition (120).
Multiple barriers to the control of TB among homeless persons have been identified. Delays in detection of infectious cases have been reported (20); in a computer simulation study that modeled multiple strategies for TB control among homeless persons, a 10% improvement in access to treatment led to greater declines in disease and death after 10 years than did comparable improvements in treatment programs (394). Traditional methods of conducting contact investigations often fail to identify contacts of homeless persons with TB (30,119,120). Difficulties also have been encountered in completing treatment for homeless patients with active TB (395) and LTBI (167,184).
# Basis for Recommendations for Control of TB Among Homeless Persons
Surveillance and case detection. Delays in diagnosis and treatment of TB among homeless persons might occur as a result of delays in seeking medical care (181) and of failure by medical providers to detect TB among those seeking care (20). Homeless persons with TB are disproportionately likely to receive care in hospital EDs and other urgent care clinics (232). For example, during 1994-1996, homeless persons in Atlanta, Georgia, were more likely than other patients to receive a diagnosis in a hospital ED (184). On the basis of sputum AFB smear results and radiologic findings, homeless persons had more advanced disease at the time of diagnosis, another indication that they received diagnoses later in the course of their disease (184).
Shelters have proved to be effective sites for case detection by use of screening procedures among homeless persons. During May 1996-February 1997, among 127 homeless persons in Alabama for whom shelter-based screening was conducted by using symptom evaluation, sputum culture, and chest radiographs as the screening package, four (3.1%) persons had TB disease (281). Symptom evaluation alone was not proven to be useful. In a similar study from London, United Kingdom, that employed symptom evaluation, tuberculin skin testing, and chest radiography, 1.5% of homeless persons were determined to have TB (396).
On the basis of findings of a high prevalence of TB in shelter-using homeless populations, certain communities have implemented compulsory screening of shelter residents based on symptom evaluation or tuberculin skin testing, with radiographs for those with positive tests. One such program in Portland, Oregon, initiated in 1985, was associated with an 89% reduction in TB morbidity in the geographic area served by participating shelters during 1980-1995 (397). The implementation of a similar screening program in shelters in Denver, Colorado, in 1995 led to lower rates of active TB and reduced transmission of M. tuberculosis, as demonstrated by less genotype clustering by DNA fingerprinting (167). Both screening programs were based on symptom evaluation, tuberculin skin testing, and chest radiography. The decrease in TB morbidity in both studies was attributed to shelter-based case detection through screening activities.
Case management. Completion of treatment for active TB is more difficult for homeless persons, particularly those who report substance abuse, including alcohol abuse (395). Homeless persons with active TB are at high risk for poor adherence even with enhanced DOT and are more likely to default from treatment and move from the area of initial diagnosis. They are also more likely to have legal action taken in the form of court-ordered treatment or detention. Comprehensive case management that includes a variety of incentives and enablers, including food, temporary housing, transportation vouchers, and treatment for substance abuse and mental illness, has improved rates of treatment completion in this population.
Costs for homeless persons who are hospitalized for initial treatment of active TB have been $2,000 more than costs for persons who were not homeless (398). Excess hospital utilization could be attributable to social considerations, clinical indications (especially the need to render a patient noninfectious before discharge to a congregate living setting), or concerns about adherence to the plan of treatment. In San Diego, California, a novel housing program that used hostels facilitated the completion of treatment of TB in homeless persons (399). Completion rates of 84%-100% were achieved for persons housed at a designated hostel in 1995. Certain TB-control programs in cities with substantial homeless populations routinely provide temporary or longer-term housing in attempts to improve completion of treatment. The California Department of Health allots funds for temporary housing of persons with TB to each of its county and local jurisdictions. The U.S. Department of Housing and Urban Development also provides funding for housing patients with TB.
The beneficial impact on treatment outcomes of an integrated approach to managing homeless patients with TB has been emphasized (394). For example, a social care and health follow-up program among homeless patients in Spain was associated with a decrease in TB rates from 32.4/100,000 in 1987 to 19.8 cases/100,000 in 1992, and better completion rates and reduced costs for hospitalizations were also documented (400). In Massachusetts, 58 (34.5%) of 214 persons hospitalized in a dedicated inpatient unit for difficult TB patients during 1990-1995 were homeless (401). Regardless of the case-management plan that is chosen, all such interventions should take into consideration the importance of addressing major gaps in knowledge, attitudes, and beliefs about TB among homeless persons (181).
Contact investigation. Contact investigations for cases of TB among homeless persons are particularly challenging. Homeless patients with TB often fail to identify contacts during routine investigation (30). Completing a contact evaluation of identified contacts and completing treatment for LTBI among contacts who are homeless are often difficult (320,391,402). Interpretation of the results of tuberculin skin testing of contacts of homeless cases is problematic because the background prevalence of positive tuberculin skin tests in the homeless population is usually higher than that in the general population. As with contact investigations among other populations at high risk, discerning when a contact investigation has become a targeted testing program is often difficult. A proposed alternative approach to conducting contact investigations of homeless persons is to focus on possible sites or locations of exposure, such as shelters (391,393).
Targeted testing for and treatment of LTBI. When homeless persons are identified as a population at high risk on the basis of the local epidemiology of TB, targeted testing and treatment protocols tailored to local circumstances should be developed. However, low rates of completion of therapy for LTBI are commonly observed (167,184,402). For example, among 7,232 inner-city residents (including homeless persons) screened for LTBI during 1994-1996 in Atlanta, Georgia, 4,701 (65%) completed tuberculin skin testing; of 809 (17%) who had a positive test, 409 (50%) were candidates for isoniazid therapy, and 84 (20%) completed treatment (184). In another study, conducted in San Francisco, California, during 1991-1994 and designed to improve adherence, two novel interventions (biweekly preventive DOT with either a $5 incentive or a peer health adviser) were compared with the usual method of self-supervised treatment (402). Even though completion of treatment was not high for any of the three groups, multivariate analysis indicated that independent predictors of completion were being offered the monetary incentive and residence in a hotel or other stable housing at entry into the study. That report confirmed an earlier finding that supported offering monetary incentives (320).
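The attrition in the Atlanta program can be viewed as a cascade; the sketch below recomputes the proportion retained at each step from the counts quoted above. Step labels are paraphrased, and the retention percentages differ slightly from the rounded figures in the text.

```python
# Sketch of the LTBI screening-and-treatment cascade reported for the
# Atlanta inner-city program (184). Counts are taken from the text;
# labels are paraphrased.

cascade = [
    ("Screened for LTBI", 7_232),
    ("Completed skin testing", 4_701),
    ("Positive skin test", 809),
    ("Candidates for isoniazid", 409),
    ("Completed treatment", 84),
]

previous = cascade[0][1]
for label, count in cascade:
    print(f"{label:<26} {count:>7,} ({count / previous:.0%} of prior step)")
    previous = count
```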
Institutional and environmental controls. Efforts have been made to reduce transmission of TB in shelters for homeless persons by enhancing institutional control measures. These efforts have included reducing shelter size (13), improving ventilation systems, and using germicidal ultraviolet light (280).
# Control of TB Among Detainees and Prisoners
Correctional facilities in the United States include jails and prisons, which serve different but complementary functions. Jails serve as pretrial detention centers and house persons (detainees) awaiting trial and those serving sentences of <1 year. State governments, the federal government, and the military all operate prison systems. On any given day, approximately 2 million persons in the United States are incarcerated; 1.4 million of those are imprisoned, and the remainder are detained in jails. Approximately 6 million persons are incarcerated in jails or prisons each year for variable lengths of time (124,125).
Detainees and prisoners represent the poorest and most medically underserved segments of the U.S. population, the same population segments at risk for LTBI and TB disease (124,252,253). Persons entering prisons have usually spent time in jail, and detainees and prisoners eventually reenter the community. Consequently, TB outbreaks among detainees, prisoners, and the general population of a geographic area are interrelated (127,403), and close coordination of TB-control activities is needed between health programs in correctional facilities and jurisdictional public health agencies.
Prisons have long been identified as sites of transmission of M. tuberculosis to other inmates and workers (38,139,404-408), including those with HIV infection (38,139,405,408). In addition, time spent in jail is a risk factor for subsequent acquisition of TB (127,250,256), an indication that jails often are also sites of transmission. Correctional facilities are among the most important sites of transmission of M. tuberculosis in the United States.
Failure to detect TB in correctional facilities results in TB outbreaks, which have been well documented (37,139,404-408). Outbreaks of multidrug-resistant TB involving inmates and staff, including HIV-infected persons, were a prominent component of the 1985-1992 TB resurgence in the United States (404,405,409-411). However, outbreaks have continued to occur (37,139), even though TB control, including control of M. tuberculosis transmission, in the United States has improved.
# Basis for Recommendations on Control of TB Among Detainees and Prisoners
Case detection and case management. Despite the importance of jails and prisons in sustaining and amplifying the reservoir of TB in the United States (127,405,407), little is known about the optimal means of detecting TB cases among detainees and prisoners. The majority of prisons have adopted a case-detection strategy based on a survey of TB symptoms obtained on admission, in which all entrants are also tested for M. tuberculosis infection within 14 days of admission; universal chest radiographs of all entrants are rarely offered (410). No data have been published supporting the effectiveness of symptom surveys and testing for M. tuberculosis infection for detecting cases of TB and preventing transmission within jail systems, although screening by tuberculin skin testing was effective in controlling TB in one prison system (411). Certain substantial urban jails perform chest radiographs on all persons entering the institution in an attempt to minimize transmission of TB (283,412), and data indicate that this approach is cost effective (412). Because nearly all prison entrants have first been detained in a jail system, effective TB case-detection programs in jails will substantially decrease the probability that persons with undetected active TB will be admitted to prison.
Once cases are detected, strategies similar to those used in the community have led to high rates of successful treatment completion (413). A particular problem for case management in a jail setting is the unanticipated release of detainees, which often precludes the development of an effective discharge plan. Strategies to better coordinate discharges with public health authorities should be promoted.
Contact investigation. Continuing outbreaks of TB in correctional facilities (37,139) underscore the importance of prompt and thorough contact investigations in jails and prisons. Contact investigations in correctional facilities involve two steps: 1) identifying and evaluating persons exposed before the source-case was incarcerated, and 2) identifying and evaluating persons exposed during incarceration of the source-case. Effective case detection is important to limit the size of the latter group. Contact investigations often need to be conducted broadly, across more than one facility, because of the movement of detainees within the correctional system (414).
Conducting contact investigations based on the concentric circle method is difficult in correctional institutions. Frequently, a single infected person can expose up to several hundred persons both before and after incarceration. Cases involving persons who were exposed before incarceration should be managed by the jurisdictional public health agency for the community in which the person lived before arrest. For the jurisdictional public health agency to carry out those contact investigations effectively, prompt notification and case reporting by the detention facility is necessary. Guidelines for conducting contact investigations in jails have been published (258).
Targeted testing and treatment of LTBI. Targeted testing and treatment of latent TB among detainees and prisoners has been described in detail (415-417). Because of the high risk for transmission of M. tuberculosis in correctional facilities, inmates incarcerated for >14 days usually receive a test for M. tuberculosis infection as part of TB case detection. Detainees and prisoners with LTBI often are considered to be candidates for treatment of latent TB (124,252,253). Prisons often are an ideal setting for effective treatment of LTBI because of known location of the patient, length of stay, prohibition of illicit drugs and alcohol, and a predictable diet. Nevertheless, achieving high rates of completion of treatment for LTBI in prisons or jails has been difficult (257,416,417).
The majority of jail detainees are released within 14 days of entry. If treatment for LTBI is started in the jail setting, community follow-up after release from jail is essential. Without specific interventions to assure such follow-up, the probability of completion of treatment might be <10% (256,257,418). Recent developments in short-course treatment of latent TB with a combination of rifampin and pyrazinamide for 2 months offered promise in improving treatment completion rates (419). However, the toxicity of this regimen precludes its routine use (324), and this combination should generally not be used for the treatment of LTBI in correctional settings because the rates of toxicity have been similar to those observed in the wider community. In addition, detainees and prisoners have high rates of hepatitis C infection, making them especially prone to serious hepatotoxicity.
Institutional infection control. Correctional institutions have been sites of virulent outbreaks of TB, including multidrug-resistant TB, that have involved HIV-infected inmates and staff (37,139,405,408). Common findings in these outbreaks have included the failure to isolate persons with active TB quickly. Another common finding has been disease associated with rapid transmission of M. tuberculosis when immunosuppressed detainees and prisoners are housed together. An effective infection-control program can decrease the likelihood of TB transmission in correctional institutions (420). Guidelines to assist correctional institutions in developing effective infection-control programs have been published (258).
# Control of TB in Health-Care Facilities and Other High-Risk Environments
During the 1985-1992 TB resurgence in the United States, TB cases resulted from transmission of M. tuberculosis in settings where patients with infectious TB congregated closely with susceptible persons (52-54,170,421). This epidemiologic disease pattern had not been recognized in the United States since the development of effective drugs against TB starting in the 1950s. Hospitals and other health-care facilities were the primary, but not the only, sites of transmission (405,406,408), and HIV-infected persons were prominent among those who contracted M. tuberculosis infection and rapidly acquired TB disease (52-54,170,406,408). Although the incidence of TB in health-care facilities has been markedly reduced because of the development and deployment of effective infection-control measures (56,422-424) and the decreasing incidence of TB in communities, TB disease attributable to recent transmission of M. tuberculosis in other settings has persisted, has been recognized in a wide variety of sites and settings, and has become an established epidemiologic pattern.
As a consequence of the changed epidemiology of TB in the United States, the primary strategies now required to control the disease include measures for its prevention in settings in which a risk for transmission of M. tuberculosis exists (Box 4). Recommendations for infection-control measures in high-risk settings are provided in this statement. The approach to control of TB and other airborne infections that was developed for health-care facilities (10) is the most successful model and is outlined in detail in this statement. Recommendations are also provided for control of transmission of M. tuberculosis in extended care facilities, correctional facilities, homeless shelters, and other high-risk settings.
# Control of TB in Health-Care Facilities
Strategies for control of TB in health-care facilities, which also are applicable to other settings in which high-risk persons congregate, are based on comprehensive guidelines issued by CDC in 1994 (10). New CDC guidelines for preventing transmission of M. tuberculosis in health-care facilities will be published in 2005. A draft† of these guidelines has been published in the Federal Register. In the assessment of institutional risk for TB, three levels of risk (low, medium, and potential ongoing transmission) are recommended, based on the recent experience with TB in the institution and in the community it serves. The recommended frequency of testing of employees for M. tuberculosis infection varies, depending on the institution's level of risk. The tuberculin skin test is recommended for testing HCWs and other employees with a risk for exposure to M. tuberculosis. QFT-G is also approved for detecting LTBI; guidelines for the use of QFT-G will be published in MMWR.
The risk for TB associated with health-care facilities is related to the incidence of TB in the community served by the facility and to the efficacy of infection-control measures (422). Implementation of infection-control guidelines (10) has markedly reduced risk for exposure to TB in health-care facilities during the past decade (56,422-424) and has also contributed to the decreasing numbers of TB cases. Implementation of effective infection-control measures in the medical workplace is thus an important element of broader national and international strategies to prevent transmission of TB (244).
Epidemiologic investigations of the early outbreaks of TB in health-care facilities, including those involving multidrug-resistant cases, indicated that transmission usually occurred because of failure to identify and isolate patients with infectious forms of TB. In certain instances, diagnosis of TB was delayed as a result of the atypical presentation of TB among patients with HIV infection, especially those with low CD4 counts. Transmission was also facilitated by 1) the intermingling of patients with undiagnosed TB with patients who were highly susceptible; 2) inadequate laboratory facilities or delayed laboratory reporting; and 3) delayed institution of effective therapy. Other factors facilitating transmission included a lack of negative-pressure respiratory isolation rooms, recirculation of air from respiratory isolation rooms to other parts of the hospital, failure to isolate patients until they were no longer infectious, allowing isolated patients to leave their rooms without wearing a mask, and leaving respiratory isolation room doors open (52-54,170,421,425,426).

CDC guidelines recommend a hierarchy of TB infection-control measures (10). In order of importance, these measures are administrative controls, engineering controls, and personal respiratory protection (PRP) (Box 7). Administrative controls consist of measures to reduce the risk for exposure to persons with infectious TB, including screening of patients for symptoms and signs of TB at the time of admission, isolating those with suspected disease, establishing a diagnosis, and promptly initiating standard therapy (5). Engineering control measures are designed to reduce dissemination of droplet nuclei containing M. tuberculosis from infectious patients and include the use of airborne infection isolation rooms. The third level (and the lowest on the hierarchy of controls) is the use of PRP devices such as N-95 respirators. Respirator usage for the prevention of TB is regulated by the Occupational Safety and Health Administration under the general industry standard for respiratory protection§.
In implementing a comprehensive infection control program for TB, institutions should first conduct a risk assessment to determine what measures are applicable. Risk for transmission of M. tuberculosis varies widely, and procedures that are appropriate for an institution in an area of high TB incidence (e.g., an inner-city hospital or homeless shelter in a metropolitan high-incidence area) differ from those applicable to an institution located in a low incidence area that is rarely used by patients with TB. The jurisdictional public health TB-control program should assist in the development of the assessment, which should include data on the epidemiology of TB in the community served by the institution and the number of TB patients receiving evaluation and care.
The institutional risk for TB can be stratified, according to the size of the institution and the number of patients with TB, as low risk, medium risk, or potential ongoing transmission. Hospitals with ≥200 beds that provided care for fewer than six patients with TB during the previous year are categorized as low risk, whereas those that cared for six or more patients are categorized as medium risk. For hospitals with <200 beds, those with fewer than three TB patients in the previous year are considered low risk, and those with three or more cases are considered medium risk. Outpatient clinics, outreach programs, and home health-care settings that encountered fewer than three patients with TB in the previous year are likewise considered low risk, and those with three or more are considered medium risk.

§ Personal Protective Equipment, 29 C.F.R. Sect. 1910.134 (2003).

How often employees at health-care facilities and other at-risk sites for M. tuberculosis infection are tested depends on the risk assessment. The positive predictive value of the tuberculin skin test is low when populations with a low prevalence of infection with M. tuberculosis are tested (424,427). Consequently, frequent testing by using that method in low-incidence, low-risk settings is discouraged. In addition, false-positive tests have been reported when institutions changed brands of Purified Protein Derivative (PPD) reagent, for example from Tubersol® to Aplisol® (427).
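The bed-count thresholds described above amount to a simple classification rule, which the following sketch encodes. The function name is hypothetical, and "potential ongoing transmission" is assigned separately on the basis of evidence of transmission, so this helper distinguishes only low from medium risk.

```python
# Minimal sketch of the hospital risk-classification rule described in
# the text. "Potential ongoing transmission" is a separate, evidence-
# based designation and is not modeled here.

def facility_risk(beds: int, tb_patients_last_year: int) -> str:
    """Classify institutional TB risk using the bed-count thresholds above."""
    threshold = 6 if beds >= 200 else 3  # >=200 beds: 6 TB patients; else 3
    return "medium risk" if tb_patients_last_year >= threshold else "low risk"

# Hypothetical facilities:
print(facility_risk(beds=350, tb_patients_last_year=4))  # low risk
print(facility_risk(beds=150, tb_patients_last_year=3))  # medium risk
```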
At the time of employment, all HCWs should undergo baseline testing (with two-step testing if the tuberculin skin test is used and no testing was performed during the preceding year) (10). Those in medium-risk settings should be tested annually. Follow-up testing is recommended for workers in low-risk settings only if exposure to a patient with infectious TB (i.e., a patient not initially isolated but later found to have laryngeal or pulmonary TB) has occurred. Institutions in which ongoing transmission of M. tuberculosis is documented should carry out testing for M. tuberculosis infection of HCWs at risk every 3 months until transmission has been terminated.
Employees testing positive for M. tuberculosis infection should receive a chest radiograph to exclude TB disease and should be evaluated for the treatment of LTBI based on current recommendations (4,324). Compliance with therapy for LTBI among HCWs, including clinicians, has historically been poor (428-430). Employee health clinics and infection-control departments should emphasize to HCWs the importance of completion of therapy for LTBI. In a comprehensive infection-control program that encourages HCWs to complete treatment for LTBI, higher completion rates have been reported (431,432).
# Control of Transmission of M. tuberculosis in Other High-Risk Settings
Extended care facilities. Elderly persons residing in a nursing home are almost twice as likely to acquire TB as those living in the community (252,433,434). Certain considerations for control of TB in hospitals also apply to extended care facilities, including 1) maintaining a high index of suspicion for the disease; 2) promptly detecting cases and diagnosing disease; 3) isolating infectious persons and initiating standard therapy; 4) identifying and evaluating contacts; and 5) conducting contact investigations when indicated. The value of treating LTBI in elderly residents of nursing homes to prevent future outbreaks has been documented (435).
In 1990, CDC published recommendations for TB control in extended care facilities (433). Those long-term care facilities that do not have airborne-infection-isolation rooms should transfer patients suspected to have infectious TB to other facilities (including acute-care hospitals) until the disease is ruled in or out and treatment is started if indicated and continued until the patient is noninfectious (10). The risk assessment and frequency of testing for LTBI for employees at long-term care facilities are similar to those described previously. Residents should be tested on admission to the facility and should provide a history and undergo physical examination to identify symptoms and signs of TB. Residents with LTBI should be offered treatment according to current recommendations (4,324), with careful monitoring for drug toxicity.
Correctional facilities. Common findings in outbreaks of TB in correctional facilities were the failure to recognize and isolate patients with TB and rapid progression of outbreaks when immunosuppressed detainees were housed together (405,406,408). Because of the substantial numbers of cases of TB infection and disease that might result from outbreaks at correctional facilities and the natural movement of inmates from incarceration to the general population, correctional facilities should be viewed as being among the most important sites of transmission of M. tuberculosis in the United States (128,436).
Guidelines for control of TB transmission in correctional facilities (123) have emphasized that the infection-control principles developed for health-care facilities (10) are also applicable to correctional facilities. In prisons and jails, the most important activity in TB infection control is efficient detection of infectious TB cases, including those that are prevalent among persons entering the facility and those that arise during detention. A prompt diagnostic evaluation, respiratory isolation (including transfer out of the facility if airborne-infection-isolation rooms are not available), and institution of a standard treatment regimen are urgent priorities when suspected cases are encountered. If this process is delayed, a substantial number of persons might be exposed as a result of the congregate living arrangements that characterize correctional facilities.
Because of crowded conditions that favor the spread of M. tuberculosis (420) and the high prevalence of HIV infection among prisoners (255), contact investigations should be undertaken immediately once a case of TB has occurred at a facility. In a study conducted in the Maryland state correctional system, prisons that conducted programs for targeted testing and treatment of LTBI among inmates experienced lower rates of tuberculin skin test conversions, an indication that this measure can contribute to successful infection control (420). A template is now available to assist jails in instituting an effective infection-control program (258).
Shelters for homeless persons. As with correctional facilities, homeless shelters are important sites of transmission of M. tuberculosis and an important cause of the continuing high incidence of TB among the homeless population (33,118). Effective infection-control strategies in those venues include the use of M. tuberculosis genotyping for rapid identification of clustered cases and sites of transmission (27,33), screening of shelter users for TB disease, wide-ranging contact investigations, and engineering controls, including ultraviolet germicidal irradiation (437). A systematic shelter-based program for targeted testing and treatment of LTBI in Denver was also demonstrated to decrease incidence of TB in the homeless population (167).
Because crowding and poor ventilation are often prevalent in shelters, infection-control efforts should also include engineering modifications to decrease exposure to M. tuberculosis. A guide to assist shelters in improving the safety of their environment through modifications in ventilation, air filtration, and the introduction of ultraviolet germicidal irradiation has been published (438).
Other high-risk settings. As the incidence of TB has receded in recent years, new patterns of transmission have become evident. Epidemiologic investigations prompted by an increase in the incidence of TB in a community or state or by the identification of clusters of cases with identical M. tuberculosis genotype patterns have detected transmission in such venues as crack houses (137) and bars (27). In addition, transmission has been identified in association with certain social activities that are not typically considered in routine contact investigations: a church choir (140), a floating card game (172), exotic dancers and their contacts (38), a transgender social network (34), and persons who drink together in multiple drinking establishments (439).
Although special techniques have been developed for exploring chains of transmission of M. tuberculosis in complex social networks (439), transmission of M. tuberculosis in such settings is not amenable to prevention by available infection-control strategies. These newly identified patterns of transmission of M. tuberculosis might be too complex to be detected and controlled by traditional approaches, and real-time M. tuberculosis genotyping capable of identifying unsuspected linkages among incident cases might be increasingly useful (131).
This new TB threat, transmission in previously unknown settings, has emerged at a time when local TB-control programs often are not prepared to respond. As TB morbidity decreases in the United States and TB-control programs necessarily contract, new approaches will emerge, particularly in low-incidence areas. One model envisions that local public health workers who do not work exclusively on TB are served by regional TB supervisors, who in turn are supported by statewide consultants and CDC specialists (172).
# Research Needs to Enhance TB Control
Implementation of the recommendations contained in this statement will likely improve TB control and allow progress to be made toward eliminating TB in the United States. However, achieving TB elimination as defined by ACET (i.e., one annual case of TB per one million population) will require substantial advancements in the technology of diagnosis, treatment, and prevention of the disease. IOM has estimated that at the current rate of decline, approximately 6% annually, eliminating TB in the United States would take >70 years (2). New tools are needed for the diagnosis, treatment, and prevention of TB to accelerate the decline in TB incidence and reach the elimination threshold sooner (1,2,45). In addition, improved tests for the diagnosis of TB and LTBI and more effective drugs to treat them are needed to reduce the substantial worldwide burden of disease and death resulting from TB (44).
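Under the simplifying assumption of a constant 6% annual decline, the time needed to reach the elimination threshold can be estimated directly; the sketch below reproduces the order of magnitude of the IOM estimate. The starting incidence is an assumed value, and the exact inputs IOM used are not given here.

```python
# Rough check of the elimination timeline under a constant 6% annual
# decline in TB incidence. The starting rate is an assumed value (U.S.
# incidence was roughly 5 cases/100,000 in the early 2000s); because the
# inputs IOM actually used are not given here, this reproduces only the
# order of magnitude of the >70-year estimate.

import math

start_rate = 5.0 / 100_000      # assumed current incidence (hypothetical)
target_rate = 1.0 / 1_000_000   # ACET elimination threshold
annual_decline = 0.06           # approximate current rate of decline

years = math.log(target_rate / start_rate) / math.log(1 - annual_decline)
print(f"Years to reach elimination threshold: {years:.0f}")  # ~63
```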
AFB smear microscopy and the tuberculin skin test, the most commonly used tests for the diagnosis of TB and latent infection respectively, derive from technology developed in the 19th century; the only available vaccine against TB, BCG, dates from the early 20th century; and rifampin, the most recent novel compound for treatment of TB, was introduced in 1963. In the long term, the development of a new and effective vaccine would have the greatest impact on the global epidemic of TB, and the United States should lead the research and advocacy efforts to develop such a vaccine (180,440). However, other advances in TB diagnosis and treatment might substantially improve the control of TB in the United States. Better means to diagnose and treat LTBI are needed immediately. Breakthrough diagnostics and drugs that would facilitate the more effective usage of this therapeutic intervention to prevent TB would have an immediate and lasting effect on the incidence of the disease in the United States by affecting at least three of the major challenges to TB control in the United States: the substantial pool of persons with LTBI, TB among foreign-born persons, and TB among contacts of persons with infectious TB (Box 1).
Public health interventions to control TB should be based on practices that have been demonstrated to be effective. Because an established scientific basis is lacking for certain fundamental principles of TB control, including certain recommendations contained in this statement, logic, experience, and expert opinion have been used to guide clinical and public health practice to control TB. In the preparation of these recommendations for TB control, deficiencies in evidence were frequently noted. Better understanding is needed of which persons among the millions of foreign-born persons who enter the United States each year (Table 8) are at sufficient additional risk for TB to warrant public health intervention. The approaches recommended for the development of programs for targeted testing of LTBI need additional verification. The new concepts for identifying contacts of infectious TB cases (439) require refinement. The optimal method of reducing the concentration of M. tuberculosis in ambient air in venues such as homeless shelters is not yet defined (438). Methods to monitor and evaluate TB-control programs, in particular new activities such as outbreak surveillance and response (441), should be delineated and standardized.
The epidemiology of TB in the United States is constantly changing. Recent examples, as noted throughout this statement, are the increase in TB among foreign-born persons, the upsurge in reports of TB outbreaks, and the persistent high incidence of the disease among U.S.-born non-Hispanic blacks. Epidemiologic studies, including economic analyses, are needed to augment surveillance data and facilitate decisions about allocation of effort and resources to address newly identified facets of the epidemiology of TB.
As new diagnostics are introduced to TB control, operational, economic, and behavioral studies will be needed to determine their most effective use. For example, QFT, a diagnostic test for LTBI, was licensed in 2001, and early research indicated that this test might have advantages over the tuberculin skin test in distinguishing between latent M. tuberculosis infection and infection with nontuberculous mycobacteria or vaccination with BCG (102). However, guidelines on testing for LTBI recommended that QFT not be used in the evaluation of contacts of infectious cases of TB, for children aged <17 years, for pregnant women, or for patients with immunocompromising conditions, including HIV infection, because of a lack of data from studies in those populations (103). A newer version of the test, QFT-G, was licensed in 2004. The role of this new test in these populations has not been determined. Thus, considerable research remains to be done to delineate the advantages this new test can bring to TB control.
Despite the best efforts of national, state, and local TB programs, nonadherence to prescribed treatment for TB and latent infection remains a major barrier to TB elimination. As evidence of the importance of that intervention, completion of a course of treatment is the first national performance standard for TB (Table 4). For the outcome of TB treatment to be improved, both patient and health-care provider behaviors related to adherence to TB treatment must be understood, and that understanding should be used to design and implement methods for improving adherence. Although considerable research has been conducted in this field, no comprehensive effort has been undertaken to examine and compile the results and identify best practices. Gaps in knowledge remain, and the need exists to develop and implement a comprehensive behavioral and social science research agenda to address these deficiencies.
# Graded Recommendations for the Control and Prevention of Tuberculosis (TB)
Recommendations for TB Laboratory Services
# Recommendations for TB Control Among Children and Adolescents

# Case Detection and Primary Prevention Strategy
- Timely reporting of suspected cases of infectious TB is crucial to the prevention of TB among children (AII).
- Contact investigation of adults with infectious TB is the most important activity for early detection of TB among children, identification of children with LTBI who are at high risk for progressing to primary TB and its sequelae, and determination of the drug susceptibility pattern of the M. tuberculosis isolate causing TB disease or LTBI in a child. Contact investigations should be timely and thorough, and adequate resources for them should be made available. This should be one of the highest priority goals of any TB-control program (AII).
- Children aged <5 years who have been identified as contacts of persons with infectious TB should receive a clinical evaluation, including a tuberculin skin test and chest radiograph, to rule out active TB. Once active TB has been ruled out, children with positive tuberculin skin test results should receive a full course of treatment for LTBI. Those who have negative skin test results should also receive treatment for presumed LTBI. This intervention is especially critical for infants and toddlers aged <3 years but is recommended for all children aged <5 years. A second tuberculin test is then placed at least 3 months after exposure to infectious TB has ended. If the second test result is positive, treatment should be continued for a full course of treatment for LTBI. If the second test result is negative, treatment may be stopped (AII).
# Case Management
- DOT should be the standard of care for treatment of TB disease among children and adolescents (AII).
- Because adherence to treatment is no better for children than for adults, all efforts should be made to support children and families throughout treatment of TB by means of comprehensive case management (AIII).
# Recommendations for TB Control Among Foreign-Born Persons

# Surveillance

- Cases identified as a result of targeted testing activities should be distinguished from those identified by noting symptoms of active TB (AIII).
- For TB control along the U.S.-Mexico border to be facilitated, a binational TB case definition and TB registry system should be adopted and evaluated (AIII).
# Case Detection
- Jurisdictional public health agencies responsible for TB control should undertake or engage community groups to undertake education campaigns for foreign-born persons at high risk. These campaigns should communicate the importance of TB as a personal and public health threat, the symptoms to look for, how to access diagnostic and targeted testing services in the community, and the concept of LTBI. The purpose of this education is to destigmatize the infection, acquaint the population with available medical and public health services, and explain the approaches used to treat, prevent, and control TB (AIII).
- Public health agencies conducting TB-control programs should establish liaisons with primary care physicians, community health centers, hospital EDs, and other organizations that provide health care for foreign-born populations at high risk to provide TB publications and guidelines and education about the local epidemiology of TB (AIII).
- Public health agencies conducting TB-control programs should establish liaisons with civil surgeons within their jurisdictions. They should also ensure that civil surgeons have access to recent TB publications and guidelines and that they promptly report all suspected cases of TB (AIII).
- CDC should provide standardized education and training programs with a formal certification process for panel physicians and civil surgeons. As part of the certification, continuing education programs should be required (AIII).
- Federal, state, and local public health agencies should assign high priority to the follow-up of immigrants with a Class A TB waiver and Class B1 and B2 TB notification status (AII).
# Case Management
- Culturally appropriate case management should be instituted, including readily available professional translation and interpretation services, for all foreign-born persons. If possible, outreach workers should be from the patient's own cultural background (AII).
# Case Detection
- Physicians who provide primary care to persons with HIV infection or populations at increased risk for HIV infection should maintain a high index of suspicion for TB. Every patient in whom HIV infection has been newly diagnosed should be assessed for the presence of TB or LTBI. This should include a history of symptoms compatible with TB (e.g., cough of >2-3 weeks' duration, fever, night sweats, weight loss, hemoptysis, or unexplained cough and fever).
# Contact Investigation
- Contact investigations of persons with TB and known or suspected HIV infection, and those conducted in any circumstance in which HIV-infected persons could have been exposed to a person with infectious TB, should have the highest priority and be completed without delay (AII).
- Persons with known or suspected HIV infection who have contact with a patient with infectious pulmonary TB should be offered a full course of treatment for LTBI, regardless of the initial result of tuberculin skin testing, once active TB has been ruled out (AII).
# Targeted Testing and Treatment of LTBI
- Targeted testing and treatment for LTBI are strongly recommended at the time the diagnosis of HIV infection is established (AII).
- For HIV-infected persons whose initial tuberculin skin test is negative, repetitive testing is recommended (at least yearly) if the local epidemiologic setting indicates an ongoing risk for exposure to TB (AII).
- An HIV-infected patient who is severely immunocompromised and whose initial tuberculin skin test result is negative should be retested after the initiation of antiretroviral therapy and immune reconstitution, when CD4 cell counts are greater than 200 cells/µL (AII).
- HIV-infected persons who receive a diagnosis of LTBI should receive high priority for DOT (BIII).
# Institutional Infection Control
- HIV-infected persons should be advised that certain occupations and activities increase the likelihood of exposure to TB. These include employment and volunteer work in certain health-care facilities, correctional institutions, and shelters for the homeless, as well as in other high-risk settings identified by jurisdictional health authorities. The decision about continuing employment or volunteer activities in a high-risk setting should be made in consultation with a health-care professional and be based on factors such as the person's specific duties in the workplace, prevalence of TB in the community, and the degree to which precautions are taken to prevent TB transmission in the workplace (AIII).
# Recommendations for TB Control Among Homeless Persons
Surveillance and Case Detection
- Information on whether the person is homeless should be included for each reported TB case to determine the contribution of homelessness to TB morbidity in the state or community. This is particularly important for communities that provide shelters or other congregate living facilities that are conducive to the transmission of TB (AII).
- In designing programs for control and prevention of TB in homeless persons, public health agencies should work closely with providers of shelter, housing, primary health care, treatment for alcoholism or substance abuse, and social services to ensure a comprehensive approach to improving the health and welfare of this population (AIII).
- Public health agencies should closely monitor the location, mode (i.e., screening or symptomatic presentation), and timeliness of diagnosis of TB in homeless persons in their community and use such data to develop more effective control strategies (AIII).
- Public health agencies should identify providers of medical care for homeless persons and facilities that serve homeless persons (e.g., hospital EDs and correctional institutions) to ensure that practices and procedures are implemented to readily detect and report suspected cases of TB (AIII).
- Providers of primary health care for homeless persons should be knowledgeable about how to diagnose (Table 5), isolate, and report suspected cases of TB (AIII).
- Public health agencies should have ready access to an inpatient facility for the isolation and induction phase of therapy of homeless patients with infectious TB (AII).
- Public health agencies should be prepared to conduct activities to detect TB among persons without symptoms and enhance TB case detection as part of a plan for TB control among homeless persons (Table 6).
Indications for screening for TB disease include 1) a documented outbreak, 2) an increase in incidence of TB in the homeless population, and 3) evidence of current transmission of TB in the population. Shelters should always be suspected as sites of transmission (AII).
# Case Management
- Case management for homeless persons with TB should be structured to encourage adherence to treatment regimens by making TB treatment a major priority for the patient. It should include provision of housing, at least on a temporary basis; an increasing number of models have demonstrated the importance of a housing incentive in successful treatment of TB in homeless persons. Case management should also include establishing linkages with providers of alcohol and substance treatment services, mental health services, and social services (AII).
# Contact Investigation
- Health departments should regularly evaluate their methods for contact investigation for cases of TB among homeless persons to identify barriers and develop alternative strategies, such as shelter- or other location-based contact investigations oriented to possible sites of transmission. Factors to evaluate should include timeliness of completing contact investigations, number of contacts identified and evaluated per case, proportion of evaluated contacts with LTBI and TB disease, and completion of treatment of LTBI among contacts (AII).
# Targeted Testing and Treatment of LTBI
- Targeted testing and treatment of LTBI should be a priority for homeless populations because studies from throughout the United States have demonstrated high rates of transmission of M. tuberculosis in this group. This epidemiologic situation, causing a high ongoing risk for acquiring LTBI and TB disease, might necessitate repetitive testing for M. tuberculosis infection among homeless persons (AII).
- When high rates of transmission of M. tuberculosis are documented among homeless persons, those with a positive test for M. tuberculosis infection should be presumed to be recently infected and treated for LTBI (AIII).
# Institutional and Environmental Controls
- Organizations that provide shelter and other types of emergency housing for homeless persons should develop institutional TB-control plans. Guidelines to facilitate this process are available from CDC (9) and the Francis J. Curry National TB Center (403) (AII).
# Recommendations for TB Control Among Detainees and Prisoners

Case Detection and Case Management
- All jails and prisons should conduct a TB case detection program for detainees and prisoners entering the facility, as well as for those who become ill during incarceration, to ensure prompt isolation of contagious cases of TB (AII).
- Strategies for case detection for incoming detainees and prisoners include symptom surveys (BIII), testing for M. tuberculosis infection followed by chest radiography for those with a positive test (BIII), and universal chest radiography in jails (BII). In each setting, the adopted strategy should receive ongoing evaluation.
- Each correctional facility's health-care program for inmates and staff should ensure that training in the clinical and public health aspects of TB and other diseases of public health significance is provided in an ongoing manner (SP).
- Detainees and prisoners with signs and symptoms of TB should be placed in respiratory isolation on-site or off-site until infectious TB is ruled out (SP).
- Case-management strategies, including DOT and incentives, should be used to ensure completion of therapy for detainees and prisoners with TB (BII).
- When detainees and prisoners receiving therapy for TB are transferred to another facility or released from detention, responsibility for continuation of the treatment plan should be transferred to the appropriate facility or agency, and the jurisdictional TB-control program should be notified (SP).
# Contact Investigation
- Contact investigations of infectious TB cases in correctional facilities should receive priority equal to that of effective case detection; together these are the primary means of aborting TB outbreaks. Facilities should have written procedures for contact investigations and adequate staff to ensure prompt and thorough contact investigations. They should also consult with the jurisdictional public health TB-control program (AII).
# Targeted Testing and Treatment of LTBI
- Prisons should implement a treatment program for prisoners with LTBI as part of the effort to prevent the transmission of M. tuberculosis within their walls and to contribute to the overall goal of TB elimination (AII).
- Treatment programs for LTBI in jail detainees should be undertaken only if it is possible to develop a successful plan for community follow-up of released persons on treatment (AII).
- Reducing the length of treatment for LTBI is more likely to lead to completion of treatment in correctional facilities; 4 months of rifampin is recommended as an alternative for the treatment of LTBI (4,324). Correctional health providers need to consider the costs and benefits of this regimen compared with the standard 9-month course of isoniazid in each individual case (BIII).
# Institutional Infection Control
- Jails and prisons should implement effective infection-control programs, including risk assessment, staff training, screening and treatment of LTBI, isolation of inmates with infectious forms of TB, treatment and discharge planning, and contact investigation (AII).
- HIV-infected detainees and prisoners should not be housed together in a separate facility unless institutional control programs following current guidelines have been established and proved to be effective in preventing the transmission of M. tuberculosis (AIII).
# Recommendations for TB Control in Health-Care Facilities and Other High-Risk Settings
# Recommendations on Research for Progress Toward Elimination of TB
# Contact Investigation
- Infants and younger children with primary TB disease are rarely if ever contagious. They do not need to be excluded from activities or isolated in health-care settings (AII).
- Children and adolescents of any age with characteristics of adult-type TB (i.e., productive cough and cavitary or extensive upper lobe lesions on chest radiograph) should be considered potentially contagious at the time of diagnosis (AII).
- Infants with suspected or proven congenital pulmonary TB should be considered contagious, and effective infection-control measures should be undertaken (AII).
- Adults who accompany and visit children with TB in health-care settings should be evaluated for TB disease as soon as possible to exclude the possibility that they are the source case for the child. These adults should have a chest radiograph to rule out pulmonary TB and to prevent the possibility of transmission within the health-care setting (AII).
- Testing of the contacts of children aged <4 years with LTBI is recommended for persons sharing a residence with the child or those with equally close contact. Such investigations may be performed by public health agencies or primary health-care providers (BII).
# Targeted Testing and Treatment of LTBI
- Contact investigations of adults with TB and targeted tuberculin skin testing of foreign-born children from countries with a high incidence of TB are the best and most efficient methods for finding children with LTBI (AII).
- Because foreign birth in a country with a high prevalence of TB is the greatest attributable risk factor for LTBI, children born in or with extensive travel to such countries should be targeted for testing for LTBI. This includes foreign-born adopted children.
"id": "8b311b802981989eb66ff48b9e0361704212db22",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Update on Recommendations for Use of Herpes Zoster Vaccine

Craig M. Hales, MD1; Rafael Harpaz, MD1; Ismael Ortega-Sanchez, PhD1; Stephanie R. Bialek, MD1 (Author affiliations at end of text)

Herpes zoster vaccine (Zostavax) was licensed in 2006 and recommended by the Advisory Committee on Immunization Practices (ACIP) in 2008 for prevention of herpes zoster (shingles) and its complications among adults aged ≥60 years (1). The Food and Drug Administration (FDA) approved the use of Zostavax in 2011 for adults aged 50 through 59 years based on a large study of safety and efficacy in this age group (2). ACIP initially considered the use of herpes zoster vaccine among adults aged 50 through 59 years in June 2011, but declined to recommend the vaccine in this age group, citing shortages of Zostavax and limited data on long-term protection afforded by herpes zoster vaccine (2). In October 2013, ACIP reviewed the epidemiology of herpes zoster and its complications, herpes zoster vaccine supply, short-term vaccine efficacy in adults aged 50 through 59 years, short- and long-term vaccine efficacy and effectiveness in adults aged ≥60 years, an updated cost-effectiveness analysis, and deliberations of the ACIP herpes zoster work group, all of which are summarized in this report. No vote was taken, and ACIP maintained its current recommendation that herpes zoster vaccine be routinely recommended for adults aged ≥60 years. Meeting minutes are available at http://www.cdc.gov/vaccines/acip/meetings/meetings-info.html.
# Herpes Zoster Vaccine Background
The burden of herpes zoster increases as persons age, with steep increases occurring after age 50 years. Not only does the risk of herpes zoster itself increase with age, but among persons who experience herpes zoster, older persons are much more likely to experience postherpetic neuralgia (PHN) (3), nonpain complications (3), hospitalizations (4), and interference with activities of daily living (5). Because persons aged 50 years can expect to live an additional 32 years and persons aged 60 years, another 23 years (6), vaccination must offer durable effectiveness to protect against this increasing burden of disease.
Merck is the only U.S. supplier of varicella zoster virus (VZV)-containing vaccines (Zostavax; varicella vaccine; and combined measles, mumps, rubella, and varicella vaccine). Beginning in 2007, Merck experienced production shortfalls of the bulk product used to manufacture VZV-based vaccines, leading to intermittent delays in filling of Zostavax orders. As a result of increased production capacity and reliability, by January 2012, Merck had resumed routine supply of varicella-containing vaccines, and Zostavax returned to normal shipping (7). As of August 2014, no subsequent supply disruptions have been reported.
# Studies of Efficacy and Duration of Protection
One randomized, placebo-controlled trial has evaluated short-term efficacy of herpes zoster vaccine administered to adults aged 50 through 59 years. This study of 22,439 adults in this age group showed a vaccine efficacy of 69.8% (95% confidence interval [CI] = 54.1%-80.6%) for the prevention of herpes zoster over a mean follow-up period of 1.3 years (8). Efficacy for prevention of PHN and long-term vaccine efficacy in this age group were not studied.
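As a reminder of the arithmetic behind these figures, vaccine efficacy in such trials is the relative reduction in incidence among vaccinated subjects; this is the standard epidemiologic definition, not a formula restated from the trial reports.

```latex
% Vaccine efficacy as the relative reduction in incidence.
\[
  \mathrm{VE} \;=\; \left(1 - \frac{I_{\text{vaccinated}}}{I_{\text{placebo}}}\right) \times 100\%
\]
% Illustrative arithmetic (rounded incidences, not exact trial data):
% incidences of about 5.4 versus 11.1 cases per 1,000 person-years give
% VE = (1 - 5.4/11.1) x 100% ~ 51%, the order of the 51.3% reported by SPS.
```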
Two studies have evaluated the short-term efficacy of the zoster vaccine in adults aged ≥60 years. The shingles prevention study (SPS) (9), a randomized controlled trial, followed 38,546 subjects for up to 4.9 years after vaccination (median = 3.1 years) and found a vaccine efficacy of 51.3% (CI = 44.2%-57.6%) for prevention of herpes zoster and 66.5% (CI = 47.5%-79.2%) for prevention of PHN. The short-term persistence substudy (STPS) (10) followed a subset of 14,270 SPS subjects primarily 4 to 7 years after vaccination and found a vaccine efficacy of 39.6% (CI = 18.2%-55.5%) for prevention of herpes zoster and 60.1% (CI = -9.8%-86.7%) for prevention of PHN. The point estimates for vaccine efficacy for prevention of herpes zoster by year after vaccination from the combined SPS and STPS studies decreased from 62.0% (CI = 49.6%-71.6%) in the first year after vaccination to 43.1% (CI = 5.1%-66.5%) in year 5. The 95% CIs around the point estimates for years 6 (30.6%) and 7 (52.8%) included zero; therefore, vaccine protection could not be demonstrated after year 5. Vaccine efficacy for prevention of PHN decreased from 83.4% (CI = 56.7%-95.0%) in year 1 to 69.8% (CI = 27.3%-89.1%) in year 2. Estimates for years 3 through 7 after vaccination were not statistically significantly different from zero, although point estimates were generally higher than the corresponding estimates of vaccine efficacy against herpes zoster.

Recommendations for routine use of vaccines in children, adolescents, and adults are developed by the Advisory Committee on Immunization Practices (ACIP). ACIP is chartered as a federal advisory committee to provide expert external advice and guidance to the Director of the Centers for Disease Control and Prevention (CDC) on use of vaccines and related agents for the control of vaccine-preventable diseases in the civilian population of the United States. Recommendations for routine use of vaccines in children and adolescents are harmonized to the greatest extent possible with recommendations made by the American Academy of Pediatrics (AAP), the American Academy of Family Physicians (AAFP), and the American College of Obstetrics and Gynecology (ACOG). Recommendations for routine use of vaccines in adults are harmonized with recommendations of AAFP, ACOG, and the American College of Physicians (ACP). ACIP recommendations adopted by the CDC Director become agency guidelines on the date published in the Morbidity and Mortality Weekly Report (MMWR). Additional information regarding ACIP is available at http://www.cdc.gov/vaccines/acip.
The long-term persistence study (11) continued to follow 6,687 vaccinated subjects from STPS primarily from year 7 through year 10 after vaccination. By the end of the STPS, subjects in the placebo group had been vaccinated; therefore, no concurrent control group was available for comparison. Instead, a statistical model estimated herpes zoster and PHN incidence in a comparable unvaccinated group using historical SPS control subjects. The model estimated a vaccine effectiveness of 21.1% (CI = 10.9%-30.4%) for prevention of herpes zoster and 35.4% (CI = 8.8%-55.8%) for prevention of PHN over years 7 to 10 combined. Methodologic challenges in reliance on herpes zoster incidence in historical controls for calculation of vaccine effectiveness against herpes zoster include the fact that several studies (3,12-14) have shown increases in herpes zoster incidence over time. The lack of a concurrent control group seriously diminishes the strength of evidence for duration of vaccine protection from years 7 through 10. In addition, although some vaccine protection is demonstrated during the combined years 7-10 using this methodology, there is a high degree of uncertainty about trends in vaccine effectiveness over this time frame. For these reasons, effectiveness of herpes zoster vaccine administered to persons aged ≥60 years for preventing herpes zoster beyond 5 years remains uncertain.
# ACIP Review
At the October 2013 meeting, ACIP reviewed results from an updated cost-effectiveness analysis comparing health outcomes, health care resource utilization, costs, and quality-adjusted life years (QALYs) related to herpes zoster, PHN, and non-pain complications among unvaccinated persons and persons vaccinated at either age 50, 60, or 70 years (15). The model assumed waning of vaccine protection against herpes zoster to zero over 10 years for all ages, based on SPS, STPS, and long-term persistence study data. Projecting outcomes from ages 50 to 99 years, vaccination at age 60 years would prevent the most shingles cases (26,147 cases per 1 million persons) followed by vaccination at age 70 years and then age 50 years (preventing 21,269 and 19,795 cases, respectively). However, vaccination at age 70 years would prevent the most cases of PHN (6,439 cases per 1 million persons), followed by age 60 years and then age 50 years (preventing 2,698 and 941 PHN cases, respectively). From a societal perspective, vaccinating at age 70, 60, and 50 years would cost $37,000, $86,000, and $287,000 per QALY saved, respectively. The high cost per QALY saved with vaccination at age 50 years results from limited impact on prevention of PHN and other complications from ages 50 through 59 years and no remaining vaccine protection after age 60 when risk for PHN and other complications increases sharply. Results were robust in sensitivity analyses in which various more optimistic and pessimistic assumptions were made regarding waning of vaccine protection.
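The cost-per-QALY figures above are incremental cost-effectiveness ratios (ICERs). The sketch below gives the standard formula; the worked numbers are illustrative magnitudes chosen to match the reported ratio, not values taken from the model itself.

```latex
% Incremental cost-effectiveness ratio comparing vaccination with no vaccination.
\[
  \mathrm{ICER} \;=\;
  \frac{C_{\text{vaccination}} - C_{\text{no vaccination}}}
       {\mathrm{QALY}_{\text{vaccination}} - \mathrm{QALY}_{\text{no vaccination}}}
\]
% Example: an added societal cost of \$3.7 million per 1 million persons
% that gains 100 QALYs yields \$37{,}000 per QALY saved.
```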
Because the protection offered by the herpes zoster vaccine wanes within the first 5 years after vaccination, and duration of protection beyond 5 years is uncertain, it is unknown to what extent persons vaccinated before age 60 years will be protected as they age and their risk for herpes zoster and its complications increases. Because duration of protection offered by the vaccine is uncertain, the need for revaccination is not clear. Assuming waning of vaccination protection according to currently available studies, the cost-effectiveness model projects a substantially greater reduction of disease burden, health care utilization, and costs with vaccination of older adults, who have higher incidence of herpes zoster and related complications. Considering that the burden of herpes zoster and its complications increases with age and that the duration of vaccine protection in persons aged ≥60 years is uncertain, ACIP maintained its current recommendation that herpes zoster vaccine be routinely recommended for adults aged ≥60 years.

After approval by the Food and Drug Administration for use of zoster vaccine in adults aged 50 through 59 years in 2011, ACIP initially considered use of the vaccine among adults in this age group but declined to change its recommendations at that time, citing shortages of Zostavax and limited data on long-term protection afforded by herpes zoster vaccine. A new review was conducted because the manufacturer has resumed routine supply of Zostavax and additional data on long-term protection have become available.
# What is currently recommended?
Considering that the burden of herpes zoster and its complications increases with age and that the duration of vaccine protection in persons aged ≥60 years is uncertain, ACIP's recommendation remains unchanged; herpes zoster vaccine is routinely recommended only for adults aged ≥60 years.
With FDA approval, Zostavax is available in the United States and indicated for use among adults aged ≥50 years. Vaccination providers considering the use of Zostavax among certain persons aged 50 through 59 years despite the absence of an ACIP recommendation should discuss the risks and benefits of vaccination with their patients. Although the vaccine has short-term efficacy, there have been no long-term studies of vaccine protection in this age group. In adults vaccinated at age ≥60 years, vaccine efficacy wanes within the first 5 years after vaccination, and protection beyond 5 years is uncertain; therefore, adults receiving the vaccine before age 60 years might not be protected when their risks for herpes zoster and its complications are highest. CDC is actively monitoring postmarketing data on duration of vaccine protection in adults vaccinated at age ≥60 years. As additional data become available, ACIP will reevaluate the optimal age for vaccination and the need for revaccination to maintain protection against herpes zoster and its complications. | 2,579 | {
"id": "108df12ae2d9bc11577354e5a4367b211e1b3ecc",
"source": "cdc",
"title": "None",
"url": "None"
} |
# MAHC
# Rationale
With hundreds of millions of visits1 to AQUATIC FACILITIES, waterparks, and natural recreational water sites each year, BATHERS expose themselves to many potential dangers in and around AQUATIC FACILITIES. In recent decades, public health practitioners have seen a dramatic increase in waterborne disease outbreaks associated with public disinfected AQUATIC FACILITIES (e.g., swimming pools, water parks).2 Drowning and falling, diving, chemical use, and suction injuries continue to be major public health injuries associated with AQUATIC FACILITIES, particularly for young children.3,4,5,6,7,8,9,10 Public health and SAFETY are therefore essential considerations, starting with the design and construction of public AQUATIC FACILITIES and continuing through their operation and maintenance.
# Recreational Water-Associated Illness Outbreaks and Injuries
# RWI Outbreaks
Since 1978, the number of recreational water-associated WATERBORNE DISEASE outbreaks (WBDOs) reported annually has increased dramatically.11 This increase is probably due to a combination of factors, including:
- The emergence of PATHOGENs, especially CHLORINE-tolerant Cryptosporidium,
- Increased participation in aquatic activities,
- Increases in the number of AQUATIC FACILITIES, and
- Increased recognition, investigation, and reporting of outbreaks that may have previously gone undetected.
Over 2009-2010, a total of 81 recreational water-associated WBDOs, 1,366 cases of illness, and 62 hospitalizations were reported to CDC. CDC documented that 57 of these outbreaks and 78% of the cases were associated with disinfected water venues.12 Multiple challenges exist for providing adequate cleaning and disinfecting of swimming water. Sunlight, urine, exposure to air, and inorganic and organic matter (i.e., sweat, saliva, and feces) can quickly deplete FREE AVAILABLE CHLORINE, the primary disinfectant used in POOLS. AQUATIC FACILITIES also provide potential exposure to FECAL contamination from other swimmers. These incidents are common in AQUATIC FACILITIES, especially from diaper-aged BATHERS who are not toilet trained (babies and toddlers).13
# Significance of Cryptosporidium
One such pathogen is Cryptosporidium14 (spread fecal-orally from person to person or from contaminated objects and media such as pool water), which can survive for days in chlorinated AQUATIC FACILITIES because it is extremely CHLORINE resistant.15,16,17 Cryptosporidium causes a profuse watery diarrhea that contains large numbers of infectious OOCYSTS, so if the water or surfaces at AQUATIC FACILITIES become contaminated, an outbreak can occur. Cryptosporidium and other waterborne pathogens have a low infectious dose and can still be excreted from the body weeks after diarrhea ends. These factors increase the potential for a waterborne disease outbreak. Waterborne diseases and outbreaks can include the following:

- Gastrointestinal illness resulting from exposure to pathogens such as Escherichia coli O157:H7 or Cryptosporidium,
- Infections of the brain, skin, ear, eye, and lungs,
- Wounds, and
- Exposure to pool-related chemicals.
There were 21 treated recreational water-associated outbreaks reported in 2009-2010 that were caused by Cryptosporidium, a substantial increase from the eight reported for treated AQUATIC FACILITIES in 1997. In addition, during 1999-2008, Cryptosporidium was identified as the cause of 74.4% of gastroenteritis outbreaks at disinfected AQUATIC FACILITIES, making it the leading cause of diarrheal disease outbreaks at disinfected AQUATIC FACILITIES.20
# Drowning and Injuries
Drowning and falling, diving, chemical use, and suction injuries continue to be major public health injuries associated with AQUATIC FACILITIES. Drowning is a leading cause of injury death for young children ages 1 to 4, and the fifth leading cause of unintentional injury death for people of all ages.21,22

Typically, CODE adoption bodies (federal, state, and local governments) publish a notice of their intent to adopt a CODE, make copies available for public inspection, and provide an opportunity for public input prior to adoption. As is also outlined in the FDA Model Food Code, this is usually done in one of two ways.
- The recommended method is the "short form" or "adoption by reference" approach, where a simple statement is published stating that certified copies of the proposed CODE are on file for public review. This approach may be used by governmental bodies located in states that have enabling laws authorizing the adoption of CODES by reference. An advantage to this approach is a substantial reduction in the cost of publishing and printing.
- The alternative method is the "long form" or "section-by-section" approach, where the proposed CODE is published in its entirety.

Both methods of adoption allow for the modification of specific provisions to accommodate existing law, administrative procedure, or regulatory policy.
# The MAHC Revision Process
# MAHC Revisions
Throughout the creation of the MAHC, the CDC accepted concerns and recommendations for modification of the MAHC from any individual or organization through two 60-day public comment periods via the email address
# Future Revisions
CDC realizes that the MAHC should be an evolving document that is kept up to date with the latest science, industry advances, and public health findings. As the MAHC is used and recommendations are put into practice, MAHC revisions will need to be made.
As the future brings new technologies and new aquatic health issues, the Conference for the Model Aquatic Health Code (CMAHC), with CDC participation, will institute a process for collecting national input that welcomes all stakeholders to participate in making recommendations to improve the MAHC so it remains comprehensive, easy to understand, and as technically sound as possible. These final recommendations will then be weighed by CDC for final incorporation into a new edition of the MAHC. Given the vision, mission, and goals of the MAHC as discussed in MAHC Section 1.3, the CDC will be especially interested in addressing any problems identified. CDC encourages interested individuals to consider raising issues and suggesting solutions through the CMAHC process.
# User Guide
# Overview
# MAHC Structure and Format
The MAHC utilizes the format also found in the FDA Model Food Code; thus, within the MAHC, references are made to the FDA Model Food Code and the Conference for Food Protection. These references are provided purely for context and perspective.
# Annex
# Rationale
The annex is provided as a supplement to the code; thus, the annex material is not intended to be interpreted or enforced as model code in order to keep future laws or other requirements based on the MAHC straightforward. However, the annex is provided specifically to assist users in understanding the intent behind code provisions and applying the provisions uniformly and effectively.
# Content
To use the MAHC more effectively, users should preview the annex contents before using parts of the MAHC model code language. The annex is structured to present the information by the specific MAHC section number to which they apply. The Annex and Appendices also provide information and materials intended to be helpful to the user such as forms and checklists.
# Glossary of the MAHC Code and Annex
# Acronyms and Initialisms Used in This Code and Annex
# Facility Design Standards and Construction
The MAHC has worked extensively with ICC and IAPMO to eliminate conflicts between the three codes. These discussions, along with NEHA participation, have resulted in changes in the MAHC and plans to change items in the other codes as they are brought up for revision. The MAHC is committed to resolving these conflicts now and in the future as these codes evolve.
# Filtration Flow Rate
The particle contamination burden determines the filtration flow rate for a given AQUATIC FACILITY. It is not possible to predict the particle contamination burden for every individual AQUATIC VENUE because the sources will likely vary significantly from one AQUATIC FACILITY to another. However, it is important to understand the upper limit of particle contamination to provide information for filtration designs. If the upper limit of the particle contamination burden is known, then it should be possible for the designer to specify a filtration system that can handle the maximum particle burden and ensure that water turbidity does not increase above an allowable or desirable level. Essentially, the RECIRCULATION SYSTEM needs to be designed to remove particles at least at the same rate at which they are being added by the environment (e.g., windblown and settling dust), BATHERS (e.g., personal care products, body excretions), and other sources.
# Determining Maximum Rate of Particle Contamination
The best means for determining this maximum rate of particle contamination is through direct measurement at operating facilities to ensure the data are indicative of normal activity. The rate of contamination (n, particles/time/gallon) is likely to vary by AQUATIC VENUE location, BATHER COUNTS, BATHER age, time of year, time of day, weather, and proximity to urban and desert environments.
Data Search
An extensive literature search turned up no relevant data defining the particulate contamination burden in AQUATIC FACILITIES. It is recommended that a model be developed that describes particle addition and subsequent removal by the filtration system. This would include developing a correlation between particle size and turbidity or clarity index; this correlation is needed from a practical point of view since regulations are likely to be developed based on turbidity or clarity. These data could then be used for making concrete, data-based decisions on removal rate requirements and help with defining the required filtration and circulation capacities.
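One minimal form of the recommended model is a well-mixed mass balance, dN/dt = L - (Q/V) x e x N, where L is the particle load, Q the recirculation flow, V the venue volume, and e the single-pass filter removal efficiency. The Python sketch below integrates this balance; every parameter value and name is an illustrative assumption, not measured data or a MAHC requirement.

```python
def simulate_particle_concentration(volume_gal: float,
                                    flow_gpm: float,
                                    filter_efficiency: float,
                                    load_particles_per_min: float,
                                    hours: float,
                                    dt_min: float = 1.0) -> float:
    """Integrate dN/dt = load - (flow/volume) * efficiency * N.

    Returns particles per gallon after `hours` of operation, starting from
    clean water. The steady-state concentration (particles per gallon) is
    load / (flow * efficiency).
    """
    n = 0.0  # total particles suspended in the venue
    for _ in range(int(hours * 60 / dt_min)):
        removed = (flow_gpm / volume_gal) * filter_efficiency * n * dt_min
        n += load_particles_per_min * dt_min - removed
    return n / volume_gal

if __name__ == "__main__":
    # Illustrative venue: 100,000 gal, 350 gpm recirculation, 50% single-pass
    # removal, 1e7 particles added per minute.
    conc = simulate_particle_concentration(100_000, 350, 0.5, 1e7, hours=24)
    print(f"~{conc:,.0f} particles per gallon after 24 h")  # approaches 57,143
```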
The rate of CHLORINE loss can be reduced by the use of other oxidizers, including potassium monopersulfate and ozone, or UV, which can destroy CONTAMINANTS which would otherwise react with CHLORINE. Additional research on the contributing factors to disinfectant demand (i.e., nitrogenous waste) may be warranted in the future as treatment methods are developed to reduce or eliminate them by means other than OXIDATION. It is anticipated that this research would identify the introduction rate of the CONTAMINANT, resulting concentrations, and the effect that reduction or elimination of this CONTAMINANT would have on disinfectant demand or other ancillary benefits (i.e., reduction of combined chlorines).
Chemical Feed Pump Sizing
Further data collection on CHLORINE usage in real-world AQUATIC VENUE situations under different environmental and operational conditions could be used to develop an effective rate law from which the sizing of chemical feed pumps could then be calculated.31,32 The criteria for specifying a chemical feed pump for an AQUATIC VENUE are based on its ability to feed against the process piping pressure and to provide sufficient feed rate to maintain a disinfectant residual in the water. Several states require chemical feed pumps for CHLORINE to be capable of providing up to 10 PPM of CHLORINE in the pipe returning water from the RECIRCULATION SYSTEM back to the POOL. Once actual CHLORINE usage is obtained, a surplus SAFETY factor could be introduced to slightly oversize the feed pump to ensure that the disinfectant dosing amount can be increased to meet increases in demand. Any such sizing requirements need to specify the timeframe within which the pump must be able to satisfy the CHLORINE dosing required.
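The 10 PPM criterion above reduces to a simple mass-balance calculation: pounds of available chlorine per day equal the return-line flow times the target dose. In the sketch below, the 100 gpm flow and the 12.5 trade-percent sodium hypochlorite strength (roughly 1.25 lb of available chlorine per gallon) are assumed example values, not MAHC requirements.

```python
LB_PER_GALLON_WATER = 8.34  # pounds per gallon of water

def chlorine_demand_lb_per_day(flow_gpm: float, dose_ppm: float) -> float:
    """Pounds of available chlorine per day to hold `dose_ppm` in `flow_gpm`."""
    gallons_per_day = flow_gpm * 60 * 24
    return gallons_per_day * LB_PER_GALLON_WATER * dose_ppm / 1_000_000

def hypochlorite_gal_per_day(demand_lb_per_day: float,
                             available_cl_lb_per_gal: float = 1.25) -> float:
    """Gallons per day of hypochlorite solution delivering that demand."""
    return demand_lb_per_day / available_cl_lb_per_gal

if __name__ == "__main__":
    demand = chlorine_demand_lb_per_day(flow_gpm=100, dose_ppm=10)
    print(f"Available chlorine needed: {demand:.1f} lb/day")  # ~12.0 lb/day
    print(f"12.5% hypochlorite feed: {hypochlorite_gal_per_day(demand):.1f} gal/day")
```

A surplus SAFETY factor, as noted above, would then be applied on top of this base rate when selecting the pump.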
Disinfection By-Product Issues
Chlorination of Water
Chlorination, using CHLORINE as the disinfectant, is the most common procedure for AQUATIC VENUE water DISINFECTION and inactivation of waterborne microbial pathogens. BATHER activity and environmentally introduced material provide a broad range of precursors with which disinfectants can react (e.g., perspiration, urine, mucus, skin particles, hair, body lotions, fecal material, soil). When CHLORINE reacts with these precursors, a variety of chemical reactions take place, including the formation of DISINFECTION BY-PRODUCTS (DBPs).33,34,35,36,37 DBPs may also be introduced into the AQUATIC VENUE via the water used to fill the AQUATIC VENUE, depending on the supply water quality. Municipal fill water can also include chloramines, as some municipal systems switch from chlorination to chloramination to meet EPA DISINFECTION by-product requirements.38 CHLORINE gas, if used, is also extremely toxic.39,40,41
Types of Disinfection By-Products
DBPs can be organic42,43 or inorganic44,45,46 (e.g., chloramines and cyanogen chloride). The major by-products of DISINFECTION using hypobromous acid (HOBr) and hypochlorous acid (HOCl) are bromoform (CHBr3) and chloroform (CHCl3), respectively. Chloroform and bromoform are highly volatile compounds that can be inhaled in AQUATIC VENUE environments and also readily absorbed through the skin.47,48,49
Classes of Organic DBPs
Some classes of organic DBPs50 are:

- TRIHALOMETHANES (total trihalomethane is the sum of the concentrations of chloroform, bromoform, bromodichloromethane, and dibromochloromethane);
- Chlorinated phenols (2-chloro-, 2,4-dichloro-, and 2,4,6-trichlorophenol); and
- Haloketones (bromopropanone, 1,1-dichloropropanone, 1,1,1-trichloropropanone).58,59,60,61

The one prospective study available62 suggests swimming does not increase the risk of asthma. To the contrary, the study found swimming increased lung function and reduced the risk of asthma symptoms at age seven. The health benefits associated with swimming include improvement of asthma symptoms and cardiovascular fitness. Pediatricians have long recommended swimming for asthmatic children because of its lower asthmogenicity compared with other forms of exercise. The Belgian Superior Health Council63 reviewed the available science related to AQUATIC VENUE swimming and the development of childhood asthma. The Council, in its 2011 report No. 8748 (reiterated in 2012), concludes that swimming remains highly advisable, even in the case of asthma. According to the Council, "For this target group, the advantages of swimming under good hygienic conditions in monitored AQUATIC VENUES outweigh the risk of toxicity linked to CHLORINE and its by-products."64,65,66,67,68 The World Health Organization states that "the risks from exposure to chlorination by-products in reasonably well-managed AQUATIC VENUES would be considered to be small and must be set against the benefits of aerobic exercise and the risks of infectious disease in the absence of DISINFECTION."75 Improved water quality management is recommended to minimize the formation and accumulation of these compounds.
Urea Concentrations in Pool Water
A major CONTAMINANT in AQUATIC VENUE water is urea. Urea is chiefly derived from swimmers urinating in AQUATIC VENUE water but is also present in swimmers' sweat. It has been shown that urea reacts with hypochlorous acid to produce TRICHLORAMINE. However, while breakpoint destruction of ammonia is very fast, the reaction of hypochlorous acid with urea is very slow; therefore, urea is difficult to remove quickly by shocking the AQUATIC VENUE water. There are no guidelines in the U.S. for MONITORING the urea concentration in AQUATIC VENUE water or suggested levels of concern. Input of urea is most effectively minimized by changes in swimmers' behavior and hygiene.76,77,78,79,80
Technical Details
Detailed specifications are required to ensure that there is no misunderstanding, ambiguity, or omission between the design professional and the AHJ reviewer.
75. World Health Organization. Guidelines for safe recreational-water environments, Volume 2: Swimming pools, spas and similar recreational-water environments.
76. Blatchley E, et al. Reaction mechanism for chlorination of urea. Environ Sci Technol. 2010 Nov 15;44(22):8529-34. doi: 10.1021/es102423u. Epub 2010 Oct 21.
77. De Laat J, et al. Concentration levels of urea in swimming pool water and reactivity of chlorine with urea. Water Research. 2011;45(3):1139-1146.
78. Schmalz C, et al. Trichloramine in swimming pools: formation and mass transfer. Water Research. 2011:2681-2690.
79. Fuchs J. Chlorination of pool water: urea degradation rate. Chemiker Ztg.-Chem. Apparatur. 1962;86(3
# Plan Approval
The construction of public AQUATIC FACILITIES should not be undertaken without a thorough review and approval of the proposed construction plans by the AHJ. Construction costs for AQUATIC FACILITIES can be in the millions of dollars, and very costly mistakes in design and equipment choices can occur if plans are not reviewed before construction. These mistakes could result in both public health hazards and additional remodeling costs.
Most of the states require that plans be submitted for review and approval by the regulatory authority before a public AQUATIC FACILITY can be constructed. Although there is considerable variation in the amount of information and detail required on the plans, most of the jurisdictions require at least a plot plan with sufficient detail to allow for a reasonable review of the proposed project.
The licensed professional engineer or architect should have at least one year of previous experience with public AQUATIC FACILITY design. Most states will allow any professional engineer or architect to design an AQUATIC FACILITY; however, because AQUATIC DESIGN technology is sufficiently complex, specific prior experience in AQUATIC FACILITY design and construction is strongly recommended.
Any final approval of plans by the AHJ should be dependent on approval by all other appropriate agencies.
For example, responsibility for reviewing plans for structural SAFETY, and for ensuring the AQUATIC FACILITY is designed to withstand anticipated loading (not only the POOL shell, but also cases where the POOL is located on an upper floor of a building or on a rooftop), generally rests with the local building department. If there is no local building code department or requirements, the design engineer or architect must assume this responsibility. This may include requiring the architect or engineer to certify the structural stability of the POOL shell under both full and empty conditions.
# Replacements
Most jurisdictions allow for replacements in-kind.
Compliance Certificate
Design Parameters
There are multiple forms of acceptable finishes available, including but not limited to: paint, marcite plaster finish, quartz plaster finish, aggregate plaster finish, vinyl or PVC liner/paneling systems, stainless steel, and tile. Each system has advantages and disadvantages associated with cost, durability, clean-ability, etc. These advantages and disadvantages are also subject to installation design issues (e.g., indoors/outdoors, above/below water level, environmental effects, freezing or temperature exposures).
Smooth Finish
SKIMMER POOLS require a six-inch (152 mm) to 12-inch (305 mm) high finish due to the varying water height associated with the in-POOL surge capacity of SKIMMER POOL systems. Gutter or perimeter overflow systems require a minimum finish height of two inches (51 mm). If dark colors are utilized for the POOL finish, the finish should not exceed a maximum height of 12 inches (305 mm), for contrast purposes. Typical finishes include tile, stainless steel, vinyl, and fiberglass.
Slip Resistant "Slip resistant" is usually considered to mean having a static coefficient of friction of 0.6 or better for both wet and dry conditions. Water three feet (0.9 m) and less is considered shallow water and the majority of BATHERS are capable of walking on the POOL bottom at these depths, so a slip-resistant surface is required. At depths greater than three feet (0.9 m), most BATHERS are sufficiently buoyant making the coefficient of friction for the POOL floor surface less important. Slip resistant surfaces shall meet or exceed the minimum coefficient of friction (typically 0.8 for ramped surfaces and 0.6 for other wet surfaces; currently, ASTM standard C1028 is under revision.) as set forth by the following groups: Cold Weather Paints suitable for use as vapor retarders usually have high solids, and must be carefully applied to achieve a rating of 0.4 perm for one coat. It is important to get very good coverage without gaps or thin spots. The paint supplier or manufacturer should be consulted for ratings and BEST PRACTICEs.
Paint or Coating
One U.S. perm equals 1.0 grain of moisture per square foot per hour per inch of mercury differential pressure. One U.S. perm equals 57 SI perm.
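The quoted conversion can be checked dimensionally; the derivation below assumes the SI (metric) perm is taken as 1 ng/(s·m²·Pa).

```latex
% Dimensional check of the US-to-SI perm conversion, using
% 1 grain = 64.799 mg, 1 ft^2 = 0.092903 m^2, 1 h = 3600 s, 1 inHg = 3386.4 Pa.
\[
  1~\text{US perm}
  = \frac{64.799 \times 10^{-6}~\text{kg}}
         {0.092903~\text{m}^2 \times 3600~\text{s} \times 3386.4~\text{Pa}}
  \approx 5.72 \times 10^{-11}~\frac{\text{kg}}{\text{s}\cdot\text{m}^2\cdot\text{Pa}}
  \approx 57~\text{SI perm}
\]
```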
Mechanical Systems
# Indoor Aquatic Facility Air Pressure
Air-pressure-supported INDOOR AQUATIC FACILITIES may require pressurization of adjoining or connected spaces.
Air Ducts
Refer to the 2011 ASHRAE Applications Handbook on Natatorium Design for recommendations.
Indoor Aquatic Facility Doors
Where exterior doors of an INDOOR AQUATIC FACILITY may be exposed to temperatures below the freezing temperature of water, the frames should be constructed to minimize the risk of the door freezing closed. The issue here is one of emergency exit: a large amount of water vapor is available to freeze into the gap between doors and inhibit emergency exiting.
# Exception:
Other doors should be acceptable, subject to approval by the AHJ, where heating systems are so arranged as to maintain such doors at least 5°F (2.8°C) above the freezing temperature of water.
Indoor Aquatic Facility Windows
Windows are usually maintained above the air dew point, to prevent condensation and mold growth, by heated supply air flowing over them. Heavy window frames on the interior side interfere with the proper flow of this heated air via the Coanda effect (a corollary of Bernoulli's principle). There are many ways to mechanically address window condensation issues: air supply can be dumped on glazing from both above and below, and fin tube heaters have also been effectively employed along sills in many instances.
# Overflow System / Gutters
At the release date of the MAHC 1st Edition, overflow gutter system products are currently listed by NSF to an engineering specification. Language is ready for ballot into NSF/ANSI Standard 50.
# Skimmers

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for SKIMMERS.

# Main Drain System

At the release date of the MAHC 1st Edition, American National Standards Institute/Association of Pool and Spa Professionals (ANSI/APSP) Standard 16-2011, titled "American National Standard for Suction Fittings for Use in Swimming Pools, Wading Pools, Spas and Hot Tubs," is the current version of the applicable STANDARD for main drain systems.

# Multiport Valves

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for multiport valves.

# Face Piping

At the release date of the MAHC 1st Edition, face piping products are currently listed by NSF to an engineering specification. It is currently at the Task Group Level for development of language for inclusion into NSF/ANSI Standard 50.

# Diaphragm Valves

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 14-2008e is the current version of the applicable STANDARD for diaphragm valves. Product is currently at the Task Group Level for development of language for inclusion into NSF/ANSI Standard 50.

# Check Valves

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 14-2008e is the current version of the applicable STANDARD for check valves. Product is currently at the Task Group Level for development of language for inclusion into NSF/ANSI Standard 50 as well.

# Fittings

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 14-2008e is the current version of the applicable STANDARD for fittings. Product is currently at the Task Group Level for development of language for inclusion into NSF/ANSI Standard 50 as well.

# Pipe

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 14-2008e is the current version of the applicable STANDARD for pipe. Product is currently at the Task Group Level for development of language for inclusion into NSF/ANSI Standard 50 as well.

# Pumps

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013, UL 1081 (non-metallic pumps up to 5 hp), California Assembly Bill 1953, and United States National Electrical Code NFPA-70 (2008) are the current versions of the applicable STANDARDS for pumps.

# Strainers

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for strainers.

# Gauges

At the release date of the MAHC 1st Edition, gauges are currently listed by NSF to an engineering specification. It is currently at the Task Group Level for development of language for inclusion into NSF/ANSI Standard 50.
# Flow Meters

At the release date of the MAHC 1st Edition, flow meters are currently listed by NSF to an engineering specification. Language is ready for ballot into NSF/ANSI Standard 50.

# Notes About Component Requirements: Heaters

# HVAC and Dehumidifiers

At the release date of the MAHC 1st Edition, UL 1995 is the current version of the applicable STANDARD for HVAC and dehumidifiers.

# Solar Pool Heaters

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for solar POOL heaters.
# Furnaces

At the release date of the MAHC 1st Edition, ANSI Z83.8-2006 Gas Heaters and Gas-Fired Duct Furnaces, CSA 2.6-2006 Gas Heaters and Gas-Fired Duct Furnaces, and UL 757 Oil-Fired Furnaces are the current versions of the applicable STANDARDS for furnaces.

# Boilers

At the release date of the MAHC 1st Edition, the ASME Boiler Code and ANSI Z21.13/CSA 4.9 Gas-Fired Hot Water Boilers are the current versions of the applicable STANDARDS for boilers.

# Gas-Fired Pool Heaters

At the release date of the MAHC 1st Edition, ANSI Z21.10.3/CSA 4.3 and ANSI Z21.56/CSA 4.7 are the current versions of the applicable STANDARDS for gas-fired POOL heaters. Language is ready for ballot into NSF/ANSI Standard 50.

# Flues

At the release date of the MAHC 1st Edition, UL 1777 is the current version of the applicable STANDARD for flues.
# Notes About Component Requirements: Filtration
# Rapid Sand Filters

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for rapid sand filters.

# High-Rate Sand Filters

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for high-rate sand filters.

# Precoat Filters

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for precoat filters. Filters previously known as diatomaceous earth filters were renamed precoat filters based on the significant use of alternate filter media such as perlite.

# Filter Media

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for filter media.

# Cartridge Filters

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for cartridge filters.

# Other Filter Types

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for other filter types.
# Notes About Component Requirements: Disinfection Equipment
# Mechanical Chemical Feeding Equipment

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013, UL 1081, and CSA C22 are the current versions of the applicable STANDARDS for mechanical chemical feeding equipment.

# Ozone

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013, UL 1081, CSA C22, and United States National Electrical Code NFPA-70 (2008) are the current versions of the applicable STANDARDS for ozone generators.

# Ultraviolet Light

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 (which includes testing for Cryptosporidium validation), CSA C22, and United States National Electrical Code NFPA-70 (2008) are the current versions of the applicable STANDARDS for ultraviolet light systems.

Other potential guidance can be found in the U.S. EPA UV Design Guidance.

# In-line Electrolytic Chlorinator

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013, UL 1081, CSA C22, United States National Electrical Code NFPA-70 (2008), and the Canadian PMRA are the current versions of the applicable STANDARDS for in-line electrolytic chlorinators.

# Brine Batch Electrolytic Chlorine or Bromine Generator

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013, UL 1081, CSA C22, United States National Electrical Code NFPA-70 (2008), and the Canadian PMRA are the current versions of the applicable STANDARDS for brine batch electrolytic CHLORINE or bromine generators.

# Copper/Silver and Copper Ion Generator

At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013, UL 1081, CSA C22, United States National Electrical Code NFPA-70 (2008), and the Canadian PMRA are the current versions of the applicable STANDARDS for copper/silver and copper ion generators.

# Chemical Storage

At the release date of the MAHC 1st Edition, United States National Fire Code NFPA-1 (2009) is the current version of the applicable STANDARD for chemical STORAGE.

# Automated Controllers

At the release date of the MAHC 1st Edition, standards including United States National Electrical Code NFPA-70 (2008) are the current versions of the applicable STANDARDS for AUTOMATED CONTROLLERs.
# Water Quality Testing Device
At the release date of the MAHC 1st Edition, NSF/ANSI Standard 50-2013 is the current version of the applicable STANDARD for WATER QUALITY TESTING DEVICEs.

# Notes About Component Requirements: Electrical Equipment

# National Electrical Code

At the release date of the MAHC 1st Edition, United States National Electrical Code NFPA-70 (2008) is the current version of the applicable STANDARD for general electrical work.

# Lights

At the release date of the MAHC 1st Edition, UL 1241 (Junction Boxes for Swimming Pool Luminaires), UL 676 (Underwater Luminaires and Submersible Junction Boxes), UL 8750 (Light Emitting Diode (LED) Equipment for Use in Lighting Products), and UL 379 (Transformers for Fountain, Swimming Pool, and Spa Luminaires) are the current versions of the applicable STANDARDS for lights.

# Notes About Component Requirements: Deck Equipment

# Diving Boards and Platforms

At the release date of the MAHC 1st Edition, ANSI/NSPI-1 2003 is the current version of the applicable STANDARD for diving boards and platforms.

# Starting Blocks

At the release date of the MAHC 1st Edition, ANSI/NSPI-1 2003, FINA, NFSHSA, and NCAA are the current versions of the applicable STANDARDS for starting blocks.

# Lifeguard Chairs

At the release date of the MAHC 1st Edition, ANSI/NSPI-1 2003 is the current version of the applicable STANDARD for lifeguard chairs.

# Ladders

At the release date of the MAHC 1st Edition, ANSI/NSPI-1 2003 is the current version of the applicable STANDARD for ladders.

# Handrail

At the release date of the MAHC 1st Edition, ANSI/NSPI-1 2003 is the current version of the applicable STANDARD for handrail.

# Stairs

At the release date of the MAHC 1st Edition, ANSI/NSPI-1 2003 is the current version of the applicable STANDARD for stairs.

# Handicapped Lifts

At the release date of the MAHC 1st Edition, the AMERICANS WITH DISABILITIES ACT is the applicable STANDARD for handicapped lifts and is regulated by the Department of Justice.

# Safety Covers

At the release date of the MAHC 1st Edition, ANSI/NSPI-1 2003, ASTM 1346, and UL 2452 are the current versions of the applicable STANDARDS for SAFETY covers.
# Aquatic Venue Operation and Facility Maintenance
4.5 Aquatic Venue Structure
# Design for Risk Management
Working with the owner and/or aquatic risk management consultant, the designer can outline the anticipated zones of PATRON surveillance and place fixed lifeguard stations accordingly. It is important to have a person knowledgeable in aquatic risk management advise on the placement of fixed lifeguard stations and on the general design of the AQUATIC VENUE as it relates to lifeguard placement, so as to avoid designing in blind spots, glare issues, and other obstructions. This also allows the owner to influence the design so it meets the anticipated labor requirements. Where the AQUATIC VENUE design requires more lifeguards, there is pressure on owners to minimize labor by extending zones of PATRON surveillance; small design changes could reduce zone size without taking away from PATRON enjoyment. This is also a critical need when considering alterations such as the addition of new AQUATIC FEATURES (e.g., waterslides, mushrooms) that change visibility and the PATRON zones of surveillance and increase the number of lifeguards needed. This knowledge is important to have while weighing the benefits of the new AQUATIC FEATURE against the increased labor cost.
# Under Five Feet
A maximum slope of 1:12 is used in water under five feet (1.5 m) for consistency with ADA, since these ramps can be used for access. Variances may be considered by the AHJ.
4.5.2.4 Drain
POOLS should be designed to allow the water to drain to a low point in order to prevent standing water from creating a contamination issue.
# Pool Access / Egress
4.5.3.1 Accessibility
As required by the Department of Justice, all POOL designs shall be compliant with the Americans with Disabilities Act (ADA). The POOL design shall not create SAFETY hazards: necessary clearances must be maintained, the recirculation of AQUATIC VENUE water must not be infringed upon, and areas of potential entrapment must not be created.
# Stairs
# Deep Water
It is common, especially in high-end diving wells with ten-meter towers, for there to be "swim-out" stairs underneath the dive tower. This provision allows for those types of deep water stairs without requiring the stairs to continue down to the bottom of the POOL (which may be 17 feet (5.2 m) deep and impractical in the diving well example).
# Perimeter Gutter Systems
It is not the intent of this section to eliminate the "roll-out gutter," as these need to be a minimum of six inches (15.2 cm) from DECK to water level.
Regarding ADA 34 inch (86.4 cm) standards: the current MAHC language stipulates that 28 inches (71.1 cm) is a minimum, which does not preclude a designer from using 34 inch (86.4 cm) railings.
# ADA Accessibility
The handrail configuration and dimensions, including the outside diameter, specified here for POOL access are not drawn from ADA requirements; these parameters are intended to address the necessary structural requirements, which are not addressed in ADA. In the end, ADA STANDARDS will always take precedence over anything in the MAHC.
Another source for guidance is the Architectural Barriers Guide; refer to the Swimming Pools, Wading Pools, and Spas provisions, section numbers 242 and 1009.
# Dimensions
Dimensions of handrails should conform to the requirements of MAHC Table 4.5.5.7 and MAHC Figure 4.5.5.7.1. This pertains to the handrail comments in MAHC Annex Section 4.5.5.5. The MAHC does not intend to choose only certain aspects of ADA to enforce; the MAHC agrees that all components of the current ADA requirements will stand irrespective of the MAHC language. However, ADA does not address structural requirements.
# Pool Wall
This is a design criterion for POOLS in some of the western states. The initial intent was to design against entrapment between the railing and the POOL wall, both for fingers and for the hands/wrists/arms of smaller children. CPSC recommends four inches (10.2 cm) based on child anthropometry tables. Anthropometric charts were reviewed in establishing the current allowable range.
# Support
The structural requirements in the ladder, handrail, and railing section are taken from commercial manufacturers and their recommended data.
# Zero Depth (Sloped) Entries
The term "light pastel color" should be consistent with a Munsell color value of 6.5 or higher.
School, facility, or team logos incorporated in the POOL finishes are acceptable but will require review by the AHJ to ensure the design of such logos does not impede the color and finish functionality listed above.
Ultimately, water clarity is the primary criterion with which to be concerned. If a POOL has crystal clear water and a BATHER is lying on the floor of a POOL with a blue finish versus one with a white finish, it is logical to think that the BATHER would be more identifiable against the darker finish. However, there is also the argument for recognizing dirt and debris at the bottom of the POOL.
# Munsell Color Value
The State of Wisconsin uses the Munsell color chart and requires values of 6.5 or greater. The Munsell color system looks at color purity, hue, and lightness to assign a value. This system is used in other industries and information on this system is easily available.
A contractor could provide a mock-up during the submittal process to the AHJ or engineer for review and approval. Plaster and other quartz aggregate manufacturers have reflectance testing that is available for finish samples.
The American Plasterer's Council defers to ASTM Standard E1477-98a, titled "Standard Test Method for Luminous Reflectance Factor of Acoustical Materials by Use of Integrating Sphere Reflectometers," to determine LRV values. It is a fairly simple test method whereby "Test specimens are measured for (total) luminous reflectance factor by standard color-measurement techniques using a spectrophotometer, tristimulus (filter) colorimeter, or other reflectometer having a hemispherical optical measuring system, such as an integrating sphere. The specular component is included to provide the total reflectance factor condition. The instrument STANDARD is referenced to the perfect reflecting diffuser. Luminous reflectance factor is calculated as CIE tristimulus value Y for the CIE 1964 (10°) standard observer and CIE standard illuminant D65 (daylight) or F2 (cool white fluorescent)."
# Structural Stability
Expansion and/or CONSTRUCTION JOINTs should be utilized when determined prudent by a licensed structural engineer. Any joints should utilize waterproofing strategies, such as water stops, since joints can compromise a POOL's water tightness. All joints should be inspected regularly to verify their condition.
# Hand Holds
Based on anthropometric data for children between 6.5 and 7.5 years of age, the difference between stature and vertical grip reach averages 9.3 inches (23.6 cm), so this measurement has been reinstated at nine inches (22.9 cm).
# Infinity Edges
The area behind an INFINITY EDGE is an inaccessible area because the DECK needs to end in order to achieve the "infinity" effect. Typically this is achieved by an elevation difference: the DECK continues to extend around the POOL perimeter, but below the edge. The MAHC goal was to allow these types of design features while ensuring that these areas of the POOL are safe and still readily accessible for emergency response.
# Handholds
INFINITY EDGES can be accomplished with an obtuse angle or knife edge, or even a C701 handhold. The edge is typically submerged a fraction of an inch.
# Maximum Height
Building CODES typically require a railing for heights greater than 30 inches (76.2 cm) for SAFETY purposes.
# Underwater Benches
UNDERWATER BENCHes are intended to allow BATHERS to sit in locations along the POOL wall. These chair/bench-like structures either protrude into the POOL from the POOL wall or are recessed into the POOL wall. To accommodate the size of most people, the seat itself is often 16 inches (40.6 cm) to 18 inches (45.7 cm) wide and is located 12 inches (30.5 cm) to 24 inches (61.0 cm) below the water line.
# Slip Resistant
Slip-resistant surfaces shall meet or exceed the minimum coefficient of friction as set forth by the following groups:
- Americans with Disabilities Act (ADA)
- Occupational Safety and Health Administration (OSHA)
# Maximum Water Depth
The five foot (1.5 m) depth restriction addresses the potential safety issue of stepping or otherwise moving off a bench into deep water. The seat depth below the water line is limited to a maximum of 20 inches (50.8 cm) so that a non-swimmer may be comfortable at that depth; once they move from the bench into greater water depth, it may exceed their comfort and/or skill level.
# Underwater Ledges
Ledges for resting are typically located in water deeper than five feet (1.5 m). They may be provided at the deep end of a competition POOL or other POOL with swim lanes. A ledge for resting may also be provided along the sidewalls of the same POOLS to allow resting for swimmers using the POOL for recreational swimming.
# Five Feet or Greater
A ledge for resting should not allow a person to use the ledge to cross from a shallow area into a deeper area of a POOL.
# Structural Support
UNDERWATER LEDGEs providing structural support for an upper wall (structural ledges) are often located at a water depth of about three feet (0.9 m), depending on the wall manufacturer. The upper wall is a product manufactured of stainless steel, fiberglass, acrylic, or other materials. The support ledge and the wall below the ledge are concrete, gunite, or other materials that the wall manufacturer specifies. Although POOLS using this wall structure are generally smaller POOLS, these POOLS can be any depth.
# Underwater Shelves
UNDERWATER SHELVES can be areas such as an expanded top tread of a stairway or a separate area many feet wide and long. The main purpose is often for small children, for lounging in very shallow water or in chairs, or for areas contoured as couches.
# Lettering Size
A ratio of one inch (2.5 cm) of letter height to ten feet (3.0 m) of viewing distance yields oversized letters; one inch (2.5 cm) of letter height to 16.6 feet (5.1 m) is ideal. One inch (2.5 cm) of letter height to 30 feet (9.1 m) of viewing distance is the minimum.
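To make the ratios concrete, the sketch below (a hypothetical helper, not MAHC code language) computes the oversized, ideal, and minimum letter heights for a given viewing distance using the three ratios above.

```python
# Hypothetical helper illustrating the letter-height ratios above.
# Ratios (inches of letter height per foot of viewing distance):
# 1 in per 10 ft (oversized), 1 in per 16.6 ft (ideal), 1 in per 30 ft (minimum).

def letter_heights(viewing_distance_ft: float) -> dict:
    """Return letter heights (inches) for a given viewing distance."""
    return {
        "oversized": viewing_distance_ft / 10.0,
        "ideal": viewing_distance_ft / 16.6,
        "minimum": viewing_distance_ft / 30.0,
    }

# Example: depth markers read from 50 feet away.
print(letter_heights(50.0))
# {'oversized': 5.0, 'ideal': 3.01..., 'minimum': 1.66...}
```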
# Feet and Inches
Some states may require measurements in both feet/inches and meters. Some states do not allow abbreviation of units.
No Diving Markers
# Depths
The symbol is required because it is the universally recognized symbol for "No Diving" and can be understood by those who do not read and by non-English speaking individuals.

Diving boards are permitted only when the diving envelope conforms to the STANDARDS of the certifying agency that regulates diving at the facility: the National Collegiate Athletic Association (NCAA), the National Federation of State High School Associations (NFSHSA), the Federation Internationale de Natation Amateur (FINA), or U.S. Diving. If the AQUATIC VENUE does not have competitive diving, then the diving envelope must still conform to these diving envelope STANDARDS.

The intent of this section is to prohibit recreational and/or unsupervised users from performing DECK level diving into water five feet (1.5 m) or shallower. It is not intended to apply to competitive divers competing under the auspices of an aquatics governing body (e.g., FINA, U.S.A. Swimming, NCAA, NFSHSA, YMCA) or under the supervision of a coach or instructor. The vast majority of current STANDARDS allow for diving off the side of the POOL in water five feet (1.5 m) deep or greater. Water depths of at least five feet (1.5 m) are generally considered safe for diving from the edge of a POOL where the coping/DECK is the typical six inches (15.2 cm) above the water surface. AQUATIC VENUE size and geometry may necessitate additional depth marking placements about all sides of the AQUATIC VENUE.
The American Red Cross recommends nine feet (2.7 m) of water depth based on analyses of spinal cord injuries. 82 The organization has clarified this recommendation to state, "Be sure water is at least nine-feet deep unless performed with proper supervision and in water depths that conform with the rules of the concerned regulating body, such as USA Swimming, the National Collegiate Athletic Association (NCAA), the Amateur Athletic Union (AAU), the National Federation of State High School Associations (NFSHSA), YMCA of the USA, and the International Swimming Federation (FINA)."
Although there are some national data on spinal cord injuries (SCIs) in general, data on diving-specific SCIs are limited, particularly for SCIs involving public POOL-related competition diving.

General data on spinal cord injuries: For SCIs in general, approximately 40 SCIs per million population occur each year in the US (about 12,400 injuries for 2010), with approximately 4.5% related to diving injuries. 83 SCIs are a catastrophic public health problem leading to disability and decreased life expectancy, with a large economic and social burden for those who suffer the injury. 84,85 The MAHC recommends that these national databases be re-analyzed with aquatics in mind to gather more detailed information on SCIs related to diving in treated AQUATIC VENUES, particularly public AQUATIC VENUES.

In one review, nearly all diving-associated injuries occurred in water depths of five feet (1.5 m) or less. Only one injury occurred in water between six and seven feet (1.8 to 2.1 m). Another global review study showed that 89% of diving-associated neck injuries occurred in water less than five feet (1.5 m) deep. 89 These data support keeping non-competition DECK level diving to water depths greater than five feet (1.5 m).
# Deck level diving and swimming pool-related
An example of an international "No Diving" Marker:
# Dual Marking System
A symmetrical AQUATIC VENUE design is a design which is circular in nature, where there is a shallow end around the entire perimeter and the bottom slopes from the perimeter towards a deeper portion in the center.
# Wading Pool Depth Markers
A WATERSLIDE RUN-OUT in a WADING POOL may hold up to six inches (15.2 cm) of water without necessitating a no-diving sign or depth marker.
# Aquatic Venue Shell Maintenance
# Special Use Aquatic Venues
During the final comment period, SURF POOLS were identified as different from WAVE POOLS, and many of the requirements for MAHC Section 4: Design and Construction are not applicable to SURF POOLS. The term SPECIAL USE AQUATIC VENUE has been added to potentially allow construction and use of SURF POOLS and any other, yet to be identified, AQUATIC VENUE or POOL that, while meeting the intent of CODE applicability to public AQUATIC FACILITIES, cannot practically be designed to meet existing design standards and keep the intended use. It is anticipated that appropriate design standards will be developed and incorporated in the CODE as part of the MAHC revision process.
There are three types of SURF POOL systems currently available or being developed.

Manual controls would almost certainly be set based on time of day. As the amount of daylight fluctuates throughout the year, these settings would need to be adjusted.
Light Levels
# Minimum Levels
The minimum light levels are as recommended in the Illuminating Engineering Society of North America (IESNA) RP-6-88, "Recommended Practice for Sports and Recreational Area Lighting," for the recreational class of use. Higher light levels are recommended for various competitive classes of use. Indoor and outdoor settings differ because outdoor settings usually have a higher contrast with the surrounding darkness than occurs indoors.
# Overhead Lighting
Avoid glare by keeping overhead lighting directed 60 to 90 degrees from horizontal relative to the eye. Glare on water can be avoided by direct lighting (i.e., the more direct the light, the less opportunity for glare). Also consider maintaining a close ratio between the underwater and overhead lighting levels to obtain a balance, thus avoiding glare.
# Artificial Lighting
Glare from artificial light that interferes with the lifeguard's view of the swimmers shall be avoided.

Underwater lighting requirements have historically been expressed in watts per square foot of POOL surface. With today's higher-efficiency lamps (i.e., more light output per watt), a watt-based requirement no longer makes sense. Consider using a measure of light output (e.g., lumens) instead.
Based on an existing 300W General Electric R40 AQUATIC VENUE lamp that produces 3,750 initial lumens of light output, the conversion between watts and lumens is as follows: 0.5 watts/sq.ft. x 3,750 lumens/300 watts = 6.25 lumens/sq.ft.
# Example:
Lighting comparison between incandescent and LED lamps for a 2,400 square foot (223 m²) AQUATIC VENUE: notice that LED lamps are 90% more efficient (lumens/watt) than incandescent lamps.
"A replacement lamp will need to provide 6.25 lumens per square foot of surface area."
# Additional Information:
The incandescent lamp has an average life of 2,000 hours. For an AQUATIC VENUE that is operational 12 hours per day for 365 days (4,380 hours per year), the incandescent lamp would need replacement approximately two times a year. Note that in-water AQUATIC VENUE lighting remains on when the AQUATIC VENUE is closed to swimming. For the 50,000 hour life of an LED lamp, the expected service life would be approximately 11.4 years.

Lamp replacement would be determined by the required light output at the AQUATIC VENUE surface and not by the lamp itself. Annual energy savings per lamp would be 1,183 kWh.
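The arithmetic behind these figures can be reproduced with a short sketch. The 30 W LED wattage is an assumption chosen so the LED matches the 300 W incandescent's 3,750 lumens at roughly 90% less power, consistent with the stated savings; the other values come from the example above.

```python
# A minimal sketch reproducing the lamp-life and energy arithmetic above.

HOURS_PER_YEAR = 12 * 365            # venue lit 12 h/day, year-round = 4,380 h

INCANDESCENT_W, INCANDESCENT_LIFE_H = 300, 2_000
LED_W, LED_LIFE_H = 30, 50_000       # assumed LED equivalent wattage

replacements_per_year = HOURS_PER_YEAR / INCANDESCENT_LIFE_H        # ~2.2/year
led_life_years = LED_LIFE_H / HOURS_PER_YEAR                        # ~11.4 years
annual_kwh_saved = (INCANDESCENT_W - LED_W) * HOURS_PER_YEAR / 1000 # ~1,183 kWh

print(f"Incandescent replacements/year: {replacements_per_year:.1f}")
print(f"LED expected life: {led_life_years:.1f} years")
print(f"Annual energy savings per lamp: {annual_kwh_saved:.0f} kWh")
```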
Minimum Requirement A common practice has been to express underwater lighting requirements in watts per square foot of POOL surface. Light output efficacy (lumens per watt) can vary greatly depending on the light source. Incandescent lighting, the most historically prevalent underwater light source, also has the lowest or worst efficacy. Some of the most common incandescent lamps are listed below, along with their initial lumen output and calculated efficacy:
For the purposes of these requirements, the underwater lighting requirements have been converted from incandescent watt equivalents to initial lamp lumens using a conversion factor of 12.0 lumens per watt. The converted watts per square foot of POOL surface requirements are 0.5 watts, 1.5 watts, 1.5 watts, and 2.5 watts.
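A minimal sketch of this conversion, using only the 12.0 lumens-per-watt factor stated above:

```python
# Convert historical watt-based underwater lighting requirements to lumens.
CONVERSION_LM_PER_W = 12.0

for watts_per_sqft in (0.5, 1.5, 2.5):
    lumens_per_sqft = watts_per_sqft * CONVERSION_LM_PER_W
    print(f"{watts_per_sqft} W/sq.ft. -> {lumens_per_sqft} lumens/sq.ft.")
# 0.5 -> 6.0, 1.5 -> 18.0, 2.5 -> 30.0
```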
It is recommended that future studies be conducted to determine minimum lighting requirements based on water depth, hours of operation, and overhead lighting design. The main goal is to be able to see the bottom of the POOL, in particular a person on the bottom, at all times when the POOL is open to the public.
# Night Swimming with No Underwater Lights
Providing higher lighting levels (15 footcandles (161 lux)) than the minimum requirement (10 footcandles (108 lux)) of MAHC Section 4.6.1.3.1 eliminates the requirement for underwater lighting in outdoor POOLS.
# Emergency Lighting
This section is not intended to provide less stringent requirements, just a baseline STANDARD of design for locales that may not address this requirement. The industry commonly uses 0.5 foot-candle (5.4 lux) as a design STANDARD.
# Glare
Consider the sun's positioning through different seasons as well as the window placement to avoid glare. Consider moveable lifeguard stands or positions to avoid glare in different seasons. Consider tint and shades when natural light causes glare.
Windows and any other features providing natural light into the POOL space and overhead POOL lighting should be arranged to avoid glare on the POOL surface that would prevent identification of objects on the POOL bottom.
Careful consideration should be given to the placement of windows and skylights about the POOL. Natural light from directly overhead is less likely to create glare than light through windows at the sides and ends of the POOL.
Control of glare from artificial light is more likely if the angle of incidence of the main light beam is less than 50 degrees from straight down. Diffuse or indirect light sources may also help to minimize glare.
The MAHC had a very difficult time coming to a consensus on MAHC wording regarding glare that could be defended and enforced from a regulatory standpoint. How does a plan reviewer determine from design documents that glare is excessive (perhaps only in certain months of the year)? The MAHC felt that design recommendations would best be addressed in the Annex.

# Indoor Aquatic Facility Ventilation
As a result of the varied and sometimes vague approaches to defining "proper" ventilation, it is critical that the MAHC begin to better define AIR HANDLING SYSTEMS and establish parameters for air quality that reduce the risk of potential health effects. The aquatics industry has always had a challenge with indoor air quality. With the relatively recent increase in building large indoor waterparks, which have high BATHER COUNTs and contamination burdens and exposure times unseen before, indoor air quality is an increasingly important health concern. The media focus in recent years has highlighted this challenge.
Although the AIR HANDLING SYSTEMS of these AQUATIC FACILITIES are quite sophisticated, there are many variables to consider. In addition, much research is still needed in water chemistry and the use of other technologies to improve indoor air quality. The MAHC outlines the design, performance, and operational parameters that can be detailed using data available at the current time. The Annex information provides insight into the Ventilation and Air Quality Technical Committee's rationale and also identifies areas where more research is needed before additional parameters can be set.
The MAHC's intent is to require the design of an INDOOR AQUATIC FACILITY to be conducted by a licensed professional engineer with experience in the design of mechanical systems. The MAHC approached this section assuming designs will be evaluated by the AHJ in the location in which the system is to be installed. Following the first public comment period, the ventilation requirements were dramatically changed and draft material was removed from both the Code and Annex. The thinking behind those initial recommendations was saved for future consideration in MAHC Appendix 2.

The intent of these requirements is to assure the health and comfort of the users of the AQUATIC FACILITY. A variety of health effects can occur as a result of poor ventilation that leads to accumulation of chemical and biological products in the air. The following section reviews some of the issues of concern for INDOOR AQUATIC FACILITIES.
# Chemical By-Products
The OXIDATION of waterborne organic and inorganic compounds by CHLORINE- or other halogen-based products is a complex process leading to creation of a large number of OXIDATION and DISINFECTION BY-PRODUCTS (DBPs) during the drinking water and aquatic water treatment processes. The source of these compounds is variable but includes source water CONTAMINANTS, BATHER waste (e.g., feces, urine, sweat, skin cells), and environmental introductions (e.g., dirt). Although the identity of many of these compounds is known, many others are uncharacterized, and the health effects associated with short- and long-term exposure to these compounds are only just starting to be characterized for the aquatic environment. Several of these compounds are known to be volatile and can accumulate in the air surrounding an INDOOR AQUATIC VENUE. Multiple publications discuss the acute and potentially long-term health effects of exposure to these compounds in the aquatic setting. 90,91,92 The nitrogenous OXIDATION by-products DICHLORAMINE and TRICHLORAMINE (e.g., chloramines) are known to be irritants that cause acute eye and lung distress. Accumulation of these compounds has been previously documented in several occupational settings where workers routinely use chlorinated solutions to rinse organic products such as poultry 93,94 and uncooked produce. 95 Similar symptoms of ocular and respiratory distress have been documented in outbreaks associated with use of INDOOR AQUATIC FACILITIES. 96,97,98,99,100,101 Other DISINFECTION BY-PRODUCTS (DBPs) such as the TRIHALOMETHANES have been studied extensively due to their production during treatment of drinking water. These investigations have greatly impacted U.S. EPA water treatment regulations, so there is now a major emphasis on reducing production of DBPs. The effects of these compounds in model systems show long-term exposure associated with chronic health effects such as bladder cancer. 102,103 Investigators are beginning to examine the long-term health effects of exposure to DBPs during swimming. Although limited, some data suggest the potential for INCREASED RISK of asthma 104,105 and bladder cancer. 106 However, many of these studies are ecologic in design, which makes it difficult to definitively link exposures, actual exposure levels, and swimming. 107,108,109

# Biological By-Products
A variety of biological organisms that grow naturally in the environment (e.g., Legionella, Mycobacterium avium complex and other non-tuberculous mycobacteria, gram negative bacteria) or their constituents (e.g., proteins, lipopolysaccharides, endotoxin) can be spread in the INDOOR AQUATIC FACILITY environment and cause infections 110,111 and hypersensitivity/allergic reactions (e.g., "Hot tub lung," "Lifeguard lung," Pontiac fever). 112,113,114,115,116 The levels of pathogens and their constituents can be minimized with adequate INDOOR AQUATIC FACILITY ventilation, maintenance, and required water quality.
4.6.2.2 Exemptions
The MAHC decided that only "buildings" as defined in the building code would be included in the scope of the INDOOR AQUATIC FACILITY definition, since there are many variables to consider for places like open buildings (which may not have a roof or may be missing sides), such as variations in weather, geographic zone, etc., that would impact AIR HANDLING SYSTEM design even if one were needed. The guidelines in this module are meant to address the SAFETY and health of users in environments in which air quality is managed by mechanical means due to the "closed" environment, since fresh air is not able to freely flow through the building. There are two SAFETY functions of the AIR HANDLING SYSTEM: to bring in fresh air and to protect the PATRONS and the building, which requires movement of air. The current ASHRAE 62.1 standard states that 0.48 cfm/ft² of fresh air is the minimum but still requires an air change. The MAHC needs to consider an air delivery rate analogous to TURNOVER for AQUATIC VENUES. To assure good indoor air quality, it is likely that the design should consider THEORETICAL PEAK OCCUPANCY, water type (e.g., flat, agitated, hot), and the size and use of the building.
The current STANDARDS approach to ventilation is based on the square footage of the AQUATIC FACILITY, and yet AQUATIC FACILITIES vary widely in volume. Some facilities have a 20 foot (6.1 m) ceiling, while in indoor waterparks and stadium-style INDOOR AQUATIC FACILITIES, ceiling heights can reach 60+ feet (18.3 m). In addition, the water surface area has a great deal to do with the amount of CONTAMINANTS released into the air, but this is generally not included in design criteria.
There are many microclimates in larger AQUATIC FACILITIES with varied AQUATIC VENUES and AQUATIC FEATURES. Air movement will need to be targeted within these microclimates.
The challenge is that ASHRAE 62.1 only takes into account the building square footage and the number of spectators rather than bathers. ASHRAE fundamentals require an air delivery rate for the volume of air. Designers felt water chemistry, fresh air, THEORETICAL PEAK OCCUPANCY, water surface area and type, and distribution of air (barring condensation) are equally as important as, or more important than, the air delivery rate. The MAHC discussed the various chemical and biological contaminants, the availability of testing protocols, and data to support developing health effect thresholds.
The researchers on the committee were able to provide a list of research regarding the thresholds of such CONTAMINANTS that produced symptoms in users of INDOOR AQUATIC FACILITIES. More detailed summaries of these data can be found in Appendix 1: Summary of Health and Exposure Data for Chemical and Biological Contaminants.
After evaluating possible CONTAMINANTS, the committee felt the most frequently detected adverse health symptoms associated with indoor air quality were related to chemical CONTAMINANTS. In evaluating the various chemical CONTAMINANTS, it was found that TRICHLORAMINE was the most prevalently reported CONTAMINANT. 117 Therefore, this section of the MAHC focused on TRICHLORAMINE as the major chemical CONTAMINANT for design considerations.
Published research summarizes the threshold amounts that produced adverse health symptoms. In evaluating the TRICHLORAMINE research, it was apparent there is not a single test method used throughout the research. Without a validated test method, it is difficult to compare and benchmark the data from the various studies. As a result, a firm threshold could not be determined solely on the published research to date. Also, without a validated and simple test method, there is not an easy way for health departments or owners/operators to test routinely or with any consistency. For these two reasons, the MAHC felt it could not establish a threshold to be enforced by this section of the MAHC at this time. More research using a validated test method may lead to determination of a threshold level in the future. To enforce such a threshold level, the test also needs to be commercially available and easily performed by aquatics staff and health officials.
Therefore, the performance requirements for the AIR HANDLING SYSTEM have parameters for fresh air and dew point/humidity. To accomplish this, several design criteria were kept in mind:
- Fresh air requirements are established at specific levels. The theory is that if the building mechanical system is able to evacuate enough air to remove TRICHLORAMINE, then by default the other airborne CONTAMINANTS would also be evacuated.
- Dew point/humidity levels are set to avoid mold growth and damage to the building structure.

The efficacy of UV and ozone is well documented for their effect on biological CONTAMINANTS, but the photochemistry taking place is a different reaction for DISINFECTION versus controlling combined CHLORINE levels. Further research is needed to determine the effectiveness of UV and ozone in destroying DBPs before they can be considered in the design of an AIR HANDLING SYSTEM. Guidance is included in the MAHC for the use of UV and ozone for DISINFECTION. It is unknown at this time if the parameters for the equipment to achieve DISINFECTION will also result in the reduction of DBPs.
The initial draft of the MAHC Ventilation and Air Quality Module included discussion of fresh air requirements for facilities utilizing UV and ozone, which allowed for a reduction in the amount of fresh air required for ventilation compared to basic water treatment. However, until the efficacy of these technologies in reducing DBP formation can be established and parameters can be set in which any installation of these technologies can meet minimum requirements, we cannot include these technologies as a method to reduce fresh air requirements. Such information should be considered when efficacy data become available.
For future development of minimum performance requirements for UV and ozone, one should consider dose as a function of concentration and contact time. Many systems are designed for full flow treatment, but contact time is very limited. These minimum parameters may help to attain efficacy, but as noted, more research is required. Proposed statements for use once system efficacy can be determined were drafted for future consideration.

# Minimum Outdoor Air Requirements
Significant numbers of public comments were received regarding the proposed increase, above ASHRAE 62.1 STANDARDS, of required outdoor air. The commenters noted that the requirements would result in increased costs for equipment and operation while lacking adequate data to support the increase. Based on the potential negative impact and the need for additional research and data to differentiate the causes and sources of indoor air quality problems (e.g., design, inappropriate operation, inadequate maintenance), the MAHC decided to defer to ASHRAE 62.1 outdoor air requirements in this version of the MAHC. The MAHC thought it important to preserve the work done by the Technical Committee, so the proposed code language for additional outdoor air has been moved to Appendix 2: Air Quality Formula in the MAHC, along with the corresponding Annex discussion. A research agenda should be developed and should be a priority to better address the contributing factors to indoor air quality problems and the appropriate design and operational requirements needed to address those factors.
# System Alarm
There are several methods to add a MONITORING station to the outside air portion of the AIR HANDLING SYSTEM to establish the volume of outside air being introduced into the AQUATIC FACILITY. Such a MONITORING station should be installed. In addition, it should be noted that a negative pressure must be maintained during all operating modes. This negative pressure must be set up at the commissioning stage by the installing contractor or by means of automatic operation by the AIR HANDLING SYSTEM or Building Automation System.
# Relative Humidity
Relative humidity is a ratio, expressed in percent, of the amount of atmospheric moisture present relative to the amount that would be present if the air were saturated. Since the amount of atmospheric moisture is dependent on temperature, relative humidity is a function of both moisture content and temperature. For consideration in designing the facility's structure, dew point is a better measure of absolute moisture levels. Dew point has a direct relationship with relative humidity: a high relative humidity indicates that the dew point is close to the current air temperature, and a relative humidity of 100% indicates the dew point is equal to the current temperature and that the air is maximally saturated with water. For human comfort, the maximum relative humidity has been specified for a very narrow range of indoor temperatures and thus is an easily measured and understood metric for users and owners of an AQUATIC FACILITY. For the building design, dew point is the more important metric because outdoor conditions span a very wide temperature range. The design professional must be able to calculate the internal dew point for all building structure components to avoid condensation. Condensation occurs when an inside surface temperature equals the dew point of the space.
Using a properly calibrated instrument designed to measure relative humidity eliminates the complexity of calculating relative humidity by hand. It is important to collect a series of representative relative humidity measurements inside the INDOOR AQUATIC FACILITY. The building should be divided into representative areas if necessary, depending upon its size and the various AQUATIC FEATURES. Measurements should be taken from each occupied area, taken at DECK level, and recorded. The arithmetic average of the measurements will provide an estimate of the relative humidity in the INDOOR AQUATIC FACILITY. This may require consultation with design professionals.
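As an illustration, the sketch below averages hypothetical DECK-level readings from several representative areas; the area names and values are invented for the example.

```python
# A minimal sketch of the measurement-averaging approach described above.
readings = {
    "lap pool deck (north)": [58.0, 59.5],
    "lap pool deck (south)": [61.0, 60.5],
    "spa area": [64.0, 63.5],
}

# Flatten all recorded values and take the arithmetic average.
all_values = [rh for area in readings.values() for rh in area]
average_rh = sum(all_values) / len(all_values)
print(f"Estimated facility relative humidity: {average_rh:.1f}%")
```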
# Disinfection By-Product Removal
It is the MAHC's intent not to limit the development of new technologies. Although the efficacy of these technologies is not yet readily apparent, the hope is that in the future the CODE will allow the design professional to decrease the outside air requirements when secondary technology is used and the design professional can prove the efficacy of the added technology. Other methods and technologies for decreasing DBPs include:
- Ventilating surge tanks to remove off-gassing TRICHLORAMINE before the water re-enters the POOL area, and
- Use of a cooling tower to force water to off-gas TRICHLORAMINE before reintroducing water to the POOL area.
# Purge
When an AQUATIC FACILITY has an event (e.g., the pool is shocked) that requires the introduction of a larger volume of outdoor air, the PURGE mode can be manually triggered to provide a flush of the INTERIOR SPACE. The intent is to run the AIR HANDLING SYSTEM at purge capacity until the contaminant causing odor/eye/lung discomfort has dissipated to an acceptable level. The lack of an assay for airborne chloramines means that "acceptable" is arbitrary and unenforceable, since it relies on an operator assessment. When appropriate tests are available, the MAHC would like to set a numerical action threshold that would be enforceable.
# Air Handling System Filters
Manufacturers/designers could consider developing/incorporating specialized solid phase (e.g., activated carbon or other media) chloramine removal air filtration as another means to sequester chloramines and potentially reduce fresh air requirements. However, such systems need to show proven efficacy. With new methods development, such systems could eventually be designed with sensors confirming that the combined CHLORINE levels are at an acceptable level (when such air measurement methods become available). If levels increased, then the AIR HANDLING SYSTEM could proportionally increase the amount of outside air.
# Air Quality -Health
No rapid, simple, and commercially available tests for di- and tri-chloramine exist at the current time. However, MONITORING for TRICHLORAMINE can also be effectively accomplished by training POOL operators to be on alert for the distinctive chloramine odor and the eye and lung irritation it causes. The odor threshold for TRICHLORAMINE is 0.1 mg/m³, and health symptoms begin to occur at approximately 0.3-0.5 mg/m³, so odor MONITORING generally works well as an early warning system. 124,125,126,127,128,129
# Air Turnover Rates
Monitoring combined CHLORINE in the water or VOC concentrations in the air can be used as an alternative measure of air quality. The AQUATIC FACILITY design engineer should specify the alternative measurement limit when establishing an alternate ventilation AIR DELIVERY SYSTEM.
# Indoor Aquatic Facility Electrical Systems and Components
Nothing in this code should be construed as providing relief from any applicable requirements of the NEC or other applicable code.
# General Guidelines Wiring
Wiring located near or associated with equipment for bodies of water should be installed in compliance with the NEC or with other applicable code, except where the MAHC is more restrictive.
# Metal Raceways
Metal raceways should be equipped with a grounding conductor sized according to NEC Article 250 to maintain device ground potential in the event of degradation of the raceway.
- See ANSI/IEEE 241, Section 5.17.6; NEC 250-110(2).
# Electronics
All electrical equipment, devices and fixtures should be listed and labeled for the expected atmosphere of the space.
- See NFPA 70HB08, Article 100: Labeled, Explanatory Note. See NFPA 70HB08, Article 100: Listed, FPN.
# Light Switches
Any light switches installed inside interior CHEMICAL STORAGE SPACES should be approved for use in wet and corrosive atmospheres, or shall be otherwise protected, such as by a weather-proof actuator cover with a gasket. See NEC Article 110.11: Deteriorating Agents.
# Permanent Electrical Devices
All permanently connected electrical devices should be grounded per the NEC or other applicable code, using a separate grounding conductor which does not depend on the conductive integrity of any metal conduit exposed to chemical-STORAGE space air.
# Uncontrolled Condensation
Uncontrolled condensation in a building can lead to the growth of molds, with subsequent health effects. It can also lead to property damage from rust, rot, ice pressure, and other causes.
Condensation can be controlled by:
- Controlling the evaporation rate of the water,
- Controlling the temperature and relative humidity of the room air, and
- Maintaining all exposed building surfaces above the room-air dew point.
# Evaporation Rate
The POOL evaporation rate is affected by the:
- Size of the POOL,
- Agitation of the water,
- Heat of vaporization of the water at that temperature and pressure,
- Temperature difference between the POOL water and the room air and the associated difference in vapor pressures, and
- Speed of the air over the POOL's surface.

See Places of Assembly, ASHRAE Handbook of Fundamentals. 131

Example: A design POOL-water temperature is 82°F (27.8°C) with a design air temperature of 84°F (28.9°C). If it is decided to raise the POOL water temperature to 83°F (28.3°C), the air temperature should be raised to 85°F (29.4°C) to maintain the same evaporation rate.

Any surface which is exposed to room air and which cools below the dew point of the room air will become wet with condensation. Such a surface may not be visible, e.g., inside a wall.
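The temperature example above can be illustrated numerically. The sketch below is a rough approximation, not the ASHRAE design method: it uses the Magnus formula for saturation vapor pressure and an assumed 50% room relative humidity to show that raising the water and air temperatures together keeps the vapor-pressure differential (the main driver of evaporation) roughly constant.

```python
import math

def p_sat_kpa(t_celsius: float) -> float:
    """Approximate saturation vapor pressure (kPa), Magnus formula."""
    return 0.61094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def driving_pressure(water_c: float, air_c: float, rh: float) -> float:
    """Vapor-pressure difference (kPa) driving evaporation."""
    return p_sat_kpa(water_c) - rh * p_sat_kpa(air_c)

rh = 0.50  # assumed room relative humidity
print(driving_pressure(27.8, 28.9, rh))  # 82°F water / 84°F air -> ~1.74 kPa
print(driving_pressure(28.3, 29.4, rh))  # 83°F water / 85°F air -> ~1.79 kPa
# Raising both temperatures together keeps the differential, and thus the
# evaporation rate, roughly constant.
```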
# Space Heating
- Space heating should be available year-round.
- Space heating should not be disabled seasonally.
# Exceptions may include:
- Space heating need not be provided during such times as the POOL(s) may be drained completely, all AQUATIC FEATURES and other evaporative loads are disabled and drained, and the room relative humidity does not rise above the design range.
- Space heating may not be required if ventilation with outdoor air is sufficient to prevent room temperature from falling below the design range and room relative humidity from rising above the design range.
# Seasonal Disabling
Where POOLS are filled or partially filled, the evaporation rate will increase as room air temperature decreases. Seasonal disabling of space heating could allow a drop of room temperature, with a subsequent increase in evaporation rate and possible uncontrolled condensation.
Surfaces where the temperature may decrease below the design dew point of the space under normal operation shall be identified as part of the design process. At least one inspection should be done during the first heating season to identify any other such surfaces. The addition of heat to surfaces identified may be necessary to maintain their temperature above the design dew point for the space. Where forced air is used to heat identified surfaces, the heating method specified shall be so installed as to heat the room's air supply. The temperature, flow rate, and delivery of the supply air for each identified surface shall be such as to heat that surface above the design dew point of the space, under the worst-case design conditions. Such surfaces may have low values of thermal resistance. Such surfaces may include, but are not limited to, windows and their frames, doors and their frames, any metal structural members that penetrate the vapor retarder, and any under-insulated sections of walls or roofs.
- See Thermal and Water Vapor Transmission Data, ASHRAE Handbook of Fundamentals 132
# Combustion Heaters
Where combustion space heaters or other combustion heaters are located inside a building, the space in which the heater(s) or an assembly including the heater(s) is located shall be considered to be an EQUIPMENT ROOM for the purposes of MAHC Section 4.9.1, and the requirements of MAHC Section 4.9.1 shall apply. Exceptions may be made for space heaters listed and labeled for installation in the expected atmosphere; these are acceptable without ISOLATION from chemical fumes and vapors.
Note: Not all space heaters listed for heating INDOOR AQUATIC FACILITY air are listed for installation in an INDOOR AQUATIC FACILITY. Combustion space heaters should not be installed in an INDOOR AQUATIC FACILITY, unless the heater is rated for the atmosphere.
# High Temperature
This temperature limit shall not be construed to be the maximum limit of the bulk water (water in the heater) temperature. Bulk-water temperature limits can be much higher. The temperature limit of MAHC Section 4.6.4.1 is for water in contact with BATHERS. To meet the limits set in MAHC Section 4.6.4.1, water heaters can:
- Heat the water stream to the limit of MAHC Section 4.6.4.1 and return the water directly to the AQUATIC VENUE, or
- Heat the bulk water above the limit set in MAHC Section 4.6.4.1 and then use mixing or other methods to ensure that BATHERS are not exposed to temperatures above the limit of MAHC Section 4.6.4.1. 133

Examples of "applicable CODES" include but are not limited to:

# Equipment Room Requirements
Combustion heaters should not be installed in an INDOOR AQUATIC FACILITY, or exposed to other chemical fumes, unless the heater is rated for the atmosphere.
# First Aid Area
4.6.5.1 Station Design
A conveniently designated first aid station location should be provided for use when BATHERS report with minor injuries and/or illness. The first aid station must be easy to locate and must have first aid supplies to care for minor injuries, and for more serious injuries until emergency assistance can arrive. Some AQUATIC FACILITIES may have a formal, stand-alone first aid station, and others may have a location for first aid equipment. The MAHC felt it would allow for flexibility in design to call out the location for first aid equipment rather than designate a stand-alone station. Some AQUATIC FACILITIES are large, and a single first aid station is not as practical as distributing first aid equipment throughout the AQUATIC FACILITY (e.g., to individual AQUATIC VENUES). From a design standpoint, the designer must address the location of such equipment and, as stated in MAHC Section 4.5.1, should work with the owner and/or aquatic risk management consultant to designate these locations.
Emergency Exit
# Drinking Fountains
4.6.7.1 Provided
A drinking fountain is required at an AQUATIC FACILITY simply to encourage swimmers not to drink the POOL water and to keep swimmers hydrated. At an outdoor AQUATIC FACILITY, the drinking fountain can be located inside an adjacent building to allow year-round use when the AQUATIC FACILITY is closed for the winter; the drinking fountain would then not need to be winterized. When a drinking fountain is not located in the AQUATIC FACILITY, it should not be located more than 25 feet (7.6 m) from the AQUATIC FACILITY entrance. The AHJ may approve a bottled water supply in place of a drinking fountain. The water from a bottled water supply shall be as readily accessible to BATHERS as would the water from a drinking fountain.
# Additional Width
The MAHC tried to distinguish the word "BARRIER" from "ENCLOSURE"; those definitions are in the glossary. As currently defined, a "BARRIER" is simply an obstacle intended to deter direct access from one point to another. For example, a simple post-and-rope solution would meet MAHC intent.
# Balcony
The intent is to prevent people from using a balcony as a diving platform. If a balcony comes close to overhanging an AQUATIC VENUE, some people may try to use it to jump or dive into the AQUATIC VENUE. The more substantial and preventive the BARRIER at the balcony is, the less likely it is that a person will use it.
# Bleachers
Many building code jurisdictions may not be aware of the new ICC 300 bleacher STANDARD. Once jurisdictions adopt the 2007 International Building Code and supplements, the bleacher STANDARD will become better known.
# Recirculation and Water Treatment
Recirculation Systems and Equipment
# General
# Rationale for Prescriptive Approach
Recirculation and water treatment systems guidance tends to be more prescriptive than performance based because it is quite difficult and expensive to measure the performance of the filtration and RECIRCULATION SYSTEM with regard to pathogen removal and/or inactivation. Even the measurement of water clarity (e.g., turbidity) can be difficult (due to potential bubble formation, instrument fouling, and instrument calibration procedures), and it can cost more than a thousand dollars to continuously measure turbidity at a single point.
# Reasons to Exceed the Minimum Standards
There is no single TURNOVER time or one type of filtration system that is optimal for every AQUATIC VENUE. Requiring the most aggressive design for every AQUATIC VENUE is neither the intent of the MAHC nor necessary. However, some AQUATIC VENUES, particularly those with high numbers of BATHERS per unit water volume or a BATHER population more likely to contaminate water (e.g., children less than five years old), could need higher recirculation rates and more efficient filtration than the minimum STANDARDS. Since it is not always possible to predict the number of BATHERS in an AQUATIC VENUE, the MAHC recommends a modest overdesign of the RECIRCULATION SYSTEM pipes, with ample space left for expansion of pumping and filtration capacities; this will be referred to henceforth as the hydraulic flexibility recommendation. Future editions of the MAHC could have higher minimum STANDARDS that AQUATIC FACILITIES might wish to comply with without having to remove and replace a lot of concrete to accommodate slightly larger pipes.
# Hydraulic Flexibility Recommendation
The hydraulic flexibility recommendation made in the section above will also reduce friction losses in the pipes, which may lead to energy savings and reduced operating costs. With the formalization of a new turndown system for AQUATIC VENUES, it is hoped that AQUATIC VENUES may be designed for worst-case conditions and then operated according to the demands placed on the system. A turndown system could be used to operate below the minimum operational STANDARDS set by the MAHC when the AQUATIC VENUE is not occupied, as an additional cost-saving measure, as long as water quality criteria are maintained.
# Combined Aquatic Venue Treatment
There are some important considerations to take into account when considering combined AQUATIC VENUE treatment, and this practice is generally discouraged for most installations. First, to respond to a contamination event, it would be necessary to shut down all AQUATIC VENUES and water features on a combined AQUATIC VENUE treatment system, since contamination of one AQUATIC VENUE would rapidly contaminate all combined AQUATIC VENUES. Second, including an INCREASED RISK AQUATIC VENUE on a combined system would require secondary DISINFECTION to be installed for all AQUATIC VENUES on the RECIRCULATION SYSTEM. The two scenarios would involve isolating Cryptosporidium to a single AQUATIC VENUE (limiting the number of bathers exposed while keeping the concentration high) or diluting it as much as possible between all AQUATIC VENUES (limiting the maximum concentration or exposure level while increasing the number exposed).
Based on the infectious dose concept (i.e., the number of OOCYSTS required to be ingested to cause an infection), diluting Cryptosporidium or other CONTAMINANTS is one way of reducing outbreak potential, but the high numbers of Cryptosporidium OOCYSTS that may be excreted (e.g., 10^8 to 10^9 per contamination event 134,135) may overwhelm modest dilution factors while greatly increasing the number of people exposed. While the number of BATHERS exposed may increase, the exposure level will decrease if circulation rates were the same, meaning dilution of a very small AQUATIC VENUE into a large POOL might reduce the Cryptosporidium level from thousands of OOCYSTS per mL swallowed to less than one per mL in the combined system. However, smaller AQUATIC VENUES can be circulated at faster rates through the SECONDARY DISINFECTION SYSTEM and therefore can have OOCYST loads reduced faster if they are in a small volume, rapidly circulating AQUATIC VENUE.
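The dilution trade-off can be illustrated with a short sketch; the venue volumes and the single release of 10^8 OOCYSTS are hypothetical round numbers, not MAHC values.

```python
# A minimal sketch of the dilution arithmetic discussed above.
GAL_TO_ML = 3785.0

def oocysts_per_ml(total_oocysts: float, volume_gal: float) -> float:
    return total_oocysts / (volume_gal * GAL_TO_ML)

release = 1e8                      # oocysts from one contamination event
small_venue_gal = 2_000            # e.g., a small wading pool (hypothetical)
combined_gal = 2_000 + 500_000     # wading pool combined with a large pool

print(f"Isolated venue:  {oocysts_per_ml(release, small_venue_gal):.1f} oocysts/mL")
print(f"Combined system: {oocysts_per_ml(release, combined_gal):.3f} oocysts/mL")
# Dilution lowers the concentration dramatically, but every bather on the
# combined system is now exposed.
```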
Design modeling is needed to compare the efficacy of these two scenarios under different OOCYST concentrations. The dilution scenario only works if an INCREASED RISK AQUATIC VENUE of small volume is combined with a large volume AQUATIC VENUE. For AQUATIC VENUES similar in size, the impact of dilution is small while the number of people exposed might double or more. There could also be benefits with a combined system that would make it easier to provide more stable water quality parameters (in terms of pH and CHLORINE level), because larger water volumes tend to be easier to control. Again, the potential positive impact of combined water treatment is limited to combining small POOLS with much larger POOLS, which is not likely if the DISINFECTION requirements differ between the AQUATIC VENUES. Hydraulically isolating a given AQUATIC VENUE on a combined treatment system with valves is discouraged, because doing so necessarily prevents filtration and recirculation of the water. However, ISOLATION capabilities are recommended for maintenance purposes (as well as separate drain piping).
# Inlets
# General
# Flow Velocity
The velocity of flow through any INLET orifice (at between 100% and 150% of the total recirculation flow rate chosen by the designer) should normally be in the range of seven to 20 feet per second (2.1 to 6.1 m/s). The range of velocities through the INLETS was selected to balance two competing goals: (1) the velocity should be high enough to push water effectively to the center of the POOL (or to within the range of the floor INLETS for wider POOLS), but (2) the velocity should not be so high as to waste an unnecessary amount of energy. The requirement that the INLETS still be within the design range at 150% of the design recirculation flow rate accommodates the hydraulic flexibility recommendation discussed previously. This ensures proper operation at both the current flow rate and any future flow rates up to at least 150% of the recirculation flow.
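A minimal sketch of the velocity check implied above; the 20 gpm per-INLET flow and one-inch orifice diameter are hypothetical values chosen for illustration.

```python
import math

GPM_PER_CFS = 448.83  # gallons per minute in one cubic foot per second

def orifice_velocity_fps(flow_gpm: float, orifice_dia_in: float) -> float:
    """Velocity (ft/s) through a circular orifice for a given flow."""
    area_sqft = math.pi / 4 * (orifice_dia_in / 12.0) ** 2
    return (flow_gpm / GPM_PER_CFS) / area_sqft

design_gpm, dia_in = 20.0, 1.0
for label, gpm in (("design", design_gpm), ("150% of design", 1.5 * design_gpm)):
    v = orifice_velocity_fps(gpm, dia_in)
    print(f"{label}: {v:.1f} ft/s, in 7-20 ft/s range: {7.0 <= v <= 20.0}")
```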
# Floor Inlets
# Maintain and Measure
The use of floor INLETS might require additional considerations for draining them when the POOL is not in use. The likelihood of biofilm proliferation in pipes not in use is thought to increase significantly as the FREE CHLORINE RESIDUAL is dissipated. Drinking water distribution pipes are normally coated with biofilm even in the presence of a constant CHLORINE residual. 136,137 Since it is more difficult to inactivate microorganisms in a biofilm, 138 there is potentially increased risk of human exposure to pathogens shielded by biofilm once the POOL reopens. Leoni and coworkers found mycobacteria in 88.2% of POOL water samples analyzed and reported that swimming POOLS provided a suitable habitat for the survival and reproduction of mycobacteria. 139 Significant damage to the RECIRCULATION SYSTEM pipes and surroundings can also result from the expansion of water as it freezes. Both dangers may be alleviated by simply draining water from the pipes.

For standard POOLS, since the majority of the water leaving the POOL does so at the surface, locating the INLETS 24 inches (61.0 cm) below the design operating water level would reduce short-circuiting of water from the INLETS to the surface removal system.
# Inlet Spacing
Wall INLETS have a limited range for how far they can push water out toward the center of the POOL, especially as the flow of water is being pulled out of the POOL at the wall via gutters or SKIMMERS. The likelihood of forming regions in the center of the POOL that are not efficiently filtered or chlorinated increases as the width of the POOL increases. For POOLS less than four feet (1.2 m) in depth, the average velocity of the water is expected to increase because the volume of water served by a single INLET decreases, assuming equal spacing.
Step areas, swim outs, and similar recessed or isolated areas are likely to create a dead zone. Placement of one or more INLETS in these areas will help ensure distribution of chlorinated, filtered water to these areas.
# Dye Testing
Dye testing should be performed to determine and adjust the performance of the RECIRCULATION SYSTEM. Dye studies tend to be qualitative in nature. 140
A dye test may not be necessary for "standard" designs previously determined to provide effective mixing. It may be particularly important for irregularly shaped POOLS.
Some judgment is generally required to determine whether a dye study should be classified as passing or failing. In general, dead zones (or areas of poor circulation) would indicate a failure that could be fixed by adjusting the INLETS or other system hydraulics. If the POOL does not reach a uniform color within 15-20 minutes, then adjustments are required.
Refer to Appendix 3: Dye Testing Procedure for additional information.
# Skimmers
The use of SKIMMERS could be limited to POOLS with surface areas of less than 1,600 square feet (149 m²), and the maximum width for POOLS using SKIMMERS could be restricted to less than 30 feet (9.1 m). The use of SKIMMERS has been limited to smaller POOLS with light BATHER COUNTs since their inception. The limitations of SKIMMERS versus gutters appear to be physical in nature. For example, a 30 feet x 50 feet (9.1 m x 15.2 m) POOL may be served by three SKIMMERS rated at 500 square feet (46.4 m²) each. If each SKIMMER is one foot (30.5 cm) wide, then all of the skimmed water is being drawn off from only three feet (0.9 m) of the POOL perimeter (i.e., 1.9% of the total perimeter). This would lead to higher water velocities over the floating weirs and water being collected from a greater depth (as opposed to actual surface skimming) relative to a gutter system that extends around the perimeter of the POOL. In this example, 98.1% of the perimeter of the POOL is not being used to skim water and could produce regions of limited flow and scum collection. Theoretically, enough SKIMMERS might be added to produce effective skimming comparable to a gutter system, but research to demonstrate this in practice could not be found. Practical experience suggests that the absence of scum lines and of corner dead zones with stagnant debris are inherent advantages of gutter systems. There could also be practical hydraulic limitations for heavily BATHER-loaded POOLS related to the use of in-POOL surge as opposed to a surge tank. Equalizer lines are recommended to prevent SKIMMERS from pulling air into the pump and potentially causing loss of prime, flow surges, and interference with proper filter operation.
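The perimeter arithmetic in this example can be reproduced directly:

```python
# A minimal sketch of the skimmer-perimeter example above.
pool_w_ft, pool_l_ft = 30.0, 50.0
skimmer_width_ft, num_skimmers = 1.0, 3

perimeter = 2 * (pool_w_ft + pool_l_ft)                  # 160 ft
skimming_fraction = num_skimmers * skimmer_width_ft / perimeter

print(f"Perimeter: {perimeter:.0f} ft")
print(f"Skimming fraction: {skimming_fraction:.1%}")           # ~1.9%
print(f"Perimeter not skimming: {1 - skimming_fraction:.1%}")  # ~98.1%
```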
# Hybrid Systems
Hybrid systems that incorporate surge weirs in the overflow gutters to provide for in-POOL surge shall meet all of the requirements specified for overflow gutters. Since the number of BATHERS determines the type of surface overflow system in use, the hybrid systems should be able to meet all code requirements regardless of how many BATHERS are present and which components are in active use.
When the POOL is inactive (no bathers in the water) the surge weirs provide surface skimming. The operating water level during the period when there are no BATHERS in the water is designed to be below the rim of the gutter and flows over the surge weirs by gravity. When BATHERS enter the water, the level rises (in-pool surge capacity), the surge weir openings close, and the water flows over the gutter as in standard gutters.
# Surge Weir
The manufacturers of these gutter systems typically have flow capacities (gpm/surge weir) established for their surge weirs. The number of surge weirs necessary to accommodate the portion of the recirculation rate to be removed from the surface is
calculated by dividing the percentage of the total recirculation rate designated for surface skimming (e.g., 80% of total flow) by the rated flow per surge weir. The total recirculation rate must not be used for this calculation, as it will result in a greater number of surge weirs; operationally, less water will need to be removed from the surface, which will likely result in inadequate flows over the weirs for effective surface skimming.
The required number of surge weirs should be uniformly spaced around the POOL perimeter in the gutter. Designing for 100 percent of the total recirculation flow rate chosen by the designer is recommended as part of the hydraulic flexibility recommendation.
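A minimal sketch of the surge weir count calculation described above; the 600 gpm total flow and the 40 gpm per-weir rating are hypothetical values for illustration, and the 80% surface-skimming fraction is the example figure from the text:

```python
import math

# Minimal sketch: number of surge weirs = (surface-skimmed portion of the
# total recirculation rate) / (manufacturer-rated flow per surge weir).
def surge_weirs_needed(total_recirc_gpm, surface_fraction, gpm_per_weir):
    surface_flow_gpm = total_recirc_gpm * surface_fraction
    return math.ceil(surface_flow_gpm / gpm_per_weir)

# Hypothetical example: 600 gpm total recirculation, 80% removed at the
# surface, weirs rated at 40 gpm each -> 12 weirs, uniformly spaced.
print(surge_weirs_needed(600, 0.80, 40))
```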
# Skimmer Flow Rate

SKIMMERS should provide for a flow-through rate of 30 gallons per minute (114 L/min), or 3.75 gallons per minute (14 L/min) per linear inch (2.5 cm) of weir, whichever is greater. The AHJ may approve alternate flow-through rates so long as the SKIMMERS are NSF or equivalent listed and manufacturer's design specifications are not exceeded.
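A minimal sketch of the "whichever is greater" rule above (the weir lengths in the examples are hypothetical):

```python
# Minimal sketch: SKIMMER design flow is the greater of 30 gpm or
# 3.75 gpm per linear inch of weir.
def skimmer_design_flow_gpm(weir_length_in):
    return max(30.0, 3.75 * weir_length_in)

print(skimmer_design_flow_gpm(6))   # 6-inch weir  -> 30 gpm (minimum governs)
print(skimmer_design_flow_gpm(12))  # 12-inch weir -> 45 gpm (per-inch rule governs)
```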
# Flotation Test Procedure
# Materials Needed:
- Yellow wooden stars (55-110 minimum depending on the pool's surface area)
- Video camera
- Tripod
# Conditions Prior to Test:
- TURNOVER time and recirculation flow rate are operated as normal for the POOL
- INLETS and outlets are positioned as normal for the POOL
- SKIMMERS or gutter system is not flooded
- If using SKIMMERS, make sure that the weirs are present
- Water level is at the appropriate height above the weir/gutter (about ¼ inch or 6.4 mm)
- Set up video camera to record

# Test 1: Circulation

1. Determine how many stars are necessary by using the following:
- POOL surface area less than 2,500 square feet (232 m 2 ): use a minimum of 55 stars.
- POOL surface area 2,500 square feet (232 m 2 ) or greater: use a minimum of 110 stars.

2. Randomly toss the stars into the POOL. Try to toss the stars so that there is an even distribution throughout the surface of the POOL.
3. Record and observe the stars as they travel.
4. Record the motion of the stars in each area of the POOL (e.g., clockwise, counterclockwise, no movement) and any other observations.
5. Passing criteria may vary, but suggestions include 90% removal within one hour.
# Test 2: Skimmer/Gutter Draw
1. Stand behind one of the SKIMMERS or the gutter and drop a star into the water at arm's length distance (about 2 feet (61 cm)) in front of it.
2. Record how long it takes for the star to enter the SKIMMER or gutter. Then repeat this process at the same location three times.
# Submerged Suction Outlet

Note that the VGB Act does not list specific distances. CPSC's question and answer section for implementation indicates three feet (0.9 m) measured center to center.
# Flow Distribution and Control
The 125% of the total recirculation flow rate chosen by the designer is recommended as part of the hydraulic flexibility recommendation in MAHC Annex 4.7.1.1. The proportioning valve(s) are recommended to restrict flow by increasing the head loss in the pipe(s) typically on the main drain lines where flow rates are less than those from the surface overflow system lines.
The main drain system shall be designed at a minimum to handle recirculation flow of 100% of the total design recirculation flow rate. A minimum of two hydraulically balanced filtration system suction outlets are required as protection from suction entrapment. The branch pipe from each main drain outlet shall be designed to carry 100% of the recirculation flow rate so that, in the event one drain outlet is blocked, the other has the capacity to handle the total flow.
Where three or more main drain outlets are connected by branch piping in accordance with MAHC Sections 4.7.1.6.2.1.1 through 4.7.1.6.2.1.3, it is not necessary that each be designed for 100% flow. Where three or more properly piped drain outlets are provided, the design flow through each drain outlet may be as follows:
- Q max for each drain = Q total (total recirculation rate) / (number of drains minus one)
- Q max = Q total / (N − 1)
The result is that if one drain is blocked, the remaining drains will cumulatively handle 100% of the flow.
# Example:
- Q total = 600 gpm recirculation rate
- N = 3 drains
- Q max = 600 / (3 − 1) = 300 gpm per drain

RECIRCULATION SYSTEM piping should be designed so that water velocities do not exceed eight feet (2.4 m) per second on the discharge side of the recirculation pump. This is a maximum value as opposed to a good design value. The head loss in a pipe (and hence the energy loss in the recirculation system) is proportional to the square of the velocity in the pipe (i.e., cutting the velocity in half reduces the head loss to ¼ (25%) of the original value). In the interest of conserving energy, velocities in the range of six to eight feet (1.8 m to 2.4 m) per second are recommended. Without a minimum INLET velocity, uniform water distribution within the supply piping will not occur. The net positive suction head available (NPSHA) at the pump is:

NPSHA = absolute pressure on the liquid surface − friction losses in the suction line − vapor pressure of the water + static head of liquid above the impeller eye
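The branch sizing rule above can be expressed directly; this minimal sketch mirrors the formula and the worked example (the function name is illustrative):

```python
# Minimal sketch of the main drain branch sizing rule: with N >= 3 properly
# piped drains, each drain is sized for Q_total / (N - 1), so the remaining
# drains can cumulatively carry 100% of the flow if one drain is blocked.
def qmax_per_drain_gpm(q_total_gpm, n_drains):
    if n_drains < 3:
        # With only two drains, each branch must carry the full flow.
        return float(q_total_gpm)
    return q_total_gpm / (n_drains - 1)

print(qmax_per_drain_gpm(600, 3))  # 300 gpm per drain (as in the example)
print(qmax_per_drain_gpm(600, 2))  # 600 gpm per drain
```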
Hydraulic calculations for piping and pumps should be prepared by a qualified engineer.
# Additional Considerations
Gravity piping must be sufficiently sized to accommodate the recommended flow (including surges) without water surcharging above the INLET. Careful consideration of available head, the head losses, and the combined flow from multiple inputs into a single pipe is a necessity. The two feet (61.0 cm) per second value is a value derived from common practice with no clearly identifiable theoretical basis.
# Drainage and Installation
# Draining Recommendation
The draining recommendation for all equipment and piping serves multiple functions. First, any sediment or rust particles that gather in the pipe can be flushed by means of the drainage system. Second, since bacteria and biofilms are mostly water, drying out a biofilm can be an effective means of controlling growth. In contrast, leaving a pipe full of water during a period of maintenance or disuse could lead to dissipation of the CHLORINE residual and proliferation of biofilm inside pipes and/or equipment. Biofilms can lead to bio-corrosion of metal components of the RECIRCULATION SYSTEM and serve as protection for microbes and pathogens.
# Designed
All equipment and piping should be designed and constructed to drain completely by use of drain plugs, drain valves, and/or other means. All piping should be supported continuously or at sufficiently close intervals to prevent sagging and settlement. All suction piping should be sloped in one direction, preferably toward the pump. All supply and return pipe lines to the AQUATIC VENUE should be provided with a means to drain completely.
# Individual Drain
The individual drain to facilitate emptying the POOL in case of an accidental gross contamination event is intended to prevent further contamination of any pipes, pumps, multi-port valves, filters, or other equipment associated with the RECIRCULATION SYSTEM, which might be more difficult to clean than the inside of the AQUATIC VENUE. In the case of combined AQUATIC VENUE treatment systems, this drain could prevent cross-contamination of multiple AQUATIC VENUES. Clearly marking pipes will prevent misidentification that could lead to cross-connections and contamination of the AQUATIC VENUE. Pipe marking will also facilitate easier identification of locations for additional equipment installation and/or sample lines.
Color Coding Recommendations: Pipes and valves, when color-coded, may be color-coded in accordance with the following:

Variable frequency drives (VFDs) may be allowed because the energy savings could be substantial if flow is reduced at night and water quality criteria are continuously maintained. At this time, we are not aware of public health benefits or deficits associated with VFD use, so these pumps are allowed but not required. Operators should be aware that VFDs can flatten out a pump curve, so if they are installed on a filter pump, operators may want more active control to maintain operations. It is recommended that operators use VFDs with a compatible flow meter with a feedback control to optimize VFD function.
# Total Dynamic Head
The recirculation pump should be selected to meet the recommendations of the designer for the system. However, the following guidelines are suggested as starting points for designers. The recirculation pump(s) must be selected to provide the recommended recirculation flow against a minimum total dynamic head of the system, which is normally at least 50 feet (15.2 m) for all vacuum filters, 70 feet (21.3 m) for granular media and cartridge filters, or 60 feet (18.3 m) for precoat filters. A lower total dynamic head could be shown to be hydraulically appropriate by the designer by calculating the total head loss of the system components under worst-case conditions.
# Operating Gauges
# Pressure Measurements
A second set of pressure measurement ports could be recommended (tapped into the pump volute and discharge casing) to accurately calculate the flow of the pump. These gauges are a way of verifying the pump curve is correct. One can also use the pressure/vacuum gauges and pump curve to verify the flow meter reading and look for differences between the two. During startup, it is possible to shut off a valve on the discharge side of the pump and verify that the maximum discharge pressure measured agrees with the value on the pump curve.
It is recommended that all pumps be located on a base so as to be easily accessible for motor service.
# Vacuum Limit Switches
The vacuum limit switch is intended to shut down the pump if the vacuum increases to a point which could cause damage to the pump (cavitation).
# Flow Measurement and Control

# Flow Meters

Over 22% (approximately 20,000) of the POOL inspections that led to POOL closures in the state of Florida in 2012 were caused by non-functioning flow meters. This section of the MAHC is intended to improve this flow meter reliability problem (as well as to address a problem with accuracy). Since flow rates are critical for proper filtration, sizing, and operational calculations, it is recommended that operators purchase a more accurate flow meter for all systems or when replacing older flow meters on an existing system. Improved accuracy improves an operator's chance of understanding the true flow in their system. Operators should be mindful of flow meter placement by installing according to manufacturer recommendations and adhering to recommended distance parameters.
A flow meter or other device that gives a continuous indication of the flow rate in gallons per minute (L/min) through each filter should be provided. If granular media filters are used, a device should be provided to measure the backwash flow rate in gallons per minute (L/min) for each filter. Flow meters should have a measurement capacity of at least 150% of the design recirculation flow rate through each filter, and each flow meter should be accurate within +/-5% of the actual design recirculation flow rate. The flow measuring device should have an operating range appropriate for the anticipated flow rates and be installed where it is readily accessible for reading and routine maintenance. Flow meters should be installed with 10 pipe diameters of straight pipe upstream and 5 pipe diameters of straight pipe downstream or in accordance with the manufacturer's recommendations. Acrylic flow meters will not meet the accuracy requirement (and are prone to fouling/clogging) and hence should not be installed as the primary flow meter on any RECIRCULATION SYSTEM. However, acrylic flow meters could prove useful as a backup or auxiliary flow meter. A paddle-wheel flow meter, when used, should be located on the effluent side of the filter to prevent fouling.
More accurate flow meters are recommended to conserve energy and increase regulatory compliance. Magnetic and ultrasonic flow meters offer greater accuracy (typically less than +/-1% error) and less potential for fouling, but the aforementioned flow meters tend to be more expensive (e.g., $1,000 or more). An ultrasonic flow meter (such as clamp-on transit-time models) can be used to measure flows through the wall of a pipe, so they can be installed and uninstalled without modifying the existing plumbing. One ultrasonic flow meter could be used to routinely verify the flow readings of multiple other flow meters that are more prone to error. An annual cleaning and evaluation of flow meter accuracy could be useful in maintaining compliance with existing regulations. For example, here is a set of example calculations for an indoor POOL in a hotel that is 20 feet (6.1 m) wide by 30 feet (9.1 m) long with an even floor slope that goes from 4 feet (1.2 m) at the shallow end to 6 feet (1.8 m) at the deep end.
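The worked calculations referenced above are not reproduced in this extract; the following is a minimal sketch of the volume and TURNOVER arithmetic for that 20 ft x 30 ft hotel POOL. The 60 gpm recirculation flow used here is a hypothetical value for illustration only, not a figure from the text:

```python
# Minimal sketch: volume and turnover time for a 20 ft x 30 ft POOL with an
# even floor slope from 4 ft to 6 ft deep, assuming a hypothetical 60 gpm flow.
GAL_PER_CUFT = 7.48

width_ft, length_ft = 20.0, 30.0
avg_depth_ft = (4.0 + 6.0) / 2            # even slope -> simple average depth
volume_gal = width_ft * length_ft * avg_depth_ft * GAL_PER_CUFT
print(f"Volume: {volume_gal:,.0f} gal")   # ~22,440 gal

recirc_gpm = 60.0                         # assumed design recirculation flow
turnover_hr = volume_gal / recirc_gpm / 60
print(f"Turnover: {turnover_hr:.1f} h")   # ~6.2 h at the assumed flow
```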
When POOL recirculation rate recommendations are broken down to their essential elements, they are essentially about removing suspended matter (including microbial CONTAMINANTS) with the filters and effectively maintaining a uniform FREE CHLORINE RESIDUAL at the proper pH. Both the FREE CHLORINE RESIDUAL and the microbial concentrations are a function of the number of BATHERS in a given volume of water. While it is not always possible to accurately predict the BATHER COUNT for a given POOL on a given day, it is generally possible to estimate the maximum number of BATHERS likely to be in any given type of POOL per unit surface area (since most BATHERS have at least their head above water most of the time, and the primary activity in a POOL often dictates the comfort level in regard to BATHERS per unit surface area and hence the likelihood of BATHERS entering or leaving the POOL). After establishing a maximum sustainable bather load (MSBL), or maximum number of BATHERS expected in a POOL, it is possible to calculate the recommended flow of recirculated water necessary to be treated in order to handle the pathogen load and CHLORINE demand imparted by the BATHERS. An empirically-derived multiplier was used by PWTAG 141 to convert the MSBL to the recommended recirculation rate. The empirical multiplier used in this code was derived independently using English units specifically for use in the U.S. The value of the U.S. multiplier is approximately 29% smaller than the PWTAG value using equivalent units because POOL design in the UK is more conservative than in the U.S.
# Unfiltered Water
Unfiltered water shall not factor into TURNOVER time. This section addresses and clarifies water that may be withdrawn from and returned to the AQUATIC VENUE for AQUATIC FEATURES such as slides by a pump separate from the filtration system. The flow rate from the separate pump system shall not be included in the TURNOVER time calculation.
# Turnover Times
The recommended design TURNOVER time can then be calculated by dividing the volume by the recommended flow. This procedure can be performed for individual sections of a POOL or the entire POOL depending on the number of zones, which are based on depth of the water. Adjustments can then be made to this calculation to account for extraordinary conditions. For example, since a SPA has higher water temperature than a POOL, a PATRON would be expected to sweat more; an indoor POOL might experience less contamination from pollen, dust, and rain than an equivalent outdoor POOL; and a POOL filled with diaper-age children would be considered an increased-risk POOL requiring more aggressive treatment. Aquatic facilities that enforce showering prior to POOL entry could reduce the organic load on the POOL by 35-60%, even with showers lasting only 17 seconds 142 . The BATHER LOAD calculation based on surface area of the POOL was proposed by PWTAG 143 in 1999 and has influenced the CODES proposed by the World Health Organization 144 and Australia 145 . This approach has been adapted for use
in the U.S. by slightly increasing the area recommended per BATHER in shallow waters and decreasing the area in deep POOLS to account for the intensity of deep water activities, the relatively low surface area to volume ratios of deep waters relative to shallow waters, the typically poorer mixing efficiency in deeper water, the increased amount of time typically spent underwater in deeper water, and the larger average size of bathers commonly found in deeper water. These values were empirically derived for the MAHC to match typical U.S. practices at the time of this writing and can be changed as necessary to achieve the desired water quality goals.
Effectively handling BATHER COUNT in terms of pathogen removals and CHLORINE demand is a paramount concern for which the above calculations should provide some science-based guidance. However, there are other factors that must be considered when selecting a recirculation rate for an AQUATIC VENUE. For example, effectively distributing treated water to avoid dead spots calls for minimum water velocities to reach the POOL center and extremities. Similarly, effective surface skimming calls for adequate velocities at the surface of the POOL to remove floating CONTAMINANTS. Due to the kinetics of DISINFECTION and CHLORINE decay, CHLORINE must be replenished at some minimum interval to maintain the recommended FREE CHLORINE RESIDUAL. For these reasons, MAHC Table 4.7.1.10 was developed to provide some maximum TURNOVER time limits for AQUATIC VENUES that are not dominantly influenced by BATHER LOAD to help ensure proper physical transport of CONTAMINANTS and DISINFECTANT. Values in this table are derived from historical practice and design experience worldwide. All AQUATIC VENUES must be designed to meet the lesser of the two maximum TURNOVER times.
# Reuse Ratio

This section is intended to address those INTERACTIVE WATER PLAY VENUE designs that remove water from the INTERACTIVE WATER PLAY treatment tank by an AQUATIC FEATURE pump separate from the filtration system pump. The limit/ratio of INTERACTIVE WATER PLAY FEATURE water pump rate to the filtration system water pump rate is to acknowledge the typically high level of CONTAMINANTS and turbidity introduced to the INTERACTIVE WATER PLAY treatment tank. The introduction and build-up of turbidity can exceed the rate at which it is removed by the filtration system, which can result in interference with chemical DISINFECTION and UV treatment.
# Flow Turndown System

The flow turndown system is intended to reduce energy consumption when AQUATIC VENUES are unoccupied without doing so at the expense of water quality. A turbidity goal of less than 0.5 NTU has been chosen by a number of U.S. state CODES (e.g., Florida) as well as the PWTAG 146 and WHO 147 . The maximum turndown of 25% was selected to save energy while not compromising the ability of the RECIRCULATION SYSTEM to remove, treat, and return water to the center and other extremities of the POOL. The MAHC does not allow stopping recirculation, since uncirculated water would soon become stagnant and lose residual DISINFECTANT, likely leading to biofilm proliferation in pipes and filters. This could compromise water quality and increase the risk to BATHERS. Future research could determine that more aggressive turndown rates are acceptable. Some POOLS are already reportedly using the turndown system without a turbidimeter or precise flow rates. The intent of this section is to formalize a system for doing the turndown that does not compromise public health and SAFETY. Additional research in this area could identify innovative ways to optimize and improve this type of system. The likelihood of turbidimeters being cleaned and maintained is good because turbidimeters tend to give higher readings when not maintained properly. AQUATIC VENUES designed above the minimum design STANDARDS would have the flexibility to increase system flows to maintain excellent water quality during periods of peak activity.
An electronic turbidity and RECIRCULATION SYSTEM flow feedback system would provide a quantifiable means of determining water quality suitability if a facility desires to "turn down" the recirculation pumps to achieve a flow of up to 25% less than the minimum required recirculation flow rate when the AQUATIC VENUE is not occupied. The integration of feedback from both the flow meter and turbidimeter must be maintained for the VFD to be able to reduce the system flow rate below the level required to achieve the TURNOVER time requirement.
# Variable Frequency Drives
Variable frequency drives (VFDs) offer the benefits of energy savings, operational flexibility, and in most cases the ability to automatically increase the pump flow as the filter clogs by interfacing the VFD with a flow meter (or potentially a filter effluent pressure transducer) by means of a proportional-integral-derivative (PID) controller. VFDs may also offer the added benefits of protecting piping, pumps, and valves. Energy savings and benefits will vary depending on the design of the system.
# Filtration
# System Design
The filtration system should be designed to remove physical CONTAMINANTS and maintain the clarity and appearance of the AQUATIC VENUE water. However, good clarity does not mean that water is microbiologically safe. With CHLORINE-tolerant human pathogens like Cryptosporidium becoming increasingly common in AQUATIC VENUES, effective filtration is a crucial process in controlling waterborne disease transmission and protecting public health.
# Water Quality
If filtration is poor, water clarity will decline and drowning risks increase because swimmers in distress cannot be seen clearly from the surface. DISINFECTION will also be compromised, as particles associated with turbidity can surround microorganisms and shield them from the action of disinfectants. Particulate removal through coagulation and filtration is important for removing Cryptosporidium OOCYSTS and Giardia cysts and some other protozoa that are resistant to chemical DISINFECTION. 148
# Pathogen Removal
One of the most significant recommended changes of the MAHC is changing the filtration system from one that only provides good clarity and appearance to one that efficiently removes waterborne human pathogens from the AQUATIC VENUE water. Water clarity is only an indicator of potential microbial CONTAMINATION, but it is the most rapid indicator of possible high CONTAMINATION levels. CHLORINE residual can be sufficiently high to kill indicator bacteria while leaving protozoa relatively unharmed and infective. Therefore, testing for indicator bacteria may not be useful as a measure of AQUATIC VENUE water quality, and testing for Giardia cysts and Cryptosporidium is very expensive and time-consuming. So, both measures are impractical as an operational tool for water quality measurement. Cryptosporidium is a widespread threat responsible for causing outbreaks in AQUATIC VENUES each year in the U.S. 149,150 With CHLORINE-tolerant human pathogens like Cryptosporidium becoming increasingly common in AQUATIC VENUES, effective filtration is a crucial process in controlling waterborne disease transmission and protecting public health. 151,152 Furthermore, an accidental fecal release could overwhelm the DISINFECTANT residual and leave physical removal as the only means of removing pathogens. 153 Filtration has been cited as the "critical step" for the removal of Cryptosporidium, Giardia, and free-living amebae that can harbor opportunistic bacteria like Legionella and Mycobacterium species. 154
# Cryptosporidium
Cryptosporidium is a CHLORINE-tolerant protozoan pathogen that causes the majority of waterborne disease outbreaks in swimming POOLS in the U.S. as shown in MAHC
# 3-Log Reduction
The current CT VALUES for a 3-log reduction in viability of fresh Cryptosporidium OOCYSTS with FREE CHLORINE are 10,400 mg·min/L (Iowa isolate) and 15,300 mg·min/L (Maine isolate) at pH 7.5. 160 At a concentration of 1 mg/L, FREE CHLORINE can take more than 10 days to inactivate 99.9% of Cryptosporidium OOCYSTS (CT = 15,300 mg·min/L), but many people are likely to be swimming in the AQUATIC VENUE during that 10-day period and risk being exposed to infective parasite concentrations. Infected individuals may then return to the AQUATIC VENUE and/or visit other AQUATIC VENUES to perpetuate the spread of the parasite. Sand filters are commonly used and often serve as the only potential physical BARRIER to Cryptosporidium in U.S. AQUATIC VENUES, but sand filters without coagulant typically only remove about 25% of OOCYSTS per passage through the filter 161 . Based on the slow kinetics of CHLORINE inactivation of Cryptosporidium, the known inefficiency of sand filters at removing OOCYSTS, and the recent increased incidence of cryptosporidiosis in the U.S., additional measures appear necessary to effectively safeguard public health.
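As a check on the arithmetic above, this minimal sketch converts a CT VALUE into contact time at a given FREE CHLORINE RESIDUAL:

```python
# Minimal sketch: contact time implied by a CT VALUE (mg*min/L) at a given
# FREE CHLORINE RESIDUAL (mg/L), expressed in days.
def contact_time_days(ct_mg_min_per_L, free_chlorine_mg_per_L):
    minutes = ct_mg_min_per_L / free_chlorine_mg_per_L
    return minutes / (60 * 24)

# 3-log (99.9%) inactivation of the Maine isolate at 1 mg/L free chlorine:
print(f"{contact_time_days(15300, 1.0):.1f} days")  # ~10.6 days
```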
# Review of Recreational Water Filtration Research Findings
# Sand Filters
Sand filters often provide the only physical BARRIER to Cryptosporidium in U.S. AQUATIC VENUES, but sand filters meeting the recommendations of pre-existing POOL CODES typically only remove about 25% of OOCYSTS per passage through the filter 165 . A quantitative risk assessment model of Cryptosporidium in AQUATIC VENUES confirmed there is a "significant public health risk". 166 Some changes are necessary to effectively safeguard public health and will be discussed subsequently. Recent research in the U.S. and U.K. has shown that sand filters can remove greater than 99% of OOCYSTS per passage when a coagulant is added prior to filtration 167,168 . The addition of coagulants to swimming POOL filters used to be common practice in the U.S. with rapid sand filters, but it fell out of favor as high-rate sand filters began to dominate the U.S. POOL market. The importance of coagulant addition for efficient pathogen removal in drinking water is well documented, and coagulation is recommended by the U.S. EPA for all U.S. surface water treatment facilities producing drinking water. 169,170,171,172,173 The U.S. EPA expects
drinking water treatment facilities to remove or inactivate a minimum of 99% (2 log) of Cryptosporidium OOCYSTS and up to 99.997% (4.5 log) for facilities treating source water with the highest concentration of OOCYSTS. 174 While more research and quantitative risk assessment models will be needed to determine the safe level of removal in most AQUATIC VENUES, it is clear that the current removal rates of approximately 25% can lead to a significant number of outbreaks each year. Based on the research available for existing AQUATIC VENUE filtration technologies and risk models, a new minimum removal goal for Cryptosporidium removal by filters used in new and renovated swimming POOLS is recommended to be at least 90% (1 log) per single pass.
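Since the discussion above moves between percent removal and log reduction, this minimal sketch shows the conversion in both directions:

```python
import math

# Minimal sketch: converting between percent removal and log reduction.
def percent_to_log(percent_removed):
    return -math.log10(1 - percent_removed / 100)

def log_to_percent(log_reduction):
    return 100 * (1 - 10 ** -log_reduction)

print(f"{percent_to_log(90):.1f} log")      # 90%     -> 1.0 log
print(f"{percent_to_log(99.997):.1f} log")  # 99.997% -> ~4.5 log
print(f"{log_to_percent(2):.0f}%")          # 2 log   -> 99%
```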
# Filtration Systems
Multiple types of AQUATIC VENUE filtration systems have already been shown to achieve removals exceeding 99% depending on the filter design, water quality, and operational variables.
MAHC Annex Table 4.7.2.1 (below) contains a current summary of published research on Cryptosporidium or Cryptosporidium-sized microsphere removals via filtration in pilot-scale trials. Bench-scale results were not included due to concerns that the laboratory results might not be reproducible at pilot- or full-scale, as has been observed in previous studies. MAHC Annex Table 4.7.2.1 is sorted in order of increasing filter removal efficiency, and the data are roughly divided into three groupings (i.e., less than 90%, 90-99%, and greater than 99% removal). Operating conditions falling into the first group would not be expected to reliably meet the new 90% (single pass) removal recommendation that is recommended for all new and renovated AQUATIC VENUES. Coagulant dosage, surface loading rate, and media depth can significantly impact filtration removals. Careful selection of both design and operating values is essential to achieving excellent pathogen removal with AQUATIC VENUE filters.
# Filtration Products
At the time of this writing, the following filtration products are believed to be untested for Cryptosporidium/4.5-micron carboxylated microsphere removal in AQUATIC VENUE water:

- Regenerative media filters,
- Sand followed by cartridge (with 5-micron absolute or 1-micron nominal rating),
- Macrolite filter media,
- Charged zeolite media,
- Crushed-recycled glass filter media, and
- Any others not listed in MAHC Annex Table 4.7.2.1.
# Brief Historical Review of Water Filtration Practices for Aquatic Venues
In the U.S. in the 1920s, rapid sand filters on swimming pools were typically operated at 3-5 gpm/ft 2 (7-12 m/h) with coagulation prior to filtration, but high-rate sand filters have largely replaced rapid sand filters because they operate at 15-20 gpm/ft 2 (37-49 m/h) without coagulant. 176,177 While high-rate sand filters are definitely cheaper and smaller, they are also less effective at removing Cryptosporidium-sized particles. The majority of U.S. drinking water treatment facilities still use rapid sand filters with coagulation and typically operate them at 3-5 gpm/ft 2 (7-12 m/h). The U.S. EPA, after an extensive review of peer-reviewed research, decided to give drinking water treatment facilities credit for removing 99% of Cryptosporidium OOCYSTS for properly employing this technology (i.e., granular media filtration with coagulation prior to filtration). Research has shown that more efficient filtration of AQUATIC VENUE water will, in most cases, lead to higher rates of pressure development in filters and more frequent backwashing of filters. The smaller the pores in the filter media at the surface of the filter, the more rapidly pressure would be expected to increase. Fortunately, there are a number of options available to design engineers that could reduce the rate of pressure development. These options include:

- The use of more uniformly graded filter media,
- Skimming fines from filter media prior to startup,
- More efficient backwashing of filters,
- Lowering the flow rate per unit surface area, and
- The use of two types of filter media in filters.
# Granular Media Filters
General Design Tip: When a single pump feeds two filters at 10 gpm/ft 2 (24 m/h), redirecting the entire flow through one filter into the backwash line of the other should result in a backwash rate of approximately 20 gpm/ft 2 (49 m/h). The backwash water would be unfiltered water that would have to be plumbed to bypass the filter. With three filters, it would be possible to redirect water from two filters into the backwash influent pipe of the third filter to provide clean backwash water.
# Listed
Equipment testing of filters to industry STANDARDS is critically important, but it is only one aspect of performance. Certification of a filter with the hydraulic capability to pass water at 20 gpm/ft 2 (49 m/h) does not mean the filter should be operated at 20 gpm/ft 2 (49 m/h). Granular media filters perform better at removing particles and microbes at lower filter loading rates (all other factors being equal); this finding has been repeatedly observed in practice and can be explained theoretically. Filters might need to be held to higher STANDARDS of performance in terms of water quality than the current industry STANDARD.

# Filter Location and Spacing

Sufficient floor space should be available to accommodate installation of additional filters to increase the original filtration surface area by up to 50%, should that be recommended by future regulations or needed to meet current water quality STANDARDS. This is part of the hydraulic flexibility recommendation for newly constructed POOLS. The idea is to reserve space for additional filters should they become necessary at some point in the future. The 'extra' space could be utilized to make EQUIPMENT ROOMS safer and more functional.
A port and ample space for easy removal of filter media is also recommended. Filter media might be changed every 5 years. This process could be exceedingly difficult if filters are not designed with a port for this purpose or if the filters are installed without proper clearance to access the media removal port.
# Filtration and Backwashing Rates

The granular media filter system should be designed to backwash each filter at a rate of at least 15 gallons per minute per square foot (37 m/h) of filter bed surface area, unless explicitly prohibited by the filter manufacturer. Specially graded filter media should be specified in filter systems backwashing at less than 20 gpm/ft 2 (48.9 m/h) so that the bed can expand at least 20% above the fixed bed height at the design backwash flow rate, which is subject to approval by the local authority. Filtration and backwashing at the same flow rate is likely to lead to poor performance of both processes. Backwashing at double the filtration rate is not complicated with a 3-filter system, where the flow of two filters is used to backwash the third. Further, backwashing with unfiltered water is possible in a 2-filter system by backwashing each filter individually with the entire recirculation flow. Variable drive pumping systems and accurate flow meters also contribute to the likelihood of successful backwashing as well as effective filtration.
# Effective Filtration
Filtration at 10 gpm/ft 2 (24 m/h) is really pushing the envelope for attaining effective filtration and would not be recommended for a municipal drinking water system using sand filters due to doubts about the ability of such a filter to remove particulate CONTAMINANTS reliably. There are instances where multi-media deep bed filters or monomedium filters with large diameter anthracite and 6 foot (1.8 m) deep or greater beds of media are used, such as those owned and operated by the Los Angeles Department of Water and Power.
Effective filtration of drinking water at high filtration rates requires careful and exact management of coagulation. Although filtration rates are not explicitly addressed in much of the research on water filtration, the experience of researchers, regulators, and consultants is that high-rate filtration demands extra attention and skill. For example, over three decades ago, the State of California allowed the Contra Costa Water District to operate filters at 10 gpm/ft 2 (24 m/h) while other water utilities were not allowed to do so. The exception was permitted because of the design and the high level of operating capability at the plant where the high rate was used.
Operation at very high rates either causes very rapid increases of head loss in sand filters (water utility experience led to the conclusion that operating sand filters at rates above 3 or 4 gpm/ft 2 (7-10 m/h) was impractical) or results in very little particle removal as water passes through the sand bed, enabling filters to operate for a long time at high rates. For this reason, following World War II, the use of anthracite and sand filters became the norm for filters designed to operate at 4 or 5 gpm/ft 2 (10-12 m/h) or higher. Finally, in the 1980s, workers in Los Angeles showed that a deep (6 foot (1.8 m)) filter with 1.5 mm effective size anthracite media could effectively filter water at rates close to 15 gpm/ft 2 (37 m/h).
However, for very high rates of filtration to be effective, pretreatment has to be excellent, with proper pH and coagulant dosage, probably use of polymer, and in some cases, use of a pre-oxidant to improve filter performance. This is well understood by filter designers and professors who specialize in water filtration. Articles published on the Los Angeles work done by James Montgomery Engineers showed the importance of proper pretreatment. Papers written by experts on filtration have noted the importance of effective pretreatment (including proper coagulation) for dependable filter performance, and those writers were focused on rates employed in municipal filtration plants (e.g. 3 to 10 gpm/ft 2 (7-24 m/h)). As filtration rate increases, water velocity through the pores in the sand bed increases, making it more difficult for particles to attach to sand grains and remain in the bed instead of being pushed on through the bed and into filter effluent. When filters do not work effectively for pathogen removal, the burden is put on DISINFECTION to control the pathogens. For Cryptosporidium, the DISINFECTION approach that is typically most cost-effective is UV, so a very high rate filter may need to be followed by UV for pathogen inactivation, and the very high rate filters would just have to clarify the water sufficiently that there is no interference from particulate CONTAMINANTS with the UV inactivation process.
# Backwash System Design
For a granular media filter system to be able to backwash at a rate of at least 15 gallons per minute per square foot (37 m/h) of filter bed surface area, the pump(s), pipes, and filters must be designed accordingly. As many professionals have sought to improve water quality by decreasing the filtration rate to values lower than 15 gpm/ft 2 (37 m/h), they have sometimes failed to recognize that, while lowering the filtration rate may generally produce a positive change in performance, a similarly lowered backwash rate could lead to a total filtration system failure. In cases where a backwash rate of 15 gpm/ft 2 (37 m/h) is explicitly prohibited by the filter manufacturer, the filter may still be used, provided that specially graded filter media is installed that will expand to a minimum of 20% bed expansion at the specified backwash flow rate. Viewing windows are highly recommended in all filters since they allow direct observation of the bed expansion during backwashing, the cleanliness of the media and backwash water, and the depth of the sand in the filter. Croll and coworkers 179 used a backwashing rate of 25 gpm/ft 2 (61 m/h) to achieve 25% bed expansion of their filter.
# WHO Recommends
The WHO recommends a backwash rate of 15-17 gpm/ft 2 (37-42 m/h) for sand filters, but the media specifications are not given, nor is it clear whether air-scour is expected prior to backwashing. 180 Backwashing swimming POOL sand filters with air scour is common in the UK and elsewhere. 181,182 It has also been reported that air-scour washed AQUATIC VENUE filters are more efficient than filters washed by water only. 183 It is reasonable that lower backwashing rates would suffice when water backwash follows air-scour, since the air-scour dislodges most of the particles attached to the media grains (as opposed to relying on the shear force of the water passing over the surface of the particles). It is not considered feasible to operate sand filters in drinking water treatment plants without an auxiliary backwash system such as air scour. 184 Operating AQUATIC VENUE filters (not using coagulation) without air scour has been STANDARD practice in the U.S. for many years, with mixed results ranging from no problems to total system failures requiring replacement of all filter media. PWTAG recommends air-scouring filters at 32 m/h (13 gpm/ft 2 ) (at 0.35 bar). 185
# Polyphosphate Products
Polyphosphate products are sometimes used to sequester metals in POOLS, but this practice is not recommended when granular media filters are used because polyphosphate is an effective particle dispersant that can reduce the removal efficiency.
# Filter Bed Expansion
Sufficient freeboard (or space between the top of the media and the backwash overflow) to allow for a minimum of 35% filter bed expansion during backwashing adds a factor of SAFETY when the target bed expansion is 20% to prevent the washout of filter media during backwashing.
The regions underneath the lateral underdrains in granular media filters can become stagnant when filled with sand or gravel, which can lead to low disinfectant residuals and ultimately biofilm growth. Filling this area with concrete at the time of installation
may prevent this potential problem. 186 It is fundamentally difficult to suspend (i.e., fluidize) and hence clean filter media or gravel that is below the level where the backwash water enters the filter.
# Minimum Filter Media Depth Requirement

The performance of high-rate granular media filters at removing pathogens and particles is contingent upon the depth of the filter media (as shown in MAHC Annex Table 4.7.2.1), especially at rates of 15 gpm/ft 2 (37 m/h), which is why at least 24 inches (61.0 cm) of filter media is recommended for these filters. The WHO recommends filtration at 10-12 gpm/ft 2 (24-29 m/h) for sand filters, while the PWTAG recommends 4-10 gpm/ft 2 (10-24 m/h) as the maximum filtration rate for all non-domestic POOLS using sand filters. 187,188 The STANDARD sand filter bed depth typically varies from 0.55 to 1 m (22 to 39 inches) in the UK. 189
# Minimum Depth
For swimming POOL filters with less than 24 inches (61.0 cm) of media between the top of the laterals and the top of the filter bed, lower filtration rates (e.g., 10 gpm/ft 2 (24 m/h)) are recommended to efficiently remove particles and pathogens. Improvements in particle removal with decreasing filtration rates have been documented. 190 Drinking water treatment facilities typically limit filtration to less than 4 gpm/ft 2 (10 m/h), which is similar to the filtration rates recommended for AQUATIC VENUES in the 1920s. 191,192 The minimum depth of sand in POOL filters was 36 inches (0.9 m) in 1926. 193 Sand filters in drinking water treatment are typically designed for an L/d ratio of 1000 or greater, where "L" is the depth of the media and "d" is the diameter of the media grain. 194 For example, a 0.6 mm effective size sand would call for a minimum 0.6 m (23.6 inch) bed depth, whereas a 12 inch (30.5 cm) deep sand bed with 0.5 mm grains would have an L/d of only 610.
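A minimal sketch of the L/d design check described above, reproducing the two worked values from the text:

```python
# Minimal sketch: L/d = bed depth / media grain effective size (same units),
# with a design target of 1000 or greater.
def l_over_d(bed_depth_mm, effective_size_mm):
    return bed_depth_mm / effective_size_mm

print(l_over_d(600, 0.6))  # 0.6 m bed, 0.6 mm sand -> 1000 (meets target)
print(l_over_d(305, 0.5))  # 12 in bed, 0.5 mm sand -> 610  (falls short)
```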
The minimum depth of filter media above the underdrains (or laterals) is recommended to be 24 inches (61.0 cm) or greater, with sufficient freeboard (or space between the top of the media and the backwash overflow) to allow for a minimum of 35% filter bed expansion during backwashing. Sand or other approved granular media should be carefully graded to ensure fluidization of the entire filter bed during backwashing.
A design backwash rate at least 30% higher than the minimum fluidization velocity of the d90 size of the media in water at the larger of 86°F (30°C) or the maximum anticipated operating temperature is recommended. A backwash rate higher than this minimum could be necessary to effectively clean the media during backwashing. Variations in the media type, density, water temperature, effective size, or uniformity coefficient may cause changes in the recommended backwash flow rate and/or bed expansion, which should be subject to approval by the local authority provided hydraulic justification is supplied by the design engineer.
Sand or other approved granular media should be carefully graded to ensure fluidization of the entire filter bed during backwashing. The specifications of POOL filter sand (or lack thereof) can lead to filter media being installed that cannot be effectively cleaned during backwashing. Sand that cannot be properly cleaned can lead to filter failures and/or biofilms in the bottom of a filter. Researchers have found nematodes, rotifers, ciliates, zooflagellates, amoebic trophozoites and cysts, as well as bacterial masses in the backwash water of swimming POOL sand filters. 195 A design backwash rate at least 30% higher than the minimum fluidization velocity of the d90 size of the media in water at the larger of 86°F (30°C) or the maximum anticipated operating temperature is recommended, but a backwash rate higher than this minimum could be necessary to effectively clean the media during backwashing. These backwashing recommendations are based on drinking water treatment practice. 196 For a sample of AQUATIC VENUE filter sand examined at UNC Charlotte, the d90 size (i.e., 90% of the grains smaller than this diameter) of the media was estimated from the sieve analysis results in MAHC Annex Figure 4.7.2.1.4.1 (below) to be 1.06 mm. The minimum fluidization velocity of this sized sand grain in water at 86°F (30°C) was calculated to be 16.7 gpm/ft 2 (41 m/h). Since this backwash velocity would be expected to leave unfluidized approximately 10% of the grains in the filter (those larger than the d90), common practice is to recommend a backwashing rate 30% greater than this minimum value (21.7 gpm/ft 2 (53 m/h)). The recommended backwash flow for this media by Kawamura 197 was graphically estimated to be 20.9 gpm/ft 2 (51 m/h) at 68°F (20°C). This is the rationale for requiring at least a 15 gpm/ft 2 (37 m/h) backwashing rate for all swimming POOL sand filters.
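A minimal sketch of the backwash sizing rule above; the 16.7 gpm/ft 2 input is the cited minimum fluidization velocity for the 1.06 mm d90 sand at 86°F:

```python
# Minimal sketch: design backwash rate = 1.3 x the minimum fluidization
# velocity of the d90 grain size at the design temperature.
def design_backwash_rate(v_mf_d90_gpm_ft2, safety_factor=1.3):
    return v_mf_d90_gpm_ft2 * safety_factor

print(f"{design_backwash_rate(16.7):.1f} gpm/ft2")  # ~21.7 gpm/ft2 (53 m/h)
```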
To ensure compatibility with the minimum recommended backwashing rate of 15 gpm/ft 2 (37 m/h), filter sand should pass through a number 20 U.S. standard sieve or equivalent (i.e., all sand grains should be smaller than approximately 0.85 mm). While this recommendation of "#20 Silica sand" is common in swimming POOL manuals and by filter manufacturers, it does not appear to be representative of the actual sand that might be installed. Sieve analyses of two brands of commercially available "pool filter sand" are provided in MAHC Annex Figures 4.7.2.1.4.1 and 4.7.2.1.4.2. Sand can also be specified by an effective size (E.S.) of 0.45 mm with a uniformity coefficient (U.C.) of less than or equal to 1.45, which is roughly equivalent to a 20/40 mesh sand. A 20/40
mesh sand would pass through a #20 (0.85 mm) sieve and be retained on a #40 (0.42 mm) sieve. In order to reduce the rate of headloss accumulation at the top of the filter bed (and the frequency of backwashing), a 20/30 mesh sand could be specified, where the smallest grains at the top of the filter would be approximately 0.60 mm (30 mesh) instead of 0.42 mm (40 mesh). The depth of the expanded bed during backwashing should be at least 20% greater than the depth of the fixed bed after backwashing. Experiments were conducted to determine the backwashing rates required to fluidize a bed of POOL filter sand in 3-inch (7.6 cm) and 6-inch (15.2 cm) diameter clear PVC filter columns based on visual observation. Fluidization is somewhat subjective when observed visually because sand grains could be moving sluggishly prior to fluidization and because the smaller grains at the top of the filter will fluidize long
before the larger grains at the bottom. For this reason, bed expansion was measured and recorded along with visual observations of when the bed actually fluidized. Fluidization was visually observed to occur between 20 and 23 gpm/ft 2 (49-56 m/h), which coincided with 19-23% bed expansion in both sized columns for the unaltered commercial filter media at 68°F (20°C). Expansion data from the 3-inch (7.6 cm) diameter filter column are shown in MAHC Annex Table 4.7.2.1.4.1 (below). The 20/30 mesh fraction of the same filter media was examined under the same conditions, and the experimental results are provided in MAHC Annex Table 4.7.2.1.4.2. The media was observed to be fully fluidized at 19.9 gpm/ft 2 (49 m/h) with a bed expansion of 21.8% at 68°F (20°C). Calculations based on Cleasby and Logsdon 198 indicate that filter backwashing rates should increase by approximately 18% for this media as the temperature is increased from 68°F to 86°F (20°C to 30°C) due to changes in the viscosity of water with temperature.
Fluidization can be somewhat complicated to estimate, but filter bed expansion can be easily measured in the field with granular media filters that use viewing windows. Furthermore, a model exists that can be used to calculate filter bed expansion of sand in a filter during backwashing. 199 This model tends to be sensitive to fixed bed porosity, but using a value of 42% porosity with a sphericity of 0.85 and density of 2.65 g/cm 3 yielded a bed expansion of 22.7% at 20 gpm/ft 2 for water at 86°F (30°C). This is the rationale for requiring the depth of the expanded bed during backwashing to be at least 20% greater than the depth of the fixed bed. PWTAG recommends 15-25% bed expansion following air scouring at 32 m/h (13 gpm/ft 2 ) (at 0.35 bar). 200 In a study funded by PWTAG, researchers used a backwashing rate of 25 gpm/ft 2 (61 m/h) to achieve 25% bed expansion of their filters. 201 Variations in the media type, density, water temperature, effective size, or uniformity coefficient may cause changes in the recommended backwash flow rate and/or bed expansion, which should be subject to approval by the local authority provided hydraulic justification is supplied by the design engineer.
# Coagulant Injection Equipment Installation
To enhance filter performance, a coagulant feed system, when used, should be installed with an injection point located before the filters and, for pressure filters, on the suction side of the recirculation pump(s), capable of delivering a variable dose of a coagulant (e.g., polyaluminum chloride or a pool clarifier product). Pumps should be properly sized to allow for continuous delivery of the recommended
dosage of the selected coagulant. Products used to enhance filter performance should be used according to the manufacturers' recommendations. The coagulant feed system should consist of a pump, supply reservoir, tubing, ISOLATION valve, and BACKFLOW prevention device. Sand filters used as pre-filters for membranes or for cartridge filters with 1-micron nominal or 5-micron absolute size ratings or less need not be provided with coagulant injection equipment. Specialized granular filter media capable of removing Cryptosporidium OOCYSTS or an acceptable 4.5-micron surrogate particle with an efficiency of at least 90% (i.e., a minimum of 1 log reduction) without coagulation also need not be provided with coagulant injection equipment, but this media should be replaced or reconditioned as needed to sustain the minimum recommended particle removal efficiency stated above. Sand filters located ahead of a UV or ozone DISINFECTION system may be excluded from the coagulation equipment recommendation with the approval of the local authorities. Local authorities should consider the efficiency of the supplemental DISINFECTION process for Cryptosporidium inactivation but should also consider that a side-stream system does not have any effect on the Cryptosporidium OOCYSTS that bypass the system on each TURNOVER. For example, a UV system that is 99.999% effective at inactivating Cryptosporidium but treats only half of the recirculated water flow is on average only 50% effective (per pass) because all of the Cryptosporidium in the bypass stream remain unaffected by the UV.
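The side-stream argument above reduces to a simple product; this minimal sketch reproduces the 50% per-pass example:

```python
# Minimal sketch: per-pass effectiveness of a side-stream treatment system is
# capped by the fraction of recirculated flow actually treated, because
# OOCYSTS in the bypass stream are unaffected.
def per_pass_inactivation(fraction_treated, device_inactivation):
    return fraction_treated * device_inactivation

# UV unit inactivating 99.999% of Cryptosporidium but treating half the flow:
print(f"{per_pass_inactivation(0.5, 0.99999):.1%}")  # ~50.0% per pass
```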
Coagulation is the key to effective granular media filtration, which has long been recognized in the drinking water industry. 202,203,204,205,206 Operation of granular media filters without coagulation is not permitted by U.S. EPA regulations for drinking water treatment, with the exception of slow sand filters. Thus, if pathogen removal is a goal of water filtration for swimming POOL sand filters, coagulation would be essential. This is the rationale for recommending future consideration of coagulation in swimming POOLS. A coagulant feed system should be installed with an injection point located ahead of the filters to facilitate particle removal by filtration (instead of settling to the bottom of the pool), and injection ahead of the recirculation pump(s) will provide mixing to evenly distribute the coagulant among the particles. A variable dose of a coagulant (e.g., polyaluminum chloride, or pool clarifier) is recommended because coagulant dosages may vary with BATHER LOAD. Products used to enhance filter performance should be used according to the manufacturers' recommendations since overfeed or underfeed of coagulants is known to impair performance.
Although polyaluminum chloride (PACl) is not a widely used coagulant in the U.S. at present, it has been used extensively abroad. 207,208 However, recommended dosages abroad may not be optimized for pathogen removal. PWTAG recommends a
polyaluminum chloride dosage of 0.005 mg/L as Al, but research has shown that 0.05 mg/L is needed to exceed 90% removal and that 0.21 mg/L or higher could be optimal with filters operated based on U.K. STANDARDS. 209
# New Challenges: The Impact of Coagulation on Backwashing
Coagulation is likely to make cleaning of sand filters more challenging. Drinking water treatment facilities in the U.S. employ auxiliary backwash systems such as air-scour to improve the cleaning process. Using water alone for backwashing has not been found to be effective for media cleaning in drinking water treatment applications. 210 Air scour systems are common in European AQUATIC VENUE filters and should be investigated further in the U.S. More frequent backwashing is recommended with water-only backwash, and the clean-bed headloss (pressure) should be recorded after each backwash to detect early signs of ineffective backwashing and prevent filter system failures.
# Initial Headloss and Headloss Accumulation Rate
Increased headloss (or pressure buildup) in filters is expected with coagulation, as particles are likely to be removed faster (more efficiently) and closer to the top of the filter, thereby clogging the top of the filter more quickly. This is actually a sign that the coagulation/filtration system is working effectively. The initial headloss after backwashing should, however, remain relatively constant. Coagulants have been used successfully in the U.S. in the past and are currently being used in POOLS abroad. 211,212,213,214 In systems not properly designed to backwash with filter effluent from other filters, the coagulant feed system should not be operated during backwashing, so as to prevent introduction of coagulant into the backwash water.
# Precoat Filters
# Filtration Rates
The design filtration rate of 2.0 gallons per minute per square foot (4.9 m/h) might be overly conservative and is the same upper limit on filtration rate typically used in drinking water treatment applications. 215 However, drinking water applications typically use finer grades of precoat media at application rates of 0.2 lbs/ft 2 (1 kg/m 2 ). 32 Lange and coworkers 216 used filtration rates up to 4 gpm/ft 2 (10 m/h) with no adverse effect on Giardia cyst removal, although the removal of turbidity and bacteria was adversely affected.
# Precoat Media Introduction System Process
The precoat process shall follow the manufacturer's recommendations and requirements of NSF/ANSI Standard 50.
# Separation Tank
Precoat filter media has the potential to settle out of suspension in sewer pipes depending on the flow velocities, which could lead to fouling or clogging of sewer pipes. Local authorities may require removal of precoat media prior to discharge to sewer systems, so POOL operators should check with the AHJ.
# Continuous Filter Media Feed Equipment

Filter performance can be significantly impacted by the selection of the precoat filter media, which could alter water clarity, pathogen removal, and cycle length. Multiple grades of precoat media are available in the marketplace. Precoat media can be specified by median particle size of the media or by permeability of the media.

# Cartridge Filters

Cleaning procedures for cartridges are not well established, and education in proper cleaning procedures is likely necessary to avoid contaminated cartridges being reinstalled into filters, potentially providing a protected region for proliferation of biofilm bacteria that could lead to an outbreak. Cartridge filter elements are typically cleaned manually, usually by hosing them down with a water hose and reinstalling them. Exposure concerns exist since concentrated streams containing Legionella, Mycobacteria, Cryptosporidium, and other pathogens can potentially be sprayed or splashed on the operator/lifeguard as well as the surrounding environment, perhaps even including the inside of the filter or the surfaces surrounding the AQUATIC VENUES.
An extensive survey of manufacturers' cleaning recommendations was conducted after there was a Legionella outbreak in a facility with cartridge filters. Legionella, Pseudomonas, and biofilms were found in the filters. The cleaning procedure employed was to take them outside, rinse them with a water hose, and replace them. Operators reported that they would occasionally degrease or bleach them. Further investigation revealed that this cleaning procedure was common at other facilities.
Filter manufacturers were surveyed for cleaning procedures and most often did not have a cleaning process, simply deferring to the cartridge manufacturer since many filter manufacturers do not make the cartridges. The cartridge manufacturers also did not have a cleaning procedure, or had only a minimal one that did not account for biofilms or heavy organic loads commonly encountered in SPAS. CHLORINE is generally ineffective at inactivating bacteria in a biofilm or removing particulate or organic filter foulants. One effective way to control the biofilms is to completely dry them out.
Based on the known poor performance in removing pathogens, which increases the likelihood of waterborne disease outbreaks, and the potential for dangerous microbial (and perhaps chemical) exposures to operators during routine maintenance and cleaning, cartridge filters are not currently recommended. This is not to say that all of the current issues and/or concerns with cartridge filters could not be resolved.
# Filtration Rates

Cartridge filter elements should have a listed maximum flow rate of 0.375 gallons per minute per square foot (0.26 L/s/m²), but the design filtration rate for surface-type cartridge filters should not exceed 0.30 gallons per minute per square foot (0.20 L/s/m²). Cartridges do not recover 100% of their capacity when cleaned after fouling, so systems designed to the maximum limit cannot sustain performance (or minimum pool turnover requirements) over time. For example, if a filter only recovers to 80% of the original flux after cleaning, then a filter flow rate of 0.375 gallons per minute per square foot (0.26 L/s/m²) would become 0.30 gallons per minute per square foot (0.20 L/s/m²). Cartridge replacement would be necessary following fouling levels greater than 20% of the maximum rated capacity.
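The sizing logic in this paragraph can be expressed directly. The sketch below reproduces the worked example (80% flux recovery), with the 0.375 and 0.30 gpm/ft² figures taken from the text:

```python
# Minimal sketch of the cartridge sizing logic above: a cartridge that only
# recovers part of its clean-media flux after cleaning cannot sustain the
# listed maximum rate, so designs should use the lower sustained figure.

MAX_LISTED = 0.375   # gpm/ft^2, listed maximum flow rate
DESIGN_MAX = 0.30    # gpm/ft^2, design limit from the text

def sustained_rate(recovery_fraction: float, listed: float = MAX_LISTED) -> float:
    """Flux the filter can actually sustain after repeated cleanings."""
    return listed * recovery_fraction

rate = sustained_rate(0.80)        # 80% recovery, the example in the text
print(f"{rate:.3f} gpm/ft^2")      # 0.300 -> matches the design limit
print("replace cartridge" if rate < DESIGN_MAX else "still serviceable")
```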
# Supplied and Sized
The pore size and surface area of replacement cartridges should match the manufacturer's recommendations.
# Spare Cartridge

An extra set of elements, with at least 100 percent filter area, and appropriate cleaning facilities and equipment should be provided to allow filter cartridges to be thoroughly cleaned. Two sets of filter cartridges should be supplied to allow for immediate replacement and cleaning procedures that involve complete drying of the filter elements.

# Indoor Air Quality

To provide for a healthy and safe swimming environment in INDOOR AQUATIC FACILITIES, it is important to consider a number of issues that could impact health. Proper ventilation and humidity control are important in removing excess heat, moisture, noxious odors, and harmful airborne CONTAMINANTS.

# Proper Chemical Use
In addition, proper usage of chemicals can also improve the quality of the indoor air environment. 221,222,223

# High Chloramines
High levels of chloramines and other volatile compounds in the air can increase the possibility of health effects such as upper respiratory illnesses and irritation of the mucous membranes, including those of the eyes and lungs. 224,225 Furthermore, these CONTAMINANTS can also cause metal structures and equipment to deteriorate.
# Shock Oxidizer
While proper ventilation is critical for INDOOR AQUATIC FACILITIES, water chemistry also can dramatically affect air quality. Levels of chloramines and other volatile compounds can be minimized by reducing CONTAMINANTS that lead to their formation (e.g., urea, creatinine, amino acids, and personal care products), as well as by supplemental water treatment. Effective filtration, water replacement, and improved BATHER hygiene can reduce CONTAMINANTS and chloramine formation. Research has shown that the use of non-CHLORINE shock oxidizers is selective in OXIDATION and may not prevent or reduce inorganic chloramines, though they may reduce some organic chloramines. 226 The EPA final guidelines state that manufacturers of "shock oxidizers" may advertise that their "shock oxidizer" products "remove," "reduce," or "eliminate" organic CONTAMINANTS. 227 Shock dosing with CHLORINE can destroy inorganic chloramines that are formed. SECONDARY DISINFECTION SYSTEMS such as ozone and ultraviolet light may effectively destroy inorganic as well as some organic chloramines.
# Swimmer Education
In addition, swimmers should be educated that their behavior (e.g., failing to shower, urinating in the pool) can negatively impact air quality by introducing nitrogen-containing CONTAMINANTS that form volatile compounds. 228
# Reduce and Minimize Impact
These steps can help reduce the chemical contribution to poor indoor air quality, help maintain an environment that minimizes health effects on BATHERS, and decrease deterioration of AQUATIC FACILITIES and equipment.
Feed Equipment
# General
If recirculation pumps stop but chemical feed pumps continue to pump chemicals into the return lines, concentrated solutions of CHLORINE and acid can mix, forming CHLORINE gas. The CHLORINE gas could then be released into the AQUATIC VENUE when the recirculation pump is turned on again, or into the pump room if there is an opening in the line, as has been documented in CDC's Waterborne Disease and Outbreak Surveillance System. 229 To prevent the hazardous release of CHLORINE gas, the chemical feed system shall be designed so that the CHLORINE and pH feed pumps are deactivated when there is no or low flow in the RECIRCULATION SYSTEM.
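The interlock requirement can be illustrated with a minimal control sketch. The I/O names and the low-flow cutoff below are hypothetical placeholders for illustration, not values from the MAHC:

```python
# Minimal sketch of the feed interlock above: chemical feed pumps must be
# deactivated whenever recirculation flow is absent or low, so chlorine and
# acid cannot accumulate and mix in a stagnant return line.

LOW_FLOW_GPM = 50.0  # assumed low-flow cutoff, for illustration only

def feed_pumps_permitted(recirc_flow_gpm: float) -> bool:
    """Feed is allowed only when the recirculation system has adequate flow."""
    return recirc_flow_gpm > LOW_FLOW_GPM

def update_outputs(recirc_flow_gpm: float) -> dict:
    enabled = feed_pumps_permitted(recirc_flow_gpm)
    # Both the disinfectant feeder and the pH (acid) feeder follow the interlock.
    return {"chlorine_feed": enabled, "acid_feed": enabled}

print(update_outputs(600.0))  # normal flow -> feeders enabled
print(update_outputs(0.0))    # pump stopped -> feeders locked out
```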
# Sizing of Disinfection Equipment

High use facilities, such as water parks and health clubs, require feed equipment with greater capacity and production. These facilities generally have higher recirculation rates and experience accelerated disinfectant consumption, so they should be sized differently to provide the minimum dosing.
# Types of Feeders

All UV units shall be installed into the system by means of a bypass pipe to allow maintenance on the UV unit while the AQUATIC VENUE is in operation.
# Feeders for pH Adjustment

It is recommended that the solution reservoir supply be sized to hold a minimum of one week's supply.
# Automated Controllers

Constant and regular MONITORING of key water quality parameters such as the disinfectant level and pH is critical to prevent recreational water illness and outbreaks. AUTOMATED CONTROLLERS are more reliable as a MONITORING device than personnel.

# Secondary Disinfection Systems

The SECONDARY DISINFECTION SYSTEM is to be designed to reduce an assumed total number of infective Cryptosporidium OOCYSTS in the total volume of the AQUATIC VENUE from an assumed 100 million (10⁸) OOCYSTS to a maximum concentration of one infective OOCYST/100 mL by means of consecutive dilution.
# Equation
In considering the potential for outbreaks, it was decided that a treatment system should be designed to limit the outbreak to a reasonable period of time, preferably to a single day of operation. By this, it is meant that all pathogens of concern that may still be present at infective concentrations at the close of operations are reduced to below a level of infectivity by the opening time of the following day. This approach has been recommended because numerous multi-day outbreaks have been well documented. 231,232,233 In order to design a treatment system that can reduce the duration of exposure to a single day, the MAHC Committee made assumptions about the number of OOCYSTS released in a fecal incident and the volume of water BATHERS typically swallow (16-128 mL), and concluded that about one infective OOCYST per volume ingested would be a reasonable target for overnight remediation of the water to reduce the risk of transmission beyond the day of initial contamination. 236,237 The concentration chosen was one OOCYST/100 mL. The only effective means currently available to reduce the concentration of OOCYSTS in an
AQUATIC VENUE while open for bathing is by dilution (this does not include hyperchlorination that requires closure of the water to bathers). Accomplishing this through the introduction of sufficient makeup water is not practical. Instead, the solution is to remove a portion of the water, treat it to reduce the concentration of infectious OOCYSTS, and then return that water to the AQUATIC VENUE. Any treatment system that demonstrates this reduction in Cryptosporidium OOCYSTS specified herein is suitable for use. It is not the intent of the MAHC to limit technology only to UV and ozone as discussed in the CODE, but rather to specify the outcome of the treatment.
# Purpose
The purpose of secondary DISINFECTION is to reduce the viable Cryptosporidium OOCYSTS to a number below that which is considered an infective concentration, should the parasite be introduced into an AQUATIC VENUE. While 100% UV treatment of recirculated water is an option, it is important to note that this will not ensure the SAFETY of the BATHERS immediately following a fecal event, but it will reduce the time required for the system to get below an infective dose. While this is beneficial, mandating UV on 100% of the recirculated water flow may lead owners and designers to minimize the total recirculated flow so as to not incur the additional capital and operating cost of the required additional UV, ozone, or other SECONDARY DISINFECTION SYSTEMS. Cryptosporidium control is not the only consideration when designing an INCREASED RISK AQUATIC VENUE, and it is important that this requirement does not negatively influence other design considerations-such as amount of filtration needed for particulate removal and control of turbidity.
Consideration was therefore given to what should be the maximum time a system takes to reduce the viable OOCYST concentration to below an infective dose. Because a fecal event can release 100 million OOCYSTS and an infective dose is as little as one OOCYST per 100 mL, it is impossible with available technology today to ensure the SAFETY of BATHERS in the AQUATIC VENUE both at the time the fecal event occurs and in the immediate aftermath. A reasonable and logical maximum time for reducing the OOCYST concentration to below one OOCYST/100 mL was determined to be the lesser of nine hours or 75% of the time an AQUATIC VENUE is closed in a 24-hour period. The goal is to ensure an AQUATIC VENUE is free of viable Cryptosporidium OOCYSTS, or at least has the number below an infective concentration, every day the AQUATIC VENUE opens to the public.
# Example of Equation
The actual calculation used to determine the amount of needed SECONDARY DISINFECTION is based upon the understanding that the treatment of recirculated AQUATIC VENUES involves serial dilution, whether for particulate removal or for rendering Cryptosporidium OOCYSTS non-infective. Assuming an initial load of 10⁸ OOCYSTS, recognizing the limit of an infective dose is one OOCYST/100 mL, and allowing for a 99.9% reduction in infective OOCYSTS by the SECONDARY DISINFECTION SYSTEM, the needed flow through the SECONDARY DISINFECTION SYSTEM can be derived as given in the MAHC code. For a 100,000 gallon (378,541 L) AQUATIC VENUE which is closed 12 continuous hours out of every 24 hours, 75% of which is 9 hours: 100,000 x {3.29 / (60 x 9)} = 609 gpm, where the numerator 3.29 is approximately the natural log of the required concentration reduction (from about 26.4 OOCYSTS/100 mL down to one OOCYST/100 mL, with the 99.9% per-pass reduction folded in). Therefore, the 100,000 gallon (378,541 L) AQUATIC VENUE would require a SECONDARY DISINFECTION SYSTEM with a flow rate of at least 609 gpm. If this AQUATIC VENUE is designed with a two hour filtration TURNOVER rate, the flow through the filters would be 833 gpm. An owner or designer can choose to size the SECONDARY DISINFECTION SYSTEM to be 609 gpm, 833 gpm, or anything in between. If the owner or designer chooses to size the SECONDARY DISINFECTION SYSTEM equal to the filtration flow rate (833 gpm), the time it would take to reduce 10⁸ OOCYSTS to 1 OOCYST/100 mL would be 6.6 hours instead of 9.
# An example of how to calculate the needed flow is as follows:

Q = V x {ln(C0/Cf) / (60 x t x R)}

where Q is the flow through the SECONDARY DISINFECTION SYSTEM (gpm), V is the AQUATIC VENUE volume (gallons), C0 is the initial OOCYST concentration (OOCYSTS/100 mL, based on 10⁸ OOCYSTS dispersed in the venue volume), Cf is the final concentration of one OOCYST/100 mL, t is the available treatment time (hours), taken as the lesser of nine hours or 75% of the daily closure time, and R is the fractional OOCYST reduction per pass through the SECONDARY DISINFECTION SYSTEM (0.999 for a 3-log system).
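A minimal calculator for this sizing, assuming the fully mixed serial-dilution model above (10⁸ OOCYSTS, a one OOCYST/100 mL target, a 99.9% per-pass reduction, and a treatment time equal to the lesser of nine hours or 75% of the daily closure); the small difference from the annex's 609 gpm figure reflects rounding:

```python
import math

# Minimal sketch of the serial-dilution sizing described above.

ML_PER_GAL = 3785.41

def secondary_flow_gpm(volume_gal, closed_hours, oocysts=1e8,
                       target_per_100ml=1.0, per_pass_reduction=0.999):
    t_hours = min(9.0, 0.75 * closed_hours)
    hundred_ml_units = volume_gal * ML_PER_GAL / 100.0
    c0 = oocysts / hundred_ml_units                 # initial oocysts per 100 mL
    ln_reduction = math.log(c0 / target_per_100ml)  # ~3.27 for this example
    return volume_gal * ln_reduction / (60.0 * t_hours * per_pass_reduction)

print(round(secondary_flow_gpm(100_000, 12)))  # ~607 gpm (annex rounds to 609)
```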
# Flow Rate Measurements
Consideration was given for simplifying the sizing of the SECONDARY DISINFECTION SYSTEM and having the flow rate through the SECONDARY DISINFECTION SYSTEM equal to the overall treatment system flow rate. While this was initially recommended by the MAHC, ultimately this approach was rejected. A basic premise of the MAHC is to establish performance-based STANDARDS supported by data and science whenever possible. Sizing the SECONDARY DISINFECTION SYSTEM equal to the overall treatment system flow rate, while simplifying the design and operation of the facility, does not meet any defined criteria for reducing or eliminating risk to the PATRONS using the AQUATIC FACILITY. It was felt that establishing specific criteria for sizing the SECONDARY DISINFECTION SYSTEM independent of the criteria for sizing other treatment system processes (e.g. filtration flow rate) was the approach most likely to protect the public's health.
# Maximum Concentrations
In developing this approach, the MAHC considered establishing maximum permissible concentrations of OOCYSTS, which would be monitored and verified, but the MAHC rejected that approach as impractical since this would require actual lab testing.
Establishing a concentration-based STANDARD for the water cannot readily be implemented because:

- There is no practical method to rapidly determine the number of OOCYSTS in the water and thus no method to enforce the STANDARD.
- There are multiple and interrelated biological variables in exposure estimations. These include the number of OOCYSTS released per fecal incident, the number of incidents per day, strain differences in pathogenicity, the amount of water swallowed, and differences in individual susceptibility.
- The circulatory patterns in facilities are complex and unique to each AQUATIC FACILITY.
Requiring that the SECONDARY DISINFECTION SYSTEM deliver a treatment that ensured the OOCYST concentration was reduced to a specified level would require multiple biological measurements that are not practical for routine use.

# Third Party Validation

Validation is a process by which any UV unit is tested against a surrogate microorganism in order to determine its performance. Validation is required because there is no on-line test of a UV unit's ability to disinfect and, due to the relatively short contact time, it is impossible to size units accurately based on calculations alone.
It is important to note that evidence of testing is not the same as validation.
Validation must adhere to the following criteria:
- Follow one of the approved validation systems, preferably the USEPA DGM 2006,
- Have been carried out by a genuine third party, and
- Include all the required validation factors and RED bias.
The validated performance is based on the flow and transmissivity of the water to be treated. Therefore, it is essential that the system is used within its validated performance range. A system operated outside its validated range is NOT acceptable.
# Validation Factor
The validation factor is used to account for statistical variations in the recorded data during third party testing. The validation factor is required to ensure that the equipment's actual performance will always be equal to or better than its validated performance. This figure can be between 15% and 35% depending on the quality of the testing and must be included in any validated performance curve.
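As an illustration, the sketch below applies a validation factor as a simple percentage derating of the tested RED (reduction equivalent dose); treating the factor this way is an assumption for clarity, not the formal dose-crediting calculation:

```python
# Minimal sketch: apply a validation factor so that claimed performance is
# always conservative relative to test data. The 15-35% range comes from the
# text; the simple percentage derating is an assumption of this sketch.

def derated_performance(tested_red_mj_cm2: float, validation_factor: float) -> float:
    """Reduce the tested RED by the validation factor to get claimable performance."""
    if not 0.15 <= validation_factor <= 0.35:
        raise ValueError("validation factor expected between 15% and 35%")
    return tested_red_mj_cm2 * (1.0 - validation_factor)

print(derated_performance(60.0, 0.30))  # 60 mJ/cm^2 tested -> 42 mJ/cm^2 claimed
```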
# Transmissivity (Transmission)
The transmissivity (often called transmission) of the water to be treated is an important design factor in sizing a UV system. Transmissivity is normally quoted as a percentage value in a 1 cm, 4 cm, or 5 cm cell and is measured with a UV spectrophotometer.
In many water treatment applications, this value will vary considerably, but AQUATIC VENUES are for the most part consistent, due to the bleaching effect of the CHLORINE used as a residual disinfectant.
Typically AQUATIC VENUES will have a transmission of between 94% and 95% in a 1 cm cell, with splash pads and other INTERACTIVE WATER PLAY VENUES between 92% and 94%.
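Because transmissivity is quoted per cell path length, the Beer-Lambert relationship T_d = (T_1cm)^d can restate a 1 cm value for 4 cm or 5 cm cells. A minimal sketch:

```python
# Beer-Lambert sketch: transmittance measured in a 1 cm cell restated for
# other cell path lengths as T_d = T_1cm ** d (both as fractions).

def uvt_at_path(t_1cm: float, path_cm: float) -> float:
    """Transmittance over `path_cm` given the 1 cm transmittance."""
    return t_1cm ** path_cm

for path in (1, 4, 5):
    print(f"{path} cm cell: {uvt_at_path(0.94, path):.1%}")
# 94% in a 1 cm cell corresponds to roughly 78% in 4 cm and 73% in 5 cm.
```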
The installation of a UV unit itself will increase the transmission by perhaps 2% due to the improvement in the POOL water quality so the values noted above refer to a situation where a UV unit is installed and operational.
Design transmissions over 94% are not recommended, and exceptionally heavily loaded AQUATIC VENUES may consider using a lower number as a design basis.
It is also important to understand that as transmission is reduced, the performance of the equipment is reduced and the RED bias increases, requiring the UV system to deliver more performance. For this reason, actual field performance at 94% transmissivity can be 40% lower than the same equipment's validated performance at 98% transmissivity. When presented with validated performance data at 98% transmission, operators should therefore be aware that the equipment may deliver only about half the performance when installed.
# Validation Range
A validated system will have different performance levels at different water qualities and flows. The relationship between these is traditionally represented as a performance curve, where the performance can be noted at any point on the curve. However, the lowest transmission test point and the highest flow tested are normally considered the extents of the validated range. This means that any UV unit tested at 95% transmission and above is NOT validated at transmissions lower than 95%. For the same reason, a unit tested at a maximum flow of 500 gpm is NOT validated for any flow over 500 gpm.
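A compliance check against the validated envelope can be stated simply. This sketch (with hypothetical numbers) rejects any operating point with flow above, or transmissivity below, the tested extents:

```python
# Minimal sketch: a unit is only validated inside its tested envelope, so an
# operating point is rejected if flow exceeds the maximum tested flow or
# transmissivity falls below the minimum tested value.

def within_validated_range(flow_gpm, uvt_percent,
                           max_tested_flow_gpm, min_tested_uvt_percent):
    return flow_gpm <= max_tested_flow_gpm and uvt_percent >= min_tested_uvt_percent

# Hypothetical unit tested up to 500 gpm at 95% UVT:
print(within_validated_range(450, 96, 500, 95))  # True  -> acceptable
print(within_validated_range(520, 96, 500, 95))  # False -> flow above tested max
print(within_validated_range(450, 94, 500, 95))  # False -> UVT below tested range
```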
Validation factors can reduce equipment validated performance by 30%, so it is essential that systems without validation factors built into performance curves are not considered validated.
The performance of a UV system in the field is measured by a combination of flow and intensity readings from the UV sensors. Performance in the field can be verified on inspection by regulators who will compare actual sensor readings with those indicated on the performance charts, so these charts must be retained at the AQUATIC FACILITY for each validated system.
UV equipment is utilized for its ability to disinfect CHLORINE-tolerant pathogens and for its ability to reduce combined CHLORINES in the POOL water. For the latter, typically a calculated dose of 60 mJ/cm² is utilized, based on the total UV-C and UV-B spectrum. This is similar to the validated dose requirements of the SECONDARY DISINFECTION SYSTEMS.
Where UV is fitted as a SUPPLEMENTAL TREATMENT SYSTEM, the CODE allows some operational and equipment concessions. Operators should note that the regulations as stated represent BEST PRACTICE, but where specific circumstances dictate, the equipment specifications may be reduced.
For a SUPPLEMENTAL TREATMENT SYSTEM, the operator may consider reducing the dose applied to the process. This will reduce performance accordingly, and operators should consider such a reduction carefully and assure themselves that the equipment will still provide a beneficial level of performance.

# Ozone

Ozone is effective at inactivating a broad range of microorganisms, 240 including Pseudomonas aeruginosa, 241 along with any other microorganism potentially found in AQUATIC VENUES, and is a strong oxidizer. Exposure to ozone gas can result in irritation to the eyes and respiratory tract if it is not generated and handled correctly. Therefore, the Occupational Safety and Health Administration (OSHA) has identified a time weighted average (TWA) of 0.1 PPM as the permissible exposure limit for ozone.
# Third Party Validation
Validation is a process by which any ozone unit is tested against a surrogate microorganism in order to determine its performance. Validation is required because there is no on-line test of an ozone unit's ability to disinfect and, due to the relatively short contact time, it is impossible to size units accurately based on just calculations.
It is important to note that evidence of testing is not the same as validation. Validation requirements for ozone systems are included in NSF/ANSI Standard 50; this material is not an Annex but a portion of the ozone section in the whole STANDARD and was published in the 2013 STANDARD.
# Suitable for Use
All materials must be ozone resistant.
The strong oxidizing power of ozone shall be considered when choosing materials for pipes, valves, gaskets, pump diaphragms, and sealant. Materials for water piping, tanks, and other conveyance shall be nearly inert.
For generators that produce ozone under pressure and utilize a negative pressure (Venturi) ozone delivery system, or introduce ozone under pressure (such as a pressurized diffuser into an atmospheric holding tank), any leak or break in the system will immediately cause the release of ozone gas.
Suitable materials and their uses are: Properly applied Teflon tape may be used successfully for sealing joints; however, threaded fittings shall be avoided where possible. Hypalon® and silicone sealers which do not contain rubber filler are also successful.
# Copper / Silver Ion System

The scientific data available on the efficacy of these systems is predominantly for bacterial inactivation 242,243 and usually includes FREE AVAILABLE CHLORINE. There is limited scientific literature that documents the efficacy of these systems on viruses and parasites.
Given the importance and frequency of recreational water illnesses associated with these other microorganisms (viruses and parasites), it is essential that DISINFECTION chemicals / systems are also effective against such microorganisms as well.
# Ultraviolet Light / Hydrogen Peroxide Systems

UV/peroxide systems have not been registered by the US EPA as primary disinfectant systems for recreational water. Although UV is a disinfectant, it does not impart a persistent residual disinfecting property to water. To overcome this, UV/peroxide systems claim, or in some cases imply, that the inclusion of hydrogen peroxide in the system supplies a disinfectant in the bulk water in the AQUATIC VENUE. Hydrogen peroxide is used as a hard surface disinfectant and has been granted registration for this purpose by the US EPA. When used as a hard surface disinfectant, hydrogen peroxide is normally used at around 3%. When used in recreational water, hydrogen peroxide is used at 27 to 100 PPM (mg/L), which is 1111 and 300 times, respectively, more dilute than that used on hard surfaces. At these low concentrations hydrogen peroxide is not an effective disinfectant. Thus, UV/peroxide systems do not provide a persistent disinfectant in the bulk of the water in the AQUATIC VENUE. Further, hydrogen peroxide is not registered by the US EPA for use as a disinfectant in recreational water. Since it is not EPA-REGISTERED, the use of hydrogen peroxide as a disinfectant, or any marketing claim that implies hydrogen peroxide provides any biological control, is a violation of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). UV/peroxide systems should not be used as a SUPPLEMENTAL TREATMENT SYSTEM on CHLORINE-treated AQUATIC VENUES. The addition of hydrogen peroxide to a CHLORINE-treated POOL will inactivate the hypochlorous acid. If sufficient hydrogen peroxide is added, the hypochlorous acid will be completely eliminated and no disinfectant for inactivation of pathogenic organisms will remain.
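The dilution comparison above is straightforward arithmetic; the snippet below reproduces the 1111x and 300x figures from a 3% (30,000 ppm) hard-surface solution:

```python
# Arithmetic behind the dilution comparison: a 3% hard-surface solution is
# 30,000 ppm, so recreational-water doses of 27-100 ppm are far more dilute.

HARD_SURFACE_PPM = 3.0 / 100.0 * 1_000_000   # 3% w/w ~= 30,000 ppm

for pool_ppm in (27, 100):
    print(f"{pool_ppm} ppm is {HARD_SURFACE_PPM / pool_ppm:,.0f}x more dilute")
# -> 27 ppm is 1,111x and 100 ppm is 300x more dilute than the 3% solution.
```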
# Water Quality Testing Devices and Kits

Water Quality Testing Devices (WQTDs) should be stored as specified by the manufacturer's instructions. Failure to properly store WQTDs will result in incorrect readings. The 2013 edition of NSF/ANSI Standard 50 currently contains specified precision and accuracy requirements for WQTDs measuring pH, free and total CHLORINE, and free and total bromine. There are three levels of accuracy and precision, designated Levels 1, 2, and 3, with the highest accuracy and precision in Level 1 devices. The test water specifications include alkalinity, calcium hardness, and total dissolved solids.
It is important for a QUALIFIED OPERATOR to use equipment that is easy to read and as objective as possible. The current, common means of testing AQUATIC VENUE water using colorimetric test kits is subjective because the color and intensity must be compared visually. Titration testing for free and combined CHLORINE is an objective test, accurate to 0.2 mg/L, with an easily recognizable start and end point. Therefore, titration testing is recommended over colorimetric testing. Due to the use of inconsistent concentration gradations (i.e., the difference in concentration between adjacent color blocks) and the subsequent rapid darkening of the color blocks (e.g., above 1.5 mg/L), the accuracy of colorimetric test methods is likely to be lower than for titration test methods. Visual colorimetric methods are accurate only to +/- half the difference
between the adjacent color blocks, and thus the confidence limits for these methods are wider at higher concentrations (e.g., above 1.5 mg/L). Where portable colorimeter test kits are affordable, these are the most accurate kits available for use at POOLSIDE.
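The "+/- half the difference between adjacent color blocks" rule can be made concrete. The sketch below uses hypothetical color-block values, not those of any particular kit:

```python
# Minimal sketch of the accuracy rule above: a visual colorimetric reading is
# good to about +/- half the spacing between the adjacent color standards.
import bisect

BLOCKS = [0.0, 0.5, 1.0, 1.5, 3.0, 5.0, 10.0]  # hypothetical mg/L color blocks

def reading_uncertainty(reading_mg_l: float) -> float:
    """Half the gap between the color blocks that bracket the reading."""
    i = bisect.bisect_left(BLOCKS, reading_mg_l)
    i = min(max(i, 1), len(BLOCKS) - 1)
    return (BLOCKS[i] - BLOCKS[i - 1]) / 2.0

print(reading_uncertainty(1.0))  # +/- 0.25 mg/L where the blocks are close
print(reading_uncertainty(4.0))  # +/- 1.0 mg/L once the blocks spread out
```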
Most water tests involve color development. Interferences in the water can cause them to produce a different color, produce the wrong color intensity, or be unable to produce the expected color. Color matching tests for CHLORINE/bromine provide accuracy equal to approximately half the difference between known values of the color STANDARDS. As the CHLORINE/bromine concentration rises, the difference between the known color STANDARDS grows, so the readings become more subjective. The following MAHC Table 4.7.3.5 summarizes some common interferences and how they impact the test color in disinfectant tests. If the water sample indicates high CHLORINE levels, usually over 10 PPM (10 mg/L), the DPD reagents may partially or totally bleach out, resulting in a false low or zero CHLORINE reading. The addition of double the quantity of DPD reagent during testing may minimize this interference, or the analyst can use a smaller sample size or dilute the sample with distilled or deionized (DI) water. Reference the WQTD's use instructions to guard against false readings and interferences.
# High Chlorine Effects on pH Testing:
If the CHLORINE reading is high, the tester must wait until it is lowered to a normal level before retesting the pH, to assure an accurate reading. Some analysts neutralize the DISINFECTANT first by adding a drop of CHLORINE neutralizer (i.e., sodium thiosulfate). This is not recommended since the reaction between thiosulfate and CHLORINE can change the pH of the sample and give an inaccurate reading.
# High Chlorine Effects on Total Alkalinity Testing:
High CHLORINE will affect the Total Alkalinity reading. Some reagents will bleach out and the color change will be from blue to yellow instead of the expected green to red/pink. Refer to the WQTD's instruction manual to prevent false readings and interferences.
# Metals:
Be sure to identify the source of the metal in order to remove the problem for the AQUATIC FACILITY owner. Likely sources are copper from algaecides or corroded pipes, or iron and manganese from the fill water.
# Effect of Metals on Calcium Testing:
For the calcium test, copper, iron, and manganese dissolved in the water may prevent the expected blue color (indicating the end of the test) from fully developing. As the end of the test approaches, the color fades to a light purple instead of blue, which results from the metals in the water. Repeat the test, but before proceeding with the test instructions, add 5 or 6 drops of titrant. Remember to add the 5 or 6 drops to your final drop count when finished to determine the calcium concentration.
# High Calcium Effects on Chlorine Testing:
When high calcium levels are in the water, the sample may turn cloudy with the addition of DPD #1 liquid reagent, which is alkaline. Addition of DPD #2 liquid reagent may not clear up the cloudiness. With high calcium water, adding DPD #2 prior to adding DPD #1 will acidify the sample, turning it slightly pink, and the cloudiness will not appear. Add DPD #1 to complete the test and obtain the proper pink color for the amount of CHLORINE in the water.
# Potassium Monopersulfate Shock:
Potassium monopersulfate produces a false high combined CHLORINE reading whenever it is present in the water. Monopersulfate will also produce a false positive FREE RESIDUAL CHLORINE reading when the monopersulfate concentration is high (over 25 PPM). Monopersulfate interference can be removed by a variety of products found in the marketplace. Refer to the WQTD's instruction manual to prevent false readings and interferences.

# Disinfectant-Tolerant Microorganisms

It is known that certain microorganisms, because of their ecology and/or structure, can be tolerant of chemical disinfectants (e.g., chlorine, bromine). Legionella pneumophila, Pseudomonas aeruginosa, Cryptosporidium parvum, Entamoeba histolytica cysts, and Mycobacterium avium complex are a few examples of pathogenic microbes that have been reported to show some tolerance to chemical disinfectants. In addition, sessile microorganisms (those in biofilm) are likely to receive additional protection from oxidizers (such as chlorine) when the exposure concentration of these oxidizers is reduced at the interface with the biofilm due to reaction with biofilm material.
Biofilm is a complex community of microorganisms which attach to the sides, piping, and filters of AQUATIC VENUES 249 . Even at elevated concentrations, oxidizing and nonoxidizing chemicals have reduced effectiveness in controlling biofilm when their concentrations and contact times are not sufficient for penetrating the biofilm 250 . Biofilm formation in AQUATIC VENUES is also a concern because microorganisms in the biofilm or the biofilm itself can detach and multiply 251 . Following BEST PRACTICE guidelines for AQUATIC VENUE cleaning and continuous DISINFECTION is critical to avoid biofilm growth and expansion problems 252,253 .
If biofilm-related problems arise, it can be useful to incorporate biofilm sampling to develop a comprehensive evaluation of the risk factors for water quality impairment and potential solutions to identified problems 254 .
MAHC Annex Table 4.7.3.6 (below) identifies microorganisms for which chlorination may have, or is known to have, reduced efficacy 255,256,257. MAHC Annex Table 4.7.3.6 also identifies methods that may be used to detect these microbes in AQUATIC VENUE systems, but the methods identified are not necessarily rapid. Additional research is needed to evaluate the benefits of microbiological testing data for AQUATIC VENUES, especially for improving public health protection. This is particularly important for the protozoans, amoebas, and sessile bacterial pathogens that co-exist in biofilms. It should be noted that the use of fecal indicator organisms for AQUATIC VENUE water quality evaluation may not be sufficient for certain AQUATIC VENUE operation, maintenance, and public health investigations, especially in public health investigations related to inhalation, skin breaks, or ocular exposure routes. Since health risks in AQUATIC VENUES and similar environments may be fecal or non-fecal in origin, investigation of fecal indicators and non-fecally-transmitted microorganisms (e.g., P. aeruginosa, S. aureus, and Legionella spp.) may be warranted.

Notes to MAHC Annex Table 4.7.3.6:

1. NOTE:
   a. Many elderly and/or immuno-compromised people use SPAS, making them more susceptible to disease;
   b. P. aeruginosa can be tolerant of CHLORINE and is found in biofilm;
   c. Hot tub folliculitis is the most common illness associated with hot tubs;
   d. Coliform testing is not an indication of P. aeruginosa contamination; and
   e. Since this is a non-reportable disease, we have no information on the incidence of this disease.
2. Grobe, Wingender, & Flemming, 2001; Price, 1988; Clements, 2000.
3. Muraca, Stout, & Yu, 1987; Clements, 2000.

It is not feasible or cost effective to test for all infectious organisms. Therefore, MAHC Annex Table 4.7.3.6 identifies those organisms which have readily available test methods and/or cause illnesses that are common, very serious, or fatal. It is important to note that these test methods may not allow for rapid remediation, decision making, or public health intervention on a timely basis.
The Heterotrophic Plate Count (HPC) method has not been included in the list of microbial water quality tests in MAHC Annex Table 4.7.3.6. While HPC data are generally a good indicator of microbial water quality and efficacy of POOL operations (e.g., water treatment), this parameter has been reported to show no correlation to the presence of Legionella 258, planktonic pathogens 259, or the presence of biofilm 260. HPC tests (as do all culture tests) under-report the actual concentration of viable bacteria. Therefore, it is recommended that the use of this test be restricted to assessing the level of planktonic, non-pathogenic bacteria only. HPC data are not sufficient to assess the public health risk of POOLS, SPAS, and waterparks 261.
Since the MAHC is intended to be a living document with changes anticipated as our knowledge increases, it is prudent to acknowledge that a paradigm shift is occurring in the world of microbiology that likely will impact how pathogen testing will be conducted and interpreted in the future. Culture tests are gradually being replaced with culture-independent test methods such as Polymerase Chain Reaction (PCR) testing and microarray testing. Years ago, when PCR was first used commercially, the cost of the tests was prohibitive. Now test costs have decreased and are competitive with culture-dependent tests. A recent development is the commercialization of microarray testing, which can screen for the presence of a wide variety of bacterial and viral pathogens without the need for an isolation step. However, the costs associated with microarray testing remain prohibitive as of this MAHC publication.
EPA is re-evaluating the use of culture-based fecal indicator bacteria (FIB) tests in recreational water testing (i.e., total and fecal coliforms, E. coli, and Enterococcus) and is researching the use of PCR for Bacteroides and Enterococcus testing as a possible replacement for these culture tests. Two of the most compelling reasons for this re-evaluation are:

- Incubation times for culture tests prevent quick decision-making to minimize public exposure to water with a potentially elevated disease risk, and
- Molecular tests are generally considered to have higher specificity (lower false positive rates) than traditional culture tests.
PCR can be a good method for investigating whether pathogenic microbes were present in AQUATIC VENUES (e.g., sampling filter backwash) since the technique detects the DNA of pathogens regardless of whether they are live, dead, or viable-but-not-culturable. Another benefit is that PCR tests can be completed in hours versus days for culture tests. However, while PCR can be effective for determining whether pathogens have been present in an AQUATIC VENUE, the technique is less effective as a measure of DISINFECTION effectiveness since it detects DNA from both viable and non-viable organisms. New techniques, such as the use of propidium monoazide (PMA), have been reported to enable PCR to characterize the viability status of microorganisms, so in the future PCR may be an effective option for DISINFECTION studies 262.
# Water Replenishment System
A WATER REPLENISHMENT SYSTEM allows for POOL water to be removed from the POOL and properly disposed of so that it can be replaced with fresh water containing lower concentrations of dissolved CONTAMINANTS. A WATER REPLENISHMENT SYSTEM should be used to control the dissolved organic CONTAMINANT concentrations (e.g., sweat, oils, chlorination by-products, and urine) and dissolved inorganics (e.g., salts and metals) because POOL filtration systems are not effective at removing dissolved CONTAMINANTS.
# Discharge and Measure
A means of intentionally discharging and measuring or calculating the volume of discharged POOL water (in addition to the filter backwashing system) should be provided and designed to discharge a volume of water of up to four gallons (15 L) per BATHER per day per facility through an air gap. The discharged volume can be calculated from the pump flow rate (gallons per minute) and the duration of backwashing. Water replacement or replenishment rates of eight gallons (30 L) per BATHER per day per AQUATIC VENUE 263,264,265 have been widely used. PWTAG 266 states that as much as half of the recommended amount could be associated with filter backwashing. There does
not appear to be any research to support the use of the 30 L/day/BATHER number used abroad. So, since four gallons (15 L)/day/BATHER is roughly half of this amount (and typically met by filter backwashing alone), it seems a reasonable place to start incorporating this practice into operations. A requirement could be made once the science is there to support a higher or lower value. With a WATER REPLENISHMENT SYSTEM in place, AQUATIC FACILITY operators will be able to experiment with higher WATER REPLENISHMENT rates to obtain improved water (and indoor air) quality. It should also be easy to comply with any future regulations related to WATER REPLENISHMENT as only the flow rate would require adjustment. WATER REPLENISHMENT for a large AQUATIC FACILITY would be based on the number of BATHERS in the entire AQUATIC FACILITY (not the total number swimming in a particular aquatic venue on a given day, since most BATHERS are expected to distribute over a range of aquatic venues and/or rides on a given day). However, WATER REPLENISHMENT should be proportional to the number of BATHERS in each individual treatment system. It would not be allowable to send to waste all of the water from the WAVE POOL and none from the other attractions (unless the water was shared through a combined aquatic venue treatment system).
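A minimal sketch of this accounting, using the four gallons (15 L) per BATHER per day target from above and crediting backwash volume (pump gpm x minutes) toward it; the numbers in the example are illustrative:

```python
# Minimal sketch of the replenishment accounting above: target discharge of
# 4 gal per bather per day, with backwash volume credited toward that target.

GAL_PER_BATHER_DAY = 4.0

def replenishment_needed_gal(bathers_per_day, backwash_gpm, backwash_minutes):
    """Additional water to waste after crediting filter backwash volume."""
    target = bathers_per_day * GAL_PER_BATHER_DAY
    backwash_credit = backwash_gpm * backwash_minutes
    return max(0.0, target - backwash_credit)

# 500 bathers -> 2,000 gal/day target; a 10 min backwash at 200 gpm covers it.
print(replenishment_needed_gal(500, 200, 10))   # 0.0
print(replenishment_needed_gal(500, 0, 0))      # 2000.0
```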
# Alternate Systems

The code allows for use of alternate systems to meet the intent for removal of organic compounds and salts. Currently, the MAHC is not aware of such systems.

# Deck Slope

Fundamentally, the referenced sources all seek to eliminate standing water from the DECK, typically recognizing that smoother surfaces convey water more efficiently than rougher ones. Relating slopes to texture, rather than specific materials, provides the ability for any otherwise suitable DECK material or finish to be considered by the adopting jurisdiction.
There is an inherent conflict in sloping of DECKS. Steeper slopes provide more construction tolerance and surety in conveying water, particularly in active soil conditions. Shallow slopes are required to meet accessibility guidelines, particularly for cross-slopes. It is the intent of this section to encourage positive and proper drainage without running afoul of accessibility guidelines.
# Cross Connection Control

Consult the local AHJ regarding specific chemical handling, use, and proper disposal, including discharge to the watershed or sanitary sewers where appropriate.
# No Drain
This requirement prevents sewage from backing up into the AQUATIC VENUE water. This isolates the treated system and does not allow mixing of other sources of water that could contaminate.
# Materials / Slip Resistance

ASTM is developing a STANDARD to define a method to test for slip resistance.
# Slip Resistance

While much research has been done and several STANDARD tests created for defining "slip resistance," no industry STANDARDS exist specifically related to aquatic environments. Most studies have been performed in the interest of occupational SAFETY, providing guidance with respect to work surfaces and footwear. The most commonly accepted test for slip resistance, using a device called the James Machine (ASTM D2047), is not suitable for testing wet surfaces and is not portable for testing in the field 267.
# Carpet

Carpet and artificial turf have been found to be inappropriate finish materials for the wettest area immediately around the POOL, i.e., the PERIMETER DECK. Although the materials that carpet is manufactured from are durable and do not support mold growth, when they are installed over a relatively impermeable surface, water flows very slowly through the carpet. Soil and CONTAMINANTS entering into the carpet are not easily removed. Since the carpet stays wet longer, soil and CONTAMINANTS remain in the carpet, and mold and algae growth can occur. Therefore carpeting is not an acceptable finish material in the wet PERIMETER DECK.
Finish materials for the PERIMETER DECK should not block DECK drains or impair water flowing to DECK drains.
Carpeting may be installed beyond the DECK drains, i.e. DRY DECK.
# Wood

Properly treated or composite wood materials may be a suitable material for DRY DECKS provided all other decking requirements are maintained. Fasteners must be regularly inspected to ensure structural integrity and that all heads are flush or recessed into the DECK surface.
# Dry Deck

Regional materials, local practices, and particular facility design intentions vary widely with respect to DRY DECK. This section intends to provide the opportunity for regulatory oversight of DRY DECK, without limiting these variables best understood by the AHJ.
# Landscaping

It is acknowledged that landscaping near AQUATIC VENUES is not an uncommon practice in enhancing an AQUATIC VENUE environment. Landscape materials themselves and the design of special AQUATIC VENUES vary so widely as to require special consideration with respect to landscaping. This section intends to provide the opportunity to allow landscaping, but only through the lens of the AHJ.
The landscaping materials are not intended to be placed in the wet PERIMETER DECK area. It is assumed here that the POOL DECK will be designed and sloped to prevent drainage from landscaping materials from reaching the POOL.
For an outdoor AQUATIC VENUE, it is not possible to prevent wind from moving dirt, bugs, plant material, etc. around and perhaps into the AQUATIC VENUE. The landscape designer must consider the type and location of landscape materials placed inside or outside of an outdoor AQUATIC VENUE ENCLOSURE.
# Textured Surface
The walking surface should not be so rough as to cause injury or discomfort to BATHERS. ANSI defines a trip hazard as a level change greater than ¼ inch (6.4 mm); other definitions include an abrupt or unexpected level change in surfaces.

The separated areas may have differing uses, flow rates, currents, or water depths.
# Perimeter Overflow System
The MAHC Committee defines WING WALLS as interior elements of the POOL and interior to the PERIMETER OVERFLOW SYSTEM, so the MAHC did not feel it was appropriate to say that WING WALLS longer than some specified length should require PERIMETER OVERFLOW SYSTEMS. It would be a function of the width of the WING WALL as to whether or not it can be properly constructed. If the POOL has a gutter system, it would probably need four feet (1.2 m) of width to get a normal trough on either side. SKIMMERS could be achieved for narrower walls because they could be staggered.
# Pool Perimeter

WING WALLS do not contribute to the overall POOL perimeter so should not be included in AQUATIC VENUE perimeter calculations that are used as part of multiple critical design calculations.
# Deck Drainage

The MAHC did not feel that DECK drains should be required on WING WALLS since they are considered part of the POOL and not subject to regular foot traffic. As for DECK level POOLS, the WING WALLS would be at or below water level, making drains impractical.
# Islands

A seven foot (2.1 m) minimum clearance overhead is required since it is consistent with requirements of building code minimum ceiling clearances.
# Heated Decks

Heated DECKS are occasionally used in cold climates to provide pedestrian paths to and around outdoor heated AQUATIC VENUES. This section provides that when heated DECKS or snow-melt systems are provided, a minimum slope must be uniformly provided. Clear delineation is required because icy areas and/or pathway edges near otherwise DRY DECK pose an unsafe condition.

# Diving Envelope

This code is designed to encourage POOLS to be built to the STANDARDS of the agency that will certify the diving at the AQUATIC FACILITY. The code dimensions are purposely a compilation of the most conservative STANDARDS of diving envelope dimensions and are in no way intended to supersede the certifying agencies' dimensions, but instead are intended to be used only when there is no certifying agency for the AQUATIC FACILITY. Since NCAA, USA Diving, and FINA do not have STANDARDS for boards less than one meter in height, the State of Michigan table (R325.21.33, Table 1), shown below as MAHC Table 4.8.2.2, was revised to the most conservative STANDARD found for 0.5-meter and 0.75-meter boards. These minimum dimensional requirements were then dictated to be more conservative in certain instances, based largely on interpolations.
Concerning use of diving boards higher than 1 meter, these boards are not recommended for non-competitive use. If the boards are constructed to this code or NCAA standards, then non-competitive use can be allowed under careful adult supervision or with QUALIFIED LIFEGUARDS on duty. However, non-conformance with these STANDARDS is unsafe for recreational diving purposes.
# Steps and Guardrails

Although there are some national data on spinal cord injuries (SCIs) in general, data on diving-specific SCIs are limited, particularly for SCIs involving public POOL-related competition diving.
# General Data on Spinal Cord Injuries:
For SCIs in general, approximately 40 SCIs/million population occur each year in the US (about 12,400 injuries for 2010) with approximately 4.5% related to diving injuries. 268 SCIs are a catastrophic public health problem leading to disability and decreased life expectancy 269 with a large economic and social burden for those who suffer the injury 270,271 .
# Non-Deck Level Diving, Competition Diving, and SCIs:
Data related to SCIs occurring as a result of competition diving off starting platforms are limited. Since starting platforms are several feet above the POOL, the entering velocity of swimmers is greater than for DECK-level diving, making it more difficult to alter trajectory once the dive is executed 272. One large study investigated 74 SCIs in non-competitive divers occurring with use of springboards and/or jumpboards; 45% of the POOLS were public 273. Of these injuries, 12.2% occurred in water less than 4 feet (1.2 m) deep; 66.2% occurred in water less than 5 feet (1.5 m); and 94.6% occurred in water less than 6 feet (1.8 m). All SCIs occurred in water of less than 7 feet (2.1 m). The MAHC requires that starting blocks be removed, if possible, or blocked off to prevent recreational divers from using them when not in use by competitive swimmers.
Data demonstrates that competitive swimmers can be trained to perform shallow water dives from starting blocks to reduce the risk of SCIs 274,275,276 . As a result, competitive aquatics governing bodies (e.g., FINA, U.S.A. Swimming, NCAA, NFSHSA, YMCA) allow starting blocks to be placed over water as shallow as 4 feet (1.2 m) in depth as long as competition is conducted under the auspices of the governing body or by a coach or instructor. A progressive training regimen can be used so that diver training is conducted in deeper water until the diver has mastered the technique before the certified personnel approve their starting block entries into shallower depths 277 .
However, further data are needed on the adequacy of an intervention, like training, that relies on correctly performing a technique to prevent injury; aquatics governing bodies state they have not documented injuries since this progressive training regimen has been adopted. However, it is noted that high speed video of competing athletes during competition dives from starting platforms illustrates the following:

- About 3% of athletes diving into 4 feet (1.2 m) of water 278 (the pool had the minimum depth recommended for athletes using starting blocks) touched the bottom;
- Nearly half approached within 0.5 meters (1.6 ft) of the bottom;
- Over half exceeded head speed thresholds deemed possible to cause severe head trauma; and
- There was some anecdotal information suggesting some divers touched intentionally.
Conversely, filming of athletes diving into 7.5 feet (2.3 m) of water (the pools studied exceeded the Olympic competition depth of 6.5 feet (2.0 m) below starting platforms) showed that very few swimmers approach within even one meter (3.3 ft) of the POOL bottom 279. These data suggest that injury risk from using starting platforms is likely to be higher for older, presumably heavier, or inexperienced divers, particularly when diving into shallower depths.
# Future Directions and Research
The MAHC recommends that these national databases be re-analyzed with aquatics in mind to gather more detailed information on SCIs related to diving in treated AQUATIC VENUES, particularly public AQUATIC VENUES to further inform this discussion. Future analysis of national databases should be undertaken, if possible, to assess the occurrence of SCIs in competitive swimmers and platform heights and water depths at which the injury occurred. Historical analysis and peer-reviewed publication of data or reports collected by aquatics governing groups on SCIs and other diving injuries would also be important to understand POOL-specific diving injuries occurring in competitive swimmers and the efficacy of current progressive training or other interventions.
# Emergency Communication Equipment
A communication device is required in the Operations (MAHC 5.0) section of the MAHC, but it also needs to be considered in the design so the designer can plan for the wiring for such devices. Consider larger facilities or other types of facilities that may have a phone in a nearby building. Consider a telephone labeled with the location of the
phone/address. Some facilities may be so equipped to properly respond to an event and phones may not be required. Large AQUATIC FACILITIES with lifeguard/trained response may not need phones installed everywhere. The intent is for BATHERS to have access to a phone to call for help when help is not necessarily part of the AQUATIC FACILITY operation.
QUALIFIED LIFEGUARDS or other emergency response staff are to be trained and may have communication devices such as whistles or radios which initiate their emergency response which includes the ability to contact outside emergency personnel when necessary. Often AQUATIC VENUES can be at a distance from support personnel and the designer should consider methods for personnel to communicate whether via radio, telephone, intercom, or other method. For alternate communication systems or devices, the intent is that an emergency phone or communications system or device is immediately available to PATRONS from all AQUATIC VENUES within an AQUATIC FACILITY. Some alternate communication systems might include a handset or intercom system to a location that is constantly manned whenever the AQUATIC VENUE is open for use (e.g. a front desk at a hotel, the check in desk at a fitness club, or other continuously manned location); a commercial emergency contact device that connects to a MONITORING service, or directly to 911 dispatch; or devices that alert multiple staff on site when activated (e.g. pagers systems, cellular telephone systems and radio communication alert systems). Also see MAHC Section 5.8.5.2.1 for additional requirements.
Safety Equipment Required at Facilities with Lifeguards
# Lifeguard Chair and Stand Placement
This section refers to only those chairs that are permanently installed and does not indicate that a permanent chair or stand is required. The location of the chairs must give the QUALIFIED LIFEGUARD complete visibility to all parts of the zone of PATRON surveillance. The number of chairs is determined by the ability to provide surveillance of the AQUATIC VENUE by creating zones of PATRON surveillance. It is intended that the designer should be working with an aquatic consultant or the owner/operator to make sure the location of chairs and stands allows for clear line of sight.
# Lifeguard Chair and Stand Design
Chairs and stands are exposed to the elements; therefore, they should be made to withstand the environment. The intent of such a chair is to facilitate better surveillance; as such, the chair should be elevated sufficiently above the heads of BATHERS to provide a better view and combat glare. Considerations for the SAFETY of QUALIFIED LIFEGUARDS using these chairs should include access and egress, as well as BARRIERS to unauthorized access if installed at an elevation.
# UV Protection for Chairs and Stands
Protection from ultraviolet radiation exposure can include a shade attached to the stand, a shade structure external to the stand, or other types of shade such as surrounding features. The designer should consider which method will be employed to provide UV protection for the stand.

# Barrier Openings

The body of the code limits BARRIER openings to those that reject a 4 inch (10.2 cm) sphere. From a building code perspective, this is not consistently enforced, and these CODES do not regulate that small of an opening. Building codes allow standard 2 ¼ inch (5.7 cm) mesh fencing and are not necessarily specific for AQUATIC VENUES. Building codes typically dictate minimum height and proximity to property lines, unless it is a fall issue. With AQUATIC FACILITIES, we are mainly concerned with discouraging unauthorized entry / break-ins.
Emergency Exit Paths It is the intent of this section to prevent emergency egress routes from exposing building occupants to unguarded AQUATIC VENUE areas. It is not the intent of this section to permanently segregate multiple AQUATIC VENUES on the same site. Temporary or seasonal ENCLOSURES (properly maintained and employed) may be used to segregate paths of egress from a building or adjacent AQUATIC VENUE to SAFETY. For example, where a seasonal outdoor AQUATIC VENUE is operated in conjunction with a year-round indoor AQUATIC VENUE, a seasonal exit pathway separation ENCLOSURE may be used to maintain exiting in the off-season. During the outdoor swim season (when the outdoor aquatic venue is in operation), it is acceptable to egress via the AQUATIC VENUE DECK to EXIT GATES.
# Height

The MAHC discussed this issue at length. The prevailing "BEST PRACTICE" in the industry is for 4 foot (1.2 m) high fencing around unguarded AQUATIC FACILITIES. However, the MAHC decided to make the BARRIER height the same for all AQUATIC FACILITIES (6 feet or 1.8 m) since 4 foot fences are scalable even with smaller mesh. Generally, even unguarded AQUATIC FACILITIES have some hours of use, and these POOLS also need to discourage use outside of operational hours by youth and others. The MAHC's collective logic was that if an AQUATIC FACILITY is designed for unsupervised use at all times, then there is no real advantage to a fence higher than 6 feet (i.e., 8 feet or taller).
# Other Barriers Not Serving as Part of an Enclosure
The 42 inch (1.1 m) BARRIER height is consistent with STANDARD building CODE requirements for a guardrail, which serves substantially similar purposes. This height provides for consistency across CODES for like appurtenances.
# Gates and Doors
This section is intended to address large AQUATIC FACILITIES where there may either be multiple AQUATIC VENUES, multiple grade elevations, or both. EXIT GATES must be provided to permit adequate emergency egress. For example, an AQUATIC FACILITY with ten AQUATIC VENUES split between different grade elevations should have the required number of exits spaced reasonably around the perimeter and not all at one grade elevation.
# Wall Separating

A minimum overhead clearance of 6 feet 8 inches (2.0 m) is required since it is consistent with requirements of building code minimum doorway clearances. Materials that do not pose a possibility of physical injury may be suspended from the structure to help contain the INDOOR AQUATIC FACILITY environment.
Multiple Aquatic Venues The rationale for the 24 inch (61.0 cm) depth rule is that if adjacent water is not substantively deeper than the WADING POOL, there is no need to segregate the two. If it is the only AQUATIC VENUE within the facility, then normal fencing and perimeter ENCLOSURE requirements would apply. If WADING POOLS are part of a larger facility with other types of AQUATIC VENUES, then the requirements proposed in MAHC Section 4.12.9.2 would apply.
# Aquatic Venue Cleaning Systems
The MAHC encourages draining SPAS for cleaning. A vacuum likely would not be required for very small AQUATIC VENUES, such as SPAS less than 75 square feet (7.0 m²).
A simple wall brush with pole can adequately and efficiently clean the floor.
No Hazard Pumps shall not exceed 3 horsepower because the suction hydraulics of a larger pump drawing through the small vacuum tubing would force the pump to operate at unacceptable hydraulic conditions. Strong suction forces pose a greater risk of bodily harm in the event of a vacuum system mishap.
POOL vacuum systems must use suitably-sized pumps, proper diameter vacuum hoses, and reasonable hose lengths to provide optimum hydraulics for vacuuming operations. Conventional suction requirements call for a maximum 15 feet (4.6 m) of water at a flow of 4 gpm per lineal inch of suction cleaner head for the total suction head loss.
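As a worked illustration of this sizing guidance, the sketch below (a minimal example in Python; the head-loss inputs are hypothetical and would in practice come from hose and fitting friction-loss tables) computes the design flow for a given vacuum head width and checks the total suction head against the 15 foot (4.6 m) limit:

```python
# Sizing check for a POOL vacuum system, based on the conventional
# guidance above: 4 gpm per lineal inch of vacuum head, with total
# suction head loss not to exceed 15 feet of water.

GPM_PER_INCH = 4.0          # design flow per lineal inch of vacuum head
MAX_SUCTION_HEAD_FT = 15.0  # maximum total suction-side head loss, ft of water

def vacuum_design_flow(head_width_in: float) -> float:
    """Design flow (gpm) for a vacuum head of the given width in inches."""
    return GPM_PER_INCH * head_width_in

def suction_head_ok(hose_loss_ft: float, fitting_loss_ft: float) -> bool:
    """True if the total suction-side head loss stays within the 15 ft limit."""
    return (hose_loss_ft + fitting_loss_ft) <= MAX_SUCTION_HEAD_FT

# Example: a 14 inch vacuum head calls for 4 x 14 = 56 gpm.
print(vacuum_design_flow(14))      # 56.0
print(suction_head_ok(11.5, 2.0))  # True: 13.5 ft <= 15 ft
```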
GFCI Connection Not allowing extension cords prevents the high voltage power supply unit from having enough cord to be dragged into the POOL, a potential SAFETY risk.
The power cord length needs to be shorter than the distance between the receptacle and the edge of the POOL in order to prevent the power supply from accidentally entering the POOL water while connected.
Ventilation See International Mechanical Code Section 502.
Markings Pipes may be color coded according to use with either labels or a reference chart; directional arrows with permanent labeling on the pipes; or by other means deemed suitable by the AHJ.
# Equipment Rooms Containing Combustion Equipment
# Installed
No code language exists for this section since the MAHC defers to other CODES but the rationale for some of it is still included in the Annex.
No items should be installed, nor should STORAGE be planned for any items, within the minimum clearances of a COMBUSTION DEVICE, as defined by the manufacturer, or within the minimum clearances as defined by the National Fuel Gas Code or other applicable code, whichever are greater.
# Increased Ventilation
Rooms containing combustion equipment may be subject to requirements for increased ventilation and combustion-air intake, as specified by the National Fuel Gas Code or other pertinent CODES. The EQUIPMENT ROOM should be so constructed as to allow for the planned equipment, or should be modified as necessary.
Where an EQUIPMENT ROOM contains combustion equipment which uses equipment-room air for combustion, no other equipment should be installed so as to reduce the room air pressure beyond the acceptable air-intake pressure range for the combustion equipment.
# Noxious Gases
All practical flames produce carbon monoxide or nitrogen oxides, and combustion conditions that minimize one tend to increase the other, so there is very little chance of eliminating both at the same time. Neither is good for human health.
The key is to dilute combustion products and send them up the flue. This does not always work where equipment-room air pressure is lower than outdoor air pressure. Some COMBUSTION DEVICES work by natural draft (buoyancy of hot gases) and cannot tolerate any pressure difference. Other COMBUSTION DEVICES have higher pressure differences which they can overcome.
Where an EQUIPMENT ROOM contains combustion equipment which uses EQUIPMENT ROOM air for combustion, air-handling equipment should not use the room as a plenum. An exception may be made where the combustion equipment is listed and labeled for the expected use; such an installation is acceptable where approved by the AHJ.
- See International Mechanical Code Sec. 701.
# Plenum Room
A plenum room uses the EQUIPMENT ROOM as the intake duct for HVAC equipment. Thus, the room will have a low air pressure while the HVAC equipment is operating. For an INDOOR AQUATIC FACILITY, the incoming air would contain halogen compounds (e.g., chloramines) and thus should never be used as combustion air.
Where an EQUIPMENT ROOM contains combustion equipment which uses a draft hood, air-handling equipment should not use the room as a plenum. An exception may be made where the combustion equipment is listed and labeled for the expected use; such an installation is acceptable where approved by the AHJ.
- See International Mechanical Code Sec. 701.
# Lowered Room Pressure
In this situation, there is a tendency for the lowered room pressure to pull combustion products back down the flue into the room, and thus spread them everywhere.
Rooms containing combustion equipment are also subject to requirements for separation from chemical-STORAGE spaces.
Separation from Chemical Storage Spaces Building STANDARDS largely do not address AQUATIC VENUES; for example, the dangers that chemical fumes pose to combustion equipment. Combustion equipment cannot be allowed to intake halogen compounds, because acids will form in the flue and destroy it, allowing carbon monoxide and other combustion products to enter the occupied space.
# Equipment
Refrigeration Equipment Most refrigerants are heavier than air. When released from containment, most will evaporate rapidly, expanding greatly in the process. If a large enough amount is released, it could displace air to above head-height. For this reason mechanical CODES usually require refrigerant-release to the outdoors when the amount of refrigerant exceeds some fraction of the occupied volume.
# Chemical Storage Spaces
POOL-chemical associated injuries have been routinely documented. 280,281
Eyewash It is the intent to allow re-fillable eyewash bottles and not require plumbed emergency eyewashes and showers unless required by the AHJ.
# Outside
The intent is to allow some flexibility since installation in the CHEMICAL STORAGE SPACE may be prone to failure due to corrosion. External eye wash stations should be close and easily found such as in a location outside the door that all staff must walk past. The MAHC will continue to look for data supporting a maximum distance from the door.
# Construction
As applicable, the STANDARDS of NFPA 400, the IFPC, and the IBC shall prevail. This STANDARD is not intended to provide relief from these other regulations, but to provide BEST PRACTICE where these regulations are not adopted or enforced. The more stringent STANDARD shall prevail as applicable.
# Floor
The floor or DECK of the CHEMICAL STORAGE SPACE should be protected against substantial chemical damage by the application of a coating or sealant capable of resisting attack by the chemicals to be stored.
No Openings Other than a possible door, there should be no permanent or semi-permanent opening between a CHEMICAL STORAGE SPACE and any other INTERIOR SPACE of a building intended for occupation.
Exterior Chemical Storage Spaces As applicable, the STANDARDS of NFPA 400, the IFPC, and the IBC shall prevail. This STANDARD is not intended to provide relief from these other regulations, but to provide BEST PRACTICE where these regulations are not adopted or enforced. The more stringent STANDARD shall prevail as applicable.
Fencing Such part of an outdoor space as does not join a wall of a building should be completely enclosed by fencing that is at least 6 feet (1.8 m) high on all other sides.
Chemical Storage Space Doors As applicable, the STANDARDS of NFPA 400, the IFPC, and the IBC shall prevail. This STANDARD is not intended to provide relief from these other regulations, but to provide BEST PRACTICE where these regulations are not adopted or enforced. The more stringent STANDARD shall prevail as applicable.
Signage Given the high turnover rate or potential for employees to travel between workplaces at some AQUATIC FACILITIES, it would seem prudent to require a posting of the SDS location. Specifying the location of the SDS on the actual entry door to the chemical space may help reduce time for a response to an event. It further strengthens the requirements of OSHA:
# 1910.1200(g)(8)
The employer shall maintain in the workplace copies of the required safety data sheets for each hazardous chemical, and shall ensure that they are readily accessible during each work shift to employees when they are in their work area(s). (Electronic access and other alternatives to maintaining paper copies of the safety data sheets are permitted as long as no barriers to immediate employee access in each workplace are created by such options.)
# 1910.1200(g)(9)
Where employees must travel between workplaces during a workshift, i.e., their work is carried out at more than one geographical location, the material safety data sheets may be kept at the primary workplace facility. In this situation, the employer shall ensure that employees can immediately obtain the required information in an emergency.
# 1910.1200(g)(10)
Safety data sheets may be kept in any form, including operating procedures, and may be designed to cover groups of hazardous chemicals in a work area where it may be more appropriate to address the hazards of a process rather than individual hazardous chemicals. However, the employer shall ensure that in all cases the required information is provided for each hazardous chemical, and is readily accessible during each work shift to employees when they are in their work area(s).
- See NFPA 704 "Hazard Identification System".
Emergency Egress This usually takes the form of a kick-out panel in the door. When trapped, a person can sit down and kick out the panel, creating an opening usually about six inches (15.2 cm) narrower than the door and about 28 inches (71.1 cm) high. Since these are used in most ENCLOSURES where a person can be trapped (e.g., walk-in freezers), the volume is high enough for the additional expense to be minimal. Trapping could happen in several ways, but the most common is binding of the door to the jamb. Corrosion products can build up inside a metal door between the jamb and the wall, forcing the jamb away from the wall and toward the door. At some point the door will either fail to open or fail to close.

Combustion equipment cannot be allowed to intake halogen compounds, because acids will form in and destroy the flue. Air-handlers have strong negative air pressures inside them. This will draw in any CONTAMINANTS around the cabinet and distribute them throughout the ducted system.
# Interior Door
Interior Chemical Storage Space As applicable, the STANDARDS of NFPA 400, the IFPC, and the IBC shall prevail. This STANDARD is not intended to provide relief from these other regulations, but to provide BEST PRACTICE where these regulations are not adopted or enforced. The more stringent STANDARD shall prevail as applicable.
No Air Movement See ANSI/ACCA Manual SPS 2010 Section 4-4.
# Electrical Conduit System
An interior CHEMICAL STORAGE SPACE that shares any building surface (wall, floor, ceiling, door, etc.) with any other INTERIOR SPACE, or that shares an electrical-conduit system with any other space, should be equipped with a ventilation system that maintains the air pressure in the CHEMICAL STORAGE SPACE below that of any other INTERIOR SPACE by 0.05 to 0.15 inches (1.3 to 3.8 mm) of water pressure, or by such greater pressure difference as may be necessary to ensure that all air movement through building surfaces or conduits is toward the CHEMICAL STORAGE SPACE.
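As a unit check on this requirement, a minimal sketch converting the specified 0.05 to 0.15 inches of water column (I.W.C.) into millimeters of water and pascals; the conversion factors are standard physical constants rather than values taken from the MAHC:

```python
# Converts the negative-pressure requirement above from inches of water
# column (I.W.C.) to millimeters of water and pascals.

PA_PER_INCH_H2O = 249.089  # pascals per inch of water column (standard constant)
MM_PER_INCH = 25.4

for iwc in (0.05, 0.15):
    print(f"{iwc} I.W.C. = {iwc * MM_PER_INCH:.1f} mm of water "
          f"= {iwc * PA_PER_INCH_H2O:.0f} Pa")
# 0.05 I.W.C. = 1.3 mm of water = 12 Pa
# 0.15 I.W.C. = 3.8 mm of water = 37 Pa
```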
Note 1: This can usually be accomplished by maintaining the air pressure in the CHEMICAL STORAGE SPACE at least 0.05 I.W.C. to 0.15 I.W.C. below that of any adjoining space and below that of any space connected to the CHEMICAL STORAGE SPACE by an electrical conduit system. Larger pressure differences may be needed in special cases. This pressure difference should be maintained by a continuously operated exhaust system used for no other purpose than to remove air from that one CHEMICAL STORAGE SPACE.

Gaseous Chlorination Space Many current jurisdictions closely regulate the use of gas CHLORINE from a disaster preparation and response standpoint. This can make CHLORINE gas use prohibitive from a regulatory standpoint to the point that its use is difficult to justify.
Windows in Chemical Storage Spaces 4.9.2.12.1 Not Required These windows are sometimes built into the door, although not always. (There are fire-rated doors with windows.) Such windows may serve several purposes.
Requirements Such windows are usually installed for free lighting, although there can be drawbacks. Some chemicals may react on exposure to sunlight.
# Hygiene Facilities
# General
Language similar to this section is found in most state CODES.
Minimum to Provide During 2009-2010, 24 (80.0%) of 30 treated recreational water-associated outbreaks of diarrheal illness were caused by Cryptosporidium 284 . These cryptosporidiosis outbreaks tend to disproportionately affect children under five years of age and can cause community-wide outbreaks 285 . Infectious Cryptosporidium OOCYSTS' extreme CHLORINE tolerance allows them to survive for 3.5-10.6 days when free CHLORINE levels are maintained at 1-3 mg/L 286 . The OOCYSTS' small size (4.5 μm x 5.5 μm) also allows them to bypass typical sand and cartridge filters 287 . While secondary or supplemental DISINFECTION can inactivate the OOCYSTS, these ultraviolet and ozone treatment systems are circulation dependent 288,289,290,291,292 .
Thus, changing BATHER behavior in the following ways is needed to help prevent cryptosporidiosis outbreaks:
- Enforcement of policies that exclude swimmers with diarrhea,
- Swimmer education about hygienic swimming behaviors (e.g., taking a cleansing shower before entering the water, not swallowing the water), and
- Using secondary or supplemental DISINFECTION.
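The CHLORINE-tolerance range cited above (3.5-10.6 days at 1-3 mg/L free CHLORINE) is consistent with a 99.9% inactivation CT value for Cryptosporidium on the order of 15,300 mg·min/L; treating that CT value as an assumption rather than a figure from this document, a minimal sketch of the arithmetic:

```python
# Back-of-envelope survival time for Cryptosporidium OOCYSTS at a given
# free chlorine residual, assuming a 99.9% inactivation CT value of
# roughly 15,300 mg*min/L (an assumption consistent with the 3.5-10.6
# day range cited above).

CT_99_9 = 15300.0  # mg*min/L, assumed CT for 99.9% inactivation

def survival_days(free_chlorine_mg_per_l: float) -> float:
    """Days of contact time needed at the given free chlorine residual."""
    minutes = CT_99_9 / free_chlorine_mg_per_l
    return minutes / (60 * 24)

print(round(survival_days(1.0), 1))  # ~10.6 days at 1 mg/L
print(round(survival_days(3.0), 1))  # ~3.5 days at 3 mg/L
```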
# Chloramines
During January-March 2007, over 660 BATHERS and aquatic staff at a waterpark experienced respiratory symptoms and eye irritation caused by chloramines. 293 Chloramines form when free CHLORINE oxidizes nitrogenous compounds (e.g., sweat, urine, and personal care products) that wash off BATHERS' bodies. Chloramines can volatilize into the air, where they can accumulate in the air of indoor AQUATIC VENUES. One in five (17%) American adults reports having ever urinated in a POOL 294 , and elite athletes can sweat over 700 mL/h 295 . Rinsing off in the shower for 60 seconds and wearing bathing caps significantly decreases the amount of total organic carbon and total nitrogen 296 . Studies also suggest that ultraviolet treatment can reduce chloramine levels in the water 297,298 .
Accumulation of chloramines in the air at indoor treated recreational WATER VENUES can be reduced with the following practices:
- Policies that require showering before entering the water,
- Swimmer education about hygienic swimming behaviors (e.g., taking a rinse shower and using the toilet before entering the water, not urinating in the pool, and wearing bathing caps), and
- Using ultraviolet water treatment and improving ventilation.

HYGIENE FACILITIES should be located conveniently so that BATHERS will use restrooms rather than urinating or defecating in the VENUE water, which is common. Unlike at other recreational facilities, people feel that it is more acceptable to "pee in the POOL" than to leave the water and use sanitary facilities for these bodily functions. Locating restrooms nearby may not be possible in large waterparks; however, they can possibly be located within 300 feet (91 m) of the AQUATIC VENUE. The distance needed for parents to walk or carry children less than 5 years old should be shorter (200 ft or 61 m) to ensure use. These distances are found in multiple state or local CODES including Wisconsin, Oregon, Florida, and New York. When possible, it is preferable to have a bathroom on the same floor as the AQUATIC VENUE; however, it is not required at this time in the MAHC.
Drinking water should be available so that PATRONS, especially young children, are less likely to drink POOL water and to ensure that PATRONS are kept well-hydrated.
Children Less than Five Years of Age There are specific types of AQUATIC VENUES that pose an INCREASED RISK of fecal contamination of the water and transmission to BATHERS such as WADING POOLS, WATER ACTIVITY POOLS, INTERACTIVE WATER PLAY VENUES, or other AQUATIC VENUES designed primarily for children less than five years old. For these AQUATIC VENUES, diaper changing areas should be located directly adjacent to the kiddie areas to promote use.
It is especially important that HYGIENE FACILITIES be available to these INCREASED RISK groups. Children less than five years of age have the highest incidence of diarrheal illness and are more likely to be a source for spreading recreational water illnesses.
# Design and Construction
Language similar to this section is found in most state CODES.
Floors "Slip resistant" is usually considered to mean having a static coefficient of friction of 0.6 or better for both wet and dry conditions. Currently, this ASTM STANDARD C1028 is under revision.
Floor Base The purpose of coving is to prevent water splashing on the wall when mopping. Six inches (15.2 cm), a common height, was taken from building CODE.
- For further information, also see the FDA Model Food Code for Kitchens.
Floor Drains
# Opening Grill Covers
Holes in floor drain cover openings need to be sized to prevent small children's toes from becoming entrapped when walking over them.
Hose Bibb The purpose of these hose bibs is to permit adequate cleaning of shower and toilet facilities and to permit cleaning of any spills occurring in the HYGIENE FACILITY. See also MAHC 6.5 for further rationale.
# Plumbing Fixture Requirements
Language similar to this section is found in most state CODES.
# General
# Protected
It is fundamental that there be no cross connections between safe (potable) and unsafe (non-potable) water supplies. All hose bibbs should be equipped with a vacuum breaker to prevent back siphonage. This cross-connection protection can also be achieved at lavatories and laundry tub washing facilities through an air gap. As a general rule, the INLET pipe is terminated at a distance about four times the diameter of the pipe and not less than four inches (10.2 cm) above the maximum overflow level of the fixture rim.
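A minimal sketch of this air gap rule of thumb (four pipe diameters, with a floor of 4 inches); the function name is illustrative, not drawn from any plumbing code:

```python
# Rule-of-thumb air gap from the paragraph above: terminate the inlet
# pipe about four pipe diameters, and not less than 4 inches (10.2 cm),
# above the maximum overflow level of the fixture rim.

def air_gap_in(inlet_pipe_diameter_in: float) -> float:
    """Minimum air gap height in inches for a given inlet pipe diameter."""
    return max(4.0 * inlet_pipe_diameter_in, 4.0)

print(air_gap_in(0.75))  # 4.0 -> the 4 inch floor governs small pipes
print(air_gap_in(2.0))   # 8.0 -> four diameters governs larger pipes
```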
Toilet Counts Facilities in jurisdictions with requirements governing the number of sanitary facilities should follow those requirements. AQUATIC FACILITIES with an average PATRON load of over 100 persons should follow the International Plumbing Code (IPC). Facilities with average PATRON loads of less than 100 persons should follow either the IPC or Uniform Plumbing Code (UPC). The IPC may require significantly more toilet facilities for women than for men.
# Gender Potty Parity
Previous issues of the nation's model consensus code mandated an equal number of toilet fixtures for both men and women. Newer versions of the code will likely provide recommendations that increase the minimum required facilities for women.
Potty Parity discussion from Reasons to Adopt the 2000 IPC, developed by the International Conference of Building Officials (ICBO) as an informational aid to code officials and the public.
The IPC requires far fewer HYGIENE FIXTURES for various types of occupancies than the UPC. This is contrary to the "potty parity" movement, which demands more fixtures for women's toilet rooms to avoid long waiting lines. The UPC also provides more water closets and urinals in most men's toilet rooms than the IPC and assures adequate water closets by limiting the number that can be deleted by installing additional urinals.
The IPC does address the issue of "potty parity" and reflects studies by Dr. Sandra Rawls at the University of Virginia, the Stevens Institute of Technology, the National Restaurant Association, and the ASPE Research Foundation. The issue of "potty parity" is mostly an issue in assembly buildings with large occupant loads, especially where there is a period of high demand such as at intermission at a theater or at halftime at a football stadium. "Potty parity" is not an issue for occupancies where there is no instantaneous demand on fixture usage. IPC Table 403.1 reflects requirements for twice as many fixtures in the ladies' room compared to the men's room, when the type of occupancy demands such a count. In occupancies where the factors do not demand such an increase, the code does not require it. It should also be pointed out that part of this issue arises because some CODES require both water closets and urinals within the men's restroom; therefore, the numbers for men were somewhat higher. The IPC does not have a mandatory requirement for urinals. It will generally require the same number of fixtures in the men's and women's restrooms. However, when two or more water closets are required, the IPC will permit up to 67 percent of the fixtures to be replaced by urinals.
- For additional supporting information, see IPC: A Guide for Use and Adoption.
Some differences between the IPC and UPC CODES on this issue are as follows:
International Plumbing Code:
- Utilizes a fixed fixture-to-OCCUPANT LOAD ratio.
- Does not mandate urinals for men.
- Allows up to 67% of the requirement for water closets to be substituted for urinals.
- Establishes a separate fixture calculation factor for men and women. In some cases twice as many fixtures are required for women compared to men.
- No arbitrary parity requirement.
# Uniform Plumbing Code:
- Utilizes a variable fixture-to-OCCUPANT LOAD ratio.
- Requires urinals to be installed based on a fixture-to-OCCUPANT LOAD ratio.
- Does not allow for one-to-one substitutions. For each urinal added over what is required, you may have one-to-one substitutions up to 2/3 of what is required.
- Requires the total number of water closets for women to be equal to the total number of water closets and urinals for men.
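A minimal sketch of the IPC substitution rule described above (up to 67 percent of required water closets replaced by urinals, where two or more are required); the required fixture count itself comes from IPC Table 403.1 and is treated here as an input:

```python
# Sketch of the IPC urinal substitution rule: where two or more water
# closets are required, up to 67% of them may be replaced by urinals.
import math

def ipc_substitution(required_water_closets: int) -> tuple[int, int]:
    """Return (minimum water closets, maximum urinals) after substitution."""
    if required_water_closets < 2:
        return required_water_closets, 0  # substitution not permitted
    max_urinals = math.floor(0.67 * required_water_closets)
    return required_water_closets - max_urinals, max_urinals

print(ipc_substitution(3))   # (1, 2): at least one water closet must remain
print(ipc_substitution(10))  # (4, 6)
```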
# Cleansing Showers
The purpose of CLEANSING SHOWERS described in this section is to remove dead skin, sweat, nitrogenous waste, and perianal fecal material before BATHERS enter the POOL. This is best done through nude showering using warm water and soap.
An average of 0.14 grams of fecal material can be found on a person's peri-anal surface (the amount of feces for children ranges from 0.01-10 grams and for adults 0.0001 to 0.1 g 299 ). Therefore, fecal contamination of the perianal area is common. This contamination may include the CHLORINE-tolerant parasite Cryptosporidium 300 , which is not inactivated by routine disinfectant levels required in AQUATIC VENUES. Since the effectiveness of most halogen-based disinfectants is reduced by the presence of organic material, the purpose of CLEANSING SHOWERS is to reduce the inorganic, organic, and fecal load introduced into POOLS.
Count The THEORETICAL PEAK OCCUPANCY (MAHC Section 4.1.2.3.5) has been accounted for in the one shower per sex per 4000 square feet (372 m²). This assumes one bather per 20 square feet (1.9 m²), so at 4000 square feet, there will be one shower per 200 bathers. Further research on this topic is recommended and can be addressed in future versions of the MAHC.
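A worked version of that arithmetic, assuming the stated ratios (one CLEANSING SHOWER per sex per 4,000 square feet of water surface, and one bather per 20 square feet); the rounding-up for partial increments is an illustrative choice, not MAHC language:

```python
# Shower-count rationale from the paragraph above: one shower per sex
# per 4,000 sq ft of water surface, at one bather per 20 sq ft, works
# out to one shower per 200 bathers.
import math

SQ_FT_PER_SHOWER = 4000.0
SQ_FT_PER_BATHER = 20.0

def cleansing_showers_per_sex(water_surface_sq_ft: float) -> int:
    """Showers required per sex, rounding up for a partial 4,000 sq ft."""
    return math.ceil(water_surface_sq_ft / SQ_FT_PER_SHOWER)

print(SQ_FT_PER_SHOWER / SQ_FT_PER_BATHER)  # 200.0 bathers per shower
print(cleansing_showers_per_sex(9500))      # 3 showers per sex
```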
# Location
The placement of the showers is intended to encourage BATHERS to see and use the showers before they enter the water.
Enclosed Entryways to CLEANSING SHOWER compartments shall be enclosed to provide privacy. Individual shower stall curtains and doors are not required. Providing privacy for CLEANSING SHOWERS promotes BATHER cleansing prior to entering AQUATIC VENUES.
Exemption "Residential settings" includes condos, apartments, and homeowners associations but does not apply to individual residential pool settings. The intent is for BATHERS to use their rooms/homes for a CLEANSING SHOWER; however, one RINSE SHOWER on the DECK is required at these AQUATIC FACILITIES encouraging BATHERS to shower prior to entering water if a BATHER had not already done so.
Rinse Showers The purpose of the RINSE SHOWERS is to remove inorganic material such as sand or dirt that can bind with CHLORINE and reduce the amount for other pathogen inactivation. Rinsing with water also removes BATHER'S CONTAMINANTS such as sweat, hygiene products, deodorant, hair spray, etc. Rinsing off in the shower for 60 seconds and wearing bathing caps significantly decreases the amount of total organic carbon and total nitrogen 301 .
A rinsing shower can be taken on the DECK in open showers by the AQUATIC VENUE using ambient temperature water so dirt and other CONTAMINANTS are rinsed off before entering the water.
Large Aquatic Facilities The intent is to encourage BATHERS to see and use the RINSE SHOWERS before they enter the water.
Beach Entry The intent of having at least four showerheads every 50 feet (15.2 m) at a beach entry allows multiple people to rinse off at the same time. Showerheads could be provided as wall units, pedestals (one pedestal could have four showerheads or two pedestals could have two showerheads each), allowing AQUATIC FACILITY owners to have versatility in design.
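Under one reading of that guidance (at least four showerheads for each 50 foot increment of beach entry width, rounding up for a partial increment), a minimal sketch:

```python
# Showerhead count for a beach entry: at least four showerheads per
# 50 feet of entry width, rounding up for partial runs.
import math

def beach_entry_showerheads(entry_width_ft: float) -> int:
    """Minimum showerheads for a beach entry of the given width."""
    return 4 * math.ceil(entry_width_ft / 50.0)

print(beach_entry_showerheads(50))   # 4
print(beach_entry_showerheads(120))  # 12
```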
Lazy River BATHERS enter LAZY RIVERS only in designated areas; therefore locating RINSE SHOWERS near these entrances facilitates rinsing before entering the LAZY RIVER.
Waterslide BATHERS congregate into queue lines for access to waterslides. Providing a RINSE SHOWER on the DECK of a queue line encourages use prior to entering the water.
All Showers The intent is to encourage use of showering prior to entering an AQUATIC VENUE. Large AQUATIC FACILITIES, based on their THEORETICAL PEAK OCCUPANCY, would require a large number of CLEANSING SHOWERS which would put an economic burden on these facility types. The MAHC acknowledges CLEANSING SHOWERS are more expensive to install than RINSE SHOWERS, therefore as long as the required number of showers is met, AQUATIC FACILITIES can decide which type of shower is conducive for their PATRONS.
In addition, the 2012 International Swimming Pool and Spa Code (ISPSC) Section 609.3.1 allows flexibility on the ratio of CLEANSING to RINSE SHOWERS above 7500 square feet of water surface area.
Diaper-Changing Stations The material in this section addresses diapering of infants and young children. These are the age groups most commonly involved in contamination of recreational water that can lead to outbreaks of illness associated with recreational water. Although some older persons must wear diapers, the incontinence is not likely to be associated with a diarrheal illness, so the risk of infection from adults is much less than that from children. Therefore, we do not believe that special regulations are needed for elderly BATHERS. Current DIAPER-CHANGING UNIT designs do not supply all the features needed for sanitary and efficient diaper changing and clean-up to minimize spreading pathogens further in the AQUATIC FACILITY. If a permanently plumbed hand wash sink is not economically feasible to install, a portable HAND WASH STATION can be used as a substitute for one year. Portable HAND WASH STATIONS are used at temporary events and include a water and waste tank that requires frequent refilling and draining for continual use.
# Conform
There appear to be two different configurations of DIAPER-CHANGING UNITS currently available and suitable for this setting. The first type, a fold-down commercial unit commonly mounted on the wall, is addressed by ASTM F2285-04: Consumer Performance Standards for Commercial Diaper-Changing Stations. The second type, a free-standing unit, is addressed by Caring for Our Children: National Health and Safety Performance Standards: Guidelines for Out-of-Home Child Care Programs.
A major difference between these two designs is that ASTM F2285-04 calls for restraining straps while CFOC prohibits the use of straps and relies on a 3 inch (7.6 cm) lip to keep children from falling off. Both designs have inherent problems. The problems with straps are associated with cleaning and a possible hanging hazard. The problem with the 3 inch (7.6 cm) lip is that such lips are not available on fold-up units. The MAHC language does not discriminate between these two designs, but the unit used should conform to one of these two STANDARDS.
Unisex Increasingly, many AQUATIC VENUES are providing family dressing areas and caregiver rooms to attend to family needs. This provision permits parents to attend to the needs of small children of the opposite sex.

Trash Can Trash receptacles are needed to help maintain cleanliness around the DIAPER-CHANGING STATION for any disposable changing unit covers, diapers, sanitizing wipes, or disposable paper towels.
Non-Plumbing Fixture Requirements 4.10.4.6.4 Lockers While some lockers are designed to sit directly on the floor, other lockers may need to be elevated. This prevents water accumulation beneath the lockers. Such accumulation can lead to the growth of mold, mildew, and slime build-up. The MAHC has gone with the current industry standard of 3.5 inches (8.9 cm) high but recommends moving to a new standard of 6 inches (15.2 cm) to allow better access, cleaning, and drying under the lockers.
Dryers / Paper Towels Hand drying devices or paper towel dispensers should be located adjacent to the hand washing sinks to facilitate use. To prevent overcrowding, they may be positioned to move users away from the sink and toward the exit. In childcare settings, the dispensers and devices are usually within arm's reach of the sink.
# Provision of Suits, Towels, and Shared Equipment
Although providing reusable bathing suits is no longer common, many AQUATIC FACILITIES provide PATRONS with towels and other shared equipment. The purpose of this STANDARD is to ensure that these AQUATIC FACILITIES provide adequate equipment and space in their design and construction for laundering, sanitizing, and drying these items.
# Foot Baths
FOOT BATHS with standing water allow the buildup of organic material and bacterial and fungal growth and can lead to the spread of pathogens.
# Sharps
This section was included to address AQUATIC VENUES that provide PATRONS with sharps, especially razors, so that safe disposal is assured. Approved sharps containers are rigid, leak-proof, puncture resistant boxes of various sizes made of hard red plastic. They have a lid that can be securely sealed to keep contents from falling out, and they are clearly marked with the bio-hazard symbol. Occupational Safety and Health Administration (OSHA) regulations describe the design and use of sharps containers for a variety of settings.
Businesses are required by OSHA to deposit sharps into a sharps container that complies with OSHA regulations in order to protect employees. Once that container is full, it must be disposed of according to state and federal regulations.

There are several lake and spring sources around the country that have been used for decades to supply water to AQUATIC FACILITIES. As long as the source water quality does not significantly change and can be treated by the AQUATIC FACILITY equipment to protect the health and SAFETY of PATRONS, it can be allowed.
Condensate / Reclaimed Water The steps necessary to make reclaimed water meet source water STANDARDS are beyond the scope of the MAHC. These steps are set by the state and federal agencies that set requirements for drinking water.
This would be up to the AHJ and local conditions. The MAHC felt that, especially considering the recent emphasis on sustainability, reclaiming condensate would be acceptable as long as this water met the same STANDARDS as incoming domestic water (even if this required UV or other disinfectants, filters, etc.). A provision for deferring to the AHJ ruling based on locale was important to us as well. For instance, this may be more of a politically important issue in Arizona or Nevada than in other areas of the country. Non-potable use for this water is in keeping with water as a limited resource.
Sufficient Capacity This requirement is for when AQUATIC FACILITIES choose to be open while backwashing (e.g., they can backflush one filter while still maintaining filtration through another system; operating without the recirculation system running is prohibited). A facility may choose to regulate when their backwash cycles occur (such as at closing). Many fully automated backwash systems for HRS filters are programmed to backwash at night when the facility is closed and there are no other demands on the source water coming into the facility. Alternatively, QUALIFIED OPERATORS of an all-deep 50 meter POOL may choose to backwash one filter at a time and allow make-up water to reestablish rim flow before doing the next one, as opposed to doing all six or eight tanks sequentially.
# Fill Spouts
For example, a fill spout located under a diving board or next to a ladder or handrail is less likely to be a trip hazard or be a hazard to swimmers coming up from below.
# Cross-Connection Control
An air gap can be provided through a fill spout at the side of an AQUATIC VENUE, through water supply piping over the edge of an open balance tank or surge tank, or over a fill stand pipe that is connected to the side of an AQUATIC VENUE.
Splash guards are simply a means to keep fill water from splashing onto adjacent floors and walls. Water cannot be siphoned into the potable water supply through a properly designed splash guard. A proper design often consists of a concentric pipe that is a larger diameter than the fill pipe and that is open to the atmosphere at the top and bottom.
Because of the potential for back pressure or back siphonage, any potable water piping connected directly to any AQUATIC VENUE piping must have an RPZ. Some permitting agencies or CODES may allow pressure vacuum breakers or atmospheric vacuum breakers on water supplies not connected to the POOL piping but supplying potable water to the AQUATIC VENUE through a submerged INLET in the AQUATIC VENUE.
The pressure vacuum breaker would be located upstream of the shut-off valve.
The atmospheric vacuum breaker would be located downstream of the shut-off valve.
The AHJ may allow an elimination of an air gap to control splashing or flow of AQUATIC VENUE wastewater outside the receiving sump onto the EQUIPMENT ROOM floor. This can be accomplished by extending the AQUATIC VENUE wastewater pipe below the rim of the sump. This can be approved if the wastewater disposal pipe from the AQUATIC VENUE does not have a sealed connection to the sewer piping. This constitutes an air break.
An air break can be justified for the worst case scenario of a sewer backup at the AQUATIC VENUE wastewater sump. During a sewer backup, sewage cannot back pressure into AQUATIC VENUE piping through an air break. Further, if the sewage is above the AQUATIC VENUE waste pipe outlet when the AQUATIC VENUE is operating, the normal pressure of the POOL piping leaks AQUATIC VENUE water towards the sewer, preventing the AQUATIC VENUE piping from siphoning wastewater. If the AQUATIC VENUE is not operating, then there is no pressure or suction in the piping that could create a condition for siphoning sewage.
If the permitting agency does not allow an air break, they may allow an air gap with a splash guard.
# Deck Drains and Rinse Showers
# Sanitary Wastes
# Pool Wastewater
AQUATIC VENUE waste streams (including filter backwash water and AQUATIC VENUE drainage water) should be discharged through an air gap to sanitary sewers, storm sewers, drain fields, or by other means, in accordance with local municipal and building official recommendations including obtaining all necessary permits. The discharge should occur in a manner that does not result in a nuisance condition.
Each waste line should have a unique air gap. Waste lines from different sources (e.g. AQUATIC VENUE, spa, overflow, sump pump, etc.) should not be tied together, but multiple waste lines may discharge into a common sump or receptacle after an air gap.
4.11.6.2 Ground Surface Filters work to reduce the level of pathogens in the AQUATIC VENUE water by retaining the pathogens in the filter. As a result, AQUATIC VENUE backwash water has been demonstrated to contain detectable pathogen levels (e.g., Cryptosporidium and Giardia). 302,303 Therefore, filter backwash water should be considered waste water requiring appropriate disposal.
Separation Tank for Precoat Media Filters If local or state CODES prohibit disposal of backwash filter media (perlite, cellulose, or diatomaceous earth) directly to the sanitary sewer, a separation tank may be recommended. The separation tank is to be designed for the conditions of the specific facility filtration system. The separation tank should be designed to accommodate the volume of water and spent media recommended for at least a single backwash (media change), without overflowing. The separation tank may include separation screens or a settling pit to allow for the spent media to be removed and properly disposed of according to AHJ requirements.

Maximum Water Depth SPAS are designed for sitting, and the expectation is that the water will not be over the average 11-year-old child's head. That depth is about 48 inches (1.2 m). The MAHC felt that 24 inches (61.0 cm) is reasonable since it is half of the maximum depth previously stated (48 in or 1.2 m) and would allow for the vast majority of the population to sit comfortably with their head above water. The MAHC also consulted the ISPSC; its maximum depth of 28 inches (71.1 cm) is pulled from APSP, which has been utilized by the industry for some time. The Committee recommends additional studies to determine if decreasing the SPA seating depth is necessary.
Handholds Even though a person is seated in a SPA, a sufficient number of positive handholds are needed to assist with standing up. Handholds at the edge of the SPA above the water line are visible and easily reachable.
Perimeter Deck This is to provide adequate area for life saving and rescue purposes. The AHJ may allow a smaller rescue area based on the assessment of a local emergency rescue agency.
SPAS elevated for transfer wall or other purposes need to be provided with an effective BARRIER so that the elevated wall is not used as a platform to access an adjacent AQUATIC VENUE, particularly by young BATHERS who cannot swim, given the inherent dangers posed by larger and deeper POOLS in close proximity.
Shallow Water The rationale for the 24 inch (61.0 cm) depth rule is that if adjacent water is not substantively deeper than the WADING POOL, there is no need to segregate them.
# Facility Operation and Maintenance
The MAHC has worked extensively with ICC and IAPMO to eliminate conflicts between the three codes. These discussions have resulted in changes in the MAHC and plans to change items in the other codes as they are brought up for revision. The MAHC is committed to resolving these conflicts now and in the future as these codes evolve.
# Exemptions
# Variances
The permit issuing official may waive, in writing, any of the requirements of this code, and include the variance as a condition of the permit to operate, when it reasonably appears that the public health and SAFETY will not be endangered by granting of such a variance and adequate alternative provisions have been made to protect the health and SAFETY of the BATHERS and the public. The burden of providing the data and proof that any alternative provision is at least as protective as the code requirement is entirely on the permit holder.

Additionally, closed POOLS can be a SAFETY concern, especially for small children. When the POOL is not drained or covered tightly to prevent entry, children may knowingly or accidentally enter the POOL and drown. Because of the slime that often builds on the walls of these abandoned POOLS, it may be impossible for those that enter the POOL to climb out. Abandoned POOLS may also have limited visibility, so people falling in cannot be seen by other persons in the area. While fence BARRIERS or SAFETY covers can create a "safe condition" for the POOL, these methods will not prevent the potential mosquito problems mentioned above.
# Equipment Standards [N/A]
# Long Closures
The closing of an AQUATIC FACILITY for less than seven days is considered a temporary closure. A closure of more than seven days is considered a long term closure. Both types of closure require certain maintenance activities when closing or reopening to ensure a safe environment for PATRONS.
# Drain / Cover
POOLS that use a cover should refer to ASTM F1346-91. For POOLS where covers are not used or are not practical, access should be restricted and routine check of fence integrity is advised. When correctly installed and used in accordance with the manufacturer's instructions, this specification is intended to reduce the risk of drowning by inhibiting the access of children under five years of age to the water.
For long term and seasonal closures, where no residual disinfectant is maintained in the pipes, further research is needed to understand the growth of biofilms during closure. More research is needed to develop protocols for removing biofilms in AQUATIC VENUES. If the AQUATIC VENUE system becomes non-operational, such as during a power outage, the AQUATIC VENUE should be cleared of BATHERS. Prior to reopening, the QUALIFIED OPERATOR should confirm that all systems are operational as required by the MAHC. For example, recoating DE filters will be necessary and it should be confirmed that feed pumps did not continue feeding chemicals during a RECIRCULATION SYSTEM shutdown that may lead to outgassing into the POOL when the system is re-started.
# Reopening
The QUALIFIED OPERATOR should refer to previous inspection reports for more details on repairs or replacements needed, and any replacements or new items should be discussed with the regulatory authority to verify they comply with current code requirements. It is recommended that a model reopening checklist be developed in the future.
# Preventive Maintenance Plan
A preventive maintenance plan is a necessary and important part of any AQUATIC FACILITY operation, based on data showing 22.8% of pool chemical-related events were due to equipment failure, indicating they could potentially have been prevented 306 . The best maintenance plan is one that follows the manufacturer's and POOL designer/engineer's recommendations for all equipment. A POOL maintenance plan is similar in many ways to the purchase of a new vehicle. With the purchase of a new vehicle, a manufacturer's maintenance schedule is included. The schedule lists the maintenance items that should be followed, such as rotating tires and performing major tune-ups.
Likewise, the QUALIFIED OPERATOR should perform an inventory of all equipment used in the AQUATIC FACILITY operation. For each piece of equipment, the QUALIFIED OPERATOR should develop a list and schedule of maintenance items. By following this maintenance schedule, the operator can help prevent costly repairs and breakdowns in the future.
Replacing items before they break down may prevent system failures that could lead to outbreaks or injuries. For example, a common breakdown leading to loss of DISINFECTION is a break in the tubing leading from feed pumps to the RECIRCULATION SYSTEM. Although the tubing is inexpensive, lack of replacement has been implicated in outbreaks.
AQUATIC FACILITIES need increased sophistication in planned maintenance and MONITORING.
# Facility Documentation
This equipment inventory should contain information such as:
# Depth Markings
Existing AQUATIC FACILITIES should still adhere to the requirements of MAHC Section 4.5.19 for depth and no-diving markers. Existing AQUATIC FACILITIES may have to resort to using non-permanent (i.e., painted) alternatives if markers are not already installed, which will need to be maintained to ensure they are readable and legible.
# Pool Shell Maintenance
The MAHC was altered in the final round of public comments to eliminate the wording below. It was decided that this wording was not entirely health and safety related so should be replaced by wording that required repairs if the cracks could cause trips, falls, or lacerations. It is still good operational practice to identify and monitor cracks that could lead to water loss and structural failure and consult a structural engineer for assessment as needed.
CRACKS exhibiting any of the following characteristics shall be evaluated by a structural engineer:

Light Levels System components will deteriorate and eventually need to be replaced, but lamp performance will continue to change prior to complete lamp failure. Indoor overhead lights, outdoor pole-mounted lights, and underwater lighting are the key POOL light sources. Building lighting must also be maintained to provide safe AQUATIC FACILITY use, building and area security, and meet the aesthetic goals. Planned lighting maintenance includes group relamping, cleaning lamps, cleaning luminaires, and replacing defective components on a regular basis.
Lamp lumen depreciation is a characteristic of all lamps. Each lamp type has a different lamp life, thus impacting the maintenance schedule. As lamps fail or burn out, local light levels decrease and lighting uniformity is also affected.
Luminaire surface deterioration and dirt accumulation may also occur and can reduce the light reaching the needed areas. During relamping and cleaning, inspect each luminaire for deterioration or damage. Repair or replace components and inspect and clean light fixtures and luminaires as needed to maintain required light levels. Consider regular group relamping combined with cleaning as part of an efficient and effective maintenance plan.
Basic steps for cleaning and relamping operations include the following:
1. Turn off electrical circuits and carefully remove lenses, diffusers, shields, and/or lamps.
2. Dispose of replaced lamps and ballasts per state and federal guidelines.
3. Contact the U.S. EPA for more information.
4. Follow the light fixture and lamp manufacturer's recommendations for cleaning, relamping, and maintaining each light in good condition.
5. Routinely monitor underwater lights for proper operation.
Windows and natural lighting need to be evaluated seasonally and throughout the operating day.
Light levels may also be altered by dirty windows. Ensure that windows are cleaned regularly to eliminate any buildup of material that would affect light transmission.
# Main Drain Visible
The requirement for being able to see the main drain from POOLSIDE is a SAFETY issue. If QUALIFIED LIFEGUARDS or QUALIFIED OPERATORS cannot see the main drain, then they are unable to see a person on the bottom of the AQUATIC VENUE and unable to initiate rescue procedures. This is cause for immediate closure and rectification before re-opening. Please refer to MAHC Section 6.6.3 for more information.
Glare In addition to discomfort, annoyance, interference, and eye fatigue, glare reduces the visibility of an object. Without clear vision, there are increased chances for accidents that can cause injuries or potential drowning. Glare can be from reflections as well as direct lighting problems.
# Assessments
The AQUATIC FACILITY owner or LIFEGUARD SUPERVISOR may consider adjusting lifeguard positions to improve visibility.
# Indoor Aquatic Facility Ventilation Maintenance and Repair
Drains on AIR-HANDLING SYSTEM equipment should be tested before the system is started.
It is important that the drain system be checked regularly to ensure that the condensate drain pan, drain connection, and piping are free from buildup or blockages. In cases where air handling equipment is intended for use with P-trap type drains, the P-trap must be kept filled manually if normal operation does not keep it filled. If not kept filled, sewer gases and odors can enter the system.
# Combined Chlorine Reduction

Water chemistry affects air quality:
- The amount of disinfectant in the water should always be at a sufficient level to disinfect properly, but high residual levels in an indoor environment contribute to the development of DBPs. A higher ratio of CHLORINE to nitrogen content in the water results in the formation of TRICHLORAMINE. Lower levels of CHLORINE/bromine in the POOL result in lower levels of DBPs in the presence of organic and inorganic CONTAMINANTS. High residual levels have been a requirement for outdoor AQUATIC VENUES that have sunlight exposure, but that requirement may not be necessary for INDOOR AQUATIC FACILITIES.
- FREE CHLORINE levels could likely be maintained at a lower level due to the absence of dechlorination due to sunlight. Lower pH levels increase the effectiveness of CHLORINE and by maintaining pH less than 7.5, less CHLORINE is required to achieve effective DISINFECTION 307 .
The water quality will affect the air quality in INDOOR AQUATIC FACILITIES. Also BATHER practices will determine not only the water quality but also the air quality. Therefore, if air handling equipment is installed, INDOOR AQUATIC FACILITY operators should develop and implement a program to operate, monitor, and maintain the equipment as designed to reduce combined CHLORINE compounds introduced into the building from the AQUATIC FEATURES in accordance with the INDOOR AQUATIC FACILITY AIR HANDLING SYSTEM design engineer and/or the AIR HANDLING SYSTEM equipment manufacturer's recommendations.
# Electrical
# Electrical Repairs
National Electrical Code Article 225 provides installation requirements for outside branch circuits and feeders that run on (or between) structures or poles.
National Electrical Code Article 680 applies to the construction and installation of electrical wiring for and equipment in or adjacent to all swimming, wading, therapeutic, and decorative POOLS; fountains; hot tubs; SPAS; and hydro-massage bathtubs, whether permanently installed or storable, and to metallic auxiliary equipment, such as pumps, filters, and similar equipment.
Electrical Receptacles NEC Article 680.22 "General Circuitry Pool Pump Motors" states that "all 15- and 20-amp, single-phase, 125-volt or 240-volt outlets supplying pool pump motors shall have GFCI protection."

29 C.F.R. 1910.304 "Wiring Design and Protection" applies to temporary wiring installations that are used during construction-like activities, including certain maintenance, remodeling, or repair activities, involving buildings, structures, or equipment.
Ground-Fault Circuit Interrupter GFCI testing should follow the manufacturer's recommendations. However, the minimum test procedure should include:
1. Testing personnel must wear shoes during the entire test. Where exposed terminals may be present, or where conditions warrant, other personal protective equipment may be required.
2. A suitable indicating test load should be connected to the circuit under test, and remain so for the duration of the test.
3. Test personnel should press the "TEST" button on the GFCI device.
4. The test load should then be observed to have ceased operation due to loss of electrical power.
5. Test personnel should next press the "RESET" button on the GFCI device.
6. The test load should then be observed to have resumed operation.
7. Where any of the conditions specified in steps 2 through 6 fail, the GFCI circuit must then be inspected and tested. Replace the GFCI device as necessary.

No new electrical devices or equipment should be installed in an interior CHEMICAL STORAGE SPACE used for STORAGE of pool chemicals, acids, fertilizers, salt, oxidizing cleaning materials, or flammable liquids or gases without re-inspection by the AHJ.
# Isolation Of Chemical Storage Areas
An interior STORAGE space used for storing POOL chemicals, fertilizers, salt, oxidizing cleaning materials, other corrosive or oxidizing chemicals, or pesticides must be kept in ISOLATION from other INTERIOR SPACES, except for entry, egress, material transport, or alarm testing. The period of each instance of entry, access, or alarm testing should not exceed 15 minutes. The sum of the periods of all instances of breach of ISOLATION should not exceed one hour in each 24-hour period. Where the ISOLATION of an interior STORAGE space containing such chemicals from other INTERIOR SPACES containing COMBUSTION DEVICES depends on an interior door, such door should be gasketed to prevent the passage of air, fumes, or vapors, and should be equipped with an automatic door closer and an alarm that will give notice if the door remains open for more than five minutes. Function of this alarm should be confirmed monthly as part of scheduled maintenance. Failures of door gasketing, or of the door closer, or of the alarm should be repaired immediately.
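A minimal compliance sketch of these timing limits (no single breach of ISOLATION over 15 minutes, and no more than one hour total in any 24-hour period); the breach durations are hypothetical logged values:

```python
# Checks a day's recorded isolation breaches against both limits above:
# each breach <= 15 minutes, and all breaches together <= 60 minutes.

MAX_BREACH_MIN = 15.0
MAX_TOTAL_MIN_PER_DAY = 60.0

def isolation_ok(breach_minutes: list[float]) -> bool:
    """True if every breach and the daily total meet the limits."""
    return (all(b <= MAX_BREACH_MIN for b in breach_minutes)
            and sum(breach_minutes) <= MAX_TOTAL_MIN_PER_DAY)

print(isolation_ok([5, 10, 12]))           # True
print(isolation_ok([20]))                  # False: single breach too long
print(isolation_ok([14, 14, 14, 14, 14]))  # False: 70 minutes total
```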
# Unsealed Openings
Where any unsealed openings exist between an interior STORAGE space used for POOL chemicals, acids, fertilizers, salt, or corrosive or oxidizing chemicals and any other INTERIOR SPACE containing electrical equipment, the air pressure in the CHEMICAL STORAGE SPACE should be maintained at a level low enough to ensure that all air flow is toward the CHEMICAL STORAGE SPACE. This pressure difference should be maintained by a continuously operating exhaust system used for no other purpose than to remove air from the CHEMICAL STORAGE SPACE. Function of this exhaust system should be monitored continuously by a pressure switch and alarm. Function of the pressure switch and alarm should be confirmed monthly as part of scheduled maintenance. In the event of failure of the exhaust system or of the alarm, repairs should be made immediately.
In any space containing electrical equipment, ambient conditions such as temperature, humidity, and maximum concentrations of chemical fumes or vapors, or of flammable fumes or vapors should be continuously maintained to meet the operational requirements of installed electrically powered equipment. Labels or other marks indicating the circuits served by fuses, circuit breakers, and disconnect switches should be maintained in a condition readable by a person unfamiliar with the function of the circuits.
For spaces containing fuses, circuit breakers, electric motors, or motor-operated loads, the recommended minimum illumination capability should be maintained as part of the scheduled monthly maintenance. STORAGE should not interfere with the largest of the minimum working clearances specified by the NEC, the equipment manufacturer, CFR 1910, or by local CODES or regulations.
# Relamping
Re-lamping operations within 20 feet (6.1 m) horizontally of the nearest inside edge of a POOL, SPA, FLUME, WATERSLIDE, or other open AQUATIC FEATURE should be carried out in such a way as to minimize the likelihood of lamp breakage. New lamps should be kept in their packing until just before installation. Old lamps should be packed immediately upon removal into a suitable container to prevent breakage. New lamps should not be stored in an interior STORAGE space used for POOL chemicals, fertilizers, salt, or other corrosive or oxidizing chemicals. Neither new lamps nor old lamps should be stored in the INDOOR AQUATIC FACILITY, shower room, locker room, or hallways.
Where visible or accessible, any required bonding jumpers should be visually inspected for damage, breaks, looseness, or corrosion quarterly as part of scheduled maintenance. Where any doubt exists concerning the condition of bonding jumpers, they should be inspected and, if necessary, the effectiveness of such jumpers should be tested.
Grounding The purpose and objective of NEC Article 250 - Grounding is to ensure that the electrical system is safe against electric shock and fires by limiting the voltage imposed by lightning, line surges, or unintentional contact with higher-voltage lines and a GROUND FAULT (line-to-case fault). The rules contained in NEC Article 250 identify the installation methods that must be followed to ensure a safe electrical installation.
National Electrical Code Article 680 applies to the construction and installation of electrical wiring for and equipment in or adjacent to all swimming, wading, therapeutic, and decorative POOLS, fountains, hot tubs, SPAS, and hydromassage bathtubs (whether permanently installed or storable) and to metallic auxiliary equipment, such as pumps, filters, and similar equipment.
Extension Cords
# Exception
The intent is to prevent the extension cord from reaching the water.
Compliance See CFR 1910.304(b)(2).
# Communication Devices and Dispatch Systems
National Electrical Code Article 800 covers multi-purpose and communication cable. Multi-purpose cable is the highest listing for a cable and can be used for communication, Class 2, Class 3, and power-limited fire protective cable.
Communication cable can be used for Class 2 and Class 3 cable and also as a power-limited fire protective cable with restrictions.
Facility Heating
# Maintenance and Repair
There are a number of CODES which can be consulted. These include but are not limited to the National Fuel Gas Code, National Electrical Code, and certain building CODES.
# Defects
If inspection shows excessive fouling of air filters before the cleaning or replacement period has ended, that period should be shortened to prevent overloading of filters. Filters that become clogged with dirt, mold, or other CONTAMINANTS can become a source of increased operating costs and poor air circulation. In addition to the reduction of system effectiveness, which can result in costly repairs, airborne CONTAMINANTS can be spread as a result of improper air handling.
Temperature The air temperature of an indoor AQUATIC VENUE should be controlled to the original specifications where possible. Where this is not possible, the air temperature of an INDOOR AQUATIC FACILITY should be controlled so as to prevent unexpectedly high levels of evaporation and to prevent condensation of water onto surfaces not designed for condensation. Particular care should be taken to prevent the condensation of water inside INDOOR AQUATIC VENUE building surfaces such as walls and ceilings. Please note that this code only addresses the part of the facility where the water "vessel" is located and not other areas of the building (which the building code would cover). Particular attention needs to be given to the prevention of algae and mold growth on surfaces.
First Aid Room
Emergency Exit
Plumbing
# Water Supply
The potable water pressure should be maintained to enable the AQUATIC VENUE and all other water using fixtures to operate to design specifications.
Waste Water In some AQUATIC FACILITIES, backwash water may be recycled for other purposes instead of wasted in order to conserve water. This water must be treated in accordance with local CODE requirements prior to being re-used. Backwash water is likely to be routinely contaminated with pathogens, so its use should be carefully considered and health issues planned for prior to re-use. It should not be re-used in AQUATIC VENUES, but may be used in landscaping or other non-potable water uses with AHJ approval.
Water Replenishment See MAHC Annex Section 4.7.4 for more information.
A minimum of 4 gallons (15 L) of water per BATHER per day must be discharged from the POOL, but a volume of 8 gallons (30 L) per BATHER per day is recommended. Backwash water will count toward the total recommended volume of water to be discharged, but evaporated water will not count since inorganic CONTAMINANTS (e.g., salts and metals) and many organic CONTAMINANTS (e.g., sweat and urine) are simply concentrated as water evaporates. Backwash water or other discharged water may not be returned to the POOL without treatment to reduce the total organic carbon concentration, DISINFECTION BY-PRODUCT levels, turbidity, and microbial concentrations to below the limits set for tap water by the U.S. EPA.
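As a quick illustration of this arithmetic, the minimal sketch below (Python) computes how much additional water would need to be discharged on a given day after crediting backwash; the bather count and backwash volume are hypothetical examples.

```python
# Illustrative arithmetic for the replenishment recommendation above.
# The bather count and backwash volume are hypothetical examples.

RECOMMENDED_GAL_PER_BATHER = 8.0  # recommended discharge, gallons/bather/day
MINIMUM_GAL_PER_BATHER = 4.0      # required minimum, gallons/bather/day

def additional_discharge(bathers_per_day: float, backwash_gal: float) -> float:
    """Gallons still to discharge after crediting backwash.

    Backwash counts toward the total; evaporation does not, because
    dissolved contaminants are left behind when water evaporates.
    """
    target = RECOMMENDED_GAL_PER_BATHER * bathers_per_day
    return max(0.0, target - backwash_gal)

# Example: 250 bathers in a day, 1,200 gallons already discharged as backwash.
print(f"{additional_discharge(250, 1200):.0f} gallons still to discharge")  # 800
```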
Solid Waste
Decks
Food Preparation and Consumption
# Eating and Drinking
Eating and drinking in AQUATIC VENUE areas may expose BATHERS to CONTAMINANTS. Food particles that fall into the POOL not only contribute to the contamination burden, but may also affect POOL DISINFECTION. Additionally, contamination can occur through ingestion. Alcohol increases urine output and therefore creates more chloramines and other DISINFECTION BY-PRODUCTS if BATHERS do not regularly get out of the POOL to urinate. Regular bathroom breaks should be considered at AQUATIC VENUES that allow designated areas like "swim-up bars," which may increase POOL urination and create compliance issues with MAHC combined CHLORINE levels. However, at this time, the MAHC does not have data suggesting that AQUATIC VENUES containing "swim-up bars" have any more water quality compliance issues than those AQUATIC VENUES that do not. AQUATIC VENUES considering "swim-up" bars need to be aware that these areas may also increase the risk of drowning caused by excessive alcohol consumption and should include this thinking in lifeguard training and in-service training.
Currently the majority of states do not allow swim up bars; however, Ohio, Hawaii, Texas, and a few local jurisdictions do, mostly in resort areas. The MAHC defers to local jurisdictions to assess and determine potential risks.
Another topic to consider is nursing mothers and the safety and health risk to infants. While many mothers consider nursing in the POOL a pleasant experience for the baby, there is a definite SAFETY risk for the infant from hypothermia and a health risk from potentially ingesting contaminated POOL water that may contain organisms such as Cryptosporidium.
For more information about this topic, see the CDC Healthy Swimming discussion at: pools.html
Glass Glass is prohibited in the POOL DECK area to prevent injuries to PATRONS. Most BATHERS and PATRONS are barefoot, so stepping on glass can cause serious injuries. If a glass container breaks in the AQUATIC VENUE vicinity, it could potentially fall into the water. Clear glass is virtually invisible in water and is difficult to remove. The only way to ensure all broken glass is removed from POOL water is to thoroughly drain and clean the POOL structure. Depending on the size, draining and cleaning an AQUATIC VENUE can cost thousands of dollars.
Deck Maintenance
Free From Obstructions DECKS should always be kept clear of obstructions to preserve space that may be needed for rescue efforts. Obstructions also cause tripping hazards and can lead to falls and serious injuries. Attention must also be given to potential fall hazards from slippery DECK areas.
Vermin It is important to maintain these areas free from debris, vermin, and vermin harborage. Animals can carry diseases which could be transmitted through bites or contact with bodily fluids or feces.
Original Design Proper maintenance of surfaces will help prevent abrasions to BATHERS and biofilm growth 308.
Aquatic Facility Maintenance
Diving Boards and Platforms Slip resistance can be accomplished by ensuring that the coefficient of friction is greater than or equal to that specified in MAHC Section 4.8.1.4.
Starting Platforms Starting blocks are designed for use by trained persons or those under the supervision of a qualified individual. Use by untrained, unsupervised individuals can lead to serious injury. Since they can be an attraction for unqualified BATHERS, starting block use needs to be clearly prohibited by signage, covers, or other barriers/deterrents. Since BATHERS are known to ignore signs or barriers prohibiting use, the safest approach for removable blocks is to remove them and store them elsewhere when not in use.
Fencing and Barriers This wording refers to alarms associated with open gates or barriers. It is not meant to include burglar or fire alarms.
Aquatic Facility Cleaning In-POOL cleaning systems must be periodically inspected to make sure they retract and stay flush with the floor.
# Recirculation and Water Treatment
Recirculation Systems and Equipment
General The MAHC does not allow shutdown of the RECIRCULATION SYSTEM during closure times since uncirculated water would soon become stagnant and lose residual disinfectant, likely leading to biofilm proliferation in pipes and filters. This would likely compromise water quality and increase the risk to BATHERS. MAHC Section 4.7.1.10.6 describes turndown system design. The flow turndown system is intended to reduce energy consumption when AQUATIC VENUES are unoccupied without compromising water quality. A turbidity goal of less than 0.5 NTU has been chosen by a number of U.S. state CODES (e.g., Florida) as well as the PWTAG and WHO. The maximum turndown of 25% was selected to save energy while not compromising the ability of the RECIRCULATION SYSTEM to remove, treat, and return water to the center and other extremities of the POOL. Additional research in this area could identify innovative ways to optimize and improve this type of system and determine whether more aggressive turndown rates are acceptable.
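As an illustration of why even a modest turndown saves energy, the sketch below applies the idealized pump affinity laws (shaft power varying roughly with the cube of flow). Actual savings depend on the specific pump and system curves and on drive losses, so this is an estimate, not a design method.

```python
# Idealized estimate of pump energy savings from flow turndown using the
# pump affinity laws (shaft power ~ flow cubed). Real savings depend on the
# actual pump and system curves and on drive losses; this is an estimate only.

def relative_power(turndown_fraction: float) -> float:
    """Approximate power at reduced flow, as a fraction of full-flow power."""
    flow_ratio = 1.0 - turndown_fraction
    return flow_ratio ** 3

for turndown in (0.10, 0.25):
    p = relative_power(turndown)
    print(f"{turndown:.0%} turndown -> ~{p:.0%} of full power "
          f"(~{1 - p:.0%} savings while unoccupied)")
# 25% turndown -> ~42% of full power, i.e., roughly 58% savings.
```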
Gutter / Skimmer Pools The recommendation for gutter or SKIMMER POOLS with main drains to have the majority of the water (at least 80% of the recommended recirculation flow) be drawn through the PERIMETER OVERFLOW SYSTEM and no greater than 20% through the main drain during normal operation is based on subsurface distribution of bacteria data showing that most POOLS had higher surface concentrations of bacteria 309. For the 65 POOLS examined, concentrations of bacteria were an average of 3.4 times greater at the surface than below it. However, about 30% of the POOLS showed the opposite trend, with higher subsurface concentrations, which is why some operational flexibility is provided with these values.
For reverse flow (upflow) POOLS, 100% of the recommended circulation flow should be through the PERIMETER OVERFLOW SYSTEM, which is consistent with the German DIN Standards. 310 Efficient removal of surface water is critical for maintaining water quality because surface water contains the highest concentration of pollutants from body oils, sunscreens, as well as other chemicals or particles that are less dense than water.
Bacteria appear to follow the same trend in most cases 311. The distribution of CHLORINE-tolerant pathogens like Cryptosporidium is not known at present. The majority of the organic pollution and contamination is concentrated at or near the surface irrespective of the mixing effects of the circulation.
Inlets During regular seasonal operation following initial adjustments, INLETS should be checked at least weekly to confirm that the rate and direction of flow through each INLET have not changed substantially from the original conditions that established a uniform distribution pattern and facilitated the maintenance of a uniform disinfectant residual throughout the entire facility without the existence of dead spots.
A tracer test (e.g., with a sodium chloride tracer injected on the suction side of the pump) should be conducted annually at startup and documented to quantitatively assess distribution pattern in the POOL. An amount of salt sufficient to increase the baseline conductivity by at least 20% should be added over a one minute period, and the conductivity or TDS should be measured at one minute intervals until the conductivity increases by 20% and/or stops changing for ten consecutive readings after an initial increase. Samples may also be taken at the corners, center, and bottom of the POOL (via a sample pump with the pool unoccupied) in small labeled containers for later measurement to increase the amount of information available to assist in interpreting the results. Increases greater than predicted by the amount of salt added to the POOL volume indicate poor mixing. Areas with conductivities lower than in the return stream at the time the sample was collected are likely to be areas with poor recirculation flows.
Note: It is possible to do a tracer test, which is quantifiable in terms of salt concentration ratios and/or time required to reach equilibrium concentration near the filter.
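A minimal sketch of the supporting arithmetic is shown below, assuming a hypothetical pool volume, salt dose, and baseline TDS; it estimates the TDS rise expected at complete mixing so that measured readings can be judged against it.

```python
# Expected TDS rise for the salt tracer test described above, assuming a
# hypothetical pool volume, salt dose, and baseline TDS.

POOL_VOLUME_L = 378_541     # example: 100,000-gallon pool, in liters
SALT_ADDED_G = 45_000       # grams of NaCl injected on the suction side
BASELINE_TDS_MG_L = 500.0   # measured before the test

expected_rise_mg_l = SALT_ADDED_G * 1000.0 / POOL_VOLUME_L  # mg/L at full mixing
rise_fraction = expected_rise_mg_l / BASELINE_TDS_MG_L

print(f"Expected rise at complete mixing: {expected_rise_mg_l:.0f} mg/L "
      f"({rise_fraction:.0%} of baseline; the test calls for >= 20%)")
# Plateau readings well above this value suggest poor mixing; locations reading
# below the return stream at sampling time suggest poor recirculation flow.
```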
Piping Winterization may involve dropping the water level below the level of the INLETS, blowing or draining all of the water out of the pipes, adding antifreeze, and closing off both ends. Pipes should be drained or winterized in regions where freezing temperatures are expected to be reached inside the pipes. Automotive antifreeze should not be used; any antifreeze used should be non-toxic to humans.
Flow Meters Flow meters are important for the maintenance of proper filtration, backwashing, and recirculation flow rates. It is also feasible to save money on electrical costs by using the flow meter to monitor and adjust the speed of the pump.
cartridges (i.e., cartridges that do not recover 100% of the original capacity when cleaned after fouling). Systems designed for a given TURNOVER time with a filter flow rate of 0.375 gallons per minute per square foot (0.26 L/s/m²) would not be in compliance if partially fouled cartridges dropped the flow rate to 0.30 gallons per minute per square foot (0.20 L/s/m²). Therefore, an acceptable operating range is provided beyond which cartridge replacement would be necessary.
Filter Elements Cartridges should be cleaned when the gauge pressure differential is 10 psi (68.9 kPa) and in accordance with the manufacturer's instructions. Cleaning equipment should include a soaking container properly sized to immerse the filter elements, a rinsing area with proper drainage, and a drying area protected from contamination (e.g., birds and insects). Cleaned filters do not regain 100% of their capacity. Perhaps only about 80% of the capacity is recoverable, regardless of the treatment. If the recommended design flow rate exceeds 80% of the maximum flow allowed on the filter, the filter may be undersized.
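The compliance check described above reduces to a simple flux calculation; the sketch below uses hypothetical flow and cartridge-area numbers.

```python
# Cartridge filter flux check: compare actual gpm per square foot of cartridge
# area against the design and minimum-acceptable values discussed above.
# Flow and cartridge area are hypothetical examples.

DESIGN_FLUX = 0.375     # gpm/ft^2 (0.26 L/s/m^2)
MIN_ACCEPTABLE = 0.30   # gpm/ft^2 (0.20 L/s/m^2)

def flux(flow_gpm: float, cartridge_area_ft2: float) -> float:
    return flow_gpm / cartridge_area_ft2

current = flux(flow_gpm=230.0, cartridge_area_ft2=800.0)
print(f"Current flux: {current:.3f} gpm/ft^2 (design {DESIGN_FLUX})")
if current < MIN_ACCEPTABLE:
    print("Below the acceptable range: clean or replace cartridges.")
```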
Cleaning Procedure
Facilities with cartridge filters are recommended to have the equipment on-site to clean the cartridges. This includes a basin or tub large enough to immerse the entire cartridge. Water from the cleaning and soaking process must be discharged to the sanitary sewer. Proper cleaning is critical. Failure to clean the cartridge properly can lead to disease outbreaks.
# How to Clean Cartridge Filters:
1) RINSE THOROUGHLY: Rinse the cartridge of as much dirt and debris as possible by washing inside and out with a garden hose and spray nozzle.
- DO NOT use a pressure washer. High flow/pressure can drive the dirt into the interior and permanently damage the cartridge. It can also aerosolize pathogens in the filter.
2) DEGREASE: Cartridge filters need to be degreased each time they are cleaned. Body oil, suntan oil, cosmetics, hair products, and/or algae and biofilms can form a greasy coating on the filter pleats, which will clog the pores and reduce the filter capacity. Acid may permanently set the grease and ruin the cartridge.
3) RINSE THOROUGHLY
# 4) SANITIZE:
To remove or prevent biofilms, algae, and bacteria growing on the cartridge, add one quart (0.95 L) of household bleach per 5 gallons (19 L) of clean water and soak one hour before rinsing. (A rough strength estimate for this soak follows the cleaning steps.)
5) RINSE: Remove the clean cartridge from the sanitization soak water and rinse thoroughly with a hose.
6) DRY: After the filter is cleaned and degreased, it should be allowed to dry completely. Some bacteria (e.g., Legionella spp.) that survive the cleaning process can be killed by drying. Do not allow the filter to become contaminated with dirt or soil after it is cleaned. Put the cartridges in a clean plastic trash bag if they are to be transported and the original boxes are not available.
7) ACID WASH - ONLY IF NECESSARY: Excessive calcium or mineral deposits on the filter media can be cleaned with a 1:20 solution of clean water and muriatic acid. Put a few drops of muriatic acid on the filter. If it foams, it might need to be acid washed. Very few filters need to be acid washed.
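The strength of the step 4 sanitizing soak can be estimated as follows; the 6% sodium hypochlorite strength is an assumption (product labels vary), so treat the result as a rough figure.

```python
# Rough strength estimate for the step 4 sanitizing soak (1 quart of household
# bleach per 5 gallons of water). The 6% sodium hypochlorite strength is an
# assumption; check the actual product label.

BLEACH_L = 0.95              # one quart
WATER_L = 19.0               # five gallons
NAOCL_FRACTION = 0.06        # assumed 6% bleach
BLEACH_DENSITY_G_L = 1000.0  # approximated as water

naocl_mg = BLEACH_L * BLEACH_DENSITY_G_L * NAOCL_FRACTION * 1000.0
soak_strength_mg_l = naocl_mg / (BLEACH_L + WATER_L)
print(f"Soak strength: ~{soak_strength_mg_l:.0f} mg/L as NaOCl")  # ~2,900 mg/L
```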
# Pressure Washer
A pressure washer should not be used because high flow/pressure can drive the dirt into the interior and permanently damage the cartridge, or can aerosolize pathogens in the filter biofilm, which can expose and infect workers cleaning cartridge filters in an enclosed space 324.
# Disinfection and pH Control
# Primary Disinfectants
# Chlorine (Hypochlorites)
Although chlorine and bromine are the only primary disinfectants allowed at this time, future research may produce other acceptable primary disinfectants.
# Minimum FAC Concentrations
It is necessary to ensure that FAC is maintained at or above the 1.0 PPM (MG/L) minimum level at all times and in all areas of the POOL. Because CHLORINE efficacy is reduced in the presence of cyanuric acid, higher FAC levels may be necessary for POOLS using cyanuric acid or stabilized CHLORINE.
The minimum FAC level of 1.0 PPM (MG/L) for swimming POOLS is well-supported by available data. The CDC data indicates that a 1.0 PPM (MG/L) FAC residual can provide effective DISINFECTION 325 of most pathogens other than Cryptosporidium.
Substantial laboratory data shows that kill times for microbial CONTAMINANTS are increased in the presence of cyanuric acid. However, the precise impacts on CT VALUES in a swimming POOL environment are not well-established. The impact on CT VALUES is mostly related to the hypochlorous acid (HOCl) concentration that can be calculated using equilibrium constants. In general, studies show that the presence of CYA up to 50 MG/L increases CT VALUES under demand free conditions, and the amount of this increase depends upon the pH and the ratio of CYA to available CHLORINE. Studies suggest that this effect is mitigated with the addition of ammonia nitrogen as low as 0.05 MG/L by producing monochloramine which, although a weaker disinfectant than hypochlorous acid, remains unbound to CYA.
Swimming POOL survey data demonstrates that 1.0 PPM (mg/L) FAC provides acceptable bacteriological quality 326.
However, another paper suggests that FREE CHLORINE levels significantly higher than 1.0 PPM (mg/L) may be required. Based on data collected from seven chlorinated POOLS, Ibarluzea et al. predicted that 2.6 PPM (mg/L) is needed "in order to guarantee, with a probability of 90%, the acceptability of bathing water at indoor chlorinated swimming pools." 327 A minimum FAC level of 3.0 PPM (mg/L) for SPAS addresses the higher THEORETICAL PEAK OCCUPANCY, higher temperatures, and/or at-risk populations served by these venues. The THEORETICAL PEAK OCCUPANCY and temperatures of these venues favor microbial growth and can lead to rapid depletion of CHLORINE. This minimum requirement is consistent with CDC recommendations to minimize transmission of Legionnaires disease from whirlpool SPAS on cruise ships, published in 1997, which recommend maintaining free residual CHLORINE levels in SPA water at 3 to 10 PPM (mg/L). It is further supported by a study reviewing both bromine and CHLORINE, which states that Pseudomonas aeruginosa were rapidly reestablished in SPAS (greater than 10³ cells per mL) when disinfectant concentrations decreased below recommended levels 328. In general, a range of 2-4 PPM (mg/L) FAC for POOLS (3-5 PPM (mg/L) for SPAS) is recommended to help ensure the minimum FAC is maintained and to provide a margin of SAFETY for BATHERS.
For individual POOLS, considerations for ideal FAC levels include:
- Chlorine demand: FAC levels should be sufficient to accommodate peak BATHER LOADS and other sources of contamination.
- Temperature and sunlight: FAC levels should be sufficient to accommodate loss of FAC from higher water temperatures and sunlight.
- Cyanuric acid: Because CHLORINE efficacy is reduced in the presence of cyanuric acid, higher FAC levels may be necessary for POOLS using cyanuric acid 329,330,331,332 or stabilized CHLORINE.
- Algae control: Algae are more difficult to control than most pathogens and may require FAC residuals greater than 3.0 PPM (mg/L), although peer-reviewed data are lacking.
- Accuracy of FAC tests: POOL test kits have been reported to give FAC results that diverge significantly from true values, although peer-reviewed data are lacking.
- Feeder equipment: Automated feeders help reduce variability in dosing and the potential for FAC levels to fall below minimum levels.
- Secondary disinfection: While the minimum FAC level must be maintained in all POOLS, approved SECONDARY DISINFECTION SYSTEMS such as UV and ozone reduce risks from CHLORINE-resistant pathogens and may reduce CHLORINE demand. However, the effects of UV/CHLORINE on water chemistry are still largely undefined. Recent research suggests that UV can increase some forms of CHLORINE demand.
The hydrogen gas is dissolved in the water and eventually vents to the atmosphere. The CHLORINE gas then dissociates into hypochlorous acid (HOCl), which provides a residual of FREE AVAILABLE CHLORINE (FAC):

Cl₂ (g) + H₂O → HOCl (aq) + HCl (aq)

Salt water chlorination units should be sized appropriately to maintain minimum FAC levels during maximum load periods. The units should ideally be controlled by an ORP controller. Operators must still test the FAC residual of the water to ensure that the cell is producing adequate CHLORINE for the POOL. However, a separate chlorinating product may be needed to provide a sufficiently high FAC level for shock treatment or remediation following a fecal accident.
MONITORING and maintaining the pH, total alkalinity, and TDS of the water in the POOL is important. Salt water POOLS intentionally have high concentrations of sodium chloride. The sodium chloride will contribute to TDS, but will not cause decreased disinfectant efficacy or cloudy water.
Electrolytic cells do wear out and need to be replaced. The life of the cell depends upon how many hours the cell operates each day, the pH of the water, and the calcium content of the water. The cells have to be cleaned to remove scale build-up. The systems usually utilize reversal of the polarity on the cells to minimize the scale formation, but eventually the cell will have deposits that require the cell to be removed from the plumbing and soaked in an acid solution.
The cells are also vulnerable to damage if they are operated in conditions of lower than recommended salt residuals or in water that is too cold. The systems have sensors and cut-offs to prevent this damage, but operators must be sure to monitor the unit to recognize when there is a problem.
# Bromine
# EPA Registered
The US EPA Office of Pesticides registers products and approves labels for bromine. Currently, bromine products on the market for use in recreational water are registered with use levels ranging from 1-8 PPM (mg/L), depending on the product. The efficacy of these products has been studied by the manufacturers and submitted to the U.S. EPA under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). The efficacy data analyzed by the U.S. EPA are company confidential and have not been reviewed as part of the development of the MAHC. The MAHC welcomes input and supporting data for establishing upper limits.
# Minimum Bromine Concentrations
Bromine concentrations established by state and local jurisdictions have not been found to correlate with data supporting the concentrations being used. However, every state or local jurisdiction that allows bromine as a disinfectant requires bromine at higher concentrations than CHLORINE and almost twice as much in SPAS and warmer POOLS.
Commercially available test kits are not capable of distinguishing free bromine (Br₂, HOBr, OBr⁻) from combined bromine (bromamines). The bromine value specified in test results is the concentration of total bromine, not the free available halogen as is tested for CHLORINE. To determine total bromine, test kit manufacturers take the value measured on the CHLORINE scale and multiply it by 2.25. The 2.25 conversion factor accounts for the difference in atomic weight between bromine and CHLORINE (Br = 79.90 grams per mole and Cl = 35.45 grams per mole). Further, presently used field test kits assay only for total bromine.
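The conversion is simple enough to show directly; the sketch below reproduces the 2.25 factor from the atomic weights given above.

```python
# The 2.25 conversion factor above, reproduced from the atomic weights.

BR_ATOMIC_WT = 79.90
CL_ATOMIC_WT = 35.45

def total_bromine_ppm(chlorine_scale_reading: float) -> float:
    """Convert a chlorine-scale DPD reading (ppm) to total bromine (ppm)."""
    return chlorine_scale_reading * (BR_ATOMIC_WT / CL_ATOMIC_WT)

print(f"Factor: {BR_ATOMIC_WT / CL_ATOMIC_WT:.4f}")      # 2.2539
print(f"2.0 ppm (Cl scale) -> {total_bromine_ppm(2.0):.2f} ppm total bromine")
```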
Bromine is commonly used in indoor commercial SPAS, probably due to two factors. First, bromamines (bromine and ammonia combined) do not produce irritating odors as chloramines do. Second, bromine efficacy is less impacted than CHLORINE's at higher pH, which typically occurs in a SPA environment. At a pH of 7.5, 94% of free bromine is present as hypobromous acid, whereas at the same pH only 55% of free CHLORINE is present as hypochlorous acid. At a pH of 8.0, bromine still has 83% hypobromous acid, while in chlorinated water hypochlorous acid falls to 28% 334. Bromine is also not very common in outdoor POOLS because, like CHLORINE, bromine is destroyed rapidly in sunlight. Cyanuric acid was developed to combat this problem in chlorinated POOLS, but does not provide a stabilizing effect for bromine.
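These percentages follow from the acid-base equilibrium of each hypohalous acid; the sketch below reproduces them using textbook pKa values, which are approximations near 25 °C and shift slightly with temperature and ionic strength.

```python
# Fraction of each hypohalous acid versus pH from acid-base equilibrium:
#   fraction = 1 / (1 + 10**(pH - pKa))
# pKa values are textbook approximations near 25 C (HOCl ~7.5, HOBr ~8.6-8.7),
# so results land within a few percent of the figures quoted above.

PKA = {"HOCl": 7.54, "HOBr": 8.65}

def acid_fraction(species: str, ph: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (ph - PKA[species]))

for ph in (7.5, 8.0):
    for species in ("HOBr", "HOCl"):
        print(f"pH {ph}: {species} fraction ~ {acid_fraction(species, ph):.0%}")
# pH 7.5 -> ~93% HOBr vs ~52% HOCl; pH 8.0 -> ~82% HOBr vs ~26% HOCl.
```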
A review of the literature and surveillance data from CDC found no evidence that outbreaks have occurred when required minimum bromine concentrations were maintained. Therefore, in the absence of any clear research, the decision to use common state requirements as the recommended levels is prudent.
SPAS have been implicated in many skin disease outbreaks throughout the years. One paper reports that a common culprit, Pseudomonas aeruginosa, was rapidly reestablished in whirlpools (greater than 10³ cells per mL) when disinfectant concentrations decreased below recommended levels (chlorine: 3.0 PPM (mg/L); bromine: 6.0 PPM (mg/L)). The authors studied the reoccurrence of bacteria following cleaning and halogen shock treatment 335. This study emphasized the need for maintaining a consistent disinfectant level in the SPA. CDC recommends 4-6 PPM (mg/L) for bromine.
The MAHC recommends a follow-up study to evaluate the efficacy of bromine against P. aeruginosa, since it is so commonly found in SPAS; because bromine is a very common disinfectant used in SPAS, prevention and treatment are essential.
There are few peer-reviewed studies on bromine efficacy in real-world POOLS and SPAS in the literature. Brown et al. reported reasonable bacterial control with 2.0 PPM (mg/L) total bromine in a 118,000-gallon (447 m³) INDOOR POOL using BCDMH 336. Normal daytime BATHER COUNTS were around 0.21 persons per 500 gallons (1893 L) per hour but often increased to as high as 0.85 in the evening. The POOL did not use supplemental OXIDATION but did replace 5% of the water daily, which likely contributed to the low reported ammonia nitrogen and organic nitrogen. Shaw reports a retrospective analysis of brominated and chlorinated semi-public SPAS in Alberta 337. The data used were the microbiological results of the weekly samples required under provincial regulations. The treatment systems compared include BCDMH (oxidation method not specified), bromide salt regenerated by hypochlorous acid/potassium monopersulfate continuous feed, CHLORINE gas, hypochlorite (type not specified), dichlor, and trichlor. The concentrations were generally in line with provincial regulations of 2 PPM (mg/L) total bromine and 1 PPM (mg/L) free CHLORINE. The brominated SPAS had a higher failure rate in all three bacterial parameters. There were several complaints of both contact dermatitis and Pseudomonas folliculitis from the brominated SPAS during the period studied, but due to the nature of retrospective studies, it was not possible to link the reported RWIs to the concentration of the disinfectant at the time of the complaint. It appears from composite data that when semi-public SPAS are operated using the U.S. EPA minimum halogen concentration of 1.0 PPM (mg/L) free CHLORINE or 2 PPM (mg/L) total bromine, Pseudomonas aeruginosa can be isolated from the brominated SPAS at greater than twice the frequency of chlorinated SPAS.
# Bromates
Ozone and bromide ions in water form hypobromous acid and bromate ions. Bromates have been classified by the International Agency for Research on Cancer (IARC) as having sufficient evidence of carcinogenicity in laboratory animals. As a result, WHO has set a provisional drinking water guideline value of 10 µg/L. The U.S. EPA has established a maximum CONTAMINANT level of 10 µg/L for bromate in drinking water.
BCDMH (1-bromo-3-chloro-5,5-dimethylhydantoin) is the most common form of bromine used in commercial POOLS and SPAS today. The function of DMH is to inhibit the formation of bromates.
At present there is little information on the functionality of using DMH in this manner. Since there is not a convenient field test kit available, an operator has no way of knowing what the DMH level is in the water or when it may go below 10 PPM (mg/L) to allow bromates to form. We also do not know what the maximum safe level of DMH should be. To rely on DMH for bromate prevention, suitable test methods and further research are necessary.
Operators should consider that ozone should likely not be used with bromine systems when there is a substantial likelihood of ingestion of the water. When ozone is used in conjunction with organic bromine sources (BCDMH or DBDMH, another common source of bromine), the ozone readily converts residual bromide ion back to hypobromous acid. This process consumes ozone. With the continued addition of BCDMH, DBDMH, or sodium bromide, the bromide levels will continue to climb in the POOL or SPA. Continuous build-up of bromide will constantly reduce ozone, diminishing ozone's effective OXIDATION (and destruction) of organics and microorganisms in the water. Because of the wide variation in the concentration of bromide and the potential for bromate ingestion, at least one ozone manufacturer does not recommend the installation of ozone units in bromine-treated facilities.
337 Shaw JW. A retrospective comparison of the effectiveness of bromination and chlorination in controlling Pseudomonas aeruginosa in spas (whirlpools) in Alberta. Can J Public Health. 1984 Jan-Feb;75(1):61-8.
Disinfection DISINFECTION using bromine is more complex but less well documented than DISINFECTION using CHLORINE. Hypobromous acid is the putative biocidal chemical species at recreational water pH. Hypobromous acid reacts with inorganic ammonia and forms monobromamine, dibromamine, and nitrogen tribromide, depending on the pH and concentration of ammonia 338. These inorganic bromamines are all considered more biocidal than their corresponding CHLORINE analogs. Hypobromous acid is converted to inert bromide ion upon biocidal action in a manner similar to that seen with hypochlorous acid. One key difference between bromine and CHLORINE DISINFECTION is that bromide is readily oxidized back to hypobromous acid and chloride is not. Further, hypobromous acid is a much weaker oxidizer than hypochlorous acid. As a consequence of these two differences, exogenous OXIDATION of brominated waters (e.g., shocking with chlorine) is more important for safe operation than it is in chlorinated waters. In reviewing the published epidemiological studies on RWIs, it is often difficult to determine the exact treatment system used because the SUPPLEMENTAL TREATMENT SYSTEM is not described. Further, presently used field test kits assay only for total bromine and are not capable of distinguishing free bromine from biocidal inorganic bromamines or from non-biocidal organic bromamines.
# Bromamines
Current POOL and SPA operating manuals state that combined bromine (bromamines) is as efficacious as free bromine. This may be an overgeneralization of the complex nature of bromine chemistry. Bromine reacts with inorganic ammonia and forms analogous compounds (Br₂, hypobromous acid, monobromamine, dibromamine, and nitrogen tribromide), depending on the pH and concentration of ammonia 339. All three bromine-ammonia derivatives are biocidal, but all three are also less stable than their corresponding CHLORINE compounds. As with their CHLORINE analogs, the ratios of the bromamines are highly dependent on the ratio of ammonia to bromine. Further, at low ammonia-to-bromine ratios the biocidal action appears to be substantially reduced 340. The levels of ammonia that result in loss of bromine efficacy have been detected in SPA water 341. At these documented concentrations of bromine and ammonia, the predominant bromamine is most likely dibromamine, which has an estimated half-life of 10 minutes 342. The MAHC was not able to locate data on the efficacy of organic bromamines.
338 Galal-Gorchev H, et al. Formation and stability of bromamide, bromimide, and nitrogen tribromide in aqueous solution. Inorganic Chemistry. 1965;4(6):899-905.
339 Galal-Gorchev H, et al. Formation and stability of bromamide, bromimide, and nitrogen tribromide in aqueous solution. Inorganic Chemistry. 1965;4(6):899-905.
340 Wyss O, et al. The germicidal action of bromine. Arch Biochem. 1947 Feb;12(2):267-71.
341 Kush BJ, et al. A preliminary survey of the association of Pseudomonas aeruginosa with commercial whirlpool bath waters. Am J Public Health. 1980 Mar;70(3):279-81.
# Future Research Needs
# Cryptosporidium Inactivation
Methods to hyper-brominate recreational water in response to diarrheal fecal accidents have not been established. Research in this area is lacking.
# Bromine Associated Rashes
Note to readers: These comments have been inserted to point future researchers toward an under-investigated area of public health and are not meant to imply a negative bias toward bromine.
Literature reviews demonstrate a large number of reports describing rashes associated with brominated water. These rashes fall into two general categories:
- Contact dermatitis due to brominated species in the water, and
- Dermal infections due to Pseudomonas aeruginosa.
These are most easily differentiated by incubation time. The vast majority of contact dermatitis reactions occur within 24 hours of immersion, sometimes within minutes. These are often referred to as "bromine itch" and are widely reported in the medical literature 343,344,345. In most cases the putative etiological agent is thought to be bromamines. This type of dermatitis appears to be a result of cumulative exposure to bromine-treated water and is particularly prevalent among medical personnel who provide aquatic physical therapy 346. The exact compounds inducing contact dermatitis have not been identified. One study strongly suggests that the use of bromine with supplemental OXIDATION minimizes contact dermatitis 347. In numerous epidemiological studies, poor water quality is commonly, but not always, reported (Woolf and Shannon report an extreme example of a foamy pool leading to multiple cases of contact-related RWI 348). The typical incubation period for Pseudomonas aeruginosa folliculitis is several days but can be as short as 24 hours. Outbreaks of Pseudomonas aeruginosa folliculitis are routinely associated with inadequate sanitation in both chlorinated and brominated waters. The minimum concentration to prevent such outbreaks has not been established but appears to be at least 1 PPM (mg/L) free CHLORINE and 2 PPM (mg/L) total bromine. A survey of the literature since the mid-1980s shows more dermal RWI outbreaks reported in brominated waters than in chlorinated waters. It is not known whether the reports reflect the true incidence, a bias in reporting of bromine systems, or a bias in reporting RWIs in SPAS, which tend to use bromine disinfectants.
There are many unanswered questions surrounding bromine-treatment systems commonly used in AQUATIC VENUE DISINFECTION. After reviewing the literature, the MAHC has concluded the following research is essential to understanding bromine DISINFECTION.
Further research needs to address, in priority order:
- The efficacy of bromine, to establish a minimum concentration for AQUATIC VENUES and warm water SPAS and THERAPY POOLS,
- The maximum bromine concentration that should be allowed,
- The contribution of bromamines to DISINFECTION and BATHER rashes,
- Methods to better control bromamines,
- Creation of a test kit to differentiate free bromine from combined bromine in the water (as is currently practiced with chlorine),
- Use of DMH in respect to bromate formation,
- Establishing a safe maximum DMH level,
- Creation of a test kit to establish DMH levels in the water, and
- Fecal accident recommendations to control Cryptosporidium when using a brominated POOL.
# Stabilizers
# Minimum Disinfection
Minimum CHLORINE levels should be increased by a factor of at least two when using CYA. Robinton et al. found that "50 MG/L of cyanuric acid produced pronounced retardation of the bactericidal efficiencies of solutions of calcium hypochlorite, trichloroisocyanuric acid, and potassium dichloroisocyanurate such that a four- to eightfold increase in the amount of 'free' available residual CHLORINE may be necessary to attain the same degree of inactivation of the same organisms in the same interval of time" 349.
Laboratory studies by Warren and Ridgway show that addition of 50 MG/L cyanuric acid to 0.5 -1.0 MG/L available CHLORINE resulted in a significant increase in the CT of Staphylococcus aureus, in parallel with the increase in available CHLORINE stability in sunlight. However, higher concentrations of cyanuric acid resulted in little to modest further increases in CT over that for 50 MG/L cyanuric acid. For example, the data suggest that for 50, 100 and 200 MG/L of cyanuric acid, the level of CHLORINE required for 99% kill of Staphylococcus aureus in one minute would be 1.9, 2.15, and 2.5 MG/L, respectively 350 .
The MAHC has adopted a SAFETY factor of 2, so that 2 PPM is the minimum FAC concentration when using stabilized products. More data are needed to understand the impact of increasing cyanurate levels on pathogen inactivation and to assess what this level should be, so the MAHC has adopted a maximum cyanuric acid level of less than or equal to 100 PPM, as has the World Health Organization 351.
The level of cyanurate allowed in outdoor AQUATIC VENUES is double that for non-stabilized CHLORINE, which is a SAFETY factor for the decrease in oxidative capacity. The MAHC has decided that, from a public health standpoint, it cannot support a prohibition of the use of cyanurate in most INCREASED RISK AQUATIC VENUES. The SAFETY margin of two times the level of non-stabilized product would also apply in INCREASED RISK indoor settings in addition to the requirement for a SECONDARY DISINFECTION SYSTEM; therefore, prohibition in an INCREASED RISK VENUE cannot, at this time, be supported with a public health argument. The exception is the operation of SPAS and THERAPY POOLS, which have large issues with the efficacy of agents against pathogens in biofilms, difficulties with maintaining needed pH levels (SPAS), and use by INCREASED RISK groups of patients (THERAPY POOLS). SPAS and THERAPY POOLS will, therefore, not be allowed to use cyanuric acid or stabilized CHLORINE products.
Users should be aware that if AQUATIC VENUES using cyanuric acid or stabilized CHLORINE products have a diarrheal fecal incident, they will need to close for more prolonged periods for HYPERCHLORINATION, circulate water through a SECONDARY DISINFECTION SYSTEM, or replace the water in the AQUATIC VENUE per MAHC Section 6.5.3.2.1 352.
# Indoor Pools
There appears to be no operational or public health reason for INDOOR AQUATIC VENUES to use CYA. CYA stabilizes CHLORINE against degradation from direct sunlight and so likely has limited benefits for indoor POOLS, despite some operators claiming a benefit for indoor POOLS with large glassed areas. However, the level of cyanurate allowed in outdoor AQUATIC VENUES is double that for non-stabilized CHLORINE, which is a SAFETY factor for the decrease in oxidative capacity. The MAHC has decided that it cannot support, from a public health standpoint, a prohibition of the use of cyanurate in indoor settings. The SAFETY margin would also apply in indoor settings; therefore, prohibition in an indoor setting would require specific data on the direct impact in indoor settings, since the MAHC allows it in outdoor settings. CDC still does not recommend cyanuric acid use for indoor POOLS or hot tubs. The recommendation was underscored in a 2000 MMWR report on a Pseudomonas dermatitis/folliculitis outbreak associated with indoor POOLS and hot tubs in Maine, which noted that cyanuric acid had been added to an indoor POOL, reducing the antimicrobial capacity of free CHLORINE 353,354.
Users should be aware that if AQUATIC VENUES using cyanuric acid or stabilized CHLORINE products have a diarrheal fecal incident, they will need to close for more prolonged periods for HYPERCHLORINATION, circulate water through a SECONDARY DISINFECTION SYSTEM, or replace the water in the AQUATIC VENUE per MAHC Section 6.5.3.2.1 355.
# Effects of Cyanuric Acid on Microbial Inactivation
There are a large number of references on the effect of CYA on kill times (CT Values). In general, they show that the presence of CYA increases CT VALUES, and the amount of this increase depends on the pH and the ratio of CYA to available CHLORINE. However, there are few reports that relate specifically to the issue of what levels of available CHLORINE and cyanuric acid are required to maintain a swimming POOL in a biologically satisfactory state.
Studies examining the effect of cyanuric acid on the DISINFECTION capacity of CHLORINE show that using cyanuric acid or stabilized CHLORINE slows down the inactivation times
on bacteria, algae, protozoa (Naegleria gruberi and Cryptosporidium parvum), and viruses. Yamashita et al. concluded the addition of cyanuric acid increased the time needed for DISINFECTION of 12 virus types by a factor of 4.8-28.8 compared to free CHLORINE alone 356,357. In a later study, Yamashita et al. 358 found "Total plate counts ranged from 0 to 1 per mL in the swimming POOLS treated with sodium hypochlorite and 0 to 51 in those with trichloroisocyanurates. In 11 of 12 water samples of 3 swimming POOLS using trichloroisocyanurates, poliovirus type 1 survived after 2 minute contact while in 5 samples poliovirus type 1 survived after 5 minute contact." The researchers concluded this showed that the risk of viral infection is greater in swimming POOL water treated with chlorinated isocyanurates than in water treated with sodium hypochlorite.
The addition of CYA similarly impaired the inactivation of poliovirus 359. Cyanuric acid, used as a CHLORINE stabilizer in swimming POOL waters, had a relatively minor effect on the algicidal efficiency of FREE CHLORINE 360. There are few data regarding protozoa and the effect of CYA on inactivation, though the DISINFECTION rate for Naegleria gruberi was reduced by cyanuric acid in laboratory-controlled, CHLORINE demand-free conditions 361.
Shields et al. 362 extended the previous findings by demonstrating that cyanuric acid significantly decreases the rate of inactivation of Cryptosporidium parvum OOCYSTS. In this study, a three-log reduction of OOCYSTS took place in the presence of 20 PPM (mg/L) FAC. When 50 PPM (mg/L) CYA was introduced, the 10-hour kill rate was less than ½ log.
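For readers less familiar with log-reduction language, the small sketch below converts log reductions to percent inactivation; note how much weaker the half-log result with CYA is than the three-log benchmark.

```python
# Converting the log-reduction language above into percent inactivation.

import math

def log_reduction(n0: float, n: float) -> float:
    """Log10 reduction from initial count n0 to surviving count n."""
    return math.log10(n0 / n)

def percent_inactivated(logs: float) -> float:
    return 1.0 - 10.0 ** (-logs)

assert log_reduction(1e6, 1e3) == 3.0  # 10^6 oocysts down to 10^3 is 3-log

for logs in (0.5, 3.0):
    print(f"{logs}-log reduction -> {percent_inactivated(logs):.1%} inactivated")
# 0.5-log is only ~68.4% inactivation versus 99.9% for the 3-log benchmark.
```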
Pseudomonas inactivation in the presence of CYA was also studied in POOL water, and it was found that increased CYA concentrations lengthened the kill times. The effect of cyanuric acid was greater as the concentration of CHLORINE in the water decreased 363. Favero et al. found that at free CHLORINE concentrations of more than 0.5 PPM (mg/L), P. aeruginosa was rarely found except in those POOLS which used sodium dichloroisocyanurate as a POOL disinfectant. Three private swimming POOLS using sodium dichloroisocyanurate were found to contain large numbers of the potential pathogen P. aeruginosa 364. Fitzgerald found that concentrations of 25, 50, and 100 mg of cyanuric acid per liter had large effects on the Pseudomonas kill rate of 0.1 MG/L free CHLORINE, but this effect diminished with increasing free CHLORINE content (0.25, 0.5 mg/L). Fitzgerald also found that the same cyanuric acid concentrations had little effect on the kill rate of 0.5 mg of CHLORINE plus 0.1 mg of NH4-N per liter; however, cyanuric acid did reduce the time required for 99.9% kills when tested in the presence of higher concentrations of ammonia 365. The basis for this finding should be explored further.
# Fecal Accident Response
The use of stabilized CHLORINE is not recommended for HYPERCHLORINATION in RWI outbreaks or in response to fecal accidents. Present MAHC requirements for HYPERCHLORINATION and POOL remediation are ineffective for POOLS using cyanurate-stabilized CHLORINE. Estimated Cryptosporidium inactivation times are much longer, which will require substantially longer closure times 366.
# Toxicity
The maximum CYA concentration of 100 PPM (mg/L) should be considered protective from a toxicological perspective. Using an assumption that 100 mL of POOL water is swallowed per swim session, the World Health Organization (WHO) concluded that CYA levels in POOLS should be below 117 PPM (mg/L). This is based on a tolerable daily intake (TDI) for anhydrous sodium dichloroisocyanurate (NaDCC) of 2 mg/kg of body weight, which translates into an intake of 20 mg of NaDCC (or 11.7 mg of CYA) per day for a 10 kg child. The U.S. EPA SWIMODEL, relying on somewhat lower exposure assumptions, would yield a higher acceptable level for CYA.
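The WHO figure can be reproduced directly from the stated assumptions:

```python
# The WHO arithmetic behind the ~117 ppm ceiling, using the stated assumptions.

TDI_NADCC_MG_PER_KG = 2.0        # tolerable daily intake for anhydrous NaDCC
CHILD_WEIGHT_KG = 10.0           # reference child
INGESTED_L_PER_SESSION = 0.1     # 100 mL of pool water swallowed per session
CYA_PER_NADCC = 11.7 / 20.0      # 20 mg NaDCC corresponds to 11.7 mg CYA

tolerable_nadcc_mg = TDI_NADCC_MG_PER_KG * CHILD_WEIGHT_KG   # 20 mg/day
tolerable_cya_mg = tolerable_nadcc_mg * CYA_PER_NADCC        # 11.7 mg/day
ceiling_mg_l = tolerable_cya_mg / INGESTED_L_PER_SESSION

print(f"Guideline CYA ceiling: {ceiling_mg_l:.0f} mg/L")  # 117 mg/L
```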
# Research
Though the data show that CYA use increases the inactivation time of many pathogens, the MAHC would like to see a study done on specific pathogens and inactivation rates at differing CYA levels, up to at least 200 PPM (mg/L). Further research on the inhibitory effect of cyanuric acid on DISINFECTION should evaluate the level at which cyanuric acid can still protect CHLORINE from UV while also balancing the inactivation rate of the most common AQUATIC VENUE pathogens. The effect of pH in the presence of cyanuric acid should also be investigated. Additionally, a test kit should be created to test lower and higher levels of CYA. The current products on the market are not very accurate and need to operate over a wider range of CYA levels. During RWI outbreaks, it is strongly recommended that the investigation team measure CYA levels.
Compressed Chlorine Gas Installation/use of compressed CHLORINE gas is prohibited for new AQUATIC FACILITIES; however, there are existing facilities that continue to use these gas systems. Because of the potential hazard, it is important that existing facilities meet STORAGE, ventilation, handling, and operator training requirements if use is to continue. If these requirements are not met, use must be discontinued and a properly designed/sized and approved disinfectant system installed.
The following design criteria from an existing health code provide additional details for consideration when evaluating acceptability of an existing compressed gas installation.
- Location. The chlorinator room shall be located on the opposite side of the POOL from the direction of the prevailing winds. CHLORINE STORAGE and chlorinating equipment shall be in a separate room. This room shall be at or above grade.
- Venting. The CHLORINE room shall have a ventilating fan with an airtight duct beginning near the floor and terminating at a safe point of discharge to the out-of-doors. A louvered air intake shall be provided near the ceiling. The ventilating fan shall provide one air change per minute and operate from a switch located outside the door.
- Door. The door of the chlorinator room shall not open to the swimming POOL, and shall open outward directly to the exterior of the building. The door shall be provided with a shatterproof inspection window and should be provided with "panic hardware."
- Chlorine cylinders. CHLORINE cylinders shall be anchored. The cylinders in use shall stand on a scale capable of indicating gross weight with one-half pound accuracy. STORAGE space shall be provided so that CHLORINE cylinders are not subjected to direct sunlight. STORAGE space shall be in an area inaccessible to the general public.
- Injection location. Mixing of CHLORINE gas and water shall occur in the CHLORINE room, except where vacuum-type chlorinators are used.
- Backflow. The chlorinators shall be designed to prevent the BACKFLOW of water or moisture into the CHLORINE gas cylinder.
# Salt Electrolytic Chlorine Generators, Brine Electrolytic Chlorine or Bromine Generators
In-line generators shall use only POOL-grade salt dosed into the POOL to introduce CHLORINE into the POOL vessel through an electrolytic chamber, to avoid potential health risks associated with DISINFECTION byproducts forming from salt impurities, including bromide and iodide. For example, Kristensen et al. directly correlated bursts of bromodichloromethane formation to salt addition to POOL water over a MONITORING period of more than one year 367. In a comparison study of common disinfectant methods, Lee et al. found salt brine electrolysis formed the highest levels of bromodichloromethane, dibromochloromethane, and bromoform 368. Zwiener et al. noted that iodized table salt should not be used in salt POOLS because iodized DISINFECTION byproducts, which are generally more toxic than chlorinated DISINFECTION byproducts, could form 369. Additionally, there is a perception by some that salt water POOLS can be operated with table salt (which is commonly iodized).
Secondary or Supplemental Treatment Systems Due to the risk of outbreaks of recreational water illnesses (RWIs) associated with halogen-tolerant pathogens such as Cryptosporidium, it is strongly recommended that all AQUATIC FACILITIES include SECONDARY DISINFECTION SYSTEMS to minimize the risk to the public associated with these outbreaks.
All existing regulations covering fecal events or detection of pathogens must still be adhered to when SECONDARY DISINFECTION SYSTEMS are utilized. SECONDARY DISINFECTION SYSTEMS can only minimize the risk and are not a guarantee of treatment due to the possibility of cross contamination of the POOL or water feature and the time required to pass the entire volume of water through the treatment process.
As the general effectiveness of a SECONDARY DISINFECTION SYSTEM is affected by the AQUATIC VENUE TURNOVER rate and mixing/circulation within the AQUATIC VENUE, the MAHC requirements for filter recirculation and TURNOVER rates must be followed. The performance of SECONDARY DISINFECTION SYSTEMS will be enhanced when the shortest TURNOVER times are achieved for any particular type of AQUATIC FACILITY.
The use of certain types of AQUATIC VENUES presents an increased risk of recreational water illness (RWI) to users. These AQUATIC VENUES include THERAPY POOLS, WADING POOLS, SPRAY PADS, swim schools, INTERACTIVE WATER PLAY AQUATIC VENUES, and AQUATIC FEATURES. Given that users of these types of facilities frequently have lesser
developed immune systems (children), and/or a higher prevalence of disease (children and older adults), and/or compromised immune systems, and/or open wounds, additional precautions against RWIs are warranted.
CDC swimming POOL surveillance reports show that of the 21,500 inspections conducted between May and September of 2002, water chemistry violations were found at 38.7% of facilities. Of these violations, 14.3% were for inadequate DISINFECTION levels at THERAPY POOLS 370.
The use of INTERACTIVE WATER PLAY AQUATIC VENUES has previously been associated with outbreaks of gastroenteritis. In 1999, an estimated 2,100 people became ill with Shigella sonnei and/or Cryptosporidium parvum infections after playing at an "interactive" water fountain at a beachside park in Florida 371 .
In one of the largest outbreaks reported, approximately 2,300 persons developed cryptosporidiosis following exposure to a New York spray park. The environmental investigation revealed that filtration and DISINFECTION of the recycled water were not sufficient to protect the PATRONS from this disease. In response, emergency legislation was passed, which required the installation of SECONDARY DISINFECTION (e.g., ultraviolet radiation or ozonation) on water returning through the sprayers 372 .
Ultraviolet Light
# 3-log Inactivation
Records of the correct calibration, maintenance, and operation of SECONDARY DISINFECTION SYSTEMS should be maintained by the facility's management.
# Calibrated Sensors
Owners/operators need to consult the unit manual and the manufacturer's manual for guidance on how to accomplish this and who is qualified to do so.
Copper / Silver Ions EPA has set current drinking water STANDARDS at 1.3 PPM for copper and 0.10 PPM for silver, which are generally accepted in the states that have requirements for this treatment. These ion generation systems are not meant to replace disinfecting halogens, and the minimum halogen levels must continue to be maintained. The manufacturer's recommended procedures should be followed to avoid the potential for staining; operating the POOL with copper levels outside the recommended range may cause staining. Copper-based algaecides should not be used with these systems, since use of these products increases the level of copper in the POOL and increases the potential to cause health effects or stain surfaces.
In addition, studies have shown that the presence of copper in POOL water has a catalytic effect on the formation of TRIHALOMETHANES 373.
Other Sanitizers, Disinfectants, or Chemicals The MAHC has opted not to include lists of disinfectants that should not be used in AQUATIC VENUES; instead, it requires that such products must not pose a hazard in combination with the CHLORINE or bromine disinfectants in use and that all water quality criteria must be met.
# PHMB
Polyhexamethylene biguanide hydrochloride (PHMB) is a polymeric antimicrobial that has been used as an alternative to CHLORINE and bromine. PHMB is often referred to as biguanide in the industry. The formal name for PHMB on US EPA accepted labels is "Poly (iminoimidocarbonyliminoimido-carbonyl iminohexamethylene) hydrochloride". The U.S. EPA has registered PHMB for use in POOLS and SPAS as a "SANITIZER" with label directions requiring that the concentration be maintained between 30 and 50 PPM (mg/L) as product (6 to 10 PPM (mg/L) of active ingredient).
PHMB is not an oxidizer and must be used in conjunction with a separately added oxidizing product. Hydrogen peroxide is the strongly preferred oxidizer.
The vast majority of the PHMB used in POOLS and SPAS is in private residences but a limited number of public facilities have used PHMB.
Because of its limited use in public AQUATIC FACILITIES, there are few independent studies on the efficacy of PHMB in recreational water. Studies report that its rate of kill of bacteria is slower than that of CHLORINE under laboratory conditions. However, the U.S. EPA found that manufacturer-generated data demonstrated adequate efficacy under the EPA guideline DIS/TSS-12 to grant registration under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), without regard to whether the facility is public, semi-public, or private. As part of its registration process, the US EPA does not distinguish between public and private facilities. The efficacy data analyzed by the U.S. EPA are company confidential and have not been reviewed as part of the development of the MAHC.
There are no known published studies of the efficacy of PHMB against non-bacterial POOL and SPA infectious agents (e.g., norovirus, hepatitis A, Giardia sp., Cryptosporidium spp.) under use conditions. PHMB is generally compatible with both UV and ozone, but both UV and ozone will increase the rate of loss of PHMB. Since SECONDARY DISINFECTION SYSTEMS require the use of a halogen as the primary disinfectant, the use of PHMB, even with a secondary system, is problematic.
PHMB is NOT compatible with CHLORINE or bromine. POOLS using PHMB face a serious treatment dilemma for control of Cryptosporidium after a suspected outbreak or even a diarrheal fecal accident. The addition of 3 PPM (mg/L) of CHLORINE to a properly maintained PHMB-treated POOL results in the precipitation of the PHMB as a sticky mass on the POOL surfaces and in the filter. Removal of the precipitated material can be labor intensive.
Testing for PHMB requires special test kits. Conventional kits for halogens are not suitable. PHMB test kits are readily available at most specialty retail POOL stores and on-line.
# Hydrogen Peroxide
Hydrogen peroxide is not registered by the U.S. EPA as a disinfectant for recreational water. Since it is not registered, the use of hydrogen peroxide as a recreational water disinfectant, or any market claim that implies hydrogen peroxide provides any biological control in recreational water, is a violation of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). Hydrogen peroxide has been granted registration by the U.S. EPA as a hard surface disinfectant and for several other applications. The U.S. EPA Reregistration Eligibility Decision (RED) on hydrogen peroxide is available from the EPA website at www.epa.gov/oppsrrd1/REDs/old_reds/peroxy_compounds.pdf. The U.S. EPA posts PDF copies of accepted product labels on the National Pesticide Information Retrieval System website. Product claims for uses and concentration may be verified by reading the PDF of the U.S. EPA stamped and accepted copy of the product use directions at that website.
When used as a hard surface disinfectant, hydrogen peroxide is normally used at around 3%. When used in recreational water, hydrogen peroxide is used at 27 to 100 ppm (mg/L), which is 1111 and 300 times, respectively, more dilute than the concentration used on hard surfaces. Borgmann-Strahsen evaluated the antimicrobial properties of hydrogen peroxide at 80 and 150 ppm (mg/L) in simulated POOL conditions 374. Whether 150 ppm (mg/L) of hydrogen peroxide was used by itself or in combination with 24 ppb of silver nitrate, it had negligible killing power against Pseudomonas aeruginosa, E. coli, Staphylococcus aureus, Legionella pneumophila, or Candida albicans, even with a 30-minute contact period. In the same tests, the sodium hypochlorite controls displayed typical kill patterns widely reported in the literature. Borgmann-Strahsen concluded that hydrogen peroxide, with or without the addition of silver ions, was "no real alternative to chlorine-based disinfection of swimming pool water from the microbiological point of view."
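As a rough check on the dilution comparison above (a back-of-the-envelope sketch, not an authoritative calculation), note that a 1% (w/w) solution is approximately 10,000 ppm:

```python
# Rough arithmetic check of the dilution comparison in the text.
# 3% hydrogen peroxide is ~30,000 ppm, so the 27-100 ppm used in
# recreational water is far more dilute than hard-surface use.

HARD_SURFACE_PCT = 3.0                         # typical hard-surface use, percent
hard_surface_ppm = HARD_SURFACE_PCT * 10_000   # 1% (w/w) is ~10,000 ppm

for pool_ppm in (27, 100):
    factor = hard_surface_ppm / pool_ppm
    print(f"{pool_ppm} ppm in pool water is ~{factor:.0f}x more dilute than 3%")
# Output: ~1111x and ~300x, matching the figures cited above.
```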
# Chlorine Dioxide

CHLORINE dioxide is not presently registered by the U.S. EPA for any use in recreational water. Since it is not registered, the use of chlorine dioxide as an antimicrobial treatment (e.g., disinfectant, sanitizer, algaecide, slimicide, biofilm control agent) in recreational water, or any market claim that implies chlorine dioxide provides any biological control in recreational water, is a violation of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). Chlorine dioxide has been granted registration by the U.S. EPA as an antimicrobial for other applications, including drinking water. One product was previously registered as a slimicide for use in PHMB-treated recreational water, but that registration has since been dropped. The U.S. EPA Reregistration Eligibility Decision (RED) on chlorine dioxide is available from the U.S. EPA website.
The U.S. EPA posts PDF copies of accepted product labels on the National Pesticide Information Retrieval System website. Product claims for uses and concentration may be verified by reading the PDF of the U.S. EPA stamped and accepted copy of the product use directions at that website.
Chlorine dioxide has the potential to be an alternative remediation tool, but it has not yet been approved by the U.S. EPA for this use and can be hazardous unless appropriate SAFETY protocols are followed. CDC has determined that chlorine dioxide can be used instead of HYPERCHLORINATION for rapid inactivation of Cryptosporidium (3-log inactivation in 105 to 128 minutes) and that this effect was synergistically enhanced with a FREE CHLORINE RESIDUAL in place 375. This suggests chlorine dioxide might be very useful in remediating contaminated AQUATIC VENUES in the absence of BATHERS.
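The reported inactivation times can be put in perspective with a simple sketch. Assuming idealized first-order (Chick-Watson-type) kinetics, which is a simplification of real disinfection behavior, the 3-log inactivation in 105 to 128 minutes reported above implies the following approximate times for other log-reduction targets:

```python
# Illustrative only: assuming simple first-order inactivation kinetics,
# derive a log10 reduction rate from the reported 3-log times and extrapolate.
# Real-world kinetics depend on dose, pH, temperature, and residual chemistry.

for t_3log in (105, 128):                  # minutes for 3-log, per the text
    rate = 3.0 / t_3log                    # log10 reduction per minute
    for target_logs in (1, 2, 4):
        t = target_logs / rate
        print(f"3-log in {t_3log} min implies ~{t:.0f} min for {target_logs}-log")
```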
# Potential for Using Chlorine Dioxide in the Future
During the drafting of this section of the MAHC, several members of the MAHC had interest in using chlorine dioxide as a remedial treatment for Cryptosporidium and Legionella. Recommendations for this were not pursued because of the status of chlorine dioxide under FIFRA. Published studies, including the EPA Alternate Disinfection Manual for drinking water, show that chlorine dioxide may be a very rapid remedial treatment for these life-threatening pathogens. If the registration status of chlorine dioxide changes, the MAHC suggests that chlorine dioxide use be reconsidered.
# Provisions for Emergency Use of Chlorine Dioxide
Even though chlorine dioxide is not presently registered for use in recreational water, it is possible to use it under Section 18 of FIFRA. An example of this would be the remediation of a Legionella-contaminated health club SPA where other treatments have proven ineffective. More information on emergency exemptions can be found on the U.S. EPA website. Because of the lack of existing use directions and the potential for occupational exposure, it is strongly suggested that a certified industrial hygienist be included in developing emergency treatment plans.

# Clarifiers, Flocculants, and Defoamers

POOLS and SPAS may benefit from the use of one or more of these types of products. There are numerous brands available that are formulated for commercial POOLS and SPAS. Each product is marketed for a specific procedure, and each may contain one or more natural or synthetic polymers and chemical or metallic ingredients. Neither the efficacy nor the SAFETY of these products has been reviewed by the U.S. EPA or any other federal agency. The state of California does require submission of a detailed data package prior to registration. Products sold in the state of California must have the state registration number on the label. Products registered in California but sold outside the state usually, but are not required to, carry the California registration number on the label. Any local agency concerned about a particular product could request that the producer supply the California registration number and then verify the status of the product with the California Department of Pesticide Regulation.
# pH

There are three reasons to maintain pH: efficacy of the CHLORINE, BATHER comfort, and maintenance of balanced water.
Each of these reasons is discussed briefly below:
# Efficacy of Chlorine
The efficacy of CHLORINE/hypochlorous acid is dramatically impacted by pH; pathogen inactivation can be severely affected at higher pH levels, where only a small percentage of FREE CHLORINE is active. Lower pH levels allow a greater percentage of FREE CHLORINE to be "active." Further data are needed to ensure that lower levels (e.g., 6.8 to 7.2) do not adversely impact membranes, particularly eyes. The present practice of maintaining the pH between 7.2 and 7.8 has been developed by coupling physical chemistry with empirical observations. There is no definitive peer-reviewed study that extensively covers the subject of pH in POOL and SPA water except those showing the titration of hypochlorous acid and the importance of pH for assuring maximal efficiency. The best general authority is the 1972 edition of the Handbook of Chlorination by Geo. Clifford White. The 1972 edition of this widely recognized authority on CHLORINE chemistry is the only edition with a chapter specifically on POOLS. Much, but not all, of the POOL chemistry chapter can be found in subsequent editions. Copies of the 1972 edition are difficult to locate in libraries but are available for sale on the internet as of July 2009. The discussion on efficacy and BATHER comfort below is a summary of the 1972 edition's discussion of pH.
CHLORINE used in POOLS refers to hypochlorous acid. Hypochlorous acid (HOCl) is a weak acid that readily dissociates to form hypochlorite (OCl-) and hydrogen ion (H+). The mid-point of the dissociation (the pKa) is at pH 7.5. Functionally, this means that at a pH of 7.5, 50% of the FREE CHLORINE present will be in the form of hypochlorous acid and 50% will be in the form of hypochlorite. As the pH decreases below 7.5, the proportion of hypochlorous acid increases and the proportion of hypochlorite ion decreases. The opposite occurs as the pH increases above 7.5. Numerous investigators have reported that hypochlorous acid is approximately 100 times more effective at killing microorganisms than the hypochlorite ion. Thus, from a public health perspective, it is desirable to maintain the pH so as to maximize the hypochlorous acid portion of the FREE CHLORINE present in the water.
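The dissociation described above follows the standard Henderson-Hasselbalch relationship, so the hypochlorous acid fraction at any pH can be estimated directly. The following is a minimal sketch assuming a pKa of 7.5 and ignoring the small effects of temperature and ionic strength:

```python
# Fraction of FREE CHLORINE present as hypochlorous acid (HOCl) versus
# hypochlorite (OCl-) at a given pH, from the Henderson-Hasselbalch relation.

PKA_HOCL = 7.5  # mid-point of the dissociation, per the text

def hocl_fraction(ph: float) -> float:
    """Fraction of FREE CHLORINE present as HOCl at the given pH."""
    return 1.0 / (1.0 + 10 ** (ph - PKA_HOCL))

for ph in (7.0, 7.2, 7.5, 7.8, 8.0):
    print(f"pH {ph}: {hocl_fraction(ph):.0%} HOCl")
# pH 7.0: 76%; pH 7.5: 50%; pH 8.0: 24% -- illustrating why lower pH within
# the accepted range improves disinfection efficacy.
```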
# Bather Comfort
As BATHERS enter the water, their skin and eyes come into direct contact with the water and its constituent components. In general, the eyes of BATHERS are more sensitive to irritation than the skin. Studies on the sensitivity of BATHERS' eyes to pH changes of the water show wide variations in tolerance limits. The tolerance of the eye to shifts in pH is also impacted by the concentration of FREE CHLORINE, combined CHLORINE, and alkalinity. Under normal POOL conditions, the optimum limits for BATHER comfort appear to be from pH 7.5 to 8.0.
# Potential for Lowering pH in the Future
During the review of the data, the MAHC had a broad interest in lowering the minimum pH. This would increase the proportion of hypochlorous acid (at the expense of hypochlorite) and thus increase DISINFECTION efficacy. This was not recommended because of the lack of data on the impact on BATHERS, particularly the eyes. If additional information on the impact of lower pH on BATHERS' skin and eyes is developed, the MAHC suggests that the acceptable range for pH be reexamined. As part of the reexamination, consideration should also be given to how this change would impact the water balance and any possible negative effects on the facility.
# Feed Equipment

The Chlorine Institute has checklists and guidance for working with compressed CHLORINE gas.
# Automated Controllers and Equipment Monitoring
# Ozone System

As a secondary treatment system, it is critical to monitor the system to ensure it is performing as required.

# UV System

As a secondary treatment system, it is critical to monitor the system to ensure it is performing as required.

# Water Sample Collection

An 18-inch (45.7 cm) water depth for sample collection is recommended. Both the National Swimming Pool Foundation (NSPF) Certified Pool Operator Manual and the National Recreation and Park Association (NRPA) Aquatic Facility Operator Manual instruct the operator to reach at least 18 inches (45.7 cm) below the water's surface to collect the water sample. In an outdoor POOL, there is chemical interaction with ultraviolet light at the surface which will affect the reading. Most of the chemical CONTAMINANTS in a POOL are located within the top 18 inches (45.7 cm), which is why most studies of POOL CONTAMINANTS are performed by collecting samples at a depth of less than or equal to 30 centimeters (11.8 inches) below the POOL water surface 376,377. These CONTAMINANTS will give false pH and DISINFECTANT readings in indoor and outdoor AQUATIC VENUES. To sample, plunge the assembly (mouth first) quickly to the marked depth, invert, and let the bottle fill. Remove when full of water and begin testing.
# Aquatic Venue Water Chemical Balance

Water balance is a term used to describe the tendency of water to dissolve (corrode) or deposit minerals (form scale) on surfaces contacted by the water. Balanced water will neither corrode surfaces nor form scale. Factors that impact water balance are pH, hardness, alkalinity, dissolved solids, and temperature. The presently used water balance parameters are meant to protect AQUATIC VENUE equipment and surfaces from the deleterious effects of corrosion and scale formation. Improperly balanced water is not in itself a threat to public health. Water balance is expressed in several ways, but the most common is the SATURATION INDEX. Each factor in the SATURATION INDEX equation can vary within a limited range with the water still considered balanced. Shifts in pH have a significant impact on water balance. Water balance chemistry is discussed extensively in all pool operator classes and is well beyond the scope of this Annex; a brief illustrative sketch follows.
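For illustration only, the following sketch uses one commonly cited pool-industry form of the Langelier-type SATURATION INDEX. The factor values and the 12.1 constant vary slightly between references, and operator-training factor tables govern in practice; the optional cyanuric acid correction shown is the 30% deduction discussed later under Source (Fill) Water:

```python
import math

# A minimal sketch of a commonly cited pool-industry approximation of the
# Langelier Saturation Index (LSI). Factor values vary between references;
# this is illustrative only, not a compliance tool.
#   LSI ~ pH + TF + CF + AF - 12.1      (for TDS below ~1,000 ppm)
#   CF  = log10(calcium hardness as CaCO3) - 0.4
#   AF  = log10(carbonate alkalinity as CaCO3)
# For stabilized chlorine, some guidance deducts ~30% of the cyanuric acid
# reading from total alkalinity before computing AF.

TEMP_FACTORS = {60: 0.4, 66: 0.5, 76: 0.6, 84: 0.7, 94: 0.8}  # deg F -> TF

def lsi(ph, temp_f, calcium_ppm, total_alk_ppm, cya_ppm=0.0):
    tf = TEMP_FACTORS[min(TEMP_FACTORS, key=lambda t: abs(t - temp_f))]
    carbonate_alk = total_alk_ppm - 0.3 * cya_ppm  # optional CYA correction
    cf = math.log10(calcium_ppm) - 0.4
    af = math.log10(carbonate_alk)
    return ph + tf + cf + af - 12.1

# Example: pH 7.5, 84 F, calcium hardness 300 ppm, TA 100 ppm, CYA 30 ppm
print(f"LSI ~ {lsi(7.5, 84, 300, 100, 30):+.2f}")  # near zero => balanced
```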
# Total Alkalinity Level

Total alkalinity is closely associated with pH, but rather than a measure of hydrogen ion concentration, it is a measure of the ability of a solution to neutralize (buffer) hydrogen ions. Expressed in parts per million (PPM), total alkalinity is the result of alkaline materials including carbonates, bicarbonates, and hydroxides (mostly bicarbonates). As noted in the MAHC, the ideal range is 60 PPM to 180 PPM. This acid-neutralizing (buffering) capacity of water is desirable because it helps prevent wide variations in pH ("pH bounce") whenever small amounts of acid or alkali are added to the POOL. Total alkalinity is thus a measure of water's resistance to change in pH, although through the outgassing of carbon dioxide it can itself be a source of rising pH.
The gas-phase NCl3 concentration is dynamic and impacted by BATHER COUNT, swimmer activity, and the liquid-phase NCl3 concentration. The reliability of sampling and analytical methods affects the accuracy of the characterization of the AQUATIC VENUE water and air.
# Health Incidents
Studies of swimming POOL users and non-swimming attendants have shown a number of changes and symptoms that appear to be associated with exposure to the atmosphere in indoor AQUATIC VENUES. CDC has investigated various health incidents involving skin and eye irritation and acute respiratory outbreaks that could be associated with exposures to chloramines and other by-products at recreational water facilities, including swimming POOLS 382,383.
# Lifeguard Exposure
For lifeguards at swimming POOLS, an exposure-response relationship has been identified between NCl3, measured as total chloramines, and irritant eye, nasal, and throat symptoms, although not chronic respiratory symptoms or bronchial hyperresponsiveness 384.
# Respiratory Conditions
In addition to potential occupational exposures, there have been a number of studies investigating respiratory conditions, including asthma, related to swimming pools.
There appears to be no consistent association between swimming POOL attendance during childhood and the prevalence of asthma or atopic disease 385,386,387. Studies indicate that asthma is more commonly found among elite swimmers than among other high-level athletes, although it is premature to draw conclusions about a causal link between swimming and asthma because most studies available to date used cross-sectional designs, because the association is not confirmed among non-competitive swimmers, and because asthmatics may be more likely to select swimming as their activity of choice because of their condition 388.
# Contact Dermatitis
Chloramines have also been implicated in contact dermatitis (rashes). The number of rashes that occur among BATHERS in treated recreational water is not known. One cross-sectional study of Australian school POOLS retrospectively examined the incidence rate of rashes in three POOLS. The three POOLS' treatment types were 1) CHLORINE alone (hand dosing), 2) CHLORINE plus ozone (automatic dosing and control), and 3) bromine (sodium bromide plus ozone using automatic dosing and control). This study reported that 14.4% of the BATHERS in the hand-dosed CHLORINE POOLS experienced rashes 389. This and anecdotal reports strongly suggest that rashes are the most common RWI.
The greatest number of rashes appears to be among hydro-therapists (aquatic physical therapists). A survey of 190 professional hydro-therapists in Israel reported that 45% developed skin disease after beginning work. Symptoms reported included itchiness, redness, and dry skin. The areas affected were the extremities, the face and trunk, and folds in the skin. The authors concluded: 1) exposure to water influences development of irritant contact dermatitis; 2) cumulative exposure to low-potency irritants may be a cause of contact dermatitis; and 3) contact dermatitis is an occupational disease of hydro-therapists 390. In these and similar reports, the exact chemical species inducing the contact dermatitis has not been identified, but the collective opinion of the investigators is that halogenated organic compounds (DISINFECTION BY-PRODUCTS) are the cause. One conservative estimate places the number of halogenated DISINFECTION BY-PRODUCTS, including organic chloramines, in swimming POOLS at greater than 200. The clinical significance of these is likely to vary with the concentration of the specific chloramine and BATHER-specific factors (length of exposure, underlying health conditions, and cumulative previous exposure).
# Maximum Concentration
After considerable discussion, the MAHC decided to recommend an action concentration of 0.4 PPM (mg/L) for combined CHLORINE in all recreational waters. This recommendation is based on the desire to minimize the potential for both respiratory and dermal disease that is known to be associated with exposure to chloramines. The MAHC recognizes that this concentration is arbitrary and has not been substantiated by adequate human clinical studies. In the absence of an adequate human study, the MAHC has opted for a conservative value rather than the more lenient value of 0.5 PPM (mg/L) preferred by some operators. The key is that regulators start enforcing regular testing for combined CHLORINE so that POOL operators work toward keeping levels low by responding to this action threshold.
Levels of chloramines and other volatile compounds in water can be minimized by reducing introduction of CONTAMINANTS that lead to their formation (e.g., urea, creatinine, amino acids and personal care products), as well as by use of a shock oxidizer (e.g., potassium monopersulfate) or supplemental water treatment. Effective filtration, water replacement, and improved BATHER hygiene (e.g., showering, not urinating in the POOL) can reduce CONTAMINANTS and chloramine formation.
The U.S. EPA has determined that manufacturers of "shock oxidizers" may advertise that their "shock oxidizer" products "remove," "reduce," or "eliminate" organic CONTAMINANTS.
# Secondary Disinfection
SECONDARY DISINFECTION SYSTEMS such as ozone and ultraviolet light may effectively destroy inorganic chloramines. As this also has a public benefit and can assist in meeting the MAHC requirements for combined CHLORINE, it is strongly recommended that any installation utilizing UV or ozone as a SECONDARY DISINFECTION SYSTEM consider the positive impact the equipment may have on reducing combined CHLORINE levels in addition to achieving DISINFECTION goals.
To improve chloramine control strategies, further research is needed.
# Calcium Hardness

Calcium hardness is the amount of dissolved calcium (plus some other minerals like magnesium) in the water. High calcium is not healthy for swimming since it can cause burning of the mucous membranes, as well as skin irritation in sensitive people. Calcium hardness of 200-400 PPM (mg/L) is preferred for proper calcium carbonate saturation and for avoiding the soft-water scale found in SPAS and hot tubs when other water parameters are near their nominal levels. For venues with water temperatures greater than 90°F (32°C), the range should be 100 to 200 PPM.
Too much calcium causes cloudiness and scale formation. It also reduces the effectiveness of disinfectants. Too little calcium, especially when combined with low pH or low total alkalinity, can lead to "aggressive water," which can dissolve calcium carbonate from plaster as well as metallic parts of the POOL (walls, floor, handrails, ladders, light fixtures, and equipment), and can also cause discolored water or stains on the POOL walls and floor.
The maximum permissible concentration of 1000 PPM (mg/L) may not be appropriate for regions with particularly hard source water. In such regions local CODES should reflect the specialized practices needed for source waters containing more than 1000 PPM (mg/L) total hardness.
Minor deviations from the calcium hardness levels stated in the CODE do not in themselves present imminent health threats to BATHERS. As such, minor deviations in hardness levels do not require the immediate closure of the facility. Rather, deviations from permissible hardness levels indicate poor management of the water balance and should prompt a thorough inspection of the entire facility.
# Algaecides

In practice, most algaecides are reasonably effective when applied according to their U.S. EPA accepted label directions and the application is coupled with frequent and thorough brushing.
CHLORINE and bromine can be registered and used as algaecides, but must be used in accordance with EPA label directions.
Bromine and bromamine have been demonstrated to be algaecidal 391 .
# Common Types
The two basic types of non-halogen algaecides are copper-based algaecides and quaternary ammonium compounds (QACs), often referred to as "quats." Some algaecides contain a mixture of a quat and a copper compound.
Copper-based algaecides can be used to treat against all types of algae but are especially effective against mustard and green types. They will not cause foam to appear in a swimming POOL, as is common with simple quaternary ammonium types of algaecides. There is, however, a risk of stains on the surface of the swimming POOL if the product is not used properly. Proper pH control is very important to minimize staining potential when using copper-containing algaecides.
The other most common types are quaternary ammonium algaecides. These algaecides will not stain a swimming POOL. There are two types of quats: simple and polymeric (more commonly called "polyquats"). Simple quats are mixtures of various alkyl dimethyl benzyl ammonium compounds (ADBACs) or didecyl dimethyl ammonium compounds (DDACs), of which there are numerous variations. The technical name for the active ingredient in polyquats begins with "Poly". When overdosed, simple quats tend to cause foam, especially in POOLS with water features (e.g., fountains, waterfalls). Polyquats do not cause foaming, even when used repeatedly at the maximum label dose in POOLS with water features.
# EPA-Registered
In selecting a quat, it is vital that the product has been registered by the U.S. EPA for use in swimming POOLS. The vast majority, but not all, of the products on the market have current U.S. EPA REGISTRATIONS. All products registered by the U.S. EPA will have a registration number on the label (usually it will state "EPA Reg. No." followed by a series of numbers). The U.S. EPA registration process for algaecides is substantially different from the registration process used for disinfectants. As part of the development of the product, the U.S. EPA requires companies to conduct efficacy studies on the product. However, the U.S. EPA does not consider algae in POOLS or SPAS to be pathogenic and thus not a direct threat to public health. Since algae are not a public health issue, the U.S. EPA does not require companies to submit their efficacy package for an agency data review. Thus, in the registration process the U.S. EPA looks carefully at the toxicology of the product but not the efficacy. The state of California does require detailed efficacy studies prior to registration. Products sold in the state of California must have the state registration number on the label. Products registered in California but sold outside the state usually, but are not required to, carry the California registration number on the label. Any local agency concerned about the efficacy of a particular algaecide could request that the producer supply the California registration number and then verify the status of the product with the California Department of Pesticide Regulation.
# Source (Fill) Water
Most public recreational water venues use the PUBLIC WATER SUPPLY as the fill water source. In instances where this is not possible, it is important that the fill water not be a potential source of illness to BATHERS. Since requirements governing water quality vary by jurisdiction, it is not possible to specify every test that might be required. Therefore, facilities need to ensure that the fill water complies with the jurisdictional requirements. Examples of potential tests that a jurisdiction may require include, but are not limited to, the following: bacteria, nitrates, nitrites, iron, manganese, sulfur, and turbidity. It is recommended that this testing be conducted on an annual basis. It is also important to note that the salt required by saltwater chlorination systems will substantially increase the TDS level. Therefore, in saltwater AQUATIC VENUES, it is best to treat the TDS level measured after the required amount of salt has been added to a freshly filled AQUATIC VENUE as the baseline level. A small positive SATURATION INDEX value is preferred over a negative value because a slight scale layer provides some protection and is less harmful than corrosion, which causes permanent damage to mechanical and structural components. While it is always possible to lower the pH, it is not as simple with the total alkalinity or calcium hardness. Lowering the total alkalinity will usually lower the pH as well. Lowering the calcium hardness is not always possible, given the variation in hardness of the fill water. In situations where the calcium level is high, attention should be paid to lowering the pH and/or total alkalinity in order to improve the LSI.
It is not always possible to get the pH and total alkalinity within the proper range, due to the nature of the dissolved minerals. pH is the more important parameter, and should be maintained within the proper range.
If the AQUATIC VENUE is outdoors and uses stabilized CHLORINE, it is recommended, in order to get a more accurate reading of the LSI, that 30% of the cyanuric acid reading be deducted from the total alkalinity test result (as illustrated in the sketch above). There are, however, maximum temperatures that can and do have an effect on the health of the PATRONS using the facility. Water temperature between 83°-86°F (28°-30°C) is the most comfortable for typical recreational water usage. Water temperature may need to be adjusted based upon specific uses of the facility.
The WHO recommends that water temperatures in hot tubs be kept below 104°F (40°C). High temperatures (above 104°F or 40°C) in SPAS or hot tubs can cause drowsiness, which may lead to unconsciousness and, consequently, drowning 393 . The Consumer Product Safety Commission has received reports of several deaths from extremely HOT WATER (approximately 109°F or 43°C) in hot tubs. In addition, high temperatures can lead to heat stroke and death 394 . Further examination of data on the health impact of high temperature water on pregnancy, particularly in the first trimester, is needed.
Minimum temperature requirements are not included in this CODE. Water that is too cold simply will not be used for any extended period of time and will not be used by individuals seeking a recreational water experience.
Even though minimum temperatures are not included in the CODE, it is important to remember that cold-water basins, such as plunge pools, can present health concerns due to water temperature extremes. These small, deep POOLS generally contain water at a temperature of 46-50°F (7°-10°C) and are used in conjunction with saunas or steam baths. Adverse health outcomes that may result from the intense and sudden changes in temperature associated with the use of these POOLS include immediate impaired coordination, loss of control of breathing and, after some time when the core body temperature has fallen, slowed heartbeat, hypothermia, muscle cramps, and loss of consciousness. In general, exposure to temperature extremes should be avoided by pregnant women, users with medical problems, and young children. 395
Studies have shown no significant adverse effects on cardiovascular response systems in healthy males at 104°F (40°C) 397,398. However, there did appear to be increased risk of cardiac hypotension and fainting when users stood up, which could result in slips or falls, the most common cause of SPA-related injury in the United States 399. Several studies of sauna-related deaths in Scandinavia found a high percentage of alcohol use and that users were alone 400,401. Signage warning against alcohol use and using SPAS alone, and advising caution and the use of handrails when exiting, is warranted.
# Pregnant Women
Maternal hyperthermia has been shown to be associated with birth defects 402. Some studies have shown an increased risk of birth defects and miscarriages associated with hot tub or SPA use during early pregnancy 403,404,405,406. Pregnant women, particularly during the first trimester, should consult their physician before using hot tubs or SPAS. If women in later pregnancy choose to use hot tubs or SPAS, they should keep exposure to a minimum and ensure the temperature is at or below recommended STANDARDS. Signage should alert pregnant women to the potential risks of hot tub or SPA use and the need to consult with their physician before use. Further expert review of the data is warranted to see if the data support reducing the water temperature and, if so, what temperature should be adopted to proactively protect women of childbearing age who may not know they are pregnant.
# Young Children
Few studies exist on the impact of high temperature on young children, although older children do appear to be able to control their temperature as well as adults 407; the high temperatures in saunas do, however, put great demands on the circulatory system 408. Infants cannot control their body temperature as effectively as their older siblings and parents because babies have a small body mass compared to body surface area. Being in water even a few degrees different from normal body temperature (98.6°F/37°C) can affect a baby's body temperature. Being in the very warm or HOT WATER found in hot tubs/SPAS can cause hyperthermia, a dangerously high body temperature. Signage for SPAS and hot tubs should caution users about bringing infants and young children into SPAS or hot tubs, particularly for prolonged use.
# Water Quality Chemical Testing Frequency
# Chemical Levels
When using colorimetric testing methods, combined CHLORINE testing consists of measuring free CHLORINE, measuring total CHLORINE, and subtracting the free CHLORINE from the total CHLORINE. When using titrimetric methods, it is easiest to perform a direct measurement: the analyst simply counts each drop of titrant and multiplies by the correct factor to obtain the combined CHLORINE level.
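For illustration, the colorimetric subtraction is trivial to express in code (the readings below are hypothetical sample values, not from the MAHC):

```python
# Combined CHLORINE from colorimetric readings: total minus free, in ppm (mg/L).

def combined_chlorine(total_ppm: float, free_ppm: float) -> float:
    """Combined chlorine (chloramines), floored at zero for test-kit noise."""
    return max(total_ppm - free_ppm, 0.0)

cc = combined_chlorine(total_ppm=2.6, free_ppm=2.1)
print(f"Combined chlorine: {cc:.1f} ppm")
# 0.5 ppm here would exceed the 0.4 ppm action level recommended above.
```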
A properly calibrated automatic chemical MONITORING system that maintains records and can be monitored remotely via a secure website could be acceptable for daily testing, if the system allows the health department to view a read-only log of the chemistry at the facility.
# Water Clarity
Water clarity is a useful measure of general water quality. Visual observation of main drains is important for BATHER SAFETY, both to avoid drowning incidents and for injury prevention (BATHER visibility). For POOLS, the use of a Secchi disk is not recommended. If a marker tile or suction outlet is not available, an alternative such as a submersible manikin or shadow doll could be placed at the deepest point of the pool/attraction to check clarity.
For more information about the limitations of Secchi disks, see NOAA resources.
# Equipment Inspection and Maintenance
The absence of this required equipment can adversely affect the effectiveness of a rescue and the SAFETY of the lifeguard. It could also hinder the response from emergency services. For this reason, it is the responsibility of the owner/operator to make sure this equipment is in place prior to opening the AQUATIC FACILITY to the public.
The equipment should be in working order so it can be used when needed. The word "safe" ensures the equipment has not been modified in a way that leaves it in working condition but posing a risk to the user.
# Safety Equipment Required at All Aquatic Facilities
# Emergency Communication Equipment
# Functioning Communication Equipment
As stated in the design section, emergency communication devices should be part of the design but also required to be present in the operation. The intent is that an emergency phone or communications system or device is immediately available to PATRONS from all AQUATIC VENUES within an AQUATIC FACILITY.
Some alternate communication systems might include a handset or intercom system to a location that is constantly manned whenever the AQUATIC VENUE is open for use (e.g., a front desk at a hotel, the check-in desk at a fitness club, or another continuously manned location); a commercial emergency contact device that connects to a monitoring service or directly to 911 dispatch; or devices that alert multiple staff on site when activated (e.g., pager systems, cellular telephone systems, and radio communication alert systems). For larger facilities, this could include internal communication processes, such as radio use to a central phone, to facilitate emergency communications to outside EMS in place of hard-wired publicly accessible phones.
# First Aid Equipment
# Location for First Aid
This is stated in the design section but also stated in the MAHC operations section to require the operator to designate a first aid location for existing facilities. The supplies should be provided at locations where they can be quickly accessed by staff responding to emergencies.
# First Aid Supplies
The first aid supply list is based on the ANSI/ISEA Z308.1-2009 standard for a Workplace First Aid Basic Kit. The listed contents are based on the items needed, but the quantities are not specified to allow for flexibility based on the size of the AQUATIC FACILITY, the anticipated BATHER COUNT, anticipated number and types of injuries, and the number of first aid locations. Topical supplies such as antibiotic cream, burn gels, and antiseptics were removed because this poses a scope of practice issue for the level of training typical to lifeguarding.
The operator should provide enough supplies that the kit does not need continuous restocking. There should be enough supplies to last between first aid kit supply inspections, plus the time needed to obtain and replace the supplies. The contents should be inspected and resupplied often enough to maintain the supplies in good condition.
The supplies must be stored in such a manner as to protect them from moisture and extremes of heat and cold that will cause deterioration. Supplies must be periodically checked for expiration dates and replaced as needed.
A biohazard cleanup kit was included because lifeguards often deal with body fluids on surfaces, such as vomit, feces, and blood. According to OSHA 410, "Generally, lifeguards are considered to be emergency responders and, therefore, would be considered to have occupational exposure. Emergency response is generally the main responsibility of lifeguards, therefore, such duties could not be considered collateral. Although it is the employer's responsibility to determine which, if any, of the employees have occupational exposure, employers of lifeguards should examine all facets of the lifeguard's emergency response duties, not just 'retrieval from deep water.'" As a result, lifeguards are covered under the OSHA 29 CFR 1910.1030 Bloodborne Pathogens standard, which speaks to having contact with individuals who may be injured and bleeding, and employers are required to offer all the protections of the Bloodborne Pathogens STANDARD. Management should also consider how bloodborne pathogen training is integrated with training for environmental and/or water-based clean-up of feces and other body fluids (see MAHC Section 6.5).
The MAHC chose to compile this list after reviewing the contents of several kits that were commonly available. It is suggested that a kit be assembled, put in a container and sealed to assure the contents are still intact when needed. After use, a new kit is provided or the container is restocked and resealed.
In addition to the AQUATIC FACILITY kit, lifeguards should carry basic PPE for immediate use during initial exposure to feces, vomit, and small amounts of blood until the full kit arrives at the treatment scene.
Sunscreens labeled under Food and Drug Administration rules 412 protect against both UVA and UVB rays as long as re-application is conducted periodically. Because SPF ratings only measure UVB effectiveness, there is considerable variability in the UVA protection of sunscreens. The CDC recommends a sunscreen with an SPF of at least 15 413.
There are some questions about the health effects of some sunscreen chemicals, but the benefits seem to outweigh the hazards. To minimize exposure to these chemicals, lifeguards should also wear protective clothing and hats, use sun-blocking umbrellas, or use any other means to avoid exposure to UV light. Protection is also needed from reflected exposure. Light-skinned individuals can be particularly sensitive to both direct and indirect exposure to the sun's UV rays 414. Employers should educate lifeguards about the risks and protection options but are exempted from requirements to pay for sunscreen as personal protective equipment under OSHA 1910.132(h)(4)(iii) 415.
# Polarized Sunglasses

Glare and reflected sunlight off the water surface can cause significant visibility problems for lifeguards and potentially impact job performance. Lifeguards working at outdoor AQUATIC VENUES are required to wear polarized eyewear to reduce the risk of glare causing reduced visibility. This polarized eyewear should also be a part of any sun exposure awareness training, since it also potentially reduces the harmful short- and long-term effects of UV on the eyes, which include increased risk for cataracts and macular degeneration 416,417,418,419,420,421,422. However, employers are exempted from requirements to pay for sunglasses as personal protective equipment under OSHA 1910.132(h)(4)(iii) 423.
Polarized eyewear can assist with glare indoors as well but should be tested so it does not impede visibility due to lower light levels.
Polarized 3-D glasses must not be used, as they can be disorienting and can disrupt normal vision.

# Reaching Pole

The pole is intended to reach out to a swimmer in distress and allow them to grab hold of the pole. The pole should be submerged when introducing it to the swimmer to prevent injury. In some cases the "hook" can be used to encircle a non-responsive swimmer to draw them to the side. Use of the device involves reaching out to the swimmer and then pulling the pole straight back to the side, along with the swimmer. The pole should not be swung around to the side, as the strength required exceeds that of most people and the pole is not that durable.
Since the pole is pulled back to the side, a telescoping pole is not appropriate as it can pull apart. Ideally, the pole can reach to the middle of many smaller POOLS, making the entire POOL reachable from the side with the pole.
The pole must be equipped with a "life-hook" or "shepherd's crook." For SAFETY, the hook must be a looped frame-type hook, not a single metal hook. The looped hook protects the swimmer from being injured by the pole and allows a non-responsive swimmer to be pulled in. To prevent injury, use only the hook attachment bolts supplied by the manufacturer; improper bolts can create hooks and snags that can injure the swimmer.
# CPR Posters

CPR performed by bystanders has been shown to improve outcomes in drowning victims 425. CPR started immediately on a drowning victim, instead of waiting until emergency responders arrive, can significantly reduce the potential for brain damage. Posters explaining basic CPR procedures can be reviewed in seconds and give the provider enough knowledge to assist the victim until emergency responders arrive.
Posters can also educate PATRONS about the potential causes, prevention, and spread of RWIs. PATRONS need to be educated about what RWIs are, how they are spread, and how they can be prevented.
Resources for RWI education and many resources for CPR posters can be found online.
# Imminent Hazard Sign

A sign listing specific incidents that would require the AQUATIC FACILITY to close immediately should be posted, especially at AQUATIC FACILITIES where a QUALIFIED OPERATOR or QUALIFIED LIFEGUARD is not present. Examples of such incidents include fecal incidents, broken or missing drain grates, water clarity or water quality issues, and lightning. A contact number should be provided to notify the owner/operator of conditions considered an IMMINENT HEALTH HAZARD.
Oxidizers such as calcium hypochlorite, monopersulfate, or bleach shall not be mixed with any other chemicals.
# Safety Data Sheets

A safety data sheet (SDS) is a form containing data, potential hazard information, and instructions for the safe use of a particular material or product. An important component of product stewardship and workplace SAFETY, it is intended to provide workers and emergency personnel with procedures for handling or working with that substance in a safe manner. It includes information such as physical data (melting point, boiling point, flash point, etc.), toxicity, health effects, impact on the environment, first aid, reactivity, STORAGE, disposal, protective equipment, and spill handling procedures. The exact format of an SDS can vary from source to source. It is important to use an SDS that is supplier-specific, because a product sold under a generic name (e.g., oxidizer) can have a formulation and degree of hazard that vary between manufacturers.
# Filed
SDSs should be filed anywhere chemicals are being used. An SDS for a substance is not primarily intended for use by the general consumer, focusing instead on the hazards of working with the material in an occupational setting.
# OSHA
In the U.S., OSHA requires that SDSs be available to employees for potentially harmful substances handled in the workplace under the Hazard Communication regulation. The SDS is also required to be made available to local fire departments and local and state emergency planning officials under Section 311 of the Emergency Planning and Community Right-to-Know Act.
# Provision of Suits, Towels, and Shared Equipment

# Towels

The drying temperature is more important than the wash temperature when destroying potential pathogens.
- See CDC recommendations for laundering entitled "Environmental Cleaning & Disinfecting for MRSA," available on the CDC website.
# Shared Equipment Cleaned and Sanitized
Research has demonstrated that play features, mat materials, and other shared equipment found at AQUATIC FACILITIES and water parks can harbor bacteria, even while submerged in chlorinated water. Damp materials that were not submerged in water contained the highest populations of bacteria. Damp play features designed for infants and toddlers were found to be likely vehicles for transference of gastrointestinal bacteria 429. Sanitization is defined as reducing the level of microbes to one considered safe by public health STANDARDS. This may be achieved through a variety of chemical or physical means including chemical treatment, cleaning, or drying.
Associations between swimming POOLS and disease outbreaks have been well documented in the literature. Though an outbreak has never been connected specifically to play features or the type of play feature material, the possibility could exist due to the biofilms found on these materials. Outbreaks may be more likely if the AQUATIC FACILITY is not maintained properly.
Biofilms are a complex collection of microbes that attach to a wet surface and form a scum layer that harbors bacteria and other microbes that could cause illness. Once established, biofilms provide a home for a variety of microbes such as Pseudomonas and are hard to remove. Biofilm-associated bacteria are much more resistant to hypochlorous acid compared to free swimming microbes. Design options to reduce biofilm formation as well as sanitizing systems with effective validation, could be useful for reducing biofilm formation.
# Contact
Shared equipment that contacts mucous membranes, saliva, eyes, or ears requires sanitizing to prevent transmission of potential disease-causing pathogens.
# Other Equipment

Shared equipment that is hand-held or used as a flotation device in aquatic therapy or play has also been found to harbor potentially harmful microorganisms, even while submerged in properly chlorinated water. Bacteria found in these environments are most likely from biofilms that have attached to these surfaces. Soaking in disinfectants may not be enough to penetrate the biofilm, so to control biofilm growth it is recommended to physically remove the slimy film by scrubbing equipment on a routine basis. The array of organisms isolated from damp features suggests that features need to be cleaned, SANITIZED, and thoroughly dried on a routine basis using a combination of chemical and physical methods, preferably as recommended by the manufacturer 430.
# Water Supply/ Wastewater
# Policies and Management
The MAHC has worked extensively with ICC and IAPMO to eliminate conflicts between the three codes. These discussions have resulted in changes in the MAHC and plans to change items in the other codes as they are brought up for revision. The MAHC is committed to resolving these conflicts now and in the future as these codes evolve.
# Lifeguard Training
This portion of the MAHC deals directly with providing QUALIFIED LIFEGUARDS in an AQUATIC FACILITY to, first, reduce the risks that could lead to injury and, second, respond appropriately to incidents when they happen. The duties of an AQUATIC FACILITY lifeguard have been compared to a number of other occupations, including comparing the role of the police officer to that of a lifeguard at a swimming POOL 446: "The majority of the time, the task is very sedentary, sitting and watching. A quadriplegic could do it; until someone needs rescuing. Then the quadriplegic could not perform the required functions. It does not often happen to a lifeguard that someone needs rescuing, perhaps 0.1 percent of the time. But the ability to jump into the water and save the drowning victim is critical to the job. This is the reason why there has been someone sitting and watching for the other 99.9 percent of the time." Bonneau and Brown's 447 position is that, because the disabled lifeguard is unable to perform the critical and essential part of the job, he is incapable of doing the job of lifeguard. Even if he can do 99.9% of the job, he should not be employed as a lifeguard. The perception of the public is that all lifeguards can perform all that is critical and essential to their job set. Unfortunately, this has sometimes been proven false.
Many drowning deaths have resulted from omissions of basic SAFETY precautions 448,449,450,451,452,453,454,455. These include absent or inadequate POOL fencing; unattended young children at water sites; faulty POOL design resulting in victims becoming trapped below the surface of the water; poor POOL maintenance resulting in murky or cloudy water that obscured sight of submerged bodies; lifeguards distracted by socializing or by other duties, such as manning admission booths and housekeeping, while on lifeguard duty; and poorly trained lifeguards who did not recognize a person in trouble in the water or had not been properly trained in rescue and resuscitation techniques. In some cases, these are correctable issues that could prevent drowning deaths. We anticipate that if POOL and water SAFETY STANDARDS are strictly enforced, and as lifeguards continue to become better trained and adhere to important basic principles of surveillance, rescue, and resuscitation, the death rate in public AQUATIC FACILITIES should decline. The goal of this section is to give POOL owners and operators BEST PRACTICE guidelines for guarded and unguarded POOLS as tools to make AQUATIC FACILITIES safer for the general public.
# Lifeguard Qualifications
Every day, about ten people die from unintentional drowning 456. Of these, two are children aged 14 or younger. Drowning is the fifth leading cause of unintentional injury death for people of all ages, and the second leading cause of unintentional injury death for children ages 1 to 14 years 457. From 2005-2009, there were on average 3,533 fatal unintentional drownings (non-boating related) in the United States per year, and more than one in five people who die from drowning are children 14 and younger 458. More than 50% of drowning victims treated in emergency departments require hospitalization or transfer for a higher level of care (compared to a hospitalization rate of 6% for all unintentional injuries) 459,460.
Nonfatal drowning can cause brain damage that may result in long-term disabilities including memory problems, learning disabilities, and permanent loss of basic functioning (e.g., permanent vegetative state). 461,462 Appropriately trained lifeguards are one way to reduce this risk at public AQUATIC VENUES.
# Course Content
This section defines a broad scope of lifeguard training which is further described in the section below. These topics are universally found in all currently recognized national lifeguard training programs.
# Hazard Identification and Injury Prevention
Lifeguards have an obligation to know and understand common hazards associated with AQUATIC VENUES and how they may be mitigated or prevented. A vital component of this obligation is to provide PATRON surveillance, commonly referred to as scanning. In order to prevent injuries, a lifeguard must be taught how to recognize various swimmer conditions that need intervention, such as "active," "passive," and "distressed," and to use scanning strategies and techniques to see and identify the emergency. This instruction is incomplete without also teaching lifeguards how to identify factors and circumstances that impede victim recognition, such as overcrowding, cloudiness of the water, glare, or obstacles on the DECK or in the water such as slides, inner tubes, or structures.
# Emergency Response Skill Set

Lifeguards should have a clear understanding of the responsibilities and actions involved in an emergency response, encompassing not only the physical skills but also the cognitive and decision-making skills.

The ability to identify the lifeguard instructor allows for higher quality control by the training agency. It also aids in the prevention of fraudulent certifications.
Clearly stating the restrictions on water depth for which the lifeguard is qualified allows the employer and the AHJ to quickly ascertain the basic abilities of the lifeguard that were assessed during training.
# Expired Certificate

A 45-day grace period after certificate expiration was added to accommodate the numerous lifeguards attending college. Consider a high school senior who takes the course in April; the next spring, they are away at college and typically will not return until early May. A grace period of up to 45 days after certificate expiration allows renewal as opposed to completing a new training course; however, the lifeguard is not permitted to lifeguard until renewal training is successfully completed. A simple eligibility check is sketched below.
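As a minimal sketch of the grace-period logic (dates and helper names here are illustrative, not from the MAHC):

```python
from datetime import date, timedelta

# Hypothetical check of the 45-day grace-period rule described above: within
# 45 days of expiration, a lifeguard may take renewal training instead of a
# full course, but may not work as a lifeguard until renewal is completed.

GRACE = timedelta(days=45)

def renewal_eligible(expiration: date, today: date) -> bool:
    """True if the certificate is expired but still within the grace period."""
    return expiration < today <= expiration + GRACE

expired = date(2015, 4, 30)
print(renewal_eligible(expired, date(2015, 5, 20)))  # True: within 45 days
print(renewal_eligible(expired, date(2015, 7, 1)))   # False: full course needed
```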
# Challenge Program
A challenge course is one in which a lifeguard demonstrates the essential skills and knowledge required by the training agency. This demonstration is performed without prior review or instruction by the instructor at the time of the challenge. Prompting or coaching is not performed unless necessary to adequately assess skill level (e.g., "the victim is not breathing").
# Certificate Renewal
A renewal course can also be described as a recertification course. Review/recertification courses are abbreviated courses designed to assess that a currently certified lifeguard has the necessary skills and knowledge to perform the essential competencies required by the training agency.
Although some skills and information are universal to all lifeguard training agencies, there are differences in physical skills. A lifeguard attempting to recertify through a different agency is not likely to have ample time to master these different physical skills. This should not be confused with "crossover" type courses which are specifically designed to teach a currently certified lifeguard the different skills and information from another training agency.
# Certificate Suspension and Revocation
The AHJ is expected to contact course providers with questions about the validity of any certificate or with questions about a lifeguard's performance. In turn, course providers are expected to readily provide verification of certificates and suspensions and revocations of certificates and to notify the AHJ of actions taken in response to its reported concerns.
The Food Protection Managers Certification Program Standards, Section 7.5, reflect the following: "A certification organization shall have formal certification policies and operating procedures including the sanction or revocation of the certificate. These procedures shall incorporate due process."
# Aquatic Supervisor Training

# Lifeguard Supervisor Candidate Prerequisites

The MAHC agreed that 18 years and above was an adequate age to consider a person mature enough for this position, but there are many examples of good supervising at a younger age. Age was a starting point, but many other factors, such as experience, training, and management skills, were equally or more important. For this reason, the minimum age for a LIFEGUARD SUPERVISOR is not specified and is limited to meeting the minimum age requirement of a lifeguard and having experience that equates to one season of lifeguarding (3 months).
The requirement of the ability to communicate in English is related to the ability to effectively activate the EMERGENCY ACTION PLAN and deliver instructions as well as interface with emergency services. This is similar to the requirement on airlines for emergency exit row seating.
# Lifeguard Supervisor Training Elements
As of the writing of the MAHC, lifeguard supervision and management training courses are limited. In the development of the MAHC, the MAHC recognizes the importance of ongoing AQUATIC VENUE supervision with adequate training in injury prevention and response. What constitutes supervisor and management training was heavily discussed. The concept of "supervisor training" lends itself to far more than simply MONITORING lifeguards and performing essential functions of the lifeguard as needed. Required skills for the supervisor include staff management skills, emergency response, decision making, knowledge of aquatic industry STANDARDS, etc. This list is obviously not comprehensive. This leads to a main concern in the development of a LIFEGUARD SUPERVISOR course which is course content and length. Training agencies are encouraged to develop a system of training LIFEGUARD SUPERVISORS that incorporates the critical components of supervising lifeguards and responding to incidents in an AQUATIC FACILITY as these items directly affect BATHER SAFETY. This may include a variety of levels that address this information in various ways and as appropriate for the intended audience of each level course. The skills and knowledge found in this section are considered by the MAHC to be essential to any LIFEGUARD SUPERVISOR training course, regardless of intended depth of scope. The course outline and requirements mirror that of the lifeguard training course requirements.
LIFEGUARD SUPERVISORS need knowledge beyond that of the lifeguard training program. The LIFEGUARD SUPERVISOR is responsible for holding lifeguards accountable for their own performance and as such should MONITOR scanning and vigilance within the zone of PATRON responsibility. As situations occur, the LIFEGUARD SUPERVISOR will also need to react to reduce risk while understanding the legal responsibilities of the job.
Due to the nature of the content in the LIFEGUARD SUPERVISOR training, it is possible for this content to be delivered in person or online utilizing various methods such as video and interactive media to establish competency.
# Lifeguard Supervisor Training Delivery
# Standardized and Comprehensive
The term standardized is meant to convey that the materials are standard, in writing, and are consistent from one course to another when delivered. This would require that providers, whether an agency or an AQUATIC FACILITY, have a standard method to deliver the course.
# Sufficient Time

A course length is not specified, as each training agency may have its own program that incorporates all the requirements but may also add other topics. The method used to instruct effectively is up to the training agency; some may take more time than others. The MAHC is not prescriptive on timing but rather requires a course timeline that allows for covering the course content.
# Lifeguard Supervisor Course Instructor Certification

This is the same rationale as for lifeguard training. This allows an AQUATIC FACILITY to have its own internal LIFEGUARD SUPERVISOR training course or to use a training course through a training agency.
# Minimum Prerequisites
This allows for experienced supervisors that may not have the physical skills to do the current lifeguard course as defined by the MAHC but still require the knowledge of lifeguarding.
The LIFEGUARD SUPERVISOR instructor training course uses the same rationale as the lifeguard instructor training course.
# Quality Control
This is the same rationale as for lifeguard training.
# Competency and Certification
# Lifeguard Supervisor Proficiency

LIFEGUARD SUPERVISOR testing could take many forms, from situational-based observations and shadowing with an experienced supervisor to testing of technical knowledge. Some LIFEGUARD SUPERVISOR skill proficiencies can be subjective, so the methodology for testing is not prescribed in the code.

There are many conditions that result in higher risk for BATHERS in an AQUATIC FACILITY and/or higher risk for any persons attempting to assist a BATHER in distress. These conditions each have their own distinct features for which the MAHC felt a QUALIFIED LIFEGUARD presence would reduce those risks. These requirements only apply to AQUATIC VENUES with standing water.
1) Deeper than 5 Feet: The 50th percentile adult female is 63.8 inches (162 cm) tall. The rationale is that in water less than 5 feet (1.5 m) deep, the average adult BATHER'S head would be above the static water line and they could use the AQUATIC VENUE without difficulty. If a BATHER were in distress, another adult BATHER would be able to assist, with or without equipment. Under these conditions, assuming adults are present, the likelihood of assistance being provided by untrained persons is high compared to water depths above 5 feet (1.5 m).
The MAHC thinks it necessary to begin working to prevent some of the deaths caused by greater water depth combined with the lack of lifeguard supervision.
The hardship this could cause unguarded AQUATIC FACILITIES is recognized. As a result, the MAHC requirements still allow for existing AQUATIC FACILITIES to be unguarded if they follow the requirements outlined in the MAHC, such as posting required signage. However, new construction of unguarded AQUATIC VENUES will require them to be less than 5 feet (1.5 m) deep.
2) Age 14 or Younger: Many STANDARDS recognize that a person under the age of 14 is considered a child and that their ability to make decisions, especially when complying with rules, requires adult supervision.467 Because the AQUATIC VENUE presents a risk of drowning at any depth, and despite rules being posted, adult supervision is required to ensure compliance with those rules.
The 50th percentile female at age 14 is 63.4 inches (161.0 cm) tall, while the 50th percentile female at age 13 is less than 62.1 inches (157.7 cm) tall. This is a critical time frame in which that 1+ inch (3.3 cm) difference in height matters.

The phrase "allows for unsupervised children" implies that an AQUATIC FACILITY that does not allow unsupervised children would not need a QUALIFIED LIFEGUARD. The intent for supervision of children is that parents/guardians or other similar adults responsible for the children are present at poolside with the children and the children are in sight. The critical component is how this is enforced. In some cases, the facility may have a sign posted that persons under the age of 14 are not allowed, such as at a hotel POOL. In these cases, mechanisms should be in place for MONITORING and enforcing the rule, with the understanding that by posting a sign, it becomes the responsibility of the adult supervising persons under 14 to also comply with the rule.
3) Dedicated Surveillance: The responsibilities of a QUALIFIED LIFEGUARD are different from those of the chaperone of a youth group. The MONITORING of children in these environments often involves more than six children per chaperone. These responsibilities must be separated by having a QUALIFIED LIFEGUARD present who is not distracted by the activities of the group and is focused on their zone of PATRON surveillance.
The chaperone, even if trained as a lifeguard, cannot manage both PATRON surveillance and the activities of individual children. If the chaperone is not trained as a lifeguard, it puts them at risk if a rescue is required.
4) Group Practice or Instruction: Competitive swimming, sports, lifeguard training, exercise programs, and group swimming lessons all include multiple persons being instructed by one or more persons for a distinctly different objective. The primary focus is on the activity and not on PATRON surveillance. Similar to the rationale for youth groups, there is a need to separate the responsibility of the coach/instructor from that of providing dedicated PATRON surveillance.
Group swim lessons are an obvious reason to have a QUALIFIED LIFEGUARD as participants are not proficient at swimming, thus at higher risk for drowning.
Lifeguard training, sports, exercise programs, and competitive swimming involve exertion and could result in a BATHER in distress. If the instructor is focused on an individual, the risk of a different person drowning unnoticed is higher than if a QUALIFIED LIFEGUARD were assigned solely to PATRON surveillance.

Waterslide LANDING POOLS have an induced current from the lift pump providing water as lubrication on the slide. This is not to be confused with POOL SLIDES that are on a POOL DECK and do not have water flowing down them. Some smaller slides have a small amount of water on them to lubricate the surface but generally do not have a dedicated POOL to "catch" or "land" riders and do not generate a significant current.
INTERACTIVE WATER PLAY AQUATIC VENUES that do not include standing water are not included in this line item; although they have induced water movement, they do not have standing water. There is no QUALIFIED LIFEGUARD requirement for an AQUATIC VENUE with no standing water.
# 7) Starting Platforms and Diving Boards:
The risk of spinal injuries increases with activities involving head-first entries from starting platforms and diving boards. As such, QUALIFIED LIFEGUARDS are needed to monitor behaviors and control the use of starting platforms and diving boards.
# Safety Plan
The MAHC agreed that there needs to be a SAFETY PLAN that is specific to the AQUATIC FACILITY. Training agencies and the ANSI/APSP-1 and -9 STANDARDS for public swimming POOLS and aquatic recreation facilities all speak to having plans written, rehearsed, and reviewed. The MAHC agreed that there are other types of plans that detail processes that directly affect PATRON SAFETY. In the code, the SAFETY PLAN is outlined to contain several PATRON-SAFETY components. The SAFETY PLAN is written depending on whether or not QUALIFIED LIFEGUARDS are present.
Note that the SAFETY PLAN components are different for guarded and unguarded aquatic facilities.
The AQUATIC FACILITY staffing plan is meant to identify positions in the AQUATIC FACILITY that address specific risks, as well as support staff who would be present to assist in cases of emergency or to provide support by MONITORING the performance of QUALIFIED LIFEGUARDS (for AQUATIC FACILITIES requiring them). In unguarded AQUATIC FACILITIES, the STAFFING PLAN would include other staff. Training agencies and ANSI standards for public swimming POOLS and AQUATIC FACILITIES all speak to having emergency action plans written, rehearsed, and reviewed.
Pre-employment testing as well as scheduled training is needed to verify that staff members are qualified for the environment. The MAHC agreed that ongoing in-service training programs for lifeguards, attendants, QUALIFIED OPERATORS, and other aquatic personnel should be required. To address this, the definition for QUALIFIED LIFEGUARD requires ongoing in-service training. Such programs should include drills aimed at raising the awareness of AQUATIC FACILITY surveillance, victim recognition, emergency response, CPR/water drills, and simulations incorporating daily challenges. In addition, in-service training needs to be documented.
# Code Compliance Staff Plan

In consideration of the requirements of the code as it relates to staff, the MAHC recognizes the need for identifying an individual or individuals to be responsible for compliance with the code and the general operation of the AQUATIC FACILITY. For this reason, certain functions are identified and the AQUATIC FACILITY should designate persons to be responsible for each function even if multiple functions are accomplished by a single person. The AQUATIC FACILITY staffing plan is meant to identify risks and create accountability for the prevention and/or mitigation of such risks by identifying person(s) responsible for each.
# Risk Management Responsibility
It is important to not only address identified risks but to designate persons who shall be responsible for conducting periodic safety inspections to be proactive about finding and mitigating risk as well as making decisions on closure for imminent hazards. Determining who is responsible for deciding on closure of the AQUATIC FACILITY is important as it empowers the designated person but also creates a clear point-person for staff to go to for making this decision. The AHJ may be conducting periodic reviews and may have recommendations or need additional information. It would be beneficial to identify the individual or position responsible for interfacing with the AHJ to most effectively address changes or to provide background information. This makes it clear to stakeholders where to direct information or requests.
# Maintenance and Repair of Risks
Once risks are identified, it is critical to determine who is responsible for mitigating those risks. In some cases, it may be a facility maintenance person responsible for conducting repairs, but ultimately it is the responsibility of management to make sure these risks are addressed. Failure to maintain water and air quality can result in illness, and it is the responsibility of the AQUATIC FACILITY to maintain proper air and water quality. In some cases, a maintenance team manages these systems; in others, it may be a third-party contractor or the QUALIFIED LIFEGUARD staff. Nonetheless, it is important to determine who is responsible for these systems to minimize the risk to BATHERS.
# Enforcing Rules and Responding to Emergencies
It is important to identify who is responsible for rule enforcement. One may assume the QUALIFIED LIFEGUARD is the person responsible for rule enforcement, but identifying the function here makes it clear that their primary role is preventing injury. QUALIFIED LIFEGUARDS will generally be the first responders to an incident, but other support staff may participate in the EMERGENCY ACTION PLAN, whether QUALIFIED LIFEGUARDS are present or not. QUALIFIED LIFEGUARDS, LIFEGUARD SUPERVISORS, medical specialists, and management are critical pieces of an emergency action plan and should be identified as part of the staffing plan in any SAFETY PLAN.
# Supervising Staff
It is important to have a person designated as the person responsible for the critical safety functions of an AQUATIC FACILITY. Although each QUALIFIED LIFEGUARD is accountable for their zone, the LIFEGUARD SUPERVISOR makes sure each individual is doing what is expected and is present for responding to emergencies and taking the lead in making decisions about imminent hazards. Accountability for rotations and breaks lies with the LIFEGUARD SUPERVISOR and should be clearly identified in the SAFETY PLAN to show the ability to comply with the CODE.
# Training
Qualified lifeguards who cannot demonstrate proficiency in their lifeguarding skills may be a danger to bathers and to themselves. Serious deficiencies that are not immediately corrected may cause the serious injury or death of a bather, the QUALIFIED LIFEGUARD, or other staff member. For this reason, it is important to identify who is responsible for conducting pre-service evaluations and in-service training. In both cases, it may be someone specifically trained in evaluating skills or trained in training others.
# Zone of Patron Surveillance
The zones of PATRON surveillance are identified in the SAFETY PLAN so that all stakeholders are aware of the zones, how many QUALIFIED LIFEGUARDS are required to effectively cover all parts of the AQUATIC VENUE(S), and show that each zone can be effectively monitored by a QUALIFIED LIFEGUARD in accordance with the code.
The MAHC agrees that having identified zones of PATRON surveillance is one of the most needed components for all AQUATIC VENUES. QUALIFIED LIFEGUARDS should be able to determine their area of responsibility and be able to focus on that area. With proper coverage, all areas of the AQUATIC VENUE needing to be covered would be assigned. The MAHC considered one of the challenges in AQUATIC VENUE management to be ensuring that QUALIFIED LIFEGUARDS understand the exact scope of their zone of PATRON surveillance. Training agencies and the ANSI STANDARDS for AQUATIC FACILITIES speak to "lifeguards understanding their responsibilities to their assigned stations." This includes understanding what type of position (e.g., elevated, roaming) the QUALIFIED LIFEGUARD should be in for the most effective PATRON surveillance.
Both the ANSI/APSP-1 Public Swimming Pools and ANSI/APSP-9 standards state that the lifeguard "shall be positioned and provided equipment in order to reach the victim within 20 seconds of identification of a trauma or incident (e.g., response time)." Note that this time (20 seconds) addresses the time the rescuer must reach the furthest extent of the zone, which would include addressing size and shape of each zone, among other factors. It does not include the "recognition phase" in this time.
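The 20-second reach requirement lends itself to a quick feasibility check. The following Python sketch estimates whether the farthest corner of a rectangular zone is reachable within the window; the entry time and rescue swim speed are illustrative assumptions, not MAHC or ANSI values, and the function name is hypothetical.

```python
# Hypothetical sketch: checking whether a rectangular zone's farthest point
# is reachable within the ANSI/APSP 20-second response window. The entry
# time and swim speed below are illustrative assumptions, not MAHC values.
import math

RESPONSE_WINDOW_S = 20.0   # ANSI/APSP-1/-9 reach requirement (seconds)
ENTRY_TIME_S = 3.0         # assumed time to leave the stand and enter the water
SWIM_SPEED_M_S = 1.5       # assumed rescue swim speed with equipment

def zone_is_reachable(zone_length_m: float, zone_width_m: float,
                      station_x_m: float, station_y_m: float) -> bool:
    """Return True if every corner of the zone is reachable in 20 seconds."""
    corners = [(0, 0), (zone_length_m, 0), (0, zone_width_m),
               (zone_length_m, zone_width_m)]
    farthest = max(math.hypot(cx - station_x_m, cy - station_y_m)
                   for cx, cy in corners)
    swim_time = farthest / SWIM_SPEED_M_S
    return ENTRY_TIME_S + swim_time <= RESPONSE_WINDOW_S

# Example: a 25 m x 12 m zone guarded from the midpoint of one long side.
print(zone_is_reachable(25.0, 12.0, station_x_m=12.5, station_y_m=0.0))
```

As the note above indicates, recognition time is excluded, so a zone that passes this check is only reachable, not necessarily well supervised.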
For the purposes of maintaining effective surveillance of a zone of PATRON responsibility, the zone is generally set up based on the location of the QUALIFIED LIFEGUARD and their ability to see the entire zone. In some cases, the QUALIFIED LIFEGUARD must roam to see the entire zone, and in some cases the QUALIFIED LIFEGUARD must be elevated to see the whole zone. For this reason, the SAFETY PLAN must stipulate by what method the QUALIFIED LIFEGUARD can see the whole zone.
Additional responsibilities may include MONITORING of adjacent DECKS or MONITORING activities on a structure such as a WATERSLIDE, play element, or other AQUATIC FEATURE. As the aquatics industry has added other AQUATIC FEATURES to traditional AQUATIC VENUES, it is important to identify these additional responsibilities that may not be apparent if the zone were strictly a flat-water POOL.
An AQUATIC FACILITY may have more than one AQUATIC VENUE and for each AQUATIC VENUE, may have multiple zones of PATRON responsibility. These zones may overlap in some areas and it is important to show there are not unassigned areas. The MAHC does not speak to a time standard for identification of an incident versus the response time, as there are too many variables in the circumstances leading to an incident.
# Rotation Procedures
Studies have documented the effect of critical and non-critical signals on maintaining vigilance in tasks; these may be useful in understanding lifeguarding duties. Jerison and Pickett demonstrated that a high number of critical signals could be processed by the lifeguard for up to 60 minutes with tolerable effects on vigilance.469 However, when critical signals were infrequent, detrimental effects on vigilance occurred after only 20 minutes. This study also referenced the Mackworth Clock Test, commissioned in 1950 by the British Royal Navy, which found that optimal vigilance cannot be maintained for more than 30 minutes.470 Researcher N.H. Mackworth developed the visual sensitivity loss model; in classic clock-task experiments, signal detection performance often declined during the first half hour of the watch. Later experiments found that five- to 10-minute breaks reset the vigilance level to its original point.471 The SAFETY PLAN should specify how breaks or changes in duties will be instituted into the rotation plan without reducing the number of QUALIFIED LIFEGUARDS on PATRON surveillance.
For single QUALIFIED LIFEGUARD AQUATIC FACILITIES, the plan needs to address procedures for keeping PATRONS out of the water while the QUALIFIED LIFEGUARD is on break or performing other alternation-of-task activities. Other AQUATIC FACILITY staff may need to be at poolside to ensure that PATRONS stay out of the water, unless all PATRONS leave the AQUATIC VENUE and it is appropriately secured against entry. The "off-duty" QUALIFIED LIFEGUARD cannot be responsible for this activity, as that would not meet the intent, which is to accomplish a reset of the vigilance level.
Having a sound lifeguard rotation plan and procedures is crucial to the ability of QUALIFIED LIFEGUARDS to be effective in PATRON surveillance. During the rotation of QUALIFIED LIFEGUARDS, there can be a lapse in PATRON surveillance if the rotation is not done correctly. Because of this, the rotation system must be practiced and evaluated so as to eliminate or minimize any lapse in PATRON surveillance time.
Heat, humidity, and high BATHER COUNTS are stresses for QUALIFIED LIFEGUARDS, which may warrant more frequent breaks. Note that DECK areas are part of the zone of PATRON surveillance for some lifeguard stations to prevent incidents from occurring (e.g., stopping running on the deck, stopping diving from the deck into shallow water, and otherwise enforcing rules).
# Emergency Action Plan
The MAHC agreed that there needs to be an emergency closure policy that is retained and available for review by the AHJ.
Training agencies educate lifeguards to expect a written EMERGENCY ACTION PLAN created by the AQUATIC FACILITY where they will work that addresses the reasonably foreseeable emergencies that could occur.
There is a need to identify how emergencies are communicated within the AQUATIC FACILITY and external to the AQUATIC FACILITY. The types of emergencies that could occur in AQUATIC FACILITIES include but are not limited to: chemical spills, submersion events/drowning, fire, violent acts, lost children, contamination (fecal incidents and water clarity), and inclement weather.
AQUATIC FACILITY staff will likely be the persons to observe any imminent hazards and should be empowered to close POOLS or other areas of the AQUATIC FACILITY should those hazards be present. In particular, fecal incidents, water clarity, and inclement weather may be encountered more often and the AQUATIC FACILITY staff should know procedures for dealing with those imminent hazards and their authority to close the AQUATIC FACILITY.
# Coordination of Response
The EMERGENCY ACTION PLAN identifies the individuals available and expected to respond. The goal of an EAP for a life-threatening emergency should be to activate EMS and provide for other individuals to assist the QUALIFIED LIFEGUARD with the actions identified in the EAP (such as CPR if needed) as soon as possible. Performing effective compressions is difficult to maintain for more than a few minutes, and the presence of at least one other person to take over compressions allows rescuers to alternate and rest.
In AQUATIC FACILITIES where there are multiple QUALIFIED LIFEGUARDS and/or other staff such as desk or maintenance personnel who are always closely available when the AQUATIC FACILITY is open, it is feasible for many persons trained in CPR/AED and first aid to respond within three minutes. Having a CPR-trained person who can respond within minutes greatly improves survivability.472 At an AQUATIC FACILITY with a single QUALIFIED LIFEGUARD, the SAFETY PLAN should identify the options for obtaining assistance, which is likely to include use of bystanders. If bystanders are part of the EAP, pre-service and in-service training should include how to direct bystanders in an emergency.
# Pre-Service Requirements
# Safety Team EAP Training
The MAHC agreed that there needs to be a SAFETY PLAN specific to each AQUATIC VENUE. Training agencies and ANSI standards for public swimming POOLS and AQUATIC FACILITIES all speak to having emergency action plans written, rehearsed, and reviewed.
It is imperative that EMERGENCY ACTION PLAN training take place before the staff begins their work as an emergency can happen at any time.
Providing or posting a copy for staff ensures that staff have access to the information at any time.
# Safety Team Skills Proficiency
Responding to emergencies may require more specific skills and physical abilities, which once learned, must be maintained as emergencies can occur at any time. This demonstration of skill and/or knowledge verifies the staff person is ready to fulfill their role.
# Qualified Lifeguard Emergency Action Plan Training
The QUALIFIED OPERATOR is required to prepare the SAFETY PLAN as a set of policies for the AQUATIC FACILITY. It is imperative that the employees be aware of their responsibilities and have access to the information at all times the AQUATIC FACILITY is open, so they may refresh their memory or seek further information. Training during pre-service will allow the QUALIFIED LIFEGUARD to become trained in the SAFETY PLAN of the AQUATIC FACILITY.
# Qualified Lifeguard Skills Proficiency

It is imperative that all lifeguards hired are currently able to perform effectively in the workplace. AQUATIC FACILITIES need to assess the lifeguard's ability to perform the job skills necessary to be a QUALIFIED LIFEGUARD at the AQUATIC FACILITY, including at any AQUATIC VENUES within the AQUATIC FACILITY where the lifeguard may be assigned, before allowing the lifeguard to be on duty.
6.3.3.4.3 In-Service Training Plan

Requiring QUALIFIED LIFEGUARDS to have the ability to respond to a victim and complete a rescue is critical. Not specifying this requirement would allow a QUALIFIED LIFEGUARD to demonstrate individual skills without necessarily having the ability to perform all the skills in consecutive order to complete a whole rescue.
Physical fitness is a critical part of performance when conducting a rescue. QUALIFIED LIFEGUARDS who are newly certified must maintain their physical fitness and skill proficiency throughout the term of their certificate as those skills can be called upon at any time. The required level of physical fitness can be determined by several means.
Schultz and colleagues showed that in order to perform CPR at 80 compressions per minute (training now requires 100 compressions per minute) over a 10-minute period, the METs (metabolic equivalents) required were 4.6 ± 0.7.475 One would expect this number to increase using the current protocol for CPR. The following logic and calculations were developed by Dr. Timothy Lightfoot476 using MET values for a variety of activities that lifeguards might be expected to perform.477,478,479 If someone swims 500 yards (457 m) in 10 minutes, they exert about 8 METs (almost double the CPR cost discussed above). Similar levels of exertion are required by:
- Running at 5 mph on a level grade (running one mile in 12 minutes, or 0.8 mile in 10 minutes)
- Riding a bicycle at 14 mph on a level grade (riding 2.3 miles in 10 minutes)
If the metabolic cost of doing CPR is about 4.75 METs, then lifeguards who are able to do the above tasks should be able to do CPR almost indefinitely because, importantly, the metabolic cost of doing CPR is only about 60% of the cost of the above exercises. This means that when doing CPR, the intensity is not high enough to increase the amount of lactate in the blood (i.e., they will not go above the lactate threshold), and as long as they stay below the lactate threshold (60-65% of maximal intensity), they should be able to do CPR for a long time.
The United States national average response time for a BLS ambulance is 10 minutes; for paramedics it is 12-15 minutes. For this reason, QUALIFIED LIFEGUARDS should be fit enough to perform the rescue and then do CPR for at least this time frame.
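The core of this argument is the ratio of CPR's metabolic cost to the benchmark activities. A minimal Python sketch of the arithmetic, using only the MET values quoted in the text:

```python
# Illustrative arithmetic for the METs comparison above. The 4.75-MET CPR
# cost and the 8-MET swim benchmark come from the text; the conclusion is
# simply the ratio of the two.
CPR_COST_METS = 4.75          # approximate metabolic cost of performing CPR
SWIM_BENCHMARK_METS = 8.0     # 500 yd in 10 min (from the text)

relative_intensity = CPR_COST_METS / SWIM_BENCHMARK_METS
print(f"CPR is {relative_intensity:.0%} of the swim-benchmark intensity")
# -> about 59-60%, i.e., at or below the 60-65% lactate-threshold range
# cited above, which is why a lifeguard fit enough for the benchmark should
# be able to sustain CPR for the 10-15 minute EMS response window.
```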
Someone should be responsible for maintaining equipment, knowing when an AQUATIC FACILITY should close, and knowing how to mitigate hazards. This level of skill is different from that of the QUALIFIED LIFEGUARD, and each of these skills is important to have onsite any time the AQUATIC FACILITY is open.
The MAHC considered requiring a LIFEGUARD SUPERVISOR for all AQUATIC FACILITIES, but for a single-guard facility there is no such requirement, as a QUALIFIED LIFEGUARD doubling as a supervisor would be redundant. The SAFETY PLAN should address the means of providing oversight and direction to QUALIFIED LIFEGUARDS at single-guard facilities.
# Designated Supervisor

For any AQUATIC FACILITY, someone must be designated to make decisions and provide oversight of expected performance. When an AQUATIC FACILITY is required to have two or more QUALIFIED LIFEGUARDS, one of the QUALIFIED LIFEGUARDS may be designated as the LIFEGUARD SUPERVISOR as long as they comply with the training requirements. A QUALIFIED LIFEGUARD cannot fulfill LIFEGUARD SUPERVISOR duties while on scanning duty. For small AQUATIC VENUES, the MAHC was sensitive to requiring an additional person simply to serve as the LIFEGUARD SUPERVISOR; in this scenario, one of the QUALIFIED LIFEGUARDS is designated as the LIFEGUARD SUPERVISOR to make decisions when appropriate.
# Emergency Response and Communications Plans
# Emergency Response and Communication Plan
Chemical STORAGE and EAP/evacuation information must also be filed with the local fire/hazmat agency, depending on the quantities and types of chemicals stored.
# Training Documentation

It is recommended that EAP drills be conducted with staff on a quarterly basis, as specified by the American Heart Association; however, each operation is unique, and some operations may only be open during specific seasons.
# Communication Plan

# Remote Monitoring Systems

Remote MONITORING systems may be used as an additional tool to improve health and SAFETY but are not to replace or substitute for aquatics staff or their duties.
# Lifeguard-Based

A remote SAFETY MONITORING system is an added value but should not be a substitute for having a lifeguard present when conditions deem a lifeguard necessary.
The following excerpts from YMCA guidance provide an overview and discussion of lifeguard-based remote SAFETY MONITORING systems.

# Employee Illness and Injury Policy

Open wounds may become entry points for pathogens and are the greatest risk to the wounded person. Water-related work could be allowed with physician approval or if the wound is covered with an occlusive, waterproof bandage.
# Facility Management
Facility management is critical in preventing illness and injury, as summarized in this section. The Centers for Disease Control and Prevention (CDC) identifies the most frequently reported contributing factors to the spread of recreational water illnesses, in particular gastroenteritis. Another report identified gastroenteritis as the most frequently reported type of recreational water illness (RWI) outbreak, the incidence of which is increasing.482 Prevention of RWIs at treated venues requires POOL operators to:

- Maintain appropriate disinfectant and pH levels to maximize disinfectant effectiveness, and
- Ensure optimal water circulation and filtration.
A study of POOL inspection data underscored the need for improved maintenance.483 A total of 13,532 (12.1%) of 111,487 inspections identified serious violations that threatened the public's health and resulted in immediate POOL closure. Of 120,975 inspections, 12,917 (10.7%) identified disinfectant level violations; of 113,597 inspections, 10,148 (8.9%) identified pH level violations. Other water chemistry violations were documented during 12,328 (12.5%) of 98,907 inspections, with the number identified per inspection ranging from zero to four. Circulation and filtration violations were documented during 35,327 (35.9%) of 98,361 inspections, with the number identified per inspection ranging from zero to nine. The following violations also were identified: improperly maintained POOL log (12,656 of 115,874 inspections), unapproved water test kit used (2,995 of 90,088 inspections), valid POOL license not provided and/or posted (741 of 28,007 inspections), and operator training documentation not provided and/or posted (1,542 of 8,439 inspections).
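Each percentage above is a simple count-over-inspections ratio. A minimal Python check, using the counts quoted from the cited study:

```python
# A minimal verification of the inspection-data percentages quoted above
# (counts and denominators are taken directly from the cited study).
violations = {
    "immediate closure": (13_532, 111_487),
    "disinfectant level": (12_917, 120_975),
    "pH level": (10_148, 113_597),
    "circulation/filtration": (35_327, 98_361),
}
for name, (count, inspections) in violations.items():
    print(f"{name}: {count / inspections:.1%} of {inspections:,} inspections")
# e.g., immediate closure -> 12.1%, matching the figure in the text.
```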
Of the 121,020 inspection records, 59,890 (49.5%) included POOL setting data. Among venues with known POOL settings, child-care POOL inspections had the highest percentage of immediate closures (17.2%), followed by hotel/motel and apartment/condominium POOL inspections (15.3% and 12.4%, respectively).
Apartment/condominium and hotel/motel POOL inspections had the highest percentages of disinfectant level violations (13.1% and 12.8%, respectively). Child-care and apartment/condominium POOL inspections had the highest percentages of pH level violations (11.8% and 10.0%, respectively). Approximately 35% of inspections of apartment/condominium POOLS, hotel/motel POOLS, and water parks identified circulation and filtration violations.
Of the 121,020 inspection records, 113,632 (93.9%) included POOL type data. Interactive fountain inspections had the highest percentage of immediate closures (17.0%). Kiddie/WADING POOL inspections had the highest percentage of disinfectant level violations (13.5%), followed by interactive fountain inspections (12.6%). THERAPY POOL inspections had the lowest percentage of disinfectant and pH level violations but the highest percentage of other water chemistry violations (43.9%). Interactive fountain inspections identified the lowest percentage of circulation and filtration violations (12.8%).
Drowning and injuries from falls, diving, chemical use, and suction continue to be major public health problems associated with AQUATIC VENUES. Drowning is a leading cause of injury death for young children ages 1 to 4, and the fifth leading cause of unintentional injury death for people of all ages.484 For 2007-2008, 32 POOL chemical-associated health events that occurred in a public or residential setting were reported to CDC by Maryland and Michigan. These events resulted in 48 cases of illness or injury; 26 (81.3%) events could be attributed at least partially to chemical handling errors (e.g., mixing incompatible chemicals). ATSDR's Hazardous Substance Emergency Events Surveillance System received 92 reports of hazardous substance events that occurred at AQUATIC FACILITIES. More than half of these events (55) involved injured persons; the most frequently reported primary contributing factor was human error. Estimates based on CPSC's National Electronic Injury Surveillance System (NEISS) data indicate that 4,574 (95% confidence interval: 2,446) emergency department (ED) visits attributable to POOL chemical-associated injuries occurred in 2008; the most frequent diagnosis was poisoning (1,984).486

The information identified in this report, along with existing recreational water injury data and firsthand inspector experience, drove the development of the critical risk factors for recreational water injury and illness at treated AQUATIC VENUES. The eight broad critical risk factors for recreational water illness and injury are addressed in the code. Violations must be corrected at the time of inspection or the POOL must be closed until the violations are corrected. Whenever a POOL is closed due to a public health violation, signage must be posted stating that the facility is closed due to an IMMINENT HEALTH HAZARD. Before removing the closure sign and reopening the feature, a follow-up inspection or other evidence of correction of the violations is required to ascertain correction and re-open the POOL.
The factors considered IMMINENT HEALTH HAZARDS cover known risk areas:

- Low or absent disinfectant levels lead to reduced inactivation of pathogens, and these conditions have been associated with infectious disease outbreaks.
- Low pH has been associated with loss of dental enamel. Dental erosion begins to occur below pH 6.0 and rapidly accelerates as the pH drops.504,505,506
- High pH reduces the efficacy of CHLORINE-based DISINFECTION by reducing the amount of molecular hypochlorous acid (HOCl), the active form that is available for DISINFECTION. At pH 7.0, about 70% of the hypochlorous acid is molecular; at pH 7.5, about 50%; at pH 8.0, about 20%; and at pH 8.5, only 10% (see the sketch following this list). As a result, the MAHC decided to set upper and lower limits for pH as an IMMINENT HEALTH HAZARD.
- Injuries and deaths occur to persons using equipment such as vacuums and reach poles at swimming POOLS when this equipment contacts overhead wires that are too close to the POOL. Clearance in any direction from the water, edge of POOL, etc., is to protect people using rescue and service equipment at POOLS, which is typically aluminum. Clearance in any direction from the diving platform, tower, WATERSLIDE, or other fixed POOL-related structure is to protect a swimmer using these items.

Follow-up procedure for observance of electrical lines within 20 feet (6.1 m) of a swimming POOL during an inspection:

- Determine whether the electrical lines are owned by the utility company or by the owner/operator of the swimming POOL/property.
- If they are owned by the utility company, the operator should obtain a letter from the utility company stating that these lines are in compliance with NEC 680 STANDARDS.
- If the lines are owned by the owner/operator, and there is no waiver or variance, it is a public health hazard.
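The quoted HOCl percentages follow from the acid-dissociation equilibrium of hypochlorous acid. The Python sketch below reproduces them with a Henderson-Hasselbalch relationship; the pKa of about 7.5 (near 25 degrees C) is an assumption of this illustration, not a MAHC value.

```python
# A minimal sketch of the pH/hypochlorous-acid relationship described above,
# using the dissociation equilibrium HOCl <-> H+ + OCl-. The pKa of ~7.5
# is an assumed value for this illustration.
def hocl_fraction(ph: float, pka: float = 7.5) -> float:
    """Fraction of free chlorine present as molecular HOCl at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (7.0, 7.5, 8.0, 8.5):
    print(f"pH {ph}: ~{hocl_fraction(ph):.0%} molecular HOCl")
# Yields roughly 76%, 50%, 24%, and 9%, in line with the approximate
# 70/50/20/10% figures quoted in the text.
```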
Symptom prevalence at 0.5 mg/m³ was 86% for eye irritation, 61% for nose irritation, 29% for throat irritation, and 42% for dry cough.514 Airborne TRICHLORAMINE was measured at six indoor swimming facilities, and researchers found an elevated prevalence of respiratory symptoms in swimming POOL workers. The mean TRICHLORAMINE concentration was 0.56 mg/m³, with the highest concentration reaching 1.34 mg/m³. General respiratory symptoms were significantly more common in POOL employees than in a Dutch population sample (odds ratios ranged from 1.4 to 7.2).515 Researchers generated TRICHLORAMINE at 0.5 mg/m³ in a challenge chamber and exposed participants to a series of 10-minute exposures followed by spirometry; results showed a decrease in pulmonary function.516 TRICHLORAMINE is the most volatile and prevalent chloramine compound in the air around swimming POOLS,517 has low solubility, and decomposes rapidly in sunlight. The World Health Organization proposes a 0.5 mg/m³ provisional value, although it states that more research is needed to investigate health effects in people who use the POOL for extended periods and the role of TRICHLORAMINE in possibly causing or exacerbating asthma.518 Although proposed STANDARDS and past studies indicate that a comfort level for indoor POOL areas would be to keep TRICHLORAMINE concentrations below 0.5 mg/m³, there have been some concerns that this level may not be low enough to prevent symptoms.519

TRIHALOMETHANE (THM) threshold research reference synopses:
- Animal toxicity studies demonstrate and characterize hepatotoxicity and nephrotoxicity.520
- Investigation of THMs in tap water and swimming POOL water: the concentrations of total THMs in swimming POOL water were higher than those in tap water, particularly brominated THMs. This poses a possible cancer risk related to exposure.521
- Environmental and biological MONITORING of THMs was performed in order to assess the uptake of these substances after a defined period in five competitive
An effective BARRIER shall be one that does not allow BATHERS to walk on the elevated wall.
Small and/or narrow SPAS are examples where the AHJ may allow relief from the 50% minimum DECK requirements. The rationale is that a SPA of limited size or width can be effectively guarded entirely from one side or one location.
# Elevated Spas

For example, if an elevated SPA is next to or within 4 feet (1.2 m) of another AQUATIC VENUE, a guard rail or a post-and-rope system would be options for effective BARRIERS to discourage PATRONS from using this elevated wall to jump into the other AQUATIC VENUE.
# Temperature

Temperatures above 104°F (40°C) essentially induce a fever in the BATHER'S body as internal temperature rises. Elevated temperatures have also been associated with birth defects, so pregnant women, particularly in their first trimester, should consult their physician before using a SPA. Further research is needed to understand the potential role of SPA use early in pregnancy in associated birth defects. See MAHC Annex 5.7.4.7.2 for further discussion.
# Timers
The "Fifteen Minute Rule" -complies with most state CODES. The timer for the hydrotherapy pump is for the SAFETY of the BATHERS. Longer times can be hazardous to BATHERS and the therapy pump shutting off at least reminds the BATHER to get out and reset the timer.
# Emergency Shutoff

Emergency shutoffs should be located between 5 feet (1.5 m) and 50 feet (15.2 m) from the SPA and within sight of the SPA structure.
# Waterslides and Landing Pools
# Design and Construction

The design of WATERSLIDES is governed by amusement ride standards, such as those of ASTM, developed by bodies with appropriate experience. However, the design of the LANDING POOL, along with associated water quality and circulation, is regulated by this STANDARD.
# Exit into Landing Pools

Present practices for safe entry into LANDING POOLS include:

- A water backup, and
- A deceleration distance.
4.12.2.9 Drop Slides

DROP SLIDES are being highlighted because of one incident that resulted in a fatality in Massachusetts. WATERSLIDES that drop BATHERS into the water from a height above the water (versus delivering them to a water entry point) require diligent MONITORING by staff at the top of the slide and at the water entry point to ensure adequate SPACING between slide users so that people do not land on top of each other. Each slide user must have time to move out of the collision zone before another slide user is allowed down the slide. The incident cited above resulted in the drowning of a slide user and a multi-day search for the victim because of the high turbidity of the water.
# Wave Pools
The WAVE POOL will still have side wall ladders for egress purposes (and therefore partial trafficking), and the MAHC felt that "NO DIVING" signage should be required for all areas around the WAVE POOL, regardless of water depth, due to the freeboard.

4.12.4 Therapy Pools

4.12.5 Lazy Rivers

4.12.5.2 Access and Egress
# Means
Since there is moving water in a LAZY RIVER, less frequent means of ingress/egress are acceptable. The moving water propels people around a LAZY RIVER quickly and with less effort to the next means of egress. LAZY RIVERS can be several hundred feet long and are often constructed with side walls that make it difficult to exit the water. The required exit spacing ensures that a BATHER will never be more than 75 feet (22.9 m) from an exit. The distance to the nearest exit for a large conventional POOL can be as much as 50 feet (15.2 m); this distance can be greater for a LAZY RIVER because of the current. If water is flowing at 1 to 4 feet/second around the river, a person floating will never be more than about 2.5 minutes from a means of egress, as illustrated below.
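The 2.5-minute figure is a worst-case drift-time bound. A minimal Python sketch of the arithmetic, with a slower 0.5 ft/s case added as an assumption for comparison:

```python
# Illustrative calculation for the egress-spacing rationale above: time to
# drift to the nearest exit when exits are spaced so a bather is never more
# than 75 feet away. The 1-4 ft/s current speeds come from the text; the
# 0.5 ft/s case is an assumed slow-current comparison.
MAX_DISTANCE_FT = 75.0

for current_ft_s in (0.5, 1.0, 4.0):
    minutes = MAX_DISTANCE_FT / current_ft_s / 60.0
    print(f"{current_ft_s} ft/s -> at most {minutes:.2f} min to an exit")
# Even at the assumed 0.5 ft/s, the worst case is 2.5 minutes, consistent
# with the bound stated in the text.
```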
# Deck

LAZY RIVERS are of necessity closed (or mostly closed) loops. The wall for the inside of a LAZY RIVER loop is an ISLAND, which may be designed for people but most often is not. Therefore, a PERIMETER DECK is only needed for the outside of the river loop, or only on one side of the river.
# Bridges

Seven feet (2.1 m) minimum clearance overhead is required since it is consistent with building code minimum ceiling clearances.
Most LAZY RIVERS are closer to 3.5 feet (1.1 m) deep, making the clearance 7.5 feet (2.3 m) if one adheres to the 4-foot (1.2 m) clear requirement above the water surface. The MAHC chose 7 feet (2.1 m) because it is the typical building code minimum height requirement for ceilings, whereas the 6-foot 8-inch (2 m) minimum clearance is usually only applicable to doorways.
4.12.6 Moveable Floors

4.12.6.3 Safety

4.12.6.3.1 Not Continuous

Examples of adequate SAFETY precautions for entering the other area of the AQUATIC VENUE include but are not limited to the following:

- A moveable BULKHEAD, located at least at the water surface, to enclose the area of the MOVEABLE FLOOR;
- A highly visible floating line installed over the MOVEABLE FLOOR surface, two feet (61.0 cm) in front of the end of the MOVEABLE FLOOR, with a four-inch (10.2 cm) wide contrasting marking provided at this leading edge; and
- A railing system anchored into the MOVEABLE FLOOR.
# Underside

When the MOVEABLE FLOOR is not continuous over the entire surface area of the POOL, access to the underside of the MOVEABLE FLOOR shall be denied when it is not flush with the POOL floor. Examples of adequate measures to prevent access under the MOVEABLE FLOOR include but are not limited to the following:

- Position a BULKHEAD at the end of the MOVEABLE FLOOR; or
- Have a trailing ramp that hinges to the MOVEABLE FLOOR and extends to the POOL floor.
4.12.6.4 Movement

There are no U.S. regulations on MOVEABLE FLOORS. This velocity was obtained from European design STANDARDS.
- European Standard EN 13451-11:2004.

4.12.7 Bulkheads

4.12.7.2 Entrapment

All BULKHEAD parking positions should be designed such that QUALIFIED LIFEGUARDS can see under 100% of the BULKHEAD from their station on the POOL DECK.
# Gap

A BULKHEAD designed with greater gaps may veer off its intended path.
4.12.7.6 Handhold

During FINA-sanctioned events, full-height touchpads will be on most BULKHEADS. However, the majority of BULKHEADS in the U.S. allow for wide holes at the waterline that serve as handholds and from which USS/NFSHSA/NCAA touchpads are hung below the waterline. Touchpads are not normally installed during normal operating hours. End-wall concrete parapets that cantilever over the gutter and require full-height FINA touchpads for those levels of competition do not negate the requirement for handholds (though set behind the touchpads) in these locations.
# Width

Any BULKHEAD intended for foot traffic by officials shall be at least one meter (3 ft 3 in) wide, which is the current minimum width provided by commercial manufacturers.
# Starting Platforms

Any BULKHEAD on which starting platforms are to be installed shall be at least three feet nine inches (1.1 m) wide in order to allow sufficient trafficking space for officials and athletes behind the starting platforms.
4.12.8 Interactive Water Play Venues

4.12.8.3 Sloped

An example of an acceptable design solution would be a diverter valve installation.
# Hazard

While consistent with many state CODES, the MAHC has determined that this topic needs more research regarding water velocity and eye safety.304
# Signage

Since there is no standing water on INTERACTIVE WATER PLAY VENUES, depth markers and "NO DIVING" warning signs are not required.
This was included because it deviates from the regular marking and warning signage requirements for typical AQUATIC VENUES as stated in this code. Other signage requirements, such as diaper changing reminders and "Do Not Drink," would likely be appropriate.

4.12.9 Wading Pools

4.12.9.2 Barrier

A more stringent requirement is stipulated for separating WADING POOLS from other bodies of water (compared with the spacing between other AQUATIC VENUES) because the predominant users of WADING POOLS are small toddlers, most of whom cannot swim.
# Backwashing Frequency

Backwashing frequency is important for multiple reasons. First, solids attach more strongly to the filter media over time and can be more difficult to remove following infrequent backwashing. Second, the organic particles (e.g., skin cells) held in the filter in contact with FREE CHLORINE can break down over time and produce DISINFECTION BY-PRODUCTS and/or combined CHLORINE. The potential to form "mudballs" also increases with solids loading inside a filter and can cause filter failures. These items are the rationale for requiring backwashes at manufacturer-prescribed pressure losses through the filter. Some data suggest that tainted backwash water remains inside the filter at the conclusion of the backwash procedure and therefore the filtrate should be wasted to drain for at least the first two minutes after restarting.
# Backwash Scheduling
Backwashing while PATRONS are in the water is not recommended. First, the MAHC requires that recirculation systems run at all times that an AQUATIC VENUE is open for BATHER use. Second, with no interlock in place, stopping recirculation while inadvertently continuing chemical feed pumps can cause a build-up of acid and chlorine product in the lines that leads to chlorine gas production. When the recirculation system is turned back on, the risk increases dramatically of a chlorine gas plume being delivered into the AQUATIC VENUE, causing injury to BATHERS and initiating an emergency response.312 An exception would be an AQUATIC VENUE with multiple filters where an individual filter can be taken offline without shutting down the recirculation system and there is no chance of overfeeding chemicals that may lead to outgassing events or other chemical mixing emergencies. A sketch of such an interlock appears below.
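One way to realize the interlock described above is to permit chemical feed only while recirculation flow is proven. The following Python sketch is a hypothetical illustration of that logic, not a MAHC specification; the class name, threshold value, and sensor/feeder interfaces are all assumptions.

```python
# A minimal sketch (not from the MAHC) of the interlock logic implied above:
# chemical feeders must be disabled whenever recirculation flow is lost, so
# acid and chlorine cannot accumulate in dead lines and outgas on restart.
from dataclasses import dataclass

@dataclass
class FeedInterlock:
    min_flow_gpm: float  # below this flow, feeders must be locked out

    def feeders_may_run(self, measured_flow_gpm: float) -> bool:
        """Allow chemical feed only while recirculation flow is proven."""
        return measured_flow_gpm >= self.min_flow_gpm

interlock = FeedInterlock(min_flow_gpm=50.0)  # assumed design flow threshold
print(interlock.feeders_may_run(0.0))    # False: pump stopped for backwash
print(interlock.feeders_may_run(120.0))  # True: normal recirculation
```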
# Filtration Enhancing Products
Coagulants should be used with caution due to potential for filter bed fouling. Maintaining records of clean bed headloss is recommended to help detect problems of filters not being adequately cleaned via backwashing. If a facility decides to use coagulants, they should be used continuously. Not using coagulants when the water is clear to save money will significantly impair the capabilities of the filters to remove pathogens like Cryptosporidium and Giardia.
# Precoat Filters
# Return to the Pool
In closed-loop mode, it will be necessary to charge the media slurry to the suction side of the pump or precoat tank prior to closing down the loop and putting the system into recirculation. Precoating of a filter typically takes 5 to 10 minutes. At the end of the precoat cycle, the discharge out of the filter should be clear and free of filter media. If the discharge is not clear, the filter should be opened, inspected, and repaired as necessary.

312 Hlavsa MC, et al. Surveillance for waterborne disease outbreaks and other health events associated with recreational water use - United States, 2007. MMWR Surveill Summ. 2011;60:1-37.
# Operation

When flow or pressure is lost in the filter, the precoat layer may become unstable and fall off of the filter septum. To reduce the likelihood of debris and CONTAMINANTS being returned to the POOL, it is recommended that prior to restarting the filter, it be backwashed and/or cleaned and the precoat re-established with new filter media, either in a closed-loop recirculation mode or with water wasted until the discharge of the filter is clear. It is important that flow not be interrupted after the precoating process is completed and the flow out of the filter is redirected from the recirculation or waste piping back to the POOL. It is acceptable to open and close valves on the filter effluent stream as long as the closed valves are opened first so that the filter effluent water can flow continuously. Allowing the media to fall off of the filter septum decreases the capability of the filter to remove particles. The critical importance of always cleaning the filter and replacing the media when the flow is interrupted for any reason relates to uneven recoating permitting pathogen passage, as well as fouling of the media support layers.313
# Cleaning

Septum covers should be properly cleaned and inspected to maintain proper performance of precoat filters. Filters should be backwashed following a significant drop in the flow rate or when the pressure differential across the filter is greater than 10 pounds per square inch (68.9 kPa). Vacuum-type precoat filters should be cleaned when the vacuum gauge reading increases to greater than 8 inches (20.3 cm) of mercury or as recommended by the manufacturer (see the sketch below). If, after precoating with fresh media, the filter pressure does not return to the normal initial starting pressure noted on filter start-up, it is advisable to disassemble the filter and clean the elements (septum covers) per the filter manual. Septum covers should be cleaned or replaced when they no longer provide effective filtration or create a friction loss preventing maintenance of the recommended recirculation rate. Water and spent media should be discharged in a manner approved by the appropriate regulatory agency.
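The cleaning thresholds above reduce to two simple comparisons. A minimal Python sketch; the function names are illustrative, not part of any standard, and manufacturer recommendations should take precedence where they differ.

```python
# Helpers reflecting the cleaning thresholds stated above: clean a pressure
# precoat filter at >10 psi differential pressure, and a vacuum precoat
# filter at >8 inches of mercury (or per the manufacturer).
def pressure_filter_needs_cleaning(dp_psi: float) -> bool:
    return dp_psi > 10.0

def vacuum_filter_needs_cleaning(vacuum_in_hg: float) -> bool:
    return vacuum_in_hg > 8.0

print(pressure_filter_needs_cleaning(12.3))  # True: backwash/clean now
print(vacuum_filter_needs_cleaning(6.0))     # False: within normal range
```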
# Bumping

Bumping is the act of intentionally stopping the filter and forcing the precoat media and collected CONTAMINANTS to be removed from the filter septum. Bumping may impair pathogen removal and could facilitate the release of pathogens previously trapped in the filter. Therefore, bumping should be performed in accordance with the manufacturer's recommendations. Prior to restarting a bumped filter, it is recommended that the precoat be re-established in a closed-loop recirculation mode or with water wasting until the discharge of the filter is clear, to minimize the potential of media or CONTAMINANTS returning to the POOL.
Pending future research, bumping is strongly discouraged in any precoat filter application where pathogen removal is a concern. Bumping may impair pathogen removal, as pathogens once trapped at the surface of the cake could be positioned close to the septum and penetrate the filter during operation.314 Cyst-contaminated water used for precoating filters led to much higher cyst concentrations in the filter effluent.315 Precoat filters have been demonstrated to remove greater than 99% of OOCYSTS. Using clean precoat media to precoat filters, as well as maintaining continuous flow, is recommended.316,317,318
# Filter Media

Continuous filter media feed (or body-feed) can be used to increase the permeability of the cake, maintain flow, and extend cycle length as the cake becomes coated with debris. Body-feed is filter media added on a continuous basis during the normal filtration mode. The amount of body-feed used depends on the solids loading in the POOL; turbidity is the best available method to quantify and estimate solids loading. For filter influent turbidities greater than 1.5 NTU, body-feed may be beneficial, with addition rates ranging from 1.0 to 4.0 ounces of DE per square foot of filter area per day depending on the solids loading in the POOL (see the sketch below). The lowest effective concentration of suspension should be used in a body-feed system, and the concentration of the suspension may not exceed 5% by weight. The body-feed system head and lines should be flushed once every 15 minutes for at least one minute to assure proper and continuous operation. Water from the discharge side of the recirculation pump may be used. If the connection is to a potable water supply line, the supply line should be equipped with an approved BACKFLOW prevention device.
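The body-feed rates above can be turned into a rough daily dosing estimate. The Python sketch below assumes a linear scaling of the 1.0-4.0 oz/ft²/day range against influent turbidity, with 5.0 NTU taken as an assumed heavy-loading point; the interpolation choices are assumptions of this illustration, not MAHC values.

```python
# Illustrative body-feed arithmetic for the rates above: 1.0-4.0 oz of DE
# per square foot of filter area per day, scaled by solids loading. The
# linear interpolation against influent turbidity is an assumption.
def daily_body_feed_oz(filter_area_ft2: float,
                       influent_turbidity_ntu: float) -> float:
    """Estimate daily DE body-feed (ounces) for a given filter area."""
    if influent_turbidity_ntu <= 1.5:
        return 0.0  # body-feed mainly beneficial above 1.5 NTU per the text
    # Scale linearly from 1.0 oz/ft2/day at 1.5 NTU up to 4.0 oz/ft2/day
    # at an assumed heavy-loading point of 5.0 NTU.
    rate = min(4.0, 1.0 + 3.0 * (influent_turbidity_ntu - 1.5) / 3.5)
    return rate * filter_area_ft2

print(daily_body_feed_oz(filter_area_ft2=100.0, influent_turbidity_ntu=2.5))
# -> about 186 oz/day for a 100 ft2 filter at 2.5 NTU under these assumptions.
```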
Precoat media should normally be fed into the filter at a concentration not to exceed 5% by weight. Since perlite is approximately half the density of DE, half of the weight of perlite will achieve a similar depth of media inside of the filter as shown in MAHC Annex Table 5.7.2.2.7.1.
Drinking water applications typically recommend using DE at application rates of 0.2 pounds per square foot (1 kg/m²).319 This practice seems to be based on research showing that the removal of 9-micron (Giardia-sized) microspheres increased from greater than 99% to greater than 99.9% as the precoat amount increased from 0.5 to 1 kg/m².320 Under the range of conditions tested, Logsdon and coworkers321 found that the amount of DE had a greater impact on microsphere removal than did the grade of DE.
Alum-coated DE has been shown to significantly improve the removal of turbidity and bacteria not normally removed by DE filters.322 Logsdon323 reported that alum could be added at 0.05 gram of alum as Al2(SO4)3·14H2O per 1 gram of DE in a slurry to form a precipitate on the surface to enhance performance.
# Cartridge Filters
# NSF Standards
Cartridge filter elements should be cleaned (or replaced) when the differential pressure across the filter exceeds 10 psi (68.9 kPa). Every cartridge filter should have two sets of cartridges; this allows one set to be in use while the other is being cleaned (soaking and drying are recommended).
# Filtration Rates

The 0.375 gallons per minute per square foot (0.26 L/s/m²) maximum design flow rate is acceptable, but an allowance is necessary to accommodate irreversible fouling of the cartridge media over time.

Use of high levels of CHLORINE as a "shock dose" when BATHERS are not present may be part of an overall water quality management strategy. Periodic shock dosing can be an effective tool to maintain the microbial quality of water and to minimize build-up of biofilms and inorganic chloramines. For BATHER re-entry, FAC levels shall be consistent with the label instructions of the disinfectant.
Salt water (saline) chlorination systems generate and deliver a CHLORINE disinfectant onsite directly into POOL water.
While cell size and configuration of these systems may differ depending on the manufacturer, the principles of their operation remain the same. Sodium chloride is added to balanced POOL water to establish a saline solution, which flows through the electrolytic cell. A low-voltage electrical charge is passed through the saline solution, and the current breaks the sodium and chloride bonds, resulting in the formation of CHLORINE gas, hydrogen gas, and sodium hydroxide:

2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH
# Too Low
If total alkalinity is too low:

- pH changes rapidly when chemicals or impurities enter the water.
- pH may drop rapidly when using net acidic sources of CHLORINE or other acidic chemicals (e.g., Trichlor (trichloro-s-triazinetrione), Dichlor (sodium dichloro-s-triazinetrione), potassium monopersulfate), causing etching and corrosion.
# Raising Total Alkalinity
Total alkalinity can be raised by the addition of bicarbonate of soda (sodium bicarbonate, baking soda). Adding 1.4 lbs. of sodium bicarbonate per 10,000 gallons (635.0 g per 37,854.1 L) will raise total alkalinity by approximately 10 PPM, as illustrated below.
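The rule of thumb above scales linearly for modest adjustments. A minimal Python sketch of the dosing arithmetic; the linear scaling is an assumption, and large corrections should be made incrementally with retesting.

```python
# Dosing arithmetic for the rule of thumb above: 1.4 lb of sodium
# bicarbonate per 10,000 gallons raises total alkalinity ~10 ppm.
# Linear scaling is assumed, which is reasonable for modest adjustments.
def bicarb_needed_lb(volume_gal: float, ta_increase_ppm: float) -> float:
    """Pounds of sodium bicarbonate to raise total alkalinity."""
    return 1.4 * (volume_gal / 10_000.0) * (ta_increase_ppm / 10.0)

# Example: raise a 60,000-gallon pool from 50 to 80 ppm total alkalinity.
print(f"{bicarb_needed_lb(60_000, 30):.1f} lb sodium bicarbonate")  # ~25.2 lb
```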
# Too High
If total alkalinity is too high:

- pH becomes difficult to adjust.
- High pH often occurs, causing other problems such as cloudy water, decreased disinfectant effectiveness, scale formation, and filter problems.

The higher the total alkalinity, the more resistant the water is to large changes in pH in response to changes in the dosage of disinfectant and pH correction chemicals.
# Lowering Total Alkalinity
Add acid. The acid reacts with bicarbonates in the water and reduces the total alkalinity. Add 1.6 pounds of dry acid (sodium bisulfate), or 1.3 quarts of muriatic acid, per 10,000 gallons of water to decrease the total alkalinity by 10 PPM. Retest and adjust the pH.
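A minimal sketch of the dose arithmetic above, scaling the per-10,000-gallon rates to an arbitrary pool volume; the function names and the 75,000-gallon example are hypothetical, not MAHC terminology.

```python
# Illustrative sketch using the dose rates quoted above; names and the
# example volume are hypothetical.

def bicarb_dose_lb(pool_gal: float, ta_increase_ppm: float) -> float:
    """Sodium bicarbonate: 1.4 lb per 10,000 gal raises TA ~10 PPM."""
    return 1.4 * (pool_gal / 10_000.0) * (ta_increase_ppm / 10.0)

def dry_acid_dose_lb(pool_gal: float, ta_decrease_ppm: float) -> float:
    """Dry acid (sodium bisulfate): 1.6 lb per 10,000 gal lowers TA ~10 PPM."""
    return 1.6 * (pool_gal / 10_000.0) * (ta_decrease_ppm / 10.0)

if __name__ == "__main__":
    # Hypothetical 75,000-gallon pool:
    print(f"Raise TA 30 PPM: {bicarb_dose_lb(75_000, 30):.1f} lb bicarbonate")  # 31.5
    print(f"Lower TA 20 PPM: {dry_acid_dose_lb(75_000, 20):.1f} lb dry acid")   # 24.0
```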
High levels of cyanuric acid will cause interference in the total alkalinity test. This interference is magnified at low levels of total alkalinity. To correct for cyanuric acid interference, measure the concentration of cyanuric acid, divide that number by 3, and then subtract that value from the measured total alkalinity value.
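The cyanuric acid correction described above reduces to one line of arithmetic; this small sketch (illustrative only, with hypothetical readings) applies it.

```python
def corrected_total_alkalinity(measured_ta_ppm: float, cya_ppm: float) -> float:
    """Subtract one-third of the cyanuric acid reading from measured TA."""
    return measured_ta_ppm - cya_ppm / 3.0

# Example: measured TA of 90 PPM with 60 PPM cyanuric acid
# -> corrected (carbonate) alkalinity of 70 PPM.
print(corrected_total_alkalinity(90, 60))  # 70.0
```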
Minor deviations from the alkalinity levels stated in the CODE do not in themselves present imminent health threats to the BATHERS. As such, minor deviations in alkalinity levels do not require the immediate closure of the facility. Rather, deviations from permissible alkalinity levels indicate poor management of the water balance and should indicate a need for a thorough inspection of the entire facility.
Combined Chlorine (Chloramines)
Combined CHLORINE compounds (chloramines) are formed when FREE AVAILABLE CHLORINE combines with amine-containing compounds such as urea, amino acids, and ammonia from perspiration and urine. Chloramines include inorganic compounds (monochloramine (NH₂Cl), dichloramine (NHCl₂), and trichloramine (NCl₃)) as well as a variety of organic compounds. Inorganic chloramines are biocides but are much less effective as quick-kill disinfectants than FREE AVAILABLE CHLORINE. If the local water treatment plant uses chloramination for drinking water DISINFECTION, inorganic chloramines (predominantly monochloramine) may be present in the fill water.
# High Chloramines
A high level of chloramines is undesirable in AQUATIC VENUES. The action level for combined CHLORINE is 0.4 PPM (mg/L). Higher levels indicate that bathing loads or pollution from BATHERS may be too high, or that treatment is inadequate. Higher levels may also pose a health concern to BATHERS, employees, and other PATRONS.
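Combined chlorine is not measured directly by most kits; it is calculated as total chlorine minus FREE AVAILABLE CHLORINE. The sketch below (illustrative, with hypothetical readings) applies the 0.4 PPM action level discussed above.

```python
ACTION_LEVEL_PPM = 0.4  # combined chlorine action level discussed above

def combined_chlorine(total_cl_ppm: float, free_cl_ppm: float) -> float:
    """Combined chlorine = total chlorine - free available chlorine."""
    return max(0.0, total_cl_ppm - free_cl_ppm)

if __name__ == "__main__":
    cc = combined_chlorine(total_cl_ppm=2.4, free_cl_ppm=1.8)  # 0.6 PPM
    if cc > ACTION_LEVEL_PPM:
        print(f"Combined chlorine {cc:.1f} PPM exceeds the "
              f"{ACTION_LEVEL_PPM} PPM action level; investigate.")
```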
The World Health Organization recommends that combined CHLORINE levels be "as low as possible, ideally below 0.2 mg/L." 378 However, this "ideal" level would be challenging to implement as a CODE requirement. Since combined chlorine values reflect the combination of inorganic chloramines (well-demonstrated health effects) and organic chloramines (poorly understood relationship to health effects), the MAHC has decided to work with an "action" level until the two can be differentiated. Development of tests that can measure inorganic chloramines separately from organic chloramines is needed so actionable levels can be set. With such tests, aquatics staff will be able to respond to actionable levels of volatile chloramines so appropriate air quality can be maintained. The separate measurement of organic chloramines, which accumulate in the pool, may be a useful marker for the need to replace water or to supplement with a system known to remove these compounds.
Published data are limited, but suggest that combined CHLORINE levels are commonly above 0.2 PPM (mg/L) in swimming POOL water. 379,380,381
# Inorganic Chloramines
Volatilization of chloramine compounds can lead to strong objectionable odors in AQUATIC VENUE environments, as well as eye, mucous membrane, and skin irritation for BATHERS and PATRONS. Among the inorganic chloramines, NCl₃ has the greatest impact on air quality, owing to its relatively low affinity for water and its irritant properties. NCl₃ has been reported to be an irritant at concentrations in water as low as 0.02 PPM (mg/L).
Odors are unlikely to be present from inorganic chloramines below certain threshold concentrations.
Research to understand the relationship between inorganic chloramine concentrations in water and their impact on air quality is limited.

AEDs were considered for inclusion in this list, but due to the requirement for medical direction for AED use by trained rescuers, they were not included, as it may not be within the AHJ's authority to mandate such equipment. However, AEDs are widely used and can be used for submersion events and any cardiac incident. If local protocols can be established, it is recommended to have an AED.
# UV Protection for Chairs and Stands
In MAHC Section 4.8.5.3.3, permanently installed chairs and stands are required to be designed with UV protection. In MAHC Section 5.8.5.3.1, chairs and stands are required to have the UV protection present. Regardless of when the chair or stand was constructed, UV protection is required to protect the lifeguard from an occupational exposure.
Spinal Injury Board
Spinal injury boards facilitate immobilization of a person with a suspected spinal injury. Because these boards are often used in or around the water, they should be constructed of materials that can withstand the environment and be easily SANITIZED/disinfected between uses. Boards must be properly maintained and in good repair. For example, a wooden backboard worn to the point that bare wood is exposed is no longer cleanable; refinishing it with a waterproof finish should make it cleanable again. The head immobilizer and straps are commonly used in lifeguard training programs; these tools assist in immobilizing a person on the board and should be present during operation. Decisions about which straps to include should consider how best to immobilize the person to the board. Common locations for straps are at the upper torso, the hips, and the legs.
The number of spine boards available at the AQUATIC FACILITY should depend on the size of the AQUATIC FACILITY. It would be difficult to determine an exact number, but as a general consideration, a spine board should be able to reach any location where it is needed within a couple of minutes. There should not be a delay: the person needing to be extricated from the water will need to be held in an immobile position in the water, and extrication without a spine board can cause further injury.
Rescue Tube Immediately Available
The 50th-percentile adult is at least 64 inches (1.6 m) tall. The rationale is that the average adult BATHER's head would be above the static water line, allowing use of the AQUATIC VENUE without difficulty. Because of buoyancy at chest level, a short lifeguard could have difficulty performing a rescue safely without equipment. For this reason, the rescue tube is required unless the water depth is less than 3 feet (0.9 m), in which case the BATHER's chest would likely be above the static water line.
Lifeguard training agencies have determined that the use of a rescue tube makes rescues safer for both the victim and the rescuer. The rescue tube provides a BARRIER between the victim and the rescuer as well as a handhold for both during a rescue.
In very shallow water, the rescue tube may not be as effective, so the language in the CODE is flexible: the rescue tube must be immediately available but is not required to be worn. However, as stated above, the rescue tube provides protection for the lifeguard, so the operator should determine the level of risk and the requirement for wearing the rescue tube based on the AQUATIC VENUE depth, activities, and frequency of rescue.
Rescue Tube on Person
Being properly prepared to respond to an emergency requires wearing the harness strap attached to the rescue tube and keeping the rescue tube in a position and location where it can be used immediately.
It is important to wear the rescue tube in a rescue-ready position. Wearing the strap while sitting with the tube at the lifeguard's feet, or in any position other than held against the body, can lead to situations where a lifeguard is injured or cannot respond because the tube's strap is wrapped around handrails, chair pedestals, or other catch points. Management should reinforce through pre-service training, in-service training, and employment policy that lifeguards are expected to hold the rescue tube in a manner taught and accepted by the lifeguard training agency.
Identifying Uniform
There should be no delay in care because a PATRON is unable to find a member of the AQUATIC FACILITY SAFETY TEAM. Distinct uniforms are a standard in most industries to identify workers and their assigned tasks.
Signal Device
The most basic communication method used by lifeguards is a combination of whistle blasts and hand signals to communicate with each other, PATRONS, and management. Whistle signals can communicate when to clear the POOL, get another lifeguard's or supervisor's attention, and communicate emergencies.
The devices and their use can vary depending on the AQUATIC FACILITY and its management. Because of inherent background noise, whistles, hand signals, emergency buttons, radios, and telephone handsets are used to provide more effective communication.
Sun Blocking Methods
Protection from direct sun exposure is a necessary part of lifeguarding at AQUATIC FACILITIES. Gone are the days when the objective of the lifeguard was to get as deep a tan as possible. Today it is understood that sun exposure, especially when the skin becomes burned, significantly increases the risk of skin cancers.
In a recent study of melanoma, it was noted that the melanoma DNA contained 33,000 mutations, many of which may have come from ultraviolet light exposure. 411
The best sunscreens currently available are broad spectrum or full spectrum and are usually so labeled. More will probably become available as new Food and Drug Administration regulations take effect.

Personal Protective Equipment
Appropriate personal protective equipment (PPE) must be provided to all employees who have possible occupational exposures. Lifeguards should carry or have immediately available basic PPE (disposable gloves and a resuscitation mask with one-way valve) for immediate use during initial exposure to feces, vomit, and small amounts of blood until the full facility bloodborne pathogen kit arrives at the treatment scene. This could be in a small pouch carried on the lifeguard, a pouch associated with the rescue tube, or at a location near the lifeguard position. The intent is that the lifeguard does not need to leave the immediate area to find PPE and that no delay in response is created.
The OSHA Bloodborne Pathogens Standard 424 requires that the employer provide, at no cost to the employee, appropriate personal protective equipment such as, but not limited to, gloves, gowns, laboratory coats, face shields or masks and eye protection, and mouthpieces, resuscitation bags, pocket masks, or other ventilation devices. Personal protective equipment will be considered "appropriate" only if it does not permit blood or other potentially infectious materials to pass through to or reach the employee's work clothes, street clothes, undergarments, skin, eyes, mouth, or other mucous membranes under normal conditions of use and for the duration of time the protective equipment is used.
Rescue Throwing Device
If the single lifeguard is engaged in a rescue and another person is in distress, the rescue throw device allows an untrained individual to assist the distressed person.
Reaching Pole
If the single lifeguard is engaged in a rescue and another person is in distress, the reaching pole allows an untrained individual to assist the distressed person.
Safety Equipment and Signage Required at Facilities without Lifeguards
Throwing Device
A rescue throwing device is a throw bag, buoyant life ring, torpedo buoy, or other easily thrown buoyant device designed for a person on the DECK to throw to a person in distress in the AQUATIC VENUE. A minimum of fifty feet (15.2 m) of ¼-inch (6.4 mm) rope securely attached to the device is required. It has been found that untrained individuals can reasonably reach 30 feet (9.1 m) with a rescue throw device, and a 50-foot (15.2 m) rope accommodates that distance. The requirement of 1.5 times the width of the POOL allows a SAFETY factor to overthrow the device and pull the rope back toward the person in distress, as well as extra rope to hold on to. The device must be kept ready for use, and the rope must be coiled to prevent tangles and to facilitate throwing the device.

Additional Signage
MAHC Section 6.3.2 outlines the conditions that require a QUALIFIED LIFEGUARD. For AQUATIC FACILITIES that do not have lifeguards, PATRONS should be informed that no lifeguard is provided so they can comply with any requirements and understand the identified risk. For instance, at a hotel POOL that requires key entry, the sign would notify hotel guests that no lifeguard is provided and that persons under the age of 14 are not allowed in without adult supervision.

Chemical labels should explain necessary precautions to take; how to handle, store, and dispose of chemicals; and sometimes indicate hazard potential with a number from 0 to 4. This number indicates the degree of risk, with the number 4 representing the greatest risk, and shows the hazard categories (see NFPA 704: Hazard Identification System).
# Filter/Equipment Room
OSHA and EPA
Chemicals should never be pre-mixed with water by hand before adding the chemical to the AQUATIC VENUE unless specified by the manufacturer.
If a dissolution or feed tank is used to dissolve product for feeding into the AQUATIC VENUE, the tank must be equipped with a mechanical mixer, dedicated to a single chemical, and clearly labeled to prevent the introduction of incompatible chemicals to the tank.
# Chemicals should be added to water, water should never be added to chemicals.
Pre-mixing in containers that are not clean can result in the generation of heat and toxic gases and may result in fire or explosion.
# Hazard Ratings
The SDS will typically contain the hazard ratings according to either the NFPA or HMIS systems. The NFPA system may be found in NFPA 704: Standard System for the Identification of the Hazards of Materials for Emergency Response. In the NFPA system, the chemicals are rated according to their health, flammability, instability, and special hazards. The degree of hazard is indicated by a number from 0 to 4, with 0 being the least hazardous and 4 being the most hazardous. Either HMIS or NFPA ratings are useful to include on product labels. Most fire CODES require these ratings to be posted on chemical STORAGE room doors.
Protected
In addition to the requirements listed in MAHC Section 5.9.1.5, the following BEST PRACTICES are recommended:
- Place all chemical containers, drums, boxes, and bags on pallets to raise them off the floor.
- Do not stack containers so that they can easily fall over; a general rule of thumb is that they should not be stored more than three high.
- Close containers of chemicals securely to prevent contamination.
- Any shelving units used to store chemicals should be sturdy enough to support the weight of the chemicals being stored.
No Mixing
In particular, keep chlorinated cyanurates, brominated hydantoins, and calcium hypochlorite away from other chemicals, paper, water, petroleum products, and other organic compounds to avoid possible cross-contamination.
No liquids should be stored above solids.
Chemicals must be stored in the original manufacturer's labeled container. Storage containers that previously held other chemicals are unacceptable. Chemicals may be transferred from the original container to a new container only if that container was manufactured for the storage of that chemical and is properly labeled.
Aquatics staff should read and consider findings and recommendations developed from investigations related to POOL chemical-related injuries.
- See "CDC Recommendations for Preventing Pool Chemical-Associated Injuries" at: injuries.html
Ignition Sources
National Fire Protection Association (NFPA), Hazardous Material Identification System (HMIS), or equivalent hazard rating systems may be used.
Lighting
Horizontal-plane illumination must be adequate for SAFETY and navigation, as well as for reading documents.
The Illuminating Engineering Society of North America (IESNA) recommends a 30 footcandle (323 lux) minimum for Motor & Equipment Observation.
# PPE
Common components of PPE for chlorinated AQUATIC VENUE chemicals are as follows:
- Respiratory Protection: Wear a NIOSH-approved respirator if levels above the exposure limits are possible. Respirator type: a NIOSH-approved full-face air-purifying respirator equipped with combination chlorine/P100 cartridges. Air-purifying respirators should not be used in oxygen-deficient or IDLH atmospheres or if exposure concentrations exceed ten times the published limit.
- Skin Protection: Wear impervious gloves to avoid skin contact. A full impervious suit is recommended if exposure of a large portion of the body is possible. A safety shower should be provided in the immediate work area.

Although the MAHC is not aware of any work in this particular setting, studies in child care settings, schools, long-term care facilities, and food service establishments all support the importance of surface cleaning. The MAHC feels that daily cleaning at a minimum in this setting is reasonable for aesthetics as well as health and SAFETY.
Rinse Showers
Soap is not needed at RINSE SHOWERS because it can have a negative effect on water chemistry.
Diaper-Changing Stations
It is the responsibility of PATRONS to clean diaper-changing surfaces after each use. This is consistent with practice in other public settings where diapering takes place. However, staff should monitor the stations and clean them when necessary.
Non-Plumbing Fixture Requirements
Associations between AQUATIC VENUES and disease outbreaks have been well documented in the literature. Though an outbreak has never been connected specifically to the materials used, wood and other porous materials have been shown to harbor bacterial growth that can be hard to remove.
Non-porous materials used as matting at AQUATIC FACILITIES were found to be contaminated with bacteria and biofilm scum layers, although conventional cleaning was documented to remove the contamination. 428
immediate closure because of the seriousness of identified violation(s). In addition, SPA inspection data indicated that violations regarding these same issues are frequently identified. 437
These analyses underscore the need for inclusion of these topic areas in operator training courses. These essential topics are covered in nationally recognized operator training courses.
# Course Content
# Water Disinfection
Many other DISINFECTION chemicals or systems with varying effectiveness and suitability are being offered in the market to AQUATIC FACILITY operators for water treatment. In general terms, courses should discuss the evaluation steps an AQUATIC FACILITY operator should use, including required AHJ acceptance of the chemicals or systems for public AQUATIC FACILITIES, in deciding whether to use these types of supplemental systems or treatments.

Operators also need knowledge of submerged pumps such as turbine, mixed flow, and others used in waterpark applications. Additionally, the operator needs to understand the winterizing needs for these types of equipment.

Filter Backwashing/Cleaning - In these days of energy and water conservation, it is increasingly important that water conservation be practiced. Backwashing can waste an unnecessary amount of water if not done properly or if done too frequently.
If backwash water is properly treated to meet water quality STANDARDS, AQUATIC FACILITIES can realize savings on water costs. However, in some cases it may not be cost-effective for an AQUATIC FACILITY to expend funds on retreatment of backwash water. In those cases, it is most important that all water be discharged properly in accordance with the regulations of the local jurisdiction.
# Health and Safety
# Recreational Water Illness
The number of outbreaks associated with recreational water has continued to increase substantially since reporting began in 1978, most notably in 1982, 1987, 2004, and 2007. CDC recommends that public health agencies and the aquatic sector collaborate on educating the swimming public, an important source of recreational water contamination, about RWIs and what swimmers can do to protect themselves and others. 438

The operator should be aware of the need for frequent manual testing, standardization of automatic controllers, and adequately sized chemical feeders.
Note the need for larger feeders for waterpark-type attractions as compared to FLAT WATER POOLS. Recommended settings of AQUATIC FACILITIES to discuss include community POOLS; apartment complex, condominium, and homeowners' association POOLS; hotel/motel POOLS; and water parks.
# General Requirements for Operator Training Courses
# Course Length
The MAHC intentionally does not prescribe a particular length of time for courses. Instead, the MAHC is more PERFORMANCE-BASED by requiring that all of the essential topics in MAHC Section 6.1.2.1 be covered during the course. Most nationally recognized operator training courses run approximately 16 hours, and the MAHC assesses that it would be unlikely that all essential topics could be effectively taught in a shorter time period.
Instructor Requirements
Recognized training on AQUATIC FACILITY operation and maintenance, as well as on instruction (without work experience), is sufficient to qualify an individual to be an instructor if the requirements in MAHC Section 6.1.3.4 are met. It is, however, ideal to have both work experience and training in operation and instruction.
# Final Exam
The final exam is intended to assess the knowledge and skills of the pool operator. Key components of the exam should include questions on the essential topics outlined in MAHC Section 6.1.2, performing essential calculations, and reading meters and electronic equipment.
In the future, it would be ideal if course final exams included skills testing in addition to knowledge testing. This could include an on-site evaluation of skills such as properly calculating gallonage and the amounts of chemicals to add to the AQUATIC FACILITY, operating the filtration/RECIRCULATION SYSTEM (including backwashing the filters), and water testing (chemical and physical parameters).
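As an illustration of the kind of gallonage and chemical-addition calculations mentioned above, the sketch below computes the volume of a rectangular pool and a chlorine dose. The 8.34 lb-per-million-gallons-per-PPM factor is standard water-treatment arithmetic, while the pool dimensions and the 65% available-chlorine product are hypothetical examples, not MAHC requirements.

```python
GAL_PER_FT3 = 7.48    # gallons per cubic foot
LB_PER_MG_PPM = 8.34  # lb of chemical per million gallons per 1 PPM

def rectangular_pool_gallons(length_ft: float, width_ft: float,
                             avg_depth_ft: float) -> float:
    """Gallonage = length x width x average depth x 7.48."""
    return length_ft * width_ft * avg_depth_ft * GAL_PER_FT3

def chlorine_product_lb(pool_gal: float, fac_increase_ppm: float,
                        available_cl_fraction: float) -> float:
    """Pounds of product to raise FAC, given its available-chlorine fraction."""
    available_cl_lb = LB_PER_MG_PPM * (pool_gal / 1_000_000.0) * fac_increase_ppm
    return available_cl_lb / available_cl_fraction

if __name__ == "__main__":
    gal = rectangular_pool_gallons(75, 42, 5.5)  # ~129,600 gal (hypothetical pool)
    dose = chlorine_product_lb(gal, fac_increase_ppm=2.0,
                               available_cl_fraction=0.65)  # ~3.3 lb of product
    print(f"{gal:,.0f} gal; dose {dose:.1f} lb of 65% available-chlorine product")
```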
Course Certificates
The MAHC recommends that each certificate have a unique identifier to minimize the likelihood of mistaking the identity of QUALIFIED OPERATORS.
At this time, a certification process for QUALIFIED OPERATORS has not been established. It may therefore be advisable for a group to develop a certification program similar to that of the Food Code. The Food Protection Managers Certification Program Standards, Section 7.7, "Responsibilities to the Public and to Employers of Certified Personnel," reflect the following: "A certification organization shall maintain a registry of individuals certified."
These STANDARDS reference certified food operators; however, the same STANDARD shall apply to operator training certificates. Thus, "any title or credential awarded by the course approved organization shall appropriately reflect the" AQUATIC FACILITY QUALIFIED OPERATOR responsibilities and "shall not be confusing to employers, consumers, related professions, and/or interested parties." 444
Continuing Education
It is recommended that a QUALIFIED OPERATOR continue their education by attending seminars or training courses to keep up-to-date in AQUATIC FACILITY operation and SAFETY.
In the long term, there is a need for development of a system for Continuing Education Units. However, it may not be prudent to make the leap to require CEUs all at once, especially since this MAHC 1st edition will require for the first time that all AQUATIC FACILITIES have QUALIFIED OPERATORS. To have new requirements for operators at all AQUATIC FACILITIES and for CEUs may be overly burdensome at this time.
Certificate Renewal
Nationally recognized operator training courses require renewal of certificates. However, most professional certifications do not require retaking an entire course to renew certification, just passing an exam.
Most states require these certificates or copies to be readily accessible to the AHJ. Copies of certificates should be kept on file at the site and made available upon request.
If photocopies are provided as proof of certification or certificate renewal, the original documents should be provided within 72 hours upon request from the AHJ.
Certificate Suspension and Revocation
The AHJ is expected to contact course providers with questions about the validity of any certificate or with questions about an operator's performance. In turn, course providers are expected to readily provide verification of certificates, suspensions, and revocations, and to notify the AHJ of actions taken in response to its reported concerns.
The Food Protection Managers Certification Program Standards, Section 7.5 reflect the following, "A certification organization shall have formal certification policies and operating procedures including the sanction or revocation of the certificate. These procedures shall incorporate due process." 445
Additional Training or Testing
Reasons for requiring such training or testing include, but are not limited to, operator performance or new developments in technology or operation. Such situations include, but are not limited to, repeat or serious violations identified on inspection, an investigation implicating operation as a contributing factor to illness or injury, or implementation of substantial rule changes. Training can range from a brief dialogue during a POOL inspection to a full-day seminar for all operators in a jurisdiction. Testing can range from questions during inspection to paper- or computer-based exams.
Certificate Recognition
The MAHC aims to delegate authority to the AHJ both to choose to recognize individual certificates and to reverse its decisions if operators with certificates demonstrate inadequate knowledge or poor performance, or for other due cause.
Course Recognition
The MAHC aims to delegate authority to the AHJ to choose to recognize operator training courses and to reverse its decisions if operators demonstrate inadequate knowledge or poor performance, or for other due cause.
# Length of Certificate Validity
Training agencies should develop appropriate skills to address the variety of water depths in which a victim may be found. These skills should be trained not only for the technical aspects of the skill, but also for how the skill is incorporated into a venue's EMERGENCY ACTION PLAN. Lifeguards should be trained to respond, at a minimum within the scope of basic first aid skills, to provide care for illness or injury that may occur on land within the AQUATIC FACILITY until EMS arrives.
Resuscitation Skills
Lifeguards should be competent in CPR/AED at the professional-rescuer level. The predominant body for research on such skills is the International Liaison Committee on Resuscitation (ILCOR; www.ilcor.org). ILCOR currently reviews available research every five years and is composed of physicians and medical researchers from across the globe. One organization from each country/region of the world is assigned to interpret the science-based evidence and prepare guidelines for voluntary use by training agencies in that country/region. In the United States, this designated agency is the American Heart Association (AHA). The AHA collaborates with host groups, training agencies, and leaders in the field from nonprofit, educational, and commercial organizations to create the "Guidelines for CPR and ECC." 463 These recommendations are also commonly known as the "AHA Guidelines." Emergency Cardiovascular Care Update (ECCU; www.citizencpr.org) conferences are held biennially to present research and recommendations for guidelines. Detailed information about the process and current research is available on the ILCOR and ECCU websites.
First Aid
The evidence-based application of first aid skills is currently reviewed through the National First Aid Science Advisory Board; recommendations are published as a separate section of the AHA CPR and ECC Guidelines and are available at the website identified in MAHC Section 6.2.1.1.3.
Legal Issues
Lifeguards are part of the pre-hospital chain of response and should have a basic understanding of critical legal concepts such as consent, refusal of care, and negligence. Legal topics to be covered are not limited to those listed here. Training agencies are strongly recommended to add topics based on the typical environment in which the trained lifeguard will be employed.
Lifeguard Training Delivery
# Standardized and Comprehensive
A standardized method of training with comprehensive materials is essential to the implementation of a consistently-delivered lifeguard training program.
A specific method is not being recommended by the MAHC.
463 American Heart Association Guidelines, available at: /.
Skills Practice
While much of the necessary cognitive knowledge may be obtained through self-directed study, especially in an interactive online format, physical skills practice is necessary to develop an understanding of how to apply knowledge and identify the various needs in an emergency situation. During skills practice, an instructor can provide individualized learning approaches and corrective feedback and can lead simulations and scenarios.
# Shallow Water Training
It is important that the student lifeguard be able to practice and be tested in the deepest water specified in their certification.
# Deep Water Training
It is important that the student lifeguard be able to practice and be tested in at least the minimum water depth specified in their certification.
Sufficient Time
This CODE does not prescribe a particular length of time for courses. Instead, this CODE is more performance-based by requiring that all of the essential topics in MAHC Section 6. be covered.
Certified Instructors
The instruction of a course by an individual not directly authorized by the training agency is extremely problematic and risks the quality controls established by the training agency. This also places public SAFETY at risk, in that the unauthorized instructor may not be fully qualified to teach the materials as intended. It also affects the training agency in that there is no direct recourse against an unauthorized, and unqualified, instructor. Lifeguard certifications obtained from a lifeguard training course taught by an instructor who is not currently certified or authorized by the training agency to teach lifeguarding courses will not be recognized by the AHJ per MAHC Section 6.2.1.3.
# Minimum Prerequisites
The creation of minimum instructor prerequisites is crucial to creating quality and consistency for the training agency. Instructors who lack such experience may not fully understand the requirements and demands of a lifeguarding position and may not provide an experienced instructor's insight to students on how to apply the skills and knowledge found in the training agency curriculum.
It is necessary that lifeguard instructors have a firm understanding of the course they will be teaching. While it may be possible for an individual to pass a lifeguard instructor course without first taking a basic course, such an instructor would lack a firm understanding of the skills required by the training agency. It should be noted, however, that training agencies should have the ability to create curricula that allow an individual from another training agency, or an individual who chooses an alternative to a full basic-level course, to become an instructor.
A Lifeguard Instructor Training Course must also provide information to the instructor candidates on how to safely and effectively conduct a course, including:
- Knowledge of how to provide for the health and SAFETY of the students (e.g., knowing how to disinfect manikins for use);
- Ability to maintain adequate supervision at all times during in-water skills and to have a lifeguard on duty;
- Knowledge of how to effectively use program materials and training equipment as listed in MAHC Section 6.2.1.2.7;
- Ability to supervise student skill practice and provide timely, positive, and corrective feedback; and
- Knowledge and ability to evaluate students on meeting the criteria set forth by the training agency for which they are an instructor.
# Instructor Renewal/Recertification Process
The training agency must have a process in place for renewal/recertification of instructors. The process should identify the criteria that trigger reauthorization, such as requiring an instructor to teach a certain number of lifeguard courses in a certain time period (years) and/or to complete in-person or online updates as needed (e.g., when course materials or content have been revised).
# Quality Control
Quality instruction is crucial to the survival of a training agency and, in the case of lifeguard training, crucial to the SAFETY and well-being of millions of swimmers every year. Training agencies must have procedures that allow for the correction, remediation and, if necessary, the revocation of instructor credentials.
Training Equipment
These pieces of equipment are required to accomplish the objectives of lifeguard training as outlined in the CODE. It is educationally sound to provide enough equipment based on the number of students who will be using it at the same time. Below is a listing:
- Four triangular bandages,
- One 3-inch roller bandage,
- A blanket or pillow, and
- A rigid splint such as a magazine, cardboard, or long and short boards;
- Spinal immobilization materials: backboards, each equipped with 3 straps and head immobilizers (one backboard for every three participants is recommended); if fewer backboards are available, additional time may be required.
# Competency and Certification
# Requirements
The readiness of lifeguard candidates to respond to aquatic-based emergencies should be assessed thoroughly for skill mastery, knowledge, and practical application prior to issuing a certificate. With regard to a written exam, all nationally recognized training agencies currently require an 80% correct answer rate as the minimum threshold for passing.
Instructor Physically Present
The physical presence of the instructor of record assures that students are evaluated appropriately in both cognitive and physical testing. It also significantly reduces the risk of certifying individuals who lack the necessary basic skills and knowledge, whether through the substitution of another individual to provide testing or through student fraud.
Certifications
A certification issued at the end of a lifeguard course indicates that the individual successfully met the training requirements on the day of assessment. A completion certificate does not imply future performance or suitability in all circumstances. It is the responsibility of the employer to verify skills and ongoing competency suitable for the environment in which the lifeguard will be assigned through pre-service and in-service training.
# Number of Years
The United States Lifeguarding Standards Coalition (USLSC) final report, 464 the scientific review by the American Red Cross (ARC), 465 and the MAHC agree that lifeguarding skills need to be refreshed as often as possible. The ARC reviewed 12 peer-reviewed publications on CPR skill retention in healthcare providers (retraining intervals of 6 weeks to 24 months) and 28 papers focused on non-healthcare providers (retraining intervals of 3 to 48 months). 466 The data from these 40 studies (all measured manikin skills; none measured patient outcomes) showed significant CPR skill degradation within the first year after training in both job categories, and the majority of skill degradation occurred in the first year. None of the 40 studies documented adequate skill retention after two years, but several showed improved retention if a brief refresher was given at 6-12 months. As a result of this review, and given the low probability that lifeguards use the skill often enough on the job to retain it, the MAHC felt that the skills needed to be refreshed every year through re-certification. The MAHC did not think that the convenience of aligning the length of valid certifications for lifeguarding and first aid at two years overrode the strong data showing CPR skill degradation over two years that could put BATHER health at risk. The time periods listed in the MAHC are acceptable only if ongoing in-service and pre-service STANDARDS are followed.
# Documentation
In order to verify compliance with MAHC Section 6.2.1.3.5, requiring the expiration date of the certification allows employers and the AHJ to identify that the lifeguard has a current certification.
When first hired, lifeguarding skills should be assessed during pre-service training prior to the first duty assignment. In-service training should assess skills on a regular basis to determine ability for ongoing duty assignments. Training agencies require that employees have the training, knowledge, and proper equipment to protect the employee and the PATRON against disease transmission. This level of awareness must be in place before active PATRON surveillance takes place.
All lifeguard training agencies require lifeguards to be able to perform a combined rescue skill with equipment to receive completion certification. All lifeguard training agencies train their lifeguards that they must be able and ready to recognize, respond, rescue, and resuscitate a victim as quickly as possible. The employer should verify that the lifeguard maintains these skills in the workplace.
Documentation of Pre-Service Training
Documentation provides a method for the AHJ to verify compliance. An example of the type of documentation required is a skills check-off form with a participant attendance sheet.
In-Service Training
# Documentation of In-Service Training
All lifeguard training agencies support the need for ongoing in-service training. Both ANSI/APSP-1 and ANSI/APSP-9 state that certain topics be covered in this training. These in-service trainings should include all of the SAFETY PLANS and the in-water and out-of-water rescue skills for lifeguards.
The United States Lifeguarding Standards Coalition final report, 473 the scientific review by the American Red Cross, 474 and the MAHC agree that lifeguarding skills need to be refreshed as often as possible. The Texas state POOL code requires at least 4 hours of in-service training a month. Other states require that in-service training be documented and signed. The MAHC agrees that all AQUATIC FACILITIES should have an ongoing in-service program for their SAFETY TEAM members.
The term "periodic" is to offer flexibility to the QUALIFIED OPERATOR based on their seasonality, staff scheduling, and the training agency requirements.
In-Service Documentation
Documentation is maintained at the AQUATIC FACILITY to provide a method for the AHJ to verify compliance during an inspection. Documentation is crucial to prove that the in-service training took place, and it should include a list of the topics covered, who was in attendance, and the date and time of the training.
Competency Demonstration
The point of this section is to have the skills performed consecutively and not individually, as they may be done in some training classes. If all of these skills cannot be performed consecutively, a successful rescue is difficult to expect. This is not intended to preclude scenario-based activities that accomplish the same.
AHJ Authority to Approve Safety Plan
Some jurisdictions will have the resources to review the SAFETY PLAN and others may not. These line items allow for that flexibility, but as a matter of enforcement, the submittal of the SAFETY PLAN is required in either scenario. Should an incident occur that the jurisdiction is investigating, the SAFETY PLAN on file would be a good point of reference. The MAHC agreed that there needs to be a SAFETY PLAN that is retained and available for review by the AHJ as a point of reference detailing the intended operation, to compare against the operation observed in the field.
# Safety Plan on File
The SAFETY PLAN itself should be a tool for facility staff to utilize and as such should be present at the AQUATIC FACILITY and not merely a book sitting on a shelf in an administrative office.
Safety Plan Implemented
These MAHC sections are written to be performance-based, and since each AQUATIC FACILITY is different, each SAFETY PLAN may be different. The SAFETY PLAN is developed as a written document that establishes the processes the AQUATIC FACILITY will employ to comply with the CODE. It is important that those processes, although written, also be practiced and in evidence, so the AHJ can compare the operation to what is written in the SAFETY PLAN and confirm compliance with the CODE. During routine inspections, the AHJ may want to see the SAFETY PLAN for the AQUATIC FACILITY as a point of reference, but also to enforce the requirement of the CODE to have a plan.
Staff Management
# Lifeguard Staff
# Minimum Number of Lifeguards
Parts of POOLS or additional POOLS within the same AQUATIC FACILITY may not be open at all times during any given day. For example, only three lanes of a large POOL may be open during early-morning lap swim. All zones of PATRON surveillance must be staffed unless the AQUATIC FACILITY can effectively limit access to only the lap lanes. A potential problem arises, though, when the entire POOL is not under surveillance, because a PATRON in the open section may move to a section/zone not intended to be open; without surveillance, this may go unnoticed. So the ability to restrict access and monitor, or otherwise assure that no one enters the unopened section/zone, must be effectively addressed, and those details must be included in the SAFETY PLAN.
Lifeguard Responsibilities
QUALIFIED LIFEGUARDS are the front-line personnel at an AQUATIC FACILITY and witness most of the situations in which an AQUATIC FACILITY or AQUATIC VENUE should be closed. The QUALIFIED LIFEGUARD must be aware of these emergency closure issues in order to enforce them; examples include an inability to see the bottom or main drains, fecal accidents, severe weather, and others developed by the MAHC.
Because there is no established guideline for the vision needed for the job of a QUALIFIED LIFEGUARD, the MAHC agreed that if an individual QUALIFIED LIFEGUARD has vision corrected via lenses, they should wear them while conducting PATRON surveillance. Further research needs to be done in this area. Some professions require a minimum non-corrected vision STANDARD, while others accept vision corrected to a certain level.
# Shallow Water Certified Lifeguards
If a training agency issues a shallow water certification, the shallow water lifeguard is not qualified to be stationed in a zone that has a water depth greater than that identified for the certification. If any part of the zone has a depth of water greater than that depth, the shallow water lifeguard is not qualified to be assigned to that zone.
Direct Surveillance
The factors of recognition, intrusion, and distraction have been identified as major contributors to drowning in guarded venues. Nothing should be allowed to interfere with a lifeguard's duty to perform PATRON surveillance. The MAHC agreed that QUALIFIED LIFEGUARDS performing PATRON surveillance should not be doing other tasks that could distract them.
When on duty, a QUALIFIED LIFEGUARD should scan and supervise the AQUATIC VENUE with no other distracting activities, such as cleaning or water testing, and should minimize unnecessary conversation with PATRONS.
Distractions
When QUALIFIED LIFEGUARDS are engaged in conversations while performing PATRON surveillance activities, their attention is distracted from surveillance. As a parallel, research has shown that even hands-free cell phone conversations can distract drivers. 480
Supervisor Staff
# Lifeguard Supervisor Required
The LIFEGUARD SUPERVISOR fulfills the role of holding QUALIFIED LIFEGUARDS accountable for performing well and making sure rotations are conducted properly. It is critical that QUALIFIED LIFEGUARDS perform their duties as trained and that the risk factors affecting the QUALIFIED LIFEGUARD's ability to perform have been mitigated.

Low or absent disinfectant levels lead to reduced inactivation of pathogens, and these conditions have been associated with infectious disease outbreaks. 487 Low pH has been associated with loss of dental enamel. 488,489,490 Dental erosion begins to occur below pH 6.0 and rapidly accelerates as the pH drops. High pH reduces the efficacy of CHLORINE-based DISINFECTION by reducing the amount of molecular hypochlorous acid (HOCl), the active form that is available for DISINFECTION. At pH 7.0, about 70% of the hypochlorous acid is molecular; at pH 7.5, about 50%; at pH 8.0, about 20%; and at pH 8.5, only 10%. As a result, the MAHC decided to set upper and lower limits for pH as an IMMINENT HEALTH HAZARD.
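The pH-dependent percentages above follow from the acid-base equilibrium of hypochlorous acid. A minimal sketch, assuming a pKa of about 7.5 (a commonly cited value near pool temperatures), reproduces the rounded figures via the Henderson-Hasselbalch relationship.

```python
PKA_HOCL = 7.5  # assumed pKa of hypochlorous acid near pool temperatures

def hocl_fraction(ph: float, pka: float = PKA_HOCL) -> float:
    """Fraction of free chlorine present as molecular HOCl."""
    # Henderson-Hasselbalch: [OCl-]/[HOCl] = 10^(pH - pKa)
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

if __name__ == "__main__":
    for ph in (7.0, 7.5, 8.0, 8.5):
        print(f"pH {ph}: {hocl_fraction(ph):.0%} HOCl")
    # ~76%, 50%, 24%, 9% -- consistent with the rounded figures above
```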
# Operations
# Operations Manual
# Develop
The facility design consultant can provide valuable assistance with preparation of a manual based on their knowledge of the physical system. The facility owner/operator must provide their preferences for operation and maintenance activities, based on location, climate, programs, budget, etc.
Include
A manual for the operation of AQUATIC FACILITIES should be kept at the facility, in both printed and digital formats. The manual should include basic information, chemical data, and operation and maintenance instructions about each POOL, SPA, and spray ground feature at the facility. The manual should be updated on a regular basis to include added features, renovation work, and new code requirements.
# Chemical Data
The operations manual should also provide chemical data for each chemical system in the facility. This includes, but is not necessarily limited to, the following:
- Description of chemicals provided for primary disinfectant, pH adjustment, alkalinity adjustment, stabilizer, SUPERCHLORINATION, coagulant, filter aid, etc.;
- SECONDARY DISINFECTION SYSTEM description, if provided (UV, ozone, other);
- Type of chemical feed equipment and rated capacities;
- Discussion of water treatment goals and range of chemical targets;
- Description of chemical testing equipment;
- Testing frequency and location for each test;
- Chemical controller information, probe cleaning, and calibration procedures;
- Water testing log forms for chemical results; and
- Chemical supplies (STORAGE quantity, providers, safety procedures).
# Facility Operation Info
The operations manual should also provide instructions for AQUATIC FACILITY operations. These instructions should include, but not necessarily be limited to, the following:
- Filter backwash or cleaning schedule and procedure;
- Periodic vacuuming and cleaning schedule and procedures;
- Seasonal cleaning procedures;
- SUPERCHLORINATION basis and procedure;
- Controller sensor maintenance (if applicable);
- Preventive maintenance tasks and schedule;
- Winterizing procedures; and
- Start-up and closing procedures.
# Maintenance Instructions
The operations manual should provide instruction for proper maintenance for the facility. Both daily and seasonal or periodic maintenance will be required for the AQUATIC FACILITY. Available time and budget must always be balanced with the maintenance need. Regardless of whether the facility is large or small, frequent maintenance is more effective and more efficient than waiting until a larger problem occurs.
- Provide an inventory of available maintenance equipment and materials;
- Develop a daily maintenance schedule;
- Develop a schedule for periodic or seasonal maintenance; and
- Create a maintenance log with date and activity for future planning and budgeting.
# Policies & Management ANNEX 298
# Office Management
The operations manual also provides office management information for the facility. This manual should include, but not be limited to, the following:
- Active and inactive records and general file information;
- Forms for water test results and filter cleaning frequency;
- Forms for inventory of chemicals, equipment, cleaning supplies, etc.;
- Maintenance inspection forms for facility, equipment, and structures;
- Maintenance work forms;
- Requisition forms for purchasing based on facility policies;
- Staff evaluation forms log;
- POOL operation log (water quality, attendance, weather, open hours, injuries, complaints, equipment issues, etc.); and
- Security (opening and closing, underwater lighting, overhead lighting, doors, windows, alarms, bank deposits, etc.).
# Personnel Records
Accurate records should be maintained for all personnel.
The options for this category are varied and numerous; a list of personnel items can serve as an outline and a starting point for developing an operations manual.

Illness and Injury Incident Reports
Aquatic injuries and illnesses can occur after normal office working hours; therefore, a 24/7 system for reporting and responding to injury and illnesses at AQUATIC VENUES must be maintained. Early reporting and intervention could reduce the spread of illness or prevent additional injury.
Notify the AHJ
The POOL owner/operator should immediately report to the permit-issuing official any injuries resulting in death or requiring emergency medical response, resuscitation, or transport to a medical facility, as well as any illness suspected of being associated with bathing water quality or use of the AQUATIC FACILITY. The POOL owner/operator should have posted and available for use the routine and after-hours phone numbers necessary for reporting to the permit-issuing official. This will facilitate a rapid investigation of the incident and could help limit further spread of disease and additional injuries.
Most jurisdictions have some reporting requirements. This section is more comprehensive than the existing reporting requirements of many jurisdictions. Prompt reporting of significant injuries or waterborne illness allows for the permit issuing agency to immediately assess the conditions at the AQUATIC VENUE to determine if it can continue to operate safely or must be closed. Prompt reporting and investigation also allows for more accurate investigations to determine the causes of injury and illness. This information can be used to prevent future injuries or illness.
# Contamination Incidents
The Body Fluid Contamination Response Log is an important part of the administrative procedures for the venue and will document, in the case of a subsequent fecal, vomit, or blood contamination incident, that an appropriate response was conducted. A sample Body Fluid Contamination Response Log is provided below:
Patron-Related Management Aspects
# Signage
The purpose of this is to limit injuries and the spread of communicable disease by direct contact with objects. Healthy swimming messages can also be put on posters hung in bathroom stalls, at the AQUATIC FACILITY entrance, on the back of ticket stubs, and in group-event contracts. Ideally, signage should be provided to encourage BATHERS to take a second shower, after using the toilet, before reentering the AQUATIC VENUE. While this requirement may be difficult to enforce, posting such signs may encourage compliance. Consider the needs of clients and provide effective communication, which could include signs in more than one language, Braille, etc.
Sign Messages
WATERSLIDES should also include content on their signs to comply with the manufacturer's recommendations. Minimum content should include: rider position, number of riders allowed at a time, dispatch instructions, water depth at slide exit, and height requirement if specified by the manufacturer.
# Spa Signs
See discussion on temperature and relevant data pertaining to SPA temperatures in MAHC Section 5.7.4.7.2. These data have been used to support wording for SPA venue signs.
# Suggested Spa Sign Content
- Post signs with suggested time limits (15 minutes);
- It is recommended that all SPAS have the following statement included on the signage: "Depth of spa is variable. Enter with caution;"
- Other suggested SPA and SAFETY equipment;
- Place time clocks with numbers large enough to read from a distance on a nearby wall in clear view of all users;
- Place a thermometer on the wall with numbers large enough to read from a distance, or place the thermometer in the SPA itself;
- Place a 15-minute timer on the water jets. The reset button should be placed at least 10 feet (3 m) from the tub so users must physically leave the tub to turn the water jets on again.
# Infants and Toddlers
Infants and toddlers are not recommended in a SPA. Small children are still developing internal temperature regulation, and infants in particular have a small body mass compared to body surface area. HOT WATER could cause hyperthermia, and a SPA seat is not designed for a small child to sit properly and keep their head above water.
Swimmer Empowerment Methods
# Public Information and Health Messaging
The MAHC felt strongly that public education and health communication with users should be required at any INDOOR AQUATIC FACILITY. This messaging should make clear the responsibility of the user to shower before entering the POOL and that they should not urinate in the POOL. It is known that urine and sweat contribute nitrogen to the POOL resulting in chloramines. By actively limiting the introduction of urine and sweat, the result should be fewer chloramines in the POOL and the air. A summary of health and exposure data can be found in MAHC Appendix 1: Summary of Health and Exposure Data for Chemical and Biological Contaminants.
# Post Inspection Results
Only a relatively small number of municipal organizations require public or web-based disclosure of inspection reports. However, as inspection activity is taxpayer supported, there is a growing trend toward requiring public disclosure. One recent example is the Beaches Environmental Assessment and Coastal Health (BEACH) Act of 2000, a federal act that requires public disclosure of coastal beach closings. Additionally, DeKalb County, Georgia, requires public posting of inspection results for AQUATIC FACILITIES, as well as posting them on the internet, similar to the ever-expanding requirement for posting inspection results at food service establishments. Posting inspections at AQUATIC FACILITIES will increase public awareness of aquatic SAFETY and health and encourage aquatic operators to comply with all code requirements.
Most jurisdictions require the permit to be conspicuously posted. This is to inform the public that the facility has met the minimum SAFETY STANDARDS required by law.
# Fecal/Vomit/Blood Contamination Response
The following discussion gives the rationale behind the remediation recommendations. Fecal contamination of recreational water is an increasing problem in the United States and other countries. Since the mid-1980s, the number of outbreaks of diarrheal illness 491 associated with recreational water has been increasing in the United States. 492 Of these outbreaks, those at disinfected, man-made swimming venues, the target of the MAHC, have had the greatest increase. These outbreaks are usually a result of people swimming while they have infectious, pathogen-containing diarrhea caused by pathogens such as Cryptosporidium, Giardia, Shigella, Salmonella, or E. coli O157:H7. Contamination of swimming water by infected persons, and subsequent swallowing of contaminated water by other swimmers, continues the spread of diarrheal illness.

491 American Academy of Pediatrics, et al. (2002). Caring for Our Children: National Health and Safety Performance Standards; Guidelines for Out-of-Home Child Care Programs, 2nd edition. Elk Grove Village, IL: American Academy of Pediatrics and Washington, DC: American Public Health Association.
Diarrheal illness is common in the United States, with surveys indicating that 7.2%-9.3% of the general public have had diarrhea in the previous month. 493 Additional studies demonstrated that people routinely have a mean of 0.14 grams (range: 0.1 to 10 grams) of fecal contamination on their buttocks and peri-anal surface. 494 The increase in outbreaks, the high prevalence of diarrheal illness in the public, and the likelihood of frequent fecal contamination of POOLS by BATHERS raised the question of how to respond to overt fecal releases in POOLS, particularly formed stools, which are more visible. The need to develop a response plan was amplified by the emergence of the CHLORINE-tolerant parasite Cryptosporidium as the leading cause of disinfected-venue-associated outbreaks of diarrheal illness. Formed stools were thought to be a significantly lower risk for spreading illness compared to diarrhea, since most pathogens are shed in the greatest numbers in diarrhea. As the highest-risk material, diarrhea was considered the worst-case contamination scenario that could potentially contain Cryptosporidium; a response should therefore require the extreme treatment conditions needed to inactivate Cryptosporidium. Formed stool was assessed as a lower risk than diarrhea, but several questions remained. Should formed stools be treated as potentially infectious materials? If so, should the stool be treated as a potential Cryptosporidium contamination event like diarrhea (i.e., a longer inactivation time), or could it be treated to inactivate pathogens other than Cryptosporidium (i.e., a shorter inactivation time)?
To collect data relevant to answering the questions above, a study to collect fecal releases from POOLS in the United States was conducted in 1999. POOL staff volunteers from across the United States collected almost 300 samples from fecal incidents that occurred at water parks and POOLS. 495 The Centers for Disease Control and Prevention then tested these samples for Cryptosporidium and Giardia. Giardia was chosen as a representative surrogate for moderately CHLORINE-resistant pathogens like hepatitis A virus and norovirus. Using conditions to inactivate Giardia would inactivate most pathogens other than Cryptosporidium. None of the sampled feces tested positive for Cryptosporidium, but Giardia was found in 4.4% of the samples collected. These results suggested that formed fecal incidents posed only a very small Cryptosporidium threat but should be treated as a risk for spreading other pathogens such as Giardia. As a result of these data and the discussion above, it was decided to treat formed stools as potential Giardia contamination events, and liquid stool as potential Cryptosporidium contamination events.
It was thought that norovirus contamination posed the greatest threat from vomit contamination and that the virus would be inactivated by a formed stool response using Giardia inactivation times as discussed above. Further assessment also suggested that blood contamination of POOL water posed little health risk due to the sensitivity of bloodborne pathogens (e.g., viruses, bacteria) to environmental exposure, dilution in the water, and chlorination. In addition, POOL water exposures would lack the requisite bloodborne exposure routes needed to spread the pathogens to other people.
# Contamination Response Plan
The Fecal/Vomit/Blood CONTAMINATION RESPONSE PLAN is a vital part of the administrative procedures for the venue. All staff associated with the operation of the POOL should be aware of the response plan and trained in implementation procedures. At least one responder should be available on-site during all hours of operation.
# Contamination Training
# Minimum
A staff member trained in fecal/vomit/blood contamination response should be on site during all operational hours. OSHA discusses occupational issues related to potential bloodborne pathogen exposure in the Bloodborne Pathogens Standard, 29 CFR 1910.1030, 496 with further discussion under General Guidance 497 and the OSHA Fact Sheet: OSHA's Bloodborne Pathogens Standard. 498
# Aquatic Venue Water Contamination Response
# 6.5.2.2 Physical Removal
# No Vacuum Cleaners
Questions are often received concerning the MAHC recommendation to NOT VACUUM fecal material from the POOL. When the material is drawn through the vacuum, the vacuum itself is then contaminated and must be disinfected. At the present time, the MAHC is not aware of any manufacturer that has a decontamination protocol for disinfecting fecal-, vomit-, or blood-contaminated POOL vacuum units.
# Treated
Many conventional test kits cannot measure FREE AVAILABLE CHLORINE levels up to 20 mg/L. Operators should use, in order of preference, a FAS-DPD titration test kit with or without dilutions using CHLORINE-free water, or use test strips that measure FREE AVAILABLE CHLORINE in a range that includes 20 mg/L. FAS-DPD should be used instead of a color comparator DPD test. The inactivation time should only be started once testing indicates that the intended FREE CHLORINE RESIDUAL has been reached.

496 OSHA. Bloodborne pathogens and needlestick prevention standards. Accessed: 5/1/2013.
497 OSHA. Bloodborne pathogens and needlestick prevention. Accessed: 5/1/2013.
498 OSHA. Fact Sheet: OSHA's bloodborne pathogen standards. Accessed: 5/1/2013.
It is important that the operator use a non-stabilized CHLORINE product when raising the FREE CHLORINE RESIDUAL particularly when raising to high levels such as 40 mg/L. If a stabilized product such as dichlor or trichlor were used, a high level of cyanuric acid would remain in the POOL after the HYPERCHLORINATION process. The cyanuric acid level in POOL water can only be lowered by dilution of POOL water with make-up water. Since CHLORINE products degrade over time, it is not recommended that non-stabilized CHLORINE products be stored in case of a fecal incident. The operator could either purchase a non-stabilized product at a POOL supply store or buy unscented household bleach (sodium hypochlorite) product that has a label indicating it is EPA-REGISTERED for use as a drinking water disinfectant.
# Aquatic Venue Water Contamination Disinfection
# 6.5.3.1 Formed-Stool Contamination
For formed-stool contamination, a free CHLORINE value of 2 mg/L was selected to keep the POOL closure time to approximately 30 minutes. Other CHLORINE concentrations or closure times can be used as long as the CT inactivation value is kept constant. The CT VALUE is the concentration (C) of FREE AVAILABLE CHLORINE in mg/L multiplied by time (T) in minutes: CT VALUE = C x T.
For formed-stool contaminated water, the CT VALUE for Giardia (45) is used as the basis for calculations, as illustrated in the sketch below:
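As a minimal sketch (not MAHC code language; the helper name and the optional stabilizer flag are illustrative), the closure time implied by any CT VALUE can be computed directly. The 2 mg/L example reproduces the roughly 30-minute closure cited above, and the stabilizer option applies the safety factor of 2 described in the next subsection:

```python
# Hypothetical helper: pool closure time implied by a CT inactivation value.
# CT units are (mg/L) x minutes.

GIARDIA_CT = 45  # CT VALUE used for formed-stool (Giardia) responses

def closure_time_minutes(ct_value, free_chlorine_mg_per_l, stabilized=False):
    """Minimum closure time at a given free available chlorine residual.
    When cyanuric acid is present, the CT value is doubled, reflecting
    the safety factor of 2 discussed in the next subsection."""
    if stabilized:
        ct_value *= 2
    return ct_value / free_chlorine_mg_per_l

print(closure_time_minutes(GIARDIA_CT, 2.0))        # 22.5 minutes (~30 min closure)
print(closure_time_minutes(GIARDIA_CT, 2.0, True))  # 45.0 minutes with stabilizer
```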
# Pools Containing Chlorine Stabilizers
CHLORINE stabilizers such as cyanuric acid slow DISINFECTION; therefore, higher CHLORINE levels are likely necessary to reach the CT VALUE for Giardia inactivation in POOLS using CHLORINE stabilizers. However, at this time there is no standardized protocol to compensate for CHLORINE stabilizers and no data determining how the inactivation of Giardia is affected by CHLORINE stabilizers under POOL conditions. A SAFETY factor of 2 has been incorporated until these data can be gathered.
# Diarrheal-Stool Contamination
For diarrheal-stool contamination, inactivation times are based on Cryptosporidium inactivation times. The CT VALUE for Cryptosporidium is 15,300. If a different CHLORINE concentration or inactivation time is used, an operator must ensure that the CT VALUE remains the same.
For example, to determine the length of time needed to disinfect a POOL at 20 mg/L after a diarrheal accident, use the following formula: C x T = 15,300.
Solve for time: T = 15,300 ÷ 20 mg/L = 765 minutes = 12.75 hours.
Therefore, it would take 12.75 hours to inactivate Cryptosporidium at 20 mg/L. Equivalent concentration-time combinations can be regenerated as shown below.
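The concentration-time table referenced in the original did not survive document conversion; because every row follows from CT = 15,300, the equivalent combinations can be regenerated. A sketch, with illustrative concentrations:

```python
# Regenerate equivalent hyperchlorination combinations from CT = 15,300.
CRYPTO_CT = 15_300  # (mg/L) x minutes, 3-log Cryptosporidium inactivation

for conc in (10.0, 20.0, 40.0):
    minutes = CRYPTO_CT / conc
    print(f"{conc:>5.1f} mg/L -> {minutes:,.1f} min ({minutes / 60:.2f} h)")

# Output:
#  10.0 mg/L -> 1,530.0 min (25.50 h)
#  20.0 mg/L -> 765.0 min (12.75 h)
#  40.0 mg/L -> 382.5 min (6.38 h)
```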
The CT VALUE used is for a 3-log inactivation to achieve a decrease in the concentration of oocysts below one infectious dose per volume of water swallowed (1 oocyst/100 mL). Similar to the assumptions made for secondary disinfection (see MAHC Section 4.7.3.3.2.5), this calculation assumes a single contamination event (e.g., a diarrheal incident) of ~100 mL could introduce 10^8 Cryptosporidium OOCYSTS into the water. 499,500 This allows for a safety factor to include smaller volume venues and still achieve the required concentration. An additional safety factor not included is the impact of the filtration system, since filter oocyst removal efficacy varies widely. This may be more quantifiable in the future so that it could be included in the calculation. Volume calculations indicate that small volume AQUATIC VENUES like splash pads should be able to achieve this goal by using the CT VALUE cited:
10^8 oocysts / 10,000 gallons = 10^8 oocysts / (10,000 gallons x 3,785.4 mL/gallon) = 2.64 OOCYSTS/mL = 264 OOCYSTS/100 mL. With the 3-log inactivation, this volume will contain 0.264 OOCYSTS per 100 mL, which is below the required one OOCYST/100 mL; larger volume facilities will exceed this requirement.
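The same arithmetic can be checked in a few lines; a sketch, assuming the 10,000-gallon worst-case venue and the 10^8-oocyst release used above:

```python
# Dilution check: 10^8 oocysts in a 10,000-gallon venue, 3-log inactivation.
ML_PER_GALLON = 3785.4

oocysts = 1e8
volume_ml = 10_000 * ML_PER_GALLON            # about 3.785e7 mL

before = oocysts / volume_ml * 100            # oocysts per 100 mL
after = before / 1_000                        # after a 3-log (1,000x) reduction

print(f"{before:.0f} oocysts/100 mL before treatment")   # 264
print(f"{after:.3f} oocysts/100 mL after treatment")     # 0.264 (< 1, target met)
```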
# Pools Containing Chlorine Stabilizers
CHLORINE stabilizers such as cyanuric acid slow DISINFECTION; therefore, higher CHLORINE levels may be necessary to reach the CT VALUE for Cryptosporidium inactivation in POOLS using CHLORINE stabilizers. Limited data suggest that a 3-log inactivation of Cryptosporidium is possible in more extreme conditions when 50 PPM cyanuric acid was present in the water (pH of 6.5, FREE CHLORINE RESIDUAL of 40 mg/L). 501 The level of cyanurate mentioned above (i.e., 50 PPM) was the concentration used in the experiment and should not be construed as a suggested operating condition; POOL operators should not add additional cyanurate to a POOL to reach 50 PPM. Higher levels of stabilization (i.e., over 50 PPM) may or may not decrease DISINFECTION efficacy further, so more data are needed to address the issue.
Along with the pH level and FREE CHLORINE RESIDUAL, the cyanuric acid level should be checked and adjusted if necessary prior to reopening the POOL. Data are not currently available for remediation procedures with POOLS that contain stabilized CHLORINE or cyanuric acid. CDC has extrapolated current data and has the following suggestions for remediation.
In POOL water that contains a CHLORINE stabilizer such as cyanuric acid at under 50 mg/L, the pH should be lowered to 6.5 and the FREE CHLORINE RESIDUAL raised to 40 mg/L using a non-stabilized CHLORINE product and maintained for at least 30 hours, or for an equivalent time needed to reach the same CT VALUE as shown in MAHC Annex 6.5.3.2. Further data are being collected by CDC to better address HYPERCHLORINATION for Cryptosporidium in POOLS that use stabilizers.
Another remediation method is dilution: drain enough POOL water, replacing it with fresh make-up water, to bring the stabilizer down to 50 mg/L, and then follow the procedure above. If that cannot be accomplished, the POOL could be drained completely and scrubbed.
# Vomit-Contamination
For vomit contamination, the CT VALUE for norovirus is thought to be in the same range as Giardia, so the same CT VALUES are used as for a formed-stool contamination. 502
# Blood-Contamination
If the CHLORINE or bromine residual and pH are in a satisfactory range, there is no public health reason to recommend closing a POOL due to blood contamination. Data suggest that the risk posed by potential bloodborne pathogens is greatly diminished by dilution and normal FREE CHLORINE RESIDUAL levels. However, the operator may wish to temporarily close the POOL for aesthetic reasons or to satisfy PATRON concerns.
# Procedures for Brominated Pools
There are no inactivation data for Giardia or Cryptosporidium for bromine, nor any developed protocols for how to hyperbrominate a swimming POOL and inactivate pathogens that may be present in fecal matter or vomit. Therefore, POOL operators should use CHLORINE in their DISINFECTION procedures. It should also be noted that DPD test kits cannot differentiate between CHLORINE and bromine, because DPD undergoes the same chemical reaction with both. Therefore, it is important that the POOL'S bromine residual be measured before CHLORINE is added to the POOL. This bromine residual should be taken into consideration when determining that the FREE CHLORINE RESIDUAL necessary for the type of contamination has been met (i.e., the FREE CHLORINE RESIDUAL measured minus the bromine residual should be equal to or greater than the intended FREE CHLORINE RESIDUAL). If a DPD test kit with a CHLORINE comparator is used, the total bromine residual can be determined by multiplying the FREE CHLORINE RESIDUAL by a factor of 2.2. A sketch of this bookkeeping follows.
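As a hypothetical sketch of that bookkeeping (the function names are illustrative, not from the MAHC), using the 2.2 conversion factor stated above:

```python
# DPD reads both halogens, so subtract the pre-chlorination reading
# (which reflects bromine) before crediting chlorine toward the CT target.
BR_PER_CL_FACTOR = 2.2  # chlorine-scale DPD reading -> total bromine

def total_bromine(chlorine_scale_reading):
    """Total bromine (mg/L) implied by a DPD chlorine-comparator reading."""
    return chlorine_scale_reading * BR_PER_CL_FACTOR

def credited_free_chlorine(post_reading, pre_chlorination_reading):
    """Free chlorine (mg/L) credited toward the CT target."""
    return post_reading - pre_chlorination_reading

# Example: the pool read 2.0 mg/L (chlorine scale) before chlorine was added.
print(total_bromine(2.0))                  # 4.4 mg/L total bromine
print(credited_free_chlorine(22.0, 2.0))   # 20.0 mg/L counts toward CT
```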
# Surface Contamination Cleaning and Disinfection
# 6.5.4.1 Limit Access
Body fluids, including blood, feces, and vomit, are all considered potentially contaminated with pathogens. Therefore, spills of these fluids on the POOL DECK should be cleaned up immediately. Visible contamination should be removed first, followed by DISINFECTION of the contaminated surfaces.
# Clean Surface
The CDC protocol for cleaning body fluid spills from POOL DECKS, entitled "Cleaning up Body Fluid Spills on Pool Surfaces," can be found on the CDC Healthy Swimming/Recreational Water website.
These procedures are based on hospital infection control guidelines. 503
# Contaminant Removal and Disposal
Currently, there are no standardized procedures for removing CONTAMINANTS, particularly those found in biofilms/slime layers, in piping, or in AQUATIC FEATURES that spray or dump water. All water features should be well drained and disinfected per the manufacturer's instructions. Development of appropriate guidelines deserves further investigation and data gathering.
# Disinfect Surface
The efficacy of disinfectants is greatly impacted by the organic load on the surface to be disinfected. Reducing the organic load as much as possible through cleaning and removal of all visible contamination BEFORE adding disinfectant is critical to successful DISINFECTION. Contact times apply only if all visible organic material has been removed before DISINFECTION.
# AHJ Inspections
# Inspection Process
The AHJ has the authority to enter the facility for both routine inspections and to investigate reports of illness and injury. At the time of investigation, all records and facility personnel required for interviews must be available.
- This requirement does not apply to wiring inside walls/ceilings, etc., at an indoor POOL.
# Enforcement
# Enforcement Penalties
This is meant to apply to an AQUATIC FACILITY not making a good faith effort to correct the problem. This is not meant to apply to a closed AQUATIC FACILITY that is working on correcting an imminent health hazard or other violation (e.g., parts on order, maintenance scheduled).
# 7.0 MAHC Resources
# A Note about Resources:
The resources used in all MAHC modules come from peer-reviewed journals and government publications. No company-endorsed publications have been permitted to be used as a basis for writing code or annex materials.
# Codes Cited within the MAHC
# MAHC Appendices
# Appendix 2: Air Quality Formula
NOTE: Significant numbers of public comments were received regarding the proposed increase in required outdoor air above the ASHRAE 62 STANDARD. The commenters noted that the requirements would result in increased costs for equipment and operation while lacking adequate data to support the increase. Based on the potential negative impact and the need for additional research and data to differentiate the causes and sources of indoor air quality problems (e.g., design, inappropriate operation, inadequate maintenance), the MAHC Committee decided to defer to ASHRAE outdoor air requirements in this version of the MAHC. The Committee thought it important to preserve the work done by the Technical Committee, so the proposed code language for additional outdoor air has been moved to Appendix 2.

The following outlines the discussion by the MAHC Committee. One of the goals was to establish a more comprehensive formula than is currently published in the ASHRAE 62 ventilation document (e.g., adding additional air requirements to the minimum ASHRAE standards). The formula should include consideration for the type of feature as well as what type of water treatment is being utilized to maintain the water chemistry. The Committee realized early on that there is very little research on the off-gassing of chemicals at INDOOR AQUATIC FACILITIES. ASHRAE completed a preliminary research project 539 but did not perform detailed research on various AQUATIC VENUES and treatment methods. The Committee had to use the experience of its members on what was and was not working in the real world to modify the formula used in ASHRAE 62. In other words, the Committee started from the final answer it knew to be acceptable in practice and developed a modified formula that yielded those results. This formula calculated the minimum air required in ASHRAE 62 and then added additional air TURNOVER requirements depending on the type and area of AQUATIC FEATURE or DECK/spectator area.
The matrix was set up with three types of AQUATIC VENUES (FLAT WATER, AGITATED WATER, and HOT WATER), as each type of AQUATIC VENUE differs in how it affects air quality. One of the key drivers that the Committee identified as making these AQUATIC VENUES different was the expected THEORETICAL PEAK OCCUPANCY density. With increased BATHERS per unit volume of water, there is an increase in the organic contamination from the POOL users and thus in the presence of combined CHLORINE or combined bromine. The second factor was how much surface area of the AQUATIC VENUE water would come in contact with the air, increasing the expected off-gassing of chemicals. Design professionals' experience factored into the final cfm/ft² values: they knew from experience where the final number needed to be, added in reasonable density factors, and then addressed the individual characteristics of the AQUATIC VENUES, including splashing at the surface and the temperature of the water.
To calculate the minimum cfm of fresh air required:
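The formula itself did not survive conversion of this appendix, so nothing below is a MAHC value. As a purely illustrative sketch of the structure described above (an ASHRAE 62 base amount plus per-area adders keyed by venue type), with every cfm/ft² factor a placeholder:

```python
# Illustrative structure only: replace the placeholder adders with the
# adopted cfm/ft^2 factors; the values below are NOT MAHC numbers.
ADDER_CFM_PER_FT2 = {
    "flat_water": 0.0,       # placeholder
    "agitated_water": 0.0,   # placeholder
    "hot_water": 0.0,        # placeholder
    "deck_spectator": 0.0,   # placeholder
}

def minimum_outdoor_air_cfm(ashrae62_base_cfm, areas_ft2):
    """ASHRAE 62 minimum plus per-area adders keyed by venue type."""
    extra = sum(ADDER_CFM_PER_FT2[kind] * area for kind, area in areas_ft2.items())
    return ashrae62_base_cfm + extra

print(minimum_outdoor_air_cfm(5_000, {"agitated_water": 4_000, "deck_spectator": 2_500}))
```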
# Appendix 3: Dye Test Procedure
Dye testing should be performed to determine and adjust the performance of the RECIRCULATION SYSTEM. Dye studies tend to be qualitative in nature. 540 Some judgment is generally required to determine whether a dye study should be classified as passing or failing. In general, dead zones (areas of poor circulation) indicate a failure that could be fixed by adjusting the INLETS or other system hydraulics. If the POOL does not reach a uniform color within 15 minutes, then adjustments are required.
4. ... react with the dye. Come back the following day to make sure there is no CHLORINE, and likewise on the day of the dye study.
5. Prepare the pump by attaching the tubing to the existing piping and calibrate the flow rate to 700 mL/min. At this flow rate, the stock solution of dye will be injected into the POOL over a 16-minute period. Tube clamps may be used to secure the connection between the tubing and the connectors.
6. Prepare the filter room by laying down a trash bag (or similar item) as protection from a potential chemical spill/leak. Then place the pump and the containers holding the dye stock solution and sodium hypochlorite solution on the plastic cover.
7. Prepare a location in the pipe network (preferably after the filter) to inject the chemicals. If a location does not already exist (e.g., an existing CHLORINE feed or acid feed point), then one will need to be made by tapping the pipe and inserting the proper fitting.
8. Attach the tubing from the pump to the existing or newly created injection point.
# Materials
Depending on what fitting is present, you might need an adapter for the tubing.
The other end of the tubing should be placed in the chemical container holding the dye.
9. Make sure all assistants are in place to record video, take pictures, collect data, and time the injection to the 15-minute pass/fail observation point.
10. When ready to start, turn on the pump. The dye should begin to flow into the POOL. Start the timer at the same time as the pump is turned on (pump on, time (t) = 0 min). The stock dye solution should be depleted in 16 minutes. After 16 minutes, turn the pump off so that air will not be introduced into the system.
11. Record the time when the dye is first observed coming into the POOL.
12. Record the time when the POOL water is completely dyed (having uniform color).
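A quick check of the injection arithmetic in steps 5 and 10 (values from the procedure above):

```python
# Stock dye volume implied by the pump calibration in steps 5 and 10.
FLOW_ML_PER_MIN = 700
INJECTION_MIN = 16
PASS_FAIL_MIN = 15  # pool must reach uniform color within 15 minutes

stock_volume_l = FLOW_ML_PER_MIN * INJECTION_MIN / 1000
print(f"Stock dye solution required: {stock_volume_l:.1f} L")  # 11.2 L
print(f"Pass/fail observation point: {PASS_FAIL_MIN} min after injection starts")
```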
"id": "078516ea49cefcb64ca30c42c82b098445f4c6e9",
"source": "cdc",
"title": "None",
"url": "None"
} |
Outbreaks of serogroup C meningococcal disease (SCMD) have been occurring more frequently in the United States since the early 1990s, and the use of vaccine to control these outbreaks has increased. These outbreaks are characterized by increased rates of disease among persons who may have a common organizational affiliation or who live in the same community. By using surveillance for SCMD and calculation of attack rates, public health officials can identify SCMD outbreaks and determine whether use of meningococcal vaccine is warranted. This report describes 10 steps for evaluation and management of suspected SCMD outbreaks. The principles described also apply to suspected outbreaks caused by meningococcal serogroups A, Y, and W-135. The effectiveness of mass chemoprophylaxis (administration of antibiotics to large populations) has not been demonstrated in most settings in which community and organizational outbreaks occur. However, in outbreaks involving small populations, administration of chemoprophylaxis to all persons within this group may be considered. The ability to validate some aspects of these recommendations is currently limited by incomplete reporting of serogroup information in most systems for meningococcal disease surveillance in the United States and by the relative rarity of SCMD and SCMD outbreaks.
# INTRODUCTION
In the United States, outbreaks of serogroup C meningococcal disease (SCMD) have been occurring more frequently since the early 1990s, and the use of meningococcal vaccine to control these outbreaks has increased. During 1980-1993, 21 outbreaks of SCMD were identified, eight of which occurred during 1992-1993 (1). Each of these 21 outbreaks involved from three to 45 cases of SCMD, and most outbreaks had attack rates exceeding 10 cases per 100,000 population, which is approximately 20 times higher than rates of endemic SCMD. During 1981-1988, only 7,600 doses of meningococcal vaccine were used to control four outbreaks, whereas from January 1992 through June 1993, 180,000 doses of vaccine were used in response to eight outbreaks.
The decision to implement mass vaccination to prevent meningococcal disease depends on whether the occurrence of more than one case of the disease represents an outbreak or an unusual clustering of endemic meningococcal disease. Because the number of cases in outbreaks is usually small, this determination is not easily made without evaluation and analysis of the pattern of disease occurrence. Mass vaccination campaigns are expensive, require a massive public health effort, and can create unwarranted concern among the public. However, mass vaccination can prevent unnecessary morbidity and mortality. This report provides public health professionals (i.e., epidemiologists in state and local health departments) with guidelines for determining whether mass vaccination should be implemented to prevent meningococcal disease.
# BACKGROUND
Meningococcal disease is an infection caused by Neisseria meningitidis. Meningococcal disease manifests most commonly as meningitis and/or meningococcemia that can progress rapidly to purpura fulminans, shock, and death. N. meningitidis is transmitted from person to person via respiratory secretions; carriage is usually asymptomatic.
# Endemic Disease
In the United States, rates of endemic SCMD have remained unchanged at approximately 0.5 cases per 100,000 population per year (2). Most of these cases are sporadic and are not epidemiologically associated with other SCMD cases. Secondary and co-primary SCMD cases sometimes occur among close contacts of persons with primary disease; however, such cases are rare, primarily because close contacts are administered chemoprophylaxis (3).
# Control of Outbreaks
SCMD outbreaks represent a different epidemiologic phenomenon than does endemic SCMD. SCMD outbreaks are characterized by increased rates of disease among persons who may have a common organizational affiliation or who live in the same community yet do not have close contact. By using the guidelines contained in this report, public health officials can identify SCMD outbreaks and determine whether the use of meningococcal vaccine is warranted. Meningococcal vaccine is recommended for the control of SCMD outbreaks, which often affect older children and adults, for whom vaccination is effective.
The benefit of vaccination for control of SCMD outbreaks is difficult to assess because the pattern of disease occurrence is unpredictable and the numbers of cases are usually small. However, in three recent SCMD outbreaks in the United States during which vaccination campaigns were conducted, additional SCMD cases occurred only among nonvaccinated persons in the group targeted for vaccination (1), suggesting that additional SCMD cases probably were prevented by vaccination.
# Outbreak Settings
In the United States, SCMD outbreaks have occurred in organizations and communities. In a community-based outbreak, identifying groups most likely to benefit from vaccination is more difficult because communities include a broad range of ages among whom risk for disease and vaccine efficacy vary. Thus, the recommendations for evaluation and management of organization-based and community-based outbreaks are considered separately.
# DEFINITIONS
In this report, the following definitions are used (4):
# Case Definitions
A confirmed case of SCMD is defined by isolation of N. meningitidis serogroup C from a normally sterile site (e.g., blood or cerebrospinal fluid) from a person with clinically compatible illness. A probable case of SCMD is defined by the detection of serogroup C meningococcal polysaccharide antigen in cerebrospinal fluid (by latex agglutination or counterimmunoelectrophoresis) in the absence of a diagnostic culture from a person with clinically compatible illness.
# Close Contacts
Close contacts of a patient who has meningococcal disease include a) household members, b) day care center contacts, and c) persons directly exposed to the patient's oral secretions (e.g., through mouth-to-mouth resuscitation or kissing) (3).
# Primary, Secondary, and Co-Primary Cases
A primary case is one that occurs in the absence of previous known close contact with another case-patient. A secondary case is defined as one that occurs among close contacts of a primary case-patient greater than or equal to 24 hours after onset of illness in the primary case-patient. If two or more cases occur among a group of close contacts with onsets of illness separated by less than 24 hours, these cases are considered to be co-primary.
# Organization-and Community-Based Outbreaks
An organization-based outbreak of SCMD is defined as the occurrence of three or more confirmed or probable cases of SCMD during a period of less than or equal to 3 months in persons who have a common affiliation but no close contact with each other, resulting in a primary disease attack rate of at least 10 cases per 100,000 persons. In instances where close contact has occurred, chemoprophylaxis should be administered to close contacts. Organization-based outbreaks have recently occurred in schools, universities, and correctional facilities (1). Investigation of organization-based outbreaks may reveal even closer links between patients than suggested by initial reports. For example, data from an investigation of one outbreak at a school indicated that all persons who had meningococcal disease had ridden the same school bus (5).
A community-based outbreak of SCMD is defined as the occurrence of three or more confirmed or probable cases during a period of less than or equal to 3 months among persons residing in the same area who are not close contacts of each other and who do not share a common affiliation, with a primary attack rate of at least 10 cases per 100,000 population. Community-based outbreaks have occurred in towns, cities, and counties (1). Distinguishing whether an outbreak is organization-based or community-based is complicated by the fact that, in some instances, these types of outbreaks may occur simultaneously.
# Population at Risk
The population at risk is defined as a group of persons who, in addition to close contacts, are considered to be at increased risk for SCMD when compared with historical patterns of disease in the same population or with the risk for disease in the general U.S. population. This group is usually defined on the basis of organizational affiliation or community of residence. The population at risk is used as the denominator in calculations of the disease attack rate.
# Vaccination Group and Seasonality of Outbreaks
During a vaccination campaign, the group designated to be administered vaccine is called the vaccination group. In some instances, the vaccination group will be the same as the population at risk; however, in other instances, these groups may differ. For example, in an organization-based outbreak at a university in which all cases have occurred among undergraduates rather than graduate students, faculty, or other staff, undergraduates may be the vaccination group. In community-based outbreaks, cases often occur in persons within a narrow age range (e.g., only in persons less than 30 years of age) (1). Because the available vaccine is probably not effective in children less than 2 years of age, these children are not usually included in the vaccination group, and the vaccination group may be that portion of the population at risk who are 2-29 years of age.
In the United States, the incidence of meningococcal disease varies by season, with the highest rates of disease occurring in February and March and the lowest in September (2). For control of SCMD outbreaks, vaccination administered before or during the seasonal peak (i.e., fall and winter months) is more likely to prevent cases than vaccination administered during lower incidence periods (i.e., spring and summer).
# RECOMMENDATIONS
The following recommendations regarding the evaluation and management of suspected SCMD outbreaks are based on experience with SCMD outbreaks in the United States. However, the principles described apply to outbreaks caused by the other vaccine-preventable meningococcal serogroups A, Y, and W-135.
1. Establish a diagnosis of SCMD. Only confirmed and probable SCMD cases should be considered in the characterization of a suspected SCMD outbreak. Cases not fulfilling these criteria should be excluded from consideration.
2. Administer chemoprophylaxis to appropriate contacts. Chemoprophylaxis should be administered to close contacts of patients. Administering chemoprophylaxis to persons who are not close contacts of patients has not been effective in preventing community outbreak-associated cases and usually is not recommended. Neither oropharyngeal nor nasopharyngeal cultures for N. meningitidis are useful in deciding who should receive chemoprophylaxis or when investigating suspected outbreaks (3).
3. Enhance surveillance, save isolates, and review historical data. Most state and local health departments rely on passive surveillance for meningococcal disease, which may result in delayed or incomplete reporting of cases. When an SCMD outbreak is suspected, potential reporting sites should be alerted and encouraged to report new cases promptly. Reporting sites also should send all N. meningitidis isolates to a designated local or state laboratory until investigation of the suspected SCMD outbreak is completed. This action will ensure availability of isolates for confirmation of serogroup and application of other methods for subtyping.
Information on the serogroup of N. meningitidis isolates is needed to fulfill criteria for confirmed and probable case definitions. This information should be obtained promptly with all meningococcal disease case reports in the United States. To ensure availability of serogroup information, health department laboratories should support laboratory facilities that do not routinely perform serogrouping on meningococcal isolates.
Public health officials should review overall and serogroup-specific meningococcal disease rates for previous years in the same or comparable population(s) and in different regions within the state. These data should be compared with data currently reported for the population being evaluated to characterize both the geographic extent and magnitude of the outbreak.
4. Investigate links between cases. In addition to demographic information, public health professionals should collect age-appropriate information concerning each SCMD patient (e.g., close contact with other case-patients, day care attendance, participation in social activities, participation in sports activities, and affiliation with organizations). This information will help identify secondary and co-primary cases and also may reveal links between cases that will help define the population at risk.
5. Consider subtyping. Subtyping of N. meningitidis isolates, using methods such as multilocus enzyme electrophoresis or pulsed-field gel electrophoresis of enzyme-restricted DNA fragments, may provide information that will be useful in determining whether a group of cases represents an outbreak. SCMD outbreaks usually are caused by closely related strains. Subtyping data can allow identification of an "outbreak strain" and aid in better defining the extent of an outbreak. If strains from a group of patients are unrelated by subtyping, the group of cases most likely does not represent an outbreak. Although subtyping is potentially useful, it is time consuming and can be done only in specialized reference laboratories. In addition, results can sometimes be difficult to interpret. Initiation of outbreak-control efforts should not be delayed until subtyping results are available.
6. Exclude secondary and co-primary cases. To calculate a primary disease attack rate, all confirmed and probable cases should be summed; secondary cases should be excluded and each set of co-primary cases counted as one case. Because the purpose of calculating attack rates is both to characterize the risk for disease among the general population and to determine whether overall rates have increased, related cases (i.e., secondary and co-primary cases) should not be included. Epidemiologically, secondary and co-primary cases can be considered as representing single episodes of disease with direct spread to one or more close contact(s), which is consistent with endemic disease. Because the risk for acquiring meningococcal disease is 500-800 times greater among close contacts of case-patients than among the total population, chemoprophylaxis is recommended for these persons (3). Because secondary and co-primary cases occur infrequently, they should represent a small portion of outbreak-associated SCMD cases in the United States.
7. Determine if the suspected outbreak is organization- or community-based. Epidemiologic and laboratory investigations can reveal common affiliations among case-patients. Potential affiliations can be organizational, with all or most of the patients attending a particular day care center, school, or university; belonging to a sports team or club; or sharing an activity (e.g., riding a school bus). Alternatively, common affiliations can be geographic (e.g., residing in the same town, city, or county). Of 21 U.S. outbreaks identified between 1980 and mid-1993, 11 (52%) were organization-based and 10 (48%) were community-based. Eight (73%) of the 11 organization-based outbreaks occurred in schools (1). If a common organizational affiliation other than community can be identified, the outbreak is termed organization-based; otherwise, it is considered to be community-based.
8. Define population at risk and determine its size. In organization-based outbreaks, cases are linked by a common affiliation other than a shared geographically delineated community. The population at risk is the group of persons who best represent that affiliation. For example, if the only association between patients was attending the same school or university, the population at risk would be all persons attending the school or university. Information concerning the size of the organization should be obtained from officials in the organization. In community-based outbreaks, there are no common affiliations among patients other than a shared, geographically defined community. The population at risk can be defined by the smallest geographically contiguous population that includes all (or almost all) case-patients. This population is usually a neighborhood, town, city, or county. The size of the population can be obtained from census information.
9. Calculate the attack rate. If three or more SCMD cases have occurred in either an organization- or community-based outbreak in less than or equal to 3 months (starting at the time of the first confirmed or probable case), a primary disease attack rate should be calculated. Because of the small number of cases typically involved and the seasonal patterns of meningococcal disease, rate calculations should not be annualized for use in this algorithm. The following formula can be used to calculate this attack rate:
Attack rate per 100,000 = [(Number of primary confirmed and probable cases during a 3-month period) / (Number of persons in the population at risk)] x 100,000
If an attack rate exceeds 10 SCMD cases per 100,000 persons, vaccination of the population at risk should be considered (see the worked example below). The actual attack rate at which the decision to vaccinate is made may vary. Public health personnel should consider the following factors: a) completeness of surveillance and number of possible SCMD cases for which bacteriologic confirmation or serogroup data are not available; b) occurrence of additional SCMD cases after recognition of a suspected SCMD outbreak (e.g., if the SCMD outbreak occurred 2 months previously and no additional cases have occurred, vaccination may be unlikely to prevent additional SCMD cases); and c) logistic and financial considerations.
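A worked example of the calculation (the function name is illustrative):

```python
# Attack rate per 100,000: primary confirmed/probable cases in a <=3-month
# window, secondary cases excluded, each co-primary set counted once.
def attack_rate_per_100k(primary_cases, population_at_risk):
    return primary_cases / population_at_risk * 100_000

# Example: 3 primary cases in a community of 25,000 within 3 months.
rate = attack_rate_per_100k(3, 25_000)
print(f"{rate:.1f} cases per 100,000")  # 12.0, above the 10/100,000 threshold
```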
10. Select the target group for vaccination. In most organization-based outbreaks, the vaccination group may include the whole population at risk, provided all persons are greater than 2 years of age. If a substantial proportion of patients are less than 2 years of age and, thus, not eligible to receive the current vaccine, patients less than 2 years of age may be excluded and, if at least three case-patients remain, an attack rate should be recalculated. If after recalculation the attack rate is still more than 10 cases per 100,000 persons, vaccination should be considered for some or all of the population at risk greater than or equal to 2 years of age. In some organization-based outbreaks, a vaccination group larger than the population at risk may be designated. For example, in a high school in which all outbreak-associated cases occurred among students, authorities may decide to offer vaccine to staff. In community-based outbreaks, the vaccination group usually can be defined as a subset of the entire population at risk, based on a group greater than or equal to 2 years of age (e.g., 2-19 or 2-29 years of age). This age range should contain all (or almost all) SCMD patients greater than or equal to 2 years of age. If a large proportion of patients are less than 2 years of age and probably will not be protected with the current vaccine, patients less than 2 years of age may be excluded from calculation of an attack rate. In some situations, the entire population greater than or equal to 2 years of age, without other age restriction, might be the most appropriate vaccination group. For example, in a small town in which several cases have occurred among children greater than or equal to 2 years and adults greater than 29 years of age, it may be most appropriate to select all persons greater than or equal to 2 years of age as the vaccination group. For larger populations, this decision would be costly in terms of finances and human resources, and restricting the vaccination group to the persons in age groups with the highest attack rates may be more appropriate. Age-specific attack rates can be calculated by using the formula previously provided and restricting the numerator and denominator to persons within specific age groups (e.g., persons 2-19 years of age). Many recent immunization programs have been directed at persons who are 2-19 years of age or who are 2-29 years of age (1). The 10 steps are summarized as follows:
Summary of 10 steps in the evaluation and management of suspected outbreaks of serogroup C meningococcal disease (SCMD):
1. Establish a diagnosis of SCMD.
2. Administer chemoprophylaxis to appropriate contacts.
3. Enhance surveillance, save isolates, and review historical data.
4. Investigate links between cases.
5. Consider subtyping.
6. Exclude secondary and co-primary cases.
7. Determine if the suspected outbreak is organization- or community-based.
8. Define population at risk and determine its size.
9. Calculate the attack rate.
10. Select the target group for vaccination.
# Vaccine
Quadrivalent meningococcal vaccine is available in single-, 10-, or 50-dose vials. Fifty-dose vials are designed for use with jet-injector devices. Questions about vaccination or use of jet-injector devices should be addressed to the National Immunization Program, CDC (telephone: 639-8257) (6).
From 7 to 10 days are required following vaccination for development of protective levels of antimeningococcal antibodies. Cases of SCMD occurring in vaccinated persons within 10 days after vaccination should not be considered vaccine failures.
# Other Control Measures
Mass chemoprophylaxis (i.e., administration of antibiotics to large populations) is not effective in most settings in which community-based or organization-based outbreaks have occurred. Disadvantages of widespread administration of antimicrobial drugs for chemoprophylaxis include cost of the drug and administration, difficulty of ensuring simultaneous administration of chemoprophylactic antimicrobial drugs to large populations, side effects of the drugs, and emergence of resistant organisms. In most outbreak settings, these disadvantages outweigh the possible (and unproven) benefit in disease prevention. However, in outbreaks involving small populations (e.g., an outbreak in a small organization, such as a single school), administration of chemoprophylaxis to all persons within this population may be considered. If mass chemoprophylaxis is undertaken, it should be administered to all members at the same time. In the United States, measures that have not been recommended for control of SCMD outbreaks include restricting travel to areas with a SCMD outbreak, closing schools or universities, or cancelling sporting or social events.
Educating communities, physicians, and other health-care workers about meningococcal disease is an important part of managing suspected SCMD outbreaks. Educational efforts should be initiated as soon as an SCMD outbreak is suspected.
# CONCLUSIONS
The ability to validate some aspects of these recommendations is currently limited by both incomplete reporting of serogroup information in most systems for meningococcal disease surveillance in the United States and the infrequency of SCMD cases and SCMD outbreaks. As additional information becomes available from ongoing surveillance projects, these recommendations may be revised.
"id": "42189bb722efc4aaed78da270bdb79e6d9a2f714",
"source": "cdc",
"title": "None",
"url": "None"
} |
This report provides interim recommendations for prevention and control of hantavirus infections associated with rodents in the southwestern United States. It is based on principles of rodent and infection control and contains specific recommendations for reducing rodent shelter and food sources in and around the home, recommendations for eliminating rodents inside the home and preventing them from entering the home, precautions for preventing hantavirus infection while rodent-contaminated areas are being cleaned up, prevention measures for persons who have occupational exposure to wild rodents, and precautions for campers and hikers.
# INTRODUCTION
The recently recognized hantavirus-associated disease among residents of the southwestern United States (1-4) and the identification of rodent reservoirs for the virus in the affected areas warrant recommendations to minimize the risk of exposure to rodents for both residents and visitors. While information is being gathered about the causative virus and its epidemiology, provisional recommendations can be made on the basis of knowledge about related hantaviruses. These recommendations are based on current understanding of the epidemiologic features of hantavirus infections in the Southwest; they will be periodically evaluated and modified as more information becomes available.
Rodents are the primary reservoir hosts of recognized hantaviruses. Each hantavirus appears to have preferential rodent hosts, but other small mammals can be infected as well (5,6). Available data strongly suggest that the deer mouse (Peromyscus maniculatus) is the primary reservoir of the newly recognized hantavirus in the southwestern United States (1). Serologic evidence of infection has also been found in piñon mice (P. truei), brush mice (P. boylii), and western chipmunks (Tamias spp.). P. maniculatus is highly adaptable and is found in different habitats, including human residences in rural and semirural areas, but generally not in urban centers.
Hantaviruses do not cause apparent illness in their reservoir hosts (7). Infected rodents shed virus in saliva, urine, and feces for many weeks, but the duration and period of maximum infectivity are unknown (8-11). The demonstrated presence of infectious virus in saliva of infected rodents and the marked sensitivity of these animals to hantaviruses following inoculation suggest that biting may be an important mode of transmission among rodents (7).
Human infection may occur when infective saliva or excreta are inhaled as aerosols produced directly from the animal. Persons visiting laboratories where infected rodents were housed have been infected after only a few minutes of exposure to animal holding areas (12). Transmission may also occur when dried materials contaminated by rodent excreta are disturbed, directly introduced into broken skin, introduced onto the conjunctivae, or, possibly, ingested in contaminated food or water. Persons have also become infected after being bitten by rodents (13,14).
Arthropod vectors are not known to have a role in the transmission of hantaviruses (7,12). Person-to-person transmission has not been associated with any of the previously identified hantaviruses (9) or with the recent outbreak in the Southwest. Cats and dogs are not known to be reservoir hosts of hantaviruses in the United States. However, these domestic animals may bring infected rodents into contact with humans.
Known hantavirus infections of humans occur primarily in adults and are associated with domestic, occupational, or leisure activities that bring humans into contact with infected rodents, usually in a rural setting. Patterns of seasonal occurrence differ, depending on the virus, species of rodent host, and patterns of human behavior (5,7). Cases have been epidemiologically associated with the following situations:
- planting or harvesting field crops;
- occupying previously vacant cabins or other dwellings;
- cleaning barns and other outbuildings;
- disturbing rodent-infested areas while hiking or camping;
- inhabiting dwellings with indoor rodent populations;
- residing in or visiting areas in which the rodent population has shown an increase in density (15-17).
Hantaviruses have lipid envelopes that are susceptible to most disinfectants (e.g., dilute hypochlorite solutions, detergents, ethyl alcohol, or most general-purpose household disinfectants) (18). How long these viruses survive after being shed in the environment is uncertain.
The reservoir hosts of the hantavirus in the southwestern United States also act as hosts for the bacterium Yersinia pestis, the etiologic agent of plague. Although fleas and other ectoparasites are not known to play a role in hantavirus epidemiology, rodent fleas transmit plague. Control of rodents without concurrent control of fleas may increase the risk of human plague as the rodent fleas seek an alternative food source.
Eradicating the reservoir hosts of hantaviruses is neither feasible nor desirable. The best currently available approach for disease control and prevention is risk reduction through environmental hygiene practices that deter rodents from colonizing the home and work environment.
# GENERAL HOUSEHOLD PRECAUTIONS IN AFFECTED AREAS
Although epidemiologic studies are being conducted to identify specific behaviors that may increase the risk for hantavirus infection in humans in the United States, rodent control in and around the home will continue to be the primary prevention strategy (Box 1). CDC has issued recommendations for rodent-proofing urban and suburban dwellings and reducing rodent populations through habitat modification and sanitation (19,20 ).
# Box 1. General precautions for residents of affected areas
Eliminate rodents and reduce the availability of food sources and nesting sites used by rodents inside the home.
- Follow the recommendations in the section on Eliminating Rodents Inside the Home.
- Keep food (including pet food) and water covered and stored in rodent-proof metal or thick plastic containers with tight-fitting lids.
- Store garbage inside homes in rodent-proof metal or thick plastic containers with tight-fitting lids.
- Wash dishes and cooking utensils immediately after use and remove all spilled food.
- Dispose of trash and clutter.
- Use spring-loaded rodent traps in the home continuously.
- As an adjunct to traps, use rodenticide with bait under a plywood or plastic shelter (covered bait station) on an ongoing basis inside the house.
Note: Environmental Protection Agency (EPA)-approved rodenticides are commercially available. Instructions on product use should always be followed. Products that are used outdoors should be specifically approved for exterior use. Any use of a rodenticide should be preceded by use of an insecticide to reduce the risk of plague transmission. Insecticide sprays or powders can be used in place of aerosols if they are appropriately labeled for flea control.
Prevent rodents from entering the home. Specific measures should be adapted to local circumstances.
- Use steel wool or cement to seal, screen, or otherwise cover all openings into the home that have a diameter ≥1/4 inch.
- Place metal roof flashing as a rodent barrier around the base of wooden, earthen, or adobe dwellings up to a height of 12 inches and buried in the soil to a depth of 6 inches.
- Place 3 inches of gravel under the base of homes or under mobile homes to discourage rodent burrowing.
Reduce rodent shelter and food sources within 100 feet of the home.
- Use raised cement foundations in new construction of sheds, barns, outbuildings, or woodpiles.
- When possible, place woodpiles 100 feet or more from the house, and elevate wood at least 12 inches off the ground.
- Store grains and animal feed in rodent-proof containers.
- Near buildings, remove food sources that might attract rodents, or store food and water in rodent-proof containers.
- Store hay on pallets, and use traps or rodenticide continuously to keep hay free of rodents.
- Do not leave pet food in feeding dishes.
- Dispose of garbage and trash in rodent-proof containers that are elevated at least 12 inches off the ground.
- Haul away trash, abandoned vehicles, discarded tires, and other items that may serve as rodent nesting sites.
- Cut grass, brush, and dense shrubbery within 100 feet of the home.
- Place spring-loaded rodent traps at likely spots for rodent shelter within 100 feet around the home, and use continuously.
- Use an EPA-registered rodenticide approved for outside use in covered bait stations at places likely to shelter rodents within 100 feet of the home. NOTE: Follow the recommendations specified in the section on Clean-Up of Rodent-Contaminated Areas if rodent nests are encountered while these measures are being carried out.
# ELIMINATING RODENTS INSIDE THE HOME AND REDUCING RODENT ACCESS TO THE HOME
Rodent infestation can be determined by direct observation of animals or inferred from the presence of feces in closets or cabinets or on floors or from evidence that rodents have been gnawing at food. If rodent infestation is detected inside the home or outbuildings, rodent abatement measures should be completed (Box 2). The directions in the section on Special Precautions should be followed if evidence of heavy rodent infestation (e.g., piles of feces or numerous dead animals) is present or if a structure is associated with a confirmed case of hantavirus disease.
# Box 2. Eliminating rodent infestation: Guidance for residents of affected areas
- Before rodent elimination work is begun, ventilate closed buildings or areas inside buildings by opening doors and windows for at least 30 minutes. Use an exhaust fan or cross ventilation if possible. Leave the area until the airing-out period is finished. This airing may help remove any aerosolized virus inside the closed-in structure.
- Second, seal, screen, or otherwise cover all openings into the home that have a diameter of ≥1/4 inch. Then set rodent traps inside the house, using peanut butter as bait. Use only spring-loaded traps that kill rodents.
- Next, treat the interior of the structure with an insecticide labeled for flea control; follow specific label instructions. Insecticide sprays or powders can be used in place of aerosols if they are appropriately labeled for flea control. Rodenticides may also be used while the interior is being treated, as outlined below.
- Remove captured rodents from the traps. Wear rubber or plastic gloves while handling rodents. Place the carcasses in a plastic bag containing a sufficient amount of a general-purpose household disinfectant to thoroughly wet the carcasses. Seal the bag and then dispose of it by burying in a 2- to 3-foot-deep hole or by burning. If burying or burning are not feasible, contact your local or state health department about other appropriate disposal methods. Rebait and reset all sprung traps.
- Before removing the gloves, wash gloved hands in a general household disinfectant and then in soap and water. A hypochlorite solution prepared by mixing 3 tablespoons of household bleach in 1 gallon of water may be used in place of a commercial disinfectant (a rough concentration check appears after this box). When using the chlorine solution, avoid spilling the mixture on clothing or other items that may be damaged. Thoroughly wash hands with soap and water after removing the gloves.
- Leave several baited spring-loaded traps inside the house at all times as a further precaution against rodent reinfestation. Examine the traps regularly. Disinfect traps no longer in use by washing in a general household disinfectant or the hypochlorite solution. Disinfect and wash gloves as described above, and wash hands thoroughly with soap and water before beginning other activities. NOTE: EPA-approved rodenticides are commercially available. Instructions on product use should always be followed. Products that are used outdoors should be specifically approved for exterior use. Any use of a rodenticide should be preceded by use of an insecticide to reduce the risk of plague transmission. Insecticide sprays or powders can be used in place of aerosols if they are appropriately labeled for flea control.
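As a rough check of the bleach recipe in the box above (assuming household bleach at about 5% sodium hypochlorite, a value not stated in the original):

```python
# Approximate strength of 3 tablespoons of ~5% bleach in 1 gallon of water.
TBSP_ML = 14.79
GALLON_ML = 3785.4
BLEACH_NAOCL_FRACTION = 0.05  # assumption: ~5% sodium hypochlorite

bleach_ml = 3 * TBSP_ML
fraction = bleach_ml / (GALLON_ML + bleach_ml)
approx_ppm = fraction * BLEACH_NAOCL_FRACTION * 1e6

print(f"~{approx_ppm:.0f} ppm sodium hypochlorite")  # about 579 ppm
```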
# CLEAN-UP OF RODENT-CONTAMINATED AREAS
Areas with evidence of rodent activity (e.g., dead rodents, rodent excreta) should be thoroughly cleaned to reduce the likelihood of exposure to hantavirus-infected materials. Clean-up procedures must be performed in a manner that limits the potential for aerosolization of dirt or dust from all potentially contaminated surfaces and household goods (Box 3).
# Box 3. Clean-up of rodent-contaminated areas: Guidance for residents of affected areas
- Persons involved in the clean-up should wear rubber or plastic gloves.
- Spray dead rodents, rodent nests, droppings, or foods or other items that have been tainted by rodents with a general-purpose household disinfectant. Soak the material thoroughly, and place in a plastic bag. When clean-up is complete (or when the bag is full), seal the bag, then place it into a second plastic bag and seal. Dispose of the bagged material by burying in a 2- to 3-foot-deep hole or by burning. If these alternatives are not feasible, contact the local or state health department concerning other appropriate disposal methods.
- After the above items have been removed, mop floors with a solution of water, detergent, and disinfectant. Spray dirt floors with a disinfectant solution. A second mopping or spraying of floors with a general-purpose household disinfectant is optional. Carpets can be effectively disinfected with household disinfectants or by commercial-grade steam cleaning or shampooing. To avoid generating potentially infectious aerosols, do not vacuum or sweep dry surfaces before mopping.
- Disinfect countertops, cabinets, drawers, and other durable surfaces by washing them with a solution of detergent, water, and disinfectant, followed by an optional wiping-down with a general-purpose household disinfectant.
- Rugs and upholstered furniture should be steam cleaned or shampooed. If rodents have nested inside furniture and the nests are not accessible for decontamination, the furniture should be removed and burned.
- Launder potentially contaminated bedding and clothing with hot water and detergent. (Use rubber or plastic gloves when handling the dirty laundry; then wash and disinfect gloves as described in the section on Eliminating Rodents Inside the Home.) Machine-dry laundry on a high setting or hang it to air dry in the sun.
# SPECIAL PRECAUTIONS FOR HOMES OF PERSONS WITH CONFIRMED HANTAVIRUS INFECTION OR BUILDINGS WITH HEAVY RODENT INFESTATIONS
Special precautions are indicated in the affected areas for cleaning homes or buildings with heavy rodent infestations (Box 4). Persons conducting these activities should contact the responsible local, state, or federal public health agency for guidance. These precautions may also apply to vacant dwellings that have attracted numbers of rodents while unoccupied and to dwellings and other structures that have been occupied by persons with confirmed hantavirus infection. Workers who are either hired specifically to perform the clean-up or asked to do so as part of their work activities should receive a thorough orientation from the responsible health agency about hantavirus transmission and should be trained to perform the required activities safely.
# Box 4. Special precautions for clean-up in homes of persons with hantavirus infection or buildings with heavy rodent infestation
- A baseline serum sample, preferably drawn at the time these activities are initiated, should be available for all persons conducting the clean-up of homes or buildings with heavy rodent infestation. The serum sample should be stored at -20°C.
- Persons involved in the clean-up should wear coveralls (disposable if possible), rubber boots or disposable shoe covers, rubber or plastic gloves, protective goggles, and an appropriate respiratory protection device, such as a half-mask air-purifying (or negative-pressure) respirator with a high-efficiency particulate air (HEPA) filter or a powered air-purifying respirator (PAPR) with HEPA filters. Respirators (including positive-pressure types) are not considered protective if facial hair interferes with the face seal, since proper fit cannot be assured. Respirator practices should follow a comprehensive user program and be supervised by a knowledgeable person (21 ).
- Personal protective gear should be decontaminated upon removal at the end of the day. If the coveralls are not disposable, they should be laundered on site. If no laundry facilities are available, the coveralls should be immersed in liquid disinfectant until they can be washed.
- All potentially infective waste material (including respirator filters) from cleanup operations that cannot be burned or deep buried on site should be double bagged in appropriate plastic bags. The bagged material should then be labeled as infectious (if it is to be transported) and disposed of in accordance with local requirements for infectious waste.
# PRECAUTIONS FOR WORKERS IN AFFECTED AREAS WHO ARE REGULARLY EXPOSED TO RODENTS
Persons who frequently handle or are exposed to rodents (e.g., mammalogists, pest-control workers) in the affected area are probably at higher risk for hantavirus infection than the general public because of their frequency of exposure. Therefore, enhanced precautions are warranted to protect them against hantavirus infection (Box 5).
- Workers in potentially high-risk settings should be informed about the symptoms of the disease and be given detailed guidance on prevention measures.
- Workers who develop a febrile or respiratory illness within 45 days of the last potential exposure should immediately seek medical attention and inform the attending physician of the potential occupational risk of hantavirus infection. The physician should contact local health authorities promptly if hantavirus-associated illness is suspected. A blood sample should be obtained and forwarded with the baseline serum through the state health department to CDC for hantavirus antibody testing.
- Workers should wear a half-face air-purifying (or negative-pressure) respirator or PAPR equipped with HEPA filters when removing rodents from traps or handling rodents in the affected area. Respirators (including positive-pressure types) are not considered protective if facial hair interferes with the face seal, since proper fit cannot be assured. Respirator use practices should be in accord with a comprehensive user program and should be supervised by a knowledgeable person (21 ).
# PRECAUTIONS FOR OTHER OCCUPATIONAL GROUPS WHO HAVE POTENTIAL RODENT CONTACT
Insufficient information is available at this time to allow general recommendations regarding risks or precautions for persons in the affected areas who work in occupations with unpredictable or incidental contact with rodents or their habitations. Examples of such occupations include telephone installers, maintenance workers, plumbers, electricians, and certain construction workers. Workers in these jobs may have to enter various buildings, crawl spaces, or other sites that may be rodent infested. Recommendations for such circumstances must be made on a case-by-case basis after the specific working environment has been assessed and state or local health departments have been consulted.
- Workers should wear rubber or plastic gloves when handling rodents or handling traps containing rodents. Gloves should be washed and disinfected before removing them, as described above.
- Traps contaminated by rodent urine or feces or in which a rodent was captured should be disinfected with a commercial disinfectant or bleach solution. Dispose of dead rodents as described in the section on Eliminating Rodents Inside the Home.
# PRECAUTIONS FOR CAMPERS AND HIKERS IN THE AFFECTED AREAS
There is no evidence to suggest that travel into the affected areas should be restricted. Most usual tourist activities pose little or no risk that travelers will be exposed to rodents or their excreta. However, persons engaged in outdoor activities such as camping or hiking should take precautions to reduce the likelihood of their exposure to potentially infectious materials (Box 6).
# Box 6. Reducing risk of hantavirus infection: Guidance for hikers and campers
- Avoid coming into contact with rodents and rodent burrows or disturbing dens (such as pack rat nests).
- Do not use cabins or other enclosed shelters that are rodent infested until they have been appropriately cleaned and disinfected.
- Do not pitch tents or place sleeping bags in areas in proximity to rodent feces or burrows or near possible rodent shelters (e.g., garbage dumps or woodpiles).
- If possible, do not sleep on the bare ground. Use a cot with the sleeping surface at least 12 inches above the ground. Use tents with floors.
- Keep food in rodent-proof containers.
- Promptly bury (or, preferably, burn and then bury, when in accordance with local requirements) all garbage and trash, or discard it in covered trash containers.
- Use only bottled water or water that has been disinfected by filtration, boiling, chlorination, or iodination for drinking, cooking, washing dishes, and brushing teeth.

# CONCLUSION

The control and prevention recommendations in this report represent general measures to minimize the likelihood of human exposure to hantavirus-infected rodents in areas of the southwestern United States affected by the outbreak of hantavirus-associated respiratory illness. Many of the recommendations may not be applicable or necessary in unaffected locales. The impact and utility of the recommendations will be assessed as they are implemented and will be continually reviewed by CDC and the involved state and local health agencies as additional epidemiologic and laboratory data related to the outbreak become available. If required, these recommendations may be supplemented or modified in the future.
Millions of people in America live in manufactured structures, a range of units that includes manufactured homes, travel trailers, camping trailers, and park trailers. Manufactured structures are used for long-term residence; for temporary housing following disasters; for recreational and travel purposes; and also for classrooms, day care centers, and workplaces. Housing is a primary purpose of these structures, with manufactured homes accounting for 6.3% of the housing units in the U.S. and housing 17.2 million persons. Manufactured homes offer flexibility and affordability, and they comprise an important part of the U.S. housing stock.

Whether used for long-term housing or for short-term shelter following a disaster, for classrooms or for offices, manufactured structures should be safe and healthy for the people who live, work, study, and play in them. With Americans spending the vast majority of their time indoors, it is vital that buildings protect occupants from the elements and provide privacy, comfort, and peace of mind. At the same time, these structures should not present risks to occupants' health and safety due to design, construction, or maintenance problems.

This report identifies and summarizes safety and health issues in manufactured structures based on a wide expanse of research. The end result is a thorough characterization of health and safety hazards in manufactured structures, along with mitigation strategies and discussions of opportunities for health/safety enhancements and at-risk populations. Many of the hazards discussed in this report are not unique to manufactured structures, while other issues have been identified as particular problems for this form of housing. Further, when manufactured structures are used as interim housing following a disaster, additional health/safety issues can arise. The specific topics covered in this report are an introduction to manufactured structures, fire safety, moisture and mold, indoor air quality (IAQ), pests and pesticides, siting and installation, utilities, post-disaster housing, and potential opportunities for future enhancements.
The health and safety hazards related to fire safety, moisture and mold, IAQ, pests and pesticides, and other issues generally fall into the categories of design, construction, and maintenance. Thus, for an issue like effective moisture management to prevent mold and related problems, strategies range from good product selection in the design phase to proper grading of the site during construction all the way to regular maintenance of the building envelope after many years of service. Most other health and safety hazards are similar in nature, with multiple parties playing an important role in managing risks from the design of the manufactured home through its use as a home for years to come.
Fortunately, the challenges of managing health and safety risks in manufactured structures are well documented, along with appropriate strategies and solutions. This report documents and summarizes this information, with the intent of serving as a comprehensive resource to inform discussions and future decisions regarding the design, construction, maintenance, and deployment of manufactured structures in the United States.
# Table of Contents

- Design Principles for MHCs
- Select Types of Foundations and Anchoring Systems
  - Pier-anchor Systems
  - Slabs-on-grade
  - Proprietary Systems
- Relocating Manufactured Homes
- Structural Integrity
- Building Accessibility
- Special Populations
"id": "96520fc91585f3c99363886c31bb6e4458ac787a",
"source": "cdc",
"title": "None",
"url": "None"
} |
These revised guidelines were developed by CDC for laboratories performing lymphocyte immunophenotyping assays in human immunodeficiency virus-infected persons. This report updates previous recommendations and reflects current technology in a field that is rapidly changing. The recommendations address laboratory safety, specimen collection, specimen transport, maintenance of specimen integrity, specimen processing, flow cytometer quality control, sample analyses, data analysis, data storage, data reporting, and quality assurance.
# INTRODUCTION
Accurate and reliable measures of CD4+ T-lymphocytes (CD4+ T-cells) are essential to the assessment of the immune system of human immunodeficiency virus (HIV)-infected persons (1)(2)(3). The pathogenesis of acquired immunodeficiency syndrome (AIDS) is largely attributable to the decrease in T-lymphocytes that bear the CD4 receptor (4)(5)(6)(7)(8). Progressive depletion of CD4+ T-lymphocytes is associated with an increased likelihood of clinical complications (9,10 ). Consequently, the Public Health Service (PHS) has recommended that CD4+ T-lymphocyte levels be monitored every 3-6 months in all HIV-infected persons (11 ). The measurement of CD4+ T-cell levels has been used to establish decision points for initiating Pneumocystis carinii pneumonia prophylaxis (12 ) and antiviral therapy (13 ) and to monitor the efficacy of treatment (14)(15)(16). CD4+ T-lymphocyte levels also are used as prognostic indicators in patients who have HIV disease (17,18 ) and recently have been included as one of the criteria for initiating prophylaxis for several opportunistic infections that are sequelae of HIV infection (19,20 ). Moreover, CD4+ T-lymphocyte levels are a criterion for categorizing HIV-related clinical conditions by CDC's classification system for HIV infection and surveillance case definition for AIDS among adults and adolescents (21 ).
Most laboratories measure absolute CD4+ T-cell levels in whole blood by a multiplatform, three-stage process. The CD4+ T-cell number is the product of three laboratory techniques: the white blood cell (WBC) count; the percentage of WBCs that are lymphocytes (differential); and the percentage of lymphocytes that are CD4+ T-cells. The last stage in the process of measuring the percentage of CD4+ T-lymphocytes in the whole-blood sample is referred to as "immunophenotyping by flow cytometry" (22)(23)(24)(25)(26)(27)(28). Immunophenotyping refers to the detection of antigenic determinants (which are unique to particular cell types) on the surface of WBCs using antigen-specific monoclonal antibodies that have been labeled with a fluorescent dye or fluorochrome (e.g., phycoerythrin [PE] or fluorescein isothiocyanate [FITC]). The fluorochrome-labeled cells are analyzed by using a flow cytometer, which categorizes individual cells according to size, granularity, fluorochrome, and intensity of fluorescence. Size and granularity, detected by light scattering, characterize the types of WBCs (i.e., granulocytes, monocytes, and lymphocytes). Fluorochrome-labeled antibodies distinguish populations and subpopulations of WBCs. Although flow cytometric immunophenotyping is a highly complex technology, methodology for performing CD4+ T-cell determinations has become more standardized between laboratories. The publication of several sets of guidelines addressing aspects of the CD4+ T-lymphocyte testing process (e.g., quality control, quality assurance, and reagents for flow cytometric immunophenotyping of lymphocytes) has contributed to this standardization (29)(30)(31)(32).
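As a concrete illustration of the three-stage product described above, a minimal sketch in Python follows. The function name and input values are illustrative assumptions chosen only to make the arithmetic visible; they are not reference ranges from these guidelines.

```python
# Minimal sketch of the multi-platform CD4+ T-cell calculation: the absolute
# count is the product of the WBC count, the lymphocyte fraction from the
# differential, and the CD4+ fraction from flow cytometric immunophenotyping.
# All example values below are hypothetical.

def absolute_cd4_count(wbc_per_ul: float,
                       lymphocyte_pct: float,
                       cd4_pct_of_lymphocytes: float) -> float:
    """Return the absolute CD4+ T-cell count in cells/uL."""
    return wbc_per_ul * (lymphocyte_pct / 100.0) * (cd4_pct_of_lymphocytes / 100.0)

# Example: WBC = 5,000 cells/uL, 30% lymphocytes, 40% of lymphocytes CD4+:
# 5,000 x 0.30 x 0.40 = 600 CD4+ T-cells/uL.
print(absolute_cd4_count(5000, 30, 40))  # 600.0
```

Because each of the three measurements carries its own error, the multiplied result compounds that variability, which is the motivation for the single-platform methods discussed later in this report.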
The CDC guidelines concerning CD4+ T-cell determinations (33 ) were first published in the MMWR in 1992 to provide laboratorians with the most complete information about how to measure CD4+ T-lymphocytes in blood from HIV-infected persons by using flow cytometry. These guidelines were based on data from scientific literature, information from discussions with technical experts, and experience with related voluntary standards for flow cytometric analyses (29 ). The 1992 guidelines concluded that more data were needed and that revisions would be published as additional information became available and as important innovations in technology were made. In 1993, a national conference was convened by CDC with sponsorship from the Food and Drug Administration (FDA), National Institutes of Health, and Association of State and Territorial Public Health Laboratory Directors. The objectives of the conference were to review data collected after 1992 and to obtain input about the efficacy of the 1992 guidelines. As a result of the 1993 conference, the revised guidelines for performing CD4+ T-cell determinations in HIV-infected persons were published in 1994 (34 ).
Since 1994, the field of CD4+ T-cell testing has rapidly expanded. Flow cytometric analyses of T-cell subsets using three- and four-color approaches (in contrast to the two-color approach addressed in previous reports), flow cytometric analyses for measuring both the proportion and the absolute numbers of CD4+ T-lymphocytes, and other methods for deriving an absolute CD4+ T-cell count in a blood sample are now commercially available. (Some of these other methods do not depend on the multi-stage process and are collectively referred to in this report as single-platform methods.) Moreover, data evaluating some of the parameters of two-color flow cytometric testing and the routine testing practices of laboratories that provide these testing services have been collected. A second national conference on CD4+ T-lymphocyte immunophenotyping was held in Atlanta, Georgia, on December 12-13, 1995, to discuss these changes. Information shared at the conference and new data collected about laboratory testing practices serve as the basis for the revisions and additions that have been made to the 1994 guidelines. These changes include a) quality assurance (namely, revision of the recommended monoclonal panel to provide a cost-effective solution for laboratories using three-color and four-color approaches), b) the importance of following manufacturers' instructions when using tests and testing devices approved by the FDA, c) recommendations for laboratories performing three- and four-color T-lymphocyte immunophenotyping (TLI), and d) recommendations about the validation and verification procedures that laboratories should conduct before implementing new tests.
# RECOMMENDATIONS

# I. Laboratory Safety
A. Use universal precautions with all specimens (37 ).
B. Establish the following safety practices (38-44 ):
1. Wear laboratory coats and gloves when processing and analyzing specimens, including reading specimens on the flow cytometer.
2. Never pipette by mouth. Use safety pipetting devices.
3. Never recap needles. Dispose of needles and syringes in puncture-proof containers designed for this purpose.
4. Handle and manipulate specimens (e.g., aliquoting, adding reagents, vortexing, and aspirating) in a class I or II biological safety cabinet.
5. Centrifuge specimens in safety carriers.
6. After working with specimens, remove gloves and wash hands with soap and water.
7. For stream-in-air flow cytometers, follow the manufacturer's recommended procedures to eliminate the operator's exposure to any aerosols or droplets of sample material.
8. Disinfect flow cytometer wastes. Add a volume of undiluted household bleach (5% sodium hypochlorite) to the waste container before adding waste materials so that the final concentration of bleach will be 10% (0.5% sodium hypochlorite) when the container is full (e.g., add 100 mL of undiluted bleach to an empty 1,000-mL container).
9. Disinfect the flow cytometer as recommended by the manufacturer. One method is to flush the flow cytometer fluidics with a 10% bleach solution for 5-10 minutes at the end of the day, then flush with water or saline for at least 10 minutes to remove excess bleach, which is corrosive.
10. Disinfect spills with household bleach or an appropriate dilution of mycobactericidal disinfectant. Note: Organic matter will reduce the ability of bleach to disinfect infectious agents. For specific procedures about how areas should be disinfected, see reference 44. For use on smooth, hard surfaces, a 1% solution of bleach is usually adequate for disinfection; for porous surfaces, a 10% solution is needed (44 ).
11. Assure that all samples have been properly fixed after staining and lysing, but before analysis. Note: Some commercial lysing/fixing reagents will reduce the infectious activity of cell-associated HIV by 3-5 logs (45 ); however, these reagents have not been evaluated for their effectiveness against other agents (e.g., hepatitis virus). Buffered (pH 7.0-7.4) 1%-2% paraformaldehyde or formaldehyde can inactivate cell-associated HIV to approximately the same extent (45)(46)(47)(48). Cell-free HIV can be inactivated with 1% paraformaldehyde within 30 minutes (49 ). Because the commercial lysing/fixing reagents do not completely inactivate cellassociated HIV and the time frame for complete inactivation is not firmly established, stained and lysed samples should be resuspended and retained in fresh 1%-2% paraformaldehyde or formaldehyde through flow cytometric analysis.
# II. Specimen Collection
A. Select the appropriate anticoagulant for hematologic testing and flow cytometric immunophenotyping.
1. Anticoagulant for hematologic testing:
a. Use tripotassium ethylenediamine tetra-acetate (K3EDTA, 1.5 ± 0.15 mg/mL blood) (50,51 ), and perform the test within the time frame allowed by the manufacturer of the hematology analyzer, not to exceed 30 hours.
b. Reject a specimen that cannot be processed within this time frame unless the hematology instrumentation is suitable for analyzing such specimens. Note: Some hematology instruments are capable of generating accurate results 12-30 hours after specimen collection (52 ). To ensure accurate results for specimens from HIV-infected persons, laboratories must validate their hematology instrument's ability to give the same result at time 0 and at the maximum time claimed by the manufacturer when using specimens from both persons infected with HIV and those not infected.
2. Anticoagulant for flow cytometric immunophenotyping, depending on the delay anticipated before sample processing:
a. Use K3EDTA, acid citrate dextrose (ACD), or heparin if specimens will be processed within 30 hours after collection. Note: K3EDTA should not be used for specimens held for >30 hours before testing because the proportion of some lymphocyte populations changes after this period (53 ).
b. Use either ACD or heparin, not K3EDTA, if specimens will be processed within 48 hours after specimen collection.
c. Reject a specimen that cannot be processed within 48 hours after specimen collection and request another.
B. Collect blood specimens by venipuncture (54 ) into evacuated tubes containing an appropriate anticoagulant, completely expending the vacuum in the tubes.
1. Draw specimens from children in pediatric tubes to avoid underdrawing.
2. Mix the blood well with the anticoagulant to prevent clotting.
C. Draw the appropriate number of tubes:
1. Use one tube containing K3EDTA when a) hematology and flow cytometric immunophenotyping will be performed in the same laboratory on the same specimen or b) a single measurement is performed on the flow cytometer that results in an absolute number. Note: For single-platform methods that do not use determinations from a hematology analyzer or from conventional flow cytometers to derive absolute CD4+ T-cell numbers, follow the manufacturer's recommendations for anticoagulant and maximum times between specimen collection and testing.
2. In all other circumstances, draw two separate tubes (K3EDTA for hematologic determinations and K3EDTA, ACD, or heparin for flow cytometric immunophenotyping).
D. Label all specimens with the date, time of collection, and a unique patient identifier.
1. Assure that patient information and test results are accorded confidentiality.
2. Provide on the submission form pertinent medications and disease conditions that may affect the immunophenotyping test (Appendix).
# III. Specimen Transport
A. Maintain and transport specimens at room temperature (64°F-72°F) (52,55-57 ). Avoid extremes in temperature so that specimens do not freeze or become too hot. Temperatures >99°F (37°C) may cause cellular destruction and affect both hematology and flow cytometry measurements (52 ).
In hot weather, packing the specimen in an insulated container and placing this container inside another containing an ice pack and absorbent material may be necessary. This method helps retain the specimen at ambient temperature. The effect of cool temperatures (i.e., 39°F) on immunophenotyping results is not clear (52,57 ).
B. Transport specimens to the immunophenotyping laboratory as soon as possible.
C. For transport to locations outside the collection facility but within the state, follow state or local guidelines. One method for packaging such specimens is to place the tube containing the specimen in a leak-proof container (e.g., a sealed plastic bag) and to pack this container inside a cardboard canister containing sufficient material to absorb all the blood should the tube break or leak. Cap the canister tightly. Fasten the request slip securely to the outside of this canister with a rubber band. For mailing, this canister should be placed inside another canister containing the mailing label.
D. For interstate shipment, follow federal guidelines (49 CFR parts 100-171 [56 FR 47158]) for transporting diagnostic specimens. Note: Use overnight carriers with an established record of consistent overnight delivery to ensure arrival the following day. Check with these carriers for their specific packaging requirements.
E. Obtain specific protocols and arrange appropriate times of collection and transport from the facility collecting the specimen.
# IV. Specimen Integrity
A. Inspect the tube and its contents immediately upon arrival.
B. Take corrective actions if the following occur:
1. If the specimen is hot or cold to the touch but not obviously hemolyzed or frozen, process it but note the temperature condition on the worksheet and report form. Do not rapidly warm or chill specimens to bring them to room temperature because this may adversely affect the immunophenotyping results (52 ). Abnormalities in light-scattering patterns will reveal a compromised specimen.
2. If blood is hemolyzed or frozen, reject the specimen and request another.
3. If clots are visible, reject the specimen and request another.
4. If the specimen is >48 hours old (from the time of draw), reject it and request another.

# V. Specimen Processing

A. Hematologic testing
1. Perform the hematologic tests within the time frame specified by the manufacturer of the specific hematology instrument used (time from blood specimen draw to hematologic test). (See Note under II.A.1.b.)
2. Perform an automated WBC count and differential, counting 10,000-30,000 cells (58 ). If the specimen is rejected or "flagged" by the instrument, a manual differential of at least 400 cells can be performed. If the flag is not on the lymphocyte population and the lymphocyte differential is reported by the instrument, the automated lymphocyte differential should be used.
3. If absolute counts are determined by using a single-platform method, hematology results are not needed for this determination.
B. Immunophenotyping
1. For optimal results, perform the test within 30 hours, but no later than 48 hours, after drawing the blood specimen (59,60 ).
2. When centrifuging, maintain centrifugation forces of no greater than 400 g for 3-5 minutes for wash steps.
3. Vortex sample tubes to mix the blood and reagents and break up cell aggregates. Vortex samples immediately before analysis to optimally disperse cells.
4. Include a source of protein (e.g., fetal bovine serum or bovine serum albumin) in the wash buffer to reduce cell clumps and non-specific fluorescence.
5. Incubate all tubes in the dark during the immunophenotyping procedure.
6. Before analysis on the flow cytometer, be sure all samples have been adequately fixed. Although some of the commercial lysing/fixing reagents can inactivate cell-associated HIV, all tubes should be fixed after staining and lysing with 1%-2% buffered paraformaldehyde or formaldehyde. Note: The characteristics of paraformaldehyde and formaldehyde may vary between lots. They may also lose their effectiveness over time. Therefore, these fixatives should be made fresh weekly from electron-microscopy-grade aqueous stock.
7. Immediately after processing the specimens, store all stained samples in the dark and at refrigerator temperatures (39°F-50°F) until flow cytometric analysis. These specimens should be stored for no longer than 24 hours unless the laboratory can demonstrate that scatter and fluorescence patterns do not change for specimens stored longer.
8. If absolute counts are determined on the flow cytometer, follow the manufacturer's recommended protocols.
# VI. Monoclonal Antibody Panels
A. Monoclonal antibody panels must contain appropriate monoclonal antibody combinations to enumerate CD4+ and CD8+ T-cells and to ensure the quality of the results (61 ).
1. CD4 T-cells must be identified as being positive for both CD3 and CD4.
2. CD8 T-cells must be identified as being positive for both CD3 and CD8.
B. Two-color monoclonal antibody panels
1. The recommended two-color immunophenotyping antibody panel (Table 1) is delineated by CD nomenclature (62 ).
2. An abbreviated two-color panel should only be used for testing specimens from patients for whom CD4+ T-cell levels are being requested as part of sequential follow-up, and then only after consulting with the requesting clinician. Because some of the internal controls are no longer included, when using an abbreviated panel, the immunophenotyping results should be reviewed carefully to ensure that CD3+ T-cell levels are similar to those determined previously with the full recommended panel. When discrepancies occur, the specimens must be reprocessed by using the full recommended two-color monoclonal antibody panel.
# VII. Negative and Positive Controls for Immunophenotyping
A. Negative (isotype) reagent control
1. Use this control to determine nonspecific binding of the mouse monoclonal antibody to the cells and to set markers for distinguishing fluorescence-negative and fluorescence-positive cell populations.
2. Use a monoclonal antibody with no specificity for human blood cells but of the same isotype(s) as the test reagents. Note: In many cases, the isotype control may not be optimal for controlling nonspecific fluorescence because of differences in F/P ratio, antibody concentration between the isotype control and the test reagents, and other characteristics of the immunoglobulin in the isotype control. Additionally, isotype control reagents from one manufacturer are not appropriate for use with test reagents from another manufacturer.
3. The isotype control is not needed for use with CD45 because CD45 is used to identify leukocyte populations based on fluorescence intensity.
4. For monoclonal antibody panels containing antibodies to CD3, CD4, and CD8, the isotype control may not be needed because labeling with these antibodies results in fluorescence patterns in which the unlabeled cells are clearly separated from the labeled cells. In these instances, the negative cells in the histogram are the appropriate isotype control.
5. The isotype control must be used when a monoclonal antibody panel contains monoclonal antibodies that label populations that do not have a distinct negative population (e.g., some CD16 or CD56 monoclonal antibodies).
B. Positive methodologic control
1. The methodologic control is used to determine whether procedures for preparing and processing the specimens are optimal. This control is prepared each time specimens from patients are prepared.
2. Use either a whole-blood specimen from a control donor or commercial materials validated for this purpose. Ideally, this control will match the population of patients tested in the laboratory. (See Section XII.D.)
3. If the methodologic control falls outside established normal ranges, determine the reason. Note: The purpose of the methodologic control is to detect problems in preparing and processing the specimens. Biologic factors that cause only the whole-blood methodologic control to fall outside normal ranges do not invalidate the results from other specimens processed at the same time. Poor lysis or poor labeling in all specimens, including the methodologic control, invalidates results.
C. Positive control for testing reagents
1. Use this control to test the labeling efficiency of new lots of reagents or when the labeling efficiency of the current lot is questioned. Prepare this control only when needed (i.e., when reagents are in question) in parallel with lots of reagents of known acceptable performance. Note: New reagents must demonstrate similar results to those of known acceptable performance.
2. Use a whole-blood specimen or other human lymphocyte preparation (e.g., cryopreserved or commercially obtained lyophilized lymphocytes).
# VIII. Flow Cytometer Quality Control (29 )
A. Align optics daily. This ensures that the brightest and tightest peaks are produced in all parameters. Note: Some clinical flow cytometers can be aligned by laboratory personnel whereas others can be aligned only by qualified service personnel.
1. Align the flow cytometer by using stable calibration material (e.g., microbeads labeled with fluorochromes) that has measurable forward scatter, side scatter, and fluorescence peaks.
2. Align the calibration particles optimally in the path of the laser beam and in relation to the collection lens so the brightest and tightest peaks are obtained.
3. Align stream-in-air flow cytometers daily (at a minimum) and stream-in-cuvette flow cytometers (most clinical flow cytometers are this type) as recommended by the manufacturer.
B. Standardize daily. This ensures that the flow cytometer is performing optimally each day and that its performance is the same from day to day.
1. Select machine settings that are optimal for fluorochrome-labeled, whole-blood specimens.
2. Use microbeads or other stable standardization material to place the scatter and fluorescence peaks in the same scatter and fluorescence channels each day. Adjust the flow cytometer as needed.
3. Maintain records of all daily standardizations. Monitor these to identify any changes in flow cytometer performance.
4. Retain machine standardization settings for the remaining quality control procedures (sensitivity and color compensation) and for reading the specimens.
C. Determine fluorescence resolution daily. The flow cytometer must differentiate between the dim peak and autofluorescence in each fluorescence channel.
1. Evaluate standardization/calibration material or cells that have low-level fluorescence that can be separated from autofluorescence (e.g., microbeads with low-level and negative fluorescence or CD56-labeled lymphocyte preparation).
2. Establish a minimal acceptable distance between peaks, monitor this difference, and correct any daily deviations.
D. Compensate for spectral overlap daily. This step corrects the spectral overlap of one fluorochrome into the fluorescence spectrum of another.
1. Use either microbead or cellular compensation material containing three populations for two-color immunofluorescence (no fluorescence, PE fluorescence only, and FITC fluorescence only), four populations for three-color immunofluorescence (the three above plus a population that is positive for only the third color), or five populations for four-color (the four above plus a population that is positive for only the fourth color).
2. Analyze this material and adjust the electronic compensation circuits on the flow cytometer to place the fluorescent populations in their respective fluorescence quadrants with no overlap into the double-positive quadrant (Figure 1). If three fluorochromes are used, compensation must be carried out in an appropriate sequence: FITC, PE, and the third color, respectively (64 ). For four-color monoclonal antibody panels, follow the flow cytometer manufacturer's instructions for four fluorochromes.
Avoid over-compensation.
3. If standardization or calibration particles (microbeads) have been used to set compensation, confirm proper calibration by using lymphocytes labeled with FITC-and PE-labeled monoclonal antibodies (and a thirdcolor-or fourth-color-labeled monoclonal antibody for three-color or four-color panels) that recognize separate cell populations but do not overlap. These populations should have the brightest expected signals.
Note: If a dimmer-than-expected signal is used to set compensation, suboptimal compensation for the brightest signal can result.
4. Reset compensation when photomultiplier tube voltages or optical filters are changed.
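For readers unfamiliar with what the compensation circuits accomplish algebraically, the sketch below expresses two-color spectral compensation as linear unmixing. This is an illustration of the underlying arithmetic only; on clinical instruments compensation is set electronically as described above, and the spillover fractions used here are hypothetical.

```python
# Two-color spectral overlap can be modeled as a 2x2 linear system:
#   FITC_measured = FITC_true + (PE-into-FITC fraction) * PE_true
#   PE_measured   = PE_true   + (FITC-into-PE fraction) * FITC_true
# Compensation recovers the true signals by inverting this system.
# The spillover fractions below are hypothetical.

def compensate_two_color(fitc_meas: float, pe_meas: float,
                         fitc_into_pe: float, pe_into_fitc: float):
    """Return (FITC_true, PE_true) for one event."""
    det = 1.0 - fitc_into_pe * pe_into_fitc  # determinant of the spillover matrix
    fitc_true = (fitc_meas - pe_into_fitc * pe_meas) / det
    pe_true = (pe_meas - fitc_into_pe * fitc_meas) / det
    return fitc_true, pe_true

# Example: 15% of the FITC signal spills into the PE detector, 1% of PE into FITC.
print(compensate_two_color(1000.0, 200.0, fitc_into_pe=0.15, pe_into_fitc=0.01))
# -> (~999.5, ~50.1): most of the apparent PE signal was FITC spillover.
```

Over-compensation corresponds to subtracting more spillover than is actually present, which drags truly positive events below the axis; this is why the text cautions against it.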
E. Repeat all four instrument quality control procedures whenever instrument problems occur or if the instrument is serviced during the day.
F. Maintain instrument quality-control logs, and monitor them continually for changes in any of the parameters. In the logs, record instrument settings, peak channels, and coefficient of variation (CV) values for optical alignment, standardization, fluorescence resolution, and spectral compensation. Reestablish fluorescence levels for each quality-control procedure when lot numbers of beads are changed.
# IX. Sample Analyses
A. For the two-color immunophenotyping panel using a light-scatter gate, analyze the sample tubes of each patient's specimen in the following order:
1) The tube containing CD45 and CD14 (gating reagent): read this tube first so that gates can be set around the lymphocyte cluster; 2) Isotype control: set cursors for differentiating positive and negative populations so that ≤2% of the cells are positive; and 3) Remaining tubes in the panel.
1. Count at least 2,500 gated lymphocytes in each sample. This number ensures with 95% confidence that the result is ≤2% standard deviation (SD) of the "true" value (binomial sampling). Note: This model assumes that variability determined from preparing and analyzing replicates is ≤2% SD. Each laboratory must determine the level of variability by preparing and analyzing at least eight replicates of the last four tubes in the recommended panel. Measure variability when first validating the methodology used and again when methodologic changes are made.
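The binomial sampling argument behind the 2,500-event minimum can be checked directly. The sketch below, with assumed event counts, shows that at 2,500 gated events the standard deviation of a measured subset percentage is at most about 1 percentage point, so a two-SD (roughly 95%) interval stays within about 2% of the true value.

```python
import math

def percent_sd(true_pct: float, n_events: int) -> float:
    """Binomial SD, in percentage points, of a subset percentage
    estimated from n gated events."""
    p = true_pct / 100.0
    return 100.0 * math.sqrt(p * (1.0 - p) / n_events)

# The worst case is a subset near 50%:
print(percent_sd(50, 2500))  # 1.0 -> two SDs = 2 percentage points
print(percent_sd(20, 2500))  # ~0.8
print(percent_sd(50, 1000))  # ~1.6 -> fewer events, wider interval
```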
# X. Data Analysis
A. Light-scatter gate (for two-color panels).
1. Reading from the sample tube containing CD45 and CD14, draw lymphocyte gates using forward and side light-scattering patterns and fluorescence staining.
a. When using CD45 and CD14 and light-scattering patterns for drawing lymphocyte gates, define populations on the following basis:
- Lymphocytes stain brightly with CD45 and are negative for CD14.
- Monocytes and granulocytes have greater forward and side lightscattering properties than lymphocytes.
- Monocytes are positive for CD14 and have intermediate to high intensity for CD45.
- Granulocytes are dimly positive for CD14 and show less intense staining with CD45.
- Debris, red cells, and platelets show lower forward scattering than lymphocytes and do not stain specifically with CD45 or CD14.
b. Using the above characteristics, draw a light-scattering gate around the lymphocyte population (66 ). Note: Other methods for drawing a lymphocyte gate must accurately identify lymphocytes and account for non-lymphocyte contamination of the gate.
2. Verify the lymphocyte gate by determining the recovery of lymphocytes within the gate and the lymphocyte purity of the gate.
a. Definitions
- The lymphocyte recovery (previously referred to as the proportion of lymphocytes within the gate) is the percentage of lymphocytes in the sample that are within the gate.
- The lymphocyte purity of the gate is the percentage of cells within the gate that are lymphocytes. The remainder may be monocytes, granulocytes, red cells, platelets, and debris.
b. Optimally, the lymphocyte recovery should be ≥95%.
c. Optimally, the lymphocyte purity of the gate should be ≥90%.
d. Optimal gates include as many lymphocytes and as few contaminants as possible.
e. Lymphocyte recovery within the gate using CD45 and CD14 can be determined by two different methods: light-scatter gating and fluorescence gating (Figures 2 and 3). Note: The number of lymphocytes identified will be the same whether determined by light-scatter gating or by fluorescence gating.
- Lymphocyte recovery determined by light-scatter gating is done as follows: first, identify the lymphocytes by setting a relatively large light-scatter gate (Figure 2, Panel A), then set an analysis region around CD45 and CD14 lymphocyte reactivity (bright CD45-positive, negative for CD14) (Figure 2, Panel B). Determine the number of cells that meet both criteria (total number of lymphocytes). Set a smaller lymphocyte light-scatter gate that will be used for analyzing the remaining tubes (Figure 2, Panel C). Determine the number of cells that fall within this gate and the CD45/ CD14 analysis region (bright CD45-positive, negative for CD14) (Figure 2, Panel D). This number divided by the total number of lymphocytes times 100 is the lymphocyte recovery. The advantage of this method is that it can easily be done on most software programs.
- Lymphocyte recovery determined by fluorescence gating is done as follows. First, identify lymphocytes by setting a fluorescence gate around the bright CD45-positive, CD14-negative cells (Figure 3, Panel A), then set an analysis region around a large light-scatter region that includes lymphocytes (Figure 3, Panel B). The number of cells that meet both criteria is the total number of lymphocytes. Set a smaller lymphocyte light-scatter gate that will be used for analyzing the remaining tubes (Figure 3, Panel C). Determine the number of cells that fall within this gate and the CD45/CD14 analysis region (bright CD45+, negative for CD14) (Figure 3, Panel D). This number divided by the total number of lymphocytes times 100 is the lymphocyte recovery. The advantage of this method is that the light-scatter pattern of lymphocytes can be easily determined. Note: Some instrument software packages automatically determine lymphocyte recovery by fluorescence gating; others do not.
f. The lymphocyte purity of the gate is determined from the CD45 and CD14 tube by calculating the percentage of cells in the light-scattering gate that are bright CD45-positive and negative for CD14.
B. CD45/side scatter gating
3. CD45/side scatter gates for lymphocytes are assumed to contain >95% lymphocytes, and no further corrections need be made to the percentage subset results (65 ).
4. Lymphocyte recovery cannot be determined without using a panel of monoclonal antibodies that identify T-, B-, and NK-cells. Note: Validation of a CD45/side scatter gate is recommended when beginning to use CD45/side scatter gates to help determine the CD45 and side scatter characteristics of T-, B-, and NK-cells and to ensure their inclusion in the gate.
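The recovery and purity definitions above reduce to two ratios. A minimal sketch follows, using hypothetical event counts from the CD45/CD14 tube; the function names are illustrative, not part of the guidelines.

```python
# Hypothetical event counts from the CD45/CD14 gating tube.

def lymphocyte_recovery(lymphs_in_gate: int, total_lymphs: int) -> float:
    """Percentage of all lymphocytes in the sample that fall within the
    analysis gate (optimally >= 95%, minimally >= 90%)."""
    return 100.0 * lymphs_in_gate / total_lymphs

def gate_purity(lymphs_in_gate: int, total_events_in_gate: int) -> float:
    """Percentage of gated events that are lymphocytes (optimally >= 90%);
    the remainder is monocytes, granulocytes, red cells, platelets, or debris."""
    return 100.0 * lymphs_in_gate / total_events_in_gate

# Example: 2,450 of 2,500 total lymphocytes land in a gate holding 2,650 events.
print(lymphocyte_recovery(2450, 2500))  # 98.0  -> meets the optimal >= 95%
print(gate_purity(2450, 2650))          # ~92.5 -> meets the optimal >= 90%
```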
C. Set cursors using the isotype control so that <2% of cells are positive. Note:
If an isotype control is not used, set cursors based on the tube containing CD3 and CD4 so that the negative and positive cells in the histogram are clearly separated. These cursors may be used for the remaining tubes. If CD16 and/or CD56 are included in a monoclonal antibody panel, an isotype control may be needed to help identify negative cells.
D. Analyze the remaining samples with the cursors set. Note: In some instances, the isotype-set cursors will not accurately separate positive and negative staining for another sample tube from the same specimen. In such cases, the cursors can be moved on that sample to more accurately separate these populations. The cursors should not be moved when fluorescence distributions are continuous with no clear demarcation between positively and negatively labeled cells.
E. Analyze each patient or control specimen with lymphocyte gates and cursors for positivity set for that particular patient or control.
F. When spectral compensation of a particular specimen appears to be inappropriate because FITC-labeled cells have been dragged into the PE-positive quadrant or vice-versa (when compensation on all other specimens is appropriate) (67 ), repeat the sample preparation, prewashing the specimen with phosphate-buffered saline (PBS) (pH 7.2) to remove plasma before monoclonal antibodies are added.
G. Include the following analytic reliability checks, when available:
1. Optimally, at least 95% lymphocyte recovery (proportion of lymphocytes within the lymphocyte gate) should be achieved. Minimally, at least 90% lymphocyte recovery should be achieved. Note: These determinations can only be made when using either CD14 and CD45 to validate the gate or when using T, B, and NK reagents to validate a gate.
2. Optimally, ≥90% lymphocyte purity should be observed within the lymphocyte gate. Minimally, ≥85% purity should be observed within the gate.
3. Optimally, the sum of the percentage of CD3+CD4+ and CD3+CD8+ cells should equal the total percentage of CD3+ cells within ±5%, with a maximum variability of ≤10%. Note: In specimens containing a considerable number of T γδ cells (68,69 ), this reliability check may exceed the maximum variability.
4. When the percentages of T-cells, B-cells, and NK-cells are summed, the sum should ideally equal 95%-105% (or at a minimum 90%-110%).
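These reliability checks are easy to automate. The sketch below applies the tolerances stated above to hypothetical subset percentages; names and values are illustrative only.

```python
def check_t_cell_sum(cd3_cd4_pct: float, cd3_cd8_pct: float,
                     cd3_total_pct: float, tolerance: float = 5.0) -> bool:
    """True if CD3+CD4+ % plus CD3+CD8+ % falls within `tolerance`
    percentage points of the total CD3+ % (optimal +/-5, maximum +/-10)."""
    return abs((cd3_cd4_pct + cd3_cd8_pct) - cd3_total_pct) <= tolerance

def check_lymphocyte_sum(t_pct: float, b_pct: float, nk_pct: float,
                         low: float = 95.0, high: float = 105.0) -> bool:
    """True if the T + B + NK percentages sum to the expected range
    (ideally 95%-105%, minimally 90%-110%)."""
    return low <= (t_pct + b_pct + nk_pct) <= high

print(check_t_cell_sum(40, 32, 70))      # True: 72 is within 5 points of 70
print(check_lymphocyte_sum(70, 15, 12))  # True: the sum is 97
```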
# XI. Data Storage
A. If possible, store list-mode data on all specimens analyzed. This allows for reanalysis of the raw data, including redrawing of gates. At a minimum, retain hard copies of the lymphocyte gate and correlated dual histogram data of the fluorescence of each sample.
B. Retain all primary files, worksheets, and report forms for 2 years or as required by state or local regulation, whichever is longer. Data can be stored electronically. Disposal after the retention period is at the discretion of the laboratory director.

# XII. Data Reporting

B. If using light-scatter gates, report data as a percentage of the total lymphocytes and correct for the lymphocyte purity of the gate. For example, if the lymphocyte purity is 94% and the CD3 value is 70%, correct the CD3 value by dividing 0.70 by 0.94 and then multiplying the result by 100, yielding a T-lymphocyte value of 74%.
C. Report absolute lymphocyte subset values when an automated complete blood cell (CBC) count (WBC and differential) has been performed from blood drawn at the same time as that for immunophenotyping.
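The purity correction in XII.B is a single division; the sketch below reproduces the worked example from the text (a 70% CD3+ result in a gate of 94% lymphocyte purity).

```python
def purity_corrected_pct(raw_pct: float, gate_purity_pct: float) -> float:
    """Correct a subset percentage for non-lymphocyte events in the gate."""
    return 100.0 * (raw_pct / 100.0) / (gate_purity_pct / 100.0)

print(round(purity_corrected_pct(70, 94)))  # 74, the corrected CD3+ value
```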
# XIII. Quality Assurance
A. Assure the overall quality of the laboratory's CD4+ T-cell testing by monitoring and evaluating the effectiveness of the laboratory policies and procedures for the preanalytic, analytic, and postanalytic testing phases. The practices and processes to be monitored and evaluated include:
1. Methods for collecting, handling, transporting, identifying, processing, and storing specimens.
2. Information provided on test request and results report forms.
3. Instrument performance, quality-control protocols, and maintenance.
4. Reagent quality-control protocols.
5. Process for reviewing and reporting results.
8. Review and revision (as necessary, or at established intervals) of the laboratory's policies and procedures to assure adherence to the quality assurance program. All staff involved in the testing should be informed of any problems identified during the quality assurance review, and corrective actions should be taken to prevent recurrences.
B. Document all quality assurance activities.
# LABORATORY VALIDATION OF SINGLE-PLATFORM CD4+ T-CELL METHODS
When performing method-validation studies on the new single-platform methods for enumerating CD4+ T-cell populations, laboratorians must consider that these assays may determine the absolute CD4+ count using methodologies that are very different from multi-platform techniques. In most clinical settings, multi-platform methods do not perform at the level of a gold standard. Still, the single-platform methods must be compared with accepted methods or testing procedures. When no optimal standard exists and bias is present, the amount of error contributed by each method cannot be determined. Therefore, if results yielded from a single-platform method are significantly different from those obtained using a multi-platform method, the new method is not necessarily in error. Conducting a large-scale study correlating results from single-platform methods with clinical disease data to establish new medical decision points may be the only surrogate for comparison with a gold standard. Laboratories should not adopt methods that yield results significantly different from multi-platform methods until these studies can be performed, published, and accepted by the scientific and medical communities.
Traditional method comparison tools may be used for validation of single-platform methods that compare favorably with multi-platform methods. Single-platform methods, as the name implies, derive the absolute CD4+ T-cell counts from a single measurement and therefore have the potential to yield a less variable (although not necessarily more accurate) analysis than multi-platform methods, which utilize a combination of hematology and flow cytometry measurements. Laboratorians should utilize statistical tools that provide useful information about these new methodologies but that do not presume that either the comparative or test method is definitive. Linear least squares regression analysis must be conducted based on the assumption that no error exists in the comparative method, and regression-type scatter plots provide inadequate resolution when the errors are small in comparison to the analytical range (70,71 ). The bias scatterplot may provide laboratorians with a more useful tool for determining bias (Figure 4). These simple, high-resolution graphs plot the difference in the individual measurements of each method (X[test method] - X[comparative method]) against the values from one of the methods (X[comparative method]) (70 ). Such graphs provide an easy means of determining if bias is present and distinguishing if bias is systematic, proportional, or random/non-constant. The laboratorian may visually determine the significance of these differences over the entire range of values, and when sufficient values are plotted, outliers and/or samples containing interfering substances can be identified. The laboratorian may then divide the data into ranges relevant to medical decisions and calculate the systematic error (mean of the bias), the random error (standard deviation of the bias), and total error (the greatest absolute 95% error limit, i.e., the systematic error ± twice the random error) to gain insight into analytical performance at the specified decision points (70,71 ). Several detailed guidelines and texts can provide laboratorians with additional information regarding quality goals, method evaluation, estimation of bias, and bias scatter plots (70)(71)(72)(73)(74)(75)(76). Once a new method is accepted and implemented, the laboratory should continue to monitor the correlation between the results and the patient's clinical disease data to ensure that no problems have gone undetected by the relatively few samples typically tested during method evaluations.
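To make the bias terminology concrete, the sketch below computes the systematic, random, and total error for paired counts from a hypothetical test (single-platform) and comparative (multi-platform) method. The data are invented for illustration; a real evaluation would use many more pairs, stratified by medical decision range.

```python
from statistics import mean, stdev

# Hypothetical paired absolute CD4+ counts (cells/uL).
comparative = [850, 600, 420, 310, 180, 95, 60]   # multi-platform method
test_method = [830, 615, 400, 320, 170, 100, 55]  # candidate single-platform

# Bias for each pair: X[test] - X[comparative]; plotting bias against the
# comparative value gives the bias scatterplot described above.
bias = [t - c for t, c in zip(test_method, comparative)]

systematic_error = mean(bias)   # mean of the bias
random_error = stdev(bias)      # SD of the bias
# Total error: the greatest absolute 95% limit of systematic +/- 2 x random.
total_error = max(abs(systematic_error - 2 * random_error),
                  abs(systematic_error + 2 * random_error))

print(f"systematic error: {systematic_error:+.1f} cells/uL")
print(f"random error:     {random_error:.1f} cells/uL")
print(f"total error:      {total_error:.1f} cells/uL")
```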
# DISCUSSION
On the basis of the reported number of tests performed annually by laboratories participating in CDC's Model Performance Evaluation Program for T-lymphocyte immunophenotyping in 1995, more than 1.6 million CD4+ T-cell measurements are performed yearly by the approximately 600 testing laboratories in the United States (77 ). Most of these measurements are made by using multi-platform flow cytometric methods, although new single-platform methods (both flow cytometric and others) are available (78)(79)(80)(81)(82)(83)(84)(85). Recommendations concerning CD4+ T-lymphocyte immunophenotyping have focused on the more complex multi-platform process of measuring CD4+ T-cells. The recommendations for testing have increasingly been adopted (86 ), and as a result, laboratorians have reported improved testing practices (86,87 ). Testing outcomes associated with following the recommendations include a) increased confidence in results, b) more reproducible results, c) increased ability to resolve discrepant problems, d) decreased proportion of unacceptable specimens received for testing, e) decreased proportion of specimens requiring reanalysis, and f) decreased incidents that could pose biohazard risks (86 ).
Although data suggest that guidelines for CD4+ T-cell lymphocyte immunophenotyping have improved many laboratory practices, developing guidelines that address every aspect of CD4+ T-cell testing (including some laboratory-specific practices) is not possible. Moreover, measuring the outcomes associated with the adoption of these guidelines is inherently difficult. First, the guidelines lack evaluation protocols that can adequately account for the interactions among recommendations. No weight of importance has been assigned for the individual recommendations that address unique steps in the testing process; hence, the consequences of incompletely following the entire set of recommendations are uncertain. Second, because published data were not available as the basis for every guideline, some recommendations are based on experience and expert opinion. Recommendations made on this basis, in the absence of data, may be biased and inaccurate. Finally, variations in testing practices and interactions among the practices (e.g., how specimens are obtained and processed, laboratory personnel credentials and experience, testing methods used, test-result reporting practices, and compliance with other voluntary standards and laboratory regulations) complicate both development of guidelines that will fit every laboratory's unique circumstances and measurement of the value of guideline implementation.
When the first CDC recommendations for laboratory performance of CD4+ T-cell testing were developed, the guidelines were written so as not to impede development of new technology or investigations into better ways to assess the status of the immune system in HIV-infected persons. Presentations at the second national conference in Atlanta indicated that although CD4+ T-cell testing by multi-platform flow cytometry is still being performed by most laboratories, other single-platform methods are being implemented. In addition, alternative T-cell phenotypic markers are being investigated as prognostic indicators or markers of treatment efficacy, alone and in combination with other markers (88 ).
Participants at the second national conference emphasized the need for monitoring the intralaboratory and interlaboratory accuracy, precision, and reliability of current and new procedures. Decisions about implementing and modifying procedures should be based on performance data collected to assess the extent to which the quality goals established by providers and users of laboratory testing services are achieved (76 ). In testing areas where no absolute gold standards exist (e.g., CD4+ T-cell enumeration), method validation and verification processes are even more critical. Laboratorians should continue to rely on as many sources of information and data as possible to help in their decision processes. Factors that have contributed to improved testing practices and that are important resources for laboratorians include regulatory and voluntary laboratory standards (29,31,32,34,89 ); manufacturer's recommendations; proficiency testing and performance evaluation program data; information shared at scientific conferences, meetings, and training sessions; and publications in scientific literature.
# APPENDIX. Effects of medications and other biologic factors on immunophenotyping results
Hepatitis B vaccination is the most effective measure to prevent hepatitis B virus (HBV) infection and its consequences, including cirrhosis of the liver, liver cancer, liver failure, and death. In adults, ongoing HBV transmission occurs primarily among unvaccinated persons with behavioral risks for HBV transmission (e.g., heterosexuals with multiple sex partners, injection-drug users [IDUs], and men who have sex with men [MSM]) and among household contacts and sex partners of persons with chronic HBV infection. This report, the second of a two-part statement from the Advisory Committee on Immunization Practices (ACIP), provides updated recommendations to increase hepatitis B vaccination of adults at risk for HBV infection. The first part of the ACIP statement, which provided recommendations for immunization of infants, children, and adolescents, was published previously (CDC. A comprehensive immunization strategy to eliminate transmission of hepatitis B virus infection in the United States: recommendations of the Advisory Committee on Immunization Practices [ACIP] part 1: immunization of infants, children, and adolescents. MMWR 2005;54[No. RR-16]).

# Introduction
Hepatitis B is a disease caused by the hepatitis B virus (HBV), which is transmitted through percutaneous (i.e., puncture through the skin) or mucosal (i.e., direct contact with mucous membranes) exposure to infectious blood or body fluids. HBV can cause chronic infection, resulting in cirrhosis of the liver, liver cancer, liver failure, and death. Persons with chronic infection also serve as the main reservoir for continued HBV transmission. Although chronic infection is more likely to develop in persons infected as infants or young children, rates of new infection and acute disease are highest among adults.
Hepatitis B vaccination is the most effective measure to prevent HBV infection and its consequences. Since recommendations for hepatitis B vaccination were first issued in 1982, a comprehensive strategy to eliminate HBV transmission in the United States has evolved (1)(2)(3)(4)(5). This strategy includes 1) universal vaccination of infants beginning at birth, 2) prevention of perinatal HBV infection through routine screening of all pregnant women for hepatitis B surface antigen (HBsAg) and postexposure immunoprophylaxis of infants born to HBsAg-positive women or to women with unknown HBsAg status, 3) vaccination of all children and adolescents who were not vaccinated previously, and 4) vaccination of previously unvaccinated adults at risk for HBV infection (Box 1).
To date, immunization strategies for infants, children, and adolescents have been implemented with considerable success. Recent estimates indicate that approximately 95% of pregnant women are tested for HBsAg and that case management has been effective in ensuring high levels of initiation and completion of postexposure immunoprophylaxis among infants born to HBsAg-positive women (6). Hepatitis B vaccine has been integrated successfully into the childhood vaccination schedule, and infant vaccination coverage levels now are equivalent to those of other vaccines in the childhood vaccination schedule (7). Vaccination coverage among adolescents also has increased substantially; preliminary data from 2003 indicated that approximately 50%-60% of adolescents aged 13-15 years have records indicating vaccination (with 3 doses) against hepatitis B (8). During 1990-2005, incidence of acute hepatitis B in the United States declined 78%. The greatest decline (96%) occurred among children and adolescents, coincident with an increase in hepatitis B vaccination coverage. This success can be attributed in part to the established infrastructure for vaccine delivery to children and to federal support for perinatal hepatitis B prevention programs.

Among adults, ongoing HBV transmission occurs primarily among unvaccinated adults with risk behaviors for HBV transmission (e.g., heterosexuals with multiple sex partners, injection-drug users [IDUs], and men who have sex with men [MSM]) and among household contacts and sex partners of persons with chronic HBV infection. During 2000-2004, self-reported hepatitis B vaccination coverage among adults at risk for HBV infection increased from 30% to 45% (9); this increase in vaccination coverage likely contributed to the 35% decline in acute hepatitis B incidence that occurred during this period (from 3.7 to 2.4 per 100,000 population). However, incidence of acute hepatitis B remains highest among adults, who accounted for approximately 95% of an estimated 51,000 new HBV infections in 2005. Although acceptance of vaccination is high among adults offered vaccination (10), the low adult vaccination coverage reflects the lack of hepatitis B vaccination services in settings in which a high proportion of adults have risk factors for HBV infection (e.g., sexually transmitted disease [STD]/human immunodeficiency virus [HIV] testing and treatment facilities, drug-abuse treatment and prevention settings, health-care settings targeting services to IDUs, health-care settings targeting services to MSM, and correctional facilities) and missed opportunities to vaccinate adults at risk for HBV infection in primary care and specialty medical settings. Although hepatitis B incidence among adults is expected to continue to decline during the next decade as successive cohorts of persons vaccinated in infancy, childhood, and adolescence reach adulthood, new implementation strategies are needed to protect unvaccinated adults at risk for HBV infection.

# BOX 1. Immunization strategy to eliminate transmission of hepatitis B virus (HBV) in the United States

- Universal vaccination of infants beginning at birth
- Prevention of perinatal HBV infection through
  - routine screening of all pregnant women for hepatitis B surface antigen (HBsAg), and
  - immunoprophylaxis of infants born to HBsAg-positive women or to women with unknown HBsAg status
- Routine vaccination of previously unvaccinated children and adolescents
- Vaccination of previously unvaccinated adults at risk for HBV infection
This report provides updated guidance from the Advisory Committee on Immunization Practices (ACIP) to increase hepatitis B vaccination coverage among adults. It includes recommendations regarding which adults should receive hepatitis B vaccine and outlines implementation strategies to ensure that those adults are vaccinated. The first part of this statement, which provided recommendations for immunization of infants, children, and adolescents, was published previously (11).
# Methods
In response to continuing low rates of hepatitis B vaccination among adults at risk for HBV infection, ACIP's Hepatitis Vaccines Work Group met multiple times during October 2004-September 2005 to review previous guidelines and make recommendations for improving vaccination coverage in adults. The work group examined the progress made since 1991 in implementing the U.S. strategy to eliminate HBV transmission (e.g., vaccination coverage data and hepatitis B disease rates), surveillance data on missed opportunities for hepatitis B vaccination among adults with acute hepatitis B, and results of cost-effectiveness analyses. In addition, demonstration projects conducted in settings in which a high proportion of clients were at risk for HBV infection identified the components of successful adult hepatitis B vaccination programs and ongoing challenges to implementing adult hepatitis B vaccination.
In January 2005, the proposed recommendations were posted online for public comment. In May 2005, CDC convened a meeting of external consultants, including researchers, physicians, state and local public health professionals, immunization program directors, and directors of viral hepatitis, STD, and HIV/AIDS prevention programs, to obtain input into the draft recommendations and consider the feasibility of the recommended strategies. In October 2005, the revised recommendations were approved by ACIP.
# Major Updates to the Recommendations
This report updates ACIP recommendations published previously for hepatitis B vaccination of adults (3). The primary changes from previous recommendations are as follows:
- In settings in which a high proportion of persons are likely to be at risk for HBV infection (e.g., STD/HIV testing and treatment facilities, drug-abuse treatment and prevention settings, health-care settings targeting services to IDUs, health-care settings targeting services to MSM, and correctional facilities), ACIP recommends universal hepatitis B vaccination for all adults who have not completed the vaccine series.
- In primary care and specialty medical settings, ACIP recommends implementation of standing orders to identify adults recommended for hepatitis B vaccination and administer vaccination as part of routine services. To ensure vaccination of adults at risk for HBV infection who have not completed the vaccine series, ACIP recommends the following implementation strategies:
-Provide information to all adults regarding the health benefits of hepatitis B vaccination, including risk factors for HBV infection and persons for whom vaccination is recommended. -Help all adults assess their need for vaccination by obtaining a history that emphasizes risks for sexual transmission and percutaneous or mucosal exposure to blood. -Vaccinate all adults who report risks for HBV infection.
-Vaccinate all adults requesting protection from HBV infection, without requiring them to acknowledge a specific risk factor.
# Background

# Clinical Features and Natural History of HBV Infection
HBV is a 42-nm DNA virus classified in the Hepadnaviridae family. The liver is the primary site of HBV replication. After a susceptible person is exposed, the virus enters the liver via the bloodstream; no evidence exists indicating that the virus replicates at mucosal surfaces. HBV infection can produce either asymptomatic or symptomatic infection. The average incubation period is 90 days (range: 60-150 days) from exposure to onset of jaundice and 60 days (range: 40-90 days) from exposure to onset of abnormal serum alanine aminotransferase (ALT) levels (12,13).
The onset of acute disease typically is insidious. Infants, children aged <5 years, and immunosuppressed adults typically are asymptomatic, whereas 30%-50% of persons aged >5 years have initial clinical signs or symptoms (14). When present, clinical symptoms and signs can include anorexia, malaise, nausea, vomiting, abdominal pain, and jaundice. Extrahepatic manifestations of disease (e.g., skin rashes, arthralgias, and arthritis) also can occur (15). The fatality rate among persons with reported cases of acute hepatitis B is 0.5%-1.0%, with the highest rates in adults aged >60 years; however, because a substantial number of infections are asymptomatic and therefore are not reported, the overall fatality rate among all persons with HBV infection likely is lower (16).
Approximately 95% of primary infections in adults with normal immune status are self-limited, with elimination of virus from blood and subsequent lasting immunity to reinfection. Chronic infection occurs in <5% of persons infected at age >5 years, approximately 30% of infected children aged <5 years, and approximately 90% of infected infants, with continuing viral replication in the liver and persistent viremia (14,(17)(18)(19).
Primary infections become chronic more frequently in immunosuppressed persons (e.g., hemodialysis patients and persons with HIV infection) (19,20) and persons with diabetes (21). Overall, approximately 25% of persons who become chronically infected during childhood and 15% of those who become chronically infected after childhood die prematurely from cirrhosis or liver cancer; the majority remain asymptomatic until onset of cirrhosis or end-stage liver disease (22).
No specific treatment exists for acute hepatitis B; supportive care is the mainstay of therapy. Persons who have chronic HBV infection require medical evaluation and regular monitoring (23)(24)(25). Therapeutic agents approved by the Food and Drug Administration (FDA) for treatment of chronic hepatitis B can achieve sustained suppression of HBV replication and remission of liver disease in certain persons (24). Periodic screening with ultrasonography and alpha-fetoprotein has been demonstrated to enhance early detection of hepatocellular carcinoma (HCC) (25). Certain chronically infected persons with HCC have experienced long-term survival after resection of small hepatocellular carcinomas, and persons who were screened had HCC detected at an earlier stage and had a substantial survival advantage compared with historical controls (25); however, data from controlled studies are lacking. Guidance for the diagnosis and management of hepatitis B is available (26).
# Interpretation of Serologic Markers of HBV Infection
Antigens and antibodies associated with HBV infection include HBsAg and antibody to HBsAg (anti-HBs), hepatitis B core antigen (HBcAg) and antibody to HBcAg (anti-HBc), and hepatitis B e antigen (HBeAg) and antibody to HBeAg (anti-HBe). At least one serologic marker is present during each of the different phases of HBV infection (13,27). The serologic markers typically used to differentiate between acute, resolving, and chronic infection are HBsAg, anti-HBc, and anti-HBs (Table 1). HBeAg and anti-HBe screening typically is used for the management of patients with chronic infection. Serologic assays are available commercially for all markers except HBcAg because no free HBcAg circulates in blood.
The presence of a confirmed HBsAg-positive result in serum indicates active HBV infection. All HBsAg-positive persons should be considered infectious. In newly infected persons, HBsAg is the only serologic marker detected during the first 3-5 weeks after infection. The average time from exposure to detection of HBsAg is 30 days (range: 6-60 days) (12,13). Highly sensitive single-sample nucleic acid tests can detect HBV DNA in the serum of an infected person 10-20 days before detection of HBsAg (28). Transient HBsAg positivity has been reported for up to 18 days after hepatitis B vaccination and is clinically insignificant (29,30).
§§ To ensure that an HBsAg-positive test result is not a false-positive, samples with reactive HBsAg results should be tested with a licensed neutralizing confirmatory test if recommended in the manufacturer's package insert.
¶¶ Persons positive only for anti-HBc are unlikely to be infectious except under unusual circumstances in which they are the source for direct percutaneous exposure of susceptible recipients to large quantities of virus (e.g., blood transfusion or organ transplant).
* Milli-international units per milliliter.
Anti-HBc appears at the onset of symptoms or liver-test abnormalities in acute HBV infection and persists for life. Acute or recently acquired infection can be distinguished by the presence of the immunoglobulin M (IgM) class of anti-HBc, which is detected at the onset of acute hepatitis B and persists for up to 6 months if the disease resolves. In patients who have chronic HBV infection, IgM anti-HBc can persist during viral replication at low levels that typically are not detectable by assays used in the United States. However, persons with exacerbations of chronic infection can test positive for IgM anti-HBc (31). Using IgM anti-HBc testing for diagnosis of acute hepatitis B should be limited to persons for whom clinical evidence of acute hepatitis or an epidemiologic link to a case has been identified because the positive predictive value of this test is low in asymptomatic persons.
In persons who recover from HBV infection, HBsAg is eliminated from the blood, and anti-HBs develops, typically within 3-4 months. The presence of anti-HBs typically indicates immunity from HBV infection. Infection or immunization with one serotype of HBV confers immunity to all serotypes. In addition, anti-HBs can be detected for several months after hepatitis B immune globulin (HBIG) administration. Persons who recover from natural infection typically will be positive for both anti-HBs and anti-HBc, whereas persons who respond to hepatitis B vaccine have only anti-HBs. In persons who become chronically infected, HBsAg and anti-HBc persist, typically for life. HBsAg will become undetectable in approximately 0.5%-2% of persons with chronic infection yearly; anti-HBs will occur in the majority of these persons (32)(33)(34)(35).
In certain persons, the only HBV serologic marker detected in serum is anti-HBc. Isolated anti-HBc can be detected after HBV infection in persons who have recovered but whose anti-HBs levels have waned. Certain chronically infected persons with anti-HBc alone have circulating HBsAg not detectable by commercial serology. HBV DNA has been detected in the blood of <10% of persons with isolated anti-HBc (36,37). These persons are unlikely to be infectious except under circumstances in which they are a source for direct percutaneous exposure of susceptible recipients to substantial quantities of virus (e.g., through blood transfusion or organ transplantation) (38). An isolated anti-HBc result also can be a false-positive. Typically, the frequency of isolated anti-HBc relates directly to the prevalence of HBV infection in the population. In populations with a high prevalence of HBV infection, isolated anti-HBc likely indicates previous infection, with loss of anti-HBs. For persons in populations with a low prevalence of HBV infection, isolated anti-HBc is found in approximately 10%-20% of persons with serologic markers of HBV infection (37) and often represents a false-positive reaction; the majority of these persons have a primary anti-HBs response after a 3-dose series of hepatitis B vaccine (39,40).
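The marker patterns described above and summarized in Table 1 reduce to a simple classification rule. The following is a minimal illustrative sketch of how that logic might be encoded; the class, function, and field names are hypothetical and are not part of this statement:

```python
# Minimal illustrative sketch of the Table 1 marker logic described above.
# Names are hypothetical, not part of the ACIP statement.
from dataclasses import dataclass

@dataclass
class HBVPanel:
    hbsag: bool         # hepatitis B surface antigen
    anti_hbc: bool      # total antibody to hepatitis B core antigen
    igm_anti_hbc: bool  # IgM-class anti-HBc
    anti_hbs: bool      # antibody to HBsAg at a protective level

def interpret(panel: HBVPanel) -> str:
    if panel.hbsag:
        if panel.igm_anti_hbc:
            return "acute (or recently acquired) infection"
        if panel.anti_hbc:
            return "chronic infection"
        # HBsAg is the only marker during the first 3-5 weeks after infection.
        return "early infection (anti-HBc not yet detectable)"
    if panel.anti_hbs and panel.anti_hbc:
        return "resolved natural infection"
    if panel.anti_hbs:
        return "immune through vaccination"
    if panel.anti_hbc:
        return "isolated anti-HBc (waned anti-HBs, occult infection, or false positive)"
    return "susceptible"

# Example: anti-HBs together with anti-HBc indicates resolved natural infection.
print(interpret(HBVPanel(hbsag=False, anti_hbc=True, igm_anti_hbc=False, anti_hbs=True)))
```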
HBeAg can be detected in the serum of persons with acute or chronic HBV infection. The presence of HBeAg correlates with high levels of viral replication (i.e., HBV DNA levels typically of 10^7-10^9 IU/mL, indicating high infectivity) (41,42). Loss of HBeAg correlates with low levels (i.e., HBV DNA levels of <10^5 IU/mL) of replicating virus, although certain HBeAg-negative persons have HBV DNA levels up to 10^8-10^9 IU/mL (43). A mutation in the precore region of the HBV genome has been identified in HBeAg-negative persons with high HBV DNA levels (44,45).
# Epidemiology of HBV Infection
HBV is transmitted by percutaneous or mucosal exposure to infectious blood or body fluids. Although HBsAg has been detected in multiple body fluids, only serum, semen, and saliva have been demonstrated to be infectious (46,47). HBV is concentrated most highly in serum, with lower concentrations in semen and saliva. All HBsAg-positive persons are infectious, but those who are also HBeAg positive are more infectious because their blood contains high titers of HBV (typically HBV DNA levels of 10^7-10^9 IU/mL) (41,42). HBV is comparatively stable in the environment and remains viable for >7 days on environmental surfaces at room temperature (48). HBV DNA at concentrations of 10^2-10^3 IU/mL can be present on environmental surfaces in the absence of any visible blood and still cause transmission (48,49).
For adults, the two primary sources of HBV infection are sexual contact and percutaneous exposure to blood. Person-to-person transmission of HBV also can occur in settings involving nonsexual interpersonal contact over an extended period (e.g., among household contacts of a person with chronic HBV infection and developmentally disabled persons living in a long-term-care facility).
HBV is transmitted efficiently by sexual contact among heterosexuals and among MSM. Risk factors associated with sexual transmission among heterosexuals include having unprotected sex with an infected partner, having unprotected sex with more than one partner, and history of another STD. Risk factors associated with sexual transmission among MSM include having multiple sex partners, history of another STD, and anal intercourse.
Percutaneous transmission of HBV can occur from receipt of blood transfusion or organ or tissue transplant from an infectious donor; injection-drug use, including sharing of injection-preparation equipment; and frequent exposure to blood or needles among health-care workers. In the United States, donor selection procedures and routine testing of donors have made transmission of HBV via transfusion of whole blood and blood components a rare occurrence (50,51). Persons with hemophilia who received plasma-derived clotting factor concentrates were previously at high risk for HBV infection, but such transmission has been eliminated through viral inactivation procedures and use of recombinant clotting factor concentrates. Among persons with bleeding disorders treated at U.S. hemophilia treatment centers during 1998-2002, no infections with viral hepatitis, including HBV, were attributable to blood products received during that time (52). Outbreaks of HBV infection from exposure to contaminated equipment used for therapeutic injections and other health-care-related procedures, tattooing, and acupuncture also have been reported, although such exposures among patients with acute hepatitis B are reported rarely (53)(54)(55)(56)(57). In the majority of cases, transmission resulted from noncompliance with aseptic techniques for administering injections and recommended infection-control practices designed to prevent cross-contamination of medical equipment and devices. No infections have been demonstrated in susceptible persons who had oral mucous membrane exposure to HBsAg-positive saliva, but transmission has occurred through a human bite and has been demonstrated in animals by subcutaneous inoculation of saliva (46,(58)(59)(60).
Persons living with chronically infected persons are at risk for HBV infection through percutaneous or mucosal exposures to blood or infectious body fluids (e.g., sharing a toothbrush or razor, contact with exudates from dermatologic lesions, or contact with HBsAg-contaminated surfaces). Persons with chronic HBV infection also can transmit HBV in other settings (e.g., schools, child care centers, or facilities for developmentally disabled persons), especially if they behave aggressively or have medical problems (e.g., exudative dermatitis or open skin lesions) that increase the risk for exposure to blood or serous secretions.
# Adults at Risk for HBV Infection
In the United States in 2005, the highest incidence of acute hepatitis B was among adults aged 25-45 years (Figure 1). Approximately 79% of newly acquired cases of hepatitis B are associated with high-risk sexual activity or injection-drug use; other known exposures (i.e., occupational, household, travel, and health-care-related) together account for 5% of new cases, and 16% deny a specific risk factor for infection (61; CDC, unpublished data, 2001-2005).
Adults at risk for infection by sexual exposure. The most common source of HBV infection among adults in the United States is sexual contact. Heterosexual transmission accounts for approximately 39% of new HBV infections among adults (72)(73)(74). Risk for HBV transmission increases with the number of years of drug use and is associated with frequency of injection and with sharing of drug-preparation equipment (e.g., cottons, cookers, and rinse water), independent of syringe sharing (73,75).
In a study of the seroprevalence of HBV infection among IDUs admitted to drug treatment in six U.S. cities, 64% (range: 50%-81%) had serologic evidence of HBV infection, and seroprevalence increased with age (76). Studies of street-recruited IDUs (77,78) and female IDUs (79) have identified similar prevalence of HBV infection, whereas a lower prevalence (25%) was found in a study of young IDUs (aged 18-30 years) (74). Chronic HBV infection has been identified in 3.1% of IDUs in a detention setting (77) and 7.1% of IDUs with HIV coinfection (80).
Household contacts of persons with chronic HBV infection. Seroprevalence of HBV infection among susceptible household contacts of persons with chronic infection has varied, ranging from 14% to 60% (69,71,(81)(82)(83)(84)(85). The risk for infection is highest among sex partners of, and children living with, a person with chronic HBV infection in a household or extended family setting (83)(84)(85).
Developmentally disabled persons in long-term-care facilities. Developmentally disabled persons in residential and nonresidential facilities historically have had high rates of HBV infection (86,87), but the prevalence of infection has declined substantially since the implementation of routine hepatitis B vaccination in these settings (88,89). Nonetheless, because HBsAg-positive persons reside in such facilities, clients and staff continue to be at risk for infection.
Persons at risk for occupational exposure to HBV. Before hepatitis B vaccination was widely implemented, HBV infection was recognized as a common occupational hazard among persons who were exposed to blood while caring for patients or working in laboratories (90,91). Since then, routine hepatitis B vaccination of health-care workers and use of standard precautions to prevent exposure to bloodborne pathogens have made HBV infection a rare event in these populations (92)(93)(94). Since the mid-1990s, the incidence of HBV infection among health-care workers has been lower than that among the general population (94). Public safety workers with exposures to blood also might be at risk for HBV infection (95)(96)(97); however, the prevalence of HBV infection in occupational groups such as police officers, firefighters, and corrections officers generally does not differ from that in the general population when adjusted for race and age (97), and infection is associated most often with nonoccupational risk factors (97,98). No increased risk for occupationally acquired HBV infection has been documented in workers exposed infrequently to blood or body fluids (e.g., ward clerks, dietary workers, maintenance workers, housekeeping personnel, teachers, and persons employed in day care settings) (91).
Hemodialysis patients. Since the initiation of strict infection-control practices and hepatitis B vaccination, the rate of HBV infection among patients undergoing hemodialysis has declined approximately 95% (99,100). Nonetheless, repeated outbreaks of HBV infection among unvaccinated patients underscore the continued risk for infection in this population (101).
Persons with chronic liver disease. Persons with chronic liver disease are not at increased risk for HBV infection unless they have percutaneous or mucosal exposure to infectious blood or body fluids. Furthermore, studies of the outcomes of acute hepatitis B among patients with chronic liver disease provide little evidence that acute hepatitis B increases their risk for acute liver failure. However, concurrent chronic HBV infection might increase the risk for progressive chronic liver disease in HCV-infected patients (102).
Travelers to HBV-endemic regions. Short-term travelers to regions in which HBV infection is of high or intermediate endemicity (Box 2) typically are at risk for infection only through exposure to blood in medical, health-care, or disaster-relief activities; receipt of medical care that involves parenteral exposures; or sexual activity or drug use (103). Infection rates of 2%-5% per year among persons working in such regions for >6 months have been reported (104,105).
HIV-positive persons. Published data on the overall prevalence of HBV and HIV coinfection in the United States are limited. Studies of certain subgroups have identified prevalence of previous or current HBV infection of 45% in HIV-infected MSM aged 22-29 years (CDC, unpublished data, 1998-2000), 24% in adolescent HIV-infected males (106), and 43% in HIV-infected women, including 76% among HIV-infected female IDUs (79). Chronic HBV infection has been identified in 6%-14% of HIV-positive persons from Western Europe and the United States, including 9%-17% of MSM, 7%-10% of IDUs, and 4%-6% of heterosexuals (107).
The course of HBV infection can be modified in the presence of HIV, with a lower incidence of jaundice and a higher incidence of chronic HBV infection (20,108,109). Limited data also indicate that HIV-infected patients with chronic HBV infection have an increased risk for liver-related mortality and morbidity (110).
# Incidence of Acute Hepatitis B
During 1990-2005, the overall incidence of reported acute hepatitis B declined 78%, from 8.5 to 1.9 per 100,000 population (Figure 2), and the estimated number of new HBV infections, after adjusting for underreporting and asymptomatic infections, declined from approximately 232,000 to approximately 51,000 infections (CDC, unpublished data, 1990-2005). Among children and adolescents aged <19 years, incidence declined 76%, from 9.9 to 2.4 per 100,000 population, and racial/ethnic disparities in incidence were nearly eliminated for Asians/Pacific Islanders, American Indians/Alaska Natives, and Hispanics (Figure 3). Incidence also declined substantially among blacks aged >19 years during this period, from 19.7 to 4.2 per 100,000 population; however, in 2005, incidence among blacks remained nearly three times higher than that among other racial/ethnic populations. After leveling during 1999-
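As a worked check, the percentage declines cited above follow directly from the reported incidence rates:

$$\frac{8.5 - 1.9}{8.5} \approx 0.78, \qquad \frac{9.9 - 2.4}{9.9} \approx 0.76$$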
# Prevalence of HBV Infection
During 1988-1994, the overall age-adjusted prevalence of HBV infection (including previous or chronic infection) in the U.S. population was 4.9%, and the prevalence of chronic infection was 0.4% (111). Persons who have immigrated to the United States from countries in which HBV is endemic (Box 2, Figure 4) are affected disproportionately by chronic HBV infection; in particular, the majority of chronic HBV infections in the United States are among Asians/Pacific Islanders (112)(113)(114). The prevalence of chronic HBV infection among persons immigrating to the United States from Central and Southeast Asia, the Middle East, and Africa varies (range: 5%-15%) and reflects the patterns of HBV infection in the countries and regions of origin. During 1994-2003, approximately 40,000 immigrants with chronic HBV infection were admitted annually to the United States for permanent residence (115; CDC, unpublished data, 2005).
# Prophylaxis Against HBV Infection
# Hepatitis B Vaccine
Hepatitis B vaccine is available as a single-antigen formulation and also in fixed combination with other vaccines. Two single-antigen hepatitis B vaccines are licensed for use in the United States: Recombivax HB (Merck & Co., Inc.) and Engerix-B (GlaxoSmithKline Biologicals) (116,117). Vaccine antigen can be purified from the plasma of persons with chronic HBV infection or produced by recombinant DNA technology. For vaccines available in the United States, recombinant DNA technology is used to express HBsAg in yeast, which then is purified from the cells by biochemical and biophysical separation techniques (118,119). Hepatitis B vaccines licensed in the United States are formulated to contain 10-40 µg of HBsAg protein/mL. Hepatitis B vaccines produced for distribution in the United States do not contain thimerosal as a preservative or contain only a trace amount (<1.0 µg mercury/mL) from the manufacturing process (120,121).
# Hepatitis B Immune Globulin
HBIG provides passively acquired anti-HBs and temporary protection (i.e., 3-6 months) when administered in standard doses. HBIG typically is used as an adjunct to hepatitis B vaccine for postexposure immunoprophylaxis to prevent HBV infection. For nonresponders to hepatitis B vaccination, HBIG administered alone is the primary means of protection after an HBV exposure.
HBIG is prepared from the plasma of donors with high concentrations of anti-HBs. The plasma is screened to eliminate donors who are positive for HBsAg, antibodies to HIV and hepatitis C virus (HCV), and HCV RNA. In addition, proper manufacturing techniques for HBIG inactivate viruses (e.g., HBV, HCV, and HIV) from the final product (122,123). No evidence exists to indicate that HBV, HCV, or HIV ever has been transmitted by HBIG commercially available in the United States. HBIG that is commercially available in the United States does not contain thimerosal.
# Adult Vaccination Schedules and Results of Vaccination

# Preexposure Vaccination
# Vaccination of Adults
Primary vaccination consists of >3 intramuscular doses of hepatitis B vaccine (Table 2). The 3-dose vaccine series administered intramuscularly at 0, 1, and 6 months produces a protective antibody response in approximately 30%-55% of healthy adults aged <40 years after the first dose, 75% after the second dose, and >90% after the third dose (124,125). After age 40 years, the proportion of persons who have a protective antibody response after a 3-dose vaccination regimen declines below 90%, and by age 60 years, protective levels of antibody develop in only 75% of vaccinated persons (126). In addition to age, other host factors (e.g., smoking, obesity, genetic factors, and immune suppression) contribute to decreased vaccine response (127)(128)(129)(130). Alternative vaccination schedules (e.g., 0, 1, and 4 months or 0, 2, and 4 months) have been demonstrated to elicit dose-specific and final rates of seroprotection similar to those obtained on a 0-, 1-, 6-month schedule (131).
The combined hepatitis A-hepatitis B vaccine (Twinrix) is indicated for vaccination of persons aged >18 years with risk factors for both hepatitis A and hepatitis B. The dosage of the hepatitis A component in the combined vaccine is lower than that in the single-antigen hepatitis A vaccine, allowing it to be administered in a 3-dose schedule instead of the 2-dose schedule used for the single-antigen vaccine.
# Nonstandard Vaccine Schedules
No apparent effect on immunogenicity has been documented when minimum spacing of doses (i.e., 4 weeks between doses 1 and 2, 8 weeks between doses 2 and 3, and 16 weeks between doses 1 and 3) is not achieved precisely. Increasing the interval between the first 2 doses has little effect on immunogenicity or final antibody concentration (132)(133)(134). The third dose confers the maximum level of seroprotection but acts primarily as a booster and appears to provide optimal long-term protection (135). Longer intervals between the last 2 doses result in higher final antibody levels but might increase the risk for acquisition of HBV infection among persons who have a delayed response to vaccination. No differences in immunogenicity are observed when vaccines from different manufacturers are used to complete the vaccine series.
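For illustration, the minimum-spacing rule above can be expressed as a simple date check; the following sketch uses hypothetical helper names and is not part of the ACIP statement:

```python
# Minimal sketch of the minimum-spacing check described above; helper names
# are hypothetical and not part of the ACIP statement.
from datetime import date

MIN_D1_D2 = 28   # 4 weeks between doses 1 and 2
MIN_D2_D3 = 56   # 8 weeks between doses 2 and 3
MIN_D1_D3 = 112  # 16 weeks between doses 1 and 3

def doses_validly_spaced(d1: date, d2: date, d3: date) -> bool:
    return ((d2 - d1).days >= MIN_D1_D2
            and (d3 - d2).days >= MIN_D2_D3
            and (d3 - d1).days >= MIN_D1_D3)

# The standard 0-, 1-, 6-month schedule satisfies all three minimums.
print(doses_validly_spaced(date(2007, 1, 1), date(2007, 2, 1), date(2007, 7, 1)))  # True
```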
# Response to Revaccination
Although serologic testing for immunity is not necessary after routine vaccination of adults, postvaccination testing is recommended for persons whose subsequent clinical management depends on knowledge of their immune status, including certain health-care and public safety workers; chronic hemodialysis patients, HIV-infected persons, and other immunocompromised persons; and sex or needle-sharing partners of HBsAg-positive persons (Appendix A). Of persons who did not respond to a primary 3-dose vaccine series with anti-HBs concentrations of >10 mIU/mL, 25%-50% responded to an additional vaccine dose, and 44%-100% responded to a 3-dose revaccination series (136)(137)(138)(139)(140)(141). Better response to revaccination occurs in persons who have measurable but low (<10 mIU/mL) levels of antibody after the initial series (136,137). Increased vaccine doses (e.g., double the standard dose) were demonstrated to enhance revaccination response rates in one study (140) but not in another (138). Intradermal vaccination has been reported to be immunogenic in persons who did not respond to intramuscular vaccination (142,143); however, intradermal vaccination is not a route of administration indicated in the manufacturers' package labeling. Persons who do not have protective levels of anti-HBs 1-2 months after revaccination either are primary nonresponders or are infected with HBV. Genetic factors might contribute to nonresponse to hepatitis B vaccination (130,137).
[Table 2 footnotes: Dialysis formulation administered on a 3-dose schedule at 0, 1, and 6 months. †† Two 1.0-mL doses administered in 1 or 2 injections on a 4-dose schedule at 0, 1, 2, and 6 months. §§ Not applicable.]
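The testing-and-revaccination sequence described above can be summarized as a short decision flow; the following is a minimal sketch with hypothetical names, not a clinical protocol:

```python
# Minimal sketch of the postvaccination testing and revaccination sequence
# described above; function and constant names are hypothetical.
from typing import Optional

PROTECTIVE = 10.0  # anti-HBs, mIU/mL

def revaccination_status(anti_hbs_after_primary: float,
                         anti_hbs_after_revaccination: Optional[float]) -> str:
    if anti_hbs_after_primary >= PROTECTIVE:
        return "responder to primary series; no further doses needed"
    if anti_hbs_after_revaccination is None:
        return "revaccinate and retest anti-HBs 1-2 months after the last dose"
    if anti_hbs_after_revaccination >= PROTECTIVE:
        return "responder after revaccination"
    return "primary nonresponder or HBV-infected"
```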
# Groups Requiring Different Vaccination Doses or Schedules
Compared with immunocompetent adults, hemodialysis patients are less likely to have protective levels of antibody after vaccination with standard vaccine dosages; protective levels of antibody developed in 67%-86% (median: 64%) of adult hemodialysis patients who received 3-4 doses of either vaccine in various dosages and schedules (100). Higher seroprotection rates have been identified in patients with chronic renal failure, particularly those with mild or moderate renal failure, who were vaccinated before becoming dialysis dependent. After vaccination with a 4-dose series, the seroprotection rate was higher among adult predialysis patients with serum creatinine levels of <4.0 mg/dL than among patients with levels of >4.0 mg/dL, 88% of whom were dialysis patients (144).
Humoral response to hepatitis B vaccination also is reduced in other immunocompromised persons (e.g., HIV-infected persons, hematopoietic stem-cell transplant recipients, and patients undergoing chemotherapy) (145)(146)(147). Modified dosing regimens, including doubling the standard antigen dose or administering additional doses, might increase response rates (148)(149)(150). However, limited data regarding response to these alternative vaccination schedules are available.
# Immune Memory
Anti-HBs is the only easily measurable correlate of vaccine-induced protection. Immunocompetent persons who achieve anti-HBs concentrations of >10 mIU/mL after preexposure vaccination have nearly complete protection against both acute disease and chronic infection, even if anti-HBs concentrations decline subsequently to <10 mIU/mL (151)(152)(153)(154). Although immunogenicity is lower among immunocompromised persons, those who achieve and maintain a protective antibody response before exposure to HBV have a high level of protection from infection (155,156).
After primary immunization with hepatitis B vaccine, anti-HBs levels decline rapidly within the first year and more slowly thereafter. Among young adults who respond to a primary vaccine series with antibody concentrations of >10 mIU/mL, 17%-50% have low or undetectable concentrations of anti-HBs (reflecting anti-HBs loss) 10-15 years after vaccination (155)(156)(157). In the absence of exposure to HBV, the persistence of detectable anti-HBs after vaccination depends on the concentration of postvaccination antibodies (158).
Even when anti-HBs concentrations decline to <10 mIU/mL, nearly all vaccinated persons remain protected against HBV infection. The mechanism for continued vaccine-induced protection is thought to be the preservation of immune memory through selective expansion and differentiation of clones of antigen-specific B and T lymphocytes (159). Persistence of vaccine-induced immune memory among persons who responded to a primary adult vaccine series 4-23 years previously but then had anti-HBs concentrations of <10 mIU/mL has been demonstrated by an anamnestic increase in anti-HBs concentrations in 74%-100% of these persons 2-4 weeks after administration of an additional vaccine dose and by antigen-specific B and T cell proliferation (160). Although direct measurement of immune memory is not yet possible, these data indicate that a high proportion of vaccinees retain immune memory and would have an anti-HBs response upon exposure to HBV.
Population-based studies of highly vaccinated populations have demonstrated elimination of new HBV infections for up to 2 decades after hepatitis B immunization programs were initiated (161)(162)(163). Breakthrough infections (detected by the presence of anti-HBc or HBV DNA) have been documented in a limited percentage of vaccinated persons (159,164), but these infections typically are transient and asymptomatic; breakthrough infections resulting in chronic HBV infection have been documented only rarely among infants born to HBsAg-positive mothers (165) and have not been observed among immunocompetent adults.
Limited data are available on the duration of immune memory after hepatitis B vaccination in immunocompromised persons (e.g., HIV-infected patients, dialysis patients, patients undergoing chemotherapy, or hematopoietic stem-cell transplant patients). No clinically significant HBV infections have been documented among immunocompromised persons who maintain protective levels of anti-HBs. In studies of long-term protection among HIV-infected persons, breakthrough infections occurring after a decline in anti-HBs concentrations to <10 mIU/mL have been transient and asymptomatic (155).
However, among hemodialysis patients who responded to the vaccine, clinically significant HBV infection has been documented in persons who have not maintained anti-HBs concentrations of >10 mIU/mL (166).
# Postexposure Prophylaxis
Both passive-active postexposure prophylaxis (PEP) using HBIG and hepatitis B vaccine and active PEP using hepatitis B vaccine alone are highly effective in preventing infection after exposure to HBV (167)(168)(169)(170). HBIG alone has also been demonstrated to be effective in preventing HBV transmission (68,(171)(172)(173), but with the availability of hepatitis B vaccine, HBIG typically is used as an adjunct to vaccination. Guidelines for PEP for adults with occupational (174) and nonoccupational exposures (Appendix B) to HBV have been developed.
The major determinant of the effectiveness of PEP is early administration of the initial dose of vaccine. The effectiveness of PEP diminishes the longer after exposure it is initiated (27,175,176). Studies are limited on the maximum interval after exposure during which PEP is effective, but the interval is likely <7 days for needlestick exposures (171,172,177) and <14 days for sexual exposures (68,154,168,170,173).
Substantial evidence suggests that adults who respond to hepatitis B vaccination are protected from chronic HBV infection for at least 20 years even if vaccinees lack detectable anti-HBs at the time of an exposure (151)(152)(153). For this reason, immunocompetent persons who have had postvaccination testing and are known to have responded to hepatitis B vaccination with anti-HBs concentrations of >10 mIU/mL do not require additional passive or active immunization after an HBV exposure and do not need further periodic testing to assess anti-HBs concentrations.
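The PEP considerations above amount to a short decision rule; the following is a minimal illustrative sketch (hypothetical names and messages, not a clinical protocol; occupational exposures follow reference 174 and nonoccupational exposures Appendix B):

```python
# Minimal sketch of the PEP decision described above; illustrative only.
def pep_advice(documented_responder: bool,
               days_since_exposure: int,
               sexual_exposure: bool) -> str:
    if documented_responder:
        # Known responders (anti-HBs >10 mIU/mL on postvaccination testing)
        # need no additional active or passive immunization.
        return "no additional active or passive immunization indicated"
    # Likely effective windows noted above: <7 days (needlestick), <14 days (sexual).
    window = 14 if sexual_exposure else 7
    if days_since_exposure < window:
        return "initiate PEP promptly (hepatitis B vaccine, with HBIG as indicated)"
    return "beyond the likely effective window; evaluate individually"
```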
# Vaccine Safety
Hepatitis B vaccines have been demonstrated to be safe when administered to infants, children, adolescents, and adults (178). Since 1982, an estimated 70 million adolescents and adults and 50 million infants and children in the United States have received >1 dose of hepatitis B vaccine (CDC, unpublished data, 2004).
# Vaccine Reactogenicity
The most frequently reported side effects in persons receiving hepatitis B vaccine are pain at the injection site (3%-29%) and temperature of >99.9°F (>37.7°C) (1%-6%) (124,125).
However, in placebo-controlled studies, these side effects were reported no more frequently among persons receiving hepatitis B vaccine than among persons receiving placebo (179).
# Adverse Events
CDC and FDA continually assess the safety of hepatitis B vaccine and other vaccines through ongoing monitoring of data from the Vaccine Safety Datalink (VSD) project, the Vaccine Adverse Events Reporting System (VAERS), and other surveillance systems. A causal association has been established between receipt of hepatitis B vaccine and anaphylaxis (178). On the basis of VSD data, the estimated incidence of anaphylaxis among children and adolescents who received hepatitis B vaccine is one case per 1.1 million vaccine doses distributed (95% confidence interval = 0.1-3.9) (180).
Early postlicensure surveillance of adverse events suggested a possible association between Guillain-Barré syndrome (GBS) and receipt of the first dose of plasma-derived hepatitis B vaccine among U.S. adults (181). However, in a subsequent analysis of GBS cases reported to CDC, FDA, and vaccine manufacturers, among an estimated 2.5 million adults who received >1 dose of recombinant hepatitis B vaccine during 1986-1990, the rate of GBS that occurred after hepatitis B vaccination did not exceed the background rate among unvaccinated persons (CDC, unpublished data, 1992). An Institute of Medicine review concluded that evidence was insufficient to reject or accept a causal association between GBS and hepatitis B vaccination (178,182,183).
One retrospective case-control study (184,185) reported an association between hepatitis B vaccine and multiple sclerosis (MS) among adults. However, multiple studies (186)(187)(188)(189) have demonstrated no such association. Reviews by scientific panels have favored rejection of a causal association between hepatitis B vaccination and MS (190,191).
In rare instances, chronic illnesses have been reported after hepatitis B vaccination, including chronic fatigue syndrome (192), neurologic disorders (e.g., leukoencephalitis, optic neuritis, and transverse myelitis) (193)(194)(195), rheumatoid arthritis (196,197), type 1 diabetes (198), and autoimmune disease (199). However, no evidence of a causal association between these conditions or other chronic illnesses and hepatitis B vaccine has been demonstrated (183,190,(200)(201)(202)(203).
Reported episodes of alopecia (hair loss) after rechallenge with hepatitis B vaccine suggest that vaccination might, in rare cases, trigger episodes of alopecia (204). However, a population-based study determined no statistically significant association between alopecia and hepatitis B vaccine (205).
# Contraindications and Precautions
Hepatitis B vaccination is contraindicated for persons with a history of hypersensitivity to yeast or any vaccine component (206)(207)(208)(209). Despite a theoretic risk for allergic reaction to vaccination in persons with allergy to Saccharomyces cerevisiae (baker's yeast), no evidence exists to document adverse reactions after vaccination of persons with a history of yeast allergy.
Persons with a history of serious adverse events (e.g., anaphylaxis) after receipt of hepatitis B vaccine should not receive additional doses. As with other vaccines, vaccination of persons with moderate or severe acute illness, with or without fever, should be deferred until illness resolves (210). Vaccination is not contraindicated in persons with a history of MS, GBS, autoimmune disease (e.g., systemic lupus erythematosus or rheumatoid arthritis), or other chronic diseases.
Pregnancy is not a contraindication to vaccination. Limited data suggest that developing fetuses are not at risk for adverse events when hepatitis B vaccine is administered to pregnant women (211). Available vaccines contain noninfectious HBsAg and should cause no risk of infection to the fetus.
# Implementation Barriers and Rationale for New Recommendations
Soon after hepatitis B vaccine was licensed in 1982, ACIP recommended vaccination for adults at increased risk for HBV infection (212). However, the recommendations were not widely implemented, and coverage among adults at risk for HBV infection remained low. By the early 1990s, the difficulty in vaccinating adults at risk for HBV infection and the substantial burden of HBV-related disease resulting from infections acquired during childhood indicated that additional hepatitis B vaccination strategies were needed (213,214). In 1991, recommendations for vaccination of unvaccinated adults at high risk for HBV infection became part of the national strategy adopted by ACIP and professional medical organizations to eliminate HBV transmission in the United States (3). However, hepatitis B vaccine still is not offered routinely in medical settings serving adults, and a substantial number of adults at risk for HBV infection remain unvaccinated.
Multiple factors contribute to low hepatitis B vaccination coverage among adults at risk. In contrast to vaccination of children, no national program exists to support vaccine purchase and infrastructure for vaccine delivery to uninsured and underinsured adults. Reimbursement mechanisms for vaccination of adults with health insurance also are not widely used. In addition, certain patients and health-care providers are reluctant to discuss risk behaviors (215), and providers might not make hepatitis B vaccination a priority compared with other clinical care services.
One strategy demonstrated to be effective at increasing vaccination coverage among adults at risk for HBV infection is to offer vaccination to all adults as part of routine prevention services in settings in which a high proportion of adults have HBV risk factors (10,(216)(217)(218)(219)(220)(221)(222)(223). In STD and HIV treatment facilities, health-care settings serving IDUs, and health-care settings targeting services to MSM, nearly all patients have behavioral risk factors for HBV infection. Furthermore, a high proportion of persons receiving health care in HIV testing facilities or correctional facilities report sexual and drug-use risk behaviors (224,225). Therefore, providing hepatitis B vaccination in these settings offers an efficient and effective way to reach adults at highest risk. During 2001-2004, in a study of 760 adults with reported acute hepatitis B who participated in CDC's Sentinel Counties Study of Viral Hepatitis, 39% reported a history of STD treatment, 40% reported a history of incarceration, and 22% reported a history of drug treatment; overall, 61% would have had at least one opportunity to be vaccinated either during STD or drug treatment or at a correctional facility (226).
Demonstration projects that supported the purchase of hepatitis B vaccine and its administration in settings in which a high proportion of adults have HBV risk factors have established the feasibility of providing the vaccine as part of comprehensive STD, HIV, and hepatitis prevention services (10). When clients were offered hepatitis B vaccination in such settings, first-dose acceptance rates of 70%-85% were achieved (10,216,223,227). These demonstration projects have identified the components of successful adult hepatitis B vaccination programs (Box 3). In addition, "one-stop" delivery of integrated prevention services was preferred by the majority of patients and typically resulted in enhanced delivery of all services (227). Return visits for second and third doses of hepatitis B vaccine also provide opportunities for patients to receive other STD/HIV-related services (e.g., test results, additional counseling, and referral). Multiple studies have established the cost-effectiveness of providing hepatitis B vaccination at STD/HIV counseling and testing sites, correctional institutions, drug-abuse treatment centers, and other settings serving adults at risk for HBV infection (228)(229)(230)(231).
Universal vaccination of adults in settings in which a high proportion of persons have HBV risk factors will reach a substantial proportion of all adults at risk for HBV infection. However, not all adults with risk factors for HBV infection visit these settings. For example, an estimated 80%-95% of STDs are diagnosed in settings other than STD clinics (232,233). Therefore, primary care and clinical preventive service providers (e.g., physicians' offices, community health centers, family planning clinics, liver disease clinics, and travel clinics) also should provide hepatitis B vaccine whenever indicated or requested as part of regular preventive care. Limited data are available regarding best practices in primary care and specialty medical settings to achieve high vaccination coverage among adults at risk for HBV infection. In one project in which hepatitis B vaccine was made available free of charge to primary care clients in community clinics, low vaccination coverage rates were observed, compared with rates at other venues (10). This finding suggests that provision of free vaccine alone might not ensure increased use of hepatitis B vaccine and that other implementation strategies (e.g., education and training of clinicians and standing orders) are needed to prompt providers to offer vaccination to adults.
In primary care settings, targeting vaccination to persons at risk is an efficient approach to preventing HBV infection. During 2001-2005, among persons with acute hepatitis B who participated in CDC's Sentinel Counties Study of Viral Hepatitis, 84% reported risk behaviors or characteristics, either during the incubation period (i.e., 6 weeks-6 months) or during their lifetimes, that placed them in a group for which hepatitis B vaccination was recommended (CDC, unpublished data, 2001-2005). Providers in primary care settings can ascertain patients' risks for HBV infection and identify candidates for hepatitis B vaccination during routine patient visits. Assessment of patients' sex- and drug-related risk factors is recommended by the U.S. Preventive Services Task Force and the American Medical Association (AMA) (234,235) and has the ancillary benefit of identifying candidates for other prevention interventions (e.g., screening for HIV infection and other STDs and drug-abuse treatment).
However, risk-targeted approaches can miss persons in need of prevention services. Patients might be reluctant to report sex- and drug-related behaviors, particularly when these behaviors are not perceived as relevant to the clinical encounter. In addition, despite recommendations of the U.S. Preventive Services Task Force and AMA, providers might be reluctant to inquire about behavioral risk factors. For example, surveys of physicians and patients conducted during 1995-1999 indicated that fewer than half of patients were asked about sexual behaviors (236)(237)(238). Health-care providers should educate all patients about the health benefits of hepatitis B vaccination, including risk factors for HBV infection and the importance of vaccination for persons who engage in certain risk behaviors. This information might stimulate patients to request vaccination from their primary care providers, without requiring them to acknowledge a specific risk factor.
Another possible strategy is to offer vaccination to all adults in age groups with the highest incidence of infection as part of routine medical care (Figure 1). An age-based approach might simplify vaccination-related decision-making by practitioners and remove the stigma associated with disclosure of risk behaviors. Other adult vaccines, including those for influenza and pneumococcal disease, are delivered on age-based schedules. However, the effectiveness of age-based strategies in increasing hepatitis B vaccination coverage among adults at risk is unknown. In addition, age-based strategies for adult vaccination would be substantially more costly than risk-targeted approaches (CDC, unpublished data, 2005).
Lack of funding for vaccine and its administration is a major barrier to provision of hepatitis B vaccine to adults. Hepatitis B vaccine often is a reimbursable charge in health-care settings that bill insurance or Medicaid for services, and surveys of public and private insurers indicate high rates of coverage for hepatitis B vaccination (239,240). In one study, an estimated 74% of adults aged 18-49 years with risk factors for HBV infection had health insurance coverage (241). However, public clinics might not have systems in place to bill for vaccination services, and reimbursement of private providers might be inadequate to cover the purchase and administration of vaccine. Other adults either lack private insurance or are not eligible for reimbursement by Medicaid. Although the Vaccines for Children program provides federally funded vaccine and administration costs for vaccination of uninsured and underinsured children and youth aged <19 years, no similar program supports adult vaccination.
Certain adult hepatitis B vaccination programs have been successful at identifying federal, state, or local funds to provide free or low-cost hepatitis B vaccination to uninsured or underinsured adults. For example, the Immunization Grant Program, created under Section 317 of the Public Health Service Act, provides funding to state, local, and territorial public health agencies for vaccine purchase and vaccination-program operation (242). Section 317 funds can be used to purchase childhood and adult vaccines, including adult hepatitis B vaccine. However, lack of adequate funding constrains efforts to increase vaccination coverage among adults. To maximize available resources for hepatitis B vaccination, public and private health-care providers should become familiar with insurance billing and reimbursement mechanisms that can be used for hepatitis B vaccine. AMA billing and reimbursement guidelines are available at /ama1/pub/upload/mm/36/ama_hep_coding_trifo.pdf.
Although HBV infections are expected to decline as a result of universal childhood immunization and increased vaccination of adults at risk, an estimated 1.25 million persons in the United States are living with chronic HBV infection and require essential prevention services and medical management. In particular, Asians/Pacific Islanders in the United States have a high prevalence of chronic HBV infection, representing a major health disparity. Persons with chronic HBV infection often are unaware of their infection status or do not receive needed care. Few programs have been implemented to identify HBsAg-positive persons, provide or refer these persons for appropriate medical management, and provide vaccination to their contacts (243). During delivery of hepatitis B vaccination and provision of other preventive services, health-care providers have opportunities to identify persons with chronic HBV infection, refer them for counseling and management, and ensure that their susceptible contacts receive vaccination. Guidelines to identify and manage HBsAg-positive persons have been developed (Appendix C).
Implementation of the recommendations and strategies outlined in this report and the companion ACIP recommendations for infants, children, and adolescents (11) should lead ultimately to the elimination of HBV transmission in the United States. New information will have implications for this effort, and adjustments and changes are expected to occur.
# Recommendations and Implementation Strategies for Hepatitis B Vaccination of Adults

# Recommendations
- Hepatitis B vaccination is recommended for all unvaccinated adults at risk for HBV infection and for all adults requesting protection from HBV infection (Box 4). Acknowledgment of a specific risk factor should not be a requirement for vaccination.
- In settings in which a high proportion of adults are likely to have risk factors for HBV infection (e.g., HIV prevention programs, and homeless shelters), hepatitis B vaccination should be offered to all unvaccinated adults.
- In primary care and specialty medical settings (e.g., physician's offices, family planning clinics, community health centers, liver disease clinics, and travel clinics), providers should implement standing orders to identify adults recommended for hepatitis B vaccination and administer vaccination as part of routine services. To ensure vaccination of persons at risk for HBV infection, health-care providers should
  - provide information to all adults regarding the health benefits of hepatitis B vaccination, including the risk factors for HBV infection and persons for whom vaccination is recommended;
  - help all adults assess their need for vaccination by obtaining a history that emphasizes risks for sexual transmission and percutaneous or mucosal exposure to blood;
  - administer hepatitis B vaccine to adults who report risk factors for HBV infection; and
  - provide hepatitis B vaccine to all adults requesting protection from HBV infection without requiring acknowledgment of a specific risk factor.
- Occupational health programs should
  - identify all staff whose work-related activities involve exposure to blood or other potentially infectious body fluids in a health-care, laboratory, public safety, or institutional setting (including employees, students, contractors, attending clinicians, emergency medical technicians, paramedics, and volunteers);
  - provide education to staff to encourage vaccination;
# Interrupted Vaccine Schedules
- When the hepatitis B vaccine schedule is interrupted, the vaccine series does not need to be restarted.
- If the series is interrupted after the first dose, the second dose should be administered as soon as possible, and the second and third doses should be separated by an interval of at least 8 weeks.
- If only the third dose has been delayed, it should be administered as soon as possible.
# Minimum Dosing Intervals and Management of Persons Who Were Vaccinated Incorrectly
- The third dose of vaccine must be administered at least 8 weeks after the second dose and should follow the first dose by at least 16 weeks; the minimum interval between the first and second doses is 4 weeks.
- Inadequate doses of hepatitis B vaccine (see Table 2) or doses received after a shorter-than-recommended dosing interval should be readministered, using the correct dosage or schedule.
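These minimum-interval rules are mechanical enough to express in code. The sketch below is purely illustrative (the function name, messages, and example dates are hypothetical, not part of the recommendations); it flags doses given at shorter-than-minimum intervals, which per the guidance above should be readministered:

```python
from datetime import date

# Minimum intervals from the text above, in days (weeks x 7).
MIN_DOSE1_TO_DOSE2 = 4 * 7    # second dose >=4 weeks after first
MIN_DOSE2_TO_DOSE3 = 8 * 7    # third dose >=8 weeks after second
MIN_DOSE1_TO_DOSE3 = 16 * 7   # third dose >=16 weeks after first

def doses_to_readminister(d1: date, d2: date, d3: date) -> list[str]:
    """Flag doses given at shorter-than-minimum intervals.

    Only minimum intervals are checked: an interrupted (delayed) series
    is not restarted, but a dose given too early should be repeated.
    """
    problems = []
    if (d2 - d1).days < MIN_DOSE1_TO_DOSE2:
        problems.append("dose 2 given <4 weeks after dose 1")
    if (d3 - d2).days < MIN_DOSE2_TO_DOSE3:
        problems.append("dose 3 given <8 weeks after dose 2")
    if (d3 - d1).days < MIN_DOSE1_TO_DOSE3:
        problems.append("dose 3 given <16 weeks after dose 1")
    return problems

# Doses at 0, 4, and 13 weeks satisfy the 4- and 8-week rules but
# violate the 16-week rule, so only dose 3 is flagged.
print(doses_to_readminister(date(2006, 1, 1), date(2006, 1, 29), date(2006, 4, 2)))
```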
# Accelerated Vaccine Schedules
- The Food and Drug Administration has not approved accelerated schedules in which hepatitis B vaccine is administered more than once in 1 month. If an accelerated schedule (e.g., doses at 0, 7, and 14 days) is used, the patient also should receive a booster dose at least 6 months after the start of the series to promote long-term immunity.
# Hemodialysis Patients and Other Immunocompromised Persons
- Hepatitis B vaccination is recommended for pre-end-stage renal disease patients before they become dialysis dependent and for peritoneal and home dialysis patients because they might require in-center hemodialysis.
- Higher hepatitis B vaccine doses are recommended for adult dialysis patients and other immunocompromised persons (see Table 2).

# Counseling Messages for HBsAg-Positive Persons

- Sex partners of HBsAg-positive persons should be counseled to use methods (e.g., condoms) to protect themselves from sexual exposure to infectious body fluids (e.g., semen or vaginal secretions) unless they have been demonstrated to be immune after vaccination (i.e., antibody to HBsAg concentrations of ≥10 mIU/mL) or previously infected (anti-HBc positive). Partners should be made aware that use of condoms and other prevention methods might reduce their risks for human immunodeficiency virus (HIV) and other sexually transmitted diseases (STDs).
- To prevent or reduce the risk for transmission to others, HBsAg-positive persons should be advised concerning the risks for
  - perinatal transmission to infants born to HBsAg-positive women and the need for such infants to receive hepatitis B vaccine beginning at birth (1) and
  - transmission to household, sex, and needle-sharing contacts and the need for such contacts to receive hepatitis B vaccine.
- HBsAg-positive persons should also be advised to
  - notify their sex partners about their status;
  - use methods (e.g., condoms) to protect nonimmune sex partners from acquiring HBV infection from sexual activity until the sex partners can be vaccinated and their immunity documented (persons should be made aware that use of condoms and other prevention methods might reduce their risks for HIV and other STDs);
  - cover cuts and skin lesions to prevent spread through infectious secretions or blood;
  - refrain from donating blood, plasma, tissue, or semen (organs may be donated to HBV-immune or chronically infected persons needing a transplant; decisions about organ donation should be made on an individual basis); and
  - refrain from sharing household articles (e.g., toothbrushes, razors, or personal injection equipment) that could become contaminated with blood.
- To protect the liver from further harm, HBsAg-positive persons should be advised to
  - avoid or limit alcohol consumption because of the effects of alcohol on the liver;
  - refrain from taking any new medicines, including over-the-counter and herbal medicines, without consulting with their health-care provider; and
  - obtain vaccination against hepatitis A if chronic liver disease is present (2).
- When seeking medical or dental care, HBsAg-positive persons should be advised to inform those responsible for their care of their HBsAg status so that they can be evaluated and their care managed appropriately.
- Other counseling messages include the following:
  - HBV is not spread by breastfeeding, kissing, hugging, coughing, ingesting food or water, sharing eating utensils or drinking glasses, or casual contact.
Revisions to the January 28, 2016, version of the Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents include key updates to several sections. Significant updates are highlighted throughout the document.
- Tenofovir alafenamide/emtricitabine (TAF/FTC) was added as a 2-NRTI option in several Recommended and Alternative regimens, as noted in Table 6 of the guidelines. The addition of TAF/FTC to these recommendations is based on data from comparative trials demonstrating that TAF-containing regimens are as effective in achieving or maintaining virologic suppression as tenofovir disoproxil fumarate (TDF)-containing regimens, with more favorable effects on markers of bone and renal health.
- In the What to Start section, the evidence quality rating "II" was expanded to include "relative bioavailability/bioequivalence studies or regimen comparisons from randomized switch studies." This evidence rating was broadened because not all recommended regimens were evaluated in randomized, controlled trials in antiretroviral therapy (ART)-naive patients. The Panel on Antiretroviral Guidelines for Adults and Adolescents (the Panel) based its recommendations for some regimens on data from bioequivalence or relative bioavailability studies, or on results extrapolated from randomized "switch" studies that evaluated a drug's or regimen's ability to maintain virologic suppression in patients whose HIV was suppressed on a previous regimen. Guidance for clinicians on choosing between abacavir (ABC)-, TAF-, and TDF-containing regimens was added to What to Start.
- The lopinavir/ritonavir (LPV/r) plus 2-NRTI regimen was removed from the list of Other regimens because regimens containing this protease inhibitor (PI) combination have a larger pill burden and greater toxicity than other currently available options.
# Regimen Switching
- Based on the most current data, this section was simplified to focus on switch strategies for virologically suppressed patients. The strategies are categorized as Strategies with Good Supporting Evidence, Strategies Under Evaluation, and Strategies Not Recommended.
# HIV-Infected Women
- The Panel emphasizes that ART is recommended for all HIV-infected patients, including all HIV-infected women.
- The Panel also stresses the importance of early treatment for HIV-infected women during pregnancy and continuation of ART after pregnancy.
- This section was updated to include new data on interactions between antiretroviral (ARV) drugs and hormonal contraceptives.
- As rifamycins are potent inducers of P-glycoprotein (P-gp), and TAF is a P-gp substrate, coadministration of TAF and rifamycins is not recommended.
# Additional Updates
Minor revisions were made to the following sections:
- Laboratory Testing for Initial Assessment and Monitoring of HIV-Infected Patients on Antiretroviral Therapy
- Drug Resistance Testing
- Adverse Effects of Antiretroviral Agents and Tables 14 and 15
- Monthly Average Wholesale Price of Commonly Used Antiretroviral Drugs (Table 16)
- Drug Interaction Tables 18, 19a-e, and 20b
- Drug Characteristics Tables (Appendix B, Tables 1-7)
These guidelines represent current knowledge regarding the use of ARVs. Because the science of HIV evolves rapidly, the availability of new agents and new clinical data may change therapeutic options and preferences. Information included in these guidelines may not always be consistent with approved labeling for the particular drugs or indications, and the use of the terms "safe" and "effective" may not be synonymous with the Food and Drug Administration (FDA)-defined legal standards for drug approval. The Panel frequently updates the guidelines (current and archived versions of the guidelines are available on the AIDSinfo website). However, the guidelines cannot always be updated apace with the rapid evolution of new data and cannot offer guidance on care for all patients. Patient management decisions should be based on clinical judgment and attention to unique patient circumstances.
The Panel recognizes the importance of clinical research in generating evidence to address unanswered questions related to the optimal safety and efficacy of ART, and encourages both the development of protocols and patient participation in well-designed, Institutional Review Board (IRB)-approved clinical trials.
# Basis for Recommendations
Recommendations in these guidelines are based upon scientific evidence and expert opinion. Each recommended statement includes a letter (A, B, or C) that represents the strength of the recommendation and a Roman numeral (I, II, or III) that represents the quality of the evidence that supports the recommendation (see Table 2).
# Table 2. Rating Scheme for Recommendations
# HIV Expertise in Clinical Care
Several studies have demonstrated that overall outcomes in HIV-infected patients are better when care is delivered by clinicians with HIV expertise (e.g., clinicians who care for a large panel of HIV-infected patients), reflecting the complexity of HIV infection and its treatment. Appropriate training, continuing education, and clinical experience are all components of optimal care. Providers who do not have this requisite training and experience should consult HIV experts when needed.
# Baseline Evaluation (Last updated May 1, 2014; last reviewed May 1, 2014)
Every HIV-infected patient entering into care should have a complete medical history, physical examination, and laboratory evaluation and should be counseled regarding the implications of HIV infection. The goals of the initial evaluation are to confirm the diagnosis of HIV infection, obtain appropriate baseline historical and laboratory data, ensure patient understanding about HIV infection and its transmission, and to initiate care as recommended in HIV primary care guidelines 1 and guidelines for prevention and treatment of HIV-associated opportunistic infections. 2 The initial evaluation also should include introductory discussion on the benefits of antiretroviral therapy (ART) for the patient's health and to prevent HIV transmission. Baseline information then can be used to define management goals and plans. In the case of previously treated patients who present for an initial evaluation with a new health care provider, it is critical to obtain a complete antiretroviral (ARV) history (including drug-resistance testing results, if available), preferably through the review of past medical records. Newly diagnosed patients should also be asked about any prior use of ARV agents for prevention of HIV infection.
The following laboratory tests performed during initial patient visits can be used to stage HIV disease and to assist in the selection of ARV drug regimens:
- HIV antibody testing (if prior documentation is not available or if HIV RNA is below the assay's limit of detection) (AI);
- CD4 T-cell count (CD4 count) (AI);
- Plasma HIV RNA (viral load) (AI);
- Complete blood count, chemistry profile, transaminase levels, blood urea nitrogen (BUN) and creatinine, urinalysis, and serologies for hepatitis A, B, and C viruses (AIII);
- Fasting blood glucose and serum lipids (AIII); and
- Genotypic resistance testing at entry into care, regardless of whether ART will be initiated immediately (AII). For patients who have HIV RNA levels <500 to 1,000 copies/mL, viral amplification for resistance testing may not always be successful (BII).
In addition, other tests (including screening tests for sexually transmitted infections and tests for determining the risk of opportunistic infections and need for prophylaxis) should be performed as recommended in HIV primary care and opportunistic infections guidelines. 1,2 Patients living with HIV infection often must cope with many social, psychiatric, and medical issues that are best addressed through a patient-centered, multi-disciplinary approach to the disease. The baseline evaluation should include an evaluation of the patient's readiness for ART, including an assessment of high-risk behaviors, substance abuse, social support, mental illness, comorbidities, economic factors (e.g., unstable housing), medical insurance status and adequacy of coverage, and other factors that are known to impair adherence to ART and increase the risk of HIV transmission. Once evaluated, these factors should be managed accordingly.
The baseline evaluation should also include a discussion of risk reduction and disclosure to sexual and/or needle-sharing partners, especially with untreated patients who are still at high risk of HIV transmission.

# Plasma HIV-1 RNA (Viral Load) and CD4 Count Monitoring

HIV RNA (viral load) and CD4 T lymphocyte (CD4) cell count are the two surrogate markers of antiretroviral treatment (ART) responses and HIV disease progression that have been used for decades to manage and monitor HIV infection.
Viral load is a marker of response to ART. A patient's pre-ART viral load level and the magnitude of viral load decline after initiation of ART provide prognostic information about the probability of disease progression. 1 The key goal of ART is to achieve and maintain durable viral suppression. Thus, the most important use of the viral load is to monitor the effectiveness of therapy after initiation of ART.
Measurement of CD4 count is particularly useful before initiation of ART. The CD4 cell count provides information on the overall immune function of an HIV-infected patient. The measurement is critical in establishing thresholds for the initiation and discontinuation of opportunistic infection (OI) prophylaxis and in assessing the urgency to initiate ART.
The management of HIV-infected patients has changed substantially with the availability of newer, more potent, and less toxic antiretroviral (ARV) agents. In the United States, ART is now recommended for all HIV-infected patients regardless of their viral load or CD4 count. In the past, clinical practice, which was supported by treatment guidelines, was generally to monitor both CD4 cell count and viral load concurrently. However, because most HIV-infected patients in care now receive ART, the rationale for frequent CD4 monitoring is weaker. The roles and usefulness of these two tests in clinical practice are discussed in the following sections.
# Plasma HIV-1 RNA (Viral Load) Monitoring
Viral load is the most important indicator of initial and sustained response to ART (AI) and should be measured in all HIV-infected patients at entry into care (AIII), at initiation of therapy (AIII), and on a regular basis thereafter. For those patients who choose to delay therapy, repeat viral load testing while not on ART is optional (CIII). Pre-treatment viral load level is also an important factor in the selection of an initial ARV regimen because several currently approved ARV drugs or regimens have been associated with poorer responses in patients with high baseline viral load (see What to Start). Commercially available HIV-1 RNA assays do not detect HIV-2 viral load. For further discussion on HIV-2 RNA monitoring in patients with HIV-1/HIV-2 co-infection or HIV-2 mono-infection, see HIV-2 Infection.
Several systematic reviews of data from clinical trials involving thousands of participants have established that decreases in viral load following initiation of ART are associated with reduced risk of progression to AIDS or death. Thus, viral load testing is an established surrogate marker for treatment response. 4 The minimal change in viral load considered to be statistically significant (2 standard deviations) is a three-fold change (equivalent to a 0.5 log10 copies/mL change). Optimal viral suppression is defined generally as a viral load persistently below the level of detection. Virologic failure is defined by the inability to achieve or maintain suppression of viral replication to an HIV RNA level <200 copies/mL, a threshold that eliminates most cases of apparent viremia caused by viral load blips or assay variability 10 (see Virologic Failure and Suboptimal Immunologic Response).
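Because significance is defined on the log10 scale, the arithmetic is worth making explicit: a 0.5 log10 change corresponds to a factor of 10^0.5, or roughly threefold. A minimal sketch (function names are illustrative, not from the guidelines):

```python
import math

def log10_change(vl_before: float, vl_after: float) -> float:
    """Change in viral load on the log10 scale (copies/mL)."""
    return math.log10(vl_after) - math.log10(vl_before)

def is_significant(vl_before: float, vl_after: float) -> bool:
    """A change of at least 0.5 log10 (about threefold, i.e., 2 standard
    deviations of assay and biologic variability) is significant."""
    return abs(log10_change(vl_before, vl_after)) >= 0.5

# 30,000 -> 10,000 copies/mL is only a 0.48 log10 drop: not significant.
print(round(log10_change(30_000, 10_000), 2), is_significant(30_000, 10_000))
# 30,000 -> 3,000 copies/mL is a full 1.0 log10 drop: significant.
print(round(log10_change(30_000, 3_000), 2), is_significant(30_000, 3_000))
```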
Individuals who are adherent to their ARV regimens and do not harbor resistance mutations to the component drugs can generally achieve viral suppression 8 to 24 weeks after ART initiation; rarely, in some patients it may take longer. Recommendations on the frequency of viral load monitoring are summarized below:
- After initiation of ART or modification of therapy because of virologic failure. Plasma viral load should be measured before initiation of ART and within 2 to 4 weeks but no later than 8 weeks after treatment initiation or modification (AIII). The purpose of the measurements is to confirm an adequate initial virologic response to ART, indicating appropriate regimen selection and patient adherence to therapy. Repeat viral load measurement should be performed at 4-to 8-week intervals until the level falls below the assay's limit of detection (BIII).
- In virologically suppressed patients in whom ART was modified because of drug toxicity or for regimen simplification. Viral load measurement should be performed within 4 to 8 weeks after changing therapy (AIII). The purpose of viral load monitoring at this point is to confirm the effectiveness of the new regimen.
- In patients on a stable, suppressive ARV regimen. Viral load should be repeated every 3 to 4 months (AIII) or as clinically indicated to confirm continuous viral suppression. Clinicians may extend the interval to 6 months for adherent patients whose viral load has been suppressed for more than 2 years and whose clinical and immunologic status is stable (AIII).
- In patients with suboptimal response. The frequency of viral load monitoring will depend on clinical circumstances, such as adherence and availability of further treatment options. In addition to viral load monitoring, a number of additional factors, such as patient adherence to prescribed medications, suboptimal drug exposure, or drug interactions, should be assessed. Patients who fail to achieve viral suppression should undergo resistance testing to aid in the selection of an alternative regimen (see Drug-Resistance Testing and Virologic Failure and Suboptimal Immunologic Response sections).
# CD4 Count Monitoring
The CD4 count is the most important laboratory indicator of immune function in HIV-infected patients. It is also the strongest predictor of subsequent disease progression and survival according to findings from clinical trials and cohort studies. 11,12 CD4 counts are highly variable; a significant change (2 standard deviations) between 2 tests is approximately a 30% change in the absolute count, or an increase or decrease in CD4 percentage by 3 percentage points. Monitoring of lymphocyte subsets other than CD4 (e.g., CD8, CD19) has not proven clinically useful, is more expensive, and is not routinely recommended (BIII).
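A hedged sketch of how these two thresholds might be applied between two consecutive tests (the helper simply restates the thresholds above and is not a clinical tool):

```python
def cd4_change_significant(abs_before: float, abs_after: float,
                           pct_before: float, pct_after: float) -> bool:
    """True if the change between two tests exceeds roughly 2 standard
    deviations: ~30% change in absolute CD4 count, or a 3 percentage-point
    change in CD4 percentage (per the text above)."""
    relative_change = abs(abs_after - abs_before) / abs_before
    pct_point_change = abs(pct_after - pct_before)
    return relative_change >= 0.30 or pct_point_change >= 3.0

# 400 -> 350 cells/mm3 (a 12.5% change) with CD4% 24 -> 23: within
# expected test-to-test variability.
print(cd4_change_significant(400, 350, 24.0, 23.0))  # False
# 400 -> 260 cells/mm3 (a 35% change): significant.
print(cd4_change_significant(400, 260, 24.0, 22.0))  # True
```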
# Use of CD4 Count for Initial Assessment
CD4 count should be measured in all patients at entry into care (AI). It is the key factor in determining the need to initiate OI prophylaxis (see the Adult Opportunistic Infection Guidelines) 13 and the urgency to initiate ART (AI) (see the Initiating Antiretroviral Therapy in Antiretroviral-Naive Patients section of these guidelines). Although most OIs occur in patients with CD4 counts <200 cells/mm3, some OIs can occur in patients with higher CD4 counts. 14
# Use of CD4 Count for Monitoring Therapeutic Response
The CD4 count is used to assess a patient's immunologic response to ART. It is also used to determine whether prophylaxis for OIs can be discontinued (see the Adult Opportunistic Infection Guidelines). 13 For most patients on therapy, an adequate response is defined as an increase in CD4 count in the range of 50 to 150 cells/mm3 during the first year of ART, generally with an accelerated response in the first 3 months of treatment. Subsequent increases average approximately 50 to 100 cells/mm3 per year until a steady-state level is reached. 15 Patients who initiate therapy with a low CD4 count 16 or at an older age 17 may have a blunted increase in their counts despite virologic suppression.
The CD4 count response to ART varies widely, but a poor CD4 response in a patient with viral suppression is rarely an indication for modifying an ARV regimen. In patients with consistently suppressed viral loads who have already experienced ART-related immune reconstitution, the CD4 count provides limited information.
Frequent testing is unnecessary because the results rarely lead to a change in clinical management. One retrospective study found that declines in CD4 count to <200 cells/mm3 were rare in virologically suppressed patients with CD4 counts >300 cells/mm3. 18 Similarly, the ARTEMIS trial found that CD4 monitoring had no clinical benefit in patients who had suppressed viral loads and CD4 counts >200 cells/mm3 after 48 weeks of therapy. 19 Furthermore, the risk of Pneumocystis jirovecii pneumonia is extremely low in patients on suppressive ART who have CD4 counts between 100 and 200 cells/mm3. 20 Although uncommon, CD4 count declines can occur in a small percentage of virologically suppressed patients and may be associated with adverse clinical outcomes such as cardiovascular disease, malignancy, and death. 21 An analysis of costs associated with CD4 monitoring in the United States estimated that reducing CD4 monitoring in treated patients from every 6 months to every 12 months could result in annual savings of approximately $10 million. 22 For the patient on a suppressive regimen whose CD4 count has consistently ranged between 300 and 500 cells/mm3 for at least 2 years, the Panel recommends CD4 monitoring on an annual basis (BII). Continued CD4 monitoring for virologically suppressed patients whose CD4 counts have been consistently >500 cells/mm3 for at least 2 years may be considered optional (CIII). The CD4 count should be monitored more frequently, as clinically indicated, when there are changes in a patient's clinical status that may decrease CD4 count and thus prompt OI prophylaxis. Examples of such changes include the appearance of new HIV-associated clinical symptoms or initiation of treatment known to reduce CD4 cell count (e.g., interferon, chronic corticosteroids, or anti-neoplastic agents) (AIII). In patients who fail to maintain viral suppression while on ART, the Panel recommends CD4 count monitoring every 3 to 6 months (AIII) (see Virologic Failure and Suboptimal Immunologic Response section).
# Factors that Affect Absolute CD4 Count
The absolute CD4 count is a calculated value based on the total white blood cell (WBC) count and the percentages of total and CD4+ T lymphocytes. This absolute number may fluctuate in individuals and may be influenced by factors that affect the total WBC count and lymphocyte percentages, such as use of bone marrow-suppressive medications or the presence of acute infections. Splenectomy 23,24 or co-infection with human T-lymphotropic virus type I (HTLV-1) 25 may cause misleadingly elevated CD4 counts. Alpha-interferon may reduce the absolute CD4 count without changing the CD4 percentage. 26 In all these settings, CD4 percentage remains stable and may be a more appropriate parameter to assess a patient's immune function.
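A short sketch of the derivation described above (variable names are illustrative); it also shows why the absolute count can halve while the CD4 percentage is unchanged, which is why the percentage can be the more stable parameter:

```python
def absolute_cd4(wbc_per_mm3: float, lymphocyte_pct: float, cd4_pct: float) -> float:
    """Absolute CD4 count = WBC x (% lymphocytes) x (% CD4+ T lymphocytes),
    as described in the text above. Result is in cells/mm3."""
    return wbc_per_mm3 * (lymphocyte_pct / 100) * (cd4_pct / 100)

# Same 25% CD4 percentage and 30% lymphocytes; a marrow-suppressive drug
# that halves the WBC also halves the absolute CD4 count.
print(absolute_cd4(6000, 30, 25))  # 450.0
print(absolute_cd4(3000, 30, 25))  # 225.0
```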
| Clinical Scenario | Viral Load Monitoring | CD4 Count Monitoring |
|---|---|---|
| Before initiating ART | At entry into care (AIII). If ART initiation is deferred, repeat before initiating ART (AIII). In patients not initiating ART, repeat testing is optional (CIII). | At entry into care (AI). If ART is deferred, every 3 to 6 months (AIII). b |
| After initiating ART | Preferably within 2 to 4 weeks (and no later than 8 weeks) after initiation of ART (AIII); thereafter, every 4 to 8 weeks until viral load suppressed (BIII). | 3 months after initiation of ART (AIII). |
| After modifying ART because of drug toxicities or for regimen simplification in a patient with viral suppression | 4 to 8 weeks after modification of ART to confirm effectiveness of new regimen (AIII). | Monitor according to prior CD4 count and duration on ART, as outlined below. |
| After modifying ART because of virologic failure | Preferably within 2 to 4 weeks (and no later than 8 weeks) after modification (AIII); thereafter, every 4 to 8 weeks until viral load suppressed (BIII). If viral suppression is not possible, repeat viral load every 3 months or more frequently if indicated (AIII). | Every 3 to 6 months (AI). |
| During the first 2 years of ART | Every 3 to 4 months (AIII). | Every 3 to 6 months (BII). a |
| After 2 years of ART (VL consistently suppressed, CD4 consistently 300-500 cells/mm3) | Can extend to every 6 months for patients with consistent viral suppression for ≥2 years (AIII). | Every 12 months (BII). |
| After 2 years of ART (VL consistently suppressed, CD4 consistently >500 cells/mm3) | Can extend to every 6 months for patients with consistent viral suppression for ≥2 years (AIII). | Optional (CIII). |
| While on ART with detectable viremia (VL repeatedly >200 copies/mL) | Every 3 months (AIII) or more frequently if clinically indicated. c | Every 3 to 6 months (AIII). |

a Monitoring of lymphocyte subsets other than CD4 (e.g., CD8, CD19) has not proven clinically useful, adds to costs, and is not routinely recommended (BIII).
b Some experts may repeat CD4 count every 3 months in patients with low baseline CD4 count (<300 cells/mm3).
c The following are examples of clinically indicated scenarios: changes in a patient's clinical status that may decrease CD4 count and thus prompt initiation of prophylaxis for opportunistic infections (OI), such as new HIV-associated symptoms, or initiation of treatment with medications which are known to reduce CD4 cell count.
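As a rough illustration only, the rows above can be condensed into a lookup structure; the scenario labels below are simplified paraphrases of the table, not guideline terminology:

```python
# (viral load interval, CD4 count interval), condensed from the table above.
MONITORING_INTERVALS = {
    "before ART": (
        "at entry into care; repeat before starting if ART deferred",
        "at entry into care; every 3-6 months while deferred",
    ),
    "after initiating ART": (
        "within 2-4 weeks (no later than 8), then every 4-8 weeks until suppressed",
        "3 months after initiation",
    ),
    "first 2 years on ART": ("every 3-4 months", "every 3-6 months"),
    "suppressed >2 years, CD4 300-500": ("can extend to every 6 months", "every 12 months"),
    "suppressed >2 years, CD4 >500": ("can extend to every 6 months", "optional"),
}

def monitoring_summary(scenario: str) -> str:
    """Return a human-readable summary for a known scenario label."""
    vl, cd4 = MONITORING_INTERVALS[scenario]
    return f"viral load: {vl}; CD4 count: {cd4}"

print(monitoring_summary("first 2 years on ART"))
```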
# Drug-Resistance Testing (Last updated July 14, 2016; last reviewed July 14, 2016)
# Genotypic and Phenotypic Resistance Assays
Genotypic and phenotypic resistance assays are used to assess viral strains and select treatment strategies. These assays provide information on resistance to nucleoside reverse transcriptase inhibitors (NRTIs), non-nucleoside reverse transcriptase inhibitors (NNRTIs), protease inhibitors (PIs), and integrase strand transfer inhibitors (INSTIs). In some circumstances, INSTI-resistance tests may need to be ordered separately. Clinicians should check with the testing laboratory. INSTI-resistance testing is particularly important in persons who experience virologic failure while taking an INSTI-containing regimen. Testing for fusion inhibitor resistance can also be ordered separately. Co-receptor tropism assays should be performed when considering the use of a CCR5 antagonist. Phenotypic co-receptor tropism assays have been used in clinical practice. A genotypic assay to predict co-receptor use is now commercially available (see Co-receptor Tropism Assays).
# Genotypic Assays
Genotypic assays detect drug-resistance mutations in relevant viral genes. Most genotypic assays involve sequencing the reverse transcriptase (RT), protease (PR), and integrase (IN) genes to detect mutations that are known to confer drug resistance.
# Panel's Recommendations
For Antiretroviral Therapy-Naive Patients:
- HIV drug-resistance testing is recommended for persons with HIV infection at entry into care to guide selection of the initial antiretroviral therapy (ART) regimen (AII). If therapy is deferred, repeat testing may be considered at the time of ART initiation (CIII).
- Genotypic testing is recommended as the preferred resistance testing to guide therapy in antiretroviral (ARV)-naive patients (AIII).
- In special circumstances (e.g., in patients with acute or recent HIV infection and in pregnant HIV-infected women), ART initiation should not be delayed while awaiting resistance testing results; the regimen can be modified once results are reported (AIII).
- Standard genotypic drug-resistance testing in ARV-naive persons involves testing for mutations in the reverse transcriptase (RT) and protease (PR) genes. If transmitted integrase strand transfer inhibitor (INSTI) resistance is a concern, providers should ensure that genotypic resistance testing also includes INSTI genotype testing (BIII).
For Antiretroviral Therapy-Experienced Patients:
- HIV drug-resistance testing should be performed to assist in the selection of active drugs when changing ART regimens in the following patients:
- In patients with virologic failure and HIV RNA levels >1000 copies/mL (AI).
- In patients with HIV RNA levels >500 copies/mL but <1,000 copies/mL, drug-resistance testing may be unsuccessful but should still be considered (BII).
- Drug-resistance testing should also be performed when managing suboptimal viral load reduction (AII).
- When a patient experiences virologic failure while receiving an INSTI-based regimen, genotypic testing for INSTI resistance should be performed to determine whether to include a drug from this class in subsequent regimens (AII).
- Drug-resistance testing in the setting of virologic failure should be performed while the person is taking prescribed ARV drugs or, if not possible, within 4 weeks after discontinuing therapy (AII). If more than 4 weeks have elapsed since the ARVs were discontinued, resistance testing may still provide useful information to guide therapy; however, it is important to recognize that previously selected resistance mutations can be missed (CIII).
- Genotypic testing is recommended as the preferred resistance testing to guide therapy in patients with suboptimal virologic response or virologic failure while on first-or second-line regimens (AII).
- The addition of phenotypic to genotypic testing is generally preferred for persons with known or suspected complex drug-resistance mutation patterns (BIII). associated with resistance to the fusion inhibitor enfuvirtide is also commercially available. Genotypic assays can be performed rapidly and results are available within 1 to 2 weeks of sample collection. Interpreting these test results requires knowledge of the mutations selected by different antiretroviral (ARV) drugs and of the potential for cross resistance to other drugs conferred by certain mutations. The International AIDS Society-USA (IAS-USA) maintains an updated list of significant resistance-associated mutations in the RT, PR, IN, and envelope genes (see ). 1 The Stanford University HIV Drug Resistance Database () also provides helpful guidance for interpreting genotypic resistance test results. Various tools to assist the provider in interpreting genotypic test results are now available. Clinical trials have demonstrated that consulting with specialists in HIV drug resistance improves virologic outcomes. 6 Clinicians are thus encouraged to consult a specialist to interpret genotypic test results and design optimal new regimens.
# Phenotypic Assays
Phenotypic assays measure the ability of a virus to grow in different concentrations of ARV drugs. RT and PR gene sequences and, more recently, integrase and envelope sequences derived from patient plasma HIV RNA are inserted into the backbone of a laboratory clone of HIV or used to generate pseudotyped viruses that express the patient-derived HIV genes of interest. Replication of these viruses at different drug concentrations is monitored by expression of a reporter gene and is compared with replication of a reference HIV strain. The drug concentration that inhibits viral replication by 50% (i.e., the median inhibitory concentration [IC50]) is calculated, and the ratio of the IC50 of the test and reference viruses is reported as the fold increase in IC50 (i.e., fold resistance).
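The fold-resistance computation itself is a simple ratio. In the sketch below, the IC50 values and the clinical cutoff are hypothetical placeholders; as noted in the next paragraph, clinically significant cutoffs are assay- and drug-specific:

```python
def fold_resistance(ic50_test: float, ic50_reference: float) -> float:
    """Fold increase in IC50 of the patient-derived (test) virus relative
    to the reference strain; both IC50s must be in the same units."""
    return ic50_test / ic50_reference

# Hypothetical example: the test virus requires 8x the drug concentration
# of the reference strain to have its replication inhibited by 50%.
fold = fold_resistance(ic50_test=0.40, ic50_reference=0.05)
CLINICAL_CUTOFF = 2.5  # placeholder only; real cutoffs vary by drug and assay
print(fold, "reduced susceptibility" if fold > CLINICAL_CUTOFF else "susceptible")
```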
Automated phenotypic assays that can produce results in 2 to 3 weeks are commercially available, but they cost more to perform than genotypic assays. In addition, interpreting phenotypic assay results is complicated by incomplete information regarding the specific resistance level (i.e., fold increase in IC50) associated with drug failure, although clinically significant fold-increase cutoffs are now available for some drugs. Again, consulting with a specialist to interpret test results can be helpful.
# Limitations of Genotypic and Phenotypic Assays
Limitations of both genotypic and phenotypic assays include lack of uniform quality assurance testing for all available assays, relatively high cost, and insensitivity to minor viral species. Drug-resistant viruses that constitute less than 10% to 20% of the circulating virus population will probably not be detected by commercially available assays. This limitation is important to note because a wild-type virus often reemerges as the predominant population in the plasma after drugs that exert selective pressure on drug-resistant populations are discontinued. As a consequence, the proportion of virus with resistance mutations decreases to below the 10% to 20% threshold. In the case of some drugs, this reversion to predominantly wild-type virus can occur in the first 4 to 6 weeks after the drugs are discontinued.
Prospective clinical studies have shown that despite this plasma reversion, re-initiation of the same ARV agents (or those sharing similar resistance pathways) is usually associated with early drug failure, and that the virus present at failure is derived from previously archived resistant virus. 15 Therefore, resistance testing is most valuable when performed before or within 4 weeks after drugs are discontinued (AII). Because resistant virus may persist longer in the plasma of some patients, resistance testing done 4 to 6 weeks after discontinuation of drugs may still detect mutations. However, the absence of detectable resistance in such patients must be interpreted with caution when designing subsequent ARV regimens.
# Use of Resistance Assays in Clinical Practice (See Table 5)
# Use of Resistance Assays in Determining Initial Treatment
Transmission of drug-resistant HIV strains is well documented and associated with suboptimal virologic response to initial antiretroviral therapy (ART). The risk of acquiring drug-resistant virus is related to the prevalence of drug resistance in HIV-infected persons engaging in high-risk behaviors in a given community.
In high-income countries (e.g., the United States, some European countries, Australia, and Japan), approximately 10% to 17% of ART-naive patients have resistance mutations to at least 1 ARV drug. 20 Up to 8%, but generally less than 5%, of transmitted viruses will exhibit resistance to drugs from more than 1 class. Transmitted resistant HIV is generally either NRTI-or NNRTI-resistant. PI resistance is much less common, and to date, transmitted INSTI resistance is rare. 24 In persons with acute or recent (early) HIV infection, resistance testing can guide therapy selection to optimize virologic response. Therefore, resistance testing in this situation is recommended (AII). A genotypic assay is preferred for this purpose (AIII). In this setting, treatment initiation should not be delayed pending resistance testing results if the patient is willing and able to begin treatment. Once results are reported, the regimen can be modified if warranted (see Acute and Recent HIV (Early) Infection). In the absence of ART, resistant viruses may decline over time to less than the detection limit of standard resistance tests. However, when ART is eventually initiated, even low levels of resistant viruses may still increase the risk of treatment failure. Therefore, if ART is deferred, resistance testing should still be performed during acute HIV infection (AIII). In this situation, the genotypic resistance test result may be kept on record until the patient begins ART. Repeat resistance testing at the start of treatment may be considered because a patient may acquire drug-resistant virus (i.e., superinfection) between entry into care and initiation of ART (CIII).
Interpretation of drug-resistance testing before ART initiation in patients with chronic HIV infection is less straightforward. The rate at which transmitted resistance-associated mutations revert to wild-type virus has not been completely delineated, but mutations present at the time of HIV transmission are more stable than those selected under drug pressure. It is often possible to detect resistance-associated mutations in viruses that were transmitted several years earlier. No prospective trial has addressed whether drug-resistance testing before initiation of therapy confers benefit in this population. However, data from several studies suggest that virologic responses in persons with baseline resistance mutations are suboptimal. In addition, an analysis of early genotypic resistance testing in treatment-naive HIV-infected patients suggests that baseline testing in this population is cost effective and should be performed. 34 Therefore, resistance testing in chronically infected persons is recommended at the time of entry into HIV care (AII). Although no definitive prospective data exist to support the choice of one type of resistance testing over another, genotypic testing is generally preferred over phenotypic testing because of lower cost, more rapid turnaround time, greater sensitivity for detecting mixtures of wild-type and resistant virus, and easier-to-interpret test results (AIII). If therapy is deferred, repeat testing shortly before initiating ART may be considered because the patient may have acquired drug-resistant virus (i.e., superinfection) (CIII).
Standard genotypic drug-resistance testing in ARV-naive persons involves testing for mutations in the RT and PR genes. Although reports of transmission of INSTI-resistant virus are rare, as use of INSTIs increases, the potential for transmission of INSTI-resistant virus may also increase. Therefore, when INSTI resistance is suspected, providers should supplement standard baseline genotypic resistance testing with genotypic testing for resistance to this class of drugs (BIII).
# Use of Resistance Assays in the Event of Virologic Failure
Resistance assays are important tools to inform treatment decisions for patients who experience virologic failure while on ART. Several prospective studies assessed the utility of resistance testing to guide ARV drug selection in patients with virologic failure. These studies involved genotypic assays, phenotypic assays, or both. 6 In general, these studies found that changes in therapy based on resistance testing results produced better early virologic responses to salvage regimens than regimen changes guided only by clinical judgment.
In addition, one observational cohort study found that performance of genotypic drug-resistance testing in ART-experienced patients with detectable plasma HIV RNA was independently associated with improved survival. 42 Thus, resistance testing is recommended as a tool for selecting active drugs when changing ARV regimens.

When the use of a CCR5 antagonist is being considered, a co-receptor tropism assay should be performed (AI). Phenotypic co-receptor tropism assays have been used in clinical practice. A genotypic assay to predict co-receptor use is now commercially available and is less expensive than phenotypic assays. Evaluation of genotypic assays is ongoing, but current data suggest that genotypic tropism testing should be considered as an alternative to phenotypic tropism testing. The same principles regarding testing for co-receptor use also apply to testing when patients exhibit virologic failure on a CCR5 antagonist. 47 Resistance to CCR5 antagonists in the absence of detectable CXCR4-using virus has been reported, but such resistance is uncommon (see Co-receptor Tropism Assays).
A next-generation sequencing genotypic resistance assay, which analyzes HIV-1 pro-viral DNA in the host cells, is now commercially available. This test aims to detect archived resistance mutations in patients with HIV RNA below the limit of detection. However, the clinical utility of this assay has yet to be determined.
# Use of Resistance Assays in Pregnant Women
In pregnant women, the goal of ART is to maximally reduce plasma HIV RNA to provide optimal maternal therapy and to prevent perinatal transmission of HIV. Genotypic resistance testing is recommended for all pregnant women before initiation of therapy (AIII) and for those entering pregnancy with detectable HIV RNA levels while on therapy (AI). Phenotypic testing in those found to have complex drug-resistance mutation patterns may provide additional information (BIII). Optimal prevention of perinatal transmission requires initiation of ART pending resistance testing results. Once the results are available, the ARV regimen can be changed as needed. If ART is deferred, repeat resistance testing may be considered when therapy is initiated (CIII). A genotypic assay is generally preferred (AIII).
# Table 5. Recommendations for Using Drug-Resistance Assays

| Clinical Setting/Recommendation | Rationale |
|---|---|
| Drug-resistance assay recommended | |
| In ART-naive patients with acute or recent (early) HIV infection: Drug-resistance testing is recommended (AII). A genotypic assay is generally preferred (AIII). | Drug-resistance testing can determine whether drug-resistant virus was transmitted. The initial regimen can be modified once resistance test results are available. Genotypic testing is preferred to phenotypic testing because of lower cost, faster turnaround time, and greater sensitivity for detecting mixtures of wild-type and resistant virus. Repeat testing when ART is initiated may be considered because the patient may have acquired a drug-resistant virus (i.e., superinfection). |
| In ART-naive patients with chronic HIV infection: Drug-resistance testing is recommended at entry into HIV care to guide selection of initial ART (AII). A genotypic assay is generally preferred (AIII). If an INSTI is considered for an ART-naive patient and transmitted INSTI resistance is a concern, providers should supplement standard resistance testing with a specific INSTI genotypic resistance assay (BIII). If therapy is deferred, repeat resistance testing may be considered before initiation of ART (CIII); a genotypic assay is generally preferred (AIII). If use of a CCR5 antagonist is being considered, a co-receptor tropism assay should be performed (AI) (see Co-receptor Tropism Assays). | Transmitted HIV with baseline resistance to at least 1 drug is seen in 10% to 17% of patients, and suboptimal virologic responses may be seen in patients with baseline resistant mutations. Some drug-resistance mutations can remain detectable for years in untreated, chronically infected patients. Genotypic assays provide information on resistance to NRTIs, NNRTIs, PIs, and INSTIs; in some circumstances, INSTI-resistance tests need to be ordered separately (clinicians should check with the testing laboratory). Currently, transmitted INSTI resistance is infrequent, but the risk of a patient acquiring INSTI-resistant strains may be greater in certain known exposure settings. Repeat testing before initiation of ART may be considered because the patient may have acquired a drug-resistant virus (i.e., a superinfection). Genotypic testing is preferred to phenotypic testing because of lower cost, faster turnaround time, and greater sensitivity for detecting mixtures of wild-type and resistant virus. |
| In patients with virologic failure: Drug-resistance testing is recommended in patients on combination ART with HIV RNA levels >1,000 copies/mL (AI). In patients with HIV RNA levels >500 copies/mL but <1,000 copies/mL, testing may not be successful but should still be considered (BII). A standard genotypic resistance assay is generally preferred for patients experiencing virologic failure on their first or second regimens (AII). When virologic failure occurs while a patient is on an INSTI-based regimen, genotypic testing for INSTI resistance should be performed to determine whether to include drugs from this class in subsequent regimens (AII). If use of a CCR5 antagonist is being considered, a co-receptor tropism assay should be performed (AI) (see Co-receptor Tropism Assays). Adding phenotypic testing to genotypic testing is generally preferred in patients with known or suspected complex drug-resistance patterns, particularly to PIs (BIII). | Drug-resistance testing can help determine the role of resistance in drug failure and maximize the clinician's ability to select active drugs for the new regimen. Drug-resistance testing should be performed while the patient is taking prescribed ARV drugs or, if not possible, within 4 weeks after discontinuing therapy. Genotypic testing is preferred to phenotypic testing because of lower cost, faster turnaround time, and greater sensitivity for detecting mixtures of wild-type and resistant HIV. Genotypic assays provide information on NRTI-, NNRTI-, PI-, and INSTI-associated resistance mutations; in some circumstances, INSTI-resistance tests need to be ordered separately (clinicians should check with the testing laboratory). Phenotypic testing can provide additional useful information in patients with complex drug-resistance mutation patterns, particularly to PIs. |
| In patients with suboptimal suppression of viral load: Drug-resistance testing is recommended in patients with suboptimal viral load suppression after initiation of ART (AII). | Testing can determine the role of resistance and thus help the clinician identify the number of active drugs available for a new regimen. |
| In HIV-infected pregnant women: Genotypic resistance testing is recommended for all pregnant women before initiation of ART (AIII) and for those entering pregnancy with detectable HIV RNA levels while on therapy (AI). | The goal of ART in HIV-infected pregnant women is to achieve maximal viral suppression for treatment of maternal HIV infection and for prevention of perinatal transmission of HIV. Genotypic resistance testing will assist the clinician in selecting the optimal regimen for the patient. However, treatment should not be delayed while awaiting results of resistance testing. The initial regimen can be modified once resistance test results are available. |
| Drug-resistance assay not usually recommended | |
| After therapy is discontinued: Drug-resistance testing is not usually recommended more than 4 weeks after ARV drugs are discontinued (BIII). | Drug-resistance mutations may become minor species in the absence of selective drug pressure, and available assays may not detect minor drug-resistant species. If testing is performed in this setting, the detection of drug resistance may be of value; however, the absence of resistance does not rule out the presence of minor drug-resistant species. |
| In patients with low HIV RNA levels: Drug-resistance testing is not usually recommended in patients with a plasma viral load <500 copies/mL (AIII). | Resistance assays cannot be consistently performed given low HIV RNA levels. |

# Co-receptor Tropism Assays

HIV enters cells by a complex process that involves sequential attachment to the CD4 receptor followed by binding to either the CCR5 or CXCR4 molecules and fusion of the viral and cellular membranes. 1 CCR5 co-receptor antagonists prevent HIV entry into target cells by binding to the CCR5 receptors. 2 Phenotypic and, to a lesser degree, genotypic assays have been developed that can determine or predict the co-receptor tropism (i.e., CCR5, CXCR4, or both) of the patient's dominant virus population. An older-generation assay (Trofile, Monogram Biosciences, Inc., South San Francisco, CA) was used to screen patients who were participating in clinical trials that led to the approval of maraviroc (MVC), the only CCR5 antagonist currently available. The assay has been improved and is now available with enhanced sensitivity. In addition, a genotypic assay to predict co-receptor usage is now commercially available.
During acute/recent infection, the vast majority of patients harbor a CCR5-utilizing virus (R5 virus), which suggests that the R5 variant is preferentially transmitted. Viruses in many untreated patients eventually exhibit a shift in co-receptor tropism from CCR5 usage to either CXCR4 or both CCR5 and CXCR4 tropism (i.e., dual- or mixed-tropic; D/M-tropic). This shift is temporally associated with a more rapid decline in CD4 T-cell counts, 3,4 but whether this tropism shift is a cause or a consequence of progressive immunodeficiency remains undetermined. 1 Antiretroviral (ARV)-treated patients with extensive drug resistance are more likely to harbor X4- or D/M-tropic variants than untreated patients with comparable CD4 counts. 5 The prevalence of X4- or D/M-tropic variants increases to more than 50% in treated patients who have CD4 counts <100 cells/mm3. 5,6
# Phenotypic Assays
Phenotypic assays characterize the co-receptor usage of plasma-derived virus. These assays involve the generation of laboratory viruses that express patient-derived envelope proteins (i.e., gp120 and gp41). These pseudoviruses, which are replication-defective, are used to infect target cell lines that express either CCR5 or CXCR4. 7,8 Using the Trofile assay, the co-receptor tropism of the patient-derived virus is confirmed by testing the susceptibility of the virus to specific CCR5 or CXCR4 inhibitors in vitro. This assay takes about 2 weeks to perform and requires a plasma HIV RNA level ≥1,000 copies/mL.
The performance characteristics of these assays have evolved. Most, if not all, patients enrolled in premarketing clinical trials of MVC and other CCR5 antagonists were screened with an earlier, less sensitive version of the Trofile assay. 8 This earlier assay failed to routinely detect low levels of CXCR4-utilizing variants. As a consequence, some patients enrolled in these clinical trials harbored low levels of CXCR4-utilizing virus at baseline that were below the assay limit of detection and exhibited rapid virologic failure after initiation of a CCR5 antagonist. 9 The assay has been revised and is now able to detect lower levels of CXCR4-utilizing viruses. In vitro, the assay can detect CXCR4-utilizing clones with 100% sensitivity when those clones represent 0.3% or more of the virus population. 10 Although this more sensitive assay has had limited use in prospective clinical trials, it is now the only one that is commercially available. For unclear
# Panel's Recommendations
- A co-receptor tropism assay should be performed whenever the use of a CCR5 co-receptor antagonist is being considered (AI).
- Co-receptor tropism testing is also recommended for patients who exhibit virologic failure on a CCR5 antagonist (BIII).
- A phenotypic tropism assay is preferred to determine HIV-1 co-receptor usage (AI).
- A genotypic tropism assay should be considered as an alternative test to predict HIV-1 co-receptor usage (BII).

# HLA-B*5701 Screening

The abacavir (ABC) hypersensitivity reaction (HSR) is a multiorgan clinical syndrome typically seen within the initial 6 weeks of ABC treatment. This reaction has been reported in 5%-8% of patients participating in clinical trials when clinical criteria were used for the diagnosis, and it is the major reason for early discontinuation of ABC. Discontinuing ABC usually promptly reverses HSR, whereas subsequent rechallenge can cause a rapid, severe, and even life-threatening recurrence. 1 Studies that evaluated demographic risk factors for ABC HSR have shown racial background as a risk factor, with white patients generally having a higher risk (5%-8%) than black patients (2%-3%). Several groups reported a highly significant association between ABC HSR and the presence of the major histocompatibility complex (MHC) class I allele HLA-B*5701. Because the clinical criteria used for ABC HSR are overly sensitive and may lead to false-positive ABC HSR diagnoses, an ABC skin patch test (SPT) was developed as a research tool to immunologically confirm ABC HSR. 4 A positive ABC SPT is an ABC-specific delayed HSR that results in redness and swelling at the skin site of application. All ABC SPT-positive patients studied were also positive for the HLA-B*5701 allele. 5 The ABC SPT could be falsely negative for some patients with ABC HSR and, at this point, is not recommended for use as a clinical tool. The PREDICT-1 study randomized patients before starting ABC either to be prospectively screened for HLA-B*5701 (with HLA-B*5701-positive patients not offered ABC) or to standard of care at the time of the study (i.e., no HLA screening, with all patients receiving ABC). 6 The overall HLA-B*5701 prevalence in this predominantly white population was 5.6%. In this cohort, screening for HLA-B*5701 eliminated immunologic ABC HSR (defined as ABC SPT positive) compared with standard of care (0% vs. 2.7%), yielding a 100% negative predictive value with respect to SPT and significantly decreasing the rate of clinically suspected ABC HSR (3.4% vs. 7.8%). The SHAPE study corroborated the low rate of immunologically validated ABC HSR in black patients and confirmed the utility of HLA-B*5701 screening for the risk of ABC HSR (100% sensitivity in black and white populations). 7 On the basis of the results of these studies, the Panel recommends screening for HLA-B*5701 before starting patients on an ABC-containing regimen (AI). HLA-B*5701-positive patients should not be prescribed ABC (AI), and the positive status should be recorded as an ABC allergy in the patient's medical record (AII). HLA-B*5701 testing is needed only once in a patient's lifetime; thus, efforts to carefully record and maintain the test result and to educate the patient about its implications are important. The specificity of the HLA-B*5701 test in predicting ABC HSR is lower than the sensitivity (i.e., 33%-50% of HLA-B*5701-positive patients would likely not develop confirmed ABC HSR if exposed to ABC). HLA-B*5701 should not be used as a substitute for clinical judgment or pharmacovigilance, because a negative HLA-B*5701 result does not absolutely rule out the possibility of some form of ABC HSR. When HLA-B*5701 screening is not readily available, it remains reasonable to initiate ABC with appropriate clinical counseling and monitoring for any signs of HSR (CIII).
# Panel's Recommendations
- The Panel recommends screening for HLA-B*5701 before starting patients on an abacavir (ABC)-containing regimen to reduce the risk of hypersensitivity reaction (HSR) (AI).
- HLA-B*5701-positive patients should not be prescribed ABC (AI).
- The positive status should be recorded as an ABC allergy in the patient's medical record (AII).
- When HLA-B*5701 screening is not readily available, it remains reasonable to initiate ABC with appropriate clinical counseling and monitoring for any signs of HSR (CIII).

Antiretroviral therapy (ART) has reduced HIV-related morbidity and mortality at all stages of HIV infection and has reduced HIV transmission. Maximal and durable suppression of plasma viremia delays or prevents the selection of drug-resistance mutations, preserves or improves CD4 T lymphocyte (CD4) cell numbers, and confers substantial clinical benefits, all of which are important treatment goals. 9,10 HIV suppression with ART may also decrease inflammation and immune activation thought to contribute to higher rates of cardiovascular and other end-organ damage reported in HIV-infected cohorts (see Initiating Antiretroviral Therapy). Despite these benefits, eradication of HIV infection cannot be achieved with available antiretrovirals (ARVs). Treatment interruption has been associated with rebound viremia, worsening of immune function, and increased morbidity and mortality. 11 Thus, once initiated, ART should be continued, with the following key treatment goals:
- Maximally and durably suppress plasma HIV RNA,
- Restore and preserve immunologic function,
- Reduce HIV-associated morbidity and prolong the duration and quality of survival, and
- Prevent HIV transmission.
Achieving viral suppression currently requires the use of combination ARV regimens that generally include three active drugs from two or more drug classes. Baseline patient characteristics and results from drug resistance testing should guide design of the specific regimen (see What to Start: Initial Combination Regimens for the Antiretroviral-Naive Patient). When initial HIV suppression is not achieved or not maintained, changing to a new regimen with at least two active drugs is often required (see Virologic Failure).
The increasing number of ARV drugs and drug classes makes viral suppression below detection limits an achievable goal in most patients.
After initiation of effective ART, viral load reduction to below limits of assay detection usually occurs within the first 12 to 24 weeks of therapy. Predictors of virologic success include:
- low baseline viremia,
- high potency of the ARV regimen,
- tolerability of the regimen,
- convenience of the regimen,
- excellent adherence to the regimen.
# Strategies to Achieve Treatment Goals
# Selection of Initial Combination Regimen
Several ARV regimens are recommended for use in ART-naive patients (see What to Start). Most of the recommended regimens have comparable efficacy but vary in pill burden, potential for drug interactions and/or side effects, and propensity to select for resistance mutations if ART adherence is suboptimal. Regimens should be tailored for the individual patient to enhance adherence and support long-term treatment success. Considerations when selecting an ARV regimen for an individual patient include potential side effects, patient comorbidities, possible interactions with concomitant medications, results of pretreatment genotypic drug-resistance testing, and regimen convenience (see Table 7).
# Improving Adherence
Suboptimal adherence may result in reduced treatment response. Incomplete adherence can result from complex medication regimens; patient-related factors, such as active substance abuse, depression, or the experience of adverse effects; and health system issues, including interruptions in patient access to medication and inadequate treatment education and support.
# Introduction
Without antiretroviral therapy (ART), most HIV-infected individuals will eventually develop progressive immunodeficiency marked by CD4 T lymphocyte (CD4) cell depletion and leading to AIDS-defining illnesses and premature death. The primary goal of ART is to prevent HIV-associated morbidity and mortality. This goal is best accomplished by using effective ART to maximally inhibit HIV replication to sustain plasma HIV-1 RNA (viral load) below limits of quantification by commercially available assays. Durable viral suppression improves immune function and overall quality of life, lowers the risk of both AIDS-defining and non-AIDS-defining complications, and prolongs life.
Furthermore, high plasma HIV-1 RNA is a major risk factor for HIV transmission, and effective ART can reduce viremia and transmission of HIV to sexual partners by more than 96%. 1,2 Modeling studies suggest that expanded use of ART may lower the incidence and, eventually, the prevalence of HIV on a community or population level. 3 Thus, a secondary goal of ART is to reduce the risk of HIV transmission.
Historically, HIV-infected individuals have had low CD4 counts at presentation to care. 4 However, there have been concerted efforts to increase testing of at-risk individuals and to link HIV-infected individuals to medical care before they have advanced HIV disease. Deferring ART until CD4 counts decline puts HIV-infected individuals at risk of AIDS-defining and certain serious non-AIDS conditions. Furthermore, the magnitude of CD4 recovery is directly correlated with CD4 count at ART initiation. Consequently, many individuals who start treatment with CD4 counts <350 cells/mm 3 never achieve counts >500 cells/mm 3 after up to 6 years on ART 5 and have a shorter life expectancy than those initiating therapy at higher CD4 count thresholds. 5,6 For the above reasons, since 2012, the Panel on Antiretroviral Guidelines for Adults and Adolescents (the Panel) has recommended initiating ART in all HIV-infected individuals; however, based on the published evidence available at the time, the strength of the recommendation differed by CD4 count stratum (AI for CD4 counts <350 cells/mm 3 , AII for 350-500 cells/mm 3 , and BIII for >500 cells/mm 3 ). Findings from two large randomized controlled trials that addressed the optimal time to initiate ART, START (Strategic Timing of Antiretroviral Therapy) 7 and TEMPRANO, 8 have since led the Panel to increase the strength and evidence rating of this recommendation to AI for all patients, regardless of CD4 cell count. Both studies demonstrated about a 50% reduction in morbidity and mortality among HIV-infected individuals with CD4 counts >500 cells/mm 3 who were randomized to receive ART immediately rather than delaying initiation (described in more detail below). Prompt initiation of ART is particularly important for patients with certain clinical conditions, as discussed below.
The decision to initiate ART should always include consideration of a patient's comorbid conditions and his or her willingness and readiness to initiate therapy. Thus, on a case-by-case basis, ART may be deferred because of clinical and/or psychosocial factors; however, therapy should be initiated as soon as possible.
# Panel's Recommendations
- Antiretroviral therapy (ART) is recommended for all HIV-infected individuals, regardless of CD4 T lymphocyte cell count, to reduce the morbidity and mortality associated with HIV infection (AI).
- ART is also recommended for HIV-infected individuals to prevent HIV transmission (AI).
- When initiating ART, it is important to educate patients regarding the benefits and considerations of ART, and to address strategies to optimize adherence. On a case-by-case basis, ART may be deferred because of clinical and/or psychosocial factors, but therapy should be initiated as soon as possible.
Patients should also understand that currently available ART does not cure HIV. To improve and maintain immunologic function and maintain viral suppression, ART should be continued indefinitely.
While ART is recommended for all patients, the following conditions increase the urgency to initiate therapy:
- Pregnancy (refer to the Perinatal Guidelines for more detailed recommendations on the management of HIV-infected pregnant women) 9
- AIDS-defining conditions, including HIV-associated dementia (HAD) and AIDS-associated malignancies
- Acute opportunistic infections (OIs) (see discussion below)
- Lower CD4 counts (e.g., <200 cells/mm 3 )
- HIV-associated nephropathy (HIVAN)
- Acute/early infection (see discussion in the Acute/Early Infection section)
- HIV/hepatitis B virus coinfection
- HIV/hepatitis C virus coinfection
# Acute Opportunistic Infections and Malignancies
In patients who have AIDS-associated opportunistic diseases for which there is no effective therapy (e.g., cryptosporidiosis, microsporidiosis, progressive multifocal leukoencephalopathy), improvement of immune function with ART may improve disease outcomes; thus, ART should be started as soon as possible. For patients with mild to moderate cutaneous Kaposi's sarcoma (KS), prompt initiation of ART alone, without chemotherapy, has been associated with improvement of KS lesions, even though initial transient progression of KS lesions as a manifestation of immune reconstitution inflammatory syndrome (IRIS) can also occur. 10 Similarly, although an IRIS-like presentation of non-Hodgkin's lymphoma after initiation of ART has been described, 11 greater ART-mediated viral suppression is also associated with longer survival among individuals undergoing treatment for AIDS lymphoma. 12 Drug interactions should be considered when selecting ART, given the potential for significant interactions between chemotherapeutic agents and some ARV drugs (particularly some non-nucleoside reverse transcriptase inhibitor-based and ritonavir- or cobicistat-boosted regimens). However, a diagnosed malignancy should not delay initiation of ART, nor should initiation of ART delay treatment for the malignancy.
In the setting of some OIs, such as cryptococcal and tuberculous meningitis, for which immediate therapy may increase the risk of serious IRIS, a short delay before initiating ART may be warranted. When ART is initiated in a patient with an intracranial infection, the patient should be closely monitored for signs and symptoms associated with IRIS. In the setting of other OIs, such as Pneumocystis jirovecii pneumonia, early initiation of ART is associated with increased survival; 17 therefore, therapy should not be delayed.
In patients who have active non-meningeal tuberculosis, initiating ART during treatment for tuberculosis confers a significant survival advantage; therefore, ART should be initiated as recommended in Mycobacterium Tuberculosis Disease with HIV Coinfection.
Clinicians should refer to the Guidelines for Prevention and Treatment of Opportunistic Infections in HIV-Infected Adults and Adolescents 10 for more detailed discussion on when to initiate ART in the setting of a specific OI.
# The Need for Early Diagnosis of HIV
Fundamental to the earlier initiation of ART recommended in these guidelines is the assumption that patients will be diagnosed early in the course of HIV infection. Unfortunately, some patients with HIV infections are still diagnosed at later stages of disease. Despite the recommendations for routine, opt-out HIV screening in the health care setting regardless of perceptions about a patient's risk of infection 23 and the gradual increase in CD4 counts at first presentation to care, the median CD4 count of newly diagnosed patients remains below 350 cells/mm 3 . 4 Diagnosis of HIV infection is delayed more often in nonwhites, injection drug users, and older adults than in other populations, and many individuals in these groups develop AIDS-defining illnesses within 1 year of diagnosis. Therefore, to ensure that the current treatment guidelines have maximum impact, routine HIV screening per current Centers for Disease Control and Prevention recommendations is essential. It is also critical that all newly diagnosed patients are educated about HIV disease and linked to care for full evaluation, follow-up, and management. Once patients are in care, focused effort is required to retain them in the health care system so that both the infected individuals and their sexual partners can fully benefit from early diagnosis and treatment.
# Evidence Supporting Benefits of ART to Prevent Morbidity and Mortality
Although observational studies had been inconsistent in defining the optimal time to initiate ART, randomized controlled trials now definitively demonstrate that ART should be initiated in all HIV-infected patients, regardless of disease stage. The urgency to initiate ART is greatest for patients at lower CD4 counts, where the absolute risk of OIs, non-AIDS morbidity, and death is highest. Randomized controlled trials have long shown that ART improves survival and delays disease progression in patients with CD4 counts <200 cells/mm 3 and/or a history of AIDS-defining conditions. 17,31 Additionally, a randomized controlled trial conducted in Haiti showed that patients who started ART with CD4 counts between 200 and 350 cells/mm 3 survived longer than those who deferred ART until their CD4 counts fell below 200 cells/mm 3 . 32 Most recently, the published START and TEMPRANO trials provide the evidence for the Panel's recommendation to initiate ART in all patients regardless of CD4 cell count (AI). The results of these two studies are summarized below.
The START trial is a large, multinational, randomized controlled clinical trial designed to evaluate the role of early ART in asymptomatic HIV-infected patients in reducing a composite clinical endpoint of AIDS-defining illnesses, serious non-AIDS events, or death. In this study, ART-naive adults (aged >18 years) with CD4 counts >500 cells/mm 3 were randomized to initiate ART soon after randomization (immediate-initiation arm) or to wait until their CD4 counts declined to 350 cells/mm 3 or AIDS developed (deferred-initiation arm). The trial was stopped early when an interim analysis demonstrated a significantly lower risk of the primary endpoint with immediate ART (hazard ratio [HR] 0.43; 95% CI, 0.30-0.62). 7 The TEMPRANO trial similarly compared immediate with deferred ART; among participants with CD4 counts >500 cells/mm 3 , 68 primary outcome events were reported in 61 patients. The risk of primary events was lower with immediate ART than with deferred ART, with a hazard ratio of 0.56 in favor of early ART (CI, 0.33-0.94). On the basis of these results, the study team concluded that early ART is beneficial in reducing the rate of these clinical events. 8 The TEMPRANO and START trials had very similar estimates of the protective effect of immediate ART among HIV-infected individuals with CD4 counts >500 cells/mm 3 , further strengthening the Panel's recommendation that ART be initiated in all patients regardless of CD4 cell count.
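As a point of arithmetic (ours, not the Panel's), a hazard ratio converts to an approximate relative risk reduction (RRR) by subtraction from 1, which is how the effect sizes above relate to the "about 50% reduction" cited earlier:

$$\mathrm{RRR} = 1 - \mathrm{HR}; \qquad 1 - 0.43 = 0.57 \ \text{(START)}, \qquad 1 - 0.56 = 0.44 \ \text{(TEMPRANO)}$$

Both 95% confidence intervals lie entirely below 1.0, which is what makes each reduction statistically significant.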
# Theoretical Continued Benefit of Early ART Initiation Long After Viral Suppression is Achieved
While the START and TEMPRANO studies demonstrated a clear benefit of immediate ART initiation in individuals with CD4 cell counts >500 cells/mm 3 , it is plausible that the benefits of early ART initiation continue long after viral suppression is achieved. As detailed in the Poor CD4 Cell Recovery and Persistent Inflammation section, persistently low CD4 counts and abnormally high levels of immune activation and inflammation despite suppressive ART predict an increased risk of not only AIDS events, but also non-AIDS events including kidney disease, liver disease, cardiovascular disease, neurologic complications, and malignancies. Earlier ART initiation appears to increase the probability of restoring normal CD4 counts, a normal CD4/CD8 ratio, and lower levels of immune activation and inflammation. Individuals initiating ART very early (i.e., during the first 6 months after infection) also appear to achieve lower immune activation levels and better immune function (as assessed by vaccine responsiveness) during ART-mediated viral suppression than those who delay therapy for a few years or more. Thus, while these questions have yet to be addressed in definitive randomized controlled trials, earlier ART initiation may result in less residual immune dysfunction during treatment, which theoretically may result in reduced risk of disease for decades to come.
# Evidence Supporting the Use of ART to Prevent HIV Transmission
# Prevention of Sexual Transmission
A number of investigations, including biological, ecological, and epidemiological studies and one randomized clinical trial, provide strong evidence that treatment of the HIV-infected individual can significantly reduce sexual transmission of HIV. Lower plasma HIV RNA levels are associated with decreases in the concentration of the virus in genital secretions. 42,43 Studies of HIV-serodiscordant heterosexual couples have demonstrated a relationship between level of plasma viremia and risk of HIV transmission-when plasma HIV RNA levels are lower, transmission events are less common. 1,44 Most significantly, the multi-continental HPTN 052 trial enrolled 1,763 HIV-serodiscordant couples in which the HIV-infected partner was ART naive with a CD4 count of 350 to 550 cells/mm 3 at enrollment to compare the effect of immediate ART versus delayed therapy (not started until CD4 count <250 cells/mm 3 ) on HIV transmission to the HIV-uninfected partner. 2 At study entry, 97% of the participants reported to be in a
heterosexual monogamous relationship. All study participants were counseled on behavioral modification and condom use. The interim results reported 28 linked HIV transmission events during the study period, with only 1 event in the early-therapy arm. This 96% reduction in transmission associated with early ART was statistically significant (HR 0.04; 95% CI, 0.01-0.27; P < 0.001). The final results of this study showed a sustained 93% reduction of HIV transmission within couples when the HIV-infected partner was taking ART as prescribed and viral load was suppressed. 45 Notably, there were only eight cases of HIV transmission within couples after the HIV-infected partner started ART; four transmissions occurred before the HIV-infected partner was virologically suppressed and four occurred during virologic failure. These results provide evidence that suppressive ART is more effective at preventing transmission of HIV than all other behavioral and biomedical prevention interventions studied. This study, as well as other observational studies and modeling analyses showing a decreased rate of HIV transmission among serodiscordant heterosexual couples following the introduction of ART, demonstrates that suppression of viremia in ART-adherent patients with no concomitant sexually transmitted diseases (STDs) substantially reduces the risk of HIV transmission. 3

HPTN 052 was conducted in heterosexual couples and not in populations at risk of HIV transmission via male-to-male sexual contact or needle sharing. In addition, in this clinical trial, adherence to ART was excellent. However, the prevention benefits of effective ART observed in HPTN 052 can reasonably be presumed to apply broadly. Therefore, the Panel recommends that ART be offered to individuals who are at risk of transmitting HIV to sexual partners (AI). Clinicians should discuss with patients the potential individual and public health benefits of therapy and the need for adherence to the prescribed regimen. Clinicians should also stress that ART is not a substitute for condom use and behavioral modification and that ART does not protect against other STDs (see Preventing Secondary Transmission of HIV).
# Prevention of Perinatal Transmission
As noted above, effective ART reduces transmission of HIV. The most dramatic and well-established example of this effect is the use of ART in pregnant women to prevent perinatal transmission of HIV. Effective suppression of HIV replication is a key determinant in reducing perinatal transmission. When maternal viral load is suppressed to <50 copies/mL near delivery, use of combination ART during pregnancy has reduced the rate of perinatal transmission of HIV from approximately 20%-30% to 0.1%-0.5%. 50,51 ART is thus recommended for all HIV-infected pregnant women, both for maternal health and for prevention of HIV transmission to the newborn. In ART-naive pregnant women, ART should be initiated as soon as possible, with the goal of suppressing plasma viremia throughout pregnancy (see Perinatal Guidelines).
# Considerations When Initiating ART
ART regimens for treatment-naive patients currently recommended in this guideline (see What to Start) can suppress and sustain viral loads below the level of quantification in most patients who adhere to their regimens. Most of the recommended regimens have low pill burden and are well tolerated. Once started on treatment, patients must continue ART indefinitely.
# Optimizing Adherence and Retention in Care
The key to successful ART in maintaining viral suppression is adherence to the prescribed regimen. Treatment failure and resultant emergence of drug resistance mutations may compromise future treatment options. While optimizing adherence and linkage to care are critical regardless of the timing of ART initiation, the evidence thus far indicates that drug resistance occurs more frequently in individuals who initiate therapy later in the course of infection than in those who initiate ART earlier. 52 In both the START 7 and TEMPRANO 8 trials, participants randomized to immediate ART achieved higher rates of viral suppression than those randomized to delayed ART. Nevertheless, it is important to discuss strategies to optimize adherence and retention in care with patients before ART initiation. ART initiation may need to be briefly delayed to resolve issues identified during such discussions.
Several clinical, behavioral, and social factors have been associated with poor adherence. These factors include untreated major psychiatric disorders, neurocognitive impairment, active substance abuse, unstable housing, other unfavorable social circumstances, patient concerns about side effects, and poor adherence to clinic visits. Clinicians should identify areas where additional intervention is needed to improve adherence, both before and after initiation of therapy. Some strategies to improve adherence are discussed in Adherence to Antiretroviral Therapy. ART reduces morbidity and mortality even in patients with relatively poor adherence and established drug resistance. Thus, mental illness, substance abuse, and psychosocial challenges are not reasons to withhold ART from a patient. Rather, these issues indicate the need for additional adherence-support interventions and may influence the choice of ART regimen (see What to Start section).
# Considerations for Special Populations
# Elite HIV Controllers
A small subset of HIV-infected individuals maintains plasma HIV-1 RNA levels below the level of quantification for years without ART. These individuals are often referred to as "elite HIV controllers." 53,54 There are limited data on the role of ART in these individuals. Given the clear benefit of ART regardless of CD4 count demonstrated in the START and TEMPRANO studies, delaying ART to see whether a patient becomes an elite controller after initial diagnosis is strongly discouraged. Nevertheless, significant uncertainty remains about the optimal management of elite controllers who have maintained undetectable viremia in the absence of ART for years. Given that ongoing HIV replication occurs even in elite controllers, ART is clearly recommended for controllers with evidence of HIV disease progression, as defined by declining CD4 counts or development of HIV-related complications. Nonetheless, even elite controllers with normal CD4 counts have evidence of abnormally high immune activation and surrogate markers of atherosclerosis, which may contribute to an increased risk of non-AIDS-related diseases. 53 One observational study suggests that elite controllers are hospitalized more often for cardiovascular and respiratory disease than patients from the general population and ART-treated patients. 58 Moreover, elite controllers with preserved CD4 counts appear to experience a decline in immune activation after ART initiation, suggesting that treatment may be beneficial. 59 Whether this potential immunologic benefit of ART in elite controllers outweighs potential ART toxicity and results in clinical benefit is unclear. Unfortunately, randomized controlled trials to address this question are unlikely given the very low prevalence of elite controllers. Although the START study included a number of participants with very low viral loads and demonstrated the benefit of immediate ART regardless of the extent of viremia, the study did not include a sufficient number of controllers to definitively determine the clinical impact of ART in this specific population. Nevertheless, there is a clear theoretical rationale for prescribing ART to HIV controllers even in the absence of detectable plasma HIV RNA levels. If ART is withheld, elite controllers should be followed closely, as some may experience CD4 cell decline, loss of viral control, or complications related to HIV infection.
# HIV-Infected Adolescents
Neither the START trial nor the TEMPRANO trial included adolescents. The Panel's recommendation to initiate ART in all patients is extrapolated to adolescents based on the expectation that they will derive benefits from early ART similar to those observed in adults. Historically, compared to adults, youth have demonstrated significantly lower levels of ART adherence and viral suppression, and higher rates of viral rebound following initial viral suppression. 60 Because youth often face multiple psychosocial and other barriers to adherence, their ability to adhere to therapy should be carefully considered when making decisions about ART initiation. Although some adolescents may not be ready to initiate therapy, clinicians should offer ART while providing effective interventions to assess and address barriers to accepting and adhering to therapy. To optimize the benefits of ART for youth, a multidisciplinary care team should provide psychosocial and adherence support (see HIV-Infected Adolescents section). 61
# Conclusion
The results of definitive randomized controlled trials support the Panel's recommendation to initiate ART in all HIV-infected individuals, regardless of CD4 cell count. Early diagnosis of HIV infection, followed by prompt ART initiation, has clear clinical benefits in reducing morbidity and mortality for HIV-infected patients and decreasing HIV transmission to their sexual partners. Although certain clinical and psychosocial factors may occasionally necessitate a brief delay in ART, treatment should be started as soon as possible.
Clinicians should educate patients on the benefits and risks of ART and the importance of adherence.
# Introduction
- On the basis of individual patient characteristics and needs, an Alternative regimen or, less frequently, an Other regimen may be the optimal regimen for a particular patient. A list of Alternative and Other regimens can be found in Table 6.
- Given the many excellent options for initial therapy, selection of a regimen for a particular patient should be guided by factors such as virologic efficacy, toxicity, pill burden, dosing frequency, drug-drug interaction potential, resistance testing results, comorbid conditions, and cost. Table 7 provides guidance on choosing an ARV regimen based on selected clinical case scenarios. Table 8 highlights the advantages and disadvantages of different components in a regimen.

The Panel's recommendations are based on clinical trial data published in peer-reviewed journals and data prepared by manufacturers for FDA review. In select cases, the Panel considers data from abstracts presented at major scientific meetings. The Panel views the strongest evidence on which to base recommendations as published information from a randomized, prospective clinical trial with an adequate sample size demonstrating that an ARV regimen achieves high rates of viral suppression, increases CD4 cell count, and has a favorable safety profile. Comparative clinical trials of initial treatments generally show no significant differences in HIV-related clinical endpoints or survival. Thus, assessments of regimen efficacy and safety are primarily based on surrogate marker endpoints (especially rates of HIV RNA suppression) and the incidence and severity of adverse events.
In some instances, the Panel recommends regimens that include medications approved by the FDA based on bioequivalence or relative bioavailability studies demonstrating that the exposure of the drug(s) in the new formulation or combination is comparable to the exposure of a reference drug(s) that has demonstrated safety and efficacy in randomized clinical trials. When developing recommendations, the Panel may also consider data from randomized switch studies in which a new medication replaces an existing medication from the same class in patients who have achieved virologic suppression on an initial regimen. Switch trials do not evaluate the ability of a drug or regimen to induce viral suppression; they only examine the drug or regimen's ability to maintain suppression. Therefore, results from switch trials may not be directly applicable to the selection of an initial regimen and should be considered in conjunction with other data, including data from trials conducted in treatment-naive patients and bioequivalence/bioavailability studies. In this section of the guidelines, the definition of the evidence rating II is expanded to include supporting data from bioavailability/bioequivalence studies or randomized switch studies.
When developing recommendations, the Panel also considers tolerability and toxicity profiles, ease of use, post-marketing safety data, observational cohort data published in peer-reviewed publications, and the experience of clinicians and community members who are actively engaged in patient care.
The Panel reviewed the available data to arrive at Recommended, Alternative, or Other regimens, as specified in Table 6. Recommended regimens are primarily those with demonstrated durable virologic efficacy, favorable tolerability and toxicity profiles, and ease of use, including some newer combinations whose use is supported by evidence from bioequivalence/bioavailability studies or randomized switch trials. Alternative regimens are those that are effective but have potential disadvantages, limitations for use in certain patient populations, or less supporting data than Recommended regimens. In certain situations, depending on an individual patient's characteristics and needs, an Alternative regimen may actually be the optimal regimen for a specific patient. Some regimens are classified as Other regimens because, when compared with Recommended or Alternative regimens, they have reduced virologic activity, limited supporting data from large comparative clinical trials, or other factors such as greater toxicities, higher pill burden, drug interaction potential, or limitations for use in certain patient populations.
In addition to Table 6, a number of tables presented below and at the end of the Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents (Adult and Adolescent Guidelines) provide clinicians with guidance on selecting and prescribing an optimal regimen for an individual patient. Table 7 lists specific case scenarios to guide regimen selection for patients with common clinical conditions. Table 8 lists the potential advantages and disadvantages of the components used in Recommended and Alternative regimens. Table 9 lists agents or regimens not recommended for initial treatment. Appendix B, Tables 1-6 list characteristics of individual ARV agents (e.g., formulations, dosing recommendations, pharmacokinetics, common adverse effects). Appendix B, Table 7 provides ARV dosing recommendations for patients who have renal or hepatic insufficiency.
# Changes Since the Last Revision of the Guidelines
Since the last revision of the Adult and Adolescent Guidelines, new data from clinical trials and cohort studies, as well as experience in clinical practice, have prompted several changes to the list of Recommended, Alternative, and Other regimens for treatment-naive patients (Table 6). Among these changes, the following deserve emphasis:
- TAF, an oral prodrug of tenofovir (TFV), is now included as a component of several Recommended regimens, including EVG/c/TAF/FTC, dolutegravir (DTG) plus TAF/FTC, darunavir/ritonavir (DRV/r) plus TAF/FTC, and raltegravir (RAL) plus TAF/FTC. These recommendations are based on data from comparative trials demonstrating that TAF-containing regimens are as effective in achieving 4 or maintaining virologic suppression 5 as TDF-containing regimens but with more favorable effects on markers of renal and bone health. In these studies, participants randomized to receive TDF had more favorable lipid profiles than those who received TAF. 4,5 Unlike TDF, which should be avoided or dose-reduced in patients with estimated creatinine clearance (CrCl) <50 to 60 mL/min, TAF-containing regimens appear to be safe and are FDA approved for use in patients with estimated CrCl as low as 30 mL/min.
- The list of Alternative regimens has also been expanded to include TAF/FTC in combination with EFV, rilpivirine (RPV), COBI- or RTV-boosted atazanavir (ATV/c or ATV/r), or COBI-boosted DRV (DRV/c).
- Guidance for clinicians on choosing between ABC-, TAF-, and TDF-containing regimens has been added to the Adult and Adolescent Guidelines.
- The Lopinavir/ritonavir (LPV/r) plus 2-NRTI regimen has been removed from the list of Other regimens because therapies containing this PI combination have a larger pill burden and greater toxicity than other currently available options. 7

See Appendix B, Table 7, and the product prescribing information for recommendations on ARV dose modification in the setting of renal impairment. Drug classes and regimens within each class are arranged first by evidence rating and, when ratings are equal, in alphabetical order.
# Recommended Regimen Options
Recommended regimens are those with demonstrated durable virologic efficacy, favorable tolerability and toxicity profiles, and ease of use.
# Choosing the 2 NRTIs
All Recommended and Alternative regimens include an NRTI combination of ABC/3TC, TAF/FTC, or TDF/FTC, each of which is available as a fixed-dose combination tablet. The choice of NRTI combination is usually guided by differences between ABC, TAF, and TDF, because FTC and 3TC have few adverse events and comparable efficacy. The main advantages of TAF and TDF over ABC are their activity against hepatitis B virus (HBV) (relevant in HBV-coinfected patients) and the fact that HLA-B*5701 testing is not required before their use. Moreover, TDF has been associated with favorable lipid effects. However, TDF use has been associated with declines in kidney function, proximal renal tubulopathy (leading to proteinuria and phosphate wasting), and reductions in bone mineral density (BMD). These tenofovir toxicities are less common with TAF, which results in lower plasma tenofovir concentrations than TDF. As a result, the main advantages of TAF over TDF are TAF's more favorable effects on renal markers and BMD. TAF has less favorable lipid effects than TDF, probably because of TAF's lower tenofovir plasma concentrations. The main advantages of ABC over TDF are that ABC does not require dose adjustment in patients with renal insufficiency and has less nephrotoxicity and fewer deleterious effects on BMD than TDF. However, ABC use has been linked to cardiovascular events in some, but not all, observational studies. There have been no head-to-head studies comparing ABC and TAF. Considerations germane to the choice among TAF, TDF, and ABC in specific clinical scenarios are summarized in Table 7, Table 8, and the section on dual-NRTI options below.
# Choosing Between an INSTI-, an NNRTI-, or a PI-Based Regimen
The choice between an INSTI, an NNRTI, or a PI as the third drug in an initial ARV regimen should be guided by the regimen's efficacy, genetic barrier to resistance, adverse effects profile, and convenience. The patient's comorbidities and concomitant medications should also be considered.

ATV/r has demonstrated excellent virologic efficacy in clinical trials and has relatively few metabolic adverse effects in comparison with other boosted PI regimens; however, clinical trial data showed that ATV/r had a higher rate of adverse effect-associated drug discontinuation than DRV/r and RAL. 8 Thus, despite these favorable attributes, based on the above considerations, EFV-, RPV-, and ATV/r-containing regimens are now listed as Alternative regimens for initial therapy. However, based on individual patient characteristics, some Alternative regimens may actually be optimal for some patients. Furthermore, patients who are doing well on EFV-, RPV-, and ATV/r-containing regimens should not necessarily be switched to other agents.
# Factors to Consider When Selecting an Initial Regimen
When selecting a regimen for an individual patient, a number of patient- and regimen-specific characteristics should be considered. The goal is to provide a potent, safe, tolerable, and convenient regimen that the patient can adhere to in order to achieve sustained virologic control. These factors can be grouped into the following categories:
Initial Characteristics to Consider in All Patients:
- See Appendix B, Table 7 for recommendations on ARV dose modification in the setting of renal impairment.
# Food effects
Regimens that Can be Taken Without Regard to Food:
- RAL- or DTG-based regimens

Oral bioavailability of these regimens is not significantly affected by food.
Regimens that Should be Taken with Food:
- ATV/r or ATV/c-based regimens
- DRV/r or DRV/c-based regimens
- EVG/c/TAF/FTC
- EVG/c/TDF/FTC
- RPV-based regimens
Food improves absorption of these listed regimens. RPV-containing regimens should be taken with at least 390 calories of food.
Regimens that Should be Taken on an Empty Stomach:
- EFV-based regimens

Food increases EFV absorption and may increase CNS side effects.
# Presence of Other Conditions
# Chronic kidney disease (defined as eGFR <60 mL/min)

Avoid TDF.
Use ABC or TAF.
ABC may be used if HLA-B*5701 negative.
If HIV RNA >100,000 copies/mL, do not use ABC/3TC plus (EFV or ATV/r).
TAF may be used if eGFR ≥30 mL/min.

Other Options When ABC or TAF Cannot be Used (See Text for Discussion):
- LPV/r plus 3TC; or
- RAL plus DRV/r (if CD4 count >200 cells/mm 3 and HIV RNA <100,000 copies/mL)

TDF has been associated with proximal renal tubulopathy. Higher rates of renal dysfunction have been reported in patients using TDF in conjunction with RTV-containing regimens.
TAF has less impact on renal function and lower rates of proteinuria than TDF.
ABC has not been associated with renal dysfunction.
See Appendix B, Table 7 for recommendations on ARV dose modification in patients with renal insufficiency.
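A minimal sketch of the chronic kidney disease logic above, written out as code purely for illustration; the function and argument names are ours, the thresholds simply restate the table, and the sketch is not a clinical decision tool.

```python
# Illustrative restatement of the CKD (eGFR <60 mL/min) row above.

def nrti_backbone_options(egfr_ml_min, hla_b5701_positive,
                          hiv_rna_copies_ml, third_drug):
    """List dual-NRTI options per the table; assumes eGFR <60 mL/min (avoid TDF)."""
    options = []
    if egfr_ml_min >= 30:
        options.append("TAF/FTC")      # TAF may be used if eGFR >=30 mL/min
    if not hla_b5701_positive:
        # ABC/3TC is not combined with EFV or ATV/r when HIV RNA >100,000 copies/mL
        if not (hiv_rna_copies_ml > 100_000 and third_drug in {"EFV", "ATV/r"}):
            options.append("ABC/3TC")
    if not options:
        # Table's fallbacks: LPV/r plus 3TC, or RAL plus DRV/r (the latter only
        # if CD4 count >200 cells/mm3 and HIV RNA <100,000 copies/mL)
        options.append("see 'Other Options' above")
    return options

print(nrti_backbone_options(45, hla_b5701_positive=False,
                            hiv_rna_copies_ml=250_000, third_drug="EFV"))
# ['TAF/FTC'] -- ABC/3TC is excluded here because HIV RNA >100,000 with EFV
```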
# Liver disease with cirrhosis
Some ARVs are contraindicated or may require dosage modification in patients with Child-Pugh class B or C disease.
Refer to Appendix B, Table 7 for specific dosing recommendations.
Patients with cirrhosis should be carefully evaluated by an expert in advanced liver disease.
# Osteoporosis
Avoid TDF.
Use ABC or TAF.
ABC may be used if HLA-B*5701 negative. If HIV RNA >100,000 copies/mL, do not use ABC/3TC plus (EFV or ATV/r).
TDF is associated with decreases in bone mineral density along with renal tubulopathy, urine phosphate wasting and resultant osteomalacia.
TAF and ABC are associated with smaller declines in bone mineral density than TDF.
# Psychiatric illness

EFV and RPV can exacerbate psychiatric symptoms and may be associated with suicidality.
# HIV-associated dementia (HAD)
Avoid EFV-based regimens if possible.
Favor DRV-based or DTG-based regimens.
EFV-related neuropsychiatric effects may confound assessment of ART's beneficial effects on improvement of HAD-related symptoms.
Theoretical CNS penetration advantage
# Narcotic replacement therapy required
If patient is receiving methadone, consider avoiding EFV-based regimens.
If EFV is used, an increase in methadone dose may be necessary.
EFV reduces methadone concentrations and may lead to withdrawal symptoms.
# High cardiac risk

Consider avoiding ABC- and LPV/r-based regimens.
Increased cardiovascular risk in some studies
# Hyperlipidemia
The Following ARV Drugs have been Associated with Dyslipidemia:
- PI/r or PI/c
- EFV
- EVG/c

DTG and RAL have fewer lipid effects.
TDF has been associated with more favorable lipid effects than ABC or TAF.
# Pregnancy
Refer to the Perinatal Guidelines.
# Presence of Coinfections
# HBV infection
Use TDF or TAF, with FTC or 3TC, whenever possible.
If TDF and TAF are Contraindicated:
- For treatment of HBV, use FTC or 3TC with entecavir and a suppressive ART regimen (see HBV/HIV Coinfection).
TDF, TAF, FTC, and 3TC are active against both HIV and HBV. 3TC-or FTC-associated HBV mutations can emerge rapidly when these drugs are used without another drug active against HBV.
# HCV treatment required
Refer to recommendations in HCV/HIV Coinfection.
# Treating TB disease with rifamycins

TAF is not recommended with any rifamycin-containing regimen.
If Rifampin is Used:
- EFV can be used without dosage adjustment
- If RAL is used, increase RAL dose to 800 mg BID.
- Use DTG at 50 mg BID dose only in patients without selected INSTI mutations (refer to product label).
If using a PI-based regimen, rifabutin should be used in place of rifampin in the TB regimen.
- Rifamycins may significantly reduce TAF exposure.
- Rifampin is a strong inducer of CYP3A4 and UGT1A1 enzymes, causing significant decrease in concentrations of PI, INSTI, and RPV.
- Rifampin has a less significant effect on EFV concentration than on other NNRTIs, PIs, and INSTIs.
- Rifabutin is a less potent inducer and is a good option for patients receiving non-EFV-based regimens.
Refer to Tables 19a, b, d and e for dosing recommendations for rifamycins used with different ARV agents.
# Choosing Among Different Drugs from an Antiretroviral Drug Class
The sections below provide clinicians with comparisons of different, currently recommended ARV drugs within a drug class. These comparisons include information related to the safety and virologic efficacy of different drugs based on clinical trial results and/or post-marketing data, specific factors to consider, and the rationales for the Panel's recommendations.
# Dual-Nucleoside Reverse Transcriptase Inhibitor Options as Part of Initial Combination Therapy
# Summary
ABC/3TC, TAF/FTC, and TDF/FTC are NRTI combinations recommended for use as components of initial therapy. Table 6 provides recommendations and ratings for the individual regimens. These recommendations are based on the virologic potency and durability, short-and long-term toxicity, and dosing convenience of these drugs.
# Clinical Trials Comparing Nucleoside Reverse Transcriptase Inhibitors
# Abacavir/Lamivudine Compared to Tenofovir Disoproxil Fumarate/Emtricitabine

Several randomized controlled trials in ART-naive participants compared ABC/3TC to TDF/FTC, either with the same or a different (third) ARV drug (also see discussion in the Dolutegravir section). 14

- The ACTG 5202 study, a randomized controlled trial in more than 1,800 participants, evaluated the efficacy and safety of ABC/3TC and TDF/FTC when each was used in combination with either EFV or ATV/r.
- Treatment randomization was stratified on the basis of a screening HIV RNA level <100,000 copies/mL or ≥100,000 copies/mL. HLA-B*5701 testing was not required before study entry.
- A Data Safety Monitoring Board recommended early termination of the ≥100,000 copies/mL stratification group because of a significantly shorter time to study-defined virologic failure in the ABC/3TC arm than in the TDF/FTC arm. 11 This difference in time to virologic failure between the arms was observed regardless of whether the third active drug was EFV or ATV/r.
- There was no difference in time to virologic failure between ABC/3TC and TDF/FTC for participants who had plasma HIV RNA <100,000 copies/mL at screening. 15
- The ASSERT study compared open-label ABC/3TC with TDF/FTC in 385 HLA-B*5701-negative, ART-naive patients; all participants also received EFV. The primary study endpoint was renal safety of the regimens. At week 48, the proportion of participants with HIV RNA <50 copies/mL was lower among ABC/3TC-treated participants than among TDF/FTC-treated participants. 12
- In the HEAT study, 688 participants received ABC/3TC or TDF/FTC in combination with once-daily LPV/r. Virologic efficacy was similar in the two study arms. In a subgroup analysis of patients with baseline HIV RNA ≥100,000 copies/mL, the proportion of participants who achieved HIV RNA <50 copies/mL at 96 weeks did not differ between the two regimens.

To date, there are no published results from a head-to-head clinical trial comparing ABC and TAF.
# Tenofovir Alafenamide Compared with Tenofovir Disoproxil Fumarate
- Two randomized, double-blind phase 3 clinical trials compared the safety and efficacy of EVG/c/TDF/FTC and EVG/c/TAF/FTC in 1,584 ART-naive adults with estimated glomerular filtration rate (eGFR) ≥50 mL/min.
- At 48 weeks, 92% of participants randomized to receive TAF and 90% of those randomized to receive TDF achieved plasma HIV RNA <50 copies/mL, demonstrating that TAF was noninferior to TDF when combined with EVG/c/FTC. Both regimens were well tolerated. The studies did not have adequate power to assess whether renal failure and fracture rates were different between the TAF and TDF groups. 6
- Participants in the TAF arm had significantly smaller reductions in BMD at the spine and the hip than those in the TDF arm.
- Through 96 weeks, change from baseline eGFR and renal biomarkers favored EVG/c/TAF/FTC, and renal tubular function was less affected by the EVG/c/TAF/FTC regimen than by the EVG/c/TDF/FTC regimen. Clinically significant renal events, including discontinuations for renal adverse events, were less frequent in participants receiving EVG/c/TAF/FTC than in those treated with EVG/c/TDF/FTC. 16 A subset analysis of patients at high risk for chronic kidney disease showed a lower rate of at least a 25% decline in eGFR among patients on EVG/c/TAF/FTC than among patients on EVG/c/TDF/FTC (11.5% vs. 24.9%, P < 0.001; a worked example of this difference follows this list). 7
- Fasting lipid levels, including low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides, increased more in the TAF group than in the TDF group at 96 weeks, with no change in the total cholesterol to HDL ratio. 4
- Combination TAF/FTC was also approved based on efficacy and safety data from one switch study in virologically suppressed patients. 5 This study included 663 patients with HIV-1 RNA <50 copies/mL for at least 6 months on a regimen containing TDF/FTC. Participants were randomized to continue TDF/FTC or switch to TAF/FTC.
- At 48 weeks, TAF/FTC was noninferior to TDF/FTC: viral suppression was maintained in 94.3% and 93% of participants, respectively.
- Improvement in eGFR and renal biomarkers was more frequent in those switched to TAF/FTC. BMD improved in those switched to TAF/FTC but declined in those continuing on TDF/FTC.
- Fasting lipid levels increased more in those who switched to TAF/FTC than in those who continued TDF/FTC.
- To assess the ability of TAF to maintain HIV and HBV suppression, 72 HIV/HBV coinfected patients with HIV-1 RNA <50 copies/mL and HBV DNA <9 log 10 IU/mL on a stable regimen were switched to EVG/c/TAF/FTC. 17 In this study, 96% of participants were on a TDF/FTC-containing regimen prior to the switch.
- Those who switched to EVG/c/TAF/FTC maintained HIV suppression: 94.4% and 91.7% of participants at 24 and 48 weeks, respectively. At 24 and 48 weeks, 86.1% and 91.7% of participants, respectively, had HBV DNA below the lower limit of quantification (<29 IU/mL).
- Decreases in markers of proximal tubular proteinuria and biomarkers of bone turnover were seen in those who switched to EVG/c/TAF/FTC. 17
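To put the high-risk renal subset analysis above in absolute terms, the risk difference and number needed to treat follow directly from the quoted percentages; this framing is ours, as the trials reported rates and a P value.

```python
# Worked example: >=25% eGFR decline in 24.9% of high-risk patients on
# EVG/c/TDF/FTC vs. 11.5% on EVG/c/TAF/FTC (figures quoted above).
p_tdf, p_taf = 0.249, 0.115
arr = p_tdf - p_taf   # absolute risk reduction: 0.134
nnt = 1 / arr         # ~7.5: treat about 8 such patients with the TAF regimen
                      # rather than TDF to avoid one eGFR-decline event
print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")  # ARR = 13.4%, NNT = 7.5
```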
# Dual-NRTI Choices
Note: In alphabetical order
# Abacavir/Lamivudine (ABC/3TC)
ABC plus 3TC has been studied in combination with EFV, several PIs, and DTG in ART-naive patients. 14

Adverse Effects:
Hypersensitivity Reactions:
- Clinically suspected hypersensitivity reactions (HSRs) were observed in 5% to 8% of individuals who started ABC in clinical trials conducted before the use of HLA-B*5701 testing. The risk of HSRs is highly associated with the presence of the HLA-B*5701 allele; approximately 50% of HLA-B*5701-positive patients will have an ABC-related HSR if given this drug. 21,22 HLA-B*5701 testing should precede use of ABC. ABC should not be given to patients who test positive for HLA-B*5701 and, based on a positive test result, ABC hypersensitivity should be noted on a patient's allergy list. Patients who are HLA-B*5701-negative are far less likely to experience an HSR, but they should be counseled about the symptoms of the reaction. Patients who discontinue ABC because of a suspected HSR should never be rechallenged, regardless of their HLA-B*5701 status.
Cardiovascular Risk:
- An association between ABC use and myocardial infarction (MI) was first reported in the D:A:D study. This large, multinational, observational study group found that recent (i.e., within 6 months) or current use of ABC was associated with an increased risk of MI, particularly in participants with pre-existing cardiac risk factors. 23,24
- Since the D:A:D report, several studies have evaluated the relationship between ABC therapy and cardiovascular events. Some studies have found an association; others, including an FDA meta-analysis of 26 randomized clinical trials that evaluated ABC, have not.
- No consensus has been reached on the association between ABC use and MI risk or the mechanism for such an association.
Other Factors and Considerations:
- ABC/3TC is available as a coformulated tablet and as a coformulated single-tablet regimen with DTG.
- ABC and 3TC are available separately in generic tablet formulations.
- ABC does not cause renal dysfunction and can be used instead of TDF in patients with underlying renal dysfunction or who are at high risk for renal effects. No dosage adjustment is required in patients with renal dysfunction.
The Panel's Recommendations:
- ABC should only be prescribed for patients who are HLA-B*5701 negative.
- On the basis of clinical trial safety and efficacy data, experience in clinical practice, and the availability of ABC/3TC as a component of coformulated products, the Panel classifies DTG/ABC/3TC as a Recommended regimen (AI) (see discussion of DTG in this section regarding the clinical efficacy data for ABC/3TC plus DTG).
- ABC/3TC use with EFV, ATV/r, ATV/c, or RAL is only recommended for patients with pretreatment HIV RNA <100,000 copies/mL. See Table 6 for more detailed recommendations on use of ABC/3TC with these drugs.
- ABC should be used with caution or avoided in patients with known high cardiovascular risk.
# Tenofovir Alafenamide/Emtricitabine (TAF/FTC)
TAF, an oral prodrug of TFV, is hydrolyzed to TFV primarily inside cells, where TFV is then converted to TFV-diphosphate (TFV-DP), the moiety that exerts activity as an NRTI. Unlike TDF, which readily converts to TFV in plasma after oral absorption, TAF remains relatively stable in plasma, resulting in lower plasma and higher intracellular TFV concentrations. After oral administration, TAF 25 mg resulted in plasma TFV concentrations that were 90% lower than those seen with TDF 300 mg. Intracellular TFV-DP concentrations, however, were substantially higher with TAF.
Adverse Effects:
- The potential for adverse kidney and bone effects is less likely with TAF than with TDF. In randomized controlled trials that compared TAF and TDF in treatment-naive or virally suppressed patients, TAF had more favorable effects on renal biomarkers and bone density than TDF.
- In the randomized controlled trials in ART-naive patients, as well as in switch studies, levels of LDL and HDL cholesterol and triglycerides were higher in patients receiving TAF than in patients receiving TDF. However, total cholesterol to HDL ratios did not differ between patients receiving TAF and TDF.
Other Factors and Considerations:
- TAF/FTC is available in fixed-dose drug combinations with EVG/c or RPV, allowing the regimens to be administered as a single pill taken once daily with food.
- TAF-containing compounds are approved for patients with eGFR ≥30 mL/min. Renal function, urine glucose, and urine protein should be assessed before initiating treatment with TAF and these assessments should be repeated periodically during treatment (see Laboratory Testing for Initial Assessment and Monitoring of HIV-Infected Patients on Antiretroviral Therapy).
- Both TAF and FTC are active against HBV. In patients with HIV/HBV coinfection, TAF/FTC may be used as the NRTI pair of the ART regimen because the drugs have activity against both viruses (see HBV/HIV Coinfection). 17

The Panel's Recommendation:
- On the basis of clinical trial safety and efficacy data, supportive bioequivalence data, 34 and the combination's availability as a component of coformulated products, the Panel considers TAF/FTC a Recommended NRTI combination for initial ART in treatment-naive patients when combined with DTG (AII), EVG/c (AI), RAL (AII), or DRV/r (AII).
# Tenofovir Disoproxil Fumarate/Emtricitabine (TDF/FTC)
TDF, with either 3TC or FTC, has been studied in combination with EFV, RPV, several boosted PIs, EVG/c, RAL, and DTG in randomized clinical trials.

Adverse Effects:
Renal Effects:
- New onset or worsening renal impairment has been associated with TDF use. 45,46 Risk factors may include advanced HIV disease, longer treatment history, low body weight (especially in females), 47 and pre-existing renal impairment. 48 Concomitant use of a PK-enhanced regimen (with a PI or EVG) can increase TDF concentrations; studies have suggested a greater risk of renal dysfunction when TDF is used in these regimens. 46
Bone Effects:
- While initiation of all NRTI-containing regimens has been associated with a decrease in BMD, the loss of BMD is greater with TDF-containing regimens. For example, in two randomized studies comparing TDF/FTC with ABC/3TC, participants receiving TDF/FTC experienced a significantly greater decline in BMD than ABC/3TC-treated participants. 54,55 BMD generally stabilizes following an early decline after ART initiation. Loss of BMD with TDF is also greater than with TAF (see above).
- Cases of osteomalacia associated with proximal renal tubulopathy have been reported with the use of TDF. 56

Other Factors and Considerations:
- TDF/FTC is available in fixed-dose drug combinations with EFV, EVG/c, and RPV, allowing the regimens to be administered as a single pill, taken once daily.
- Renal function, urine glucose, and urine protein should be assessed before initiating treatment with TDF and periodically during treatment (see Laboratory Testing for Initial Assessment and Monitoring of HIV-Infected Patients on Antiretroviral Therapy). In patients who have pre-existing renal insufficiency (CrCl <60 mL/min), 57 use of TDF should generally be avoided. If TDF is used, dosage adjustment is required if the patient's CrCl falls below 50 mL/min (see Appendix B, Table 7 for dosage recommendations; a sketch of the CrCl calculation follows this list).
- Both TDF and FTC are active against HBV. In patients with HIV/HBV coinfection, TDF/FTC may be used as the NRTI pair of the ART regimen because the drugs have activity against both viruses (also see HBV/HIV Coinfection section).
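The guidelines key TDF use and dosing to estimated creatinine clearance without specifying an estimating equation; ARV product labeling conventionally uses the Cockcroft-Gault formula. A minimal sketch under that assumption:

```python
# Cockcroft-Gault estimate of creatinine clearance (mL/min); the equation
# conventionally used in ARV product labeling (an assumption, not a statement
# of this guideline). Thresholds mirror the text: avoid TDF if CrCl <60 mL/min,
# and dose-adjust TDF if CrCl falls below 50 mL/min.

def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return 0.85 * crcl if female else crcl

crcl = cockcroft_gault(age_years=55, weight_kg=60,
                       serum_creatinine_mg_dl=1.4, female=True)
print(f"CrCl = {crcl:.0f} mL/min")  # 43 mL/min in this hypothetical patient
if crcl < 60:
    print("Avoid TDF if possible; below 50 mL/min, adjust the dose per "
          "Appendix B, Table 7.")
```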
The Panel's Recommendations:
- On the basis of clinical trial safety and efficacy data, long-term experience in clinical practice, and the combination's availability as a component of coformulated products, the Panel considers TDF/FTC a Recommended NRTI combination for initial ART in treatment-naive patients when combined with DTG, EVG/c, RAL, or DRV/r. See Table 6 for recommendations regarding use of TDF/FTC with other drugs.
- TDF should be used with caution or avoided in patients with renal disease and osteoporosis.
# INSTI-Based Regimens
# Summary
Three INSTIs (DTG, EVG, and RAL) are currently approved for HIV-infected, ARV-naive patients. DTG and EVG are currently available as components of one-tablet, once-daily complete regimens: DTG is coformulated with ABC/3TC; EVG is coformulated with a PK enhancer (COBI) and TAF/FTC or TDF/FTC. All INSTIs are generally well tolerated, though there are reports of insomnia in some patients. Depression and suicidal ideation, primarily in patients with a history of psychiatric illnesses, have rarely been reported in patients receiving INSTI-based regimens.
# Recommended Integrase Strand Transfer Inhibitor-Based Regimens
Note: In alphabetical order
# Dolutegravir (DTG)
DTG is an INSTI with a higher genetic barrier to resistance than EVG or RAL. In treatment-naive patients, DTG is given once daily, with or without food.
Efficacy in Clinical Trials:
The efficacy of DTG in treatment-naive patients has been evaluated in three fully powered clinical trials: two randomized double-blinded clinical trials and one randomized open-label clinical trial. In these three trials, DTG-based regimens were noninferior or superior to a comparator INSTI-, NNRTI-, or PI-based regimen. The primary efficacy endpoint in these clinical trials was the proportion of participants with plasma HIV RNA <50 copies/mL.
- The SPRING-2 trial compared DTG 50 mg once daily to RAL 400 mg twice daily. Each drug was administered in combination with an investigator-selected 2-NRTI regimen, either ABC/3TC or TDF/FTC, to 822 participants. At week 96, DTG was noninferior to RAL. 44
- The SINGLE trial compared DTG 50 mg once daily plus ABC/3TC to EFV/TDF/FTC in 833 participants. At week 48, DTG was superior to EFV, primarily because the study treatment discontinuation rate was higher in the EFV arm than in the DTG arm. 14 At week 144, DTG plus ABC/3TC remained superior to EFV/TDF/FTC. 58
- The FLAMINGO study, a randomized open-label clinical trial, compared DTG 50 mg once daily to DRV/r 800 mg/100 mg once daily, each in combination with investigator-selected ABC/3TC or TDF/FTC. At week 48, DTG was superior to DRV/r because of the higher rate of discontinuation in the DRV/r arm. 59,60 The difference in response rates favoring DTG was greater in patients with pretreatment HIV RNA levels >100,000 copies/mL. At week 96, DTG remained superior to DRV/r. 61

Adverse Effects:
- DTG is generally well tolerated. The most common adverse reactions of moderate to severe intensity with an incidence ≥2% in the clinical trials were insomnia and headache. Cases of HSRs were reported in <1% of trial participants.
Other Factors and Considerations:
- DTG decreases tubular secretion of creatinine without affecting glomerular function, with increases in serum creatinine observed within the first 4 weeks of treatment (mean increase in serum creatinine was 0.11 mg/dL after 48 weeks).
- DTG has few drug interactions. DTG increases metformin levels approximately 2-fold; close monitoring for metformin adverse effects is advisable. Rifampin decreases DTG levels; therefore, an increase in dosing of DTG to 50 mg twice daily is required.
- DTG absorption may be reduced when the ARV is coadministered with polyvalent cations (see Drug Interactions). DTG should be taken at least 2 hours before or 6 hours after cation-containing antacids or laxatives. Alternatively, DTG and supplements containing calcium or iron can be taken simultaneously with food (a sketch of this timing rule follows this list).
- Treatment-emergent mutations that confer DTG resistance have not been reported in patients receiving DTG for initial therapy, which suggests that DTG has a higher genetic barrier to resistance than other INSTIs.
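The separation rule above is a simple timing constraint; the short sketch below encodes it, assuming same-day clock-hour inputs. The function name and representation are illustrative, not from the guidelines.

```python
# Minimal encoding of the rule stated above: take DTG at least 2 hours BEFORE
# or at least 6 hours AFTER a polyvalent cation-containing antacid or
# laxative. Hours are same-day clock hours for simplicity.

def dtg_antacid_timing_ok(dtg_hour, antacid_hour):
    """True if the DTG dose satisfies the 2-h-before / 6-h-after rule."""
    taken_before = antacid_hour - dtg_hour >= 2   # DTG >=2 h before antacid
    taken_after = dtg_hour - antacid_hour >= 6    # DTG >=6 h after antacid
    return taken_before or taken_after

print(dtg_antacid_timing_ok(dtg_hour=8, antacid_hour=11))  # True: 3 h before
print(dtg_antacid_timing_ok(dtg_hour=8, antacid_hour=7))   # False: only 1 h after
```

Calcium- or iron-containing supplements are the exception noted in the list above: they may instead be taken at the same time as DTG with food.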
The Panel's Recommendation:
- On the basis of clinical trial data, the Panel categorizes DTG in combination with ABC/3TC (AI), TAF/FTC (AII), or TDF/FTC (AI) as a Recommended regimen in ART-naive patients.
# Elvitegravir (EVG)
EVG is available as a component of 2 fixed-dose combination products containing EVG, COBI, TDF, and FTC or EVG, COBI, TAF, and FTC. COBI is a specific, potent CYP3A inhibitor that has no activity against HIV. It acts as a PK enhancer of EVG, which allows for once daily dosing of the combination.
Efficacy in Clinical Trials:
- The efficacy of EVG/c/TDF/FTC in ARV-naive participants has been evaluated in two randomized, double-blind, active-controlled trials.
- At 144 weeks, EVG/c/TDF/FTC was noninferior to fixed-dose EFV/TDF/FTC. 62
- EVG/c/TDF/FTC was also found to be noninferior to ATV/r plus TDF/FTC. 63
- In a randomized, blinded trial performed in HIV-infected women, EVG/c/TDF/FTC had superior efficacy when compared to ATV/r plus TDF/FTC, in part because of a lower rate of treatment discontinuation. 10
- The efficacy of EVG/c/TAF/FTC in ARV-naive participants has been evaluated in two randomized, double-blind, controlled trials in adults with eGFR ≥50 mL/min. 4,6
- At 48 and 96 weeks, TAF was noninferior to TDF when both were combined with EVG/c/FTC (see details in NRTI discussion).
Adverse Effects:
- The most common adverse events reported with EVG/c/TDF/FTC were diarrhea, nausea, upper respiratory infection, and headache. 62,63
- The most common adverse events reported with EVG/c/TAF/FTC were nausea, diarrhea, headache, and fatigue. 64

Other Factors and Considerations:
- EVG is metabolized primarily by CYP3A enzymes; as a result, CYP3A inducers or inhibitors may alter EVG concentrations.
- Because COBI inhibits CYP3A, it interacts with a number of medications that are metabolized by this enzyme (see Drug Interactions). 65
- EVG plasma concentrations are lower when it is administered simultaneously with polyvalent cation-containing antacids or supplements (see Drug Interactions section). Separate EVG/c/TDF/FTC or EVG/c/TAF/FTC and polyvalent antacid administration by at least 2 hours; administer polyvalent cation-containing supplements at least 2 hours before or 6 hours after EVG dosing.
- COBI inhibits active tubular secretion of creatinine, resulting in increases in serum creatinine and a reduction in estimated CrCl without reducing glomerular function. 66 Patients with a confirmed increase in serum creatinine greater than 0.4 mg/dL from baseline while taking EVG/c/TDF/FTC should be closely monitored and evaluated for evidence of TDF-related proximal renal tubulopathy. 53
- EVG/c/TDF/FTC is not recommended for patients with pre-treatment estimated CrCl <70 mL/min. 53
- EVG/c/TAF/FTC is not recommended for patients with pre-treatment estimated CrCl <30 mL/min.
- At the time of virologic failure, INSTI-associated mutations were detected in some EVG/c/TDF/FTC-treated patients whose therapy failed. 62,63 These mutations conferred cross-resistance to RAL, with most retaining susceptibility to DTG.
The Panel's Recommendation:
- On the basis of the above factors, the Panel classifies EVG/c/TAF/FTC as a Recommended initial regimen for patients with estimated CrCl ≥30 mL/min (AI) and EVG/c/TDF/FTC for patients with estimated CrCl ≥70 mL/min (AI).
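The two CrCl cutoffs in this recommendation can be read as a simple eligibility filter, sketched below for illustration; the function name is hypothetical, and the cutoffs are exactly those stated above.

```python
# Which EVG/c coformulations remain options at a given estimated CrCl, per
# the cutoffs stated above (TDF version >=70 mL/min; TAF version >=30 mL/min).
# Illustrative only.

def evg_c_options(crcl_ml_min):
    options = []
    if crcl_ml_min >= 70:
        options.append("EVG/c/TDF/FTC")
    if crcl_ml_min >= 30:
        options.append("EVG/c/TAF/FTC")
    return options

print(evg_c_options(80))  # ['EVG/c/TDF/FTC', 'EVG/c/TAF/FTC']
print(evg_c_options(50))  # ['EVG/c/TAF/FTC']
print(evg_c_options(25))  # []
```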
# Raltegravir (RAL)
RAL was the first INSTI approved for use in both ARV-naive and ARV-experienced patients.
Efficacy in Clinical Trials:
- The efficacy of RAL (with either TDF/FTC or ABC/3TC) as initial therapy has been evaluated in two randomized, double-blinded, controlled clinical trials, and a third open-label, randomized trial.
- STARTMRK compared RAL 400 mg twice daily to EFV 600 mg once daily, each in combination with TDF/FTC. RAL was noninferior to EFV at 48 weeks. 40 RAL was superior to EFV at 4 and 5 years, 43,67 in part because of more frequent discontinuations due to adverse events in the EFV group than in the RAL group.
- The SPRING-2 trial compared DTG 50 mg once daily to RAL 400 mg twice daily, each in combination with investigator-selected ABC/3TC or TDF/FTC. At week 96, DTG was noninferior to RAL.
- The SPRING-2 trial also provided non-randomized data on the efficacy of RAL plus ABC/3TC. In this trial, 164 participants (39 and 125 participants with baseline viral loads ≥100,000 copies/mL and <100,000 copies/mL, respectively) received RAL in combination with ABC/3TC. After 96 weeks, there was no difference in virologic response between the ABC/3TC and TDF/FTC groups when RAL was given as the third drug. 44
- ACTG A5257, a large randomized open-label trial, compared 3 NNRTI-sparing regimens containing RAL, ATV/r, or DRV/r, each given with TDF/FTC. At week 96, all 3 regimens had similar virologic efficacy, but RAL was superior to both ATV/r and DRV/r for the combined endpoints of virologic efficacy and tolerability. Participants had greater increases in lipid levels in the ritonavir-boosted protease inhibitor (PI/r) arms than in the RAL arm, and bone mineral density decreased to a greater extent in participants in the PI/r arms than in participants in the RAL arm. 8

Adverse Effects:
- RAL use has been associated with creatine kinase elevations. Myositis and rhabdomyolysis have been reported.
- Rare cases of severe skin reactions and systemic hypersensitivity reactions in patients who received RAL have been reported during post-marketing surveillance. 68

Other Factors and Considerations:
- RAL must be administered twice daily, a potential disadvantage when comparing RAL-based treatment with other Recommended regimens.
- Coadministration of RAL with aluminum- and/or magnesium-containing antacids can reduce absorption of RAL and is not recommended. RAL may be coadministered with calcium carbonate-containing antacids. Polyvalent cation-containing supplements may also reduce absorption of RAL; thus, RAL should be given at least 2 hours before or 6 hours after cation-containing supplements.
- RAL has a lower genetic barrier to resistance than RTV-boosted PIs and DTG.
The Panel's Recommendations:
- On the basis of these data and long-term clinical experience with RAL, the Panel considers RAL plus TDF/FTC (AI) or TAF/FTC (AII) as a Recommended regimen in ARV-naive patients.
- Because few patients have received RAL plus ABC/3TC in clinical trials or practice and there has not been a randomized trial comparing ABC/3TC plus RAL to TDF/FTC plus RAL, the Panel categorizes RAL plus ABC/3TC as an Other regimen option (BII).

# NNRTI-Based Regimens

# Summary

2. EFV is less well tolerated than the Recommended regimens; and
3. In a randomized controlled trial that compared RPV and EFV, the rate of virologic failure among participants with high pre-treatment viral load (>100,000 copies/mL) or low CD4 cell count (<200 cells/mm3) was higher among the RPV-treated participants.
# Efavirenz (EFV)
Efficacy in Clinical Trials:
Large randomized, controlled trials and cohort studies in ART-naive patients have demonstrated potent and durable viral suppression in patients treated with EFV plus two NRTIs. In clinical trials, EFV-based regimens in ART-naive patients have demonstrated superiority or noninferiority to several comparator regimens.
- In ACTG 5202, EFV was comparable to ATV/r when each was given with either TDF/FTC or ABC/3TC. 72
- In the ECHO and THRIVE studies, EFV was noninferior to RPV, with less virologic failure. However, EFV caused more discontinuations due to adverse events. The virologic advantage of EFV was most notable in participants with pre-ART viral loads >100,000 copies/mL, and NRTI and NNRTI resistance was more frequent with RPV failure. 73
- In the GS 102 study, EFV/TDF/FTC was noninferior to EVG/c/TDF/FTC. 62

Some regimens have demonstrated superiority to EFV, based primarily on fewer discontinuations because of adverse events:
- In the SINGLE trial, a DTG-based regimen was superior to EFV at the primary endpoint of viral suppression at Week 48. 14
- In the STARTMRK trial, RAL was noninferior to EFV at 48 weeks. 40 RAL was superior to EFV at 4 and 5 years, 43,67 in part because of more frequent discontinuations due to adverse events in the EFV group than in the RAL group.
- In the open-label STaR trial, participants with baseline viral loads ≤100,000 copies/mL had higher rates of treatment success on RPV than on EFV. 74

ENCORE 1, a multinational randomized placebo-controlled trial, compared 2 once-daily doses of EFV (combined with TDF/FTC): EFV 600 mg (standard dose) versus EFV 400 mg (reduced dose). At 96 weeks, EFV 400 mg was noninferior to EFV 600 mg for rate of viral suppression. 75 Study drug-related adverse events were less frequent in the EFV 400 mg group than in the 600 mg group. Although there were fewer self-reported CNS events in the 400 mg group, the groups had similar rates of psychiatric events. Unlike the 600 mg dose of EFV, the 400 mg dose is not approved for initial treatment and is not coformulated in a fixed-dose combination tablet.
Adverse Effects:
- EFV can cause CNS side effects (eg, abnormal dreams, dizziness, headache, depression), which resolve over a period of days to weeks in most patients. However, subtler, long-term neuropsychiatric effects can occur. An analysis of 4 AIDS Clinical Trials Group (ACTG) comparative trials showed a higher rate of suicidality (ie, reported suicidal ideation or attempted or completed suicide) among EFV-treated patients than among patients taking comparator regimens. 76 This association, however, was not found in analyses of 3 large observational cohorts. 77,78
- EFV may cause elevation in LDL cholesterol and triglycerides.
Other Factors and Considerations:
- EFV is formulated both as a single-drug tablet and in a fixed-dose combination tablet of EFV/TDF/FTC that allows for once-daily dosing.
- EFV is a substrate of CYP3A4 and an inducer of CYP3A4 and 2B6 and therefore may potentially interact with other drugs using the same pathways (see Tables 19b, 20a, and 20b).
- EFV has been associated with CNS birth defects in nonhuman primates, and cases of neural tube defects have been reported after first trimester exposure in humans. 79 Alternative regimens should be considered in women who are planning to become pregnant or who are sexually active and not using effective contraception. Because the risk of neural tube defects is restricted to the first 5 to 6 weeks of pregnancy, before pregnancy is usually recognized, a suppressive EFV-based regimen can be continued in pregnant women who present for antenatal care in the first trimester, or may be initiated after the first trimester (see Perinatal Guidelines).
The Panel's Recommendations:
- Given the availability of regimens with fewer treatment-limiting adverse events and also with noninferior or superior efficacy, the Panel classifies EFV/TDF/FTC (BI) or EFV plus TAF/FTC (BII) as an Alternative regimen for ART-naive patients.
- Given virologic and pharmacogenetic parameters that limit its use in some patients, the Panel recommends EFV with ABC/3TC as an Other regimen, and only for patients with a pre-ART viral load <100,000 copies/mL and negative HLA-B*5701 status (see discussion in ABC/3TC section) (CI).
- EFV at a reduced dose has not been studied in the U.S. population. The Panel cannot recommend use of reduced-dose EFV.
# Rilpivirine (RPV)
RPV is an NNRTI approved for use in combination with NRTIs for ART-naive patients with pre-treatment viral loads <100,000 copies/mL.
# Efficacy in Clinical Trials:
Two Phase 3 randomized, double-blinded clinical trials, ECHO and THRIVE, compared RPV and EFV, each combined with 2 NRTIs. 73 At 96 weeks, the following findings were reported:
- RPV was noninferior to EFV overall.
- Among participants with a pre-ART viral load >100,000 copies/mL, more RPV-treated than EFV-treated participants experienced virologic failure. Moreover, in this subgroup of participants with virologic failure, NNRTI and NRTI resistance was more frequently identified in those treated with RPV.
- Among the RPV-treated participants, the rate of virologic failure was greater in those with pre-treatment CD4 counts <200 cells/mm3 than in those with CD4 counts ≥200 cells/mm3.
STaR, a Phase 3b, open-label study, compared the fixed-dose combinations of RPV/TDF/FTC and EFV/TDF/FTC in 786 treatment-naive patients. At 96 weeks, the following key findings were reported: 74
- RPV was noninferior to EFV overall.
- RPV was superior to EFV in patients with pre-ART viral loads ≤100,000 copies/mL and noninferior in those with pre-ART viral loads >100,000 copies/mL. In patients with pre-ART viral loads >500,000 copies/mL, virologic failure was more common in RPV-treated patients than in EFV-treated patients.
- There were more participants with emergent resistance in the RPV/FTC/TDF arm than in the EFV/FTC/TDF arm (4% vs. 1%, respectively).
The fixed-dose combination tablet of RPV/TAF/FTC was approved by the FDA based on results from a bioequivalence study. In this study, plasma concentrations of RPV, FTC, and TAF were similar in participants who received the single-tablet formulation and in those who received the reference drugs (the RPV tablet alone and TAF 10 mg/FTC coadministered with EVG/c as a fixed-dose combination), which have demonstrated safety and efficacy in clinical trials. 34

Adverse Effects:
- RPV is generally well tolerated. In the ECHO, THRIVE, and STaR trials, fewer CNS adverse events (eg, abnormal dreams, dizziness, psychiatric side effects), skin rash, and dyslipidemia were reported in the RPV arms than the EFV arms, and fewer patients in the RPV arms discontinued therapy due to adverse events. However, up to 9% of clinical trial participants experienced depressive disorders, including approximately 1% of participants who had suicidal thoughts or attempted suicide. Patients with severe depressive symptoms should be evaluated to assess whether symptoms may be due to RPV and if the risks of continued treatment outweigh the benefits.
Other Factors and Considerations:
- RPV is formulated both as a single-drug tablet and in fixed-dose combination tablets with TAF/FTC and with TDF/FTC. Among available single pill regimens, RPV/TAF/FTC is the smallest tablet.
- RPV/TAF/FTC and RPV/TDF/FTC are given once daily, and must be administered with a meal (at least 390 kcal).
- The oral drug absorption of RPV can be significantly reduced in the presence of acid-lowering agents. RPV is contraindicated in patients who are receiving proton pump inhibitors, and should be used with caution in those receiving H2 antagonists or antacids (see Drug Interactions for dosing recommendations).
- RPV is primarily metabolized in the liver by the CYP3A enzyme; its plasma concentration may be affected in the presence of CYP3A inhibitors or inducers (see Drug Interactions).
- At doses higher than the approved 25-mg dose, RPV may cause QTc interval prolongation. RPV should be used with caution when coadministered with a drug known to increase the risk of Torsades de Pointes.
The Panel's Recommendations:
- Given the availability of other effective regimens that do not have virologic and immunologic prerequisites to initiate treatment, the Panel recommends RPV/TDF/FTC and RPV/TAF/FTC as Alternative regimens, for use only in patients with a pretreatment viral load <100,000 copies/mL and a CD4 count >200 cells/mm3.
- Data on RPV with ABC/3TC are insufficient to consider recommending this regimen as a Recommended, Alternative, or Other regimen (see Table 8 and Appendix B, Table 3).
# PI-Based Regimens
# Summary
PIs that are recommended for use in ART-naive patients should have proven virologic efficacy, once-daily dosing, a low pill count, and good tolerability. On the basis of these criteria, the Panel considers once-daily DRV/r plus TDF/FTC as a Recommended PI regimen (AI). In a large, randomized controlled trial comparing DRV/r, ATV/r, and RAL, all in combination with TDF/FTC, all three regimens achieved similar virologic suppression rates; however, the proportion of patients who discontinued their assigned treatment because of adverse effects was greater in the ATV/r arm than in the other two arms. 8 Because of the higher rate of adverse effects, the Panel now classifies regimens containing ATV/r or ATV/c as Alternative regimens (BI). DRV/c-based regimens are considered Alternative PI regimens because data only exist from single-arm clinical trials and bioequivalence studies, rather than comparative clinical trials (BII).
A number of metabolic abnormalities, including dyslipidemia and insulin resistance, have been associated with PI use. The currently available PIs differ in their propensity to cause these metabolic complications, which also depends on the dose of RTV used as a PK-enhancing agent. Two large observational cohort studies suggest that LPV/r, IDV, FPV, or FPV/r may be associated with increased rates of MI or stroke. 24,30 This association was not seen with ATV. 82 Because of the limited number of patients receiving DRV/r, this boosted PI was not included in the analysis of the 2 studies.
LPV/r has twice the daily dose of RTV as other PI/r regimens and is associated with more metabolic complications and gastrointestinal side effects than PK-enhanced ATV or DRV. The Panel no longer recommends LPV/r plus 2 NRTIs as a regimen for initial therapy, given the availability of other PIs coformulated with PK enhancers that can be given once daily and the accumulation of experience with other classes of ART regimens with fewer toxicities. LPV/r may remain an Alternative option for HIV-infected pregnant women given experience in clinical trials and clinical practice. For more detailed recommendations on ARV choices and dosing in HIV-infected pregnant women, refer to the Perinatal Guidelines. LPV/r plus 3TC is an Other regimen option for patients who cannot use ABC, TAF, or TDF. Compared to other PIs, FPV/r, unboosted ATV, and SQV/r have disadvantages such as greater pill burden, lower efficacy, or increased toxicity, and thus are not included as options for initial therapy. Nonetheless, patients who are doing well on regimens containing these PIs should not necessarily be switched to other agents.

# Darunavir/Ritonavir (DRV/r)

Efficacy in Clinical Trials:

- The ARTEMIS study compared DRV/r (800/100 mg once daily) with LPV/r (800/200 mg once daily or 400/100 mg twice daily), both in combination with TDF/FTC, in a randomized, open-label, noninferiority trial. DRV/r was noninferior to LPV/r at week 48, 38 and superior at week 192. 83 Among participants with baseline HIV RNA levels >100,000 copies/mL, virologic response rates were lower in the LPV/r arm than in the DRV/r arm.
- The FLAMINGO study compared DRV/r with DTG, each in combination with 2 NRTIs, in 488 ART-naive participants. The rate of virologic suppression at week 96 was significantly greater among those who received DTG than among those who received DRV/r. The excess failure observed in the DRV/r group was primarily related to a higher rate of virologic failure among those with a viral load >100,000 copies/mL and secondarily due to more drug discontinuations in the DRV/r group. 9
- ACTG A5257, a large randomized open-label trial, compared ATV/r with DRV/r or RAL, each given with TDF/FTC. The trial showed similar virologic efficacy for DRV/r, ATV/r, and RAL, but more participants in the ATV/r group discontinued randomized treatment because of adverse events. 8
- A small retrospective study that followed participants for 48 weeks suggested that DRV/r plus ABC/3TC may be effective in treatment-naive patients. 84

Adverse Effects:
- Patients starting DRV/r may develop a skin rash, which is usually mild to moderately severe and self-limited. Treatment discontinuation is necessary on rare occasions when severe rash with fever or elevated transaminases occurs.
- ACTG A5257 showed similar lipid changes in participants in the ATV/r and DRV/r arms. BMD decreased to a greater extent in participants in the ATV/r and DRV/r arms than in participants in the RAL arm. 8 The likelihood of developing metabolic syndrome was equivalent between the three arms, although a larger increase in waist circumference was observed in participants assigned to the RAL arm than in those in the DRV/r arm at 96 weeks (P ≤ 0.02). 85

Other Factors and Considerations:
- DRV/r is administered once daily with food in treatment-naive patients.
- DRV has a sulfonamide moiety, and should be used with caution in patients with severe sulfonamide allergies. In clinical trials, the incidence and severity of rash were similar in participants who did or did not have a history of sulfonamide allergy. Most patients with sulfonamide allergy are able to tolerate DRV.
- DRV/r is a potent CYP3A4 inhibitor, and may lead to significant interactions with other medications metabolized through this same pathway (see Drug Interactions).
The Panel's Recommendation:
- On the basis of efficacy and safety data from clinical trials and clinical experience, the Panel classifies DRV/r with TDF/FTC (AI) or TAF/FTC (AII) as a Recommended regimen. DRV/r with ABC/3TC is considered an Alternative regimen because there are fewer studies to support its use (BII).

# Atazanavir/Cobicistat or Atazanavir/Ritonavir (ATV/c or ATV/r)

Efficacy in Clinical Trials:

- The ACTG A5202 study compared open-label ATV/r and EFV, each given in combination with placebo-controlled TDF/FTC or ABC/3TC. Efficacy was similar in the ATV/r and EFV groups. 72 In a separate analysis, women assigned to receive ATV/r were found to have a higher risk of virologic failure than women assigned to receive EFV or men assigned to receive ATV/r. 87
- In a study comparing ATV/r plus TDF/FTC to EVG/c/TDF/FTC, virologic suppression rates through 144 weeks were similar in the two groups. 63
- In ACTG A5257, a significantly higher proportion of patients in the ATV/r arm discontinued randomized treatment because of adverse events, mostly for elevated indirect bilirubin/jaundice or gastrointestinal toxicities. Lipid changes in participants in the ATV/r and DRV/r arms were similar. BMD decreased to a greater extent in participants in the ATV/r and DRV/r arms than in participants in the RAL arm. 8
- In the Gilead Study 114, all patients received TDF/FTC and ATV, and were randomized to receive either RTV or COBI as the PK enhancer. Both RTV and COBI were given as a separate pill with matching placebos. 88 Through 144 weeks, the percentage of patients who achieved virologic suppression was similar in both study arms. Rates of adverse events leading to treatment discontinuation and changes in serum creatinine and indirect bilirubin levels were comparable. 89

Adverse Effects:
- The main adverse effect associated with ATV/c or ATV/r is reversible indirect hyperbilirubinemia, with or without jaundice or scleral icterus, but without concomitant hepatic transaminase elevations. The risk for treatment-limiting indirect hyperbilirubinemia is greatest for patients who carry two UGT1A1 decreased-function alleles. 90
- Nephrolithiasis, nephrotoxicity, 94 and cholelithiasis 95 have also been reported in patients who received ATV, with or without RTV.
- Both ATV/c and ATV/r can cause gastrointestinal side effects including diarrhea.
Other Factors and Considerations:
- ATV/c and ATV/r are dosed once daily and with food.
- ATV requires an acidic gastric pH for dissolution. As a result, concomitant use of drugs that raise gastric pH (eg, antacids, H2 antagonists, and particularly proton pump inhibitors) may impair absorption of ATV. Table 19a provides recommendations for use of ATV/c or ATV/r with these agents.
- ATV/c and ATV/r are potent CYP3A4 inhibitors and may have significant interactions with other medications that are metabolized through this same pathway (see Drug Interactions).
The Panel's Recommendations:
- On the basis of clinical trial safety and efficacy data, the Panel classifies ATV/r and ATV/c plus TAF/FTC (BII) or TDF/FTC (BI) as Alternative regimens for ART-naive patients regardless of pretreatment HIV RNA.
- The Panel recommends against the use of ATV/r or ATV/c plus ABC/3TC in patients with pre-ART HIV-1 RNA >100,000 copies/mL, given the inferior virologic response seen in patients with a high baseline viral load on ATV/r plus ABC/3TC. ATV/r or ATV/c may be used with ABC/3TC in patients whose pre-ART HIV RNA is <100,000 copies/mL (CI). Because of these limitations, these regimens are classified in the Other category.
- ATV/c plus TDF/FTC is not recommended for patients with CrCl <70 mL/min, whereas ATV/c plus TAF/FTC is not recommended for patients with CrCl <30 mL/min.
# Darunavir/Cobicistat (DRV/c)
The combination of DRV 800 mg with COBI 150 mg is bioequivalent to DRV 800 mg with RTV 100 mg in healthy volunteers, based on the maximum concentration and area under the concentration-time curve for each boosted drug. 96 Because the minimum concentration (Cmin) of DRV combined with COBI was 31% lower than that with DRV combined with RTV, bioequivalence for the Cmin was not achieved. 97

Efficacy in Clinical Trial:
- In a single-arm trial of treatment-naive (94%) and treatment-experienced (6%) patients, the coformulated DRV/c 800 mg/150 mg tablet was evaluated in combination with an investigator-selected NRTI/NtRTI pair (99% of participants were given TDF/FTC). At week 48, 81% of participants achieved HIV RNA <50 copies/mL; 5% of participants discontinued treatment because of adverse events. 98

Adverse Effects:
- In the single-arm trial, the most common treatment-emergent adverse events were diarrhea, nausea, and headache.
Other Factors:
- DRV 800 mg and COBI 150 mg are available as a coformulated tablet.
The Panel's Recommendations:
- On the basis of the bioequivalence study and the single arm trial, the Panel recommends DRV/c plus TAF/FTC or TDF/FTC (BII) and DRV/c plus ABC/3TC (BIII) as Alternative regimens for ART-naive patients.
- DRV/c plus TDF/FTC is not recommended for patients with CrCl <70 mL/min, whereas DRV/c plus TAF/FTC is not recommended for patients with CrCl <30 mL/min.
# Other Antiretroviral Regimens for Initial Therapy When Abacavir, Tenofovir Alafenamide, or Tenofovir Disoproxil Fumarate Cannot Be Used
All currently Recommended and Alternative regimens consist of two NRTIs plus a third active drug. This strategy, however, may not be possible or optimal in all patients. In some situations it may be necessary to avoid ABC, TAF, and TDF, such as in a patient who is HLA-B*5701 positive or at high risk of cardiovascular disease and who also has significant renal impairment.
Based on these concerns, several clinical studies have evaluated strategies using initial regimens that avoid 2 NRTIs or the NRTI drug class altogether. Many of these studies were not fully powered to permit comparisons, and regimens from these studies will not be discussed further. However, there are now sufficient data on two regimens (DRV/r plus RAL and LPV/r plus 3TC) to warrant including them as options when ABC, TAF, or TDF cannot be used.
# Darunavir/Ritonavir plus Raltegravir (DRV/r plus RAL)
- In the NEAT/ANRS 143 study, 805 treatment-naive participants were randomized to receive either twice-daily RAL or once-daily TDF/FTC, both with DRV/r (800 mg/100 mg once daily). At week 96, DRV/r plus RAL was noninferior to DRV/r plus TDF/FTC based on the primary endpoint of proportion of patients with virologic or clinical failure. Among those with baseline CD4 cell counts <200 cells/mm3, failure rates were higher in the DRV/r plus RAL arm. Higher failure rates among participants with baseline HIV RNA >100,000 copies/mL were also seen in 2 smaller studies of DRV/r plus RAL. 100,101

The Panel's Recommendation:
- On the basis of these study results, the Panel recommends that DRV/r plus RAL be considered for use only in patients with HIV RNA <100,000 copies/mL and CD4 cell count >200 cells/mm3, and only in those patients who cannot take ABC, TAF, or TDF (CI).
# Lopinavir/Ritonavir plus Lamivudine (LPV/r plus 3TC)
- In the GARDEL study, 426 ART-naive patients were randomized to receive twice-daily LPV/r plus either open-label 3TC (twice daily) or 2 NRTIs selected by the study investigators. At 48 weeks, a similar number of patients in each arm had HIV RNA <50 copies/mL, meeting the study's noninferiority criteria.
The LPV/r plus 3TC regimen was better tolerated than the LPV/r plus 2-NRTI regimen. 102
- Important limitations of the GARDEL study are the use of LPV/r, twice-daily dosing, and the relatively high pill burden (total of 6 tablets per day). LPV/r is not considered a Recommended or Alternative initial PI because of its unfavorable adverse event and pill burden characteristics when compared to PK-enhanced ATV and DRV. Given the above limitations, the Panel recommends that LPV/r plus 3TC be considered for use only in patients who cannot take ABC, TAF, or TDF (CI).
In summary, the aggregate results from these two fully powered studies of NRTI-limiting regimens demonstrate that these initial strategies have significant deficiencies when compared to standard-of-care treatment approaches. In particular, these disadvantages are related to pill burden or dosing frequency.

Some antiretroviral (ARV) regimens or components are not generally recommended because of suboptimal antiviral potency, unacceptable toxicities, or pharmacologic concerns. These are summarized below.
# Antiretroviral Regimens Not Recommended
Monotherapy with a nucleoside reverse transcriptase inhibitor (NRTI). Single-NRTI therapy does not demonstrate potent and sustained antiviral activity and should not be used (AII). For prevention of mother-to-child transmission (PMTCT), zidovudine (ZDV) monotherapy is not recommended but might be considered in certain unusual circumstances in women with HIV RNA <1,000 copies/mL, although the use of a potent combination regimen is preferred (see Perinatal Guidelines 1 ).
Single-drug treatment regimens with a ritonavir (RTV)-boosted protease inhibitor (PI), either lopinavir (LPV), 2 atazanavir (ATV), 3 or darunavir (DRV), are under investigation with mixed results and cannot be recommended outside of a clinical trial at this time.
Dual-NRTI regimens. These regimens are not recommended because they have not demonstrated potent and sustained antiviral activity compared with triple-drug combination regimens (AI). 6

Triple-NRTI regimens. In general, triple-NRTI regimens other than abacavir/lamivudine/zidovudine (ABC/3TC/ZDV) (BI) and possibly lamivudine/zidovudine + tenofovir (3TC/ZDV + TDF) (BII) should not be used because of suboptimal virologic activity or lack of data (AI).
# Antiretroviral Components Not Recommended
Atazanavir (ATV) + indinavir (IDV). Both of these PIs can cause Grade 3 to 4 hyperbilirubinemia and jaundice. Additive adverse effects may be possible when these agents are used concomitantly. Therefore, these two PIs are not recommended for combined use (AIII).
# Didanosine (ddI) + stavudine (d4T).
The combined use of ddI and d4T as a dual-NRTI backbone can result in a high incidence of toxicities, particularly peripheral neuropathy, pancreatitis, and lactic acidosis. This combination has been implicated in the deaths of several HIV-infected pregnant women secondary to severe lactic acidosis with or without hepatic steatosis and pancreatitis. 14 Therefore, the combined use of ddI and d4T is not recommended (AII).
# Didanosine (ddI) + tenofovir (TDF).
Use of ddI + TDF may increase ddI concentrations 15 and lead to serious ddI-associated toxicities, including pancreatitis and lactic acidosis. These toxicities may be lessened by ddI dose reduction. The use of this combination has also been associated with immunologic nonresponse or CD4 cell decline despite viral suppression, high rates of early virologic failure, and rapid selection of resistance mutations. Because of these adverse outcomes, this dual-NRTI combination is not generally recommended (AII). Clinicians caring for patients who are clinically stable on regimens containing ddI + TDF should consider altering the NRTIs to avoid this combination.
Two-non-nucleoside reverse transcriptase inhibitor (2-NNRTI) combinations. In the 2NN trial, ARV-naive participants were randomized to receive once- or twice-daily nevirapine (NVP) versus efavirenz (EFV) versus EFV plus NVP, all combined with d4T and 3TC. 23 A higher frequency of clinical adverse events that led to treatment discontinuation was reported in participants randomized to the two-NNRTI arm. Both EFV and NVP may induce metabolism of etravirine (ETR), which leads to reduction in ETR drug exposure. 24 Based on these findings, the Panel does not recommend using two NNRTIs in combination in any regimen (AI).
Efavirenz (EFV) in first trimester of pregnancy and in women with significant childbearing potential.
EFV use was associated with significant teratogenic effects in nonhuman primates at drug exposures similar to those representing human exposure. Several cases of congenital anomalies have been reported after early human gestational exposure to EFV. EFV should be avoided in pregnancy, particularly during the first trimester, and in women of childbearing potential who are trying to conceive or who are not using effective and consistent contraception (AIII). If no other ARV options are available for the woman who is pregnant or at risk of becoming pregnant, the provider should consult with a clinician who has expertise in both HIV infection and pregnancy (see Perinatal Guidelines 1 ).
Emtricitabine (FTC) + lamivudine (3TC). Both of these drugs have similar resistance profiles and have minimal additive antiviral activity. Inhibition of intracellular phosphorylation may occur in vivo, as seen with other dual-cytidine analog combinations. 27 These two agents should not be used as a dual-NRTI combination (AIII).
Etravirine (ETR) + unboosted PI. ETR may induce the metabolism and significantly reduce the drug exposure of unboosted PIs. Appropriate doses of the PIs have not been established 24 (AII).
# Etravirine (ETR) + ritonavir (RTV)-boosted atazanavir (ATV) or fosamprenavir (FPV).
ETR may alter the concentrations of these PIs. Appropriate doses of the PIs have not been established 24 (AII).
# Etravirine (ETR) + ritonavir (RTV)-boosted tipranavir (TPV).
RTV-boosted TPV significantly reduces ETR concentrations. These drugs should not be coadministered 24 (AII).
Nevirapine (NVP) initiated in ARV-naive women with CD4 counts >250 cells/mm3 or in ARV-naive men with CD4 counts >400 cells/mm3. Greater risk of symptomatic hepatic events, including serious and life-threatening events, has been observed in these patient groups. NVP should not be initiated in these patients (BI) unless the benefit clearly outweighs the risk. Patients who experience CD4 count increases to levels above these thresholds as a result of antiretroviral therapy (ART) can be safely switched to NVP. 31

Unboosted darunavir (DRV), saquinavir (SQV), or tipranavir (TPV). The virologic benefit of these PIs has been demonstrated only when they were used with concomitant RTV. Therefore, use of these agents as part of a combination regimen without RTV is not recommended (AII).
# Stavudine (d4T) + zidovudine (ZDV).
These two NRTIs should not be used in combination because of antagonism demonstrated in vitro 32 and in vivo 33 (AII).
# 2-NNRTI combination (AI)
- When EFV is combined with NVP, a higher incidence of clinical adverse events is seen than with either an EFV- or NVP-based regimen.
- Both EFV and NVP may induce metabolism and may lead to reductions in ETR exposure; thus, they should not be used in combination with ETR.

# Virologic Failure

Antiretroviral (ARV) regimens currently recommended for initial therapy of HIV-infected patients have a high likelihood of achieving and maintaining plasma HIV RNA levels below the lower limits of detection (LLOD) of currently used assays (see What to Start). Patients on antiretroviral therapy (ART) who do not achieve this treatment goal or who experience virologic rebound often develop resistance mutations to one or more components of their regimen. Based on surveillance data for HIV patients in care in selected cities in the United States in 2009, an estimated 89% of the patients were receiving ART, of whom 72% had viral loads <200 copies/mL. 1 Many patients with detectable viral loads are non-adherent to treatment. Depending on their treatment histories, some of these patients may have minimal or no drug resistance; others may have extensive resistance. Managing patients with extensive resistance is complex and usually requires consultation with an HIV expert. This section of the guidelines defines virologic failure in patients on ART and discusses strategies to manage these individuals.
# Virologic Response Definitions
The following definitions are used in this section to describe the different levels of virologic response to ART.
# Panel's Recommendations
- Assessing and managing a patient experiencing failure of antiretroviral therapy (ART) is complex. Expert advice is critical and should be sought.
- Evaluation of virologic failure should include an assessment of adherence, drug-drug or drug-food interactions, drug tolerability, HIV RNA and CD4 T lymphocyte (CD4) cell count trends over time, treatment history, and prior and current drug-resistance testing results.
- Drug-resistance testing should be performed while the patient is taking the failing antiretroviral (ARV) regimen (AI) or within 4 weeks of treatment discontinuation (AII). Even if more than 4 weeks have elapsed since ARVs were discontinued, resistance testing, although it may not detect previously selected resistance mutations, can still provide useful information to guide therapy (CIII).
- The goal of treatment for ART-experienced patients with drug resistance who are experiencing virologic failure is to establish virologic suppression (i.e., HIV RNA below the lower limits of detection of currently used assays) (AI).
- A new regimen should include at least two, and preferably three, fully active agents (AI). A fully active agent is one that is expected to have uncompromised activity on the basis of the patient's treatment history and drug-resistance testing results and/or the drug's novel mechanism of action.
- In general, adding a single ARV agent to a virologically failing regimen is not recommended because this may risk the development of resistance to all drugs in the regimen (BII).
- For some highly ART-experienced patients, maximal virologic suppression is not possible. In this case, ART should be continued (AI) with regimens designed to minimize toxicity, preserve CD4 cell counts, and delay clinical progression.
- When it is not possible to construct a viable suppressive regimen for a patient with multidrug resistant HIV, the clinician should consider enrolling the patient in a clinical trial of investigational agents or contacting pharmaceutical companies that may have investigational agents available.
- Discontinuing or briefly interrupting therapy may lead to a rapid increase in HIV RNA and a decrease in CD4 cell count and increases the risk of clinical progression. Therefore, this strategy is not recommended in the setting of virologic failure (AI).

- Virologic suppression: A confirmed HIV RNA level below the LLOD of available assays
- Virologic failure: The inability to achieve or maintain suppression of viral replication to an HIV RNA level <200 copies/mL
- Incomplete virologic response: Two consecutive plasma HIV RNA levels ≥200 copies/mL after 24 weeks on an ARV regimen in a patient who has not yet had documented virologic suppression on this regimen. A patient's baseline HIV RNA level may affect the time course of response, and some regimens will take longer than others to suppress HIV RNA levels.
- Virologic rebound: Confirmed HIV RNA ≥200 copies/mL after virologic suppression
- Virologic blip: After virologic suppression, an isolated detectable HIV RNA level that is followed by a return to virologic suppression
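The definitions above amount to a small decision rule over successive HIV RNA measurements. The Python sketch below is a simplified illustration only: the 50 copies/mL LLOD is an assumed assay limit, the function name is hypothetical, and real classification also depends on confirmation and time on therapy.

```python
# Simplified mapping of the virologic response definitions above onto a pair
# of HIV RNA measurements (copies/mL). LLOD = 50 is an assumed assay limit.

LLOD = 50
FAILURE_CUTOFF = 200

def classify(previously_suppressed, current, repeat=None):
    """`current` is the latest HIV RNA; `repeat` is a confirmatory value."""
    if current < LLOD:
        return "virologic suppression (confirm with a repeat level)"
    if previously_suppressed:
        if repeat is not None and repeat < LLOD:
            return "virologic blip"
        if current >= FAILURE_CUTOFF and repeat is not None and repeat >= FAILURE_CUTOFF:
            return "virologic rebound"
        return "detectable: repeat measurement needed"
    if current >= FAILURE_CUTOFF and repeat is not None and repeat >= FAILURE_CUTOFF:
        return "incomplete virologic response (if >=24 weeks on the regimen)"
    return "detectable: repeat measurement needed"

print(classify(previously_suppressed=True, current=120, repeat=40))   # blip
print(classify(previously_suppressed=True, current=300, repeat=450))  # rebound
```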
# ART Treatment Goals and Virologic Responses
The goal of ART is to suppress HIV replication to a level below which drug-resistance mutations do not emerge. Although not conclusive, the evidence suggests that selection of drug-resistance mutations does not occur in patients with HIV RNA levels persistently suppressed to below the LLOD of current assays. 2 Viremia "blips" (defined by viral suppression followed by an isolated detectable HIV RNA level and a subsequent return to undetectable levels) are not usually associated with subsequent virologic failure. 3 In contrast, there is controversy regarding the clinical implications of persistent HIV RNA levels between the LLOD and 200 copies/mL. 7 Two other retrospective studies also support the supposition that virologic rebound is more likely to occur in patients with viral loads >200 copies/mL than in those with low-level viremia between 50 and 199 copies/mL. 8,9 However, other studies have suggested that viremia at this low level (<500 copies/mL) is associated with an increased risk of subsequent virologic failure. 14 Therefore, persistent plasma HIV RNA levels ≥200 copies/mL should be considered virologic failure.
# Causes of Virologic Failure
Virologic failure can occur for many reasons. Data from patient cohorts in the earlier era of combination ART suggested that suboptimal adherence and drug intolerance/toxicity accounted for 28% to 40% of virologic failure and regimen discontinuations. 15,16 Presence of pre-existing (transmitted) drug resistance may also be the cause of virologic failure. 17 Virologic failure may be associated with both patient- and regimen-related factors, as listed below:
Patient-Related Factors:
- Higher pretreatment or baseline HIV RNA level (depending on the specific regimen used)
- Lower pretreatment or nadir CD4 T lymphocyte (CD4) cell count (depending on the specific regimen used)
- Comorbidities that may affect adherence (e.g., active substance abuse, psychiatric disease, neurocognitive deficits)
- Presence of drug-resistant virus, either transmitted or acquired
# Management of Patients with Virologic Failure
# Assessment of Virologic Failure
If virologic failure is suspected or confirmed, a thorough assessment that includes consideration of the factors listed in the Causes of Virologic Failure section above is indicated. Often the causes of virologic failure can be identified, but in some cases, the causes are not obvious. It is important to distinguish among the causes of virologic failure because the approaches to subsequent therapy differ. The following potential causes of virologic failure should be explored in depth:
- Suboptimal Adherence. Assess the patient's adherence to the regimen. Identify and address the underlying cause(s) for incomplete adherence (e.g., drug intolerance, difficulty accessing medications, depression, active substance abuse) and, if possible, simplify the regimen (e.g., decrease pill count or dosing frequency). (See Adherence.)
- Medication Intolerance. Assess the patient's tolerance of the current regimen and the severity and duration of side effects, keeping in mind that even minor side effects can affect adherence. Management strategies to address intolerance in the absence of drug resistance may include symptomatic treatment or switching the poorly tolerated agent to another agent (see Exposure-Response Relationship and Therapeutic Drug Monitoring).
- Suspected Drug Resistance. Perform resistance testing while the patient is still taking the failing regimen or within 4 weeks of regimen discontinuation if the patient's plasma HIV RNA level is >1,000 copies/mL (AI), and possibly even if the level is between 500 and 1,000 copies/mL (BII). (See Drug-Resistance Testing.) In some patients, resistance testing should be considered even after treatment interruptions of more than 4 weeks, recognizing that the lack of evidence of resistance in this setting does not exclude the possibility that resistance mutations may be present at low levels (CIII). Evaluate the extent of drug resistance, taking into account the patient's past treatment history and prior resistance-test results. Drug resistance is cumulative; thus, all prior treatment history and resistance test results should be considered when evaluating resistance. Genotypic or phenotypic testing provides information relevant for selecting nucleoside reverse transcriptase inhibitors (NRTIs), NNRTIs, PIs, and INSTIs. Additional drug-resistance tests for patients experiencing failure on a fusion inhibitor (AII) and viral tropism tests for patients experiencing failure on a CCR5 antagonist (BIII) are also available. (See Drug-Resistance Testing.)
# Approach to Patients with Confirmed Virologic Failure
Once virologic failure is confirmed, every effort should be made to assess whether suboptimal adherence and drug-drug or drug-food interactions may be contributing to the inadequate virologic response to ART. If virologic failure persists after these issues have been adequately addressed, resistance testing should be performed, and the regimen should be changed as soon as possible to avoid progressive accumulation of resistance mutations. 18 In addition, several studies have shown that virologic responses to new regimens are greater in individuals with lower HIV RNA levels 10,19 and/or higher CD4 cell counts at the time of regimen changes. 10,19 Discontinuing or briefly interrupting therapy in a patient with viremia may lead to a rapid increase in HIV RNA and a decrease in CD4 cell count and increases the risk of clinical progression; 20,21 therefore, this strategy is not recommended (AI). See Discontinuation or Interruption of Antiretroviral Therapy.
Ideally, a new ARV regimen should contain at least two, and preferably three, fully active drugs whose predicted activity is based on the patient's drug treatment history, resistance testing, or the mechanistic action of a new drug class (AI). 10 Despite drug resistance, some ARV drugs (e.g., NRTIs) may contribute partial ARV activity to a regimen, 21 while other agents (e.g., enfuvirtide, NNRTIs, the INSTI raltegravir) likely will not. Using a "new" drug that a patient has never used previously does not ensure that the drug will be fully active; there is potential for cross-resistance, particularly among drugs from the same class. In addition, archived drug-resistance mutations may not be detected by standard drug-resistance tests, particularly if testing is performed when the patient is not taking the drug in question. Therefore, both treatment history and prior and current drug-resistance test results must be considered when designing a new regimen. When designing a new ART regimen, drug potency and viral susceptibility are more important factors to consider than the number of component drugs.
In general, patients who receive at least three active drugs, selected based on a review of the patient's treatment history and past and most current drug-resistance test results, experience better and more sustained virologic response than those receiving fewer active drugs in the regimen. 23,24,26,27,35,36 However, there are increasing data in treatment-naive and treatment-experienced patients showing that an active pharmacokinetically enhanced PI plus one other active drug or several partially active drugs will effectively reduce viral load in most patients. Active drugs are ARVs that, based on resistance test results and treatment history, are expected to have antiviral activity equivalent to that seen when there is no resistance to the specific drugs; ARVs with partial activity are those predicted to reduce HIV RNA but to a lesser extent than when there is no underlying drug resistance. The activity of a given drug must be uniquely defined for each patient. Active drugs may be newer members of existing drug classes that are active against HIV isolates that are resistant to older drugs in the same classes (e.g., the fusion inhibitor T20, or the CCR5 antagonist maraviroc in patients with no detectable CXCR4-using virus).
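A minimal sketch of the "at least two, preferably three, fully active agents" rule from the preceding paragraphs. The activity labels would come from treatment history plus resistance testing; here the caller supplies them, and all names are illustrative.

```python
# Counts fully active agents in a candidate regimen, per the design rule in
# the text: at least two, and preferably three, fully active drugs. Activity
# labels ('full', 'partial', 'none') are judged per patient from treatment
# history and resistance testing; they are inputs here, not computed.

def regimen_assessment(activity_by_drug):
    fully_active = sum(1 for a in activity_by_drug.values() if a == "full")
    if fully_active >= 3:
        return "meets the preferred goal: >=3 fully active agents"
    if fully_active == 2:
        return "meets the minimum goal: 2 fully active agents"
    return "insufficient fully active agents: redesign the regimen"

example = {"DRV/r": "full", "DTG": "full", "TDF": "partial", "FTC": "partial"}
print(regimen_assessment(example))  # meets the minimum goal
```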
In the presence of certain drug resistance mutations, some ARVs, such as DTG, ritonavir-boosted DRV, and ritonavir-boosted lopinavir (LPV/r), need to be given twice daily instead of once daily to achieve the higher drug concentrations necessary to be active against the less sensitive virus. 41,42

# Addressing Detectable Viral Load in Different Clinical Situations
- HIV RNA above the LLOD and <200 copies/mL. Confirm that levels remain above the LLOD and assess adherence, drug-drug interactions (including those with over-the-counter products and supplements), and drug-food interactions. Patients with HIV RNA typically below the LLOD who have transient increases in HIV RNA (i.e., blips) do not require a change in treatment (AII). 5 Although there is no consensus on how to manage patients with persistent HIV RNA levels above the LLOD and <200 copies/mL, the risk of emerging resistance is believed to be relatively low. Therefore, these patients should be maintained on their current regimens and have HIV RNA levels monitored at least every 3 months to assess the need for changes in ART in the future (AIII).
- HIV RNA ≥200 and <1,000 copies/mL. Persistent plasma HIV RNA levels in the 200 to 1,000 copies/mL range should be considered virologic failure, and resistance testing should be attempted, particularly if HIV RNA is >500 copies/mL. 8,9 When resistance testing can successfully be performed and no resistance is detected, manage the patient as outlined below in the section on HIV RNA >1,000 copies/mL and no drug resistance identified. If drug resistance is detected, manage the patient as outlined below in the section on HIV RNA >1,000 copies/mL and drug resistance identified. When resistance testing cannot be performed because of low-level viremia, the decision whether to empirically change ARVs should be made on a case-by-case basis.
- HIV RNA >1,000 copies/mL and no drug resistance identified. This scenario is almost always associated with suboptimal adherence. Conduct a thorough assessment to determine the level of adherence and identify any drug-drug and drug-food interactions. Consider the timing of the drug-resistance test (e.g., was the patient mostly or completely ART-non-adherent for more than 4 weeks before testing). If the current regimen is well tolerated and there are no significant drug-drug or drug-food interactions, it is reasonable to resume the same regimen. If the agents are poorly tolerated or there are important drug-drug or drug-food interactions, consider changing the regimen. Two to four weeks after treatment is resumed or started, repeat viral load testing; if viral load remains >500 copies/mL, perform genotypic testing to determine whether a resistant viral strain has emerged (CIII).
- HIV RNA >1,000 copies/mL and drug resistance identified. The availability of newer ARVs, including some with new mechanisms of action, makes it possible to suppress HIV RNA levels to below the LLOD in most of these patients. The options in this setting depend on the extent of drug resistance present and are addressed in the clinical scenarios outlined below. A schematic summary of the four viral load strata above follows this list.
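As referenced above, the four strata can be compressed into a single dispatch function. This is a schematic summary of the prose, not a substitute for it; the 50 copies/mL LLOD and the function name are illustrative assumptions.

```python
# Schematic dispatch over the four viral load strata discussed above.
# LLOD = 50 copies/mL is an assumed assay limit. The returned strings
# compress the prose guidance and are not a substitute for it.

LLOD = 50

def detectable_vl_action(hiv_rna, resistance_found=None):
    if hiv_rna < LLOD:
        return "below LLOD: routine monitoring"
    if hiv_rna < 200:
        return ("assess adherence and interactions; maintain regimen and "
                "monitor HIV RNA at least every 3 months")
    if hiv_rna < 1000:
        return ("treat as virologic failure; attempt resistance testing, "
                "particularly if HIV RNA >500 copies/mL")
    if resistance_found is None:
        return "perform resistance testing to guide next steps"
    if resistance_found:
        return "change the regimen based on resistance results and history"
    return ("likely suboptimal adherence; address causes, resume or continue "
            "therapy, and recheck viral load in 2 to 4 weeks")

print(detectable_vl_action(150))
print(detectable_vl_action(5000, resistance_found=False))
```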
# Management of Virologic Failure in Different Clinical Scenarios
First Regimen Failure
- Failing an NNRTI plus NRTI regimen. Patients failing an NNRTI-based regimen often have viral resistance to the NNRTI, with or without lamivudine (3TC) and emtricitabine (FTC) resistance. Although several options are available for these patients, several studies have explored the activity of a pharmacokinetically boosted PI with NRTIs or an INSTI. Two of the studies found that regimens containing a ritonavir-boosted PI (PI/r) combined with NRTIs were as active as regimens containing the PI/r combined with RAL. 43,45 Two studies also demonstrated higher rates of virologic suppression with use of a PI/r plus NRTIs than with a PI/r alone. 44,45 On the basis of these studies, even patients with NRTI resistance can often be treated with a pharmacokinetically boosted PI plus NRTIs or RAL (AI). Although LPV/r was used in these studies, it is likely that other pharmacokinetically boosted PIs would behave similarly. Although data are limited, the second-generation NNRTI ETR or the other INSTIs (i.e., elvitegravir or DTG) combined with a pharmacokinetically boosted PI may also be options in this setting.
- Failing a pharmacokinetically boosted PI plus NRTI regimen. In this scenario, most patients will have either no resistance or resistance limited to 3TC and FTC. 46,47 Failure in this setting is often attributed to poor adherence, drug-drug interactions, or drug-food interactions. A systematic review of multiple randomized trials of PI/r first-line failure showed that maintaining the same regimen, presumably with efforts to enhance adherence, is as effective as changing to new regimens with or without drugs from new classes. 48 In this setting, resistance testing should be performed along with an assessment of overall adherence and tolerability of the regimen. If the regimen is well tolerated and there are no concerns regarding drug-drug or drug-food interactions, the regimen can be continued with adherence support and viral monitoring. Alternatively, if poor tolerability or interactions may be contributing to virologic failure, the regimen can be modified to include a different pharmacokinetically boosted PI plus NRTIs-even if not all of the NRTIs are fully active-or to a new non-PI-based regimen that includes more than two fully active agents (AII).
# Second-Line Regimen Failure and Beyond
- Drug resistance with treatment options allowing for full virologic suppression. Depending on treatment history and drug-resistance data, one can often predict whether a fully active pharmacokinetically boosted PI will be available for the next regimen. For example, patients who have no documented PI resistance and have never been treated with an unboosted PI are likely to harbor virus that is fully susceptible to ARVs in the PI class. In this setting, viral suppression should be achievable using a pharmacokinetically boosted PI combined with either NRTIs or an INSTI, provided the virus is susceptible to the INSTI. If a fully active pharmacokinetically boosted PI is not an option, the new regimen should include at least two, and preferably three, fully active agents, if possible. Drugs to be included in the regimen should be selected based on the likelihood that they will be active, as determined by the patient's treatment history, past and present drug-resistance testing, and tropism testing if a CCR5 antagonist is being considered.
- Multidrug resistance without treatment options allowing for full virologic suppression. Use of currently available ARVs has resulted in a dramatic decline in the number of patients who have few treatment options because of multi-class drug resistance.50,51 Despite this progress, some patients have experienced toxicities and/or developed resistance to all or most currently available drugs. If maximal virologic suppression cannot be achieved, the goals of ART are to preserve immunologic function, prevent clinical progression, and minimize further resistance to drug classes that may eventually include new drugs important for future regimens. Consensus on the optimal management of these patients is lacking. If resistance to NNRTIs, T20, EVG, or RAL is identified, there is rarely a reason to continue these drugs, as there is little evidence that keeping them in the regimen delays disease progression (BII). Moreover, continuing these drugs, in particular INSTIs, may allow further accumulation of resistance and within-class cross-resistance that may limit future treatment options. It should be noted that even partial virologic suppression (a reduction in HIV RNA of >0.5 log10 copies/mL from baseline) correlates with clinical benefits (a worked example follows this item).50,52 Cohort studies provide evidence that continuing therapy, even in the presence of viremia and the absence of CD4 count increases, reduces the risk of disease progression.53 Other cohort studies suggest continued immunologic and clinical benefits with even modest reductions in HIV RNA levels.54,55 However, all these potential benefits must be balanced against the ongoing risk of accumulating additional resistance mutations. In general, adding a single fully active ARV to the regimen is not recommended because of the risk of rapid development of resistance (BII).
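The >0.5 log10 copies/mL criterion above is simple arithmetic on the log scale. A minimal worked example follows; the baseline and on-treatment values are hypothetical numbers chosen only to make the calculation concrete.

```python
import math

# Hypothetical numbers for illustration: a baseline of 100,000 copies/mL
# falling to 20,000 copies/mL on a partially active regimen.
baseline_copies_per_ml = 100_000
on_treatment_copies_per_ml = 20_000

reduction_log10 = (math.log10(baseline_copies_per_ml)
                   - math.log10(on_treatment_copies_per_ml))
print(f"{reduction_log10:.2f} log10 copies/mL reduction")  # ~0.70

# A reduction >0.5 log10 copies/mL from baseline has been associated
# with clinical benefit even without full suppression (see text).
```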
Patients with ongoing viremia who lack sufficient treatment options to construct a fully suppressive regimen may be candidates for research studies or expanded access programs, or may qualify for single-patient access to an investigational new drug, as specified in Food and Drug Administration regulations. Information about these programs may also be available from the sponsoring pharmaceutical manufacturer.
- Previously treated patients with suspected drug resistance who need care but present with limited information (i.e., incomplete or no self-reported history, medical records, or resistance-testing results). Every effort should be made to obtain the patient's medical records and prior drug-resistance test results; however, this may not always be possible. One strategy is to restart the most recent ARV regimen and assess drug resistance in 2 to 4 weeks to guide selection of the next regimen. Another strategy is to start two or three drugs predicted to be active on the basis of the patient's treatment history.
# Isolated Central Nervous System (CNS) Virologic Failure and New Onset Neurologic Symptoms
Presentation with new-onset CNS signs and symptoms has been reported as a rare form of virologic failure. These patients present with new, usually subacute, neurological symptoms associated with breakthrough of HIV infection within the CNS compartment despite plasma HIV RNA suppression.56,57 Clinical evaluation frequently shows abnormalities on MRI brain imaging and abnormal cerebrospinal fluid (CSF) findings with characteristic lymphocytic pleocytosis. When available, measurement of CSF HIV RNA shows higher concentrations in the CSF than in plasma and, in most patients, evidence of drug-resistant CSF virus. Drug-resistance testing of HIV in CSF, if available, can be used to guide changes in the treatment regimen according to the principles outlined above for plasma HIV RNA resistance (CIII). In these patients, it may be useful to consider CNS pharmacokinetics in drug selection (CIII). If CSF HIV resistance testing is not available, the regimen may be changed based on the patient's treatment history or on predicted drug penetration into the CNS (CIII).58-60 This "neurosymptomatic" CNS viral escape should be distinguished from: (1) other CNS infections that can induce a transient increase in CSF HIV RNA (e.g., herpes zoster61); (2) incidental detection of asymptomatic, mild CSF HIV RNA elevation, likely equivalent to plasma blips;62 and (3) the relatively common chronic, usually mild, neurocognitive impairment seen in HIV-infected patients without evidence of CNS viral breakthrough.63 None of these latter conditions currently warrants a change in ART.64
# Summary
In summary, the management of treatment-experienced patients with virologic failure often requires expert advice to construct virologically suppressive regimens. Before modifying a regimen, it is critical to carefully evaluate the cause(s) of virologic failure, including incomplete adherence, poor tolerability, and drug and food interactions, as well as to review HIV RNA and CD4 cell count changes over time, treatment history, and prior drug-resistance test results.

Despite marked improvements in antiretroviral treatment (ART), morbidity and mortality in HIV-infected individuals continue to be greater than in the general population, particularly when ART is delayed until advanced disease stages. These morbidities include cardiovascular disease, many non-AIDS cancers, non-AIDS infections, chronic obstructive pulmonary disease, osteoporosis, type II diabetes, thromboembolic disease, liver disease, renal disease, neurocognitive dysfunction, and frailty.1 Although health-related behaviors and toxicities of antiretroviral (ARV) drugs may contribute to this increased risk of illness and death, poor CD4 T lymphocyte (CD4) cell recovery, persistent immune activation, and inflammation likely also contribute.
# Poor CD4 Cell Recovery
As long as ART-mediated viral suppression is maintained, peripheral blood CD4 cell counts in most HIV-infected individuals will continue to increase for at least a decade. The rate of CD4 cell recovery is typically most rapid in the first 3 months of suppressive ART, followed by more gradual increases over time. If ART-mediated viral suppression is maintained, most individuals will eventually recover CD4 counts in the normal range (>500 cells/mm3); however, approximately 15% to 20% of individuals who initiate ART at very low CD4 counts (<200 cells/mm3) may plateau at abnormally low CD4 cell counts. Early initiation of ART in recently HIV-infected individuals likely provides the best opportunity for maximal CD4 cell recovery.6 Persistently low CD4 cell counts despite ART-mediated viral suppression are associated with increased risk of morbidity and mortality. For example, HIV-infected individuals with CD4 counts <200 cells/mm3 despite at least 3 years of suppressive ART had a 2.6-fold greater risk of mortality than those with higher CD4 cell counts.7 Lower CD4 cell counts during ART-mediated viral suppression are associated with an increased risk of non-AIDS morbidity and mortality, including cardiovascular disease,12 osteoporosis and fractures,13 liver disease,14 and infection-related cancers.15 The prognostic importance of higher CD4 cell counts likely spans all ranges of CD4 cell counts, though incremental benefits are harder to discern once CD4 counts increase to >500 cells/mm3.16

Individuals with poor CD4 cell recovery should be evaluated for modifiable causes of CD4 lymphopenia. Concomitant medications should be reviewed, with a focus on those known to decrease white blood cells or, specifically, CD4 cells (e.g., cancer chemotherapy, interferon, zidovudine,17 or the combination of tenofovir disoproxil fumarate (TDF) and didanosine (ddI)18,19). If possible, these drugs should be substituted or discontinued. Untreated coinfections (e.g., HCV, HIV-2) and serious medical conditions (e.g., malignancy) should also be considered as possible causes of CD4 lymphopenia, particularly in individuals with consistently declining CD4 cell counts (and percentages) and/or those with CD4 counts consistently below 100 cells/mm3. In many cases, no obvious cause for a suboptimal immunologic response can be identified.
# Panel's Recommendations
- Morbidity and mortality from several AIDS and non-AIDS conditions are increased in HIV-infected individuals despite antiretroviral therapy (ART)-mediated viral suppression, and are predicted by persistently low CD4 T lymphocyte (CD4) cell counts and/or persistent immune activation.
- ART intensification by adding antiretroviral (ARV) drugs to a suppressive ART regimen does not consistently improve CD4 cell recovery or reduce immune activation and is not recommended (AI).
- In individuals with viral suppression, switching ARV drug classes does not consistently improve CD4 cell recovery or reduce immune activation and is not recommended (BIII).
- No interventions designed to increase CD4 cell counts and/or decrease immune activation are recommended at this time (in particular, interleukin-2 is not recommended) because none has been proven to decrease morbidity or mortality during ART-mediated viral suppression.
- Monitoring markers of immune activation and inflammation is not recommended because no immunologically targeted intervention has proven to improve the health of individuals with abnormally high biomarker levels, and many markers that predict morbidity and mortality fluctuate widely in individuals (AII).
- Because there are no proven interventions to improve CD4 cell recovery and/or inflammation, efforts should focus on addressing modifiable risk factors for chronic disease (e.g., encouraging smoking cessation, a healthy diet, and exercise; treating hypertension and hyperlipidemia) (AII).
Despite strong evidence linking low CD4 cell counts and increased morbidity during ART-mediated viral suppression, no adjunctive therapies that increase CD4 cell count beyond levels achievable with ART alone have been proven to decrease morbidity or mortality. Adding ARV drugs to an already suppressive ART regimen does not improve CD4 cell recovery and does not reduce morbidity or mortality. Therefore, ART intensification is not recommended as a strategy to improve CD4 cell recovery (AI). In individuals maintaining viral suppression, switching ARV drug classes in a suppressive regimen also does not consistently improve CD4 cell recovery and is not recommended (BIII).26 Two large clinical trials, powered to assess impact on clinical endpoints (AIDS and death), evaluated the role of interleukin-2, an immune-based therapy, in improving CD4 cell recovery. Interleukin-2 adjunctive therapy resulted in CD4 cell count increases but no observable clinical benefit. Therefore, interleukin-2 is not recommended (AI).27 Other immune-based therapies that increase CD4 cell counts (e.g., growth hormone, interleukin-7) are under investigation. However, none of these therapies has been evaluated in a clinical endpoint trial; therefore, whether any of them will offer clinical benefit is unclear. Currently, such immune-based therapies should not be used except in the context of a clinical trial.
# Persistent Immune Activation and Inflammation
Although poor CD4 cell recovery likely contributes to morbidity and mortality during ART-mediated viral suppression, there is increasing focus on persistent immune activation and inflammation as potentially independent mediators of risk. HIV infection results in heightened systemic immune activation and inflammation, effects that are evident during acute infection, persist throughout chronic untreated infection, and predict more rapid CD4 cell decline and progression to AIDS and death, independent of plasma HIV RNA levels.28 Although immune activation declines with suppressive ART, it often persists at abnormal levels in many HIV-infected individuals maintaining long-term ART-mediated viral suppression, even in those with CD4 cell recovery to normal levels.29,30 Immune activation and inflammatory markers (e.g., IL-6, D-dimer, hs-CRP) also predict mortality and non-AIDS morbidity during ART-mediated viral suppression, including cardiovascular and thromboembolic events, cancer, neurocognitive dysfunction, and frailty.28 Although immune activation tends to be greatest in individuals with poor CD4 cell recovery, there is evidence that immune activation and inflammation contribute to morbidity and mortality even in those whose CD4 counts recover to >500 cells/mm3.33 Thus, innate immune activation and inflammation are potentially important targets for future interventions.
Although the drivers of persistent immune activation during ART are not completely understood, HIV persistence, coinfections, and microbial translocation likely play important roles. 28 Interventions to reduce each of these presumed drivers are currently being investigated. Importantly, adding ARV drugs to an already suppressive ART regimen (ART intensification) does not consistently improve immune activation. 25 Although some studies have suggested that switching an ART regimen to one with a more favorable lipid profile may improve some markers of immune activation and inflammation, 34,35 these studies have limitations and results are not consistent across markers and among studies. Thus, at this time, ART modification cannot be recommended as a strategy to reduce immune activation (BIII). Other commonly used medications with anti-inflammatory properties (e.g., statins, aspirin) are being studied, and preliminary evidence suggests that some may reduce immune activation in treated HIV infection. 36,37 However, because no intervention specifically targeting immune activation or inflammation has been studied in a clinical outcomes trial in treated HIV infection, no interventions to reduce immune activation are recommended at this time.
In the absence of proven interventions, there is currently no clear rationale for monitoring levels of immune activation and inflammation in treated HIV infection. Furthermore, many of the inflammatory markers that predict morbidity and mortality fluctuate significantly in HIV-infected individuals. Thus, clinical monitoring with immune activation or inflammatory markers is not currently recommended (AII). The focus of care to reduce chronic non-AIDS morbidity and mortality should be on maintaining ART-mediated viral suppression, addressing modifiable risk factors (e.g., smoking cessation, healthy diet, and exercise), and managing chronic comorbidities such as hypertension, hyperlipidemia, and diabetes (AII).

With currently available antiretroviral therapy (ART), most HIV-infected patients are able to achieve and maintain HIV viral suppression. Furthermore, advances in treatment and a better understanding of drug resistance make it possible to consider switching an effective regimen to an alternative regimen in some situations (see below). When considering such a switch, clinicians must weigh several key principles to maintain viral suppression while addressing concerns with the current regimen.
# Reasons to Consider Regimen Switching in the Setting of Viral Suppression
- To simplify the regimen by reducing pill burden and dosing frequency
- To enhance tolerability and decrease short-or long-term toxicity (see Adverse Events of Antiretroviral Agents and Table 15 for more in-depth discussion)
- To prevent or mitigate drug-drug interactions (see Drug Interactions)
- To eliminate food or fluid requirements
- To allow for optimal use of ART during pregnancy or should pregnancy occur (see Perinatal Guidelines) 1
- To reduce costs (see Cost Considerations and Antiretroviral Therapy)
- To switch from frequent parenteral administration of enfuvirtide to an oral agent that is better tolerated
# General Principles of Regimen Switching
The fundamental principle of regimen switching is to maintain viral suppression without jeopardizing future treatment options (AI). If a regimen switch results in virologic failure with the emergence of new resistance mutations, the patient may require more complex or expensive regimens.
The review of a patient's full antiretroviral (ARV) history-including virologic responses, past ARV-associated toxicities, and cumulative resistance test results (if available)-is warranted before any treatment switch (AI).
# Panel's Recommendations
- Advances in antiretroviral (ARV) treatment and a better understanding of HIV drug resistance make it possible to consider switching an effective regimen to an alternative regimen in some situations.
- The fundamental principle of regimen switching is to maintain viral suppression without jeopardizing future treatment options (AI).
- It is critical to review a patient's full ARV history, including virologic responses, past ARV-associated toxicities, and cumulative resistance test results, if available, before selecting a new ART regimen (AI).
- Adverse events, the availability of ARVs with an improved safety profile, or the desire to simplify a regimen may prompt a regimen switch. Within-class and between-class switches can usually maintain viral suppression provided that there is no viral resistance to the ARV agents in the new regimen (AI).
- When considering a regimen switch for a patient with a history of resistance to one or more drug classes, consultation with an HIV specialist is recommended (BIII).
- More intensive monitoring to assess tolerability, viral suppression, adherence, and laboratory changes is recommended during the first 3 months after a regimen switch (AIII).

If a patient with pre-ART wild-type HIV achieves and maintains viral suppression after ART initiation, one can assume that no new resistance mutations emerged while the patient was on the suppressive regimen.
Once selected, a resistance mutation is generally archived in the HIV reservoir and is likely to re-emerge under the appropriate selective drug pressure, even if not detected in the patient's most recent resistance test. If resistance data are not available, resistance may often be inferred from a patient's treatment history. For example, a patient who experienced virologic failure on a lamivudine (3TC)- or emtricitabine (FTC)-containing regimen in the past is likely to have the M184V substitution, even if it is not documented. For patients with documented failure on a non-nucleoside reverse transcriptase inhibitor (NNRTI)- or an elvitegravir (EVG)- or raltegravir (RAL)-containing regimen, resistance to these drugs can also be assumed because these drugs generally have a lower barrier to resistance. If there is uncertainty about prior resistance, it is generally not advisable to switch a suppressive ARV regimen unless the new regimen is likely to be as active against potentially resistant virus as the suppressive regimen. Consulting an HIV specialist is recommended when contemplating a regimen switch for a patient with a history of resistance to one or more drug classes.
A commercially available test amplifies viral DNA in whole blood samples to detect the presence of archived resistance mutations in patients with suppressed HIV RNA. Its value in clinical practice is still being evaluated (see Drug-Resistance Testing).
More intensive monitoring to assess tolerability, viral suppression, adherence, and laboratory changes is recommended during the first 3 months after a regimen switch (see below).
# Specific Regimen Switching Considerations (also see Adverse Effects of Antiretroviral Agents)
# Strategies with Good Supporting Evidence
Within-class switches prompted by adverse events, or by the availability of in-class ARVs that offer a better safety profile, reduced dosing frequency, or lower pill burden, usually maintain viral suppression provided there is no drug resistance to the new ARV. Some examples of within-class switch strategies are switching from efavirenz (EFV) to rilpivirine (RPV),2 from tenofovir disoproxil fumarate (TDF) to tenofovir alafenamide (TAF),3 from raltegravir (RAL) to elvitegravir/cobicistat (EVG/c)4 or dolutegravir (DTG), from ritonavir-boosted protease inhibitors (PIs/r) to PIs coformulated with cobicistat (PIs/c), or from boosted atazanavir (ATV/c or ATV/r) to unboosted ATV (when used with abacavir [ABC]/3TC). Between-class switches generally maintain viral suppression provided there is no resistance to the other components of the regimen. Some examples of between-class switch strategies are replacing a boosted PI with RPV8 or replacing an NNRTI or a boosted PI with an integrase strand transfer inhibitor (INSTI).9,10 However, such switches should be avoided if there is any doubt about the activity of the other agents in the regimen.
# RTV-Boosted PI plus 3TC/FTC
There is growing evidence that a boosted PI-based regimen plus 3TC can achieve virologic suppression in ART-naive individuals without baseline resistance mutations11 and can maintain suppression in patients with sustained viral suppression.12 Examples of such regimens include lopinavir/ritonavir (LPV/r) plus 3TC12 and atazanavir/ritonavir (ATV/r) plus 3TC.13 A study evaluating darunavir/ritonavir (DRV/r) plus 3TC is currently underway. A ritonavir-boosted PI plus 3TC may be a reasonable option when the use of TDF, TAF, or ABC is contraindicated or not desirable.
# Strategies under Evaluation
Several strategies for switching regimens in patients with viral suppression (described below) are under investigation. Until further evidence is available, these strategies cannot be recommended in most circumstances, and some are not recommended at all. If used, patients should be monitored closely to ensure that viral suppression is maintained.
# RTV-Boosted PI plus INSTI
The combination of a boosted PI with an INSTI (DRV/r plus RAL) has been studied in ART-naive patients. At week 96, DRV/r plus RAL was noninferior to DRV/r plus TDF/FTC based on the proportion of patients achieving viral suppression. However, in patients with low pretreatment CD4 T lymphocyte counts (<200 cells/mm3) or high baseline HIV RNA (>100,000 copies/mL), DRV/r plus RAL was inferior to DRV/r plus TDF/FTC.14 The efficacy of switching to DRV/r plus RAL in virologically suppressed patients with no resistance to either DRV or RAL has not been explored. In another study, virologically suppressed patients were switched to either ATV/r plus RAL or ATV/r plus TDF/FTC; the switch to ATV/r plus RAL was associated with higher rates of virologic failure and treatment discontinuation.15 A regimen of ATV/r plus RAL cannot currently be recommended.
# EVG/c/TAF/FTC plus DRV
The single-tablet regimen EVG/c/TAF/FTC plus DRV has shown promising results as a simplification strategy in patients with complicated rescue regimens. 16 A recent study enrolled 135 virologically suppressed patients who were receiving DRV-containing ART and had resistance to ≥2 ARV drug classes, but no INSTI resistance. The patients were then switched to a regimen of EVG/c/TAF/FTC plus DRV. At week 24, 97% of the patients maintained virologic suppression. The pill burden was reduced from an average of five tablets to two tablets. Currently, however, there is insufficient evidence to support this regimen switch other than in a well-controlled clinical trial or in special circumstances.
# Dolutegravir plus 3TC or FTC
In a small (20-patient), single-arm study of DTG plus 3TC for ART-naive patients, all patients achieved and maintained viral suppression at 24 weeks. 17 A clinical trial is underway to evaluate the role of this regimen as maintenance therapy in virologically suppressed patients who have no evidence of NRTI, INSTI, or PI resistance. Currently, however, there is insufficient evidence to support use of this regimen other than in a well-controlled clinical trial.
# Strategies Not Recommended
# RTV-Boosted PI Monotherapy
The strategy of switching virologically suppressed patients without PI resistance from one ART regimen to PI/r monotherapy has been evaluated in several studies. The rationale for this strategy is to avoid NRTI toxicities and decrease costs, while taking advantage of the high barrier to resistance of PIs. PI/r monotherapy maintains virologic suppression in most patients, but at slightly lower rates than standard therapy that includes 2 NRTIs. 18,19 Low-level viremia, generally without the emergence of PI resistance, appears to be more common with monotherapy. In most studies, resumption of NRTIs in patients experiencing low-level viral rebound has led to re-suppression. On the basis of the results from these studies, PI/r monotherapy should generally be avoided (BI). No clinical trials evaluating the use of coformulated cobicistat-boosted PIs as monotherapy or comparing available PI/r monotherapy regimens have been conducted.
# Switching to Maraviroc
Co-receptor usage in virologically suppressed patients can be determined from proviral DNA obtained from peripheral blood mononuclear cells. If this testing identifies R5-tropic virus, a component of the patient's regimen may potentially be switched to maraviroc (MVC). 24,25 However, although the use of MVC after DNA tropism testing has potential, this strategy cannot be recommended until more data from larger clinical studies are available (see Co-receptor Tropism Assays).
# Monitoring after Treatment Changes
After a treatment switch, patients should be evaluated more closely for several months (i.e., a clinic visit or phone call 1 to 2 weeks after the change, and a viral load test to check for rebound viremia 4 to 8 weeks after the switch). The purposes of this more intensive monitoring are to assess medication tolerance and to conduct targeted laboratory testing if the patient had pre-existing laboratory abnormalities or if there are potential concerns with the new regimen. For example, if lipid abnormalities were present, were a reason for the ARV change, or are a concern with the new regimen, fasting cholesterol subsets and triglycerides should be assessed within 3 months after the change in therapy. In the absence of any new complaints, laboratory abnormalities, or evidence of viral rebound at this 3-month visit, clinical and laboratory monitoring of the patient may resume on a regularly scheduled basis (see Laboratory Testing for Initial Assessment and Monitoring of HIV-Infected Patients on Antiretroviral Therapy). A simple sketch of this monitoring timeline follows.
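The sketch below lays out the monitoring windows quoted above against a hypothetical switch date. The task names and the use of date windows are illustrative conveniences, not guideline-specified items beyond the intervals already stated in the text.

```python
from datetime import date, timedelta

# A toy reminder schedule reflecting the monitoring intervals described
# above; timing should be individualized, and this is not a guideline-
# endorsed tool. The switch date below is hypothetical.

def post_switch_schedule(switch_date):
    return {
        "tolerability check (visit or phone call)":
            (switch_date + timedelta(weeks=1), switch_date + timedelta(weeks=2)),
        "viral load to check for rebound viremia":
            (switch_date + timedelta(weeks=4), switch_date + timedelta(weeks=8)),
        "targeted labs (e.g., fasting lipids, if indicated) by":
            (switch_date + timedelta(weeks=12),),
    }

for task, window in post_switch_schedule(date(2016, 7, 1)).items():
    print(task, "->", [d.isoformat() for d in window])
```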
Knowledge about the relationship between a drug's systemic exposure (or concentration) and responses (beneficial and/or adverse) is key in selecting the dose of a drug, in understanding why patients may respond differently to the same drug and dose, and in designing strategies to optimize drug response and tolerability.

Therapeutic drug monitoring (TDM) is a strategy used to guide dosing of certain antiarrhythmics, anticonvulsants, antineoplastics, and antimicrobial agents by using measured drug concentrations to improve the likelihood of the desired therapeutic and safety outcomes. Drugs suitable for TDM are characterized by a known exposure-response relationship and a therapeutic range of concentrations. The therapeutic range is a range of concentrations established through clinical investigations that are associated with a greater likelihood of achieving the desired therapeutic response and/or reducing the frequency of drug-associated adverse reactions.
Several antiretroviral (ARV) agents meet most of the characteristics of agents suitable for a TDM strategy.1 Specifically, some ARVs show considerable interpatient variability in drug concentrations; some have known concentrations associated with efficacy and/or toxicity; and for others, data from small prospective studies have demonstrated that TDM improved virologic response and/or decreased the incidence of concentration-related drug toxicities.2,3 TDM for ARV agents, however, is not recommended for routine use in the management of HIV-infected adults (BII). This recommendation reflects multiple factors that limit the routine use of TDM in HIV-infected patients: the lack of prospective studies demonstrating that routine TDM improves clinical outcomes, uncertain therapeutic thresholds for most ARV agents, substantial intra- and inter-patient variability in the drug concentrations achieved, and a shortage of commercial laboratories that perform real-time quantitation of ARV concentrations.
# Scenarios for Consideration of Therapeutic Drug Monitoring
Although routine use of TDM is not recommended, in some scenarios, ARV concentration data may be useful in patient management. In these cases, assistance from a clinical pharmacologist or a clinical pharmacist to interpret the concentration data may be advisable. These scenarios include the following:
- Suspected clinically significant drug-drug or drug-food interactions that may result in reduced efficacy or increased dose-related toxicities;
- Changes in pathophysiologic states that may impair gastrointestinal, hepatic, or renal function, thereby potentially altering drug absorption, distribution, metabolism, or elimination;
- Pregnant women who have risk factors for virologic failure (e.g., those not achieving viral suppression during an earlier stage of pregnancy); during the later stages of pregnancy, physiologic changes may reduce drug exposure and thus further increase the risk of virologic failure;
- Heavily pretreated patients experiencing virologic failure who may have viral isolates with reduced susceptibility to ARVs;
- Use of alternative dosing regimens or ARV combinations whose safety and efficacy have not been established in clinical trials;
- Concentration-dependent, drug-associated toxicities; and
- Lack of the expected virologic response in medication-adherent patients.

# Panel's Recommendations
- Therapeutic drug monitoring (TDM) for antiretroviral agents is not recommended for routine use in the management of HIV-infected patients (BII).
- TDM may be considered in selected clinical scenarios, as discussed in the text above.
# Resources for Therapeutic Drug Monitoring Target Concentrations
Most proposed TDM target concentrations for ARVs focus on a minimum concentration (Cmin), i.e., the plasma concentration at the end of a dosing interval, just before the next ARV dose. A summary of population-average ARV Cmin values can be found in a review on the role of TDM for ARVs.2 Population-average Cmin values for newer ARVs can be found in the Food and Drug Administration-approved product labels.
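As a rough illustration of how a measured trough might be compared against a population-average Cmin, consider the sketch below. The drug name and all numeric values are placeholders, not published targets; actual values should be taken from the cited review and product labels.

```python
# Illustrative sketch: compare a measured trough (Cmin) against a
# population average. The drug name and values are placeholders, not
# published targets.

POPULATION_AVG_CMIN_NG_ML = {"example_drug": 2000.0}  # hypothetical value

def interpret_trough(drug, measured_cmin_ng_ml):
    target = POPULATION_AVG_CMIN_NG_ML[drug]
    ratio = measured_cmin_ng_ml / target
    if ratio < 0.5:
        return (f"Trough well below population average ({ratio:.0%}); "
                "review adherence, interactions, and absorption.")
    if ratio > 2.0:
        return (f"Trough well above population average ({ratio:.0%}); "
                "watch for concentration-dependent toxicity.")
    return f"Trough near population average ({ratio:.0%})."

print(interpret_trough("example_drug", 800.0))
```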
Guidelines for the collection of blood samples and other practical suggestions related to TDM can be found in a position paper by the Adult AIDS Clinical Trials Group Pharmacology Committee. 4
# Challenges and Considerations in Using Drug Concentrations to Guide Therapy
There are several challenges and considerations in implementing TDM in the clinical setting. Use of TDM to monitor ARV concentrations in a patient requires the following (a toy illustration of the final step appears after this list):
- quantification of the concentration of the drug, usually in plasma or serum;
- determination of the patient's pharmacokinetic characteristics;
- integration of information on patient adherence;
- interpretation of the drug concentrations; and
- adjustment of the drug dose to achieve concentrations within the therapeutic range, if necessary.
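The final step in the list above (dose adjustment) can be illustrated with first-order arithmetic, assuming linear, dose-proportional pharmacokinetics. That assumption does not hold for all ARVs, so this sketch is meant only to make the concept concrete, not to compute real doses.

```python
# A first-order illustration of dose adjustment, assuming linear,
# dose-proportional pharmacokinetics -- an assumption that does not hold
# for all ARVs. For concept only; not a dosing calculator.

def proportional_dose(current_dose_mg, measured_cmin, target_cmin):
    """Scale the dose by the ratio of target to measured trough."""
    return current_dose_mg * (target_cmin / measured_cmin)

# e.g., a trough of 1,000 ng/mL against a 2,000 ng/mL target at 600 mg daily
print(proportional_dose(600, measured_cmin=1000, target_cmin=2000))  # 1200.0 mg
```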
A final caveat to the use of measured drug concentrations in patient management is a general one: drug concentration information cannot be used alone; it must be integrated with other clinical information, including the patient's ARV history and adherence before the TDM result. In addition, as knowledge of the associations between ARV concentrations and virologic response evolves, clinicians who use a TDM strategy should consult the most up-to-date information on the exposure-response relationship of the tested ARV agent.

Discontinuation of antiretroviral therapy (ART) may result in viral rebound, immune decompensation, and clinical progression. Thus, planned interruptions of ART are generally not recommended. However, unplanned interruption of ART may occur under certain circumstances, as discussed below.
# Short-Term Therapy Interruptions
Reasons for short-term interruption (days to weeks) of ART vary and may include drug toxicity; intercurrent illnesses that preclude oral intake, such as gastroenteritis or pancreatitis; surgical procedures; or interrupted access to drugs. Stopping ART for a short time (i.e., less than 1 to 2 days) because of a medical/surgical procedure can usually be done by holding all drugs in the regimen. Recommendations for some other scenarios are listed below:
# Unanticipated Short-Term Therapy Interruption
When a Patient Experiences a Severe or Life-Threatening Toxicity or Unexpected Inability to Take Oral Medications:
- All components of the drug regimen should be stopped simultaneously, regardless of drug half-life.
# Planned Short-Term Therapy Interruption (Up to 2 Weeks)
When All Regimen Components Have Similar Half-Lives and Do Not Require Food for Proper Absorption:
- All drugs may be given with a sip of water, if allowed; otherwise, all drugs should be stopped simultaneously. All discontinued regimen components should be restarted simultaneously.
# When All Regimen Components Have Similar Half-Lives and Require Food for Adequate Absorption, and the Patient Cannot Take Anything by Mouth for a Short Time:
- Temporary discontinuation of all drug components is indicated. The regimen should be restarted as soon as the patient can resume oral intake.
# When the ARV Regimen Contains Drugs with Different Half-Lives:
- Stopping all drugs simultaneously may result in functional monotherapy with the drug that has the longest half-life (typically a non-nucleoside reverse transcriptase inhibitor [NNRTI]), which may increase the risk of selecting NNRTI resistance mutations. Some experts recommend stopping the NNRTI first and the other ARV drugs 2 to 4 weeks later. Alternatively, the NNRTI may be replaced with a ritonavir (or cobicistat)-boosted protease inhibitor (PI/r or PI/c) for 4 weeks. The optimal time sequence for staggered discontinuation of regimen components, or for replacement of the NNRTI with a PI/r (or PI/c), has not been determined. (A schematic of these options follows.)
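The schematic below simply restates the two staggered-discontinuation options above as code; it should not be read as a validated protocol, since, as noted, the optimal sequence has not been determined.

```python
# A schematic of the options described above for interrupting a regimen
# that contains a drug with a long half-life (typically an NNRTI).
# Purely illustrative; the optimal sequence has not been determined.

def interruption_options(regimen_contains_nnrti):
    if not regimen_contains_nnrti:
        return ["Stop all components simultaneously."]
    return [
        "Option 1: stop the NNRTI first; stop the remaining ARVs 2-4 weeks later.",
        "Option 2: replace the NNRTI with a PI/r or PI/c for 4 weeks, then stop all drugs.",
    ]

for option in interruption_options(True):
    print(option)
```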
# Planned Long-Term Therapy Interruptions
Planned long-term therapy interruptions are not recommended outside of controlled clinical trials (AI). Several research studies are evaluating approaches to a functional cure (virologic control in the absence of therapy) or a sterilizing cure (virus eradication) of HIV infection. Currently, the only way to reliably test the effectiveness of these strategies may be to interrupt ART and closely monitor viral rebound over time in the setting of a clinical trial.
If therapy must be discontinued, patients should be aware of and understand the risks of viral rebound, acute retroviral syndrome, increased risk of HIV transmission, decline in CD4 count, HIV disease progression, development of minor HIV-associated manifestations such as oral thrush, and serious non-AIDS complications.

Definitions: Acute HIV-1 infection is the phase of HIV-1 disease immediately after infection that is characterized by an initial burst of viremia; although anti-HIV-1 antibodies are undetectable, HIV-1 RNA or p24 antigen is present. Recent infection is generally considered the phase up to 6 months after infection, during which anti-HIV-1 antibodies are detectable. Throughout this section, the term "early HIV-1 infection" is used to refer to either acute or recent HIV-1 infection.
An estimated 40% to 90% of patients with acute HIV-1 infection will experience symptoms of acute retroviral syndrome, such as fever, lymphadenopathy, pharyngitis, skin rash, myalgia, and arthralgia. However, because these self-limiting symptoms are similar to those of many other viral infections, such as influenza and infectious mononucleosis, primary care clinicians often do not recognize acute HIV-1 infection.
Acute infection can also be asymptomatic. Table 11 provides practitioners with guidance to recognize, diagnose, and manage acute HIV-1 infection.
# Diagnosing Acute HIV Infection
Health care providers should maintain a high level of suspicion of acute HIV-1 infection in patients who have a compatible clinical syndrome-especially in those who report recent high-risk behavior (see Table 11). 7 Patients may not always disclose or admit to high-risk behaviors or perceive that their behaviors put them at risk for HIV-1 acquisition. Thus, even in the absence of reported high-risk behaviors, signs and symptoms consistent with acute retroviral syndrome should motivate practitioners to consider a diagnosis of acute HIV-1 infection.
Acute HIV-1 infection is usually defined as detectable HIV-1 RNA or p24 antigen in serum or plasma in the setting of a negative or indeterminate HIV-1 antibody test result.7,8
# Panel's Recommendations
- Antiretroviral therapy (ART) is recommended for all individuals with HIV-1 infection (AI), including those with early HIV-1 infection.
- Once initiated, the goal of ART is to suppress plasma HIV-1 RNA to undetectable levels (AIII). Testing for plasma HIV-1 RNA levels, CD4 T lymphocyte counts, and toxicity monitoring should be performed as recommended for patients with chronic HIV-1 infection (AII).
- Genotypic drug resistance testing should be performed before initiation of ART to guide the selection of the regimen (AII).
- ART can be initiated before drug resistance test results are available. Because resistance to pharmacokinetically boosted protease inhibitors (PIs) emerges slowly and clinically significant transmitted resistance to PIs is uncommon, ritonavir-boosted darunavir (DRV/r) plus tenofovir disoproxil fumarate/emtricitabine (TDF/FTC) is a recommended regimen in this setting (AIII). For similar reasons, dolutegravir (DTG) plus TDF/FTC is also a reasonable option, although data regarding transmission of integrase strand transfer inhibitor (INSTI)-resistant HIV and the efficacy of this regimen in early HIV infection are limited (AIII).
- When results of drug resistance testing are available, the treatment regimen can be modified if warranted (AII). In patients without transmitted drug resistant virus, therapy should be initiated with one of the combination regimens that is recommended for patients with chronic HIV-1 infection (see What to Start) (AIII).
- Patients starting ART should be willing and able to commit to treatment and should understand the importance of adherence (AIII).
Patients may choose to postpone therapy, and providers, on a case-by-case basis, may recommend that patients defer therapy because of clinical and/or psychosocial factors.

Combination immunoassays that detect HIV-1 and HIV-2 antibodies and HIV-1 p24 antigen are now approved by the Food and Drug Administration, and the most recent Centers for Disease Control and Prevention testing algorithm recommends them as the preferred assays for HIV screening, including for possible acute HIV-1 infection. Specimens that are reactive on an initial antigen/antibody (Ag/Ab) assay should be tested with an immunoassay that differentiates HIV-1 from HIV-2 antibodies.9 Specimens that are reactive on the initial assay but have negative or indeterminate antibody differentiation test results should be tested for quantitative or qualitative HIV-1 RNA; a negative HIV-1 RNA test result indicates that the original Ag/Ab test result was a false positive. Detection of HIV-1 RNA in this setting indicates that acute HIV-1 infection is highly likely9 (see Treatment for Early HIV-1 Infection). HIV-1 infection should be confirmed by subsequent testing to document HIV antibody seroconversion.
Some health care facilities may still be following HIV testing algorithms that begin with an assay that detects only anti-HIV antibodies. In such settings, when acute HIV-1 infection is suspected in a patient with a negative or indeterminate HIV antibody test result, a quantitative or qualitative HIV-1 RNA test should be performed. A negative or indeterminate HIV antibody test result together with a positive HIV-1 RNA test result indicates that acute HIV-1 infection is highly likely. Providers should be aware that a low-positive quantitative HIV-1 RNA result may represent a false positive, because HIV-1 RNA levels in acute infection are generally very high (e.g., >100,000 copies/mL).5,6 Therefore, when a low-positive quantitative HIV-1 RNA test result is obtained, the HIV-1 RNA test should be repeated using a different specimen from the same patient.6 The diagnosis of HIV-1 infection should be confirmed by subsequent documentation of HIV antibody seroconversion (see Table 11). (A sketch of the sequential testing logic appears below.)
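The sequential testing logic described above (Ag/Ab combination assay, then antibody differentiation, then HIV-1 RNA) can be summarized in a short sketch. The result labels are simplified placeholders, and the function is illustrative, not a substitute for the CDC algorithm documents.

```python
# A sketch of the sequential testing logic described above (Ag/Ab
# combination assay -> antibody differentiation -> HIV-1 RNA). Result
# labels are simplified placeholders; illustrative only.

def interpret_hiv_testing(ag_ab_reactive, differentiation_result, rna_detected):
    if not ag_ab_reactive:
        return "Negative screen (if acute infection is suspected, test HIV-1 RNA)."
    if differentiation_result == "positive":
        return "HIV antibody confirmed; established infection."
    # Reactive Ag/Ab with negative/indeterminate antibodies -> RNA testing
    if rna_detected:
        return "Acute HIV-1 infection highly likely; confirm later seroconversion."
    return "The initial Ag/Ab result was likely a false positive."

print(interpret_hiv_testing(True, "negative", True))
```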
# Treating Early HIV-1 Infection
Clinical trial data regarding the treatment of early HIV-1 infection are limited. Many individuals who enrolled in studies assessing the role of ART in early HIV-1 infection were identified as trial participants because they presented with signs or symptoms of acute infection. With the introduction of HIV screening tests that include assays for HIV-1 RNA or p24 antigen and wider HIV screening in health care settings, particularly HIV testing associated with broader use of pre-exposure prophylaxis (PrEP) by individuals at higher risk for HIV, the number of asymptomatic patients identified with early infection may increase. The initial burst of high-level viremia in infected individuals usually declines shortly after acute infection (e.g., within 2 months). However, there is a rationale for treatment during recent infection (e.g., 2-6 months after infection) because, during the transition to chronic infection, the immune system may not yet have maximally contained viral replication in the lymphoid tissue.10 Several trials have addressed the question of the long-term benefit of potent treatment regimens initiated during early HIV-1 infection. The potential benefits and risks of treating early HIV-1 infection are discussed below.
# Potential Benefits of Treatment During Early HIV-1 Infection
Preliminary data indicate that treatment of early HIV-1 infection with ART improves laboratory markers of disease progression. The data, though limited, indicate that treatment of early HIV-1 infection may also reduce the severity of acute disease, lower the viral set point, reduce the size of the viral reservoir, 19 delay disease progression, enhance CD4 T lymphocyte (CD4) cell recovery, 20 and decrease the rate of viral mutation by suppressing viral replication and preserving immune function. 21 Because early HIV-1 infection is often associated with high viral loads and increased infectiousness, 22 and ART use by HIV-1-infected individuals reduces transmission to uninfected sexual partners, 23 treatment during early HIV-1 infection is expected to substantially reduce the risk of HIV-1 transmission. In addition, although data are limited and the clinical relevance unclear, initiating ART during early HIV-1 infection may preserve mucosal Th17 cell function 24 and mitigate the profound loss of gastrointestinal lymphoid tissue that occurs during the first weeks of infection. 25,26 Many of the potential benefits described above may be more likely to occur with treatment of acute infection, but they also may occur if treatment is initiated during recent HIV-1 infection.
The START Trial enrolled HIV-infected patients with CD4 counts >500 cells/mm3 and randomized them either to start ART immediately or to defer ART until their CD4 counts fell below 350 cells/mm3 or an AIDS event occurred. The study demonstrated that immediate treatment decreased the combined endpoint of AIDS-defining illnesses, serious non-AIDS events, or death.27 Similarly, TEMPRANO demonstrated a decreased risk of death or severe HIV-related illness among HIV-infected patients who initiated ART with baseline CD4 counts >500 cells/mm3.28 The results from these studies strengthen the evidence for the recommendation of the Panel on Antiretroviral Guidelines for Adults and Adolescents (the Panel) that ART be initiated in all patients regardless of CD4 cell count (AI) (see Initiation of Antiretroviral Therapy). Although neither trial collected specific information on patients with early infection, the strength of their overall results and the evidence from the other studies described above strongly suggest that, whenever possible, patients should begin ART upon diagnosis of early infection.
# Considerations When Treating Early HIV-1 Infection
As with chronic infection, patients with early HIV-1 infection must be willing and able to commit to treatment. On a case-by-case basis, providers may recommend that patients defer therapy for clinical and/or psychosocial reasons. If treatment during early infection is deferred, patients should be maintained in care and every effort should be made to initiate therapy as soon as they are ready.
# Treating Early HIV-1 Infection During Pregnancy
Because early HIV-1 infection, especially in the setting of high-level viremia, is associated with a high risk of perinatal transmission, all HIV-1-infected pregnant women should start ART as soon as possible to prevent perinatal transmission of HIV-1.29
# Treatment Regimen for Early HIV-1 Infection
Data from the United States and Europe demonstrate that transmitted virus may be resistant to at least one antiretroviral drug in up to 16% of patients. 30,31 In one study, 21% of isolates from patients with acute HIV-1 infection demonstrated resistance to at least 1 drug. 32 Therefore, before initiating ART in a person with early HIV-1 infection, genotypic antiretroviral (ARV) drug resistance testing should be performed to guide selection of an ARV regimen (AII). However, treatment initiation should not be delayed pending resistance testing results. Once results are available, the treatment regimen can be modified if warranted (AII).
As during chronic infection, the goal of therapy during early HIV-1 infection is to suppress plasma HIV-1 RNA to undetectable levels (AIII). ART should be initiated with one of the combination regimens recommended for patients with chronic infection (AIII). Given the increasing use of daily TDF/FTC for PrEP in HIV-negative individuals, early infection may be diagnosed in some patients while they are taking TDF/FTC for PrEP. In this setting, resistance testing should be performed; however, as described above, a pharmacokinetically boosted PI (DRV/r) plus TDF/FTC or DTG plus TDF/FTC remains a reasonable treatment option pending resistance test results (see What to Start).
# Patient Follow-Up
Testing for plasma HIV-1 RNA levels, CD4 cell counts, and toxicity monitoring should be performed as described in Laboratory Testing for Initial Assessment and Monitoring While on Antiretroviral Therapy (e.g., HIV-1 RNA at initiation of therapy, after 2 to 8 weeks, then every 4 to 8 weeks until viral suppression, and thereafter every 3 to 4 months) (AII).
# Duration of Therapy for Early HIV-1 Infection
Once ART is initiated in patients with early HIV infection, therapy should be continued indefinitely, as in guidelines for patients with chronic infection. Recent studies of early HIV-1 infection have suggested some benefits of starting and then stopping treatment as a potential therapeutic strategy. However, a large randomized controlled trial in patients with chronic HIV-1 infection found that treatment interruption was harmful, increasing the risk of AIDS and non-AIDS events,36 and that the strategy was associated with increased markers of inflammation, immune activation, and coagulation.37 For these reasons, and because of the potential benefit of ART in reducing the risk of HIV-1 transmission, the Panel recommends indefinite continuation of ART in patients treated for early HIV-1 infection (AIII).
# Table 11. Identifying, Diagnosing, and Managing Acute and Recent HIV-1 Infection
Suspicion of Acute HIV-1 Infection:
- Acute HIV-1 infection should be considered in individuals with signs or symptoms described below and recent (within 2 to 6 weeks) high-risk exposure to HIV-1.a
- Signs, symptoms, or laboratory findings of acute HIV-1 infection may include but are not limited to one or more of the following: fever, lymphadenopathy, skin rash, myalgia, arthralgia, headache, diarrhea, oral ulcers, leucopenia, thrombocytopenia, and transaminase elevation.
- High-risk exposures include sexual contact with an HIV-1-infected person or a person at risk of HIV-1 infection, sharing of injection drug use paraphernalia, or any exposure in which an individual's mucous membranes or breaks in the skin come in contact with bodily fluid potentially infected with HIV.
- Differential diagnosis: The differential diagnosis of patients presenting with acute HIV-1 infection may include but is not limited to viral illnesses such as Epstein-Barr virus (EBV) and non-EBV (e.g., cytomegalovirus) infectious mononucleosis syndromes, influenza, viral hepatitis, streptococcal infection, and syphilis.
Evaluation/Diagnosis of Acute HIV-1 Infection:
- Acute HIV-1 infection is defined as detectable HIV-1 RNA or p24 antigen (the antigen used in currently available HIV antigen/antibody combination assays) in the setting of a negative or indeterminate HIV-1 antibody test result.
- A reactive HIV antibody test result or Ag/Ab combination test result must be followed by supplemental confirmatory testing.
- A negative or indeterminate HIV-1 antibody test result in a person with a reactive Ag/Ab test result, or in whom acute HIV-1 infection is suspected, requires plasma HIV-1 RNA testing to diagnose acute HIV-1 infection.
- A positive result on a quantitative or qualitative plasma HIV-1 RNA test in the setting of a negative or indeterminate antibody test result indicates that acute HIV-1 infection is highly likely.
ART After Diagnosis of Early HIV-1 Infection:
- ART is recommended for all HIV-infected individuals (AI), and should be offered to all patients with early HIV-1 infection.
- All pregnant women with early HIV-1 infection should begin ART as soon as possible for their own health and to prevent perinatal transmission of HIV-1.
- Genotypic drug resistance testing should be performed before initiation of ART to guide the selection of the regimen (AII).
- If ART is initiated before drug resistance test results are available, a pharmacokinetically boosted PI-based regimen is recommended because resistance to PIs emerges slowly and clinically significant transmitted resistance to PIs is uncommon. DRV/r plus TDF/FTC is a recommended regimen in this setting (AIII). For similar reasons, DTG plus TDF/FTC is a reasonable option, although the data regarding transmission of INSTI-resistant HIV and the efficacy of this regimen in early HIV infection are limited (AIII).
- When results of drug resistance testing are available, the treatment regimen can be modified if warranted (AII). In patients without transmitted drug-resistant virus, ART should be initiated with one of the combination regimens that is recommended for patients with chronic HIV-1 infection (see What to Start) (AIII).
- Once initiated, the goal of ART should be sustained plasma virologic suppression; ART should be continued indefinitely (AIII).
a In some settings, behaviors that increase the risk of HIV-1 infection may not be recognized or perceived as risky by the health care provider or the patient or both. Thus, even in the absence of reported high-risk behaviors, symptoms and signs consistent with acute retroviral syndrome should motivate practitioners to consider a diagnosis of acute HIV-1 infection.

Most adolescents who acquire HIV are infected through sex. Many of them are recently infected and unaware of their HIV infection status. Thus, many are in an early stage of HIV infection, which makes them ideal candidates for early interventions, such as prevention counseling, linkage to and engagement in care, and initiation of ART.4 High-level viremia was reported in a cohort of youth identified as HIV-infected by adolescent HIV specialty clinics in 15 major metropolitan U.S. cities: the mean HIV viral load for the cohort was 94,398 copies/mL, and 30% of the youth were not successfully linked to care.5 A study among HIV-infected adolescents and young adults presenting for care identified primary genotypic resistance mutations to ARV medications in up to 18% of the evaluable sample of recently infected youth, as determined by a detuned antibody testing strategy that defined recent infection as occurring within 180 days of testing.6
# Key Summary and Panel's Recommendations
- HIV-infected adolescents largely belong to two distinct groups: those who acquired HIV in infancy and are heavily antiretroviral therapy (ART)-experienced, and those who acquired HIV more recently during their teens.
- ART is recommended for all HIV-infected individuals (AI) to reduce morbidity and mortality. Thus, ART is also recommended for ARTnaive adolescents. However, before initiation of therapy, adolescents' readiness and ability to adhere to therapy within their psychosocial context need to be carefully considered as part of therapeutic decision making (AIII).
- Once ART is initiated, appropriate support is essential to reduce potential barriers to adherence and maximize the success in achieving sustained viral suppression (AII).
- The adolescent sexual maturity rating can be helpful to guide regimen selection for initiation of or changes in ART as recommended by either these Adult and Adolescent ART Guidelines or the Pediatric ART Guidelines. These Adult/Adolescent Guidelines are more appropriate for postpubertal adolescents (i.e., sexual maturity rating IV or V) (AIII).
- Pediatric and adolescent care providers should prepare adolescents for the transition into adult care settings. Adult providers should be sensitive to the challenges associated with such transitions, consulting and collaborating with adolescent HIV care providers to ensure adolescents' successful transition and continued engagement in care (AIII).

Substantial multiclass resistance was noted in a cohort of non-perinatally infected, treatment-naive youth who were screened for an ARV treatment trial.7 Because these youth were naive to all ART, this finding reflects transmission of resistant virus. This transmission dynamic likely reflects that a substantial proportion of youths' sexual partners are older and more ART-experienced; thus, using baseline resistance testing to guide initial therapy in recently infected, ART-naive youth is imperative.
A limited but increasing number of HIV-infected adolescents are long-term survivors of HIV infection acquired perinatally or in infancy through blood products. These adolescents are usually heavily ART-experienced and may have a unique clinical course that differs from that of adolescents infected later in life.8 Adolescents infected perinatally or in infancy were often started on ART early in life with mono- or dual-therapy regimens, resulting in incomplete viral suppression and the emergence of viral resistance. If these heavily ART-experienced adolescents harbor resistant virus, optimal ARV regimens should be selected on the basis of the same guiding principles used for heavily ART-experienced adults (see Virologic Failure section).
Adolescents are developmentally at a difficult crossroads. Their needs for autonomy and independence and their evolving decisional capacity intersect and compete with their concrete thinking processes, risk-taking behaviors, preoccupation with self-image, and need to fit in with their peers. This makes it challenging to attract and sustain adolescents' focus on maintaining their health, particularly for those with chronic illnesses. These challenges are not specific to any particular transmission mode or stage of disease. Thus, irrespective of disease duration or mode of HIV transmission, every effort must be made to engage and retain adolescents in care so they can improve and maintain their health for the long term. Given the challenges youth face in remaining in care and achieving long-term viral suppression,9 more intensive case management approaches may warrant consideration.10,11 Adolescents may seek care in several settings, including pediatric-focused HIV clinics, adolescent/young adult clinics, and adult-focused clinics.12 Where available, youth-focused services may be one approach to enhancing HIV care engagement and retention among adolescents.13 Regardless of the setting, expertise in caring for adolescents is critical to creating a supportive environment for engaging youth in care.12,14
# Antiretroviral Therapy Considerations in Adolescents
The results from the START and TEMPRANO trials that favor initiating ART in all individuals who are able and willing to commit to treatment, and can understand the benefits and risks of therapy and the importance of excellent adherence, are discussed elsewhere in these guidelines (see Initiation of Antiretroviral Therapy).
Neither of these trials included adolescents; however, recommendations based on these trials have been extrapolated to adolescents on the expectation that they will derive benefits from early ART similar to those observed in adults. Given the psychosocial turmoil that frequently occurs in the lives of HIV-infected American youth, their ability to adhere to therapy needs to be carefully considered as part of therapeutic decision making concerning the risks and benefits of starting treatment. Once ART is initiated, appropriate support is essential to reduce potential barriers to adherence and maximize the success in achieving sustained viral suppression.
The adolescent sexual maturity rating (SMR) (also known as Tanner stage) can be helpful when ART initiation is being considered for this population (see SMR table). Adult guidelines for ART initiation or regimen changes (see Adult Guidelines, What to Start) are usually appropriate for postpubertal adolescents (SMR IV or V) because the clinical course of HIV infection in postpubertal adolescents who were infected sexually or through injection drug use during adolescence is more similar to that in adults than that in children. Adult guidelines can also be useful for postpubertal youth who were perinatally infected and whose long-term HIV infection has not affected their sexual maturity (SMR IV or V). Pediatric guidelines for ART may be more appropriate for adolescents infected with HIV during their teen years (e.g., through sex), but who are sexually immature (SMR III or less) and for perinatally infected adolescents with stunted sexual maturation (i.e., delayed puberty) from long-standing HIV infection or other co-morbidities (SMR III or less) (see What to Start in the Guidelines for the Use of Antiretroviral Agents in Pediatric HIV Infection).
Perinatally infected, postpubertal youth often have treatment challenges associated with long-term use of ART that mirror those of ART-experienced adults, such as extensive resistance, complex regimens, and adverse drug effects (see also Virologic Failure, Poor CD4 Recovery, Regimen Switching in the Setting of Virologic Suppression, and Adverse Effects of Antiretroviral Agents). Perinatally infected postpubertal adolescents may also have comorbid cognitive impairments that compound the adherence challenges common among youth. 15 Dosage of ARV drugs should be prescribed according to SMR rather than solely on the basis of age. 16,17 Adolescents in early puberty (i.e., SMR I-III) should be administered doses on pediatric schedules, whereas those in late puberty (i.e., SMR IV-V) should follow adult dosing schedules. However, SMR stage and age are not necessarily directly predictive of drug pharmacokinetics. Because puberty may be delayed in children who were infected with HIV perinatally, 18 continued use of pediatric doses in puberty-delayed adolescents can result in medication doses that are higher than the usual adult doses. Because data are not available to predict optimal medication doses for each ARV medication in this group of children, issues such as toxicity, pill or liquid volume burden, adherence, and virologic and immunologic parameters should be considered in determining when to transition youth from pediatric to adult doses. Youth in their growth-spurt period (i.e., Tanner stage III in females and Tanner stage IV in males) who are following adult or pediatric dosing guidelines, as well as adolescents who have transitioned from pediatric to adult doses, should be closely monitored for medication efficacy and toxicity. Therapeutic drug monitoring can be considered in these selected circumstances to help guide therapy decisions. Pharmacokinetic studies of drugs in youth are needed to better define appropriate dosing. For a more detailed discussion, see Guidelines for the Use of Antiretroviral Agents in Pediatric HIV Infection. 19
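For illustration only, the dosing rule just described can be written as a small decision function. The sketch below is hypothetical (the names and data structure are not from these guidelines); it simply encodes the SMR-based schedule choice and the growth-spurt monitoring flag from the preceding paragraph:

```python
from dataclasses import dataclass

@dataclass
class Adolescent:
    smr: int   # sexual maturity rating (Tanner stage), 1-5
    sex: str   # "female" or "male"

def dosing_schedule(patient: Adolescent) -> str:
    """SMR I-III -> pediatric dosing schedule; SMR IV-V -> adult schedule."""
    return "pediatric" if patient.smr <= 3 else "adult"

def needs_close_monitoring(patient: Adolescent) -> bool:
    """Growth-spurt window noted in the text: Tanner stage III in females,
    Tanner stage IV in males."""
    return (patient.sex == "female" and patient.smr == 3) or \
           (patient.sex == "male" and patient.smr == 4)

if __name__ == "__main__":
    p = Adolescent(smr=3, sex="female")
    print(dosing_schedule(p), needs_close_monitoring(p))  # pediatric True
```

As the text cautions, SMR and age do not fully predict pharmacokinetics, so a rule like this can only flag a starting point for clinical judgment.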
# Adherence Concerns in Adolescents
HIV-infected adolescents are especially vulnerable to specific adherence problems because of their psychosocial and cognitive developmental trajectory. Comprehensive systems of care are required to serve both the medical and psychosocial needs of HIV-infected adolescents, who frequently lack both health insurance and experience with health care systems. Studies in adolescents infected in their teen years and in adolescents infected through perinatal transmission demonstrate that many adolescents in both groups face numerous barriers to adherence. Compared with adults, these youth have lower rates of viral suppression and higher rates of virologic rebound and loss to follow up. 23 Reasons that HIV-infected adolescents often have difficulty adhering to medical regimens include the following:
- Denial and fear of their HIV infection;
- Misinformation;
- Distrust of the medical establishment;
- Fear of ART and lack of confidence in the effectiveness of medications;
- Low self-esteem;
- Unstructured and chaotic lifestyles;
- Mood disorders and other mental illness;
- Lack of familial and social support;
- Lack of or inconsistent access to care or health insurance; and
- Risk of inadvertent disclosure of their HIV infection if parental health insurance is used.
Clinicians selecting treatment regimens for adolescents must balance the goal of prescribing a maximally potent ART regimen with realistic assessment of existing and potential support systems to facilitate adherence. Adolescents benefit from reminder systems (e.g., apps, beepers, timers, and pill boxes) that are stylish and/or inconspicuous. 24 In a randomized controlled study among non-adherent youth 15 to 24 years of age, youth who
received cell phone medication reminders demonstrated significantly better adherence and lower viral loads than youth who did not receive the reminder calls. 25 It is important to make medication adherence as user-friendly and nonstigmatizing as possible for the older child or adolescent. The concrete thought processes of adolescents make it difficult for them to take medications when they are asymptomatic, particularly if the medications have side effects. Adherence to complex regimens is particularly challenging at a time of life when adolescents do not want to be different from their peers. Directly observed therapy may be considered for some HIV-infected adolescents, such as those with mental illness.
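As a toy illustration of the reminder systems discussed above (the cited trial used live reminder calls; everything here, including the fixed dosing times, is a hypothetical sketch):

```python
from datetime import datetime, timedelta
from typing import List

def upcoming_reminders(dose_times: List[str], days: int = 2) -> List[datetime]:
    """Expand daily dosing times ("HH:MM" strings) into the next few days'
    reminder datetimes, skipping times that have already passed today."""
    now = datetime.now()
    reminders = []
    for offset in range(days):
        day = now.date() + timedelta(days=offset)
        for t in dose_times:
            hour, minute = map(int, t.split(":"))
            when = datetime(day.year, day.month, day.day, hour, minute)
            if when > now:
                reminders.append(when)
    return sorted(reminders)

if __name__ == "__main__":
    for when in upcoming_reminders(["08:00", "20:00"]):
        print(when.isoformat(timespec="minutes"))
```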
# Difficult Adherence Problems
Because adolescence is characterized by rapid changes in physical maturation, cognitive processes, and life style, predicting long-term adherence in an adolescent can be very challenging. The ability of youth to adhere to therapy needs to be considered as part of therapeutic decision making concerning the risks and benefits of starting treatment. Erratic adherence may result in the loss of future regimens because of the development of resistance mutations. Clinicians who care for HIV-infected adolescents frequently manage youth who, although needing therapy, pose significant concerns regarding their ability to adhere to therapy. In these cases, the following strategies can be considered:
1. A short-term deferral of treatment until adherence is more likely or while adherence-related problems are aggressively addressed;
2. An adherence testing period in which a placebo (e.g., vitamin pill) is administered; and
3. The avoidance of any regimens with low genetic resistance barriers.
Such decisions are ideally individualized to each patient and should be made carefully in the context of the individual's clinical status. For a more detailed discussion on specific therapy and adherence issues for HIV-infected adolescents, see the Adherence to ART section of these guidelines and the Guidelines for the Use of Antiretroviral Agents in Pediatric HIV Infection. 19
# Special Considerations in Adolescents
All adolescents should be screened for sexually transmitted diseases (STDs), in particular human papillomavirus (HPV). In young MSM, screening for STDs may require sampling from several body sites because oropharyngeal, rectal, and urethral infections may be present in this population. 34 For a more detailed discussion on STDs, see the most recent CDC guidelines 35 and the adult and pediatric opportunistic infection treatment and prevention guidelines on HPV among HIV-infected adolescents. 36,37 Family planning counseling, including a discussion of the risks of perinatal transmission of HIV and methods to reduce those risks, should be provided to all youth. Providing gynecologic care for HIV-infected female adolescents is especially important. Contraception, including the interaction of specific ARV drugs with hormonal contraceptives, and the potential for pregnancy may also alter choices of ART. As an example, efavirenz (EFV) should be used with caution in females of childbearing age and should only be prescribed after intensive counseling and education about the potential effects on the fetus, the need for close monitoring (including periodic pregnancy testing), and a commitment on the part of the teen to use effective contraception. For a more detailed discussion, see HIV-Infected Women and the Perinatal Guidelines. 38 Finally, HIV-infected transgender youth represent an important population that requires additional psychosocial and healthcare considerations. For a more detailed discussion, see Adolescent Trials Network (ATN) Transgender Youth Resources.
# Transitioning Care
Given lifelong infection with HIV and the need for treatment through several stages of growth and development, HIV care programs and providers need flexibility to appropriately transition care for HIV-infected children, adolescents, and young adults. A successful transition requires an awareness of some fundamental differences between many adolescent and adult HIV care models. In most adolescent HIV clinics, care is more teen-centered and multidisciplinary, with primary care highly integrated into HIV care. Teen services, such as sexual and reproductive health, substance abuse treatment, mental health, treatment education, and adherence counseling, are all found in one clinic setting. In contrast, some adult HIV clinics may rely more on referral of the patient to separate subspecialty care settings, such as gynecology. Transitioning the care of an emerging young adult includes consideration of areas such as medical insurance; the adolescent's degree of independence, autonomy, and decisional capacity; patient confidentiality; and informed consent. Also, adult clinic settings tend to be larger and can easily intimidate younger, less motivated patients. As an additional complication to this transition, HIV-infected adolescents belong to two epidemiologically distinct subgroups with unique biomedical and psychosocial considerations and needs:
- Perinatally infected adolescents, who are likely to have a longer disease history, more complications and chronicity, less functional autonomy, a greater need for ART, and a higher mortality risk; and
- Youth infected more recently, during adolescence, who are likely to be in earlier stages of HIV infection and to have higher CD4 cell counts; these adolescents are less likely to have viral drug resistance and may benefit from simpler treatment regimen options.
To maximize the likelihood of a successful transition, interventions to facilitate transition are best implemented early on. 39 These interventions include the following:
- Developing an individualized transition plan to address comprehensive care needs including medical, psychosocial, and financial aspects of transitioning;
- Optimizing provider communication between adolescent and adult clinics;
- Identifying adult care providers willing to care for adolescents and young adults;
- Addressing patient and family resistance to transition of care caused by lack of information, concerns about stigma or risk of disclosure, and differences in practice styles;
- Helping youth develop life skills, including counseling them on the appropriate use of a primary care provider and how to manage appointments, the importance of prompt symptom recognition and reporting, and the importance of self-efficacy in managing medications, insurance, and assistance benefits;
- Identifying an optimal clinic model based on specific needs (i.e., simultaneous transition of mental health and/or case management versus a gradual phase-in);
- Implementing ongoing evaluation to measure the success of a selected model;
- Engaging adult and adolescent care providers in regular multidisciplinary case conferences;
- Implementing interventions that may improve outcomes, such as support groups and mental health consultation;
- Incorporating a family planning component into clinical care; and
- Educating HIV care teams and staff about transitioning.
Discussions regarding transition should begin early, well before the actual transition process. 40 Attention to the key interventions noted above will likely improve adherence to appointments and help avert the potential for a youth to "fall through the cracks," as the problem is commonly described in adolescent medicine.
# Treatment Challenges of HIV-Infected Illicit Drug Users
Injection drug use is the second most common mode of HIV transmission in the United States. In addition, noninjection illicit drug use may facilitate sexual transmission of HIV. Injection and noninjection illicit drugs include heroin, cocaine, marijuana, and club drugs (i.e., methamphetamine, ketamine, gamma-hydroxybutyrate (GHB), and amyl nitrate). The most commonly used illicit drugs associated with HIV infection are heroin and stimulants (e.g., cocaine and amphetamines); however, the use of club drugs has increased substantially in the past several years and is common among individuals who have HIV infection or who are at risk of HIV infection. The association between club drugs and high-risk sexual behavior in men who have sex with men (MSM) is strongest for methamphetamine and amyl nitrate; the association is less consistent for the other club drugs. 1

Illicit drug use has been associated with depression and anxiety, either as part of the withdrawal process or as a consequence of repeated use. This is particularly relevant to the treatment of HIV infection because depression is one of the strongest predictors of poor adherence and poor treatment outcomes. 2

Treatment of HIV disease in illicit drug users can be successful, but HIV-infected illicit drug users present special treatment challenges. These challenges may include: (1) an array of complicating comorbid medical and mental health conditions; (2) limited access to HIV care; (3) inadequate adherence to therapy; (4) medication side effects and toxicities; (5) the need for substance abuse treatment; and (6) drug interactions that can complicate HIV treatment. 3

Underlying health problems in injection and noninjection drug users result in increased morbidity and mortality, either independent of or accentuated by HIV disease. Many of these problems are the consequence of prior exposures to infectious pathogens from nonsterile needle and syringe use. Such problems can include hepatitis B or C virus infection, tuberculosis (TB), skin and soft tissue infections, recurrent bacterial pneumonia, and endocarditis. Other morbidities, such as alteration in levels of consciousness and neurologic and renal disease, are not uncommon. Furthermore, these comorbidities are associated with a higher risk of drug overdose in HIV-infected illicit drug users than in HIV-uninfected illicit drug users, due in part to respiratory, hepatic, and neurological impairments associated with HIV infection. 4 Successful HIV therapy for illicit drug users often depends on clinicians becoming familiar with and managing these comorbid conditions and providing overdose prevention support.
Illicit drug users have less access to HIV care and are less likely to receive antiretroviral therapy (ART) than other populations. Factors associated with low rates of ART use among illicit drug users include active drug use, younger age, female gender, suboptimal health care, recent incarceration, lack of access to rehabilitation programs, and health care providers' lack of expertise in HIV treatment. The typically unstable, chaotic life patterns of many illicit drug users; the powerful pull of addictive substances; and common misperceptions about the dangers, impact, and benefits of ART all contribute to decreased adherence. 7 The chronic and relapsing nature of substance abuse as a biologic and medical disease, compounded by the high rate of mental illness that antedates and/or is exacerbated by illicit substance use, additionally complicates the relationship between health care workers and illicit drug users.

The first step in the provision of care and treatment for these individuals is to recognize the existence of a substance abuse problem. The problem is often obvious, but some patients may hide these behaviors from clinicians. Assessment of a patient for substance abuse should be part of routine medical history taking and should be done in a professional, straightforward, and nonjudgmental manner.
# Treatment Efficacy in HIV-Infected Illicit Drug Use Populations
Although illicit drug users are underrepresented in HIV therapy clinical trials, available data indicate that efficacy of ART in illicit drug users-when they are not actively using drugs-is similar to that seen in other populations. 10 Furthermore, therapeutic failure in this population generally correlates with the degree that drug use disrupts daily activities rather than with drug use per se. 11 Providers need to remain attentive to the possible impact of disruptions caused by drug use on the patient both before and while receiving ART. Although many illicit drug users can sufficiently control their drug use for long enough time to benefit from care, substance abuse treatment is often necessary for successful HIV management.
Close collaboration with substance abuse treatment programs and proper support and attention to this population's special multidisciplinary needs are critical components of successful HIV treatment. Essential to this end are accommodating, flexible, community-based HIV care sites that are characterized by familiarity with and nonjudgmental expertise in management of drug users' wide array of needs and in development of effective strategies to promote medication adherence. 9 These strategies should include, if available, the use of adherence support mechanisms such as modified directly observed therapy (mDOT), which has shown promise in this population. 12
# Antiretroviral Agents and Opioid Substitution Therapy
Injection drug users (IDUs) receiving ART are more likely than noninjection drug users receiving ART to experience side effects and toxicities of ART. Although this has not been systematically studied, it is likely because underlying hepatic, renal, neurologic, psychiatric, gastrointestinal (GI), and hematologic disorders are highly prevalent among IDUs. These comorbid conditions should be considered when selecting antiretroviral (ARV) agents for this population. Opioid substitution therapies such as methadone and buprenorphine/naloxone, as well as extended-release naltrexone, are commonly used for management of opioid dependence in HIV-infected patients.
Methadone and Antiretroviral Therapy. Methadone, an orally administered, long-acting opioid agonist, is the most common pharmacologic treatment for opioid addiction. Its use is associated with decreased heroin use, decreased needle sharing, and improved quality of life. Because of opioid-induced effects on gastric emptying and its metabolism by cytochrome P450 (CYP) isoenzymes 2B6, 3A4, and 2D6, methadone may commonly interact pharmacologically with ARV agents. 13 These interactions may diminish the effectiveness of either or both therapies by causing opioid withdrawal or overdose, increased methadone toxicity, and/or decreased ARV efficacy. Efavirenz (EFV), nevirapine (NVP), and lopinavir/ritonavir (LPV/r) have been associated with significant decreases in methadone levels. Patients and substance abuse treatment facilities should be informed of the likelihood of this interaction. The clinical effect is usually seen after 7 days of coadministration and may be managed by increasing the methadone dosage, usually in 5-mg to 10-mg increments daily until the desired effect is achieved.
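The titration rule in the preceding paragraph can be sketched as follows; the increments and 7-day onset mirror the text, but the function and its inputs are hypothetical and no substitute for clinical judgment:

```python
def adjust_methadone_dose(current_dose_mg: float,
                          days_since_arv_start: int,
                          withdrawal_symptoms: bool,
                          increment_mg: float = 5.0) -> float:
    """Illustrative daily titration step: interaction-related withdrawal
    typically emerges after about 7 days of coadministration with EFV, NVP,
    or LPV/r and is managed by raising the methadone dose in 5-10 mg
    increments until the desired effect is achieved."""
    if days_since_arv_start >= 7 and withdrawal_symptoms:
        return current_dose_mg + min(max(increment_mg, 5.0), 10.0)
    return current_dose_mg

if __name__ == "__main__":
    dose = 80.0
    for day in range(7, 10):  # simulate three days of withdrawal symptoms
        dose = adjust_methadone_dose(dose, day, withdrawal_symptoms=True)
    print(dose)  # 95.0
```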
Buprenorphine and Antiretroviral Therapy. Buprenorphine, a partial µ-opioid agonist, is administered sublingually and is often coformulated with naloxone. It is increasingly used for opioid dependence treatment. Compared with methadone, buprenorphine has a lower risk of respiratory depression and overdose, which allows physicians in primary care to prescribe it for the treatment of opioid dependency. The flexibility of the primary care setting can be of significant value to opioid-addicted HIV-infected patients who require ART because it enables one physician or program to provide both medical and substance abuse services. Limited information is currently available about interactions between buprenorphine and ARV agents. Findings from available studies show that the drug interaction profile of buprenorphine is more favorable than that of methadone.
Naltrexone and Antiretroviral Therapy. A once-monthly extended-release intramuscular formulation of naltrexone was recently approved for prevention of relapse in patients who have undergone an opioid detoxification program. Naltrexone is also indicated for treatment of alcohol dependency. Naltrexone is not metabolized via the CYP450 enzyme system and is not expected to interact with protease inhibitors (PIs) or non-nucleoside reverse transcriptase inhibitors (NNRTIs). 15
Currently available pharmacokinetic (PK) interaction data that clinicians can use to guide management of patients receiving ART and methadone or buprenorphine can be found in Tables 19a-d. Particular attention should be paid to communication between HIV care providers and drug treatment programs regarding additive drug toxicities and drug interactions that result in opioid withdrawal or excess.
Methylenedioxymethamphetamine (MDMA), GHB, ketamine, and methamphetamine all have the potential to interact with ARV agents because all are metabolized, at least in part, by the CYP450 system. Overdoses secondary to interactions between the party drugs (i.e., MDMA or GHB) and PI-based ART have been reported. 16
# Summary
With appropriate support, most active drug users can, over time, achieve adherence to ARV agents at acceptable levels. Providers must work to combine all available resources to stabilize an active drug user in preparation for ART. These resources should include identification of concurrent medical and psychiatric illnesses, drug treatment and needle and syringe exchange programs, strategies to reduce high-risk sexual behavior, and harm-reduction strategies. A history of drug use alone is insufficient reason to withhold ART because individuals with a history of prior drug use have adherence rates similar to those of individuals who do not abuse drugs.
Important considerations in the selection of successful regimens and the provision of appropriate patient monitoring in this population include need for supportive clinical sites; linkage to substance abuse treatment; and awareness of the interactions between illicit drugs and ARV agents, including the increased risk of side effects and toxicities. Simple regimens should be considered to enhance medication adherence. Preference should be given to ARV agents that have a lower risk of hepatic and neuropsychiatric side effects, simple dosing schedules, and minimal interaction with methadone.
# Gender Considerations in Antiretroviral Therapy
In general, studies to date have not shown gender differences in virologic responses to antiretroviral therapy (ART). However, limited data show that the pharmacokinetics (PKs) of some antiretroviral (ARV) drugs may differ between men and women, possibly because of variations in factors such as body weight, plasma volume, gastric emptying time, plasma protein levels, cytochrome P450 (CYP) activity, drug transporter function, and excretion activity.

# Adverse Effects
Several studies have suggested that gender may influence the frequency, presentation, and severity of some ARV-related adverse events. Most notably, women are more likely to develop severe symptomatic hepatotoxicity with nevirapine (NVP) use, 8,9 and are more likely to develop symptomatic lactic acidosis with prolonged use of older nucleoside reverse transcriptase inhibitors (NRTIs) such as zidovudine (ZDV), stavudine (d4T), and didanosine (ddI). 10 These agents are no longer recommended for use in HIV-infected patients in the United States; although ZDV is still administered intravenously to women during delivery, it is not generally recommended for long-term use.
# Panel's Recommendations
- Antiretroviral therapy (ART) is recommended for all HIV-infected women to improve their health and to reduce the risk of HIV transmission to HIV-uninfected sex partners (AI).
- In pregnant women, an additional goal of therapy is to maintain a viral load below the limit of detection throughout pregnancy to reduce the risk of transmission to the fetus and newborn (AI).
- When selecting an antiretroviral (ARV) combination regimen for a pregnant woman, clinicians should consider the available safety, efficacy, and pharmacokinetic (PK) data on use during pregnancy for each agent. The risks and benefits of ARV use during pregnancy should be discussed with all women (AIII).
- For women taking ARV drugs that have significant PK interactions with hormonal contraceptives, an alternative or additional effective contraceptive method to prevent unintended pregnancy is recommended (AIII). Switching to an ARV drug without interactions with hormonal contraceptives may also be considered (BIII).
- Nonpregnant women of childbearing potential should undergo pregnancy testing before initiation of efavirenz (EFV) and receive counseling about the potential risk to the fetus and desirability of avoiding conception while on EFV-based regimens (AIII).
- When designing a regimen for a pregnant woman, clinicians should consult the most current Recommendations for Use of Antiretroviral Drugs in Pregnant HIV-1-Infected Women for Maternal Health and Interventions to Reduce Perinatal HIV Transmission in the United States (Perinatal Guidelines) (AIII).
- Regimens that do not include EFV should be considered in women who are planning to become pregnant or are sexually active and not using effective contraception (BIII).
- Women on a suppressive regimen containing EFV who become pregnant and present to antenatal care during the first trimester can continue EFV throughout pregnancy (CIII).

Some studies have compared women and men in relation to metabolic complications associated with ARV use. Over 96 weeks following initiation of ART, HIV-infected women are less likely to have decreases in limb fat but more likely to have decreases in bone mineral density (BMD) than HIV-infected men. 11,12 Women have an increased risk of osteopenia, osteoporosis, and fractures, particularly after menopause, and this risk is exacerbated by HIV and ART. ART regimens that contain tenofovir disoproxil fumarate (TDF), ritonavir-boosted protease inhibitors (PI/r), or both are associated with a significantly greater loss of BMD than regimens containing other NRTIs and raltegravir (RAL). Abacavir (ABC), NRTI-sparing regimens, and tenofovir alafenamide (TAF; a new oral tenofovir prodrug that induces less bone loss than TDF) may be considered as alternatives to TDF in patients who are at risk of osteopenia or osteoporosis.
Recommendations for management of bone disease in HIV-infected patients have been published. 21
# HIV-Infected Women of Childbearing Potential
All HIV-infected women of childbearing potential should be offered comprehensive reproductive and sexual health counseling and care as part of routine primary medical care. Topics for discussion should include safe sex practices, reproductive desires and options for conception, the HIV status of sex partner(s), and use of effective contraception to prevent unintended pregnancy. Counseling should also include discussion of special considerations pertaining to ARV use when using hormonal contraceptives, when trying to conceive, and during pregnancy (see Perinatal Guidelines).
# Reproductive Options for Serodiscordant Couples
HIV-infected women who wish to conceive with an HIV-uninfected male partner, and HIV-uninfected women who wish to conceive with an HIV-infected male partner, should be informed of options to prevent sexual transmission of HIV while attempting conception. Interventions include screening and treating both partners for sexually transmitted infections (STIs), ART to maximally suppress and maintain the infected partner's viral load, use of pre-exposure prophylaxis by the uninfected partner, male circumcision, and/or self-insemination with the HIV-uninfected partner's sperm during the HIV-infected woman's periovulatory period. 25

Efavirenz (EFV) is teratogenic in nonhuman primates. Nonpregnant women of childbearing potential should have a pregnancy test before starting EFV and be advised of potential EFV-related risks to the fetus and the desirability of avoiding conception while on an EFV-based regimen (AIII). Regimens that do not include EFV should be considered for women who are planning to become pregnant or who are sexually active and not using effective contraception (BIII). The most vulnerable period in fetal organogenesis is early in gestation, usually before pregnancy is recognized. Efavirenz use after the first 8 weeks of pregnancy appears safe.
# Hormonal Contraception
Safe and effective reproductive health and family planning services to prevent unintended pregnancies and perinatal transmission of HIV are an essential component of care for HIV-infected women of childbearing age. These women should receive ongoing counseling on reproductive issues. Regardless of hormonal contraceptive use, HIV-infected women should be advised to consistently use condoms (male or female) during sex and adhere to an HIV regimen effective in maintaining viral suppression. Both strategies are crucial to prevent transmission of HIV to uninfected partners and to protect against infection with other STIs.
The following are some considerations when hormonal contraceptives are used.
# Drug-Drug Interactions
PK interactions between ARV drugs and hormonal contraceptives may reduce contraceptive efficacy. However, there are limited clinical data regarding drug interactions between ARVs and hormonal contraceptives and the clinical implications of these interactions are unclear. The magnitudes of changes in drug levels that may reduce contraceptive efficacy or increase adverse effects are unknown.
- Combined Oral Contraceptives (COCs): Several PIs, EFV, and elvitegravir/cobicistat (EVG/c)-based regimens have drug interactions with COCs. Interactions include either a decrease or an increase in blood levels of ethinyl estradiol, norethindrone, or norgestimate (see Tables 19a, 19b, and 19d), which potentially decreases contraceptive efficacy or increases estrogen- or progestin-related adverse effects (e.g., thromboembolism). EFV can decrease etonogestrel bioavailability and plasma progestin concentrations of COCs containing ethinyl estradiol and norgestimate. 26 Several PI/r and EVG/c regimens decrease oral contraceptive estradiol levels. Several PK studies have shown that ETR, RPV, and NVP use did not significantly affect estradiol or progestin levels in HIV-infected women using COCs.
- Injectable Contraceptives: Small studies of HIV-infected women receiving injectable depot-medroxyprogesterone acetate (DMPA) while on ART showed no significant interactions between DMPA and EFV, lopinavir/ritonavir (LPV/r), NVP, nelfinavir (NFV), or NRTI drugs.
- Contraceptive Implants: Contraceptive failure of the etonogestrel implant in women on EFV-based therapy has been reported. 38,39 Two studies identified lower exposure of levonorgestrel and etonogestrel released from an implant when combined with EFV-based ART. 40,41 These PK studies did not identify any change in hormone concentrations when the implants were used in women taking NVP 40 or LPV/r. 41 Similarly, two retrospective cohort evaluations conducted in Swaziland and Kenya showed an increased risk of contraceptive failure in women using contraceptive implants and receiving EFV. 42,43

Concerns about PK interactions between oral or implantable hormonal contraceptives and ARVs should not prevent clinicians from prescribing hormonal contraceptives for women on ART who prefer this contraceptive method. However, an alternative or additional effective contraceptive method is recommended when there are significant drug interactions between hormonal contraceptives and ARVs (see drug interaction Tables 19a, 19b, and 19d and the Perinatal Guidelines).
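The screening logic above lends itself to a simple lookup. In the sketch below, the interaction table is a hypothetical, incomplete stand-in for Tables 19a, 19b, and 19d, and the drug and method labels are illustrative:

```python
# Hypothetical, incomplete stand-in for the guidelines' interaction tables.
SIGNIFICANT_INTERACTIONS = {
    ("EFV", "etonogestrel implant"),
    ("EFV", "levonorgestrel implant"),
    ("EFV", "combined oral contraceptive"),
    ("EVG/c", "combined oral contraceptive"),
}

def contraception_advice(arv: str, method: str) -> str:
    """Mirror the recommendation above: with a significant interaction,
    advise an alternative or additional effective method (AIII) and note
    that switching the ARV may be considered (BIII)."""
    if (arv, method) in SIGNIFICANT_INTERACTIONS:
        return ("significant interaction: recommend alternative/additional "
                "contraception (AIII); ARV switch may be considered (BIII)")
    return "no significant interaction flagged in this illustrative table"

if __name__ == "__main__":
    print(contraception_advice("EFV", "etonogestrel implant"))
```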
# Risk of HIV Acquisition and Transmission
Studies have produced conflicting data on the association between hormonal contraception and the risk of acquisition of HIV. 44 Most of the retrospective studies were done in the setting where the HIV-infected partners were not taking ART. A retrospective secondary analysis of two studies of serodiscordant couples in Africa in which the HIV-infected partner was not receiving ART found that women using hormonal contraception (the majority using injectable DMPA) had a twofold increased risk of acquiring or transmitting HIV. HIV-infected women using hormonal contraception had higher genital HIV RNA concentrations than those not using hormonal contraceptives. 45 Oral contraceptive use was not significantly associated with transmission of HIV; however, the number of women using oral contraceptives in this study was insufficient to adequately assess risk. A World Health Organization expert group reviewed all available evidence regarding hormonal contraception and HIV transmission to an uninfected partner and recommended that women living with HIV can continue to use all existing hormonal contraceptive methods without restriction. 46 Further research is needed to definitively determine if hormonal contraceptive use is an independent risk factor for acquisition and transmission of HIV, particularly in the setting of ART. Regardless, the potential association of hormonal contraception use and HIV transmission in the absence of ART underscores the importance of ART-induced viral suppression to reduce transmission risk.
Intrauterine devices (IUDs) appear to be a safe and effective contraceptive option for HIV-infected women. Although studies have focused primarily on non-hormone-containing IUDs (e.g., copper IUD), several small studies have found that levonorgestrel-releasing IUDs are also safe and not associated with increased genital tract shedding of HIV.
# Pregnant Women
Clinicians caring for HIV-infected pregnant women should review the Perinatal Guidelines. The use of combination ARV regimens is recommended for all HIV-infected pregnant women, regardless of virologic, immunologic, or clinical parameters, for their own health and to prevent transmission of HIV to the fetus (AI). Pregnant HIV-infected women should be counseled regarding the known benefits and risks of ARV use during pregnancy to the woman, fetus, and newborn. Women should be counseled and strongly encouraged to receive ART for their own health and that of their infants. Open, non-judgmental and supportive discussion should be used to encourage women to adhere to care.
# Prevention of Perinatal Transmission of HIV
The use of ART and the resultant reduction of HIV RNA levels decrease perinatal transmission of HIV. The goal of ART is to achieve maximal and sustained viral suppression throughout pregnancy. Long-term follow-up is recommended for all infants born to women who receive ART during pregnancy, regardless of the infant's HIV status (see the Perinatal Guidelines).
# ARV Regimen Considerations
Pregnancy should not preclude the use of optimal ARV regimens. As in nonpregnant individuals, genotypic resistance testing is recommended for all pregnant women before ARV initiation (AIII) and for pregnant women with detectable HIV RNA while on ART (AI). ART initiation should not be delayed in pregnant women pending genotypic resistance testing results. The ARV regimen can be modified, if necessary, once the resistance testing results are available (BIII). Unique considerations that influence recommendations on ARVs to use to treat HIV-infected pregnant women include the following:
- Physiologic changes associated with pregnancy that potentially result in changes in PKs, which may affect ARV dosing at different stages of pregnancy;
- Potential ARV-associated adverse effects in pregnant women and the potential for adherence to a particular regimen during pregnancy; and
- Potential short- and long-term effects of an ARV on the fetus and newborn, which are unknown for many drugs.
ART is considered the standard of care for HIV-infected pregnant women, both to treat HIV infection and to prevent perinatal transmission of HIV. If a pregnant woman receiving an EFV-based regimen presents to prenatal care during the first trimester with suppressed HIV RNA, EFV can be continued. This is because the risk of fetal neural tube defects is restricted to the first 5 to 6 weeks of pregnancy, and pregnancy is rarely recognized before 4 to 6 weeks. Unnecessary changes in ARV drugs during pregnancy may be associated with loss of viral control and an increased risk of perinatal transmission. Detailed recommendations on the choice of ARVs in pregnancy are provided in the Perinatal Guidelines.
If maternal HIV RNA is ≥1,000 copies/mL (or unknown) near delivery, intravenous (IV) infusion of ZDV during labor is recommended regardless of the mother's antepartum regimen and resistance profile, and the mode of delivery (AI). Administration of combination ART should continue during labor and before a cesarean delivery (oral medications can be continued with sips of water).
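The intrapartum ZDV rule above is a simple threshold decision. A minimal sketch, assuming viral load is given in copies/mL and `None` denotes an unknown value (names are illustrative):

```python
from typing import Optional

def recommend_intrapartum_iv_zdv(hiv_rna_copies_per_ml: Optional[float]) -> bool:
    """Rule from the text: recommend intravenous ZDV during labor when
    maternal HIV RNA near delivery is >=1,000 copies/mL or unknown,
    regardless of antepartum regimen, resistance profile, or delivery mode."""
    return hiv_rna_copies_per_ml is None or hiv_rna_copies_per_ml >= 1000

if __name__ == "__main__":
    print(recommend_intrapartum_iv_zdv(None))     # True (unknown near delivery)
    print(recommend_intrapartum_iv_zdv(40.0))     # False
    print(recommend_intrapartum_iv_zdv(2500.0))   # True
```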
Clinicians who are treating HIV-infected pregnant women are strongly encouraged to report cases of prenatal exposure to ARVs (either administered alone or in combination) to the Antiretroviral Pregnancy Registry. The registry collects observational data regarding exposure to Food and Drug Administration (FDA)-approved ARV drugs during pregnancy to assess potential teratogenicity. Analysis of the Antiretroviral Pregnancy Registry data indicates that there is no clear association between first-trimester exposure to any ARV drug and increased risk of birth defects. For more information regarding selection and use of ART during pregnancy, refer to the Perinatal Guidelines.
# Postpartum Management
Following delivery, clinical, immunologic, and virologic follow-up should continue as recommended for nonpregnant adults and adolescents. Because maternal ART reduces but does not eliminate the risk of transmission of HIV in breast milk and postnatal transmission can occur despite maternal ART, women should be counseled to avoid breastfeeding. 56 HIV-infected women should not premasticate food and feed it to their infants because the practice has been associated with mother-to-child transmission of HIV. 57 ART is currently recommended for all HIV-infected individuals (AI); therefore, maternal ART should be continued after delivery. For more information regarding postpartum management, refer to the Perinatal Guidelines.
Several studies have demonstrated that adherence to ART may decline in the postpartum period. Clinicians caring for postpartum women who are receiving ART should address adherence, including an evaluation of specific facilitators and barriers to adherence. Clinicians may recommend an intervention to improve adherence (see Adherence to Antiretroviral Therapy).
# Clinical Course of HIV-2 Infection
Compared to HIV-1 infection, the clinical course of HIV-2 infection is generally characterized by a longer asymptomatic stage, lower plasma HIV-2 viral loads, and lower mortality rate. 1,2 However, HIV-2 infection can also progress to AIDS over time. Concomitant HIV-1 and HIV-2 infection may occur and should be considered in patients from areas with a high prevalence of HIV-2.
# Diagnosis of HIV-2 Infection
In the appropriate epidemiologic setting, HIV-2 infection should be suspected in patients with clinical conditions suggestive of HIV infection but with atypical serologic results (e.g., a positive screening assay with an indeterminate HIV-1 Western blot). 3 The possibility of HIV-2 infection should also be considered in the appropriate epidemiologic setting in patients with serologically confirmed HIV infection but low or undetectable HIV-1 RNA levels, or in those with declining CD4 T lymphocyte (CD4) cell counts despite apparent virologic suppression on antiretroviral therapy (ART).
The 2014 Centers for Disease Control and Prevention guidelines for HIV diagnostic testing 4 recommend initial HIV testing using an HIV-1/HIV-2 antigen/antibody combination immunoassay, followed by an HIV-1/HIV-2 antibody differentiation immunoassay. The Multispot HIV-1/HIV-2 Rapid Test (Bio-Rad Laboratories) is Food and Drug Administration approved for differentiating HIV-1 from HIV-2 infection. Commercially available HIV-1 viral load assays do not reliably detect or quantify HIV-2. 5,6 Quantitative HIV-2 plasma RNA viral load testing has recently become available for clinical care at the University of Washington 7 and the New York State Department of Health. 8
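Schematically, the two-step testing sequence reads as below. This is a deliberate simplification for illustration; the result labels and the indeterminate-case suggestion are assumptions, not the CDC algorithm verbatim:

```python
from typing import Optional

def hiv_testing_algorithm(combo_immunoassay_reactive: bool,
                          differentiation_result: Optional[str] = None) -> str:
    """Sketch of the sequence described above: an HIV-1/HIV-2 antigen/antibody
    combination immunoassay, followed, if reactive, by an HIV-1/HIV-2
    antibody differentiation immunoassay."""
    if not combo_immunoassay_reactive:
        return "negative screen"
    if differentiation_result == "HIV-1":
        return "HIV-1 infection confirmed"
    if differentiation_result == "HIV-2":
        return "HIV-2 infection confirmed"
    return "indeterminate: supplemental testing needed"

if __name__ == "__main__":
    print(hiv_testing_algorithm(True, "HIV-2"))  # HIV-2 infection confirmed
```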
# Summary of HIV-2 Infection
- Compared to HIV-1 infection, the clinical course of HIV-2 infection is generally characterized by a longer asymptomatic stage, lower plasma HIV-2 RNA levels, and lower mortality; however, progression to AIDS does occur.
- There have been no randomized trials addressing the question of when to start antiretroviral therapy or the choice of initial or second-line therapy for HIV-2 infection; thus, the optimal treatment strategy has not been defined.
- Although the optimal CD4 T lymphocyte (CD4) cell count threshold for initiating antiretroviral therapy in HIV-2 infection is unknown, therapy should be started before there is clinical progression.
- HIV-2 is intrinsically resistant to non-nucleoside reverse transcriptase inhibitors and to enfuvirtide; thus, these drugs should not be included in an antiretroviral regimen for an HIV-2 infected patient.
- Pending more definitive data on outcomes, in an antiretroviral therapy-naive patient who has HIV-2 mono-infection or HIV-1/HIV-2 dual infection and requires treatment, an initial antiretroviral therapy regimen should include two nucleoside reverse transcriptase inhibitors plus an HIV-2-active boosted protease inhibitor or an integrase strand transfer inhibitor (a schematic version of this construction rule is sketched after this list).
- A few laboratories now offer quantitative plasma HIV-2 RNA testing for clinical care (see section text).
- Monitoring of HIV-2 RNA levels, CD4 cell counts, and clinical improvements can be used to assess treatment response, as is recommended for HIV-1 infection.
- Resistance-associated viral mutations to nucleoside reverse transcriptase inhibitors, protease inhibitors, and/or integrase strand transfer inhibitors may develop in HIV-2 infected patients while on therapy. However, no validated HIV-2 genotypic or phenotypic antiretroviral resistance assays are available for clinical use.
- In the event of virologic, immunologic, or clinical failure, second-line treatment should be instituted in consultation with an expert in HIV-2 management.
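As flagged in the regimen bullet above, the construction rule for an initial HIV-2 regimen can be expressed as a simple check. The activity sets below are distilled from this section's text and are illustrative, not an authoritative drug database:

```python
from typing import Set

HIV2_INACTIVE_CLASSES = {"NNRTI", "T20"}              # intrinsic resistance
HIV2_ACTIVE_BOOSTED_PIS = {"DRV/r", "LPV/r", "SQV/r"}
HIV2_ACTIVE_INSTIS = {"RAL", "EVG", "DTG"}

def plausible_initial_hiv2_regimen(nrtis: Set[str], anchor: str) -> bool:
    """Two NRTIs plus either an HIV-2-active boosted PI or an INSTI;
    intrinsically inactive classes are rejected."""
    if len(nrtis) != 2 or anchor in HIV2_INACTIVE_CLASSES:
        return False
    return anchor in HIV2_ACTIVE_BOOSTED_PIS or anchor in HIV2_ACTIVE_INSTIS

if __name__ == "__main__":
    print(plausible_initial_hiv2_regimen({"TDF", "FTC"}, "DRV/r"))  # True
    print(plausible_initial_hiv2_regimen({"TDF", "FTC"}, "NNRTI"))  # False
```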
However, it is important to note that approximately one-quarter to one-third of HIV-2-infected patients who are not on ART will have HIV-2 RNA levels below the limits of detection; some of these patients will nonetheless experience clinical progression and CD4 cell count decline. No validated HIV-2 genotypic or phenotypic antiretroviral (ARV) resistance assays are available for clinical use.
# Treatment of HIV-2 Infection
To date, no randomized trials addressing the question of when to start ART or the choice of initial or second-line therapy for HIV-2 infection have been completed; 9 thus, the optimal treatment strategy has not been defined. Three clinical trials to assess first-line ART for HIV-2 infection are currently underway; two are enrolling patients with CD4 counts between 200 and 600 cells/mm3 (NCT02150993). Although the optimal CD4 cell count threshold for initiating ART in HIV-2 infection is unknown, therapy should be started before there is clinical progression.
HIV-2 is intrinsically resistant to non-nucleoside reverse transcriptase inhibitors (NNRTIs) 10 and to enfuvirtide (T20). 11 Data from in vitro studies suggest that HIV-2 is sensitive to the currently available nucleoside reverse transcriptase inhibitors (NRTIs), although with a lower barrier to resistance than HIV-1. 12,13 Darunavir (DRV), lopinavir (LPV), and saquinavir (SQV) are more active against HIV-2 than other approved protease inhibitors (PIs); one of these boosted PIs should be used if a PI-based regimen is chosen. Other PIs should be avoided because of their lack of ARV activity and high failure rates. The integrase strand transfer inhibitors (INSTIs) raltegravir (RAL), elvitegravir (EVG), and dolutegravir (DTG) have potent activity against HIV-2. The CCR5 antagonist maraviroc (MVC) appears active against some HIV-2 isolates; 22 however, no approved assays to determine HIV-2 co-receptor tropism exist, and HIV-2 is known to use many other minor co-receptors in addition to CCR5 and CXCR4. 23 Several small studies suggest poor responses in HIV-2-infected individuals treated with certain ARV regimens, including dual-NRTI regimens; regimens containing an NNRTI plus 2 NRTIs; some unboosted PI-based regimens, including nelfinavir (NFV) or indinavir (IDV) plus zidovudine (ZDV) and lamivudine (3TC); and atazanavir (ATV)-based regimens. 9 Clinical data on the effectiveness of triple-NRTI regimens are conflicting. 28,29 In general, HIV-2-active, boosted PI-containing regimens have resulted in more favorable virologic and immunologic responses than 2- or 3-NRTI-based regimens, although CD4 cell recovery on therapy is generally poorer than that observed for HIV-1. INSTI-based regimens may also have favorable treatment responses. 34,35 A recent large systematic review of ART for HIV-2-infected patients (n = 17 studies, 976 HIV-2-infected patients) was unable to conclude which specific regimens are preferred. 36 Resistance-associated viral mutations to NRTIs, PIs, and/or INSTIs commonly develop in HIV-2-infected patients on therapy. 24,29,41 Currently, transmitted HIV-2 drug resistance appears rare. 42 In one small study, DTG was found to have activity as a second-line INSTI in some HIV-2 patients with extensive ARV experience and RAL resistance. 43 Genotypic algorithms used to predict drug resistance in HIV-1 may not be applicable to HIV-2, because pathways and mutational patterns leading to resistance may differ between the HIV types. 13,29,44 Some groups have recommended specific preferred and alternative regimens for initial therapy of HIV-2 infection.

# HIV and the Older Patient

Effective antiretroviral therapy (ART) has increased survival in HIV-infected individuals, resulting in an increasing number of older individuals living with HIV infection. In the United States, among persons living with HIV infection at year-end 2013, 42% were aged 50 years or older and 6% were aged 65 years or older, and trends suggest that these proportions will increase steadily. 1 Care of HIV-infected patients increasingly will involve adults 60 to 80 years of age, a population for which data from clinical trials or pharmacokinetic (PK) studies are very limited.
There are several distinct areas of concern regarding the association between age and HIV disease. 2 First, older HIV-infected patients may suffer from aging-related comorbid illnesses that can complicate the management of HIV infection. Second, HIV disease may affect the biology of aging, possibly resulting in early manifestations of clinical syndromes generally associated with advanced age. Third, reduced mucosal and immunologic defenses (such as post-menopausal atrophic vaginitis) and changes in risk-related behaviors (e.g., a decrease in condom use because of less concern about pregnancy, or more high-risk sexual activity with increased use of erectile dysfunction drugs) in older adults could lead to increased risk of acquisition and transmission of HIV. 3,4 Finally, because older adults are generally perceived to be at low risk of HIV infection, screening in this population remains low.
# HIV Diagnosis and Prevention in the Older Adult
In older adults, failure to consider a diagnosis of HIV likely contributes to later initiation of ART. 5 The Centers for Disease Control and Prevention (CDC) estimates that in 2013, 37% of adults aged 55 years or older at the time of HIV diagnosis met the case definition for AIDS. The comparable CDC estimates are 18% for adults aged 25 to 34 years and 30% for adults aged 35 to 44 years. 6 In one observational cohort, older patients (defined as those ≥35 years of age) appeared to have lower CD4 T lymphocyte (CD4) cell counts at seroconversion, steeper CD4 count decline over time, 7 and tended to present to care with significantly lower CD4 counts. 8 When individuals >50 years of age present with severe illnesses, AIDS-related opportunistic infections (OIs) need to be considered in the differential diagnosis of the illness.
Although many older individuals engage in risk behaviors associated with acquisition of HIV, they may see themselves, or be perceived by providers, as being at low risk of infection; as a result, they are less likely to be tested for HIV than younger persons. 9,10 Despite CDC guidelines recommending HIV testing at least once for individuals aged 13 to 64, and more frequent testing for those at risk, 11 HIV testing prevalence remains low (<5%) among adults aged 50 to 64 and decreases with increasing age. 12 Clinicians must be attuned to the possibility of HIV infection in older adults, including those older than 64 years of age, especially those who may engage in high-risk behaviors. Sexual history taking is therefore an important component of general health care for HIV-uninfected older adults, together with risk-reduction counseling and screening for HIV and sexually transmitted infections (STIs), if indicated.
# Key Considerations When Caring for Older HIV-Infected Patients Receiving Antiretroviral Therapy (ART)
- Antiretroviral therapy (ART) is recommended for all patients regardless of CD4 T lymphocyte cell count (AI). ART is especially important for older patients because they have a greater risk of serious non-AIDS complications and potentially a blunted immunologic response to ART.
- Adverse drug events from ART and concomitant drugs may occur more frequently in older HIV-infected patients than in younger HIV-infected patients. Therefore, the bone, kidney, metabolic, cardiovascular, and liver health of older HIV-infected patients should be monitored closely.
- Polypharmacy is common in older HIV patients; therefore, there is a greater risk of drug-drug interactions between antiretroviral drugs and concomitant medications. Potential for drug-drug interactions should be assessed regularly, especially when starting or switching ART and concomitant medications.
- HIV experts, primary care providers, and other specialists should work together to optimize the medical care of older HIV-infected patients with complex comorbidities.
- Early diagnosis of HIV and counseling to prevent secondary transmission of HIV remain an important aspect of the care of the older HIV-infected patient.
# Impact of Age on HIV Disease Progression
HIV infection presents unique challenges in aging adults and these challenges may be compounded by ART:
- HIV infection itself is thought to induce immune-phenotypic changes akin to accelerated aging, 13 but recent laboratory and clinical data provide a more nuanced view of these changes. Some studies have shown that HIV-infected patients may exhibit chromosomal and immunologic features similar to those induced by aging. 14,15 However, other studies show the immunologic changes to be distinct from age-related changes. 16 In addition, although data on the increased incidence and prevalence of age-associated comorbidities in HIV patients are accumulating, 17,18 the age of diagnosis for myocardial infarction and non-AIDS cancers is the same in HIV-infected and HIV-uninfected patients. 18,19
- Older HIV patients have a greater incidence of complications and comorbidities than HIV-uninfected adults of similar age and may exhibit a frailty phenotype, defined clinically as a decrease in muscle mass, weight, physical strength, energy, and physical activity, 20 although the phenotype is still incompletely characterized in the HIV population.
# Initiating Antiretroviral Therapy in the Older HIV Patient
ART is recommended for all HIV-infected individuals (AI; see Initiation of Antiretroviral Therapy). Early treatment may be particularly important in older adults, in part because of decreased immune recovery and an increased risk of serious non-AIDS events in this population. In a modeling study based on data from an observational cohort, the beneficial effects of early ART were projected to be greatest in the oldest age group (patients between 45 and 65 years of age). 21 No data support a preference for any one of the Panel's recommended initial ART regimens (see What to Start) on the basis of patient age. The choice of regimen should instead be informed by a comprehensive review of the patient's other medical conditions and medications. The What to Start section (Table 7) of these guidelines provides guidance on selecting an ARV regimen based on an older patient's characteristics and specific clinical conditions (e.g., kidney disease, elevated risk of cardiovascular disease, osteoporosis). In older patients with reduced renal function, dosage adjustment of nucleoside reverse transcriptase inhibitors (NRTIs) may be necessary (see Appendix Table 7 and the sketch below). In addition, ARV regimen selection may be influenced by potential interactions between ARV medications and drugs used concomitantly to manage comorbidities. Adults aged >50 years should be monitored for ART effectiveness and safety similarly to other HIV-infected populations; however, in older patients, special attention should be paid to the greater potential for adverse effects of ART on renal, liver, cardiovascular, metabolic, and bone health (see Table 14).
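NRTI dose adjustment in renal impairment keys off estimated creatinine clearance. A minimal sketch using the standard Cockcroft-Gault estimate; the 50 mL/min flag threshold is illustrative only, since the per-drug cutoffs in Appendix Table 7 govern actual dosing:

```python
def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min) by Cockcroft-Gault:
    CrCl = (140 - age) * weight / (72 * SCr), multiplied by 0.85 for women."""
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

def flag_nrti_dose_review(crcl_ml_min: float, threshold: float = 50.0) -> bool:
    """Illustrative flag only; individual NRTIs have their own thresholds."""
    return crcl_ml_min < threshold

if __name__ == "__main__":
    crcl = cockcroft_gault_crcl(age_years=72, weight_kg=68,
                                serum_creatinine_mg_dl=1.4, female=True)
    print(round(crcl, 1), flag_nrti_dose_review(crcl))  # 39.0 True
```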
# HIV, Aging, and Antiretroviral Therapy
The efficacy, PKs, adverse effects, and drug interaction potentials of ART in older adults have not been studied systematically. There is no evidence that the virologic response to ART differs in older and younger patients. In a recent observational study, a higher rate of viral suppression was seen in patients >55 years of age than in younger patients. 22 However, ART-associated CD4 cell recovery in older patients is generally slower and lower in magnitude than in younger patients. 8 This observation suggests that starting ART at a younger age may result in a better immunologic response and possibly better clinical outcomes.
Hepatic metabolism and renal elimination are the major routes of drug clearance, including the clearance of ARV drugs. Both liver and kidney functions decrease with age and may result in impaired drug elimination and increased drug exposure. 26 Most clinical trials have included only a small proportion of participants over 50 years of age, and current ARV dosing recommendations are based on PK and pharmacodynamic data derived from participants with normal organ function. Whether drug accumulation in the older patient may lead to greater incidence and severity of adverse effects than seen in younger patients is unknown.
HIV-infected patients with aging-associated comorbidities may require additional pharmacologic interventions that can complicate therapeutic management. In addition to taking medications to manage HIV infection and comorbid conditions, many older HIV-infected patients also are taking medications to relieve discomfort (e.g., pain medications, sedatives) or to manage adverse effects of medications (e.g., antiemetics). They also may self-medicate with over-the-counter medicines or supplements. In HIV-negative older patients, polypharmacy is a major cause of iatrogenic complications. 27 Some of these complications may be caused by medication errors (by prescribers or patients), medication non-adherence, additive drug toxicities, and drug-drug interactions. Older HIV-infected patients are probably at an even greater risk of polypharmacy-related adverse consequences than younger HIV-infected or similarly aged HIV-uninfected patients. When evaluating any new clinical complaint or laboratory abnormality in HIV-infected patients, especially in older patients, clinicians should always consider the possible role of adverse drug reactions from both ARV drugs and other concomitantly administered medications.
Drug-drug interactions are common with ART and can be easily overlooked by prescribers. 28 The available drug interaction information on ARV agents is derived primarily from PK studies performed in small numbers of relatively young, HIV-uninfected participants with normal organ function.
Data from these studies provide clinicians with a basis to assess whether a significant interaction may exist. However, the magnitude of the interaction may be greater in older HIV-infected patients than in younger HIV-infected patients.
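Because interactions are easy to overlook when medication lists grow long, a structured concomitant-medication screen can help organize the review. The sketch below is a minimal, hypothetical illustration: the table, function name, and drug pairs are illustrative only (the two sample entries reflect well-known interactions), and a real tool must query a vetted, regularly updated drug-interaction database.

```python
# Minimal, hypothetical sketch of a concomitant-medication screen.
# The two sample entries reflect well-known interactions, but the table
# is illustrative only; a real tool must use a vetted interaction database.
ARV_INTERACTIONS = {
    ("ritonavir", "simvastatin"): "contraindicated: markedly increased statin exposure",
    ("rifampin", "atazanavir"): "avoid: rifampin greatly reduces PI concentrations",
}

def screen(arvs, other_meds):
    """Return flagged (ARV, co-medication, note) triples for a patient's lists."""
    return [(arv, med, ARV_INTERACTIONS[(arv, med)])
            for arv in arvs for med in other_meds
            if (arv, med) in ARV_INTERACTIONS]

print(screen(["ritonavir", "tenofovir"], ["simvastatin", "metformin"]))
# -> [('ritonavir', 'simvastatin', 'contraindicated: markedly increased statin exposure')]
```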
Nonadherence is the most common cause of treatment failure. Complex dosing requirements, high pill burden, inability to access medications because of cost or availability, limited health literacy including misunderstanding of instructions, depression, and neurocognitive impairment are among the key reasons for nonadherence. 32 Although many of these factors associated with non-adherence may be more prevalent in older patients, some studies have shown that older HIV-infected patients may actually be more adherent to ART than younger patients. Clinicians should regularly assess older patients to identify any factors, such as neurocognitive deficits, that may decrease adherence. To facilitate medication adherence, it may be useful to discontinue unnecessary medications, simplify regimens, and recommend evidence-based behavioral approaches including the use of adherence aids such as pillboxes or daily calendars, and support from family members (see Adherence to Antiretroviral Therapy).
# Non-AIDS HIV-Related Complications and Other Comorbidities
Among persons treated effectively with ART, as AIDS-related morbidity and mortality have decreased, non-AIDS conditions constitute an increasing proportion of serious illnesses. Neurocognitive impairment, already a major health problem in aging adults, may be exacerbated by the effect of HIV infection on the brain. 36 In a prospective observational study, neurocognitive impairment was predictive of lower retention in care among older persons. 37 Neurocognitive impairment probably also affects adherence to therapy. Social isolation and depression are also particularly common among older HIV-infected adults and, in addition to their direct effects on morbidity and mortality, may contribute to poor medication adherence and retention in care. 38,39 Heart disease and cancer are the leading causes of death in older Americans. 40 Similarly, non-AIDS events such as heart disease, liver disease, and cancer have emerged as major causes of morbidity and mortality in HIV-infected patients receiving effective ART. The presence of multiple non-AIDS comorbidities coupled with the immunologic effects of HIV infection may add to the disease burden of aging HIV-infected adults. HIV-specific primary care guidelines have been updated with recommendations for lipid and glucose monitoring, evaluation and management of bone health, and management of kidney disease, and are available for clinicians caring for HIV-infected older patients.
# Switching, Interrupting, and Discontinuing Antiretroviral Therapy in Older Patients
Given the greater incidence of co-morbidities, non-AIDS complications, and frailty among older HIV-infected patients, switching one or more ARVs in an HIV regimen may be necessary to minimize toxicities and drug-drug interactions. For example, expert guidance now recommends bone density monitoring in men aged ≥50 years and postmenopausal women, and suggests switching from tenofovir disoproxil fumarate or boosted protease inhibitors to other ARVs in older patients at high risk for fragility fractures. 45

Few data exist on the use of ART in severely debilitated patients with chronic, severe, or non-AIDS terminal conditions. 49,50 Withdrawal of ART usually results in rebound viremia and a decline in CD4 cell count. Acute retroviral syndrome after abrupt discontinuation of ART has been reported. In severely debilitated patients, if there are no significant adverse reactions to ART, most clinicians would continue therapy. In cases where ART negatively affects quality of life, the decision to continue therapy should be made together with the patient and/or family members after a discussion of the risks and benefits of continuing or withdrawing ART.
# Healthcare Utilization, Cost Sharing, and End-of-Life Issues
Important issues to discuss with aging HIV-infected patients are living wills, advance directives, and long-term care planning, including related financial concerns. Out-of-pocket health care expenses (e.g., copayments, deductibles), loss of employment, and other financial factors can cause temporary interruptions in treatment, including ART, which should be avoided whenever possible. The increased life expectancy and the higher prevalence of chronic complications in aging HIV populations can place greater demands upon HIV services. 51 Facilitating a patient's continued access to insurance can minimize treatment interruptions and reduce the need for other services to manage concomitant chronic disorders.
# Conclusion
HIV disease can be overlooked in aging adults, who tend to present with more advanced disease and experience accelerated CD4 loss. HIV induces immune-phenotypic changes that have been compared to accelerated aging. Effective ART has prolonged the life expectancy of HIV-infected patients, increasing the number of patients >50 years of age living with HIV. However, unique challenges in this population include a greater incidence of complications and co-morbidities, some of which may be exacerbated or accelerated by long-term use of some ARV drugs. Providing comprehensive multidisciplinary medical and psychosocial support to patients and their families (the "Medical Home" concept) is of paramount importance in the aging population. Continued involvement of HIV experts, geriatricians, and other specialists in the care of older HIV-infected patients is warranted.
# Hepatitis B (HBV)/HIV Coinfection

Approximately 5% to 10% of HIV-infected persons in the United States also have chronic hepatitis B virus (HBV) infection. 1 The progression of chronic HBV to cirrhosis, end-stage liver disease, or hepatocellular carcinoma is more rapid in HBV/HIV-infected persons than in persons with chronic HBV monoinfection. 2 Conversely, chronic HBV does not substantially alter the progression of HIV infection and does not influence HIV suppression or CD4 T lymphocyte (CD4) cell responses following initiation of antiretroviral therapy (ART). 3,4 However, antiretroviral (ARV) drug toxicities or several liver-associated complications attributed to flares in HBV activity after initiation or discontinuation of dually active ARV drugs can affect the treatment of HIV in patients with HBV/HIV coinfection. These complications include the following:
- Emtricitabine (FTC), lamivudine (3TC), tenofovir disoproxil fumarate (TDF), and tenofovir alafenamide (TAF) are ARVs approved to treat HIV that are also active against HBV. Discontinuation of these drugs may potentially cause serious hepatocellular damage resulting from reactivation of HBV. 8
- The anti-HBV drug entecavir has activity against HIV. However, when entecavir is used to treat HBV in HBV/HIV-coinfected patients not on ART, the drug may select for the M184V mutation that confers HIV resistance to 3TC and FTC. Therefore, when used in HBV/HIV-coinfected patients, entecavir must be used in addition to a fully suppressive ARV regimen (AII). 9
- When 3TC is the only active drug used to treat chronic HBV in HBV/HIV-coinfected patients, 3TC-resistant HBV emerges in approximately 40% and 90% of patients after 2 and 4 years on 3TC, respectively. Therefore, 3TC or FTC, which is similar to 3TC, should be used in combination with other anti-HBV drugs (AII). 10
# Panel's Recommendations
- Before initiation of antiretroviral therapy (ART), all patients who test positive for hepatitis B surface antigen (HBsAg) should be tested for hepatitis B virus (HBV) DNA using a quantitative assay to determine the level of HBV replication (AIII).
- Because emtricitabine (FTC), lamivudine (3TC), tenofovir disoproxil fumarate (TDF) and tenofovir alafenamide (TAF) have activity against both HIV and HBV, for patients coinfected with HIV and HBV, ART should be initiated with the fixed-dose combination of TDF/FTC or TAF/FTC, or the individual drug combinations of TDF plus 3TC as the nucleoside reverse transcriptase inhibitor (NRTI) backbone of a fully suppressive antiretroviral (ARV) regimen (AI).
- If TDF or TAF cannot safely be used, the alternative recommended HBV therapy is entecavir in addition to a fully suppressive ARV regimen (BI). Entecavir has activity against HIV; its use for HBV treatment without ART in patients with dual infection may result in the selection of the M184V mutation that confers HIV resistance to 3TC and FTC. Therefore, entecavir must be used in addition to a fully suppressive ARV regimen when used in HBV/HIV-coinfected patients (AII). Peginterferon alfa monotherapy may also be considered in certain patients (CII).
- Other HBV treatment regimens, including adefovir (alone or in combination with 3TC or FTC) and telbivudine, are not recommended for HBV/HIV-coinfected patients (CII).
- Discontinuation of agents with anti-HBV activity may cause serious hepatocellular damage resulting from reactivation of HBV; patients should be advised against stopping these medications and carefully monitored during interruptions in HBV treatment (AII).
- If ART needs to be modified due to HIV virologic failure and the patient has adequate HBV suppression, the ARV drugs active against HBV should be continued for HBV treatment in combination with other suitable ARV agents to achieve HIV suppression (AIII).
- In HBV/HIV-coinfected patients, immune reconstitution following initiation of treatment for HIV, HBV, or both can be associated with elevated transaminase levels, possibly because HBV is primarily an immune-mediated disease. 11
- Some ARV agents can increase transaminase levels. The rate and magnitude of these increases are higher with HBV/HIV coinfection than with HIV monoinfection. The etiology and consequences of these changes in liver function tests are unclear because the changes may resolve with continued ART. Nevertheless, some experts suspend the suspected agent(s) when the serum alanine aminotransferase (ALT) level increases to 5 to 10 times the upper limit of normal. However, increased transaminase levels in HBV/HIV-coinfected persons may indicate hepatitis B e antigen (HBeAg) seroconversion due to immune reconstitution; thus, the cause of the elevations should be investigated before discontinuing medications.
In persons with transaminase increases, HBeAg seroconversion should be evaluated by testing for HBeAg and anti-HBe, as well as HBV DNA levels.
# Recommendations for HBV/HIV-Coinfected Patients
- All patients with chronic HBV should be evaluated to assess the severity of HBV infection (see the HBV section of the Guidelines for Prevention and Treatment of Opportunistic Infections in HIV-Infected Adults and Adolescents). Patients with chronic HBV should also be tested for immunity to hepatitis A virus (HAV) infection (anti-HAV antibody total) and, if nonimmune, receive the HAV vaccination. In addition, patients with chronic HBV should be advised to abstain from alcohol and counseled on prevention methods that protect against both HBV and HIV transmission. 15
- Before ART is initiated, all persons who test positive for HBsAg should be tested for HBV DNA by using a quantitative assay to determine the level of HBV replication (AIII), and the test should be repeated every 3 to 6 months to ensure effective HBV suppression. The goal of HBV therapy with NRTIs is to prevent liver disease complications by sustained suppression of HBV replication.
# Antiviral Drugs with Dual Activities against HBV and HIV
Among the ARV drugs, 3TC, FTC, TAF, and TDF all have activity against HBV. Entecavir is an HBV nucleoside analog that also has weak HIV activity. TAF is a tenofovir prodrug with HBV activity and potentially less renal and bone toxicity than TDF. The efficacy of TDF versus TAF in HBV-monoinfected patients was evaluated in a randomized controlled trial of HBV treatment-naive and treatment-experienced HBeAg-negative patients. In this study, TAF was noninferior to TDF based on the percentage of patients with HBV DNA levels <29 IU/mL at 48 weeks of therapy (94% for TAF vs. 93% for TDF; P = 0.47). 16 TAF was also noninferior to TDF in HBeAg-positive patients with chronic HBV monoinfection, with a similar percentage of patients achieving HBV DNA levels <29 IU/mL at 48 weeks of therapy (64% for TAF vs. 67% for TDF; P = 0.25). 17 In both studies, patients on TAF experienced significantly smaller mean percentage decreases from baseline in hip and spine bone mineral density at week 48 than patients receiving TDF. The median change in estimated glomerular filtration rate (eGFR) from baseline to week 48 also favored TAF. 16,17 In HBV/HIV-coinfected patients, only TDF (with FTC or 3TC) or TAF/FTC can be considered part of the ARV regimen; entecavir has weak anti-HIV activity and must not be considered part of an ARV regimen. In addition, TDF is fully active for the treatment of persons with known or suspected 3TC-resistant HBV infection.
# Recommended Therapy
The combination of TDF (with FTC or 3TC) or TAF/FTC should be used as the NRTI backbone of an ARV regimen and for the treatment of both HIV and HBV infection (AII). The decision whether to use a TAF- or TDF-containing regimen should be based on an assessment of the risk for nephrotoxicity and for acceleration of bone loss. In a switch study in HBV/HIV-coinfected patients, study participants who switched from a primarily TDF-based ART regimen to the fixed-dose combination EVG/c/TAF/FTC maintained or achieved HBV suppression, with improved eGFR and bone turnover markers. 21 Currently, TAF/FTC-containing regimens approved for the treatment of HIV infection are not recommended for use in patients with creatinine clearance (CrCl) <30 mL/min. While data on switching from a TDF-based to a TAF-based ART regimen are limited, the data from the EVG/c/TAF/FTC switch study suggest that HBV/HIV-coinfected patients can switch to TAF/FTC-containing regimens with a potential reduction in renal and bone toxicity while maintaining HBV suppression.
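To make the CrCl threshold concrete, creatinine clearance is commonly estimated with the Cockcroft-Gault equation. The sketch below applies the <30 mL/min cutoff noted above; the patient values are hypothetical, and in practice a laboratory-reported eGFR may be used instead.

```python
def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimate creatinine clearance (mL/min) with the Cockcroft-Gault equation."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical patient: a 62-year-old, 70-kg man with serum creatinine 1.4 mg/dL
crcl = cockcroft_gault_crcl(62, 70, 1.4, female=False)
print(f"Estimated CrCl: {crcl:.0f} mL/min")  # ~54 mL/min
if crcl < 30:
    print("TAF/FTC-containing regimens are not recommended at CrCl <30 mL/min")
```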
# Alternative Therapy
If TDF or TAF cannot safely be used, entecavir should be used in addition to a fully suppressive ARV regimen (AII); however, entecavir should not be considered part of the ARV regimen 22 (BII). Because entecavir and 3TC share a partially overlapping pathway to HBV resistance, it is unknown whether the combination of entecavir plus 3TC or FTC provides greater virologic or clinical benefit than entecavir alone. In persons with known or suspected 3TC-resistant HBV infection, the entecavir dose should be increased from 0.5 mg/day to 1 mg/day. However, entecavir resistance may emerge rapidly in patients with 3TC-resistant HBV infection. Therefore, entecavir should be used with caution in such patients, with frequent monitoring (approximately every 3 months) of the HBV DNA level to detect viral breakthrough.
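The dosing and monitoring rules in the preceding paragraph reduce to a small decision, summarized in the illustrative sketch below. The function and the 6-month default interval (taken from the 3-to-6-month schedule noted earlier in this section) are assumptions for illustration, not part of the guidelines.

```python
def entecavir_plan(suspected_3tc_resistant_hbv):
    """Illustrative summary of the entecavir rules above. Entecavir is used
    in addition to, never in place of, a fully suppressive ARV regimen."""
    dose_mg_per_day = 1.0 if suspected_3tc_resistant_hbv else 0.5
    # Monitor HBV DNA about every 3 months when resistance is suspected,
    # to detect viral breakthrough early (6 months assumed otherwise).
    monitoring_interval_months = 3 if suspected_3tc_resistant_hbv else 6
    return dose_mg_per_day, monitoring_interval_months

print(entecavir_plan(suspected_3tc_resistant_hbv=True))   # (1.0, 3)
print(entecavir_plan(suspected_3tc_resistant_hbv=False))  # (0.5, 6)
```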
Peginterferon alfa monotherapy for up to 48 weeks may also be considered in some HBV/HIV-coinfected patients. However, data on the use of this therapy in persons with HBV/HIV coinfection are limited and, given safety concerns, peginterferon alfa should not be used in HBV/HIV-coinfected persons with decompensated cirrhosis.
# Not Recommended Therapy
Other HBV treatment regimens include adefovir (in combination with 3TC or FTC) or telbivudine, each in addition to a fully suppressive ARV regimen. 18,23,24 However, data on these regimens in persons with HBV/HIV coinfection are limited. In addition, compared with TDF, TAF, or entecavir, these regimens are associated with a higher incidence of toxicity (renal disease with adefovir; myopathy and neuropathy with telbivudine), as well as higher rates of HBV treatment failure. Therefore, the Panel on Opportunistic Infections in HIV-Infected Adults and Adolescents does not currently recommend adefovir or telbivudine for HBV/HIV-coinfected patients.
- Need to discontinue medications active against HBV: The patient's clinical course should be monitored with frequent liver function tests. The use of entecavir to prevent flares can be considered, especially in patients with marginal hepatic reserve such as those with compensated or decompensated cirrhosis. 8 These alternative HBV regimens should only be used in addition to a fully suppressive ARV regimen.
- Need to change ART because of HIV resistance: If the patient has adequate HBV suppression, the ARV drugs active against HBV should be continued for HBV treatment in combination with other ARV agents that effectively suppress HIV (AIII).

# Hepatitis C (HCV)/HIV Coinfection

The treatment of hepatitis C virus (HCV) infection is rapidly evolving. Data suggest that HCV/HIV-coinfected patients treated with all-oral HCV regimens have sustained virologic response rates comparable to those of HCV-monoinfected patients. The purpose of this section is to discuss hepatic safety and drug-drug interaction issues related to HCV/HIV coinfection and the concomitant use of antiretroviral (ARV) agents and HCV drugs. For specific guidance on HCV treatment, please refer to /.
Among patients with chronic HCV infection, approximately one-third progress to cirrhosis, at a median time of less than 20 years. 1,2 The rate of progression increases with older age, alcoholism, male sex, and HIV infection. A meta-analysis found that HCV/HIV-coinfected patients had a three-fold greater risk of progression to cirrhosis or decompensated liver disease than HCV-monoinfected patients. 5 The risk of progression is even greater in HCV/HIV-coinfected patients with low CD4 T lymphocyte (CD4) cell counts. Although antiretroviral therapy (ART) appears to slow the rate of HCV disease progression in HCV/HIV-coinfected patients, several studies have demonstrated that the rate continues to exceed that observed in patients without HIV infection. 7,8 Whether HCV infection accelerates HIV progression, as measured by AIDS-related opportunistic infections (OIs) or death, 9 is unclear. Although some older ARV drugs that are no longer commonly used have been associated with higher rates of hepatotoxicity in patients with chronic HCV infection, 10,11 newer ARV agents currently in use appear to be less hepatotoxic.
For more than a decade, the mainstay of treatment for HCV infection was a combination regimen of peginterferon and ribavirin (PegIFN/RBV), but this regimen was associated with a poor rate of sustained virologic response (SVR), especially in HCV/HIV-coinfected patients. Rapid advances in HCV drug development have led to new classes of direct-acting antiviral (DAA) agents that target the HCV replication cycle. Recently approved DAA agents are used with or without RBV and have higher SVR rates, reduced pill burden, less frequent dosing, fewer side effects, and shorter durations of therapy than earlier approved agents. Guidance on the treatment and management of HCV in HIV-infected and HIV-uninfected adults can be found at /. 17
# Panel Recommendations
- All HIV-infected patients should be screened for hepatitis C virus (HCV) infection. Patients at high risk of HCV infection should be screened annually and whenever HCV infection is suspected.
- Antiretroviral therapy (ART) may slow the progression of liver disease by preserving or restoring immune function and reducing HIVrelated immune activation and inflammation. For most HCV/HIV-coinfected patients, including those with cirrhosis, the benefits of ART outweigh concerns regarding drug-induced liver injury. Therefore, ART should be initiated in all HCV/HIV-coinfected patients, regardless of CD4 T lymphocyte (CD4) cell count (AI).
- Initial ART regimens recommended for most HCV/HIV-coinfected patients are the same as those recommended for individuals without HCV infection. However, when treatment for both HIV and HCV is indicated, the regimen should be selected with special considerations of potential drug-drug interactions and overlapping toxicities with the HCV treatment regimen (see discussion in the text below and in Table 12).
- Combined treatment of HIV and HCV can be complicated by drug-drug interactions, increased pill burden, and toxicities. Although ART should be initiated for all HCV/HIV-coinfected patients regardless of CD4 cell count, in ART-naive patients with CD4 counts >500 cells/mm3, some clinicians may choose to defer ART until HCV treatment is completed (CIII).
- In patients with lower CD4 counts (eg, <200 cells/mm3), ART should be initiated promptly (AI) and HCV therapy may be delayed until the patient is stable on HIV treatment (CIII).
# Assessment of Hepatitis C Virus/HIV Coinfection
- All HIV-infected patients should be screened for HCV infection using sensitive immunoassays licensed for the detection of antibody to HCV in blood. 18 At-risk HCV-seronegative patients should undergo repeat testing annually. HCV-seropositive patients should be tested for HCV RNA using a sensitive quantitative assay to confirm the presence of active infection. Patients who test HCV RNA-positive should undergo HCV genotyping and liver disease staging as recommended by the most updated HCV guidelines (see /).
- Patients with HCV/HIV coinfection should be counseled to avoid consuming alcohol and to use appropriate precautions to prevent transmission of HIV and/or HCV to others. HCV/HIV-coinfected patients who are susceptible to hepatitis A virus (HAV) or hepatitis B virus (HBV) infection should be vaccinated against these viruses.
- All patients with HCV/HIV coinfection should be evaluated for HCV therapy.
# Antiretroviral Therapy in Hepatitis C Virus/HIV Coinfection
# When to Start Antiretroviral Therapy
The rate of liver disease (liver fibrosis) progression is accelerated in HCV/HIV-coinfected patients, particularly in individuals with low CD4 counts (≤350 cells/mm3). Data, largely from retrospective cohort studies, are inconsistent regarding the effect of ART on the natural history of HCV disease; 6,19,20 however, some studies suggest that ART may slow the progression of liver disease by preserving or restoring immune function and by reducing HIV-related immune activation and inflammation. Therefore, ART should be initiated in all HCV/HIV-coinfected patients, regardless of CD4 count (AI). However, in HIV treatment-naive patients with CD4 counts >500 cells/mm3, some clinicians may choose to defer ART until HCV treatment is completed to avoid drug-drug interactions (CIII). Compared with patients with CD4 counts >350 cells/mm3, those with CD4 counts <200 cells/mm3 had lower HCV treatment response rates and higher rates of toxicity due to PegIFN/RBV. 24 There is a lack of data regarding HCV treatment response to combination therapy with DAA agents in those with advanced immunosuppression. For patients with lower CD4 counts (eg, <200 cells/mm3), ART should be initiated promptly (AI) and HCV therapy may be delayed until the patient is stable on HIV treatment (CIII).
# Antiretroviral Drugs to Start and Avoid
Initial ARV combination regimens recommended for most HIV treatment-naive patients with HCV are the same as those recommended for patients without HCV infection. Special considerations for ARV selection in HCV/HIV-coinfected patients include the following:
- When both HIV and HCV treatments are indicated, the ARV regimen should be selected with careful consideration of potential drug-drug interactions (see Table 12) and overlapping toxicities with the HCV treatment regimen.
- Cirrhotic patients should be carefully evaluated by an expert in advanced liver disease for signs of liver decompensation according to the Child-Turcotte-Pugh classification system. This assessment is necessary because hepatically metabolized ARV and HCV DAA drugs may be contraindicated or require dose modification in patients with Child-Pugh class B and C disease (see Appendix B, Table 7).
# Hepatotoxicity
Drug-induced liver injury (DILI) following the initiation of ART is more common in HCV/HIV-coinfected patients than in those with HIV monoinfection. HCV/HIV-coinfected individuals with advanced liver disease (eg, cirrhosis, end-stage liver disease) are at greatest risk for DILI. 29 Eradicating HCV infection with treatment may decrease the likelihood of ARV-associated DILI. 30
- Given the substantial heterogeneity in patient populations and drug regimens, comparison of DILI incidence rates for individual ARV agents across clinical trials is difficult. The incidence of significant elevations in liver enzyme levels (more than 5 times the upper limit of the normal laboratory reference range) is low with currently recommended ART regimens. Hypersensitivity (or allergic) reactions associated with rash and elevations in liver enzymes can occur with certain ARVs. Alanine aminotransferase (ALT) and aspartate aminotransferase (AST) levels should be monitored 2 to 8 weeks after initiation of ART and every 3 to 6 months thereafter. Mild to moderate fluctuations in ALT and/or AST are typical in individuals with chronic HCV infection. In the absence of signs and/or symptoms of liver disease or increases in bilirubin, these fluctuations do not warrant interruption of ART. Patients with significant ALT and/or AST elevation should be carefully evaluated for signs and symptoms of liver insufficiency and for alternative causes of liver injury (eg, acute HAV or HBV infection, hepatobiliary disease, or alcoholic hepatitis). Short-term interruption of the ART regimen or of the specific drug suspected of causing the DILI may be required. 31
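To illustrate the monitoring thresholds above, the sketch below expresses an ALT result as a multiple of the upper limit of normal (ULN). The 40 U/L ULN is an assumed, laboratory-specific value, and the flag wording paraphrases the guidance rather than quoting it.

```python
def classify_alt(alt_u_per_l, uln_u_per_l=40.0):  # ULN is lab-specific (assumed here)
    """Express ALT as a multiple of ULN and map it to the guidance above."""
    ratio = alt_u_per_l / uln_u_per_l
    if ratio > 5:
        return (f"{ratio:.1f}x ULN: significant elevation; evaluate for liver "
                f"insufficiency and alternative causes; short-term interruption "
                f"of ART or the suspected drug may be required")
    if ratio > 1:
        return (f"{ratio:.1f}x ULN: mild-moderate fluctuation, typical in chronic "
                f"HCV; no ART interruption absent symptoms or rising bilirubin")
    return f"{ratio:.1f}x ULN: within the reference range"

print(classify_alt(230))  # ~5.8x ULN: significant elevation; ...
```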
# Concurrent Treatment of HIV and Hepatitis C Virus Infection
Concurrent treatment of HIV and HCV is feasible, but treatment may be complicated by pill burden, drug-drug interactions, and toxicities. In this context, the stage of HCV disease should be assessed to determine the medical need for HCV treatment and to inform the decision on when to start treatment. Additional guidance on the treatment and management of HCV in HIV-infected and uninfected adults can be found at /. If the decision is to treat HCV, the ART regimen may need to be modified before HCV treatment is initiated to reduce the potential for drug-drug interactions and/or toxicities that may develop during the period of concurrent HIV and HCV treatment. See Table 12 for recommendations on the concomitant use of selected drugs for treatment of HCV and HIV infection. In patients with suppressed plasma HIV RNA whose ART has been modified, HIV RNA should be measured within 4 to 8 weeks after changing HIV therapy to confirm the effectiveness of the new regimen. After HCV treatment is completed, the modified ART regimen should be continued for at least 2 weeks before the original regimen is reinitiated, because of the prolonged half-life of some HCV drugs and the potential risk of drug-drug interactions if a prior HIV regimen is resumed soon after HCV treatment is completed.
# Antiretroviral and Hepatitis C Virus Drug-Drug Interactions
Considerations for the concurrent use of ART and recommended HCV agents (per /) are discussed below. Table 12 provides recommendations on the concomitant use of selected drugs for treatment of HCV and HIV infection.
- Sofosbuvir is an HCV NS5B nucleotide polymerase inhibitor that is not metabolized by the cytochrome P450 enzyme system and, therefore, can be used in combination with most ARV drugs. Sofosbuvir is a substrate of p-glycoprotein (P-gp). P-gp inducers, such as tipranavir (TPV), may decrease sofosbuvir plasma concentrations and should not be coadministered with sofosbuvir. No other clinically significant pharmacokinetic interactions between sofosbuvir and ARVs have been identified.
- Ledipasvir is an HCV NS5A inhibitor that is available in a fixed-dose combination with sofosbuvir. 32 Similar to sofosbuvir, ledipasvir is not metabolized by the cytochrome P450 (CYP) enzyme system and is a substrate for P-gp. Ledipasvir inhibits the drug transporters P-gp and breast cancer resistance protein (BCRP) and may increase intestinal absorption of coadministered substrates of these transporters.
- Daclatasvir is an HCV NS5A inhibitor that is approved for use with sofosbuvir. 33 Daclatasvir is a substrate of CYP3A and an inhibitor of P-gp, OATP1B1/3, and BCRP. Moderate or strong inducers of CYP3A, such as efavirenz (EFV), etravirine (ETR), and nevirapine (NVP), may decrease plasma levels of daclatasvir and reduce the drug's therapeutic effect; in this case, the daclatasvir dosage should be increased from 60 mg once daily to 90 mg once daily. By contrast, strong CYP3A inhibitors may increase plasma levels of daclatasvir, in which case the daclatasvir dosage should be reduced to 30 mg once daily (see the illustrative dosing sketch after this list). Clinically relevant interactions between daclatasvir and TDF have not been observed. Because daclatasvir is also an inhibitor of P-gp, OATP1B1/3, and BCRP, its administration may increase systemic exposure to medications that are substrates of these transporters, which could increase or prolong the therapeutic or adverse effects of those medications.
- Elbasvir (an NS5A inhibitor) and grazoprevir (an HCV PI) are available in combination as a fixed-dose tablet. Both elbasvir and grazoprevir are substrates of CYP3A and P-gp. 34 In addition, grazoprevir is a substrate of OATP1B1/3 transporters. Coadministration of the elbasvir/grazoprevir combination with strong CYP3A inducers, such as EFV, is contraindicated because elbasvir and grazoprevir concentrations may be decreased. Coadministration of strong CYP3A4 inhibitors with elbasvir and grazoprevir is also contraindicated or not recommended because elbasvir and grazoprevir concentrations may increase.
Elbasvir and grazoprevir are also inhibitors of the drug transporter BCRP and may increase plasma concentrations of coadministered BCRP substrates.
- The fixed-dose drug combination of ombitasvir (an NS5A inhibitor), paritaprevir (an HCV PI), and RTV (a pharmacokinetic enhancer) is copackaged with or without dasabuvir, an NS5B inhibitor. 35,36
- Paritaprevir is a substrate and inhibitor of the CYP3A4 enzymes and therefore may have significant interactions with certain ARVs that are metabolized by, or that may induce or inhibit, the same pathways.
Paritaprevir is also a substrate and inhibitor of OATP1B1/3.
- Both ombitasvir and paritaprevir are inhibitors of UGT1A1 and also substrates of P-gp and BCRP.
- Dasabuvir is primarily metabolized by the CYP2C8 enzymes. It is also an inhibitor of UGT1A1 and a substrate of P-gp and BCRP.
- Coadministration of ombitasvir/paritaprevir/RTV with drugs that are substrates or inhibitors of the enzymes and drug transporters noted may result in increased plasma concentrations of either the HCV drugs or the coadministered drug. Given that several CYP enzymes and drug transporters are involved in the metabolism of ombitasvir, paritaprevir, and RTV, complex drug-drug interactions are likely. Therefore, clinicians need to consider all coadministered drugs for potential drug-drug interactions.
- If a patient's ART regimen contains RTV- or COBI-boosted atazanavir (ATV), the boosting agent should be discontinued during therapy with ombitasvir/paritaprevir/RTV, and ATV should be taken in the morning at the same time as the HCV therapy. RTV or COBI should be restarted after completion of HCV treatment. HIV-infected patients not on ART should be placed on an alternative HCV regimen because RTV has activity against HIV.
- Simeprevir is an HCV NS3/4A PI that is approved for use with sofosbuvir. Simeprevir is a substrate and inhibitor of CYP3A4 and P-gp, and therefore has significant interactions with ARVs that are metabolized by the same pathways (eg, HIV PIs, EFV, ETR). Simeprevir is also an inhibitor of the drug transporter OATP1B1/3.
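As flagged in the daclatasvir bullet above, the CYP3A dose-adjustment rules amount to a small decision table, sketched below for illustration. The inducer examples come from the text; the strong-inhibitor examples (ritonavir-boosted atazanavir, cobicistat) are assumptions that must be checked against current prescribing information.

```python
def daclatasvir_daily_dose_mg(comedications):
    """Illustrative mapping of the CYP3A dose-adjustment rules described above."""
    cyp3a_inducers = {"efavirenz", "etravirine", "nevirapine"}   # from the text
    cyp3a_inhibitors = {"atazanavir/ritonavir", "cobicistat"}    # assumed examples
    meds = {m.lower() for m in comedications}
    if meds & cyp3a_inducers:
        return 90   # increase from the standard 60 mg once daily
    if meds & cyp3a_inhibitors:
        return 30   # reduce from the standard 60 mg once daily
    return 60       # standard dose

print(daclatasvir_daily_dose_mg(["Efavirenz", "tenofovir"]))  # 90
```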
Given that the treatment of HCV is rapidly evolving, this section will be updated when new HCV drugs that may impact the treatment of HIV are approved. For guidance on the treatment of HCV infection, refer to /.
# Tuberculosis/HIV Coinfection

Although rifapentine induces cytochrome P450 (CYP) isoenzymes and can potentially cause significant drug-drug interactions, there are now pharmacokinetic (PK) data supporting its use with efavirenz (EFV) 6 and raltegravir (RAL) 7 (AIII). Rifampin or rifabutin for 4 months may also be considered for LTBI treatment, but clinicians should pay careful attention to potential drug-drug interactions with specific ARV drugs (see Tables 18 through 19e).
If an HIV-infected patient is a contact of an individual infected with drug-resistant TB, the options for LTBI treatment should be modified. In this setting, consultation with a TB expert is advised.
# Antiretroviral Therapy's Effect in Preventing Active Tuberculosis
Accumulating evidence also suggests that ART can prevent active TB. The TEMPRANO study conducted in Côte d'Ivoire randomized 2,056 HIV-infected participants who did not meet WHO criteria for ART initiation to 1 of 4 study arms: deferred ART (until WHO criteria were met); deferred ART plus INH preventive therapy (IPT); early ART; or early ART plus IPT. 8 Among participants with CD4 T lymphocyte (CD4) counts >500 cells/mm3, starting ART immediately reduced the risk of death and serious HIV-related illness, including TB, by 44% (2.8 vs. 4.9 severe events per 100 person-years with immediate and deferred ART, respectively; P = .0002). Six months of IPT independently reduced the risk of severe HIV morbidity by 35% (3.0 vs. 4.7 severe events per 100 person-years with IPT and no IPT, respectively; P = .005), with no overall increased risk of other adverse events. In the START study, 4,685 participants with CD4 counts >500 cells/mm3 were randomized to receive immediate ART or ART deferred until their CD4 count dropped to 350 cells/mm3 or until they developed a clinical condition that required ART. TB was one of the three most common clinical events, occurring in 14% of participants in the immediate initiation group and 20% of participants in the deferred initiation group. 9 Collectively, these two large randomized studies showed that early initiation of ART (with or without IPT) reduced active TB, particularly in countries with a high prevalence of HIV/TB coinfection.
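As a rough arithmetic check (the trial's reported 44% and 35% reductions come from adjusted hazard models, so crude rate ratios only approximate them), the relative risk reductions implied by the raw event rates can be computed directly:

```python
# Crude relative risk reductions from the TEMPRANO event rates quoted above
# (severe events per 100 person-years); approximations of the published figures.
early_art, deferred_art = 2.8, 4.9
print(f"Early ART: {1 - early_art / deferred_art:.0%} reduction")  # ~43% (reported: 44%)

ipt, no_ipt = 3.0, 4.7
print(f"IPT:       {1 - ipt / no_ipt:.0%} reduction")              # ~36% (reported: 35%)
```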
# Antiretroviral Therapy for HIV-Infected Patients with Active Tuberculosis
Active pulmonary or extrapulmonary TB disease requires prompt initiation of TB treatment. The treatment of active TB disease in HIV-infected patients should follow the general principles guiding treatment for individuals without HIV. The Guidelines for the Prevention and Treatment of Opportunistic Infections in HIV-Infected Adults and Adolescents (Adult and Adolescent OI Guidelines) 10 include a more complete discussion of the diagnosis and treatment of TB disease in HIV-infected patients.
All patients with HIV/TB disease should be treated with ART (AI). Important issues related to the use of ART in patients with active TB disease include:
- When to start ART;
- Significant PK drug-drug interactions between anti-TB and ARV agents;
- The additive toxicities associated with concomitant ARV and anti-TB drug use; and
- The development of TB-associated immune reconstitution inflammatory syndrome (IRIS) after ART initiation.
# Patients Diagnosed with Tuberculosis While Receiving Antiretroviral Therapy
When TB is diagnosed in a patient receiving ART, the ARV regimen should be assessed with particular attention to potential PK interactions between ARVs and TB drugs (discussed below). The patient's ARV regimen may need to be modified to permit use of the optimal TB treatment regimen (see Tables 18 through 19e for dosing recommendations).
# Patients Not Yet Receiving Antiretroviral Therapy
In patients not taking ART at the time of TB diagnosis, delaying ART initiation for an extended period may result in further immune decline with increased risk of new opportunistic diseases and death, especially in patients with advanced HIV disease. Several randomized controlled trials have attempted to address the optimal timing of ART initiation in the setting of active TB disease. The results of these trials have caused a paradigm shift favoring earlier ART initiation in patients with TB. The timing of ART in specific patient populations is discussed below.
Patients with CD4 counts <50 cells/mm3: Three large randomized clinical trials in HIV/TB-coinfected patients, conducted in Africa and Asia, all convincingly showed that early ART in those with CD4 counts <50 cells/mm3 significantly reduced AIDS events or deaths. In these studies, early ART was defined as starting ART within 2 weeks, and no later than 4 weeks, after initiation of TB therapy. In all three studies, IRIS was more common in patients initiating ART earlier than in patients starting ART later, but the syndrome was infrequently associated with mortality. Collectively, these three trials support initiation of ART within the first 2 weeks of TB treatment in patients with CD4 counts <50 cells/mm3 (AI).
Patients with CD4 counts ≥50 cells/mm3: In the three studies mentioned above, there was no survival benefit for patients with CD4 counts ≥50 cells/mm3 who initiated ART at <2 weeks versus later (8 to 12 weeks) after beginning TB treatment. ART should not be delayed until TB treatment is completed, as this strategy was associated with higher mortality in the SAPiT-1 study. 11 Importantly, none of the studies demonstrated harm from earlier ART initiation, and there are many well-documented benefits from ART in people with HIV regardless of TB coinfection. It is unlikely that more trials will be conducted to specifically inform the decision on when to start ART in patients with TB and CD4 counts over 50 cells/mm3. However, given the growing body of evidence supporting early ART in general and the lack of data showing any harm in TB-coinfected patients, the Panel recommends ART initiation within 8 weeks of starting TB treatment for those with CD4 counts ≥50 cells/mm3 (AIII).
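The CD4-based timing recommendations above reduce to a simple rule, sketched below for illustration only; drug-resistant TB and TB meningitis, discussed next, require individualized expert management and are not modeled.

```python
def art_start_window_weeks(cd4_cells_per_mm3):
    """Weeks after starting TB treatment within which ART should begin,
    per the Panel recommendations summarized above (illustrative)."""
    if cd4_cells_per_mm3 < 50:
        return 2   # start within 2 weeks of TB treatment (AI)
    return 8       # start within 8 weeks of TB treatment (AIII)

print(art_start_window_weeks(35))   # 2
print(art_start_window_weeks(220))  # 8
```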
Patients with drug-resistant TB: Mortality rates in patients coinfected with multidrug-resistant (MDR) or extensively drug-resistant (XDR) TB and HIV are very high. 15 Retrospective case control studies and case series provide growing evidence of better outcomes associated with receipt of ART in such coinfected patients, 16,17 but the optimal timing for initiation of ART is unknown. Management of HIV-infected patients with drug-resistant TB is complex, and expert consultation is encouraged (BIII).
Patients with TB meningitis: TB meningitis is often associated with severe complications and a high mortality rate. In a study conducted in Vietnam, patients were randomized to immediate ART or to ART deferred until 2 months after initiation of TB treatment. A significantly higher rate of severe (Grade 4) adverse events was seen in patients who received immediate ART than in those with deferred therapy (80.3% vs. 69.1% for early and deferred ART, respectively; P = 0.04). 18 Therefore, caution should be exercised when initiating ART early in patients with TB meningitis (AI).

Rifamycins induce hepatic enzymes and interact with many ARV drugs. Tables 18 through 19e outline the magnitude of these interactions and provide dosing recommendations when rifamycins and selected ARV drugs are used concomitantly.
Because rifampin is a potent enzyme inducer, its use leads to significant reductions in ARV drug exposure; therefore, rifampin is not recommended for patients receiving PIs (boosted or unboosted), EVG, etravirine (ETR), rilpivirine (RPV), or TAF. Increased ARV doses are needed when rifampin is used with DTG, RAL, or MVC.
In contrast to its effect on other ARV drugs, rifampin leads to only a modest reduction in EFV concentrations. 21,22 Several observational studies suggest that good virologic, immunologic, and clinical outcomes may be achieved with standard doses of EFV. 23,24 Even though the current EFV label recommends increasing the EFV dose from 600 mg to 800 mg once daily in patients weighing >50 kg, 25 this dosage increase is generally not necessary.
Rifabutin, a weaker CYP3A4 enzyme inducer, is an alternative to rifampin, especially in patients receiving PI-or INSTI-based ARV regimens. Because rifabutin is a substrate of the CYP 450 enzyme system, its metabolism may be affected by NNRTIs or PIs. Therefore, rifabutin dosage adjustment is generally recommended (see Tables 18 through 19e for dosing recommendations).
Rifapentine is a long-acting rifamycin that can be given once weekly with INH to treat latent TB infection. 26 Once-daily rifapentine is a more potent inducer than daily rifampin therapy. 27 The impact of once-weekly dosing of rifapentine on the PKs of most ARV drugs has not been systematically explored. Once-daily rifapentine did not affect the oral clearance of EFV in HIV-infected individuals 28 and has minimal impact on EFV exposure when given once weekly, 6 whereas once-weekly rifapentine led to an increase, rather than a decrease, in RAL exposure in healthy volunteers. 7 Pending additional PK data on the effect of rifapentine on other ARV drugs, once-weekly INH plus rifapentine for LTBI treatment should only be given to patients receiving either an EFV- or RAL-based regimen (AIII).
After selecting the ARV drugs and rifamycin to use, clinicians should determine the appropriate dose of each and should closely monitor patients to ensure good control of both TB and HIV infection. Suboptimal HIV suppression or suboptimal response to TB treatment should prompt assessment of drug adherence, adequacy of drug exposure (consider therapeutic drug monitoring), and the possible presence of acquired HIV or TB drug resistance.
# Tuberculosis-Associated Immune Reconstitution Inflammatory Syndrome
IRIS is a clinical condition caused by ART-induced restoration of pathogen-specific immune responses to opportunistic infections such as TB, resulting in either the deterioration of a treated infection (paradoxical IRIS) or a new presentation of a previously subclinical infection (unmasking IRIS). TB-associated IRIS (TB-IRIS) has been reported in 8% to more than 40% of patients starting ART after TB is diagnosed, although the incidence depends on the definition of IRIS and the intensity of monitoring. 29,30 Predictors of IRIS include a baseline CD4 count <50 cells/mm3; higher on-ART CD4 counts; high pre-ART and lower on-ART HIV viral loads; severity of TB disease, especially high pathogen burden; and an interval of less than 30 days between initiation of TB and HIV treatments. 24 Most IRIS in HIV/TB disease occurs within 3 months of the start of ART. IRIS ranges from mild to severe to life-threatening. Patients with mild or moderately severe IRIS can be managed symptomatically or treated with nonsteroidal anti-inflammatory agents. Patients with more severe IRIS can be treated successfully with corticosteroids, although data on the optimal dose, duration of therapy, and overall safety and efficacy are limited. 34 In the presence of IRIS, neither TB therapy nor ART should be stopped because both therapies are necessary for the long-term health of the patient (AIII).
# Limitations to Treatment Safety and Efficacy
# Adherence to Antiretroviral Therapy (Last updated May 1, 2014; last reviewed May 1, 2014)
Strict adherence to antiretroviral therapy (ART) is key to sustained HIV suppression, reduced risk of drug resistance, improved overall health, quality of life, and survival, 1,2 as well as decreased risk of HIV transmission. 3 Conversely, poor adherence is the major cause of therapeutic failure, and achieving adherence to ART is a critical determinant of long-term outcomes in HIV-infected patients. For many chronic diseases, such as diabetes or hypertension, drug regimens remain effective even after treatment is resumed following a period of interruption. In the case of HIV infection, however, loss of virologic control as a consequence of non-adherence to ART may lead to emergence of drug resistance and loss of future treatment options. Many patients initiating ART or already on therapy are able to maintain consistent levels of adherence with resultant viral suppression, CD4+ T-lymphocyte (CD4) count recovery, and improved clinical outcomes. Others, however, have poor adherence from the outset of ART and/or experience periodic lapses in adherence over the lifelong course of treatment. Identifying those with adherence-related challenges that require attention and implementing appropriate strategies to enhance adherence are essential roles for all members of the treatment team.
Recent data underscore the importance of conceptualizing treatment adherence broadly to include early engagement in care and sustained retention in care. The concept of an HIV "treatment cascade" has been used to describe the process of HIV testing, linkage to care, initiation of effective ART, adherence to treatment, and retention in care. The U.S. Centers for Disease Control and Prevention estimates that only 36% of the people living with HIV in the United States are prescribed ART and that among these individuals, only 76% have suppressed viral loads. 4 Thus, to achieve optimal clinical outcomes and to realize the potential public health benefit of treatment as prevention, attention to each step in the treatment cascade is critical. 5 Therefore, provider skill and involvement to retain patients in care and help them achieve high levels of medication adherence are crucial.
This section provides updated guidance on assessing and monitoring adherence and outlines strategies to help patients maintain high levels of adherence.
# Factors Associated with Adherence Success and Failure
Adherence to ART can be influenced by a number of factors, including the patient's social situation and clinical condition, the prescribed regimen, and the patient-provider relationship. 6 It is critical that each patient receives and understands information about HIV disease, including the goals of therapy (achieving and maintaining viral suppression, decreasing HIV-associated morbidity and mortality, and preventing sexual transmission of HIV), the prescribed regimen (including dosing schedule and potential side effects), the importance of strict adherence to ART, and the potential for the development of drug resistance as a consequence of suboptimal adherence. However, information alone is not sufficient to ensure high levels of adherence; patients must also be positively motivated to initiate and maintain therapy.
From a patient perspective, nonadherence is often a consequence of one or more behavioral, structural, and psychosocial barriers (e.g., depression and other mental illnesses, neurocognitive impairment, low health literacy, low levels of social support, stressful life events, high levels of alcohol consumption and active substance use, homelessness, poverty, nondisclosure of HIV serostatus, denial, stigma, and inconsistent access to medications). Furthermore, patient age may affect adherence. For example, some adolescent and young adult HIV patients, in particular, have substantial challenges in achieving levels of adherence necessary for successful therapeutic outcomes (see HIV-Infected Adolescents section). 10,11 In addition, failure to adopt practices that facilitate adherence, such as linking medication taking to daily activities or using a medication reminder system or a pill organizer, is also associated with treatment failure. 12
Characteristics of one or more components of the prescribed regimen can affect adherence. Simple, once-daily regimens, 13 including those with low pill burden, no food requirement, and few side effects or toxicities, are associated with higher levels of adherence. 14,15 Many currently available ARV regimens are much easier to take and better tolerated than older regimens. Studies have shown that patients taking once-daily regimens have higher rates of adherence than those taking twice-daily dosing regimens. 15 However, data to support or refute the superiority of 1-pill versus 3-pill (individual drug product) once-daily fixed-dose regimens, as might be required for the use of some soon-to-be-available generic-based ARV regimens, are limited.
Characteristics of the clinical setting can also have important structural influences on the success or failure of medication adherence. Settings that provide comprehensive multidisciplinary care (e.g., with case managers, pharmacists, social workers, psychiatric care providers) are often more successful in supporting patients' complex needs, including their medication adherence-related needs. Further, specific settings, such as prisons and other institutional settings, may thwart or support medication adherence. Drug abuse treatment programs are often best suited to address substance use that may confound adherence and may offer services, such as directly observed therapy, that promote adherence.
Finally, a patient-provider relationship that enhances patient trust through non-judgmental and supportive care and use of motivational strategies can positively influence medication adherence.
# Routine Monitoring of Adherence and Retention in Care
Although there is no gold standard for assessing adherence, 1 properly implemented validated tools and assessment strategies can prove valuable in most clinical settings. Viral load suppression is one of the most reliable indicators of adherence and can be used as positive reinforcement to encourage continuous adherence.
When patients initiating ART fail to achieve viral suppression by 24 weeks of treatment, the possibility of suboptimal adherence and other factors must be assessed. Similarly, treatment failure as measured by detectable viral load during chronic care is most likely the result of non-adherence. Patient self-report, the most frequently used method for evaluating medication adherence, remains a useful tool for assessing adherence over time. However, self-reports must be properly and carefully assessed, as patients may overestimate adherence. While carefully assessed patient self-report of high-level adherence to ART has been associated with favorable viral load responses, 16,17 patient admission of suboptimal adherence is highly correlated with poor therapeutic response. The reliability of self-report often depends on how the clinician elicits the information. It is most reliable when ascertained in a simple, nonjudgmental, routine, and structured format that normalizes less-than-perfect adherence and minimizes socially desirable or "white coat adherence" responses. Some patients may selectively adhere to components of a regimen believed to have the fewest side effects or the lowest dosing frequency or pill burden. To allow patients to more accurately disclose lapses in adherence, some experts suggest that providers inquire about the number of missed doses during a defined time period rather than directly asking "Are you taking your medicines?" Others advocate simply asking patients to rate their adherence during the last 4 weeks on a 5- or 6-point Likert scale. 18,19 Regardless of how it is obtained, patient self-report, in contrast to other measures of adherence, allows for immediate patient-provider discussion to identify reasons for missed doses and to explore corrective strategies.
Other measures of adherence include pharmacy records and pill counts. Pharmacy records can be valuable when medications are obtained exclusively from a single source so that refills can be traced. Pill counts are commonly used but can be altered by patients. Other methods of assessing adherence include the use of therapeutic drug monitoring and electronic measurement devices (e.g., MEMS bottle caps and dispensing systems). However, these methods are costly and are used primarily in research settings.
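One common way to operationalize pharmacy-record review is the medication possession ratio (MPR): total days' supply dispensed divided by the days in the observation period. The sketch below is a minimal illustration with hypothetical fill data; real implementations must handle overlapping fills, multiple drugs, and gaps such as hospital stays.

```python
from datetime import date

def medication_possession_ratio(fills, start, end):
    """MPR over [start, end]: total days' supply dispensed / days in period.
    `fills` is a list of (fill_date, days_supplied) from pharmacy records."""
    period_days = (end - start).days
    supplied = sum(days for fill_date, days in fills if start <= fill_date <= end)
    return min(supplied / period_days, 1.0)  # cap at 100%

# Hypothetical 90-day window covered by three 30-day fills
fills = [(date(2016, 1, 4), 30), (date(2016, 2, 8), 30), (date(2016, 3, 21), 30)]
mpr = medication_possession_ratio(fills, date(2016, 1, 1), date(2016, 3, 31))
print(f"MPR: {mpr:.0%}")  # 100%
```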
# Interventions to Improve Adherence and Retention in Care
A continuum of ART adherence support services is necessary to meet individual patient needs. All health care team members, including physicians, physician assistants, nurse practitioners, nurse midwives, nurses, pharmacists, medication managers, and social workers, play integral roles in successful adherence programs. 17 Effective adherence interventions vary in modality and duration, and by clinical setting, provider, and patient. There are many options that can be customized to suit a range of needs and settings (see Table 13).
An increasing number of interventions have proven effective in improving adherence to ART. For descriptions of the interventions, see: . 23 Clinicians should provide all patients with a basic level of adherence-related information and support. Before writing the first prescription(s) for patients initiating or reinitiating ART, clinicians should assess the patient's adherence readiness. Clinicians should evaluate patients' knowledge about HIV disease, treatment, and prevention, and provide basic information about ART, viral load and CD4 count, the expected outcome of ART based on these parameters, the importance of strict adherence to ART, and the consequences of nonadherence. In addition, clinicians should assess patients' motivation to successfully adhere to ART, identify and support facilitating factors, and address potential barriers to adherence. Finally, clinicians should ensure that patients have the necessary medication-taking skills to follow the regimen as prescribed.
Given the wide array of treatment options, individualizing treatment with patient involvement in decision making is the cornerstone of treatment planning and therapeutic success. The first principle of successful treatment is to design an understandable plan to which the patient can commit. 24,25 It is important to consider the patient's daily schedule; the patient's tolerance of pill number, size, and frequency; and any issues affecting absorption (e.g., use of acid-reducing therapy and food requirements). With the patient's input, a medication choice and administration schedule should be tailored to his or her routine daily activities. If necessary, soliciting help from family members may also improve adherence. Patients who are naive to ART should understand that their first regimen usually offers the best chance for a simple regimen that affords long-term treatment success and prevention of drug resistance. Establishing a trusting patient-provider relationship over time and maintaining good communication will help to improve adherence and long-term outcomes. Medication taking can also be enhanced by the use of pill organizers and medication reminder aids (e.g., alarm clock, pager, calendar).
Positive reinforcement can greatly help patients maintain high levels of adherence. Techniques include informing patients of their low or suppressed HIV viral load levels and increases in CD4 cell counts. Motivational interviewing has also been used with some success. Recognizing high levels of adherence with incentives and rewards can facilitate treatment success in some patients. Adherence-contingent reward incentives such as meal tickets, grocery bags, lotto tickets, and cash have been used in the treatment of HIV and other chronic diseases. The effectiveness of using cash incentives to promote HIV testing, entry to care, and adherence to ART is currently being studied in the multi-site HPTN 065 trial. Other effective interventions include nurse home visits, a five-session group intervention, pager messaging, and couples- or family-based interventions. To maintain high levels of adherence in some patients, it is critically important to provide substance abuse therapy and to strengthen social support. Directly observed therapy (DOT) has been effective in providing ART to active drug users 26 but not to patients in a general clinic population. 27 To determine whether additional adherence or retention interventions are warranted, assessments should be done at each clinical encounter and should be the responsibility of the entire health care team. Routine monitoring of HIV viral load, pharmacy records, and indicators that measure retention in care are useful to determine whether more intense efforts are needed to improve adherence. Patients with a history of non-adherence to ART are at risk of poor adherence when restarting therapy with the same or new drugs. Special attention should be given to identifying and addressing any reasons for previous poor adherence. Preferential use of ritonavir-boosted protease inhibitor (PI/r)-based ART, which has a higher barrier to the development of resistance than other treatment options, should be considered if poor adherence is predicted.
The critical elements of adherence go hand in hand with linkage to care and retention in care. A recently released guideline provides a number of strategies to improve entry into care, retention in care, and adherence to therapy for HIV-infected patients. 5 As with adherence monitoring, research advances offer many options for systematic monitoring of retention in care that may be used in accordance with local resources and standards. The options include surveillance of visit adherence, gaps in care, and the number of visits during a specified period of time. 28
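Of the retention measures listed, gaps in care are especially easy to compute from visit dates. The following is a minimal, hypothetical sketch (the function name and the 180-day threshold are ours, not from the cited guidance):

```python
from datetime import date

def max_gap_in_care(visit_dates):
    """Longest interval, in days, between consecutive kept visits.

    A gap exceeding a locally chosen threshold (e.g., >180 days)
    can trigger re-engagement outreach.
    """
    ordered = sorted(visit_dates)
    return max(
        ((later - earlier).days for earlier, later in zip(ordered, ordered[1:])),
        default=0,
    )

visits = [date(2016, 1, 15), date(2016, 4, 20), date(2016, 12, 1)]
print(max_gap_in_care(visits))  # 225 -> exceeds a 180-day threshold
```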
# Conclusion
Adherence to ART is central to therapeutic success. Given the many available assessment strategies and interventions, the challenge for the treatment team is to select the techniques that best fit each patient and patient population, the treatment setting, and the available resources. In addition to maintaining high levels of medication adherence, attention to effective linkage to care, engagement in care, and retention in care is critical for successful treatment outcomes. To foster treatment success, there are interventions to support each step in the cascade of care, as well as guidance on systematic monitoring of each step in the cascade. 5
# Strategies and Examples
Use a multidisciplinary team approach.
Provide an accessible, trustworthy health care team.
- Nonjudgmental providers, nurses, social workers, pharmacists, and medication managers
Strengthen early linkage to care and retention in care.
- Encourage healthcare team participation in linkage to and retention in care.
Assess patient readiness to start ART.
Evaluate patient's knowledge about HIV disease, prevention and treatment and, on the basis of the assessment, provide HIV-related information.
- Considering the patient's current knowledge base, provide information about HIV, including the natural history of the disease, HIV viral load and CD4 count and expected clinical outcomes according to these parameters, and therapeutic and prevention consequences of non-adherence.
Identify facilitators, potential barriers to adherence, and necessary medication management skills before starting ART medication.
- Assess patient's cognitive competence and impairment.
- Assess behavioral and psychosocial challenges including depression, mental illnesses, levels of social support, high levels of alcohol consumption and active substance use, non-disclosure of HIV serostatus and stigma.
- Identify and address language and literacy barriers.
- Assess beliefs, perceptions, and expectations about taking ART (e.g., impact on health, side effects, disclosure issues, consequences of non-adherence).
- Ask about medication taking skills and foreseeable challenges with adherence (e.g., past difficulty keeping appointments, adverse effects from previous medications, issues managing other chronic medications, need for medication reminders and organizers).
- Assess structural issues including unstable housing, lack of income, unpredictable daily schedule, lack of prescription drug coverage, lack of continuous access to medications.
Provide needed resources.
- Provide or refer for mental health and/or substance abuse treatment.
- Provide resources to obtain prescription drug coverage, stable housing, social support, and income and food security.
Involve the patient in ARV regimen selection.
- Review regimen potency, potential side effects, dosing frequency, pill burden, storage requirements, food requirements, and consequences of nonadherence.
- Assess daily activities and tailor regimen to predictable and routine daily events.
- Consider preferential use of PI/r-based ART if poor adherence is predicted.
- Consider use of fixed-dose combination formulation.
- Assess if cost/co-payment for drugs can affect access to medications and adherence.
Assess adherence at every clinic visit.
- Monitor viral load as a strong biologic measure of adherence.
- Use a simple behavioral rating scale.
- Employ a structured format that normalizes or assumes less-than-perfect adherence and minimizes socially desirable or "white coat adherence" responses.
- Ensure that other members of the health care team also assess adherence.
Use positive reinforcement to foster adherence success.
- Inform patients of low or non-detectable levels of HIV viral load and increases in CD4 cell counts.
- When needed, consider providing incentives and rewards for achieving high levels of adherence and treatment success.
Identify the type of and reasons for nonadherence.
- Failure to fill the prescription(s)
- Failure to understand dosing instructions
Address identified problems with appropriate interventions.
- Use adherence-related tools to complement education and counseling interventions (e.g., pill boxes, dose planners, reminder devices).
- Use community resources to support adherence (e.g., visiting nurses, community workers, family, peer advocates).
- Use patient prescription assistance programs.
- Use motivational interviews.
Systematically monitor retention in care.
- Record and follow up on missed visits.
On the basis of any problems identified through systematic monitoring, consider options to enhance retention in care given resources available.
- Provide outreach for those patients who drop out of care.
- Use peer or paraprofessional treatment navigators.
- Employ incentives to encourage clinic attendance or recognize positive clinical outcomes resulting from good adherence.
- Arrange for directly observed therapy (if feasible).
Key to Acronyms: ART = antiretroviral therapy; CD4 = CD4 T lymphocyte; PI = protease inhibitor; PI/r = ritonavir-boosted protease inhibitor
# Switching Antiretroviral Therapy Because of Adverse Effects
Some patients experience treatment-limiting ART-associated toxicities; in these cases, ART must be modified. ART-associated adverse events can range from acute and potentially life threatening to chronic and insidious. Serious life-threatening events (eg, hypersensitivity reaction due to ABC, symptomatic hepatotoxicity, or severe cutaneous reactions) require the immediate discontinuation of all ARV drugs and reinitiation of an alternative regimen without overlapping toxicity. Toxicities that are not life-threatening (eg, urolithiasis with atazanavir, renal tubulopathy with tenofovir disoproxil fumarate) can usually be managed by substituting another ARV agent for the presumed causative agent without interrupting ART. Other chronic, non-life-threatening adverse events (eg, dyslipidemia) can be addressed either by switching the potentially causative agent for another agent or by managing the adverse event with additional pharmacological or nonpharmacological interventions. Management strategies must be individualized for each patient.
Switching from an effective ARV regimen (or agent) to a new regimen (or agent) must be done carefully and only when the potential benefits of the change outweigh the potential complications of altering treatment.
The fundamental principle of regimen switching is to maintain viral suppression. When selecting a new agent or regimen, providers should be aware that resistance mutations selected for, regardless of whether they were previously or currently identified by genotypic resistance testing, are archived in HIV reservoirs. Even if resistance mutations are absent from subsequent resistance test results, they may reappear under selective pressure. It is critical that providers review the following before implementing any treatment switch (a minimal checklist sketch follows the list):
- The patient's medical and complete ARV history, including prior virologic responses to ART;
- All previous resistance test results;
- Viral tropism (if maraviroc is being considered);
- HLA-B*5701 status (if ABC is being considered);
- Comorbidities;
- Adherence history;
- Prior intolerances to any ARVs; and
- Concomitant medications and supplements for potential drug interactions with ARVs.
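For illustration, the review items above can be tracked as a simple checklist. The sketch below is hypothetical (the item wording, data structure, and function are ours) and merely mirrors the list; it is not a substitute for clinical review.

```python
# Hypothetical pre-switch checklist mirroring the review items above.
# Illustrative only; item names are ours, not a defined data standard.
PRE_SWITCH_CHECKLIST = (
    "complete ARV and virologic response history reviewed",
    "all previous resistance test results reviewed",
    "viral tropism checked (if maraviroc is being considered)",
    "HLA-B*5701 status checked (if abacavir is being considered)",
    "comorbidities reviewed",
    "adherence history reviewed",
    "prior ARV intolerances reviewed",
    "concomitant medications/supplements screened for interactions",
)

def outstanding_items(completed):
    """Return checklist items not yet documented as complete."""
    return [item for item in PRE_SWITCH_CHECKLIST if item not in completed]

done = {"comorbidities reviewed", "adherence history reviewed"}
for item in outstanding_items(done):
    print("TODO:", item)
```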
A patient's acceptance of new food or dosing requirements must also be assessed. In some cases, medication costs may also be a factor to consider before switching treatment. Signs and symptoms of ART-associated adverse events may mimic those of comorbidities, adverse effects of concomitant medications, or HIV infection itself. Therefore, concurrent with ascribing a particular clinical event to ART, alternative causes for the event should be investigated. In the case of a severe adverse event, it may be necessary to discontinue or switch ARVs pending the outcome of such an investigation. For the first few months after an ART switch, the patient should be closely monitored for any new adverse events. The patient's viral load should also be monitored to ensure continued viral suppression. Declines in bone mineral density (BMD) have been observed upon initiation of most ART regimens. Switching from TDF to alternative ARV agents has been shown to increase bone density, but the clinical significance of this increase remains uncertain.
TAF is associated with smaller declines in BMD than TDF, and with improvement in BMD upon switching from TDF. The long-term impact of TAF on patients with osteopenia or osteoporosis is unknown; close clinical monitoring is recommended in this setting.
# Bone Marrow Suppression
Switch from ZDV to TDF, TAF, or ABC. b ZDV has been associated with neutropenia and macrocytic anemia.

# Lipoatrophy
Peripheral lipoatrophy is a legacy of prior thymidine analog (d4T and ZDV) use. Switching from these ARVs prevents worsening lipoatrophy, but fat recovery is typically slow (it may take years) and incomplete.
# Lipohypertrophy
Accumulation of visceral, truncal, dorso-cervical, and breast fat has been observed during ART, particularly during use of older PI-based regimens (eg, IDV), but whether ART directly causes fat accumulation remains unclear. There is no clinical evidence that switching to another first-line regimen will reverse weight or visceral fat gain.

# Renal Effects
TDF may cause proximal renal tubulopathy.
Switching from TDF to TAF is associated with improvement in proteinuria and renal biomarkers. The long-term impact of TAF on patients with pre-existing renal disease, including overt proximal tubulopathy, is unknown, and close clinical monitoring is recommended in this setting.
Switch from ATV/c, ATV/r, or LPV/r to DTG, RAL, or an NNRTI. COBI and DTG, and to a lesser extent RPV, can increase SCr through inhibition of creatinine secretion; this effect does not affect glomerular filtration. However, assess for renal dysfunction if SCr increases by >0.4 mg/dL.
# Stones (Nephrolithiasis and Cholelithiasis)
Switch from ATV, ATV/c, or ATV/r to DRV/c, DRV/r, an INSTI, or an NNRTI, assuming that ATV is believed to be causing the stones.
a In patients with chronic active HBV infection, another agent active against HBV should be substituted for TDF.
b ABC should be used only in patients known to be HLA-B*5701 negative.
c TDF reduces ATV levels; therefore, unboosted ATV should not be co-administered with TDF. Long-term data for unboosted ATV are unavailable.

# Cost Considerations and Antiretroviral Therapy
Although antiretroviral therapy (ART) is expensive (see Table 16 below), the cost-effectiveness of ART has been demonstrated in analyses of older 1 and newer regimens, 2,3 as well as for treatment-experienced patients with drug-resistant HIV. 4 Given the recommendations for immediate initiation of lifelong treatment and the increasing number of patients taking ART, the Panel now introduces cost-related issues pertaining to medication adherence and cost-containment strategies, as discussed below.
# Costs as They Relate to Adherence from a Patient Perspective
Cost sharing: Under cost sharing, the patient is responsible for some of the medication cost burden (usually via co-payments, co-insurance, or deductibles); these costs are often higher for branded medications than for generic medications. In one comprehensive review, increased patient cost sharing resulted in decreased medication adherence and more frequent drug discontinuation; for patients with chronic diseases, increased cost sharing was also associated with increased use of the medical system. 5 Conversely, co-payment reductions, such as those that might be used to incentivize prescribing of generic drugs, have been associated with improved adherence in patients with chronic diseases. 6 Although cost sharing disproportionately affects low-income patients, resources (e.g., the Ryan White AIDS Drug Assistance Program [ADAP]) are available to assist eligible patients with co-pays and deductibles. Given the clear association between out-of-pocket costs for patients with chronic diseases and the ability of those patients to pay for and adhere to medications, clinicians should minimize patients' out-of-pocket drug-related expenses whenever possible.
Prior authorizations: As a cost-containment strategy, some programs require that clinicians obtain prior authorizations or permission before prescribing newer or more costly treatments rather than older or less expensive drugs. Although there are data demonstrating that prior authorizations do reduce spending, several studies have also shown that prior authorizations result in fewer prescriptions filled and increased nonadherence. Prior authorizations in HIV care specifically have been reported to cost over $40 each in provider personnel time (a hidden cost) and have substantially reduced timely access to medications. 10
Generic ART: The impact of the availability of generic antiretroviral (ARV) drugs on the selection of ART in the United States is unknown. Because U.S. patent laws currently limit the co-formulation of some generic alternatives to branded drugs, generic options may result in increased pill burden. To the extent that pill burden, rather than dosing frequency, reduces adherence, generic ART could lead to decreased costs but at the potential expense of worsening virologic suppression rates and poorer clinical outcomes. 11,12 Furthermore, prescribing the individual, less-expensive generic components of a branded co-formulated product rather than the branded product itself could, under some insurance plans, lead to higher copays, an out-of-pocket cost increase that may reduce medication adherence.
# Potential Cost Containment Strategies from a Societal Perspective
Given resource constraints, it is important to maximize the use of resources without sacrificing clinical outcomes. Evidence-based revisions to these guidelines recommend tailored laboratory monitoring for patients with long-term virologic suppression on ART as one possible way to provide overall cost savings. Data suggest that continued CD4 monitoring yields no clinical benefit for patients whose viral loads are suppressed and whose CD4 counts exceed 200 cells/mm3 after 48 weeks of therapy. 13 A reduction in laboratory use from biannual to annual CD4 monitoring could save approximately $10 million per year in the United States 14 (see the Laboratory Monitoring section). Although this is a small proportion of the overall costs associated with HIV care, such a strategy could also reduce patients' personal expenses if they have deductibles for laboratory tests.
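The magnitude of such an estimate can be sanity-checked with back-of-the-envelope arithmetic. The inputs below are illustrative assumptions of ours (a plausible number of virologically suppressed patients in care and a plausible unit cost per CD4 test), not figures taken from the cited analysis:

$$\text{annual savings} \approx N_{\text{patients}} \times n_{\text{tests avoided per year}} \times c_{\text{test}} \approx 500{,}000 \times 1 \times \$20 \approx \$10 \text{ million.}$$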
The present and future availability of generic formulations of certain ARV drugs, despite the potential caveats of increased pill burden and reduced adherence, offers other money-saving possibilities on a much greater scale. One analysis suggests the possibility of saving approximately $900 million nationally in the first year of switching from a branded fixed-dose combination product to a three-pill regimen containing generic efavirenz. 3

In summary, understanding HIV- and ART-related costs in the United States is complicated because of the wide variability in medical coverage, accessibility, and expenses across regions, insurance plans, and pharmacies. In an effort to retain excellent clinical outcomes in an environment of cost-containment strategies, providers should remain informed about current insurance and payment structures, ART costs (see Table 16 below for estimates of drugs' average wholesale prices), discounts among preferred pharmacies, and available generic ART options. Providers should work with patients and their case managers and social workers to understand their patients' particular pharmacy benefit plans and potential financial barriers to filling their prescriptions. Additionally, providers should familiarize themselves with ARV affordability resources (such as ADAP and pharmaceutical company patient assistance programs for patients who qualify) and refer patients to such assistance if needed.

# Drug-Drug Interactions
Pharmacokinetic (PK) drug-drug interactions between antiretroviral (ARV) drugs and concomitant medications are common and may lead to increased or decreased drug exposure. In some instances, changes in drug exposure may increase toxicities or affect therapeutic responses. When prescribing or switching one or more drugs in an ARV regimen, clinicians must consider the potential for drug-drug interactions, both those affecting ARVs and those affecting other drugs a patient is taking. A thorough review of concomitant medications in consultation with a clinician with expertise in ARV pharmacology can help in designing a regimen that minimizes undesirable interactions. Recommendations for managing a particular drug interaction may differ depending on whether a new ARV is being initiated in a patient on a stable concomitant medication or a new concomitant medication is being initiated in a patient on a stable ARV regimen. The magnitude and significance of interactions are difficult to predict when several drugs with competing metabolic pathways are prescribed concomitantly. When prescribing interacting drugs is necessary, clinicians should be vigilant in monitoring for therapeutic efficacy and/or concentration-related toxicities.
# Mechanisms of Pharmacokinetic Interactions
PK interactions may occur during absorption, metabolism, or elimination of the ARV and/or the interacting drugs. The most common mechanisms of interactions are described below and listed for each ARV drug in Table 17.
# Pharmacokinetic Interactions Affecting Drug Absorption
The extent of oral absorption of drugs can be affected by the following mechanisms:
- Acid-reducing agents, such as proton pump inhibitors, H2 antagonists, or antacids, can reduce the absorption of ARVs that require gastric acidity for optimal absorption (ie, atazanavir and rilpivirine).
- Products that contain polyvalent cations, such as aluminum-, calcium-, or magnesium-containing antacids, supplements, or iron products, can bind to integrase strand transfer inhibitors (INSTIs) and reduce absorption of these ARV agents.
- Drugs that induce or inhibit the enzyme CYP3A4 or the efflux transporter P-glycoprotein in the intestines may reduce or promote the absorption of other drugs.
# Pharmacokinetic Interactions Affecting Hepatic Metabolism
Two major enzyme systems are most frequently responsible for clinically significant drug interactions.
1. The cytochrome P450 enzyme system is responsible for the metabolism of many drugs, including the nonnucleoside reverse transcriptase inhibitors (NNRTIs), protease inhibitors (PIs), the CCR5 antagonist maraviroc (MVC), and the INSTI elvitegravir (EVG). Cytochrome P450 3A4 (CYP3A4) is the enzyme most commonly responsible for drug metabolism, though multiple enzymes may be involved in the metabolism of a drug. ARVs and concomitant medications may be inducers, inhibitors, and/or substrates of these enzymes.
2. The uridine diphosphate (UDP)-glucuronosyltransferase (UGT) 1A1 enzyme is the primary enzyme responsible for the metabolism of the INSTIs dolutegravir (DTG) and raltegravir (RAL). Drugs that induce or inhibit the UGT enzyme can affect the PKs of these INSTIs.
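As a toy illustration of these mechanisms (and nothing more), the sketch below encodes the pathway assignments named in this section and flags a potential interaction when one drug inhibits the enzyme that clears another. The dictionary contents follow the text above; the function and structure are our own invention, and real prescribing decisions belong to Tables 17 through 20b.

```python
# Toy illustration of the interaction mechanisms described above.
# Drug-to-pathway assignments follow the text; anything beyond that
# (names, structure) is ours. Real decisions belong to Tables 17-20b.
METABOLIZED_BY = {
    "NNRTIs": "CYP3A4", "PIs": "CYP3A4", "maraviroc": "CYP3A4",
    "elvitegravir": "CYP3A4",
    "dolutegravir": "UGT1A1", "raltegravir": "UGT1A1",
}
INHIBITS = {
    "ritonavir": "CYP3A4",   # PK enhancer (booster)
    "cobicistat": "CYP3A4",  # PK enhancer (booster)
}

def potential_interaction(victim, perpetrator):
    """Flag when the perpetrator inhibits the enzyme that clears the victim."""
    pathway = METABOLIZED_BY.get(victim)
    return pathway is not None and INHIBITS.get(perpetrator) == pathway

print(potential_interaction("elvitegravir", "cobicistat"))  # True: boosted EVG
print(potential_interaction("raltegravir", "ritonavir"))    # False: UGT1A1 substrate
```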
# Pharmacokinetic Enhancers (Boosters)
PK enhancing is a strategy used to increase the exposure of an ARV by concomitantly administering a drug that inhibits the enzymes that metabolize the ARV. Currently, two agents are used as PK enhancers: ritonavir (RTV) and cobicistat (COBI). Both agents are potent inhibitors of the CYP3A4 enzyme, resulting in higher plasma concentrations of coadministered ARVs that are metabolized by this pathway.

Colchicine (when coadministered with an RTV- or COBI-boosted regimen):

For treatment of gout flares:
- Colchicine 0.6 mg for 1 dose, followed by 0.3 mg 1 hour later. Do not repeat the dose for at least 3 days.

For prophylaxis of gout flares:
- If the original dose was colchicine 0.6 mg BID, decrease to colchicine 0.3 mg once daily. If the regimen was 0.6 mg once daily, decrease to 0.3 mg every other day.

For treatment of familial Mediterranean fever:
- Do not exceed colchicine 0.6 mg once daily or 0.3 mg BID.

Metformin (with DTG):
- Limit the metformin dose to no more than 1,000 mg per day. When starting or stopping DTG in a patient on metformin, dose adjustment of metformin may be necessary to maintain optimal glycemic control and/or minimize GI symptoms.

# Polyvalent Cation Supplements
Mg, Al, Fe, Ca, Zn, including multivitamins with minerals: these products can bind INSTIs and reduce their absorption (see Pharmacokinetic Interactions Affecting Drug Absorption above).

# Preventing Secondary Transmission of HIV
Despite substantial advances in the prevention and treatment of HIV infection in the United States, the rate of new infections has remained stable. Although earlier prevention interventions were mainly behavioral, recent data demonstrate the strong impact of antiretroviral therapy (ART) on secondary HIV transmission. The most effective strategy to stem the spread of HIV will probably be a combination of behavioral, biological, and pharmacological interventions. 3
# Prevention Counseling
Counseling and related behavioral interventions for those living with HIV infection can reduce behaviors associated with secondary transmission of HIV. Each patient encounter offers the clinician an opportunity to reinforce HIV prevention messages, but multiple studies show that prevention counseling is frequently neglected in clinical practice. Although delivering effective prevention interventions in a busy practice setting may be challenging, clinicians should be aware that patients often look to their providers for messages about HIV prevention. Multiple approaches to prevention counseling are available, including formal guidance from the Centers for Disease Control and Prevention (CDC) for incorporating HIV prevention into medical care settings. Such interventions have been demonstrated to be effective in changing sexual risk behavior and can reinforce self-directed behavior change early after diagnosis. 9 CDC's "Prevention Is Care" campaign helps providers (and members of a multidisciplinary care team) integrate simple methods to prevent transmission by HIV-infected individuals into routine care. These prevention interventions are designed to reduce the risk of secondary HIV transmission through sexual contact. The interventions are designed generally for implementation at the community or group level, but some can be adapted and administered in clinical settings by a multidisciplinary care team.
# Need for Screening for High-Risk Behaviors
The primary care visit provides an opportunity to screen patients for ongoing high-risk drug-use and sexual behaviors that can transmit HIV infection. Routine screening and symptom-directed testing for and treatment of sexually transmitted diseases (STDs), as recommended by CDC, 10 remain essential adjuncts to prevention counseling. Genital ulcers may facilitate HIV transmission, and STDs may increase HIV viral load in plasma and genital secretions. 7 They also provide objective evidence of unprotected sexual activity, which should prompt prevention counseling.
The contribution of substance and alcohol use to HIV risk behaviors and transmission has been well established in multiple populations; therefore, effective counseling for injection and noninjection drug users is essential to prevent HIV transmission. Identifying the substance(s) of use is important because HIV prevalence, transmission risk, risk behaviors, transmission rates, and the potential for pharmacologic intervention all vary according to the type of substance used. Risk-reduction strategies for injection drug users (IDUs), in addition to condom use, include needle exchange and instruction in cleaning drug paraphernalia. Evidence supporting the efficacy of interventions to reduce injection drug use risk behavior also exists. Interventions include both behavioral strategies 22 and opiate substitution treatment with methadone or buprenorphine. No successful pharmacologic interventions have been found for cocaine and methamphetamine users; cognitive and behavioral interventions demonstrate the greatest effect on reducing the risk behaviors of these users. Given the significant impact of cocaine and methamphetamine on sexual risk behavior, reinforcement of sexual risk-reduction strategies is important. 28
# Antiretroviral Therapy as Prevention
ART can play an important role in preventing HIV transmission. Lower levels of plasma HIV RNA have been associated with decreases in the concentration of virus in genital secretions. Observational studies have demonstrated an association between low serum or genital HIV RNA and a decreased rate of HIV transmission among serodiscordant heterosexual couples. 29 Ecological studies of communities with relatively high concentrations of men who have sex with men (MSM) and IDUs suggest that increased use of ART is associated with decreased community viral load and reduced rates of new HIV diagnoses. These data suggest that the risk of HIV transmission is low when an individual's viral load is below 400 copies/mL, 35,38 but the threshold below which transmission of the virus becomes impossible is unknown. Furthermore, for ART to be effective at preventing transmission, it is assumed that: (1) ART is capable of durably and continuously suppressing viremia; (2) adherence to an effective ARV regimen is high; and (3) there is no concomitant STD. Importantly, detection of HIV RNA in genital secretions has been documented in individuals with controlled plasma HIV RNA, and data describe a differential in the concentrations of most ARV drugs between the blood and genital compartments. 30,39 At least one case of HIV transmission from a patient with suppressed plasma viral load to a monogamous uninfected sexual partner has been reported. 40 In the HPTN 052 trial in HIV-discordant couples, HIV-infected partners who were ART naive and had CD4 counts between 350 and 550 cells/mm3 were randomized to initiate or delay ART. In this study, those who initiated ART had a 96% reduction in HIV transmission to the uninfected partners. 3 Almost all of the participants were in heterosexual relationships, all participants received risk-reduction counseling, and the absolute number of transmission events was low: 1 among ART initiators and 27 among ART delayers. Over the course of the study, virologic failure rates were less than 5%, much lower than generally seen in individuals taking ART for their own health. These low virologic failure rates suggest high levels of adherence to ART in the study, which may have been facilitated by the frequency of study follow-up (study visits were monthly) and by participants' sense of obligation to protect their uninfected partners. Therefore, caution is indicated when interpreting the extent to which ART for the HIV-infected partner protects seronegative partners in contexts where adherence, and thus rates of continuous viral suppression, may be lower. Furthermore, for HIV-infected MSM and IDUs, biological and observational data suggest that suppressive ART should also protect against transmission, but the actual extent of protection has not been established.
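The reported efficacy can be roughly reproduced from the raw event counts. Assuming, for illustration only, approximately equal follow-up time in the two arms (the trial's published estimate came from a time-to-event model, so this is a simplification):

$$\widehat{RR} \approx \frac{1 \text{ linked transmission (early ART)}}{27 \text{ linked transmissions (delayed ART)}} \approx 0.04, \qquad \text{risk reduction} = 1 - \widehat{RR} \approx 96\%.$$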
Rates of HIV risk behaviors can increase coincident with the availability of potent combination ART, in some cases almost doubling compared with rates in the era before highly effective therapy. 9 A meta-analysis demonstrated that the prevalence of unprotected sex acts was increased among HIV-infected individuals who believed that receiving ART or having a suppressed viral load protected against transmitting HIV. 41 Attitudinal shifts away from safer sexual practices since the availability of potent ART underscore the importance of provider-initiated HIV prevention counseling. With wider recognition that effective treatment decreases the risk of HIV transmission, it is particularly important for providers to help patients understand that a sustained viral load below the limits of detection dramatically reduces, but does not absolutely eliminate, HIV in the genital and blood compartments, and therefore cannot guarantee that transmission to others is impossible. Maximal suppression of viremia depends not only on the potency of the ARV regimen used but also on the patient's adherence to prescribed therapy. Suboptimal adherence can lead to viremia that not only harms the patient but also increases his or her risk of transmitting HIV (including drug-resistant strains) via sex or needle sharing. Screening for and treating behavioral conditions that can affect adherence, such as depression and alcohol and substance use, improve overall health and reduce the risk of secondary transmission.
# Summary
Consistent and effective use of ART resulting in a sustained reduction in viral load, in conjunction with consistent condom use, safer sex and drug-use practices, and detection and treatment of STDs, are essential tools for prevention of sexual and blood-borne transmission of HIV. Given these important considerations, medical visits provide a vital opportunity to reinforce HIV prevention messages, discuss sex- and drug-related risk behaviors, diagnose and treat intercurrent STDs, review the importance of medication adherence, and foster open communication between provider and patient.
# Tuberculosis/HIV Coinfection (Last updated July 14, 2016; last reviewed July 14, 2016)
# Management of Latent Tuberculosis Infection in HIV-Infected Patients
According to the World Health Organization (WHO), approximately one-third of the world's population is infected with tuberculosis (TB), with a 5% to 10% lifetime risk of progressing to active disease. 1 HIV-infected persons who are coinfected with TB have a much higher risk of developing active TB than HIV-negative individuals, and this risk increases as immune deficiency worsens. 2
# Anti-Tuberculosis Therapy as Preventive Tuberculosis Treatment
Many clinical trials have demonstrated that treatment for latent tuberculosis infection (LTBI) reduces the risk of active TB in HIV-infected persons, especially those with a positive tuberculin skin test. 3 After active TB disease has been excluded, the CDC recommends one of the following regimens for LTBI treatment:
- Isoniazid (INH) daily or twice weekly for 9 months
- INH plus rifapentine once weekly for 12 weeks
- Rifampin (or rifabutin) daily for 4 months

For more than 30 years, INH has been the cornerstone of treatment for LTBI to prevent active TB. It can be coadministered with any antiretroviral (ARV) regimen and is safe to use in pregnant women. The combination of INH and rifapentine administered weekly for 12 weeks as directly observed therapy (DOT) is another treatment option for LTBI. In the PREVENT TB study, rifapentine plus INH for 12 weeks was as safe and effective as 9 months of INH alone in preventing TB in HIV-infected patients who were not on ART. 4 There was no difference in TB incidence in 1,148 South African HIV-infected adults who were
# Panel's Recommendations
- Selection of a tuberculosis (TB)-preventive treatment for HIV-infected individuals coinfected with latent tuberculosis infection (LTBI) should be based on the individual's antiretroviral therapy (ART) regimen as noted below:
- Any ART regimen can be used when isoniazid alone is used for LTBI treatment (AII).
- Only efavirenz (EFV)- or raltegravir (RAL)-based regimens (in combination with either abacavir/lamivudine or tenofovir disoproxil fumarate/emtricitabine) can be used with once-weekly isoniazid plus rifapentine (AIII).
- If rifampin or rifabutin is used to treat LTBI, clinicians should review Tables 18 through 19e to assess the potential for interactions among different antiretroviral (ARV) drugs and the rifamycins (BIII).
- All HIV-infected patients with active TB who are not on ART should be started on ART as described below:
- In patients with CD4 counts <50 cells/mm 3 : Initiate ART as soon as possible, but within 2 weeks of starting TB treatment (AI).
- In patients with CD4 counts ≥50 cells/mm 3 : Initiate ART within 8 weeks of starting TB treatment (AIII).
- In all HIV-infected pregnant women: Initiate ART as early as feasible, for treatment of maternal HIV infection and to prevent mother-to-child transmission (MTCT) of HIV (AIII).
- In patients with tuberculous meningitis: Caution should be exercised when initiating ART early, as high rates of adverse events and deaths have been reported in a randomized trial (AI).
- Rifamycins are critical components of TB treatment regimens and should be included for HIV-infected patients with active TB, unless precluded because of TB resistance or toxicity. However, rifamycins have a considerable potential for drug-drug interactions.
Clinicians should review Tables 18 through 19e to assess the potential for interactions among different ARV drugs and the rifamycins (BIII).
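The CD4-based timing rule above lends itself to a compact summary. The following is a minimal, hypothetical sketch of that logic for illustration only; it is not clinical software and omits the many case-specific judgments the recommendations require.

```python
# Minimal sketch of the CD4-based ART-timing recommendation above.
# Hypothetical helper for illustration only; not clinical software.
def art_start_window(cd4_count, pregnant=False, tb_meningitis=False):
    """Suggested window for starting ART after TB treatment begins."""
    if tb_meningitis:
        return "use caution with early ART (high adverse-event rates reported)"
    if pregnant:
        return "start as early as feasible"
    if cd4_count < 50:
        return "start as soon as possible, within 2 weeks"
    return "start within 8 weeks"

print(art_start_window(30))    # start as soon as possible, within 2 weeks
print(art_start_window(420))   # start within 8 weeks
print(art_start_window(420, pregnant=True))
```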
# Adverse Effects of Antiretroviral Agents (Last updated July 14, 2016; last reviewed July 14, 2016)
The overall benefits of viral suppression and improved immune function as a result of effective antiretroviral therapy (ART) far outweigh the risks associated with the adverse effects of some antiretroviral (ARV) drugs. However, adverse effects have been reported with the use of all ARV drugs and, in the earlier era of combination ART, were among the most common reasons for switching or discontinuing therapy and for medication nonadherence. 1 Fortunately, newer ARV regimens are associated with fewer serious and intolerable adverse effects than regimens used in the past. Generally, less than 10% of ART-naive patients enrolled in randomized trials have treatment-limiting adverse events. However, the longer-term complications of ART can be underestimated because most clinical trials enroll a select group of patients based on highly specific inclusion criteria and the duration of participant follow-up is relatively short. Because ART is now recommended for all patients regardless of CD4 T lymphocyte (CD4) cell count, and therapy must be continued indefinitely, the focus of patient management has evolved from identifying and managing early ARV-related toxicities to individualizing therapy to avoid long-term adverse effects such as bone or renal toxicity, dyslipidemia, insulin resistance, and accelerated cardiovascular disease. To achieve sustained viral suppression over a lifetime, both long-term and short-term ART toxicities must be anticipated and overcome. The clinician must consider potential adverse effects when selecting an ARV regimen, as well as the individual patient's comorbidities, concomitant medications, and prior history of drug intolerances.
Several factors may predispose individuals to adverse effects of ARV medications, such as:
- Concomitant use of medications with overlapping and additive toxicities
- Comorbid conditions that increase the risk of or exacerbate adverse effects (eg, alcoholism or coinfection with viral hepatitis 2,3 may increase the risk of hepatotoxicity; psychiatric disorders may be exacerbated by efavirenz-related and, infrequently, integrase strand transfer inhibitor-related CNS toxicities; 4,5 and borderline or mild renal dysfunction increases the risk of nephrotoxicity from tenofovir disoproxil fumarate)
- Drug-drug interactions that may increase toxicities of ARV drugs or concomitant medications
- Genetic factors that predispose patients to abacavir (ABC) hypersensitivity reaction, 6,7 EFV neuropsychiatric toxicity, 8 and atazanavir (ATV)-associated hyperbilirubinemia 9

Information on the adverse effects of ARVs is outlined in several tables in the guidelines. Table 14 provides clinicians with a list of the most common and/or severe known ARV-associated adverse events for each drug class. The most common adverse effects of individual ARV agents are summarized in Appendix B, Tables 1-6.
Table 14. Antiretroviral Therapy-Associated Common and/or Severe Adverse Effects (page 1 of 5)
N/A indicates either that there are no reported cases of the particular side effect or that data for the specific ARV drug class are not available. See Appendix B for additional information listed by drug.

# Hypersensitivity Reactions
ABC: HSR symptoms (in order of descending frequency): fever, rash, malaise, nausea, headache, myalgia, chills, diarrhea, vomiting, abdominal pain, dyspnea, arthralgia, and respiratory symptoms. Symptoms worsen with continuation of ABC. Patients, regardless of HLA-B*5701 status, should not be rechallenged with ABC if HSR is suspected.

NVP: Hypersensitivity syndrome of hepatotoxicity and rash that may be accompanied by fever, general malaise, fatigue, myalgias, arthralgias, blisters, oral lesions, conjunctivitis, facial edema, eosinophilia, renal dysfunction, granulocytopenia, or lymphadenopathy. Risk is greater for ARV-naive women with pre-NVP CD4 count >250 cells/mm3 and men with pre-NVP CD4 count >400 cells/mm3. Overall, risk is higher for women than men. Two-week dose escalation of NVP reduces risk.

# Renal Effects
TAF: Less impact on renal biomarkers and lower rates of proteinuria than TDF.
ATV and LPV/r: Increased risk of chronic kidney disease in a large cohort study.
IDV: ↑SCr, pyuria, renal atrophy or hydronephrosis.
IDV, ATV: Stone and crystal formation; adequate hydration may reduce risk.
COBI and DTG: Inhibit Cr secretion without reducing renal glomerular function.
# Other Mechanisms of Pharmacokinetic Interactions
Knowledge of drug transporters is evolving, elucidating additional drug interaction mechanisms. For example, DTG decreases the renal clearance of metformin by inhibiting organic anion transporters in renal tubular cells. Similar transporters aid hepatic, renal, and biliary clearance of drugs and may be susceptible to drug interactions. ARVs and concomitant medications may be inducers, inhibitors, and/or substrates of these drug transporters.
Tables 18-20b provide information on known or suspected drug interactions between ARV agents and commonly prescribed medications, based on published PK data or information from product labels. The tables provide general guidance on drugs that should not be coadministered and recommendations for dose modifications or alternative therapy.
b. Certain listed drugs are contraindicated on the basis of theoretical considerations. Thus, drugs with narrow therapeutic indices and suspected metabolic involvement with CYP 3A, 2D6, or unknown pathways are included in this table. Actual interactions may or may not occur in patients.
c. HCV agents listed include only those that are commercially available at the publication of these guidelines.
d. Use of oral midazolam is contraindicated. Single-dose parenteral midazolam can be used with caution and can be given in a monitored situation for procedural sedation.
e. The manufacturer of cisapride has a limited-access protocol for patients who meet specific clinical eligibility criteria.
f. In healthy volunteer studies, a high rate of Grade 4 serum transaminase elevation was seen when a higher dose of RTV was added to LPV/r or SQV or when double-dose LPV/r was used with rifampin to compensate for rifampin's induction effect; therefore, these dosing strategies should not be used when alternatives exist.
g. Phenothiazines include chlorpromazine, fluphenazine, mesoridazine, perphenazine, prochlorperazine, promethazine, and thioridazine.
# Suggested alternatives to:
- Lovastatin, simvastatin: Fluvastatin, pitavastatin, and pravastatin (except for pravastatin with DRV/r) have the least potential for drug-drug interactions (see Table 19a). Use atorvastatin and rosuvastatin with caution; start with the lowest possible dose and titrate based on tolerance and lipid-lowering efficacy.
- Rifampin: Rifabutin (with dosage adjustment; see Tables 19a and 19b)

# Tadalafil
For Treatment of Erectile Dysfunction:
- Start with a tadalafil 5-mg dose and do not exceed a single dose of 10 mg every 72 hours. Monitor for adverse effects of tadalafil.
# For Treatment of PAH
In patients on a PI >7 days:
- Start with tadalafil 20 mg once daily and increase to 40 mg once daily based on tolerability.
In patients on tadalafil who require a PI:
- Stop tadalafil ≥24 hours before PI initiation. Seven days after PI initiation, restart tadalafil at 20 mg once daily and increase to 40 mg once daily based on tolerability.
In patients switching between COBI and RTV:
- Maintain tadalafil dose.
# For Treatment of Benign Prostatic Hyperplasia
Maximum recommended daily dose is 2.5 mg per day.
# Vardenafil
All PIs: RTV 600 mg BID ↑ vardenafil AUC 49-fold. Start with vardenafil 2.5 mg every 72 hours and monitor for adverse effects of vardenafil.
# Sedative/Hypnotics
# Alprazolam, Clonazepam, Diazepam
All PIs: ↑ benzodiazepine levels possible. RTV (200 mg BID for 2 days) ↑ alprazolam half-life 222% and AUC 248%.
Consider alternative benzodiazepines such as lorazepam, oxazepam, or temazepam.
# Lorazepam, Oxazepam, Temazepam
All PIs: No data. These benzodiazepines are metabolized via non-CYP450 pathways; thus, there is less interaction potential than with other benzodiazepines.

# Midazolam
All PIs: See note d above; oral midazolam is contraindicated, and single-dose parenteral midazolam can be used with caution in a monitored setting.

# Colchicine
For Treatment of Gout Flares:
- Colchicine 0.6 mg x 1 dose, followed by 0.3 mg 1 hour later. Do not repeat the dose for at least 3 days.
With FPV without RTV:
- Colchicine 1.2 mg x 1 dose; do not repeat the dose for at least 3 days.

For Prophylaxis of Gout Flares:
- Colchicine 0.3 mg once daily or every other day
With FPV without RTV:
- Colchicine 0.3 mg BID, 0.6 mg once daily, or 0.3 mg once daily

For Treatment of Familial Mediterranean Fever:
- Do not exceed colchicine 0.6 mg once daily or 0.3 mg BID.
With FPV without RTV:
- Do not exceed colchicine 1.2 mg once daily or 0.6 mg BID.

a Approved dose for RPV is 25 mg once daily. Most PK interaction studies were performed using 75 to 150 mg per dose.
b Norbuprenorphine is an active metabolite of buprenorphine.
c R-methadone is the active form of methadone.
b Based on between-study comparison.
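Because the colchicine adjustments differ by indication and by whether the regimen contains a boosted PI or unboosted FPV, a simple lookup table conveys the structure at a glance. This is an illustrative encoding of the doses listed above, with keys and wording of our own choosing; the interaction tables remain the authoritative source.

```python
# Illustrative encoding of the colchicine dose adjustments above.
# Keys and strings are ours; always confirm against the interaction tables.
COLCHICINE_DOSING = {
    ("gout flare treatment", "boosted PI"):
        "0.6 mg x 1, then 0.3 mg 1 hour later; do not repeat for >= 3 days",
    ("gout flare treatment", "FPV without RTV"):
        "1.2 mg x 1; do not repeat for >= 3 days",
    ("gout prophylaxis", "boosted PI"):
        "0.3 mg once daily or every other day",
    ("gout prophylaxis", "FPV without RTV"):
        "0.3 mg BID, 0.6 mg once daily, or 0.3 mg once daily",
    ("familial Mediterranean fever", "boosted PI"):
        "do not exceed 0.6 mg once daily or 0.3 mg BID",
    ("familial Mediterranean fever", "FPV without RTV"):
        "do not exceed 1.2 mg once daily or 0.6 mg BID",
}

print(COLCHICINE_DOSING[("gout prophylaxis", "boosted PI")])
```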
c Use a combination of two LPV/r 200 mg/50 mg tablets plus one LPV/r 100 mg/25 mg tablet to make a total dose of LPV/r 500 mg/125 mg.

The Panel has carefully reviewed results from clinical HIV therapy trials and considered how they affect appropriate care guidelines. HIV care is complex and rapidly evolving. Where possible, the Panel has based recommendations on the best evidence from prospective trials with defined endpoints. Absent such evidence, the Panel has attempted to base recommendations on reasonable options for HIV care.
HIV care requires partnerships and open communication. Guidelines are only a starting point for medical decision making involving informed providers and patients. Although guidelines can identify some parameters of high-quality care, they cannot substitute for sound clinical judgment.
As further research is conducted and reported, these guidelines will be modified. The Panel anticipates continued progress in refining ART regimens and strategies. The Panel hopes these guidelines are useful and is committed to their continued revision and improvement.

Abacavir (ABC):
- For patients with a history of HSR, re-challenge is not recommended.
- Symptoms of HSR may include fever, rash, nausea, vomiting, diarrhea, abdominal pain, malaise, fatigue, or respiratory symptoms such as sore throat, cough, or shortness of breath.
- Some cohort studies suggest increased risk of MI with recent or current use of ABC, but this risk is not substantiated in other studies.
# Trizivir (ABC/ZDV/3TC)
Note: Generic available.
Trizivir:
- (ABC 300 mg plus ZDV 300 mg plus 3TC 150 mg) tablet

# Epzicom (ABC/3TC)
Epzicom:
- (ABC 600 mg plus 3TC 300 mg) tablet

# Didanosine (ddI)
- 10 mg/mL oral solution
Body Weight ≥60 kg:
- 400 mg once daily
- With TDF: 250 mg once daily
Body Weight <60 kg:
- 250 mg once daily
- With TDF: 200 mg once daily
- Take 1/2 hour before or 2 hours after a meal.
Note: Preferred dosing with oral solution is BID (total daily dose divided into 2 doses).
Renal excretion: 50%
Dosage adjustment in patients with renal insufficiency is recommended (see Appendix B, Table 7).

# Tenofovir Disoproxil Fumarate (TDF)
- 300 mg tablet once daily, or
- 7.5 level scoops of oral powder once daily (dosing scoop dispensed with each prescription; 1 level scoop contains 1 g of oral powder)
- Take without regard to meals.
- Mix oral powder with 2-4 ounces of a soft food that does not require chewing (e.g., applesauce, yogurt). Do not mix oral powder with liquid.
Renal excretion is the primary route of elimination.
Dosage adjustment in patients with renal insufficiency is recommended (see Appendix B, Table 7).

a For dosage adjustment in renal or hepatic insufficiency, see Appendix B, Table 7.
b Also see Table 14.

# Zidovudine (ZDV) Retrovir
Retrovir:
- 100 mg capsule
- 300 mg tablet (only available as generic)
- 10 mg/mL intravenous solution
- 10 mg/mL oral solution

Retrovir dosing:
- 300 mg BID, or
- 200 mg TID
- Take without regard to meals.
Metabolized to GAZT.
Renal excretion of GAZT. Dosage adjustment in patients with renal insufficiency is recommended (see Appendix B, Table 7).

- 1 tablet once daily
- Take with a meal.
a For dosage adjustment in renal or hepatic insufficiency, see Appendix B, Table 7.
b Also see Table 14.
c Rare cases of Stevens-Johnson syndrome have been reported with most NNRTIs; the highest incidence of rash was seen with NVP.
d Adverse events can include dizziness, somnolence, insomnia, abnormal dreams, depression, suicidality (suicide, suicide attempt, or ideation), confusion, abnormal thinking, impaired concentration, amnesia, agitation, depersonalization, hallucinations, and euphoria. Approximately 50% of patients receiving EFV may experience any of these symptoms. Symptoms usually subside spontaneously after 2 to 4 weeks but may necessitate discontinuation of EFV in a small percentage of patients.

a For dosage adjustment in hepatic insufficiency, see Appendix B, Table 7.
b Also see Table 14.

# Maraviroc (MVC) Selzentry
- 150 and 300 mg tablets
- 150 mg BID when given with drugs that are strong CYP3A inhibitors (with or without CYP3A inducers), including PIs (except TPV/r)
- 300 mg BID when given with NRTIs, T20, TPV/r, NVP, RAL, and other drugs that are not strong CYP3A inhibitors or inducers
- 600 mg BID when given with drugs that are CYP3A inducers, including EFV, ETR, etc. (without a CYP3A inhibitor)
- Take without regard to meals.

a For dosage adjustment in hepatic insufficiency, see Appendix B, Table 7.
b Also see Table 14.
Key to Abbreviations: BID = twice daily; CYP = cytochrome P; EFV = efavirenz; ETR = etravirine; MVC = maraviroc; NRTI = nucleoside reverse transcriptase inhibitor; NVP = nevirapine; PI = protease inhibitor; RAL = raltegravir; T20 = enfuvirtide; TPV/r = tipranavir/ritonavir

# Fosamprenavir (FPV) Lexiva
PI-Naive Patients (FPV without RTV):
Child-Pugh Score 5-9:
- 700 mg BID
Child-Pugh Score 10-15:
- 350 mg BID

# PI-Naive or PI-Experienced Patients (FPV plus RTV)
Child-Pugh Score 5-6:
- FPV 700 mg BID plus RTV 100 mg once daily
Child-Pugh Score 7-9:
- FPV 450 mg BID plus RTV 100 mg once daily
Child-Pugh Score 10-15:
- FPV 300 mg BID plus RTV 100 mg once daily
# Indinavir (IDV) Crixivan
- 800 mg PO q8h
No dosage adjustment necessary in renal insufficiency.
Mild-to-Moderate Hepatic Insufficiency Because of Cirrhosis:
- 600 mg q8h
# Ritonavir (RTV) Norvir
As a PI-Boosting Agent:
- 100-400 mg per day
No dosage adjustment necessary. Refer to the recommendations for the primary PI.
# Saquinavir (SQV) Invirase
- (SQV 1000 mg plus RTV 100 mg) PO BID
No dosage adjustment necessary in renal insufficiency.
Mild-to-Moderate Hepatic Impairment:
- Use with caution.
Severe Hepatic Impairment:
- Do not use.

a Refer to Appendix B, Tables 1-6 for additional dosing information.
b Including with chronic ambulatory peritoneal dialysis and hemodialysis.
c On dialysis days, take dose after HD session.
# INTRODUCTION
Male circumcision is the surgical removal of some or all of the foreskin (or prepuce) from the penis. 1 Medically attended circumcisions performed by health care professionals are voluntary, elective procedures that are preceded by an informed consent process. Male circumcision may also be performed as part of religious or cultural rites. Circumcision is a very common procedure; it has been estimated that approximately 30% of the world's male population is circumcised. 2 In the United States, overall rates of newborn male circumcision rose throughout much of the twentieth century largely due to changing cultural norms, increased rates of childbirths in hospitals, and a perception that male circumcision was more hygienic. 3 Personal decisions about circumcision are influenced by information about the preventive health benefits, safety, and risks of the procedure, as well as ethical, religious, cultural, familial, and economic considerations. Until recently, prevention of human immunodeficiency virus (HIV) infection was unlikely to factor in the decision to circumcise a male newborn or child, although other preventive health benefits of male circumcision may have been considered.
Study results indicate that male circumcision reduces the risk of male HIV acquisition through penile-vaginal sex. Results from randomized controlled trials (RCTs) provide the strongest level of evidence; however, we describe data from both RCTs and observational studies. Observational studies are often conducted instead of RCTs because of cost considerations and other barriers, and they may be the only feasible methodology for studying particular health outcomes, such as cancer. The results of 3 RCTs of voluntary male circumcision involving more than 10,000 HIV-negative men in settings in sub-Saharan Africa with predominantly heterosexual HIV transmission demonstrated 50%-60% reductions in HIV incidence in the study population. Statistically significant reductions in the following infections among circumcised heterosexual men were also demonstrated in RCTs: (1) incidence and prevalence of genital ulcer disease (GUD), 6,7 (2) incidence of herpes simplex virus type 2 (HSV-2), 8,9 (3) prevalence 8,10,11 and incidence 12,13 of high-risk oncogenic human papillomavirus (HR-HPV), (4) prevalence of Trichomonas vaginalis (T. vaginalis), 14 and (5) prevalence of Mycoplasma genitalium (M. genitalium). 15 Statistically significant reductions in the following infections among female sexual partners of circumcised men were also demonstrated in RCTs: (1) prevalence of GUD, 16 (2) prevalence of HR-HPV, 17 (3) prevalence of T. vaginalis, 16 and (4) prevalence of bacterial vaginosis (BV). 16 RCTs also provided evidence of increased clearance of HR-HPV infection among circumcised heterosexual men and their female sexual partners. 13

Observational studies indicate that male circumcision is likely to reduce rates of other sexually transmitted infections (STIs), 18,19 including syphilis, 20 in men and their female partners, and indicate other health benefits as well, such as reduced risk of penile and cervical cancer 31 and reduced rates of infant urinary tract infections (UTIs). Risks potentially associated with male circumcision include surgical adverse events (AEs), 1 adverse effects on sexual sensation and function, and behavioral risk compensation 5 (increased risk behavior because of a perception of decreased risk). In February 2007, the World Health Organization (WHO) and the Joint United Nations Programme on HIV/AIDS (UNAIDS) jointly recommended that male circumcision be recognized as an additional important intervention to reduce heterosexual acquisition of HIV infection among men in settings with high HIV prevalence and low male circumcision rates. 87

When describing the generalizability of the results of the African RCTs to the United States, there are both inherent limitations and strengths to be considered. In the United States, the prevalence of HIV is generally much lower than that in sub-Saharan Africa, and most persons living with diagnosed HIV infection are men who have sex with men. In 2014, there were an estimated 955,081 persons living with diagnosed HIV infection in the United States, 88 compared with 25.8 million people living with HIV infection in sub-Saharan Africa. 89 In 2014, 53% of persons living with HIV infection in the United States were men who have sex with men, 88 while women accounted for greater than half of persons living with HIV infection in sub-Saharan Africa. 89
In the absence of randomized clinical trials among men who have sex with men (MSM), meta-analysis of observational studies indicates that male circumcision did not reduce the risk of HIV acquisition among MSM overall; however, circumcision was protective for men who reported practicing mainly unprotected insertive anal intercourse compared with those who practiced mainly receptive anal intercourse. 90 Despite these overall differences, the results of the African trials are likely to have application to HIV prevention efforts in the United States. Although the United States differs epidemiologically from the regions targeted by the WHO/UNAIDS recommendations and the sub-Saharan African areas in which the RCTs were conducted, there are geographic areas and subpopulations in the United States with HIV prevalence comparable to that of sub-Saharan African countries. For example, the prevalence of diagnosed HIV infection among all black or African American adult and adolescent males in San Francisco in 2012 (5,553.5/100,000, or 5.6%) 91 was similar to that among adults aged 15-49 years living in Kenya in 2014 (5.3%). 92 The predominant mode of sexual HIV acquisition among men in the United States is penile-anal sex among men who engage in male-to-male sexual contact, but 8% of the estimated annual diagnoses of HIV infection in the United States are attributed to female-to-male sexual transmission. 93 Based on evidence from the African trials, uncircumcised heterosexual men living in areas with high HIV prevalence are likely to experience the most public health risk-reduction benefit from elective male circumcision. Although most men in the United States are circumcised, non-Hispanic black and Mexican-American men have lower rates of circumcision than non-Hispanic white men, 94 and African-American and Hispanic men have higher rates of diagnosis of HIV infection than white, non-Hispanic men. 95

In the absence of RCT data for MSM related to male circumcision, CDC cannot make definitive statements about whether male circumcision can reduce the risk of acquiring HIV and other STIs in this population. In a meta-analysis of pooled data from observational studies among MSM who practiced mainly or exclusively insertive anal sex, circumcision was associated with a statistically significant decrease in acquisition of new HIV infections 90 and of one oncogenic HPV type. 96 Data from observational studies are considered less robust than data from RCTs; however, it is biologically plausible that MSM who practice mainly or exclusively insertive anal sex may experience a reduction in the risk of acquiring HIV and STIs similar to the reduction found in RCTs among heterosexuals practicing penile-vaginal sex.
Male circumcision is another strategy or option in the portfolio of biomedical interventions to prevent acquisition of HIV infection, along with condoms, HIV testing, HIV post-exposure prophylaxis, HIV pre-exposure prophylaxis, and antiretroviral treatment of HIV-positive persons. Diverse interventions are critical because no single intervention is likely to prevent all HIV transmissions. For example, the overall effectiveness of condoms in reducing heterosexual HIV transmission is reported to be 80%. 97 Also, using such strategies in combination rather than in isolation would likely enhance their overall prevention effect. 98,99

This document presents the methods used to gather, synthesize, and interpret data on the preventive health benefits, safety, and risks of medical male circumcision. It also describes the acceptability of, provider attitudes toward, access to, and cost-effectiveness of male circumcision, as well as related ethical considerations. The data examined are mainly in the context of the United States, but data from other countries are included to inform the U.S. experience, particularly where data are lacking for the United States or for comparison purposes. This background document was used to inform the development of an informational document for providers sharing information with male patients and parents regarding the role of male circumcision as a strategy for preventing HIV and other adverse health outcomes. a CDC has developed this informational document to help ensure that providers have this information and can discuss it with persons considering undergoing male circumcision, or with parents considering male circumcision for their newborn boys, so that decisions are guided by the best possible public health information. Social, cultural, religious, and ethical considerations are other factors associated with decision-making.
# METHODS TO GATHER, SYNTHESIZE, AND INTERPRET EVIDENCE
A 2-day symposium to obtain input on the potential role of medically attended male circumcision in preventing transmission of HIV infection in the United States was held on April 26-27, 2007. This face-to-face meeting brought together external partners and a broad range of subject matter experts, including clinicians, academicians, and public health practitioners. 100 A systematic Medline search (including the PubMed database and using the MeSH terms "male circumcision" or "circumcision" and "HIV") was conducted for the symposium, and relevant literature describing male circumcision for the prevention of HIV, along with policy statements regarding male circumcision, was distributed to participants in advance of the meeting. Views on the benefits of male circumcision, as well as its risks and adverse effects, were presented by meeting participants. Participants examined scientific evidence to assess the relevance of male circumcision to the HIV burden in the United States and explored other factors, including potential cost-effectiveness; cultural, ethical, and safety concerns; and integration of male circumcision with existing prevention methods. The questions posed to the participants, the resulting working group proposals, and the names of participants in this symposium have been previously described. 100
For this document, a systematic literature review was conducted to assess the association of male circumcision with medical benefits and AEs. All studies of outcomes of male circumcision published from 1950 through the end of October 2015 in Medline, Embase, and the Cochrane Library, as well as studies identified from citation lists, were included. Systematic reviews were conducted for the following outcomes related to medically attended male circumcision: HIV acquisition and transmission (female-to-male, male-to-female, and male-to-male); other STIs; penile cancer; cervical cancer; infant UTIs; risks and AEs; and sexual function and penile sensation. We conducted a broad search of all articles containing the MeSH terms "male circumcision" or "circumcision" and then conducted crosstab searches of the following terms: "HIV," "STIs," "STDs," "cancer," "malignancy," "urinary tract infection," "risks," "benefits," "sexual function," and "penile sensation." Our inclusion criteria required studies published in English that presented original data, including RCTs, cohort studies, case-control studies, cross-sectional studies, case series, and case reports. Any cited studies published before 1950 were used only to provide historical perspective. Study design was classified according to guidelines for collecting scientific data in reports published in the Guide to Community Preventive Services (the Guide). 101 The quality of evidence was assessed according to strength of association, consistency of findings across studies, and the methodologic rigor of study designs. Because they minimize confounding and bias, RCTs were considered the most rigorous method for determining whether a cause-effect relationship existed between a treatment and an outcome.
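For illustration only, the crosstab search strategy described above can be expressed as scripted PubMed queries. This minimal sketch uses Biopython's Entrez interface; the contact email is a placeholder, the date window mirrors the review period, and the sketch covers only the PubMed leg of the search (the review also used Embase and the Cochrane Library):

```python
from Bio import Entrez  # Biopython's wrapper around NCBI's E-utilities

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

BASE = '("male circumcision"[MeSH Terms] OR "circumcision"[MeSH Terms])'
CROSSTAB_TERMS = ["HIV", "STIs", "STDs", "cancer", "malignancy",
                  "urinary tract infection", "risks", "benefits",
                  "sexual function", "penile sensation"]

for term in CROSSTAB_TERMS:
    # Combine the base MeSH query with each crosstab term and the review's date window.
    query = f'{BASE} AND {term} AND ("1950/01/01"[PDAT] : "2015/10/31"[PDAT])'
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)  # retmax=0: count only
    record = Entrez.read(handle)
    handle.close()
    print(f'{term}: {record["Count"]} PubMed records')
```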
Our review of the literature published from January 1950 through October 2015 also used PubMed to conduct a broad, non-systematic narrative review of articles that included the terms "male circumcision" or "circumcision," focusing on articles relevant to the Considerations related to male circumcision in the United States section, because that section did not evaluate clinical outcomes. Articles published in English that included information about acceptability, cost-effectiveness, risk compensation, policy issues, and ethical issues related to male circumcision were reviewed. Data through 2014 were included to describe HIV trends in the United States.
In formulating the informational document, available evidence from the literature review was considered together with input on the potential role of male circumcision in preventing HIV transmission provided at the 2007 symposium 100 and with the numerous public comments that were received and reviewed. The 2007 symposium consisted of a 2-day consultation with stakeholders, whose working groups summarized data and discussed the use of male circumcision for prevention of HIV and other sexually transmitted infections among heterosexual men, MSM, and newborn males. The drafted background document and informational document were reviewed by subject matter experts, whose input was incorporated. The revised documents were then posted in a Federal Register Notice for public comment, and the numerous comments received were reviewed and incorporated. The final drafts were sent to subject matter experts for review, and their comments were incorporated. A subcommittee of CDC's Public Health Ethics Committee (PHEC) reviewed the informational document and provided guidance on ethical issues related to elective male circumcision. The developers of these guidelines disclose that they have no financial interests or other competing interests related to male circumcision. The informational document will be updated as needed based on the availability of relevant, significant new information.
# SUMMARY OF EVIDENCE
# Effect of Male Circumcision on Health Outcomes
This section describes the evidence for the biological plausibility that male circumcision reduces the acquisition of HIV and other STIs, and includes a summary of results of RCTs, observational studies, and meta-analyses. It also describes study results on the frequency of penile and prostate cancers among circumcised men, cervical cancer in female partners of circumcised men, UTIs in circumcised infants, and other associated health risks, including effects on sexual function and penile sensation.
# Biological Plausibility
The foreskin can serve as a portal of entry for STIs (including HIV), lending biological plausibility to the role of circumcision in preventing STI and HIV acquisition through insertive sexual intercourse. 104,105 The likely mechanism of increased susceptibility associated with an intact foreskin involves both histopathological and anatomic factors, 106 as well as the interaction between HIV and other STIs.
Compared to the dry external skin surface of the glans penis and the penile shaft, the inner surface of the foreskin is less keratinized. This may allow easier access to the epithelial cells of the epidermis and dermis (in which STIs such as HPV and HSV-2 replicate), as well as access to target cells for HIV infection. 104,107 Some laboratory studies have shown foreskin tissue to be more susceptible to HIV infection than keratinized epithelium. 108,109 However, in another study, although more HIV-1 virions were observed on the inner foreskin of rhesus macaques than on the glans tissue, a larger proportion of virions was seen penetrating uncircumcised glans tissue, and to greater mean depths, than inner foreskin tissue in cadaveric specimens. 110 More virions were also visualized on the inner foreskin than the outer foreskin after 24 hours of culturing, suggesting that both the inner foreskin and glans epithelia may serve as sites for HIV transmission in uncircumcised men. 110 The inner foreskin surface has been found to contain a higher density of HIV target cells, such as Langerhans cells close to the skin surface, and in men with a history of recent STIs, the number of target cells in the prepuce was increased. 112 The finding that the surface area of foreskins excised from 965 men enrolled in the Rakai Community Cohort Study 115 correlated significantly with HIV incidence rates may be explained by the hypothesis that greater surface area harbors more resident HIV immune cells, such as Langerhans cells, CD4+ T cells, CD8+ T cells, and macrophages, and therefore permits greater rates of HIV transmission. 116 However, the precise role of Langerhans cells is not fully understood. A study of the inner foreskin of healthy Peruvian males who have sex with males or with transgender females, a population at elevated risk for HIV infection, found evidence of subclinical changes that may support an inflammatory state in the inner foreskin, including an increased density of target cells for HIV infection, such as CCR5+ and CD4+CCR5+ cells, in the inner compared with the outer foreskin. 120 Because the inner surface of the foreskin is lightly keratinized, it may be relatively susceptible to traumatic epithelial disruptions during intercourse, providing a portal of entry for pathogens. 104 Furthermore, the foreskin retracts away from the glans and over the shaft of the penis during intercourse, which exposes this surface to the body fluids of the sex partner. 111 It has been postulated that the foreskin may serve as a reservoir for various pathogens, particularly HIV and anaerobic bacteria, because the micro-environment in the preputial sac between the unretracted foreskin and the glans penis may be conducive to their survival, thereby increasing the contact time of these infectious agents with penile tissues. The anoxic microenvironment of the preputial sac may support proinflammatory anaerobes that can activate Langerhans cells to present HIV to CD4 cells in draining lymph nodes. 121 Circumcision has been associated with a significant decrease in the total bacterial load in the coronal sulcus of adult males, particularly of anaerobes, and a minor increase in aerobes. 122 A significant decrease in bacterial colonization of the glans penis, including by uropathogenic bacteria, has also been reported in circumcised compared with uncircumcised boys. 123
Investigators determined that uncircumcised males had higher rates of "wetness" around the glans or coronal sulcus than circumcised males, and that higher degrees of "wetness" were associated with higher rates of HIV infection. 124 Among male attendees at a sexually transmitted disease (STD) clinic in Durban, South Africa, men with any level of penile wetness had an HIV seroprevalence of 66.3%, compared with 45.9% among men with no wetness. 124 Langerhans cells and CD4+ T cells in the inner foreskin are significantly more responsive to certain inflammatory cytokines than those in the outer foreskin. This may suggest that immune cells of the inner foreskin respond more readily to infectious and other exposures, resulting in increased viral susceptibility of the inner foreskin. 116,125 HIV infection and other STIs, each of which independently may be more likely in uncircumcised men, interact synergistically to increase acquisition risk. Infection with ulcerative STIs such as HSV-2 has been associated with an increased risk of HIV infection in observational studies; 128,131,132 this risk was 3-fold in a recent meta-analysis. 133 In the South African trial, the authors estimated that approximately 28% of incident HIV cases were attributable to HSV-2 seropositivity or acquisition. 9 Proposed mechanisms of increased susceptibility include breaches in the mucosal barrier, increased susceptibility of tissue due to inflammation, and increased numbers of HIV target cells associated with inflammation. 107 Conversely, HIV seropositivity may increase the risk for new STIs, 128,134 although some studies have failed to find such an association. 135
# Male Circumcision and the Risk of HIV Infection Acquisition
# Male acquisition of HIV infection from female partners
Three RCTs were undertaken in sub-Saharan Africa to determine whether circumcision of adult males would reduce their risk for HIV infection (Table 1). The randomized, controlled follow-up in all 3 studies was stopped early when interim analyses demonstrated that circumcision by a clinician significantly reduced male participants' risk of HIV infection; the control group was then offered circumcision, as it was determined to be unethical to withhold it. In intention-to-treat (ITT) analyses, men who had been randomly assigned to the circumcision group had a 60% (South Africa), 53% (Kenya), and 51% (Uganda) lower incidence of HIV infection compared to men assigned to the group to be circumcised at the end of the study. In all 3 studies, some of the men who had been assigned to be circumcised did not undergo the procedure, and vice versa. Non-compliance with the assigned study group may mean that the ITT analyses underestimated the potential benefit of circumcision. When the data were reanalyzed to account for these crossovers in an as-treated (AT) analysis, men who had been circumcised had 76% (South Africa), 60% (Kenya), and 55% (Uganda) reductions in risk of HIV infection compared to those who were not circumcised. However, AT analyses are vulnerable to selection bias because, once crossovers are taken into account, circumcision status is no longer randomly assigned.
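As a point of method, the percentage reductions quoted for these trials are relative risk (rate) reductions. Writing $\lambda_c$ and $\lambda_u$ for HIV incidence among circumcised and uncircumcised men:

$$\mathrm{RRR} = 1 - \frac{\lambda_c}{\lambda_u},$$

so the Ugandan ITT figure of 51% corresponds to an incidence rate ratio of 0.49. ITT compares groups as randomized, preserving the protection of randomization; AT compares groups by the circumcision status actually received, which is why it can be biased.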
The Ugandan RCT included male participants aged 15 years or older. 6 Among all men aged 15-49 years, there was a 51% lower HIV incidence at 24 months of follow-up in circumcised compared to uncircumcised males. The reduction in HIV acquisition rate did not vary significantly by age group.
The protective effect of male circumcision appears to persist over time. In a rigorous meta-analysis of the ITT data of the 3 RCTs, the overall relative risk reduction of acquiring HIV was 50% at 12 months and 54% at 21 or 24 months following circumcision. 136 During 4.79 years of trial surveillance of participants in the Rakai randomized trial of male circumcision, investigators found that the overall HIV incidence was 0.50/100 person-years in circumcised men and 1.93/100 person-years in uncircumcised men. The corresponding effectiveness was 73% after adjusting for sociodemographic characteristics during the last trial visit and time-dependent sexual behavior at post-trial follow-up. 81 The HIV prevention effectiveness in the post-trial observational study was not statistically significantly different from the AT effectiveness of circumcision observed during the randomized trial. At 72 months of post-trial follow-up in Kisumu, Kenya, the cumulative 72-month HIV incidence was 4.8% among circumcised men and 11.0% among uncircumcised men, with an overall efficacy of 58% (adjusted hazard ratio 0.42), 137 similar to the 60% reduction at 24 months. 5 International observational studies also indicate that male circumcision is associated with lower rates of HIV, 138,139 although some cross-sectional studies conducted in general populations have failed to find an association between circumcision status and HIV-1. A systematic review and meta-analysis of 28 studies that focused on heterosexual transmission of HIV in Africa was published in 2000. 138 It included 19 cross-sectional studies, 5 case-control studies, 3 cohort studies, and 1 partner study. In the overall pooled unadjusted analysis, a substantial protective effect of male circumcision on risk for HIV infection was noted, with a 48% reduction in risk for HIV infection among circumcised men compared to uncircumcised men (pooled risk ratio (RR) = 0.52). In 3 of the 5 studies that adjusted for other factors, including history of current or previous GUD, adjustment strengthened the protective association by an additional 1%-6%, suggesting a greater protective effect of male circumcision against HIV in populations with more prevalent GUD. After adjusting for confounding factors in the population-based studies, the relative risk for HIV infection was 44% lower in circumcised men compared with uncircumcised men. The strongest association was seen in men who were most likely to be exposed to HIV, such as patients at STD clinics, for whom the adjusted relative risk was 71% lower in circumcised men compared with uncircumcised men.
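Applying the relative risk reduction formula above to the Rakai post-trial incidence figures gives a crude effectiveness of

$$1 - \frac{0.50}{1.93} \approx 0.74,$$

consistent with the adjusted effectiveness of 73% reported by the investigators.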
Prior to the completion of the RCTs, another review was conducted that included stringent assessment of 10 potential confounding factors and was stratified by study type or study population. 139,146 The review included 37 studies, 139 with 18 studies (1 cohort, 16 cross-sectional, and 1 case-control) conducted in the general population and 19 studies (4 cohort, 12 cross-sectional, and 3 case-control) conducted in high-risk populations. Most of the studies were from Africa. Of the 37 studies included in the review, 139 the 18 studies conducted in general populations had inconsistent results, whereas the 19 studies conducted in high-risk populations found a consistent, substantial protective effect, which increased with adjustment for confounding. Of the 18 studies in the general population, the single cohort study showed a benefit of male circumcision (odds ratio (OR) 0.58), the case-control study found no significant difference (OR 1.90), and the 16 cross-sectional studies had varying results, including 10 studies that indicated a beneficial effect of male circumcision and 6 that indicated a harmful effect (ORs ranging from 0.21 to 1.73). Of the 8 cross-sectional studies with statistically significant findings, 6 indicated a benefit and 2 indicated harm. The 1 large prospective cohort study conducted in the general population, which included 5,507 HIV-negative Ugandan men and 187 HIV-negative men in discordant relationships, showed a significant protective effect, with a 42% lower risk of acquisition of HIV infection among circumcised men compared with uncircumcised men. 147 Among serodiscordant couples in a substudy of this cohort, none of 50 circumcised men with HIV-infected female partners seroconverted, whereas there were 40 incident cases among 137 uncircumcised men with HIV-infected female partners. 147,148 The 19 studies conducted in high-risk populations in this review 139 found a consistent, substantial protective effect and were in better agreement than the 18 studies in the general population. All 4 cohort studies indicated a beneficial effect of male circumcision, including 3 with statistically significant results, with point estimates from crude ORs varying from 0.10 to 0.39. Eleven of the 12 cross-sectional studies indicated a benefit of male circumcision, including 8 that were statistically significant, with ORs of 0.10 to 0.66. The adjusted ORs reported by the 5 cross-sectional studies providing them ranged from 0.20 to 0.59. Among the 3 case-control studies in high-risk populations, all indicated a protective effect of circumcision on HIV status, including 2 that were statistically significant, with ORs of 0.37 and 0.88.
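The pooled ratios quoted in this and surrounding paragraphs are conventionally computed by inverse-variance weighting of study-level log odds ratios. The sketch below illustrates that standard fixed-effect calculation in Python; the input numbers are invented for illustration and are not data from any study cited here:

```python
import math

# Illustrative (hypothetical) study-level odds ratios and 95% CIs,
# not taken from any study cited in this document.
studies = [
    # (odds_ratio, ci_lower, ci_upper)
    (0.45, 0.28, 0.72),
    (0.60, 0.40, 0.90),
    (0.85, 0.55, 1.31),
]

weighted_sum = 0.0
total_weight = 0.0
for or_, lo, hi in studies:
    log_or = math.log(or_)
    # Standard error recovered from the 95% CI: (ln(hi) - ln(lo)) / (2 * 1.96)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weight = 1.0 / se**2          # inverse-variance weight
    weighted_sum += weight * log_or
    total_weight += weight

pooled_log_or = weighted_sum / total_weight
pooled_se = math.sqrt(1.0 / total_weight)
print(f"Pooled OR: {math.exp(pooled_log_or):.2f}")
print(f"95% CI: {math.exp(pooled_log_or - 1.96 * pooled_se):.2f}"
      f" to {math.exp(pooled_log_or + 1.96 * pooled_se):.2f}")
```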
More recent meta-analyses have been conducted that include RCTs in addition to observational and case-control studies. One meta-analysis of 13 studies, including 3 RCTs, found a 58% reduced risk of HIV infection among circumcised men (overall risk ratio 0.42) and determined that the studies met criteria for causality between lack of circumcision and HIV-1 infection. 149 In a meta-analysis of 15 studies, including 4 RCTs and 11 prospective cohort studies, male circumcision was associated with a 70% reduction in the risk for HIV acquisition (pooled adjusted risk ratio 0.30). 150 Several studies have examined the association of male circumcision with reduced HIV acquisition in the context of other STIs. For example, in an RCT studying the role of GUD and HSV-2 in the protection against HIV associated with male circumcision in Rakai, Uganda, male circumcision significantly reduced the risk of HIV acquisition in HSV-2-seronegative men (incidence rate ratio 0.34). 151 There were 11.2% and 8.6% reductions in HIV acquisition mediated by reductions in symptomatic GUD (95% CI = 5.0-38.0) and HSV-2 incidence (95% CI = 1.2-77.1), respectively. In Kenya, male medical circumcision did not affect HSV-2 incidence or GUD, and HSV-2 infection, in turn, did not have an impact on the protective effect of male medical circumcision against HIV. 7 In RCTs, male circumcision has also been associated with reductions in prevalent infection with HR-HPV, 8,10,11,17 T. vaginalis, 14,16 BV, 16 and M. genitalium, 15 and with reductions in incident infection with HR-HPV. 12,13 Ecologic studies also demonstrate a strong association between lack of male circumcision and HIV infection at the population level. Although links between male circumcision, culture, religion, and risk behavior likely account for some of the differences in HIV infection prevalence, the countries in Africa and Asia with a prevalence of male circumcision of less than 20% have an HIV-infection prevalence several times as high (seroprevalence range: 0.24-25.84) as countries in those regions where more than 80% of men are circumcised (seroprevalence range: 0.03-11.64). 152 Based on an HIV transmission model fitted to data from the Four Cities Study, which included 2 cities in sub-Saharan Africa with relatively low HIV prevalence (Cotonou and Yaoundé) and 2 with high HIV prevalence (Kisumu and Ndola), investigators concluded that differences in rates of male circumcision likely played an important role in differing rates of HIV transmission across Africa. 153 The question of whether resumption of sexual intercourse soon after adult male circumcision affected HIV risk was examined in a combined analysis of data from the 3 RCTs, limited to HIV-negative men who were randomized to and underwent circumcision. 154 Early sex (intercourse < 42 days after circumcision) was reported by 3.9% of participants in Kenya, 5.4% in Uganda, and 22.5% in South Africa. In all 3 trials, early resumption of sex was reported more often among men who were married or living as married. These same factors associated with early resumption of sex were also identified in a literature review of 11 publications. 155 In pooled analyses of the 3 RCTs, circumcised men reporting early sex did not have higher HIV infection rates at the 3- or 6-month follow-up visit than circumcised men who did not have early sex. 154
In a prospective cohort study in Rakai, Uganda, the effect of male circumcision on the plasma HIV viral load of 111 HIV-positive, HAART-naïve men with complete follow-up was studied. After male circumcision, compared to baseline, there was no statistically significant increase in HIV plasma viral load, even after controlling for CD4 count. 156 In this study, men with a higher baseline log10 plasma viral load were significantly more likely to experience a reduction in mean log10 plasma viral load after undergoing circumcision.
The RCTs in Africa and numerous observational studies 138,157 have demonstrated that male circumcision reduces the risk for female-to-male transmission of HIV. Careful consideration is required to apply these findings to the U.S. context, given differences in HIV risk groups. 158,159 In contrast to the sub-Saharan African countries where the RCTs were conducted, the United States has a low prevalence of HIV infection (0.47%), 160 with HIV infection concentrated among men who have male-to-male sexual contact (men who have sex with men and men who have sex with men and women) rather than men who have sex exclusively with women. In 2014, there were an estimated 44,784 new diagnoses of HIV in the United States and 6 dependent areas. 93 While no RCTs have been conducted in the United States, a similar magnitude of risk-reduction benefit from circumcision would likely apply to U.S. men engaged in penile-vaginal sex. However, the population-level effect would be less pronounced in the United States than in sub-Saharan Africa because a smaller proportion of HIV infections among U.S. men is heterosexually acquired.
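To see the scale involved, combining the 2014 diagnosis count with the 8% female-to-male share cited earlier gives a rough, illustrative figure for the group most analogous to the African trial populations:

$$0.08 \times 44{,}784 \approx 3{,}600 \text{ diagnoses per year},$$

which underlines why the population-level impact of circumcision would be smaller in the United States than in the trial settings.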
Few U.S. studies have evaluated the effect of male circumcision for preventing heterosexually acquired HIV infection. Two published observational studies have reported on the association between circumcision and the risk of HIV infection in the United States among male patients attending STD clinics. 130,164 The first study suggested that being uncircumcised might be associated with increased HIV risk, but the finding was limited by small sample size and was not statistically significant. The more recent study was a cross-sectional evaluation conducted among heterosexual African American men attending STD clinics in Baltimore, with an overall HIV seroprevalence of 3%. 164 Among approximately 40,000 visits by patients with unknown HIV exposure, male circumcision was not associated with reduced HIV prevalence. However, among 394 visits by men who had female sexual partners known to be infected with HIV, male circumcision was significantly associated with a 51% reduced relative prevalence of HIV infection (10.2% among circumcised men vs. 22.0% among uncircumcised men).
# HIV infection transmission from circumcised men to female partners
Studies on the effect of male circumcision on HIV transmission from male partners to female partners have shown mixed results: some observational studies suggest a benefit, while a randomized prospective study failed to demonstrate one. In a study of serodiscordant couples in Uganda in which the male partner was HIV infected and the female partner was initially HIV negative, the infection rates of the female partners differed by the male circumcision status and viral load of their male partners. If the HIV viral load in the blood of the male partner was < 50,000 copies/mL and the man was uncircumcised, the rate of HIV transmission was 9.6 per 100 person-years; if the man was circumcised, there was no HIV transmission. 147 For all male partners, regardless of viral load, the male-to-female transmission rate from circumcised men was somewhat lower than that from uncircumcised men, but the difference was not statistically significant. In another study of heterosexual serodiscordant couples from 7 sites in eastern Africa and 7 sites in southern Africa, in which the HIV-infected partner was also infected with HSV, 1,096 couples included a male as the HIV-infected partner. Adjusting for HIV-1 concentration in male partner plasma, female partners of circumcised men had a not statistically significant 40% reduced risk of HIV-1 acquisition compared with partners of uncircumcised men (hazard ratio (HR) 0.60, for genetically linked events). After excluding follow-up time occurring after male partners initiated antiretroviral therapy, the risk of HIV acquisition decreased by a not statistically significant 47% (HR 0.53, for genetically linked events). 165 Other observational studies have evaluated the effect of male circumcision on HIV risk to women without limiting the participants to serodiscordant couples. In a prospective study among 2,471 HIV-uninfected women in Tanzania, having an uncircumcised husband was associated with a significantly increased risk of HIV acquisition (aRR 3.60). 166 Similarly, in a cross-sectional case-control study of 4,404 women in Kenya, having a regular sex partner who was uncircumcised was associated with an odds ratio (OR) of 2.9 (95% CI = 2.0-4.2) of being HIV infected. 167 However, another observational study from Uganda found that, after adjustment for other risk factors, male circumcision of the primary sex partner was not associated with a woman's risk for HIV infection. 168 Finally, an RCT in Rakai, Uganda, among HIV-infected men failed to demonstrate a benefit to female partners. In this trial, 922 uncircumcised, HIV-infected men were randomly assigned to immediate or delayed circumcision, and HIV-negative female partners were concurrently enrolled. 169 Overall, 18% of women in the intervention group and 12% of women in the control group acquired HIV during follow-up (HR 1.58). In a subanalysis not specified in the protocol, early resumption of sexual relations following male circumcision was significantly associated with a higher risk for HIV acquisition among female participants, with a rate ratio versus control of 3.50 (95% CI = 1.14-10.76). These results suggest an increased risk for HIV acquisition by female partners when sex is resumed early after male circumcision. However, among couples in the immediate male circumcision arm who delayed resumption of sex until after wound healing, there was no significant difference in HIV incidence relative to uncircumcised controls (rate ratio 1.
A systematic review and meta-analysis of the evidence for a direct effect of male circumcision on the risk of women becoming infected with HIV identified 19 epidemiological analyses from 11 study populations. 171 The meta-analysis of data from the 1 RCT and 6 longitudinal analyses showed little evidence that male circumcision directly affects the risk of HIV acquisition in women (RR 0.80).
More recent estimates of the effect of male circumcision on male-to-female transmission were calculated using 2 mathematical models representing the HIV epidemics in Zimbabwe and Kisumu, Kenya, based on 4 trials of circumcision among adults and new observational data on HIV transmission from men who were in stable partnerships and who were circumcised at younger ages. According to these models, male circumcision may confer a 46% reduction in the rate of male-to-female HIV transmission. 172 Whether or not circumcision of HIV-infected men directly reduces HIV risk for their female partners, circumcision of HIV-negative men offers a benefit to women by contributing to a decline in the overall prevalence of HIV in the male population, and thus fewer HIV-infected sexual partners. 173

# Male acquisition of HIV and other STIs from male partners

HIV transmission. The RCTs demonstrating HIV risk reduction associated with male circumcision were conducted in settings in which most HIV transmission is through heterosexual sex, and they apply to men engaging mainly in insertive penile-vaginal sex. Only 6 (0.2%) trial participants reported having had male-to-male sexual relations in the 1 RCT in which this history was collected. 5 To date, the data on male circumcision and rates of HIV acquisition among men who have male-to-male sexual contact have been limited to observational studies. 90 No prospective trial of male circumcision for reducing HIV risk among MSM has been conducted, although such studies have been proposed. 188 Some observational studies have shown higher rates of HIV acquisition among uncircumcised MSM compared with circumcised MSM. Among a convenience sample of 387 MSM receiving social and clinical services geared to MSM at local drop-in centers in India, men who self-reported being circumcised had 83% lower odds of prevalent HIV infection (adjusted odds ratio 0.17) than men who self-reported being uncircumcised. 185 When controlling for the number of male sex partners and having had unprotected sex with an HIV-positive partner, circumcision was associated with 2-fold decreased odds of prevalent HIV infection (adjusted odds ratio 0.5; 95% CI = 0.25-1.0) in a vaccine-preparedness cohort followed from April 1995 through May 1997. 189 Self-reported circumcised status was associated with 2-fold decreased odds of prevalent HIV infection (aOR 0.5) in a cross-sectional survey of MSM in Seattle in the early 1990s, 190 and the odds of being HIV infected were 5-fold lower among circumcised men in a cross-sectional survey of MSM in Soweto in 2008 (aOR 0.2). 180 However, other observational studies have failed to show a benefit (or risk) of male circumcision. In a cross-sectional survey of black and Latino MSM in New York City, Los Angeles, and Philadelphia, there was no evidence that being circumcised was protective against HIV infection, even among men who reported engaging in insertive unprotected anal intercourse (UAI) but not receptive UAI. 191 Also, in a retrospective analysis of male circumcision status and risk for HIV among MSM participants in a vaccine trial, no association was found, even among primarily insertive partners. 192 Similarly, no association was found in a study of MSM in Seattle 178 or in an Australian study of MSM. 176 However, a subsequent prospective study of MSM in Australia did report a significantly reduced HIV infection risk in circumcised men who reported engaging primarily in insertive UAI (HR 0.11). 193
The authors noted that because more infections were associated with receptive UAI, lack of male circumcision may have accounted for only 9% of the infections in the study overall.
A study of Andean men reported that circumcision was not protective overall, but men who reported mainly insertive anal intercourse, defined as ≥ 60% of insertive acts with their recent male partners, experienced a nonstatistically significant 69% reduction in the risk of HIV acquisition (RR 0.31) compared with those who reported < 60% insertive acts. 194 The presumed mechanism of decreased HIV acquisition among circumcised men engaging in penile-vaginal sex is decreased HIV entry and infection through target cells on the foreskin. Thus, if there is an HIV prevention benefit of circumcision for MSM, the benefit is likely to be associated with insertive acts. Furthermore, the relative risk of HIV infection per sex act may be higher for insertive penile-anal sex than for penile-vaginal sex due to higher HIV RNA concentrations in rectal secretions relative to vaginal or cervical secretions. 195 The risk of HIV acquisition among MSM engaging in penile-anal sex is, however, greater for the anal-receptive partner than for the insertive partner. 196,197 Additionally, relatively few MSM are exclusively insertive. Many or most MSM practice both insertive and receptive UAI, but the subject has not been well studied. Among 205 HIV-positive MSM surveyed in the United States, approximately half of the men self-identified as versatile partners (men who practice both insertive and receptive anal sex), and the remaining half was equally split, with one-quarter predominantly engaging in insertive anal intercourse and one-quarter predominantly engaging in receptive anal intercourse. 198 In a survey of UAI among 4,295 MSM participating in an observational cohort in 6 cities in the United States, 16.7% were exclusively insertive, 9.6% were exclusively receptive, and 63% were versatile. 199 The proportion of versatile MSM who were predominantly insertive was not reported in this study. In another study, substantial proportions of partners who self-identified as predominantly insertive also reported practicing receptive anal intercourse. 200 A Cochrane review conducted in 2011, which included 21 observational studies b and 71,693 participants from mainly Western countries, but also 1 study each from India, Taiwan, and South Africa, demonstrated a potential benefit of male circumcision in prevention of HIV transmission among MSM; however, the evidence did not support making a recommendation for male circumcision in this population. 90 More specifically, the overall pooled effect estimate for HIV acquisition, which included 20 studies and 65,784 participants, was not statistically significant (OR 0.86) and showed significant heterogeneity (I² = 53%). However, results differed in subpopulations based on having an insertive versus receptive role in MSM sexual relations: the results were statistically significant among 3,465 men in 7 studies reporting an insertive role (OR 0.27), but not among 1,792 men in 3 studies reporting a receptive role (OR 1.20). The overall quality of evidence based on the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system was low. 201
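For reference, the I² statistic quoted above is derived from Cochran's $Q$ across $k$ studies:

$$I^2 = \max\!\left(0,\; \frac{Q - (k-1)}{Q}\right) \times 100\%,$$

so I² = 53% means that roughly half of the variability in the study-level estimates is attributable to true between-study differences rather than sampling error.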
Thus, while there is biological plausibility and evidence from some studies to suggest a reduced risk for HIV infection in circumcised men compared with uncircumcised men engaging in insertive anal sex with an HIV-infected male partner, other well-conducted observational studies do not indicate a protective effect, either in predominantly insertive MSM or among MSM overall. Because of the greater risk posed by receptive anal sex, the role of male circumcision as a public health intervention to prevent HIV transmission among MSM appears limited based on current data.
After the Cochrane review, studies evaluating the association of male circumcision and HIV infection among MSM who practice mainly or exclusively insertive anal intercourse have had mixed results. In a cross-sectional study of 1,155 MSM in China, after adjusting for demographic covariates, number of lifetime male sexual partners, and anal sex role, male circumcision confirmed by examination was associated with 85% lower odds of prevalent HIV infection (aOR 0.15) for men who reported practicing predominantly insertive anal sex compared with uncircumcised men who reported practicing predominantly receptive anal sex or being versatile. 183 In contrast, in a survey of MSM in Chongqing, China, who practiced mainly or exclusively insertive anal intercourse, the prevalence of HIV infection did not differ significantly between men who self-reported being circumcised and those who self-reported being uncircumcised. 187 Similarly, among a convenience sample of 1,521 white MSM in Britain who predominantly or exclusively engaged in insertive UAI, men who self-reported being circumcised were not significantly less likely to be HIV seropositive than men who self-reported being uncircumcised (aOR 0.79). 202 This lack of association was also seen when the comparison was limited to 400 circumcised and uncircumcised MSM who practiced insertive UAI exclusively (aOR 0.84). 202 In a meta-analysis of data from 15 studies that examined the strength of the association between male circumcision among MSM and HIV and other STIs, little overall effect on HIV infection was revealed. 182 Among a total of 53,567 MSM participants, 52% of whom were circumcised, the overall weighted odds of being HIV-positive were slightly less than 1 among circumcised versus uncircumcised MSM (OR 0.95). There was also no significant association when stratified by study type (e.g., cross-sectional, prospective) or when limited to MSM who reported engaging exclusively in insertive anal sex. However, in 3 studies completed before the introduction of highly active antiretroviral therapy, male circumcision was protective against HIV (OR 0.47).
Because engaging in receptive UAI could dilute whatever risk-reduction benefit might be associated with being circumcised while engaging in insertive UAI, an RCT among MSM who practice predominantly or exclusively insertive UAI would aid efforts to obtain more definitive answers regarding the benefit of male circumcision for this population.
STI acquisition. A 2011 study reported an association between male circumcision and reduced STI acquisition among MSM; however, condom use mediated this relationship, as circumcision was associated with higher rates of condom use. 203 An observational study of MSM in Australia found that male circumcision was not associated with prevalent or incident HSV-1, HSV-2, self-reported genital warts, or incident urethral gonorrhea or chlamydial infection. 204 Being circumcised was associated with a significantly reduced risk of incident (HR 0.
# Male Circumcision and Other Health Conditions
In addition to studies of male circumcision related to HIV acquisition, the following sections review other studies exploring the association between male circumcision and other health conditions, such as STIs (other than HIV) in heterosexual men and women, penile and prostate cancer, cervical cancer in female partners of circumcised men, UTIs in infants, and other associated health risks, including effects on sexual function and penile sensation.
# Sexually transmitted infections (STIs)
Male circumcision has been shown to reduce the risk for other STIs in addition to HIV. The effect of male circumcision on susceptibility to other STIs has been assessed in a number of observational studies of men who have sex with women. 206,207 Results from these studies have been mixed but suggest that male circumcision is associated with lower risk for some STIs. More recent data from the RCTs of male circumcision provide evidence that circumcision is significantly associated with decreased prevalence 6 and incidence 7 of GUD; decreased incidence of HSV-2; 8,14 decreased prevalence, 8,10,11 decreased incidence, 12,13 and increased clearance 13 of HR-HPV; and decreased prevalence of T. vaginalis 14 and M. genitalium 15 in circumcised heterosexual men (Table 2). Data from RCTs also provide evidence that circumcision in men is significantly associated with reductions in the prevalence of GUD, 16 HR-HPV, 17 T. vaginalis, 16 and BV, 16 and with increased clearance of HR-HPV 17 among their female sexual partners. The trials did not provide evidence of any association between male circumcision status and gonorrhea, 14,208 chlamydial infection, 14,208 genital discharge, 6 or dysuria. 6 In the 2 circumcision trials in which it was assessed, no association was found with syphilis. 7,8 However, in a prospective cohort study of HIV-serodiscordant couples enrolled in a trial of HIV pre-exposure prophylaxis (PrEP), syphilis was strongly associated with lack of male circumcision in men overall, in HIV-negative men, in their female sexual partners overall, and in both their HIV-negative and HIV-positive female sexual partners. 20 Syphilis had also been strongly associated with lack of male circumcision in observational studies. 207 While a systematic review and meta-analysis concluded that, based on studies of general populations, circumcision was not significantly associated with the risk of individual STIs, 209 the review was found to have several critical methodological flaws. 210 Although rarely fatal, STIs other than HIV are among the most common communicable diseases in the United States, and interventions that prevent STIs would result in substantial reductions in morbidity and in the cost of health services. Most STIs are asymptomatic, and the most prevalent STIs are not reportable in the United States; thus, the incidence of these infections must be estimated. The most recent estimate is that 19.7 million new STIs were acquired in the United States in 2008, including infections with Trichomonas vaginalis (1.1 million), HPV (14.1 million), Chlamydia trachomatis (2.9 million), HSV-2 (776,000), Neisseria gonorrhoeae (820,000), and Treponema pallidum (55,400). 211 Data on male circumcision and STIs in MSM are summarized in the Male-to-Male transmission section.
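As a rough consistency check on the 2008 estimates above, the itemized infections sum to approximately the stated total:

$$14.1 + 2.9 + 1.1 + 0.82 + 0.776 + 0.055 \approx 19.8 \text{ million},$$

in line with the cited overall estimate of 19.7 million new STIs, allowing for rounding of the component estimates.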
Rates of STIs differ in the United States compared to sub-Saharan Africa. Thus, it is important to assess the magnitude of the incremental benefit of male circumcision on HIV infection due to its protective effect against other STIs. In a dynamic stochastic model, it was concluded that the protection of male circumcision against STIs contributes little to the overall effect of circumcision on HIV. 212 Analyses of the RCTs confirmed this result, 9,151 which suggests that differing rates of other STIs should not be a major concern in generalizing the HIV prevention results of the RCTs from one setting to another.
# Genital Ulcer Disease (GUD).
Male circumcision is associated with a reduction of HSV-2 and GUD 6,7 in randomized controlled trials and a reduction of GUD due to syphilis 18 or chancroid 19 in observational studies.
# GUD (various types).
Two RCTs provide evidence of an association between male circumcision and a reduction in GUD. 7,16 In the Kenyan RCT, male circumcision was associated with a reduction in GUD (RR 0.52). 7 This reduction occurred regardless of HSV-2 status: male circumcision significantly reduced symptomatic GUD in HSV-2-seronegative men (PRR 0.51), HSV-2-seropositive men (PRR 0.66), and HSV-2 seroconverters (PRR 0.48). 7 In the Ugandan RCT, male circumcision was also associated with a reduction in GUD in men (PRR 0.53) 6 and in their female partners (adjusted prevalence rate ratio 0.78). 16

Herpes Simplex Virus (HSV-2). HSV-2 infection is often asymptomatic but can cause genital ulcers. Compelling evidence of the protective effect of male circumcision on HSV-2 acquisition is available from 2 of the 3 RCTs. 8,14 In the South African trial, the IRR for acquisition of HSV-2 through 21 months of follow-up was 0.66 (95% CI = 0.39-1.12) for the intervention arm in the ITT analysis and 0.55 (95% CI = 0.32-0.94) for circumcised men in the AT analysis. 14 In the Uganda RCT, among 1,684 intervention and 1,709 control participants who were HSV-negative at baseline, the adjusted HR for HSV-2 infection in the intervention group was 0.72 (95% CI = 0.56-0.92) at 24 months in the ITT analysis. 8 In these 2 clinical trials, circumcised men were approximately 30% to 45% less likely to become infected with HSV-2 over 21 to 24 months of observation. In addition, investigators estimated the probability of per-sex-act female-to-male HSV-2 transmission in South Africa and found a positive correlation between HIV and HSV-2 infections and a protective effect of male circumcision on HSV-2 acquisition by males. 213 In the RCT in Kisumu, Kenya, which included 1,391 men assigned to the circumcision arm and 1,393 men assigned to the delayed circumcision arm, male circumcision was not associated with the cumulative incidence of HSV-2 through 24 months of follow-up (overall HSV-2 incidence not reported; circumcised vs. uncircumcised point estimates 5.8 and 6.1 per 100 person-years, respectively; RR = 0.94) 7 or through 72 months of follow-up (overall HSV-2 incidence = 33.5%; circumcised vs. uncircumcised point estimates 33.5% and 32.7%, respectively; crude HR 0.89). 214 Investigators from the Kenyan study and others hypothesized that the inconsistency between the Kisumu RCT and the South African and Ugandan RCTs may have been due to the location of lesions, test performance, higher prevalence of HSV-2 infection, and greater risk of exposure for younger men in Kisumu. 7,214,215 For example, 37% of clinically detected genital ulcers in Kisumu were on the penile shaft rather than the foreskin mucosa; similar data were not reported for the other 2 RCTs. Also, the sensitivity and specificity of the Kalon test for detecting HSV-2 were higher in Kampala, Uganda (95% and 88%, respectively) than in Kisumu, Kenya (92% and 79%, respectively). However, a re-analysis of the Kenya data using various Kalon index optical density cut-off values to vary the specificity did not find a protective effect of male circumcision against HSV-2. 218 In the Uganda RCT, female partners of circumcised HSV-2-positive males did not have significantly lower HSV-2 acquisition compared to partners of their uncircumcised male counterparts (IRR = 0.85). 219 Observational studies have provided mixed results. 18,19,71,73,82
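To see why the Kalon test's lower specificity in Kisumu matters, note that non-differential misclassification pulls an observed risk ratio toward 1. The sketch below illustrates this attenuation with invented numbers (a hypothetical true RR of 0.70), not trial data:

```python
def observed_risk(true_risk: float, sensitivity: float, specificity: float) -> float:
    """Apparent seroconversion risk when a test misclassifies some participants."""
    return sensitivity * true_risk + (1.0 - specificity) * (1.0 - true_risk)

# Illustrative inputs (not trial data): a hypothetical true protective effect of RR = 0.70
true_circ, true_uncirc = 0.07, 0.10

for sens, spec in [(0.95, 0.88), (0.92, 0.79)]:  # Kampala-like vs. Kisumu-like performance
    rr_obs = observed_risk(true_circ, sens, spec) / observed_risk(true_uncirc, sens, spec)
    print(f"sens={sens}, spec={spec}: observed RR = {rr_obs:.2f} (true RR = 0.70)")
```

Running this shows the observed RR drifting from roughly 0.88 toward 0.92 as specificity falls, because false positives accrue equally in both arms and dilute the true effect.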
In an early review of 6 observational studies, 2 found that male circumcision was protective against HSV-2 and 4 found no association. 206 In a subsequent review of 10 observational studies related to HSV-2 serostatus, 6 studies found a reduced relative risk associated with male circumcision status, and the difference was statistically significant for 2 of the studies. 207 Compared to uncircumcised men, circumcised men had a summary estimated relative risk for HSV-2 infection of 0.88 (95% CI = 0.77-1.01). In a cross-sectional observational study of men in rural Tanzania, those circumcised before sexual debut were less likely to be HIV seropositive than uncircumcised men (aOR 0.50), and were also less likely to be HSV-2 infected (aOR 0.67) or to have had genital ulcer syndrome in the past 12 months (aOR 0.69). 82 In a population-based observational survey in Kisumu, Kenya, conducted to estimate baseline male circumcision status and attitudes associated with male circumcision, circumcision status was not associated with HIV/HSV-2 infection. 220 Observational data from a cross-sectional study in the United States have not shown an association between male circumcision status and HSV-2 infection: in an evaluation of 3,850 U.S. adolescent and adult males aged 14-49 years who reported having had sex, conducted by the National Health and Nutrition Examination Surveys (NHANES), there was no association between self-reported circumcision status and HSV-2 infection after controlling for potential confounders such as age, race/ethnicity, and sexual behaviors. 221

Treponema pallidum. Syphilis, caused by T. pallidum, classically presents as a painless genital ulcer. A review of 11 studies in which genital ulcers were due either to chancroid or syphilis found statistically significant decreases in the risk of GUD among circumcised men. 206 In addition, of 14 studies that assessed the association between male circumcision and a serologic diagnosis of syphilis, 13 found a reduction in risk associated with male circumcision, and the difference was statistically significant in 4 studies. 207 The summary estimate of relative risk for syphilis was 0.69 (95% CI = 0.50-0.94) for circumcised versus uncircumcised men.
While there was no prevention benefit of male circumcision against syphilis acquisition in 2 of the randomized trials of male circumcision interventions, 7,8 a benefit was reported in a prospective cohort study of HIV-discordant couples enrolled in an RCT related to HIV PrEP. 20 In the Uganda RCT of male circumcision, syphilis was detected in 50 of 2,083 subjects (2.4%) in the intervention group, compared with 45 of 2,143 subjects (2.1%) in the control group (crude HR 1.14; aHR 1.10). 8 Circumcised men were less likely to report genital ulcers; however, nearly all genital ulcers with an identified etiology were attributed to herpes virus infection and not syphilis. At the 24-month follow-up visit in the Kenya RCT of male circumcision, only 13 men had developed syphilis among the 2,714 men who did not have syphilis at enrollment. Of the 13 men who developed syphilis, 6 were uncircumcised and 7 were circumcised. 7 Incident syphilis did not differ by circumcision status in this trial: 0.4/100 person-years in circumcised men versus 0.3/100 person-years in uncircumcised men (RR 1.23). Because of the small number of incident syphilis cases in this trial, no conclusions about the association between circumcision status and incident syphilis were drawn. During a median of 2.75 years of prospective follow-up of 4,716 HIV-1-serodiscordant Kenyan and Ugandan couples in an RCT of pre-exposure prophylaxis, male circumcision was associated with reductions in incident syphilis of 42% in men overall (aHR 0.58) and 62% in HIV-infected men (aHR 0.38). 20 HIV-uninfected men experienced no statistically significant reduction in incident syphilis associated with circumcision (aHR 0.64). 20 In this same trial, male circumcision was associated with reductions in incident syphilis of 59% in women overall (aHR 0.41), 75% in HIV-negative women (aHR 0.25), and 48% in HIV-positive women (aHR 0.52). 20

Haemophilus ducreyi. H. ducreyi, the organism that causes chancroid, is now uncommon in the United States. Only 1 observational study was found that included serologic diagnosis, so a review included 6 other studies that were based on clinical diagnosis. 207 Six studies found a reduced relative risk for circumcised versus uncircumcised males, which was statistically significant in 4 studies. Relative risks varied widely, and no summary relative risk was estimated due to variability in study design.
Other STIs. Male circumcision is associated with a reduction of high-risk HPV infections in RCTs. Mixed results for other STIs, such as trichomoniasis, are described in this section.
# Human Papilloma Virus (HPV).
HPV infection is generally asymptomatic, but oncogenic or high-risk HPV types (principally genotypes 16, 18, 31, and 33) are believed to be responsible for 100% of squamous cervical cancers, 90% of anal cancers, and 40% of cancers of the penis, vulva, and vagina. 226 Penile squamous carcinoma (caused by carcinogenic HPV subtypes) has been strongly and consistently associated with lack of male circumcision 206 (see the Penile Cancers section). Cervical cancer has been associated with lack of circumcision in male partners of women in several case-control studies 227 (see the Cervical Cancer section).
HPV prevalence. Two meta-analyses, of 21 228 and 23 229 studies, evaluated the potential association of male circumcision and HPV infection; they included mainly cohort studies, cross-sectional studies, and RCTs 8,10,11 published through 2010. The meta-analysis of 21 studies included 8,046 circumcised and 6,336 uncircumcised men, 228 but the other meta-analysis did not report the total numbers of circumcised and uncircumcised men. 229 Both meta-analyses concluded that male circumcision was significantly associated with reduced odds of prevalent genital HPV infection overall (OR 0.57 228 and OR 0.57 229 ). The RCTs, which were conducted in Uganda 8,10 and South Africa, 11 studied circumcision and prevalent HPV. In the Uganda RCT, the overall prevalence of HPV of any risk type was similar at baseline in both arms prior to the circumcision intervention, but lower in the circumcision arm at the 24-month follow-up visit (RR 0.70). 8 The overall prevalence of any high-risk HPV genotype at the 24-month follow-up visit in Uganda was also lower among circumcised men who were both HIV-negative and HSV-2-seronegative at baseline (aRR 0.65) 8 and among HIV-positive circumcised men (RR 0.77) 10 compared with uncircumcised men in the control arms. In the South African RCT, the overall prevalence of any urethral high-risk HPV genotype at the 21-month follow-up visit was lower among circumcised men than among uncircumcised men in both the ITT and AT analyses (ITT: aPRR 0.68; AT: aPRR 0.62). 11 Circumcised men in the Ugandan RCT also had a lower prevalence of multiple high-risk HPV genotypes at the 24-month follow-up (RR 0.53) than uncircumcised men. 10 Among 15,162 men and women aged 16-74 years enrolled in the third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) in Britain, circumcised men were less likely than uncircumcised men to have any HPV type (aOR 0.26), high-risk HPV c (aOR 0.14), and possible high-risk HPV. 231 However, an important limitation of the study was the lack of specificity of study results by site of penile swabbing for HPV sampling. Because results for HPV samples taken from the glans penis, coronal sulcus, penile shaft, and scrotum were combined into an aggregate result for each patient, results for samples taken from the glans penis, an area hypothesized to be more likely protected by male circumcision, could not be distinguished from results for samples taken from other sites.

c High-risk-HPV was defined as being positive for genotype(s) 16
HPV incidence or acquisition. A meta-analysis of 23 articles found that male circumcision was associated with decreased HPV incidence and that the effect of male circumcision on reducing prevalent HPV infection was stronger at the glans/corona and urethra than at penile areas more distal to the foreskin. 229 The incidence of high-risk HPV genotypes in relation to circumcision was also studied in the Uganda and Kenya RCTs. At the 24-month follow-up visit, among HIV-positive men in the Uganda trial, there were no significant differences between circumcised and uncircumcised men in the incidence of high-risk HPV genotypes overall (RR 0. In a multi-national study of 1,469 circumcised men and 2,564 uncircumcised men, male circumcision was not significantly associated with overall incident HPV infection (aHR 1.08). 233 In a cohort study of 359 circumcised and 118 uncircumcised university students from Seattle, Washington, who underwent testing for 37 alpha HPV genotypes from 3 genital sites (shaft/scrotum, glans, and urine), rates of acquiring clinically relevant HPV genotypes (high-risk genotypes plus HPV-6 and HPV-11) did not differ by circumcision status, although infections were more likely to be detected at all 3 sites versus at only 1 site among uncircumcised men than among circumcised men. 234

HPV clearance. Two meta-analyses concluded that male circumcision was not significantly associated with HPV clearance. 228,229 Clearance of high-risk HPV genotypes was also studied in both the Uganda and Kenya trials. In Uganda, overall clearance of HPV genotypes was similar in both study arms among HIV-positive men (RR 1.09), 10 but clearance was higher among circumcised men than uncircumcised men in the analysis restricted to HIV-negative men (aRR 1.39) 12 and in the analysis of a mixed population of HIV-positive and HIV-negative men (aRR 1.48). 232 Similarly, at the 6-month follow-up visit among men in the Kenya trial, circumcised men had lower persistence of HPV-16 with high viral load (RR 0.36) and HPV-18 with high viral load (RR 0.34) than uncircumcised men. 13 In the Uganda RCT, when the analysis was restricted to results at the 24-month follow-up visit among men infected with one of 6 selected high-risk HPV genotypes (16, 18, 31, 33, 35, and 52), circumcised men had a lower viral load associated with HPV infections acquired after enrollment compared with uncircumcised men, but the same association was not seen with HPV infections persisting from enrollment. 235 This may help to explain why, at the 24-month follow-up visit, HPV-infected women who were partners of circumcised men in the Uganda RCT had a lower prevalence of high-risk HPV DNA load (PRR 0.

T. vaginalis. Trichomoniasis, caused by the parasite T. vaginalis, is believed to be the most common curable STI in the United States. The infection is generally asymptomatic in men but can cause severe cervicitis, vaginal discharge, and labial itching and irritation in women, and it may increase susceptibility to HIV. 236 The association of T. vaginalis and male circumcision had not previously been studied in any major observational studies. In the South African RCT, the effect of male circumcision on T. vaginalis infection was measured by polymerase chain reaction (PCR) of urine specimens. 14
Circumcised men were less likely to have a prevalent trichomonas infection (1.7%) than were uncircumcised men (3.1%), with statistical significance in the AT analysis (aOR 0.47) and borderline statistical significance in the ITT analysis (aOR 0.53). However, in the Kenya trial, which measured T. vaginalis by culture of participants' urine and urethral discharge, no significant association between male circumcision status and trichomonas infection was found. 208 The Uganda RCT assessed trichomonas infections in female partners. The prevalence of T. vaginalis was about half as high among the HIV-negative wives of married participants who were circumcised (5.9%) as among HIV-negative wives of participants who were uncircumcised (11.2%) (aPRR 0.52). 16

Mycoplasma genitalium. M. genitalium causes male urethritis, including persistent or recurrent urethritis, 237 and cervicitis and pelvic inflammatory disease in women. 238,239 Nucleic acid amplification testing (NAAT) is the easiest method to diagnose M. genitalium infection, but there is no FDA-approved diagnostic test for M. genitalium. 240 In a cross-sectional study of 526 men enrolled in the Kenya RCT, circumcised men had half the odds of being infected with M. genitalium (aOR 0.54) compared with uncircumcised men. 15

Chlamydia trachomatis. C. trachomatis causes urethritis in men and cervicitis and pelvic inflammatory disease in women. Before accurate tests were available, chlamydial infection in men was often diagnosed syndromically as "non-gonococcal urethritis" after exclusion of gonorrhea by Gram stain. Of 8 observational studies of non-gonococcal urethritis, 2 found that male circumcision was protective, 3 found that it increased risk, and 3 found no association. 206 In women, 1 cross-sectional study, using the presence of antibodies to C. trachomatis as the measure of infection, found chlamydial infection among female partners of circumcised men to be 5.6-fold lower than among partners of uncircumcised men (OR 0.18). 242 In another cross-sectional study, C. trachomatis infection was not associated with the circumcision status of the partner (HR 1.25). In the South African RCT, the prevalence of C. trachomatis infection differed between male participants in the circumcision intervention arm (2.1%) and the control arm (3.6%), but the association was not statistically significant in the AT analysis (aOR 0.75). 14

Neisseria gonorrhoeae. Gonorrhea is caused by the bacterium N. gonorrhoeae and can lead to urethritis in men and cervicitis and pelvic inflammatory disease in women. Of 7 observational studies, 5 found statistically significant decreases in risk among circumcised men and 2 found no association with circumcision status. 206 However, no association has been demonstrated in prospective trials. In the Uganda RCT, there was no association between male circumcision and self-reported urethritis or discharge in men or women. 6 In the South Africa trial, the prevalence of gonorrhea, tested by polymerase chain reaction in first-void urine, was similar in the male circumcision (10.0%) and control (10.3%) groups. 14 Similarly, in the Kenya trial, no association between male circumcision status and gonorrhea was found. 208

Bacterial vaginosis (BV). BV, a clinical syndrome in which anaerobic bacteria, G. vaginalis, Ureaplasma, and Mycoplasma replace hydrogen-peroxide-producing Lactobacillus in the vagina, may occasionally produce vaginal discharge or malodor but is most often asymptomatic. 240 BV can be diagnosed by Gram stain or clinical diagnostic criteria. 244
In the Uganda RCT, male circumcision was associated with a reduced prevalence of any BV among female partners.
# Penile and prostate cancers
Penile cancer is rare in developed countries, accounting for < 1% of malignancies among men. 245 Observational studies have found that among men diagnosed with penile cancer, only a small percentage were circumcised. Aside from circumcision status, penile cancer is associated with a history of HPV infection 31,246 and certain lifestyle factors such as smoking, 247,248 poor hygiene, 249 and multiple sex partners. 250 In one meta-analysis, key risk factors for penile cancer included phimosis (8 studies), smegma (4 studies), balanitis (4 studies), and high-risk HPV types (10 studies), and these risk factors were more prevalent in uncircumcised than in circumcised men. 26 Invasive penile cancer is very rare in circumcised men. The lifetime risk for a U.S. male of ever being diagnosed with penile cancer is 1 in 1,437. 29 During 1982-2005, the overall incidence of penile cancer was higher in England and Wales (1.44 per 100,000 man-years) than in Australia (0.80 per 100,000 man-years) and the United States (0.66 per 100,000 man-years). 251 It is hypothesized that the lower rates of male circumcision in England and Wales compared with the United States and Australia may partly explain the differing penile cancer rates. 251 In a retrospective analysis of 89 cases of invasive penile cancer diagnosed from 1954 through 1997, 98% were in uncircumcised men; of 118 cases of carcinoma in situ, 84% were in uncircumcised men. 27 In a retrospective review of 5 studies with 592 cases of invasive penile cancer in the United States, none of the cases were in men who had been circumcised in infancy. 24 It has been suggested that male circumcision may be protective by preventing phimosis. 252 In a population-based case-control study, men not circumcised during childhood were at increased risk of invasive (OR 2.3), but not in situ (OR 1.1), penile carcinoma. 253 Among uncircumcised men, phimosis was strongly associated with invasive penile cancer (OR 11.4). The racial/ethnic distribution of penile cancer in the United States reflects the varying prevalence of male circumcision: in an analysis of penile cancer among 6,539 U.S. men identified through population-based registries during 1995-2003, Hispanic men had the highest age-adjusted incidence (6.58 per million), followed by black men (4.02 per million) and white men (3.9 per million). 25

The lifetime risk of prostate cancer among men in the United States during 2008-2010 was about 15%. 254 Prostate cancer had the second highest age-adjusted invasive cancer incidence rate in the United States during 2012 255 and was also one of the leading causes of cancer death among men, with 27,682 men dying from prostate cancer in 2013. 256 Infection with STIs has been associated with the development of prostate cancer in some studies but not others. In 1 meta-analysis, an increased risk of prostate cancer was associated with a history of any STI (OR 1.5). 264 Risk factors for STIs have also been associated with prostate cancer, including earlier age at first sexual activity 265 and a greater number of sexual partners. 261,266
A meta-analysis of the association between male circumcision and prostate cancer, based on 7 reports from case-control studies published during 1971-2014, did not show a significant association between prostate cancer and circumcision in the overall analysis (OR 0.88, P = 0.19), but it did show significantly reduced risk in the following subgroups of published studies: those conducted after PSA testing became available (OR 0.83, P = 0.03), population-based studies (OR 0.84, P = 0.05), studies that collected data by personal interview (OR 0.83, P = 0.03), and studies including black race as a variable (OR 0.59, P = 0.02). 267 In an ecologic study evaluating the relationship between male circumcision prevalence and the prostatic carcinoma mortality rate in 85 countries, investigators reported that countries with male circumcision prevalence exceeding 80% had significantly higher prostate cancer mortality than countries with circumcision prevalence ranging from 0%-19% (aOR 1.82) or 20%-80% (aOR 1.80). 268 In a population-based case-control study in Montreal, Canada, among 1,590 men with prostate cancer and 1,680 population controls, circumcision was associated with lower rates of prostate cancer in men circumcised at age ≥ 36 years (OR 0.55), with the largest reduction seen in black men (OR 0.40). 28 In a combined analysis using pooled data from 1,754 cases and 1,645 controls from 2 population-based case-control studies, circumcision before first sexual intercourse was associated with a 15% reduction in the risk of prostate cancer compared with men who were uncircumcised or circumcised after first sexual intercourse; the prevalence of prostate cancer was 64.9% for circumcised men and 69.0% for uncircumcised men (OR 0.85). 30
# Cervical cancer in female partners of circumcised men
In a meta-analysis of male circumcision status and cervical cancer in female partners, data from 7 case-control studies were pooled. 227 Circumcision was associated with significantly less HPV infection in men. In an analysis restricted to monogamous women, there was a nonsignificant reduction in the odds of cervical cancer among women with circumcised partners (OR 0.75). When couples in which the man had ≤ 5 lifetime sex partners (40% of the study population) were excluded, the odds of cervical cancer among female partners of circumcised men were significantly reduced compared with female partners of uncircumcised men (OR 0.42).
# Urinary tract infection (UTI) in males
Studies have consistently demonstrated decreased incidence of UTIs among circumcised boys compared with uncircumcised boys. A multicenter prospective study of 1,025 febrile infants aged < 2 months found that 9.0% of the fevers were attributable to UTIs; 21.3% of the uncircumcised male infants had UTIs, compared with 2.3% of the circumcised male infants. 36 A large cohort study including all births (n = 427,698) in U.S. Army hospitals worldwide during 1975-1984 demonstrated an increase in the total number of UTIs among male infants as the circumcision rate declined over time. 35 In a meta-analysis including 22 studies and over 336,000 males, the relative risk for UTI was higher for uncircumcised boys than for circumcised boys in all 3 age groups studied, including infants aged 0-1 year (RR 9.9).
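As an illustration of how such relative risks are computed, the sketch below recomputes the risk ratio implied by the febrile-infant proportions cited above (21.3% vs 2.3%). It uses only those published point estimates; the helper function is ours, not from any cited study, and no confidence intervals are computed.

```python
# Illustrative sketch: relative risk (RR) from the proportions reported in
# the febrile-infant study cited above. Only the published point estimates
# are used; no confidence intervals are computed here.

def relative_risk(risk_exposed: float, risk_unexposed: float) -> float:
    """RR = risk in the exposed group / risk in the unexposed group."""
    return risk_exposed / risk_unexposed

uti_uncircumcised = 0.213  # 21.3% of uncircumcised febrile male infants had UTIs
uti_circumcised = 0.023    # 2.3% of circumcised febrile male infants had UTIs

print(f"RR = {relative_risk(uti_uncircumcised, uti_circumcised):.1f}")
# ~9.3, the same order of magnitude as the meta-analytic RR of 9.9 for ages 0-1
```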
Among infants hospitalized for UTIs in Canada, the annual rate of UTIs necessitating hospitalization was 0.70% for uncircumcised infants versus 0.18% for circumcised infants (P < 0.0001). 34 By decreasing the likelihood of UTIs in infants overall, neonatal circumcision also reduces the likelihood of the serious complications associated with infant UTIs, including sepsis, pyelonephritis, and renal scarring. Such complications have been associated with increased potential for long-term consequences, including hypertension, uremia, and end-stage renal disease. 270
# Other health conditions
The presence of a foreskin has been associated with various penile dermatoses, including psoriasis, infections (e.g., molluscum and candidiasis), lichen sclerosus, and seborrheic dermatitis. 271 Balanitis (inflammation of the glans penis) and balanoposthitis (inflammation of the glans and the prepuce) are painful conditions that occur more frequently in uncircumcised males. In a retrospective cohort study of boys, the total frequency of complications (balanitis, irritation, adhesions, phimosis, paraphimosis) was higher among uncircumcised boys than circumcised boys (14% vs 6%), but most conditions were minor. 274 A prospective longitudinal study of over 500 boys in New Zealand found the adjusted rates of penile conditions in infants aged < 1 year to be 5% in circumcised boys and 1% in uncircumcised boys (P < 0.01); these conditions included phimosis, penile inflammation, inadequate circumcision, and post-circumcision infection. However, among children aged 1-8 years, the collective adjusted rates were 7% in circumcised boys and 17% in uncircumcised boys (P < 0.01). The majority of these problems involved penile inflammation, including balanitis, meatitis, and inflammation of the prepuce. 272 A separate study of penile hygiene in the United States found that subjects who retracted the foreskin when bathing were less likely to have smegma accumulation, inflammation, phimosis, or adhesions than those who did not. Significant correlations were also found between early instruction concerning hygiene and the type of hygiene practiced, suggesting that good hygiene can offer some of the advantages of circumcision. 275
# Health Conditions for Which Male Circumcision Is Indicated
Specific medical conditions for which male circumcision is indicated include phimosis, paraphimosis, and balanoposthitis. Phimosis is a narrowing of the preputial orifice that can lead to an inability to retract the foreskin over the glans; it can be categorized as physiological or pathological. 276 Physiological phimosis, a non-retractable foreskin or preputial adherence to the glans occurring in babies and young boys, 277 is a normal part of penile development: the foreskin separates from the glans over time with physical maturity, usually within the first 3 to 4 years of life and without intervention. 276 It is estimated that the prevalence of physiological phimosis decreases from 96% at birth, to 50% at age 1 year, to 10% at age 3 years, to 6%-8% at age 7 years, to 1% at age 16 years. 278 Pathological phimosis is a failure to retract the foreskin due to distal scarring of the prepuce and may occur as a result of balanitis xerotica obliterans (a progressive inflammatory dermatological condition), recurrent balanoposthitis, or forceful prepuce retraction. 276 In addition to difficulty in retracting the foreskin, pathological phimosis may result in pain on urination or erection, urinary retention, UTI, renal stones, dermatological infections localized to the area of phimosis, sexual dysfunction, and tearing of the foreskin. 276,282 Pathological phimosis is rare during the first 5 years of life, peaks before puberty, and has been estimated to be present in 0.8%-1.5% of boys in Liverpool, England, by their 17th birthday. 276 In the absence of a response to topical steroids, or when the child is not a candidate for steroid use, male circumcision is the definitive treatment.
Paraphimosis is the entrapment of a retracted foreskin behind the coronal sulcus. Because paraphimosis may constrict blood flow, leading to tissue damage and gangrene, it is considered a medical emergency. 283,277 Male circumcision may also be indicated for recurrent balanitis or balanoposthitis (inflammation of the glans, or of the glans and foreskin, respectively) if the condition does not respond to conservative medical treatment.
A study of 25,718 admissions for male circumcision in Western Australia that excluded neonatal circumcisions at birth found that the rate of circumcision (per 1,000 person-years) decreased from 5.51 at ages 0-4 years to 0.39 at ages ≥ 15 years. 284 Most male circumcisions were for phimosis, and some may have been performed unnecessarily for non-retractable foreskins or preputial adhesions. The rate of male circumcision for balanoposthitis was 0.44 at ages 0-4 years and decreased to 0.04 at ages ≥ 15 years.
# Safety and Risks Associated with Male Circumcision
In the United States, reported rates of complications in large studies of medically attended male circumcision in the neonatal period, including infants from birth to age 1 month, are approximately 0.2% (reported as 0.19%, 285 0.22%, 39 and 0.2% 41 ) and vary by type of study, setting, operator, and surgical technique.
Similarly, the reported rate of complications of medically attended male circumcisions occurring at any age in the United States is 0.23%. 40 In a comprehensive risk-benefit analysis of infant male circumcision based on reviews of the literature and meta-analyses, it was estimated that over a lifetime, benefits exceed risks by a factor of 100:1. 286 Based on a meta-analysis of 22 studies, most of which were conducted in the United States, it is estimated that 32.1% (95% CI = 15.6-49.8) of uncircumcised men, compared with 8.8% (95% CI = 4.15-13.2) of circumcised men, will experience a UTI in their lifetime, suggesting that lack of circumcision is associated with a 23.3 percentage-point increase in the risk of UTI during a man's lifetime. 37 The most common complications reported have been bleeding and infection, which are usually minor and easily managed. 1,39,41,285 Other reported complications, including wound dehiscence, unsatisfactory cosmesis, skin bridges, urinary retention, meatal stenosis, chordee, retained Plastibell devices that require surgical removal, "concealed" (or "buried") penis, major bleeding, injury to the urethra due to fistula, surgical mishap, and severe infection, are rare 287 and may occur after discharge from the hospital. More severe bleeding episodes may be a sign of an undiagnosed coagulation disorder; they underscore the need to conduct routine preoperative screening for such disorders, to include questions about family history of prolonged bleeding or bleeding disorders, and to have institutional protocols for circumcising infants with bleeding disorders, including therapy for treating prolonged bleeding after male circumcision. 52,53,55 In a study of 130,475 circumcised neonates, 0.18% had hemorrhagic complications, 0.04% suffered injury to the penis, and 0.0008% had cellulitis; the overall complication rate was 0.22%. 39 A similar AE rate of 0.19% was observed in a retrospective cohort of 100,157 circumcised neonates; AEs included local infection, bacteremia, hemorrhage, surgical trauma, and UTI. 285 In a smaller study, complications were associated with 4% of 361 neonatal male circumcisions (hemorrhage, infection, surgical revision) and 13% of 230 circumcisions performed after the neonatal period (adhesions, poor hygiene, meatitis, surgical revisions). 291 A recent meta-analysis of 16 prospective studies from diverse settings worldwide that evaluated complications following neonatal and infant male circumcision found that the median frequency of severe AEs was 0% (range 0-2%) and the median frequency of any complication was 1.5% (range 0-16%). Medically attended male circumcision performed on children tended to be associated with more complications (median frequency 6%; range 2-14%) than circumcision of neonates and infants. 292 In a study using data from a large longitudinal healthcare reimbursement dataset in the United States, investigators estimated the incidence of AEs during 2001-2010 attributable to male circumcision and assessed whether AE rates differed by the age group at which male circumcision was performed. 40 One study reported cases of meatal stenosis among circumcised boys aged 3 years or older but among no uncircumcised boys. 293 However, the study population was not clearly defined, and the diagnosed cases were not independently confirmed. In addition, the investigator reported that the low number of uncircumcised boys in the study resulted in a lack of power to demonstrate a significant association between circumcision status and meatal stenosis. A study among 3,125 boys aged 6-12 years in Tehran, Iran, demonstrated a much lower rate of meatal stenosis of 0.9%. 294
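To make the lifetime UTI estimates above concrete, the sketch below computes the absolute risk difference and the implied number of circumcisions per lifetime UTI averted. This is illustrative arithmetic on the published point estimates only, ignoring the reported confidence intervals.

```python
# Illustrative arithmetic on the lifetime UTI estimates cited above (point
# estimates only; the published confidence intervals are ignored here).

lifetime_risk_uncircumcised = 0.321  # 32.1% lifetime UTI risk, uncircumcised
lifetime_risk_circumcised = 0.088    # 8.8% lifetime UTI risk, circumcised

# Absolute risk difference, in percentage points
risk_difference = lifetime_risk_uncircumcised - lifetime_risk_circumcised
print(f"Risk difference: {risk_difference:.1%}")  # 23.3 percentage points

# Number needed to treat (circumcise) to prevent one lifetime UTI
nnt = 1 / risk_difference
print(f"Circumcisions per lifetime UTI averted: {nnt:.1f}")  # ~4.3
```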
Results from some studies have suggested a possible association between male circumcision and methicillin-resistant Staphylococcus aureus (MRSA) outbreaks. A case-control study of 2 outbreaks involving 11 otherwise healthy male infants at a well-infant nursery in a hospital in Los Angeles, California, identified circumcision as a potential risk factor. However, in no case did the MRSA infection involve the circumcision site, anesthesia injection site, or the penis, and MRSA was not found on any of the circumcision equipment or anesthesia vials tested. 44 In a review of published MRSA outbreaks, it was hypothesized that MRSA infections and circumcision might be associated. 295

Minimizing pain is an important consideration for the procedure. Appropriate use of analgesia is considered standard of care for male circumcision at all ages and can substantially control pain. 38 In one study, 93.5% of neonates circumcised in the first week of life with appropriate analgesia gave no indication of pain on an objective, standardized neonatal pain rating system. 38 In a review of 14 studies of analgesia in neonatal circumcision, most showed that a combined pharmacological and non-pharmacological approach best reduces pain, such as dorsal penile nerve block combined with other therapies including acetaminophen and nonnutritive sucking, or 2.5% lidocaine/2.5% prilocaine cream and acetaminophen. 296 In a study of 112 men aged 15 to 82 years in Edinburgh, Scotland, pain was reported to be mainly mild to moderate after circumcision under general anesthesia with intraoperative penile block; pain was rarely severe and occurred mostly after circumcision-related complications. 56

Because of their rarity, rates of severe complications are difficult to document. In one review article, data from many sources, including personal correspondence, were compiled to estimate the following rates of AEs per circumcision performed in the United States: excessive bleeding requiring ligature, 1 per 4,000; bleeding requiring transfusion, 1 per 20,000; severe infection requiring parenteral antibiotics, 1 per 4,000; subsequent surgery (e.g., for skin bridges), 1 per 1,000; repair of traumatic injury, 1 per 15,000; and loss of the entire penis, less than 1 per 1,000,000. 273 Three deaths attributed to male circumcision were reported during 1954-1989.
A study using data from a large longitudinal healthcare reimbursement dataset in the United States estimated the incidence rate difference (IRD) (subtracting the background rate of AEs in uncircumcised newborns) for potential serious AEs to range from a low of 0.76 per million male circumcisions (PMMC) (95% CI = 0.10-5.43) for stricture of the male genital organ to a high of 703.23 PMMC (95% CI = 659.22-750.18) for repair of incomplete circumcision. 40 Four amputations of the penis (incidence = 3.87 per million) occurred in uncircumcised newborns, and 3 partial amputations of the penis (incidence = 2.29 per million) occurred in circumcised newborns (IRD = -1.58 per million).
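The incidence-rate-difference calculation described above is simple enough to show directly; the sketch below reproduces the penile amputation comparison from the cited study (rates per million newborns, confidence intervals omitted).

```python
# Sketch of the incidence rate difference (IRD) calculation described above,
# using the penile amputation figures from the cited study (per million).

rate_circumcised = 2.29    # partial amputations per million circumcised newborns
rate_uncircumcised = 3.87  # amputations per million uncircumcised newborns

# IRD subtracts the background rate in uncircumcised newborns from the rate
# in circumcised newborns; a negative value means the event was rarer among
# circumcised newborns in this dataset.
ird = rate_circumcised - rate_uncircumcised
print(f"IRD = {ird:.2f} per million")  # -1.58 per million
```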
In a study of 1,239 infant male circumcisions using the Mogen clamp in Western Kenya, the overall AE rate was 2.7%. 297 Most AEs were mild or moderate and were treated conservatively. One severe AE, involving excision of a small piece of the lateral aspect of the glans penis, was documented. AEs were more common in babies aged ≥ 1 month, leading to the conclusion that infant male circumcision is optimally conducted within the first month of life.
Complication rates for medically attended adult male circumcisions were well documented in the 3 African clinical trials. The complications were of similar magnitude and severity across trials; the most common were pain, bleeding, infection, and unsatisfactory cosmesis, and the complication rate ranged from 2% to 4%. 43 In Kenya, the rate of complications was 1.7%, and the most common complications were bleeding and infection. 5 In South Africa, 3.8% of trial participants experienced complications; the most common were pain (31.7%), bleeding (15.0%), swelling or hematoma (16.7%), and problems with appearance (15.0%). 4 In Uganda, moderate to severe complications (those requiring any treatment) were reported in 3.6% of procedures, all of which resolved with treatment. 6 There were no reported deaths or long-term sequelae.
In an observational follow-up study of males aged ≥ 12 years who underwent voluntary medical male circumcision (VMMC) between November 2008 and March 2010 in 16 clinics in Nyanza Province, Kenya, the AE rate among clinic system participants was 0.1% during the intra-operative period and 2.15% during the post-operative period. 42 The rate increased to 7.5% among participants under active surveillance. Compared with providers who had performed fewer than 100 procedures, providers who had performed 100 or more were 63% less likely to perform a procedure resulting in an AE in a clinic-based passive surveillance system and 39% less likely among a randomly selected subset of clinic participants followed through a home-based active surveillance system (involving an in-depth interview), and they completed male circumcision procedures in a shorter time (15.5 vs 24.0 minutes, respectively). Those performing > 100 procedures achieved AE rates of 0.7% and 4.3% in the clinic-based passive and home-based active surveillance systems, respectively. 42 In Uganda, the mean time to complete male circumcision surgery was 40 minutes for the first 100 procedures and 25 minutes for the subsequent 100 procedures. 298 The rate of moderate and severe AEs was 8.8% for the first 19 unsupervised procedures after training, 4.0% for the next 20-99 procedures, and 2.0% for the last 100. All AEs resolved with medical management. Investigators concluded that > 100 circumcisions needed to be completed to achieve optimum duration of surgery and that the first 20 procedures after completion of training should be supervised.
# Effect of Male Circumcision on Sexual Function and Penile Sensation
The foreskin is a highly innervated structure, 299 and some authors have expressed concern that its removal may compromise sexual sensation or function. 59 However, in one survey of 123 men following medical circumcision in the United States, men reported improved sexual satisfaction and no change in sexual activity, despite decreased erectile function and penile sensation. 67 Furthermore, a small survey conducted in the United States among 15 men before and after circumcision found no statistically significant difference in sexual function or sexual satisfaction. 300 Other studies conducted among men after adult circumcision have found that relatively few men report a decline in sexual functioning after circumcision; most report either improvement or no change. 62,64,301,302 A systematic review of the literature on the histological correlates of penile sensitivity and sexual pleasure concluded that circumcision results in increased "access of genital corpuscles to sexual stimuli" and that exposure of the glans, rather than lack of prepuce, may be the most important factor in penile sensitivity and sexual pleasure. 69 Results from 2 other systematic literature reviews refute the assertion that circumcision compromises sexual sensation or function. 58,303 A systematic review and meta-analysis based on results from studies reporting original data evaluated the relationship between male circumcision and sexual function, sensitivity, and satisfaction and included 40,473 men, of whom 19,542 were uncircumcised and 20,931 were circumcised. 303 The authors of this systematic review used the Scottish Intercollegiate Guidelines Network grading system to grade the quality of the articles. 303 Of the 36 publications, 2 were classified as high-quality RCTs, and 34 were case-control or cohort studies. Of the 34 case-control or cohort studies, 11 were classified as high quality, 10 as well conducted, and 13 as low quality. The results from high-quality RCTs and high-quality or well-conducted case-control or cohort studies indicated that circumcision was not associated with an overall compromise of "penile sensitivity, sexual arousal, sexual sensation, erectile function, premature ejaculation, ejaculatory latency, orgasm difficulties, sexual satisfaction, pleasure, or pain during penetration." 303 Ten of 13 lower-quality studies found compromises in ≥ 1 parameter; 303 however, several critical flaws have been reported in at least 1 of these studies. 304 In another systematic review and meta-analysis including 10 studies, with 9,317 circumcised men and 9,423 uncircumcised men, there were no significant associations between male circumcision and sexual desire, dyspareunia, premature ejaculation, ejaculation latency time, erectile dysfunction, or orgasm difficulties. 58 Although no such differences were detected in 2 large, well-designed RCTs included in this systematic review, the authors suggest that more studies are needed in diverse settings over longer study periods to further elucidate this topic. 58 Three of 4 additional studies not included in the systematic reviews and meta-analyses described above also failed to find that circumcision reduces sexual function or satisfaction. 65,66,305,68 Results from a survey about erectile function and sexual quality of life conducted in Cottbus, Germany, among 2,499 men, including 167 circumcised men, indicated that male circumcision was not significantly associated with erectile dysfunction or sexual satisfaction. 65
Investigators included 35 survey items, incorporating the International Index of Erectile Function version 6 (IIEF-6), 306 and reported that this study represented the largest survey worldwide on male erectile dysfunction using the IIEF-6 as a validated instrument. 65 In a cross-sectional survey in Lusaka, Zambia, of 478 men (242 circumcised and 236 uncircumcised) who responded to the IIEF-5 questionnaire, circumcised men had higher average erectile function scores (P < 0.001), a higher percentage of participants with normal erectile function (P < 0.05), and a lower prevalence of erectile dysfunction (56% vs 68%; P < 0.05) compared with uncircumcised men. 305 Among 6,293 men surveyed through Britain's Natsal-3, 20.7% of whom were circumcised, there was no association between male circumcision and being in the lowest quintile of scores on the Natsal-SF (an indicator of poorer sexual function) (aOR 0.95). 66 Among 62 men in Portugal participating in telephone surveys using questions from 3 validated scales, including the IIEF, about sexual habits and dysfunction before and after circumcision, male circumcision was associated with an increased frequency of erectile dysfunction (9.7% vs 25.8%; P = 0.002) and delayed orgasm (11.3% vs 48.4%; P < 0.001), and with symptomatic improvement in patients reporting pain with intercourse (50.0% vs 6.5%; P < 0.001). 68
# CONSIDERATIONS RELATED TO MALE CIRCUMCISION IN THE UNITED STATES
Policy decisions regarding male circumcision need to be considered in light of several factors, including the domestic HIV burden, rates of male circumcision in the United States, acceptability of both adult and newborn male circumcision in the United States and abroad, risk compensation, policy issues, and cost-effectiveness, while also addressing ethical concerns.
# HIV Infection in the United States
The epidemiology of HIV in the United States differs considerably from that of the regions targeted by the WHO/UNAIDS recommendations and the sub-Saharan African areas of Kenya, 5 Uganda, 6 and South Africa, 4 where the RCTs were conducted. 87 The overall prevalence of HIV infection in the United States (0.47%) 160 is considerably lower than in Kenya (6.0%), Uganda (7.4%), and South Africa (19.1%), where the male circumcision clinical trials were conducted. 92 Most sexual transmission in the United States occurs among men who engage in male-to-male sexual contact, whereas in sub-Saharan Africa, transmission is predominantly through heterosexual sex. It should be noted, however, that HIV prevalence is high in some U.S. communities (for example, 2.5% of all adults and adolescents in Washington, DC) 307 and social networks. 308 In an analysis of surveillance data from 12 urban areas, overall HIV prevalence was between 1% and 2% in 4 of the areas, with most infections attributed to male-to-male sexual contact (66% male-to-male sexual contact alone, 3% male-to-male sexual contact and injection drug use [IDU]). 93 In addition, it is estimated that among persons living with diagnosed HIV infection in the United States in 2013, 75% were adult or adolescent males, 8% were adult or adolescent males who acquired HIV infection heterosexually, and 58% were adult or adolescent males who acquired HIV infection through male-to-male sexual contact (52.2% male-to-male sexual contact alone, 5.4% male-to-male sexual contact and IDU). 93 As noted earlier, there are few data showing a benefit of male circumcision on the risk of HIV associated with penile-anal sex or oral sex between men, and thus the benefit of circumcision among MSM is uncertain.
In the United States, an estimated 8% of HIV diagnoses in 2014 and 8% of persons living with HIV in 2013 were among adult and adolescent males with infection attributed to heterosexual contact. 93 Circumcision can therefore play a role in preventing HIV among men who engage in unprotected heterosexual vaginal sex, especially in communities where the prevalence of HIV infection among women is high or among men with multiple sex partners. The potential benefit of male circumcision as an intervention to prevent HIV infection among men who have sex with women depends upon the likelihood of HIV exposure among such men and, thus, upon the prevalence of HIV among their female sex partners.
The applicability of newly proven HIV prevention practices such as male circumcision across racial/ethnic groups is a critical consideration. Of the HIV diagnoses among non-Hispanic whites, non-Hispanic blacks, and Hispanics or Latinos in the United States in 2014, the highest rates of diagnosis per 100,000 population occurred in black adult or adolescent males (94.0) and black adult or adolescent females (30.0). 93 The overall estimated rate of HIV diagnosis among adult or adolescent males in the United States and 6 dependent areas in 2014 was 27.5 per 100,000 population. Overall, 8% of all estimated HIV diagnoses in 2014 in the United States and 6 dependent areas were among adult or adolescent males with infection attributed to heterosexual contact; the proportion attributed to heterosexual contact among adult or adolescent males was 11% among blacks or African Americans, 7% among Hispanics or Latinos, and 4% among whites. 93 In the United States during 2014, the rate of diagnoses of HIV infection for black women (30.0 per 100,000) was 5 times that for Hispanic or Latino women (6.5 per 100,000) and 18 times the rate for white women (1.7 per 100,000). 93
# Rates of Male Circumcision in the United States
The United States differs from some regions of sub-Saharan Africa in that most American men are already circumcised, including 80.5% of men and adolescents aged 14-59 years during 2005-2010. 94 The practice of circumcising male newborns for reasons unrelated to religious beliefs was introduced to the United States in the late 1800s, 3 and by the 1940s, an increasing proportion of male children in the United States were born in hospitals and circumcised shortly after birth. 310 The percentage of newborns who were circumcised annually reached 80% after World War II, peaked in the mid-1960s, and has since decreased by 28%, to 58% in 2010. In 2002, approximately 1.2 million newborn boys were circumcised prior to discharge from the hospital. 311 In 1996, an estimated 142,000 male circumcision procedures were performed beyond the neonatal period; of these, 49,000 were in persons older than 15 years. 312 Four nationally representative surveys have examined the prevalence of circumcision among U.S. males: 2 among newborns prior to discharge from the hospital, 1 among adult men, and 1 among adolescent males and adult men. According to the National Hospital Discharge Survey (NHDS), 65% of newborn boys born in hospitals were circumcised in 1999, and the overall proportion of newborns circumcised was stable from 1979 to 1999. 313 The proportion of black newborns who were circumcised during this period rose from 58% to 64%, while the proportion of white newborns who were circumcised remained stable at 66%. Significant differences in rates of male circumcision exist by region: while the proportion of newborns born in the Midwest who were circumcised increased over this 20-year period from 74% to 81%, the proportion of newborns born in the West who were circumcised decreased over the same period, from 64% to 37%. 313 From 2000 to 2007, newborn male circumcision rates in the NHDS declined from 63% to 56% (CDC, unpublished data). In another hospital discharge survey with different methodology (the Healthcare Cost and Utilization Project [HCUP] National Inpatient Sample), the rate of circumcision performed during newborn male delivery hospitalizations increased significantly from 48% in 1988-1991 to 61% in 1997-2000 314 and declined from 61% in 2000 to 57% in 2010. 315 Male circumcision was more common among newborns born to families of higher socioeconomic status, among patients with private insurance or belonging to a health maintenance organization, and among those born in the Northeast and Midwest. On multivariate analysis, black newborns were slightly more likely, and newborns of other races much less likely, to undergo male circumcision than whites. 314 These surveys document male circumcisions performed in hospitals and billed or coded in discharge diagnoses, but they do not capture male circumcisions that were not billed or coded, were performed outside of hospitals (e.g., in religious ceremonies), or were performed after the delivery hospitalization.
In a series of national probability samples in which random samples of adults living in U.S. households were surveyed during 1999-2004 as part of the NHANES, the overall prevalence of male circumcision among adult males in the United States was 79% and varied by race or ethnicity (88% in non-Hispanic white men, 73% in non-Hispanic black men, 42% in Mexican Americans, and 50% in men of other races and ethnicities). 221 Similarly, in a follow-up study, data from the NHANES 2005-2010 were used to estimate the prevalence of male circumcision among men and adolescents aged 14-59 years in the United States. The overall estimated prevalence of male circumcision in this population was 80.5% and also varied by race or ethnicity (90.8% in non-Hispanic whites, 75.7% in non-Hispanic blacks, and 44% in Mexican Americans). The estimated circumcision prevalence calculated retrospectively by birth cohort increased from 65.7% in the 1946-1949 birth cohort to a high of 83.3% in the 1960-1969 birth cohort, and decreased to 76.2% in the 1990-1996 birth cohort. 316 In a study of NHDS data during 1979-2010, the prevalence of newborn circumcision declined from 64.5% to 58.3% over this period, with the highest prevalence of neonatal circumcision (65.9%) in 1981 and the lowest (55.4%) in 2007. 286 This decrease in the prevalence of newborn circumcision in the United States was accompanied by an increase in the proportion of Hispanics in Western states 316 and the withdrawal of Medicaid coverage for the procedure in 18 states. 286 Hispanics typically have a lower prevalence of male circumcision than the overall U.S. population.
Using 2010 MarketScan claims data, investigators reported that 156,247 circumcisions were performed in privately insured boys aged 0-18 years, including 93.6% in neonates (aged ≤ 28 days) and 6.4% in postneonates (aged > 28 days). 317 Investigators estimated a neonatal circumcision rate of 65.7%. The proportion of circumcisions having a nonmedical indication was 81.6% among neonates and 25.1% among postneonates. The proportion of circumcisions having phimosis as the indication in neonates and postneonates was 7.9% and 66.3%, respectively. Of postneonatal circumcisions, 46.6% were performed in infant males aged < 1 year.
During 1993-2009, HCUP data on newborn male circumcision procedures during hospitalizations indicated that the percentage of newborn hospital stays during which a male newborn circumcision was performed increased from 55.3% in 1993 to a high of 62.7% in 1999, and then decreased to a low of 54.5% in 2009. 318 The decrease after 1999 coincided with the release of a 1999 statement by the American Academy of Pediatrics (AAP) indicating that the evidence of medical benefits from circumcision was not compelling enough to warrant routine newborn circumcision. 319 The AAP has since changed its policy, stating that the medical benefits of neonatal circumcision outweigh the risks. 320
# Acceptability
# Acceptability of adult male circumcision in the United States
It is not well understood whether American men and male adolescents at higher risk for heterosexual acquisition of HIV would be willing to undergo circumcision for partial HIV prevention, or whether parents would be willing to have their newborns circumcised to reduce the risk of possible future HIV infection. In a consumer survey assessing the acceptability of male circumcision as an HIV prevention intervention among adult males in the continental United States, investigators mailed surveys to a random sample of 19,996 potential respondents drawn from approximately 340,000 households. 321 Among the 789 uncircumcised men who completed survey responses, 13.1% reported that they would be likely or very likely to get circumcised if their health care provider told them it would reduce their risk of becoming HIV infected, including 15% of heterosexual men and 13% of MSM. 321 In contrast, in an analysis of data collected from MSM interviewed at gay pride events in 2006, over half of uncircumcised MSM, and 70% of uncircumcised black MSM, indicated that they would be willing to be circumcised if the procedure were proven to reduce the risk of HIV among MSM. 322 Willingness was associated with black race (OR 3.4), non-injection drug use (OR 6.1), and the perception that male circumcision reduces the risk of penile cancer (OR 4.7). Post-surgical pain and wound infection were the most commonly reported concerns about male circumcision in that study.
Based on a retrospective review of medical records of 500 male patients from an STD clinic in southern Florida, the proportion of men who were circumcised was 27% overall, 17% among Hispanics, and 36% among non-Hispanics. 323 In the same clinic, among a convenience sample of 39 Hispanic male and female patients during 2009, 53% of male respondents indicated that they would be willing to be circumcised. In a follow-up study by the same investigators, after 97 male and female patients attending the STD clinic received information about biomedical prevention strategies in the form of a pamphlet or brief video, the most preferred HIV prevention strategies were male condoms (34%), PrEP (18%), microbicides (18%), male circumcision (14%), and female condoms (14%). 324 Adult and adolescent male circumcision could potentially have the largest impact on HIV acquisition in populations in which a low percentage of males are circumcised and there is a high risk for HIV transmission through penile-vaginal sex. As noted above, among racial/ethnic groups, Hispanic men have the lowest rates of circumcision and higher rates of heterosexually acquired HIV than white men, while black men have the highest risk of heterosexually acquired HIV infection. Further research regarding acceptability of male circumcision in these populations is needed.
# Acceptability of adult male circumcision in sub-Saharan Africa
More research on the acceptability of adult male circumcision has been conducted in sub-Saharan Africa in countries where HIV prevalence is high and male circumcision is practiced less frequently. The studies discussed below addressed facilitators and barriers to male circumcision. While some of the facilitators and barriers may be culturally specific to sub-Saharan Africa, others are universal in nature and help inform the U.S. discussion.
A review of 13 articles concerning male circumcision in 9 sub-Saharan African countries found that a median of 65% of uncircumcised men reported willingness to be circumcised, but there was a wide range of acceptability by country (from 29% in Uganda in 1997 to 81% in Swaziland in 2006). 325 In a later study, the range of acceptability of male circumcision among uncircumcised men in 4 districts in Uganda in 2008 was reported to be 40%-62%. 326 Factors that increased acceptability of male circumcision in the studies from the review article and other studies include the perception of improved hygiene, protection from HIV and other STIs, 220,326,327 increased sexual pleasure, 326,327 acceptance of the procedure by a female partner, 335 and improved ease of condom use. 327,328 Barriers to male circumcision include concerns about the pain associated with surgery, 327,328,330,331,336 religious and cultural beliefs, 328,337 the cost of surgery, 327,330,337 complications from surgery, 326,327,329,331 lack of access to health care, 327 concerns about contracting HIV during the procedure, 326 the need for financial assistance during the recovery period to help maintain family income, 326 and beliefs about resulting changes in penile size, sensation, or performance. 333 The beliefs of women also have an impact on the acceptability of male circumcision, and their beliefs differ by country. 325 Women report several reasons for preferring that their male partners be circumcised: some believed that it is easier for men to maintain good hygiene if they are circumcised, 327,330,331,333 some believed that male circumcision decreases their own risk of acquiring STIs, 327 some preferred circumcised sex partners, and some believed that men enjoy sex more if they are circumcised. 330,332,333 In the Ugandan RCT, of 455 female partners of men circumcised as adults, 2.9% reported less sexual satisfaction after their partners were circumcised, 57.3% reported no change, and 39.8% reported an improvement. 338

# Acceptability of newborn male circumcision in the United States

Newborn circumcision has generally been well accepted in the United States, as evidenced by the rates of parents choosing to circumcise their newborn sons. Parents have typically made the decision based more on social concerns or perceptions of improved hygiene than on medical reasons. 291,339 A 1999 survey among parents of young boys found that those whose sons were uncircumcised were generally less satisfied with the decision than those who had chosen to circumcise their sons, and these parents of uncircumcised sons felt that they had not received adequate information. 340 It is not clear whether more information on the potential health benefits and risks of male circumcision would influence parents' decisions, particularly among racial/ethnic groups that do not typically elect to have their sons circumcised. In a survey of new parents, 76% responded that they probably or definitely would want circumcision for their male children, and few participants' attitudes changed after reading an AAP policy summary or after reading about the results of the RCTs on HIV and HPV risk reduction. 341
However, in a more recent telephone survey of nearly 10,000 respondents across the continental United States, sampled through random digit dialing, 88% of respondents said that they would definitely or probably circumcise a newborn son, including 65% who "definitely would" and 23% who "probably would"; 53% of all respondents (including those who said they would definitely have their sons circumcised) stated that they would be more willing to consider circumcision for a male newborn child based on information provided about potential future HIV risk reduction. 321 Approximately one-third of those who probably would not circumcise a newborn son responded that they were more likely to circumcise as a result of the information about a partial HIV protective effect later in life. Greater odds of not being inclined to circumcise a newborn son were associated with Hispanic ethnicity and other race/ethnicity compared with non-Hispanic whites; with uncircumcised men and men of unknown circumcision status compared with females; with individuals receiving postgraduate education compared with individuals with no more than a high school education; with individuals living in the South and West compared with those in the Midwest; and with those who were not or only somewhat confident in the safety of routine childhood vaccines compared with those who were confident or very confident in vaccine safety.
In an STD clinic in southern Florida, among a convenience sample of 39 Hispanic male and female patients during 2009, most male and female respondents expressed a preference for circumcising their children and reported that the best age to conduct male circumcision was during the first month of life. 323 In a study of attitudes and decision making about infant male circumcision in a predominantly Hispanic population in New York City, the parental decision in favor of circumcising a male infant was associated with parents who came from a culture and family that believed in circumcision and who believed that the procedure was not too risky. 342 Investigators studying whether the presence of state Medicaid coverage for infant male circumcision was associated with male circumcision rates in the United States found that the average rate of male circumcision was 55.9%, and that states with Medicaid coverage for routine male circumcision had, on average, male circumcision rates 24 percentage points higher than states without such coverage. Hospitals with higher percentages of Hispanic patients also had lower circumcision rates. 343 As of 2012, coverage for male circumcision through the Medicaid program was denied in 18 states. 344
# Acceptability of newborn male circumcision in sub-Saharan Africa
Although some of the issues related to acceptability of newborn male circumcision in sub-Saharan Africa may be culturally specific to this region, others are universal in nature and help inform the discussion of male circumcision in the United States. In addition, some of the culturally specific issues of sub-Saharan Africans may continue to influence decision making around male circumcision even after migration to the United States. In Uganda, willingness of men to have their sons circumcised ranged from 60%-86%, depending on geographic region. 326 A higher proportion of circumcised men (96%-100%) than uncircumcised men (59%-79%) were likely to have their sons circumcised. Women's support of a son's circumcision ranged from 49%-95%, based on geographic region. To prevent HIV or provide for a "healthier future" was the most common reason for willingness to support a son's circumcision. Concerns about male circumcision included cost, pain associated with surgery, the perception that circumcision would signify a religious conversion, lack of information about male circumcision, and the possibility that it would encourage children to engage in risky sexual activity. Household survey participants and healthcare workers preferred male circumcision during infancy or childhood (0-9 years) over adolescence (10-17 years) or adulthood (≥ 18 years). 326 In Zimbabwe, acceptability of early infant male circumcision was high among most ethnic groups; concerns included issues related to safety, questionable motivations behind free service provision by health care providers, handling of the discarded foreskin, separation of traditional circumcision from the adolescent initiation process, and female nursing of an infant's wound (which would be considered taboo). 345 In Botswana, among mothers who were interested in circumcision, protecting the infant from future infections such as HIV and hygiene were the main reasons expressed for circumcising their infants, while the child's comfort or safety during the procedure and the timing of the procedure at too young an age were concerns voiced by those not interested in the procedure. 346 Among 129 grandparents and parents participating in focus group discussions in Lusaka, Zambia, most participants felt there were HIV prevention benefits associated with circumcision, as well as advantages to conducting circumcisions at a young age. 347 Among these same focus group participants, barriers to neonatal circumcision included concerns about pain and cultural identity. Factors associated with allowing infant males to be circumcised among parents participating in a case-control study at 5 government hospitals in Nyanza Province, Kenya, differed by gender. 348 Among mothers, having a husband (the infant's father) who was circumcised, or agreeing with the husband about infant male circumcision, facilitated infant male circumcision. Among fathers, being circumcised and agreeing with the mother about infant male circumcision were associated with having the infant circumcised. Fathers were found to be the primary decision makers in 66% of instances.
# Provider attitudes and practices regarding male circumcision in the United States
Although many medical societies have addressed neonatal male circumcision, 320 few systematic data are available regarding provider attitudes and practices. In a nationally representative, self-administered, cross-sectional electronic survey of 1,500 physicians (510 family or general practitioners, 490 internists, 250 pediatricians, and 250 obstetricians/gynecologists) conducted in 2008, 33% of respondents thought that the medical benefits of newborn male circumcision outweighed the risks, 34% thought the benefits and risks were equal, 18% believed that the benefits did not justify the risks, and 15% reported not knowing whether the medical benefits outweighed the risks. 352 Overall, 39% of physicians reported being somewhat or very familiar with data from the male circumcision RCTs. Being supportive of newborn male circumcision was not associated with familiarity with African male circumcision trial results. Nevertheless, 22% (n = 327/1,500) of physicians in this study reported not understanding the risks and benefits of newborn male circumcision well enough to counsel parents, and 40% (n = 504/1,250) reported not understanding the risks and benefits well enough to counsel adult men, suggesting the need for more education of physicians regarding the latest male circumcision research so that they feel comfortable counseling adult men or parents of newborn male infants. 353 A study of health care providers' overall knowledge of infant male circumcision and of the effect of male circumcision in reducing HIV acquisition was conducted among 92 health care providers, including 37 obstetricians, 28 pediatricians, and 27 family practitioners, in an urban medical center in Chicago, Illinois. 354 Health care providers scored high on knowledge items related to AE rates and to the concepts that male circumcision protects against phimosis and UTIs and does not prevent hypospadias, but they scored lower on knowledge items related to the concepts that male circumcision protects against cervical cancer, GUD, and BV and reduces HIV acquisition. Pediatricians demonstrated greater overall knowledge of infant male circumcision, and obstetricians demonstrated greater knowledge of male circumcision as related to HIV acquisition.
Study results from interviews of a nonrandom sample of key informants and health care practitioners serving the Hispanic community in Miami, including physicians, nurses, and other allied health professionals, illustrated differing attitudes based on gender and highlighted the importance of supporting health care workers in any efforts to counsel clients about male circumcision and its role as an HIV prevention strategy. 355 Among male health care providers, the acceptability of male circumcision was associated with acceptance of the American Academy of Pediatrics guidelines and with personal circumcision status. Some male health care providers expressed skepticism regarding health benefits for sexually transmitted disease or HIV risk reduction. Female providers emphasized parental financial burden, lack of information, and low acceptability among Hispanic men.
# Cost-Effectiveness
The medical costs of male circumcision must also be accounted for in considering the role of circumcision for HIV prevention in any setting. While male circumcision has been shown to be a cost-saving HIV prevention intervention in sub-Saharan Africa, 356,357 the calculus is different in the United States, where the costs of performing male circumcision as well as HIV treatment costs are higher, and the risk of HIV infection is lower. Another important factor driving the cost-effectiveness is the length of time between the intervention and when the benefits are experienced. The value of these benefits is discounted over decades for newborn male circumcision, but over a shorter period for adult male circumcision.
One cost-utility analysis of male circumcision in the United States reported that circumcision increased incremental costs by $828 per patient and resulted in an incremental 15.30 well-years lost per 1,000 males. However, like most other cost-effectiveness analyses of male circumcision in the United States, it was conducted before publication of the RCTs that reported benefits of circumcision in preventing HIV and focused on the costs and benefits of conditions other than HIV; 358 cost-effectiveness increased if the procedure was cost-free, pain-free, and had no immediate complications. One evaluation of a large health maintenance organization database found the expected lifetime cost of male circumcision to be small compared with larger expected benefits. 359 Much of the benefit of neonatal male circumcision in that analysis derived from preempting the need for post-neonatal circumcision, which is substantially more costly. Two other studies published in 1991, which did not include an HIV prevention benefit, estimated that both costs and benefits were too small to play a substantial role in the decision whether to perform the procedure. 360,361 A model estimating the impact of newborn circumcision on a U.S. male's lifetime risk of HIV from heterosexual contact showed that circumcision reduced the 1.9% absolute lifetime risk by 15.7% overall: by 20.9% for black males, 12.3% for Hispanic males, and 7.9% for white males. 362 The number of circumcisions needed to prevent 1 HIV infection was 298 for all males and ranged from 65 for black males to 1,231 for white males. Newborn male circumcision was a cost-saving HIV prevention intervention overall, as well as for black and Hispanic males; the net cost of newborn male circumcision per quality-adjusted life-year (QALY) saved was $87,792 for white males. Results were most sensitive to the discount rate, male circumcision efficacy in preventing HIV acquisition, and cost. The main analysis did not take into account secondary prevention (i.e., HIV cases prevented among partners of circumcised males), the benefits of male circumcision in preventing other STIs, AEs, or a possible reduction in HIV risk from male-to-male sexual contact.
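As a rough illustration of the arithmetic behind such "number needed to circumcise" estimates, the sketch below combines the 1.9% lifetime heterosexual HIV risk with the modeled 15.7% overall reduction. The published figure of 298 came from a more detailed model with discounting and race-specific inputs, so this simplified point-estimate version only approximates its order of magnitude.

```python
# Rough illustration of the arithmetic behind "number needed to circumcise"
# estimates, using the overall figures cited above. The published estimate
# (298) came from a more detailed model with discounting and race-specific
# baseline risks, so this point-estimate version is only an approximation.

baseline_lifetime_risk = 0.019  # 1.9% lifetime risk of HIV, heterosexual contact
relative_reduction = 0.157      # modeled 15.7% overall reduction from circumcision

absolute_risk_reduction = baseline_lifetime_risk * relative_reduction
number_needed_to_circumcise = 1 / absolute_risk_reduction

print(f"Absolute risk reduction: {absolute_risk_reduction:.4f}")  # ~0.0030
print(f"Circumcisions per infection prevented: "
      f"~{number_needed_to_circumcise:.0f}")  # ~335, same order as 298
```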
A population-based model of the effect of adult male circumcision in MSM indicated that over 20 years, circumcision could very slightly reduce (< 1%) the number of new HIV cases among MSM (CDC, unpublished data). Although there are no conclusive data demonstrating a protective effect for MSM, the model assumed 50%-60% protection from HIV for circumcised men engaging in insertive sex. The net costs of the procedure were less than $50,000 per QALY saved, which is considered a conservative threshold for cost-effectiveness. The model included the prevention of secondary cases of HIV. The reduction in new cases of HIV was small because the chief source of HIV infection among MSM is receptive anal sex and the model assumed no circumcision-related protection for receptive acts.
Investigators evaluated the reduction in infections associated with male circumcision and the health care costs associated with continued decreases in male circumcision rates. They estimated that if male circumcision rates continue to decrease in the United States, such decreases would likely be associated with increased infection prevalence and, in turn, increased medical expenditures for men and women. For example, a reduction in the male circumcision rate to 10%, a rate similar to that in Europe, would increase lifetime health care costs by $407 per male and $43 per female and would increase net expenditure per annual birth cohort by $505 million. The projected increase in HIV infections would be responsible for 78.9% of increased costs. 363 Investigators conducting a cost-effectiveness analysis for MSM in Australia similarly found that male circumcision could be cost-effective or cost-saving in some scenarios, although a relatively small percentage of HIV infections would be prevented by circumcision of MSM and the associated cost was high relative to other HIV prevention programs. 364 They estimated that 2%-5% of HIV infections per year would be averted and that 118-338 male circumcisions would be required to prevent one HIV infection. Circumcising all MSM, all MSM aged 35-44 years, and all MSM who practice only insertive penile-anal sex were all cost-effective strategies compared with not circumcising any MSM.
A number of mathematical modeling studies have found that male circumcision would be an effective prevention tool in sub-Saharan Africa. Using mathematical modeling, UNAIDS, WHO, and the South African Centre for Epidemiological Modelling and Analysis (SACEMA) estimated that male circumcision among heterosexual men in areas with a low prevalence of male circumcision and a high HIV prevalence would be very beneficial; 5 to 15 male circumcisions would need to be performed to avert 1 HIV infection. 365 The estimated costs to avert 1 HIV infection would range from US$150 to US$900 using a 10-year time span. Investigators conducting a cost and impact study in Botswana estimated that scaling up adult and neonatal circumcision to reach 80% coverage by 2012 could avert 70,000 HIV infections through 2025, at US$689 per HIV infection averted and a total net cost of US$47 million between 2012 and 2025. 366 In a cost-effectiveness study of male circumcision over a lifetime in Rwanda, an African country with an adult HIV prevalence of 3%, infant male circumcision was found to be less expensive per procedure than adolescent and adult male circumcision (US$15 instead of US$59) and was determined to be cost-saving, despite the calculation that savings from an infant circumcision would require up to 52 years to be realized, which is the life expectancy at birth in Rwanda. Adult male circumcision was neither cost-saving nor highly cost-effective when considering only the direct benefit for the circumcised man. 367 A compartmental epidemic model simulating the population-level impact of various male circumcision programs on heterosexual transmission in Soweto, South Africa, incorporated gender-specific condom-use negotiation strategies into the male circumcision program. Investigators determined that even modest programs offering circumcision would yield significant benefits and estimated that a 5-year prevention program in which an additional 10% of uncircumcised males undergo circumcision annually would prevent 13% of expected new HIV infections over a 20-year period. 368

# Other Considerations
# Risk compensation
The possibility that men may alter their risk behavior and engage in riskier sex practices following circumcision could undermine the preventive health benefits of male circumcision. 84,369 In addition, generalized dissemination of public health information regarding male circumcision may introduce complacency and greater risk behavior among men circumcised early in life (i.e., during infancy through young adulthood).
In general, however, risk compensation was not observed among circumcised participants in the majority of RCTs. 5,74,77,370 A meta-analysis of secondary outcomes measuring sexual behavior in the Kenyan and Ugandan trials found no significant differences in risky sexual behavior between circumcised and uncircumcised men, 136 and in a subsample of men in the Kenyan trial, a detailed longitudinal sexual risk assessment indicated no statistically significant differences in sexual risk propensity scores or in incident infections of gonorrhea, chlamydial infection, or trichomoniasis by male circumcision status. 371 Similarly, the RCT conducted in Kenya found no significant difference in risk behavior between circumcised and uncircumcised men over 12 months of follow-up. 5 More recently, during 4.79 years of trial surveillance of participants in the Rakai randomized trial of male circumcision, there was no evidence of significant self-selection or behavioral risk compensation based on male circumcision status. 77,81 However, in the South African RCT, during 2002-2005, the mean number of sexual contacts was statistically significantly greater for circumcised men than for uncircumcised men at the visits during months 4-12 (5.9 vs 5.0).

In a U.S. study of men reporting >1 sexual partner or a new sexually transmitted infection in the past 12 months, the odds of potential risk compensation were higher among (1) non-Hispanic blacks and men of other race/ethnicity compared to non-Hispanic whites, (2) men reporting an annual household income $60,000, (3) men who were never married or widowed/divorced/separated compared to married men, (4) men who agreed that they have little control over the things that happen to them compared with men who disagreed, and (5) men aged >45 years compared with men aged 18-34. 321 Among a convenience sample of men attending 2 publicly funded sexually transmitted disease clinics in Louisville, Kentucky, and Cincinnati, Ohio, men who self-reported being uncircumcised, compared with those who self-reported being circumcised, were less likely to report unprotected vaginal sex in the past 3 months (60.5% and 82.9%, respectively; P < 0.001), using condoms for 50% or fewer of the sex acts occurring in the past 3 months (38.1% and 52.9%, respectively; P = 0.02), or having a recent STD in the past 3 months (11.8% and 25.4%, respectively; P = 0.01). 85
# Policy considerations regarding reimbursement
Until recently, most U.S. medical societies adopted relatively neutral stances regarding the practice of routine neonatal male circumcision. In 1999, the American Medical Association stated: "Virtually all current policy statements from specialty societies and medical organizations do not recommend routine neonatal circumcision, and support the provision of accurate and unbiased information to parents to inform their choice." 349 The AAP statement on neonatal male circumcision from that year, reaffirmed in 2005, concluded that there are "potential medical benefits … however, these data are not sufficient to recommend routine neonatal circumcision." 319 Similar neutral statements were issued by the American Academy of Family Physicians 374 and the American Urological Association (AUA). 375 The AUA states that "when circumcision is being discussed with parents and informed consent obtained, medical benefits and risks, and ethnic, cultural, religious, and individual preferences should be considered. The risks and disadvantages of circumcision are encountered early whereas the advantages and benefits are prospective."
However, in the wake of the male circumcision clinical trial results from Africa, the AUA modified its recommendation to say that, "While the results of studies in African nations may not necessarily be extrapolated to men in the United States at risk for HIV infection, the American Urological Association recommends that circumcision should be presented as an option for health benefits (but)… should not be offered as the only strategy for risk reduction." 376 In 2012, after the AAP's Task Force on Circumcision reviewed the latest evidence, the AAP updated its stance and concluded that new evidence indicates the preventive health benefits of newborn male circumcision outweigh the risks and that the benefits of newborn male circumcision justify access to this procedure for families who choose it. 320,377

In a study assessing insurance coverage and reimbursement for routine newborn and adult male circumcision in 2009, investigators reported that private insurance offered broader coverage than state Medicaid programs for routine neonatal male circumcision, and both public and private insurance plans offered only sparse coverage for adult male circumcision. 378 In 2009, based on data from HCUP, private insurance and Medicaid were the leading primary payers of hospital stays involving circumcision procedures in male newborns, having paid for 57.4% and 35.3% of such hospital stays, respectively. 318 Based on an analysis of MarketScan claims data from 2010, the average payment for circumcisions covered by private insurance was $285 for neonatal circumcision and $1,885 for postneonatal circumcision. 317 In 2 studies, reimbursement by Medicaid or private insurance for the costs of neonatal male circumcision was associated with higher hospital circumcision rates than in states that disallow Medicaid reimbursement or among patients without private insurance coverage. 314,343 In a study using hospital discharge data from the 2000-2010 Nationwide Inpatient Sample, circumcision incidence decreased significantly from 61.3% in 2000 to 56.9% in 2010 and overall remained higher for newborn hospitalizations covered by private insurance than for those covered by Medicaid (66.9% vs 44.0%). 315 During this same period, the proportion of male newborn hospitalizations with Medicaid coverage increased from 36.0% in 2000 to 50.1% in 2010. 315 In one retrospective study of rates of neonatal and early childhood male circumcision conducted during 1977-2001 and limited to 2 hospitals in the Midwest, insurance coverage was not correlated with rates of neonatal male circumcision. 379

Investigators in Louisiana and Florida, 2 states that no longer cover elective circumcision under Medicaid programs, studied the impact of the lack of such coverage on nonneonatal circumcision. In Louisiana, among boys aged ≤ 5 years, the average annual number and expense of neonatal circumcision decreased significantly in 2005, the year during which Medicaid coverage was eliminated for elective circumcision. 380 However, the number of "medically indicated" circumcisions began a steady increase during 2006-2010, and in 2010, expenditures for circumcision reached the same levels as those in 2002, before the loss of Medicaid coverage. Investigators estimated that nonneonatal circumcisions would account for an increasing share of all circumcision spending, from 43% in 2002 to a projected 96% in 2015.
In Florida, during 2003-2008, publicly funded circumcision procedures increased more than 6-fold compared with those covered by private insurance; this included a significant increase in the number of nonneonatal circumcisions and a doubling in the cost of nonneonatal circumcisions. 381
# Ethical Considerations
Ethical issues, in addition to medical benefits and risk, must be considered before making recommendations related to male circumcision. A subcommittee of CDC's PHEC composed of CDC staff and external (i.e., non-governmental) consultants from academia and a center for ethics was consulted in October 2009 to review the ethical considerations related to elective male circumcision in the United States. The ethical principles of beneficence (maximizing benefit and minimizing harm, both at the individual and societal level); autonomy (respect for individual values and choices); and justice (the obligation to fairly distribute risks, burdens and benefits, to minimize stigmatization, and to make decisions in a transparent fashion) were considered. Of particular importance were ethical questions related to parental decision-making on behalf of a newborn boy, targeting populations at high risk for HIV, and medical reimbursement for the procedure.
The subcommittee concluded that newborns cannot provide informed consent and so must rely on their parents or caretakers to determine and act in their best interests, raising the issue of autonomy in discussions of circumcision of male newborns. The subcommittee took into account varying opinions about the decision-making process, including that the decision about whether to be circumcised should be made by individuals when they are old enough to make their own informed decisions. It has been pointed out that a man with a foreskin can elect to be circumcised but a man circumcised as a newborn cannot easily reverse that decision. 382,383 Others argue that it is a choice that parents should be able to make on behalf of their male children because of the strong evidence showing that the procedure is beneficial and the risks are minimal if performed competently. Parents are generally given the authority to make decisions, such as vaccination, for their minor children based on their evaluative consideration of the child's best interests. Appropriately, this consideration takes into account social, cultural, and religious perspectives, as well as objective, scientific information about preventive health benefits and risks. Thus, in the opinion of the PHEC subcommittee, both a decision to circumcise and a decision to not circumcise are legitimate decisions, and either decision is an appropriate exercise of parental authority on behalf of a minor child.
There are advantages and disadvantages to performing male circumcision at various stages of life. The procedure is simpler, safer, and less expensive for neonates and infants than for adolescents and adults. However, the newborn has no ability to participate in the decision. Furthermore, although there is evidence of reduced UTIs among male infants who have been circumcised, the benefit of the protective effect against STIs, including HIV, is delayed for many years, not accruing until the child becomes sexually active. It is possible that new, less invasive interventions (e.g., effective topical microbicides or vaccines) may be developed in the intervening years. 385 Delaying male circumcision until adolescence or adulthood obviates concerns about violation of autonomy. However, performing the procedure after sexual debut would result in missed opportunities for prevention of HIV infection. 385,387 In the United States, previous sexual intercourse was reported among 32% of males aged 15 to 17 years and 65% of males aged 18 to 19 years. 388 Uptake of the procedure after the neonatal period is also likely to be lower due to the increased cost, greater likelihood of complications, and other barriers to male circumcision at a later age. The PHEC subcommittee concluded that the disadvantages associated with delaying male circumcision would be ethically compensated to some extent by the respect for the integrity and autonomy of the individual.
The prevalence of HIV infection in the United States is not as high as in sub-Saharan Africa, and most men do not acquire HIV through penile-vaginal sex. Targeting recommendations for adult male circumcision to men at elevated risk for heterosexually acquired HIV infection would be more cost effective than offering routine adult male circumcision. Men may be targeted according to sexual practices or an elevated prevalence of HIV within a geographic region or racial/ethnic group. However, some groups at high risk for HIV infection may also be more likely to be members of certain racial or ethnic groups, thus leading to the perception that men are being targeted because of their ethnic/racial status rather than their risk for HIV infection. Furthermore, recommendations to increase rates of male circumcision in the United States to reduce heterosexually acquired HIV infection among men may result in stigmatization of uncircumcised men, or of groups of men who are not routinely circumcised, should they choose not to undergo circumcision. Conversely, targeting populations at high risk may raise questions about distributive justice if persons in groups that are not targeted do not have equal access to the procedure. 385

The PHEC subcommittee concluded that programs incorporating male circumcision should be undertaken with sensitivity to the beliefs and practices of the communities affected, and potential participants must be provided with an accurate explanation of potential risks and benefits, as well as assurances of protection of their best interests and informed choice. 384,385,389 The PHEC subcommittee also noted that lack of health care insurance for some groups and lack of coverage for male circumcision by Medicaid in some states raise issues of distributive justice, and because data demonstrate that male circumcision has the potential to reduce the risk of HIV infection and other adverse health conditions, the procedure should be made available to all who want it.
# TABLES
aHR = adjusted hazard ratio; aOR = adjusted odds ratio; aPRR = adjusted prevalence rate ratio; aRR = adjusted risk ratio; AT = as-treated; BV = bacterial vaginosis; CI = confidence interval; GUD = genital ulcer disease; HPV = human papillomavirus; HR = hazard ratio; HR-HPV = high-risk human papillomavirus genotype; HSV = herpes simplex virus; IRR = incidence rate ratio; ITT = intention-to-treat; OR = odds ratio; P = P-value; PRR = prevalence rate ratio; RCT = randomized controlled trial; RR = risk ratio; STI = sexually transmitted infection
"id": "9a3644ae0b09828ca9963e3bafa9c16896562030",
"source": "cdc",
"title": "None",
"url": "None"
} |
Tuberculosis (TB) control can be particularly problematic in correctional and detention facilities, in which persons from diverse backgrounds and communities are housed in close proximity for varying periods. This report provides a framework and general guidelines for effective prevention and control of TB in jails, prisons, and other correctional and detention facilities. Recommendations were developed on the basis of published guidelines and a review of the scientific literature. Effective TB-prevention and -control measures in correctional facilities include early identification of persons with TB disease through entry and periodic follow-up screening; successful treatment of TB disease and latent TB infection; appropriate use of airborne precautions (e.g., airborne infection isolation, environmental controls, and respiratory protection); comprehensive discharge planning; and thorough and efficient contact investigation. These measures should be instituted in close collaboration with local or state health department TB-control programs and other key partners. Continuing education of inmates, detainees, and correctional facility staff is necessary to maximize cooperation and participation. To ensure TB-prevention and -control measures are effective, periodic program evaluation should be conducted.

# Introduction
Tuberculosis (TB) is a disease caused by Mycobacterium tuberculosis that adversely affects public health around the world (1). In the United States, TB control remains a substantial public health challenge in multiple settings. TB can be particularly problematic in correctional and detention facilities (2), in which persons from diverse backgrounds and communities are housed in close proximity for varying periods. Effective TB prevention and control measures in correctional facilities are needed to reduce TB rates among inmates and the general U.S. population.
The recommendations provided in this report for the control of TB in correctional facilities expand on, update, and supersede recommendations issued by the Advisory Council for the Elimination of TB (ACET) in 1996 (3). This report provides a framework and general guidelines for effective prevention and control of TB in jails, prisons, and other correctional and detention facilities. In addition, on the basis of existing scientific knowledge and the applied experience of correctional and public health officials, this report defines the essential activities necessary for preventing transmission of M. tuberculosis in correctional facilities. These fundamental activities can be categorized as 1) screening (finding persons with TB disease and latent TB infection [LTBI]); 2) containment (preventing transmission of TB and treating patients with TB disease and LTBI); 3) assessment (monitoring and evaluating screening and containment efforts); and 4) collaboration between correctional facilities and public health departments in TB control. These overarching activities are best achieved when correctional facility and public health department staff are provided with clear roles of shared responsibility. The recommendations in this report can assist officials of federal, state, and local correctional facilities in preventing transmission of TB and controlling TB among inmates and facility employees. The target audience for this report includes public health department personnel, correctional medical directors and administrators, private correctional health vendors, staff in federal and state agencies, staff in professional organizations, and health-care professionals. The report is intended to assist policymakers in reaching informed decisions regarding the prevention and control of TB in correctional facilities.

These recommendations were developed in consultation with the Corrections Working Group, an ad hoc group of persons with expertise in public health and health care in correctional facilities. Organizations represented in the Working Group included ACET, the National Commission on Correctional Health Care, the American Correctional Association, the American Jail Association, and the Society of Correctional Physicians. The Working Group reviewed published guidelines and recommendations, published and unpublished policies and protocols, and peer-reviewed studies discussing overall TB prevention and control and aspects of TB prevention and control specific to correctional and detention facilities. These guidelines, recommendations, policies, protocols, and studies form the basis for the Working Group's recommendations. Because controlled trials are lacking for TB prevention and control activities and interventions specific to correctional and detention facilities, the recommendations have not been rated on the quality and quantity of the evidence. The recommendations reflect the expert opinion of the Working Group members with regard to best practices, based on their experience and their review of the literature.
# Summary of Changes from Previous Recommendations
These guidelines are intended for short-and long-term confinement facilities (e.g., prisons, jails, and juvenile detention centers), which are typically referred to as correctional facilities throughout this report. These recommendations differ as follows from those made in 1996:
- The target audience has been broadened to include persons working in jails and other detention facilities.
- The need for correctional and detention facilities to base screening procedures for inmates and detainees on assessment of their risk for TB is emphasized. A description of how TB risk should be assessed is included.
- The need for institutions to conduct a review of symptoms of TB for all inmates and detainees at entry is discussed.
- The need for all inmates and detainees with suspected TB to be placed in airborne infection isolation (AII) immediately is emphasized.
- Testing recommendations have been updated to reflect the development of the QuantiFERON®-TB Gold test (QFT-G), a new version of the QuantiFERON®-TB (QFT) diagnostic test for M. tuberculosis infection.
- The section on environmental controls has been expanded to cover local exhaust ventilation, general ventilation, air cleaning, and implementation of an environmental control program. Ventilation recommendations for selected areas in new or renovated correctional facilities have been included.
- A section on respiratory protection has been added, including information on implementing respiratory protection programs.
- Treatment recommendations for TB and LTBI have been updated on the basis of the most recent treatment statements published by CDC, the American Thoracic Society (ATS), and the Infectious Diseases Society of America.
- Emphasis is placed on case management of inmates with TB disease and LTBI.
- The need for early discharge planning coordinated with local public health staff is emphasized.
- A section has been included on U.S. Immigration and Customs Enforcement detainees.
- The importance of collaboration between correctional facility and public health staff is emphasized, particularly with respect to discharge planning and contact investigation.
- The need for corrections staff to work closely with public health staff to tailor an appropriately comprehensive training program to achieve and sustain TB control in a correctional facility is emphasized.
- The need for public health workers to receive education regarding the correctional environment is emphasized.
- Program evaluation is emphasized. Recommended areas of evaluation include assessment of TB risk in the facility, performance measurement for quality improvement, collaboration, information infrastructure, and using evaluation information to improve the TB-control program.
# Background
During 1980-2003, the number of incarcerated persons in the United States increased fourfold, from approximately 500,000 in 1980 to approximately 2 million in 2003 (4,5). A disproportionately high percentage of TB cases occur among persons incarcerated in U.S. correctional facilities. At midyear 2003, although only 0.7% of the total U.S. population was confined in prisons and jails, 3.2% of all TB cases nationwide occurred among residents of correctional facilities (6). Although the overall incidence of new TB cases among the U.S. population has remained at <10 cases per 100,000 persons since 1993 (6), substantially higher case rates have been reported in correctional populations (2). For example, the incidence of TB among inmates in New Jersey during 1994 was 91.2 cases per 100,000 inmates, compared with 11.0 cases per 100,000 persons among all New Jersey residents (3). In 1991, the TB case rate for inmates of a California prison was 184 cases per 100,000 persons, which was 10 times greater than the statewide rate (7). In addition, in 1993, the TB rate for inmates in the New York State correctional system was 139.3 cases per 100,000 persons, an increase from the rate of 15.4 during 1976-1978 (3,8). In California, the TB case rate reported from an urban jail in a high-prevalence area was 72.1 cases per 100,000 inmates in 1998, representing 10% of the county's cases in that year (9). Studies have demonstrated the prevalence of LTBI among inmates to be as high as 25% (10-14). Other studies have demonstrated a correlation between length of incarceration and positive tuberculin skin test (TST) response, indicating that transmission might have occurred in these facilities (15,16).
At least three factors contribute to the high rate of TB in correctional and detention facilities. First, disparate numbers of incarcerated persons are at high risk for TB (e.g., users of illicit substances, persons of low socioeconomic status, and persons with human immunodeficiency virus [HIV] infection). These persons often have not received standard public health interventions or nonemergency medical care before incarceration. Second, the physical structure of the facilities contributes to disease transmission: facilities often provide close living quarters, might have inadequate ventilation, and can be overcrowded (9,17-19). Third, movement of inmates into and out of overcrowded and inadequately ventilated facilities, coupled with the existing TB-related risk factors of the inmates, makes correctional and detention facilities a high-risk environment for the transmission of M. tuberculosis and makes implementation of TB-control measures particularly difficult (19). Despite recent efforts to improve TB-control measures in correctional and detention facilities, outbreaks of TB continue to occur in these settings, and TB disease has been transmitted to persons living in nearby communities (20-22). Consequently, correctional and detention facilities are critical settings in which to provide interventions for detecting and treating TB among a vulnerable population.
# Addressing the Challenges of TB Control in Correctional Facilities
Published recommendations for elimination of TB in the United States include testing and treating inmates in correctional facilities for LTBI to prevent the development and transmission of TB (23). The basis for this recommendation is that LTBI and coinfection with HIV are more common in these underserved populations than in the general population (24-26). However, treating correctional inmates for LTBI can be challenging.
Before being incarcerated, inmates might have faced barriers to accessing community health services necessary for the detection and treatment of TB disease and LTBI (27). In addition, inmates released from correctional facilities often do not attend clinic visits or adhere to treatment regimens. One study of inmates released before completion of TB therapy indicated that only 43% made at least one visit to the clinic after release (28). In another jail setting, using an educational intervention increased the rate of clinic visits after release from 3% to only 23% (29).
In the United States, TB is concentrated increasingly among the most disadvantaged populations, particularly immigrants (30). Detained immigrants are arriving largely from countries with a high prevalence of TB (e.g., Mexico, the Philippines, and Vietnam) and therefore present unique challenges in the elimination of TB in the United States* (31). Social and legal barriers often make standard testing and treatment interventions inadequate among undocumented immigrants (31). In certain instances, these patients have become resistant to first-line anti-TB drugs because of the interrupted treatment received in their countries of origin (32). However, undocumented immigrants placed in detention and correctional facilities have an opportunity to receive TB screening and begin treatment for TB disease (33).

* The epidemiology of TB in the United States has changed dramatically since the early 1990s. Immigration from countries with a high prevalence of TB contributes substantially to the continued high rates of disease and transmission among foreign-born persons. In 2003, the rate of TB among foreign-born persons in the United States was 8.7 times higher than the rate for persons born in the United States. More than half of new TB cases in 2003 occurred in foreign-born persons, particularly those from Mexico, the Philippines, and Vietnam. Of 114 patients in whom multidrug-resistant TB (MDR TB) was diagnosed, foreign-born persons accounted for 95 (83%) cases (6). Detention facilities and local jails frequently contract with U.S. Immigration and Customs Enforcement (ICE) to house detainees, a practice that should be accounted for in assessing a facility's risk status.
# Rationale for Updating and Strengthening TB Control and Prevention Guidelines
Transmission of M. tuberculosis continues to be documented within correctional facilities, primarily as a result of undiagnosed TB. Inmates with undiagnosed TB disease place other inmates and correctional staff at risk for TB, and when released, these persons also can infect persons living in surrounding communities (16,17,20,21,22,34,35).
Despite the continued transmission of TB in correctional settings, few comprehensive evaluations of the implementation of TB-detection and -control procedures in correctional facilities have been performed (36-38). Nevertheless, correctional facilities are increasingly basing their TB prevention and control procedures on studies and data that support judicious interventions, including screening, case finding, case management, outbreak and contact investigations, and treatment for LTBI (7,9,14,21,28,33,34,39-46). Improving TB prevention and control practices within these settings is necessary to reduce rates of disease and eventually eliminate TB. TB prevention and control practices within correctional facilities should be strengthened for multiple reasons:
- M. tuberculosis is spread through the air. One highly infectious person can infect inmates, correctional staff, and visitors who share the same air space.
- Immediate isolation of infectious patients can interrupt transmission of M. tuberculosis in the facility.
- Prompt initiation of an adequate regimen of directly observed therapy (DOT)† helps ensure adherence to treatment because a health-care professional, a specially trained correctional officer, or a health department employee observes the patient swallowing each dose of medication. This method of treatment can diminish infectiousness, reduce the risk for relapse, and help prevent the development of drug-resistant strains of M. tuberculosis.
- Inmates of correctional facilities have been reported to have relatively high rates of HIV infection; persons who are coinfected with HIV and M. tuberculosis are at high risk for progressing from LTBI to TB disease.
- A completed regimen of treatment for LTBI can prevent the development of TB disease in persons who are infected with M. tuberculosis.
- Correctional facility officials have an opportunity to treat inmates who have TB disease or LTBI before such inmates are released into the community.
- Because a substantial proportion of inmates do not have any other access to the health-care system, the correctional setting can be a primary source of health information, intervention, and maintenance.
# Screening
Early identification and successful treatment of persons with TB disease remains the most effective means of preventing disease transmission (47). Therefore, inmates who are likely to have infectious TB should be identified and begin treatment before they are integrated into the general correctional facility population (i.e., at the time of admission into the correctional system). When possible, newly arrived inmates should not be housed with other inmates until they have been appropriately screened for TB disease. Screening programs in the correctional setting also allow for the detection of substantial numbers of persons with LTBI who are at high risk for progressing to TB disease and would likely benefit from a course of treatment. This secondary benefit of screening programs is often limited by inability to initiate and ensure completion of LTBI treatment, particularly in short-term correctional facilities. In addition to screening at intake, routine (i.e., at least annual) screening of long-term inmates and correctional facility staff (e.g., custody and medical) should be incorporated into the TB-control program (48,49).
How screening activities should be implemented depends on multiple factors, including 1) the type of facility, 2) the prevalence of TB infection and disease in the facility, 3) the prevalence of TB in the inmates' communities, 4) the prevalence of other risk factors for TB (e.g., HIV) in the inmate population, and 5) the average length of stay of inmates in the facility. The type of screening recommended for a particular facility is determined by an assessment of the risk for TB transmission within that facility. The risk assessment should be performed at least annually and should be made in collaboration with the local or state health department. A facility's TB risk can be defined as being minimal or nonminimal. A facility has minimal TB risk if
- no cases of infectious TB have occurred in the facility in the last year,
- the facility does not house substantial numbers of inmates with risk factors for TB (e.g., HIV infection and injection-drug use),
- the facility does not house substantial numbers of new immigrants (i.e., persons arriving in the United States within the previous 5 years) from areas of the world with high rates of TB, and
- employees of the facility are not otherwise at risk for TB.

Any facility that does not meet these criteria should be categorized as a nonminimal TB risk facility.
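As a minimal sketch, the classification above is a conjunction of the four criteria: a facility is minimal risk only if all four hold. The function and parameter names below are illustrative, and what counts as "substantial numbers" is a judgment left to the facility and the health department.

```python
def facility_tb_risk(infectious_tb_cases_past_year: int,
                     houses_many_inmates_with_tb_risk_factors: bool,
                     houses_many_recent_immigrants_from_high_tb_areas: bool,
                     employees_otherwise_at_risk: bool) -> str:
    """Classify a facility as 'minimal' or 'nonminimal' TB risk per the criteria above."""
    minimal = (infectious_tb_cases_past_year == 0
               and not houses_many_inmates_with_tb_risk_factors
               and not houses_many_recent_immigrants_from_high_tb_areas
               and not employees_otherwise_at_risk)
    return "minimal" if minimal else "nonminimal"

# A single infectious TB case in the past year is enough to make the
# facility nonminimal risk for the annual assessment.
print(facility_tb_risk(1, False, False, False))  # nonminimal
```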
# Screening Methods
# Symptom Screening
Whenever possible, health-care professionals should perform the initial screening. However, correctional officers in jails (particularly those housing minimal numbers of inmates) frequently administer health intake questionnaires. If custody staff members conduct the intake screening, they should receive adequate periodic training in taking a medical history, making necessary observations, and determining the appropriate disposition of inmates with signs or symptoms of possible medical problems. Staff conducting medical intake should receive appropriate counseling and education regarding medical confidentiality.

† Therapy that involves providing the anti-TB drugs directly to the patient and watching as the patient swallows the medications. DOT is the preferred core management strategy for all patients with TB. DOT for LTBI is sometimes referred to as directly observed preventive therapy.
During their initial medical screening, inmates should be asked if they have a history of TB disease or if they have been treated for LTBI or TB disease previously. Documentation of any such history should be obtained from medical records, if possible. Inmates should be observed for the presence of a cough or evidence of significant weight loss. All incoming inmates in any size jail, prison, or other detention facility (e.g., immigration enforcement) should be immediately screened for symptoms of pulmonary TB by being asked if they have had a prolonged cough (i.e., one lasting >3 weeks), hemoptysis (i.e., bloody sputum), or chest pain. The index of suspicion should be high when pulmonary symptoms are accompanied by general, systemic symptoms of TB (e.g., fever, chills, night sweats, easy fatigability, loss of appetite, and weight loss). Inmates should be interviewed systematically (i.e., using a standardized questionnaire) to determine whether they have experienced symptoms in recent weeks. Inmates who have symptoms suggestive of TB disease should immediately receive a thorough medical evaluation, including a TST or QFT-G, a chest radiograph, and, if indicated, sputum examinations.
Persons with symptoms suggestive of TB disease or with a history of inadequate treatment for TB disease should be immediately placed in an AII room § until they have undergone a thorough medical evaluation. If deemed infectious, such persons should remain in isolation until treatment has rendered them noninfectious. Facilities without an on-site AII room should have a written plan for referring patients with suspected or confirmed TB to a facility that is equipped to isolate, evaluate, and treat TB patients.
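A minimal sketch of this intake triage logic follows. The symptom sets and disposition strings are illustrative shorthand for the recommendations above, not a clinical algorithm, and a real screening form would capture much more detail.

```python
PULMONARY_SYMPTOMS = {"cough lasting >3 weeks", "hemoptysis", "chest pain"}
SYSTEMIC_SYMPTOMS = {"fever", "chills", "night sweats", "easy fatigability",
                     "loss of appetite", "weight loss"}

def intake_disposition(symptoms: set, inadequately_treated_tb_history: bool) -> str:
    """Triage an incoming inmate per the screening recommendations above."""
    if symptoms & PULMONARY_SYMPTOMS or inadequately_treated_tb_history:
        # Suspected infectious TB: isolate first, then evaluate.
        return "place in AII room; medical evaluation (TST or QFT-G, chest radiograph, sputum if indicated)"
    if symptoms & SYSTEMIC_SYMPTOMS:
        # Systemic symptoms alone raise the index of suspicion.
        return "prompt medical evaluation with heightened suspicion for TB"
    return "routine screening pathway for the facility's risk category"

print(intake_disposition({"hemoptysis"}, inadequately_treated_tb_history=False))
```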
Symptom screening alone is an unsatisfactory screening mechanism for TB, except in facilities with a minimal risk for TB transmission. The use of symptom screening alone often will fail to detect pulmonary TB in inmates.
# Chest-Radiograph Screening
Screening with chest radiographs can be an effective means of detecting new cases of unsuspected TB disease at intake to a correctional facility. In addition, radiographic screening requires fewer subsequent visits than a TST (i.e., only those inmates with suspicious radiographs or TB symptoms require follow-up). However, such screening will not identify inmates with LTBI. One study demonstrated that screening inmates with a chest radiograph doubled the TB case-finding rate and reduced the time from intake into the correctional facility to isolation substantially compared with TST testing (2.3 days and 7.5 days, respectively), thereby reducing the risk for TB exposure for other inmates and staff (50). Digital radiographs (miniature or full-size) provide enhanced imaging and improved storage and readability. A miniature radiograph can be performed in <1 minute and exposes the patient to approximately one tenth the radiation dose of a conventional radiograph. One cost-effectiveness analysis of miniature chest radiography for TB screening on admission to jail indicated that more cases were detected with this method than either TST or symptom screening, and the cost of radiograph screening was less per case detected (51). The extent to which radiologic screening is used in a given institution should be dictated by multiple factors, including 1) local epidemiologic characteristics of TB disease; 2) inmate length of stay; 3) the ability of the health-care professionals within the facility to conduct careful histories, tuberculin skin or QFT-G testing, and crossmatches with state TB registries; and 4) timeliness of the radiographic study and its reading. Screening with chest radiographs might be appropriate in certain jails and detention facilities that house substantial numbers of inmates for short periods and serve populations at high risk for TB (e.g., those with high prevalence of HIV infection or history of injection-drug use and foreign-born persons from countries in which TB prevalence is high).
Inmates who are infected with HIV might be anergic and consequently might have false-negative TST results. However, routine anergy panel testing is not recommended because it has not been demonstrated to assist in diagnosing or excluding LTBI (52). In facilities that do not perform routine radiographic screening for all inmates, a chest radiograph should be part of the initial screening of HIV-infected patients and those who are at risk for HIV infection but whose status is unknown.
In facilities with on-site radiographic screening, the chest radiograph should be performed as part of intake screening and read promptly by a physician, preferably within 24 hours. Persons who have radiographs suggestive of TB should be isolated immediately and evaluated further. Sputum-smear and culture examinations should be performed for inmates whose chest radiographs are consistent with TB disease and might be indicated at least for certain symptomatic persons, regardless of their TST, QFT-G, or chest radiograph results, because persons with HIV and TB disease might have "negative" chest radiographs in addition to false-negative TST or QFT-G results.

§ Formerly called a negative pressure isolation room, an AII room is a single-occupancy patient-care room used to isolate persons with suspected or confirmed infectious TB disease. Environmental factors are controlled in AII rooms to minimize the transmission of infectious agents that are usually spread from person to person by droplet nuclei associated with coughing or aerosolization of contaminated fluids. AII rooms should provide negative pressure in the room (so clean air flows under the door gap into the room), an air flow rate of 6-12 air changes per hour (ACH), and direct exhaust of air from the room to the outside of the building or recirculation of air through a high efficiency particulate air (HEPA) filter.
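The 6-12 ACH figure in the footnote translates directly into clearance times for airborne contaminants. The sketch below uses the standard perfect-mixing decay model from general CDC ventilation guidance (an assumption on my part, since this report does not state the formula): the time to a given removal fraction is ln(1/(1 - fraction)) divided by the ACH.

```python
import math

def minutes_to_clear(fraction_removed: float, ach: float) -> float:
    """Minutes for room air to reach a given contaminant-removal fraction,
    assuming perfectly mixed air: t = ln(1 / (1 - f)) / ACH hours."""
    return math.log(1 / (1 - fraction_removed)) / ach * 60

# At the 6-12 ACH recommended for AII rooms, removing 99% of airborne
# contaminants takes roughly 46 and 23 minutes, respectively.
print(round(minutes_to_clear(0.99, 6)))   # 46
print(round(minutes_to_clear(0.99, 12)))  # 23
```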
# Mantoux TST Screening
Tuberculin skin testing using 0.1 mL of 5 tuberculin units (TU) of purified protein derivative (PPD) is the most common method of testing for TB infection. Multiple-puncture tests (e.g., the tine test) should not be used to determine whether a person is infected. Persons who have a documented history of a positive TST result (with a millimeter reading), a documented history of TB disease, or a reported history of a severe necrotic reaction to tuberculin should be exempt from a routine TST. For persons with a history of severe necrotic reactions and without a documented positive result with a millimeter reading, a QFT-G may be substituted for the TST. Otherwise, such persons should be screened for symptoms of TB and receive a chest radiograph unless they have had one recently (i.e., within 6 months) and are not symptomatic. Pregnancy, lactation, and previous vaccination with Bacillus Calmette-Guerin (BCG) vaccine are not contraindications for tuberculin skin testing. The TST is not completely sensitive for TB disease; its sensitivity ranges from 75% to 90% (53,54). Despite this limitation, skin testing, along with use of a symptom review, frequently constitutes the most practical approach to screening for TB disease.
A trained health-care professional should place the TST and interpret the reaction 48-72 hours after the injection by measuring the area of induration (i.e., the palpable swelling) at the injection site. The diameter of the indurated area should be measured across the width of the forearm. Erythema (i.e., the redness of the skin) should not be measured. All reactions, even those classified as negative, should be recorded in millimeters of induration.
In the majority of cases, a TST reaction of >10 mm induration is considered a positive result in inmates and correctional facility employees. However, an induration of >5 mm is considered a positive result in the following persons:

- persons infected with HIV;
- recent contacts of a person with TB disease;
- persons with fibrotic changes on a chest radiograph consistent with previous TB; and
- organ-transplant recipients and other immunosuppressed persons.
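These cut points reduce to a simple conditional; the sketch below applies the 5-mm threshold when any listed risk factor is present and the 10-mm threshold otherwise. Whether each threshold is strict (>) or inclusive (≥) is ambiguous in the extracted text, so the inclusive reading conventional in CDC guidance is assumed here.

```python
def tst_positive(induration_mm: float, in_5mm_risk_group: bool) -> bool:
    """Interpret a measured TST induration per the cut points described above."""
    threshold = 5 if in_5mm_risk_group else 10
    return induration_mm >= threshold

# An 8-mm induration is positive for an HIV-infected inmate but negative
# for an employee with no 5-mm risk factors.
print(tst_positive(8, in_5mm_risk_group=True))   # True
print(tst_positive(8, in_5mm_risk_group=False))  # False
```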
The use of two-step testing can reduce the number of positive TSTs that would otherwise be misclassified as recent skin-test conversions during future periodic screenings. Certain persons who were infected with M. tuberculosis years earlier exhibit waning delayed-type hypersensitivity to tuberculin. When they are skin tested years after infection, they might have a false-negative TST result (even though they are truly infected). However, this first skin test years after the infection might stimulate the ability to react to subsequent tests, resulting in a "booster" reaction. When the test is repeated, the reaction might be misinterpreted as a new infection (recent conversion) rather than a boosted reaction. For two-step testing, persons whose baseline TSTs yield a negative result are retested 1-3 weeks after the initial test. If the second test result is negative, they are considered not infected. If the second test result is positive, they are classified as having had previous TB infection. Two-step testing should be considered for the baseline testing of persons who report no history of a recent TST and who will receive repeated TSTs as part of an institutional periodic skin-testing program. In the majority of cases, a two-step TST is not practical in jails because of the short average length of stay of inmates.
In the past, a panel of other common antigens was often applied with the TST to obtain information regarding the competence of the patient's cellular immune system and to identify anergy. More recently, however, anergy testing has been demonstrated to be of limited usefulness because of problems with standardization and reproducibility, the low risk for TB associated with a diagnosis of anergy, and the lack of apparent benefit of preventive therapy for groups of anergic HIV-infected persons. Therefore, the use of anergy testing in conjunction with a TST is no longer recommended routinely for screening programs for M. tuberculosis infection in the United States (52).
Intracutaneous inoculation with BCG is currently used worldwide as a vaccine against TB. BCG is a live attenuated Mycobacterium bovis strain that stimulates the immune system to protect against TB. No reliable method has been developed to distinguish TST reactions caused by vaccination with BCG from those caused by natural mycobacterial infections, although reactions of >20 mm of induration are not likely caused by BCG (55). TST is not contraindicated for persons who have been vaccinated with BCG, and the TST results of such persons are used to support or exclude the diagnosis of M. tuberculosis infection. A diagnosis of M. tuberculosis infection and treatment for LTBI should be considered for any BCGvaccinated person who has a positive TST reaction. The same criteria for interpretation of TST results are used for both BCGvaccinated and nonvaccinated persons (56).
# QuantiFERON®-TB Gold Test
In May 2005, the U.S. Food and Drug Administration (FDA) licensed QFT-G. This in-vitro diagnostic test measures the amount of interferon-gamma produced by cells in whole blood that have been stimulated by mycobacterial peptides. The peptides used in the test mimic proteins known as ESAT-6 and CFP-10, which are present in M. tuberculosis but absent from all BCG strains and from the majority of commonly encountered non-TB mycobacteria. The test is intended for use as a diagnostic tool for M. tuberculosis infection, including both TB disease and LTBI. As with a TST, QFT-G cannot distinguish between LTBI and TB disease and should be used in conjunction with risk assessment, radiography, and other diagnostic evaluations. The advantages of QFT-G compared with TST are that 1) results can be obtained after a single patient visit, 2) the variability associated with skin-test reading can be reduced because "reading" is performed in a qualified laboratory, and 3) QFT-G is not affected by previous BCG vaccination and eliminates the unnecessary treatment of persons with false-positive results. QFT-G does not affect the result of future QFT-G tests (i.e., no "boosting" occurs). Limitations of the test include the need for phlebotomy, the need to process blood specimens within 12 hours of collection for the most recent version of the test, the limited number of laboratories that process the test, and a lack of clinical experience in interpreting test results. The elimination of the second visit for reading the TST, however, is likely to render the QFT-G competitive in cost-benefit considerations.
Although the performance of QFT-G has not been evaluated sufficiently in select populations of interest (e.g., HIVinfected persons), available data indicate that QFT-G is as sensitive as TST for detection of TB disease and more specific than TST for detection of LTBI (57,58). CDC guidelines for QFT-G recommend that QFT-G can be used in place of TST in all circumstances in which TST is currently used (58). This includes initial and periodic TB screening for correctional facility inmates and employees and testing of exposed persons in contact investigations. Because data are insufficient regarding performance of QFT-G in certain clinical situations, as with a negative TST result, a negative QFT-G result alone might not be sufficient to exclude M. tuberculosis infection in these situations. Examples of such clinical scenarios include those involving patients with severe immunosuppression who have had recent exposure to a patient with TB and patients being treated or about to undergo treatment with potent tumor necrosis factor alpha (TNF-α) antagonists.
# Use of Local Health Department TB Registry
Correctional facilities and local health departments should collaborate to ensure effective TB screening in the correctional setting. Inmates might provide inaccurate information on admission for multiple reasons, ranging from forgetfulness and confusion to deliberate misrepresentation. Health departments should perform cross-matches with the local TB registry and search for matches on known aliases, birth dates, maiden names, and other personal information for inmates suspected of having TB infection. A readily accessible record of previous TB history, drug-susceptibility patterns, treatment, and compliance can be useful in determining the disposition of a given patient with suspected TB.
# Initial Screening
The following procedures should be used for the initial screening of inmates and detainees (depending on their length of stay in the facility and the type of facility) and for all correctional facility employees, regardless of the type of facility.
# Inmates in Minimal TB Risk Facilities
Inmates in all minimal TB risk correctional and detention facilities should be evaluated on entry for symptoms of TB. Persons with symptoms of TB should be evaluated immediately to rule out the presence of infectious disease and kept in an AII room until they are evaluated. If the facility does not have an AII room, the inmate should be transported to a facility that has one. In addition, all newly arrived inmates should be evaluated for clinical conditions and other factors that increase the risk for infection or the risk for progressing to TB disease, including the following:
- HIV infection,
- recent immigration,
- history of TB,
- recent close contact with a person with TB disease,
- injection-drug use,
- diabetes mellitus,
- immunosuppressive therapy,
- hematologic malignancy or lymphoma,
- chronic renal failure,
- medical conditions associated with substantial weight loss or malnutrition, or
- history of gastrectomy or jejunoileal bypass.

Persons with any of these conditions require further screening with a TST, a QFT-G, or a chest radiograph within 7 days of arrival. Regardless of the TST or QFT-G result, inmates known to have HIV infection or other severe immunosuppression, and those who are at risk for HIV infection but whose HIV status is unknown, should have a chest radiograph taken as part of the initial screening. Persons who have an abnormal chest radiograph should be further evaluated to rule out TB disease; if TB disease is excluded as a diagnosis, LTBI therapy should be considered if the TST or QFT-G result is positive.
# Inmates in Nonminimal TB Risk Prisons
Immediately on arrival, all new inmates should be screened for symptoms, and any inmate with symptoms suggestive of TB should be placed in an AII room and evaluated promptly for TB disease. If the facility does not have an AII room, the inmate should be transported to a facility that has one. Inmates who have no symptoms require further screening with a TST, a QFT-G, or a chest radiograph within 7 days of arrival. Regardless of their TST or QFT-G status, inmates known to have HIV infection or other severe immunosuppression, and those who are at risk for HIV infection but whose HIV status is unknown, should have a chest radiograph taken as part of the initial screening. Persons who have an abnormal chest radiograph should be further evaluated to rule out TB disease; if TB disease is excluded as a diagnosis, LTBI therapy should be considered if the TST or QFT-G result is positive.
As the rate of TB disease in the United States has decreased, identification and treatment of persons with LTBI who are at high risk for TB disease have become essential components of the TB elimination strategy promoted by ACET (59). Targeted testing using the TST or QFT-G identifies persons at high risk for TB disease who would benefit from treatment for LTBI. Prisons offer an excellent public health opportunity to identify persons at high risk for TB, who can be screened for TB infection and placed on LTBI therapy, if indicated. If the TST is used, a two-step testing procedure should be strongly considered when obtaining a baseline reading. A single-step QFT-G is an adequate baseline. Inmates with a positive test should be evaluated for LTBI therapy after TB disease is excluded.
# Inmates in Nonminimal TB Risk Jails and Other Short-Term Detention Facilities
As in prisons, all new detainees in nonminimal TB risk jails should be screened on entry for symptoms, and any detainee who has symptoms suggestive of TB should be placed immediately in an AII room and evaluated promptly for TB disease. If the facility does not have an AII room, the inmate should be transported promptly to a facility that does have one. Detainees without symptoms require further screening with a TST, a QFT-G, or a chest radiograph within 7 days of arrival. Regardless of the TST or QFT-G result, detainees known to have HIV infection, and those who are at risk for HIV infection but whose HIV status is unknown, should have a chest radiograph taken as part of the initial screening. Persons who have a positive result should be further evaluated to rule out TB disease.
The primary purpose of screening in correctional settings is to detect TB disease. TST or QFT-G screening in jails to initiate LTBI therapy often is not practical because of the high rate of turnover and short lengths of stay. Although not all jail detainees have short lengths of stay, determining which detainees will be in the jail for a long term is difficult. Nationwide, approximately half of persons detained in local jails are released within 48 hours of admission. Thus, even if all detainees can be tested at intake, a large proportion will be unavailable to have their TSTs read or to be evaluated when QFT-G test results are available. Of those still in custody, a substantial percentage will be released before the radiographic and medical evaluation is completed. In a 1996 study, 43% of detainees at a county jail in Illinois who had a positive TST result were released or transferred before their evaluation could be completed (3).
A substantial proportion of detainees who are incarcerated long enough to begin LTBI therapy will be released before completion of treatment. A San Francisco study indicated that approximately 62% of detainees who were started on LTBI treatment were released before completion (40). These data illustrate the challenges of implementing a testing and treatment program for LTBI in jails with highly dynamic detainee populations. Certain jails have adopted a targeted approach of performing TSTs only on new detainees who are at high risk for TB disease (e.g., detainees with known HIV infection). Screening for TB and treating LTBI are most effective within the jail setting if resources dedicated to discharge planning and reliable access to community-based treatment are available. Modest interventions (e.g., education and incentives) in the jail setting can lead to improvements in linking released detainees to postrelease medical care and increase the likelihood that therapy will be completed (60,61).
# Persons in Holding or Booking Facilities
City, county, and other law enforcement authorities frequently have facilities that hold arrestees and detainees for short periods of time, ranging from hours to multiple days. TB symptom screening is recommended for all persons at the time of entry into these facilities. Any detainee who has symptoms suggestive of TB should be immediately isolated and transferred to a facility or hospital in which the detainee can be placed in an AII room and evaluated promptly for TB disease.
# Employees in All Correctional and Detention Facilities
A medical history relating to TB should be obtained from and recorded for all new employees at the time of hiring, and a physical examination for TB disease should be required. The results of the screening and examination should be kept confidential; access should be granted to public health and infection control medical professionals only when necessary. In addition, a TST or QFT-G should be mandatory for all employees who do not have a documented history of a positive result. To improve the accuracy of the baseline result, a two-step TST or a single-step QFT-G should be used for the initial screening of employees who have not been tested during the preceding 12 months. Persons who have a positive TST or QFT-G result should have a chest radiograph taken and interpreted and should be required to have a thorough medical evaluation; if TB disease is excluded as a diagnosis, such persons should be considered for LTBI therapy. All employees should be informed that they should seek appropriate follow-up and testing for TB if they are immunosuppressed for any reason (e.g., have HIV infection). Any employee who has symptoms suggestive of TB should not return to the workplace until a clinician has excluded a diagnosis of infectious TB disease.
# Other Persons Who Might Need to be Screened
Certain persons who are neither inmates nor employees but who visit high-risk facilities on a regular basis also should be considered for screening. These persons might include contractors (e.g., food handlers and service workers), volunteers, and those providing religious ministries. Screening of these persons should follow the same procedures as those outlined for employees.
# Periodic Screening
Long-term inmates and all employees who have a negative TST or QFT-G result should have follow-up testing at least annually. Persons who have a history of a positive test result should be screened for symptoms of TB disease. Annual chest radiographs are unnecessary for the follow-up evaluation of infected persons. Test results should be recorded in medical records and in a retrievable aggregate database of all TST or QFT-G results. Personal identifying information should be kept confidential.
Correctional facilities can use multiple strategies to ensure annual screening of long-term inmates for newly acquired TB infection. Certain institutions schedule annual screening on the inmate's date of birth or on the anniversary of the inmate's most recent test. Other institutions and systems suspend inmate movement and screen the entire population on the same day every year. Methods of screening a subset of the inmate population (e.g., on a monthly basis) are beneficial because they provide an ongoing assessment of M. tuberculosis transmission within the facility.
Results from TST or QFT-G testing should be analyzed periodically to estimate the risk for acquiring new infection in a correctional facility; however, this analysis should be completed by using only the test results of facility employees and inmates who have remained in the facility continually during the interval between testing. The conversion rate equals the number of employees or inmates whose test results have converted from negative to positive (i.e., the numerator) during a specific interval divided by the total number of previously negative employees or inmates who were tested during the same interval (i.e., the denominator). In certain facilities, conducting an analysis of test results for specific areas or groups within the facility might be appropriate.
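For programs that track test results electronically, the conversion-rate calculation described above reduces to a single ratio. This minimal Python sketch (hypothetical function name, illustrative numbers) shows the arithmetic:

```python
def conversion_rate(conversions: int, previously_negative_tested: int) -> float:
    """Conversions (negative to positive) during the interval, divided by all
    previously negative employees or inmates tested during the same interval."""
    if previously_negative_tested == 0:
        raise ValueError("no previously negative persons tested in this interval")
    return conversions / previously_negative_tested

# Example: 4 conversions among 250 previously negative persons tested in one year.
print(f"{conversion_rate(4, 250):.1%}")  # 1.6%
```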
More frequent screening is needed when a conversion rate is substantially higher than previous rates or when other evidence of ongoing transmission is detected. A cluster (i.e., either two or more patients with TB disease that are linked by epidemiologic or genotyping data or two or more TST or QFT-G conversions occurring in the correctional facility among inmates who are epidemiologically linked) or other evidence of person-to-person transmission also warrants additional epidemiologic investigation and possibly a revision of the facility's TB prevention and control protocol.
Facilities in which the risk for infection with M. tuberculosis is minimal might not need to maintain a periodic screening program. However, requiring baseline TST or QFT-G testing of employees would enable medical staff to distinguish between a TST or QFT-G conversion and a positive TST or QFT-G result caused by a previous exposure to M. tuberculosis. A decision to discontinue periodic employee screening should be made in consultation with the local or state health department.
# HIV Counseling, Testing, and Referral
HIV counseling, testing, and referral (CTR) should be routinely recommended for all persons in settings in which the population is at increased behavioral or clinical risk for acquiring or transmitting HIV infection, regardless of setting prevalence (62). Because correctional facilities are considered settings in which the population is at increased risk for acquiring or transmitting HIV, routine HIV CTR is recommended for inmates. Furthermore, HIV infection is the greatest risk factor for progression from LTBI to TB disease (63,64). Therefore, HIV CTR should be routinely offered to all inmates and correctional facility staff with LTBI or TB disease if their HIV infection status is unknown at the time of their LTBI or TB disease diagnosis (64,65). Correctional facilities should be particularly aware of the need for preventing transmission of M. tuberculosis in settings in which persons infected with HIV might be housed or might work (66).
# Use of Data to Refine Policies and Procedures
Correctional and detention facilities are strongly encouraged to collect and analyze data on the effectiveness of their TB screening policies and procedures. Working in conjunction with their state or local TB-control program, correctional and detention facilities should refine their screening policies and procedures as indicated by such data. In the absence of local data that justify revision, correctional and detention facilities should adhere to the screening recommendations detailed above.
# Case Reporting
All states require designated health-care professionals to report suspected and confirmed cases of TB to their local or state health department; this reporting is mandatory for all correctional facilities, whether private, federal, state, or local. Correctional facility medical staff should report any suspected or confirmed TB cases among inmates or employees to the appropriate health agency in accordance with state and local laws and regulations, even if the inmate or detainee has already been released or transferred from the facility. Reporting cases to health departments benefits the correctional facility by allowing it to obtain health department resources for case management and contact investigation in both the facility and the community. For each suspected case of TB, the diagnosis or the exclusion of a diagnosis of TB should be entered immediately into 1) the person's medical record, 2) the retrievable aggregate TB-control database at the facility, and 3) the database at a centralized office if the system has multiple facilities. In addition, drug-susceptibility results should be sent to the state or local health department for use in monitoring the rates of drug resistance in the health department's jurisdiction. Drug-susceptibility reports also should be sent to all health departments managing the infectious person's contacts because the choice of medication for LTBI treatment is based on these drug-susceptibility test results (64). Reports to local or state health departments should identify the agency that has custodial responsibility for the inmate (e.g., county corrections agency, state corrections agency, ICE, Federal Bureau of Prisons [FBOP], and U.S. Marshals Service [USMS]) and the corresponding identification number for that agency (e.g., U.S. alien number, FBOP number, or USMS number). Federal law enforcement agencies frequently contract for bed space with local or private detention facilities. Therefore, custodial authority and corresponding custody identification numbers should be verified with the facility's custody staff; detention facility medical staff might not have this information available.
# Isolation in an Airborne Infection Isolation Room

# Initiation
TB airborne precautions should be initiated for any patient who has signs or symptoms of TB disease or who has documented TB disease and has not completed treatment or not been determined previously to be noninfectious.
# Discontinuation
For patients placed in an AII room because of suspected infectious TB disease of the lungs, airways, or larynx, airborne precautions can be discontinued when infectious TB disease is considered unlikely and either 1) another diagnosis is made that explains the clinical syndrome or 2) the patient has three negative acid-fast bacilli (AFB) sputum-smear results (67,68). The three sputum specimens should be collected 8-24 hours apart (69), and at least one should be an early morning specimen (because respiratory secretions pool overnight). Typically, this will allow patients with negative sputum-smear results to be released from an AII room in 2 days. Incarcerated patients for whom the suspicion of TB disease remains after the collection of three negative AFB sputum-smear results should not be released from airborne precautions until they are on standard multidrug anti-TB treatment and are clinically improving. Because patients with TB disease who have negative AFB sputum-smear results can still be infectious (70), patients with suspected disease who meet the above criteria for release from airborne precautions should not be released to an area in which other patients with immunocompromising conditions are housed.
A patient who has drug-susceptible TB of the lung, airways, or larynx, is on standard multidrug anti-TB treatment, and has had a significant clinical and bacteriologic response to therapy (i.e., reduction in cough, resolution of fever, and progressively decreasing quantity of AFB on smear result) is probably no longer infectious. However, because culture and drug-susceptibility results are not typically known when the decision to discontinue airborne precautions is made, all patients with confirmed TB disease should remain in an AII room while incarcerated until they
- have had three consecutive negative AFB sputum-smear results collected 8-24 hours apart, with at least one being an early morning specimen,
- have received standard multidrug anti-TB treatment, and
- have demonstrated clinical improvement.

Because the consequences of transmission of MDR TB (i.e., TB that is resistant to isoniazid and rifampin) are severe, infection-control practitioners might choose to keep persons with suspected or confirmed MDR TB disease in an AII room until negative sputum-culture results have been documented in addition to negative AFB sputum-smear results.
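The release criteria above can be read as a conjunction of conditions, with an optional stricter requirement for MDR TB. The following Python sketch is a hypothetical illustration of that logic, not a substitute for clinical judgment:

```python
def may_discontinue_aii(negative_smears: int,
                        on_standard_multidrug_treatment: bool,
                        clinically_improving: bool,
                        suspected_or_confirmed_mdr: bool = False,
                        negative_culture_documented: bool = False) -> bool:
    """Apply the criteria listed above for releasing a confirmed TB patient
    from airborne precautions while incarcerated."""
    meets_base_criteria = (negative_smears >= 3
                           and on_standard_multidrug_treatment
                           and clinically_improving)
    if suspected_or_confirmed_mdr:
        # Practitioners might additionally require a documented negative culture.
        return meets_base_criteria and negative_culture_documented
    return meets_base_criteria

print(may_discontinue_aii(3, True, True))                    # True
print(may_discontinue_aii(3, True, True, True))              # False: MDR, no culture yet
```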
# Environmental Controls

# Overview
Guidelines for preventing transmission of M. tuberculosis in health-care settings and for environmental infection control in health-care facilities have been published previously (71,72). These guidelines and this report can be used to educate correctional facility staff regarding use of environmental controls in TB infection-control programs.
Environmental controls should be implemented when the risk for TB transmission persists despite efforts to screen and treat infected inmates. Environmental controls are used to remove or inactivate M. tuberculosis in areas in which the organism could be transmitted. Primary environmental controls consist of controlling the source of infection by using local exhaust ventilation (e.g., hoods, tents, or booths) and diluting and removing contaminated air by using general ventilation. These controls help prevent the spread and reduce the concentration of airborne infectious droplet nuclei (see Glossary). Environmental controls work in conjunction with administrative controls such as isolation of inmates with suspected TB disease detected through screening (see Glossary). Secondary environmental controls consist of controlling the airflow to prevent contamination of air in areas adjacent to the source (AII rooms) and cleaning the air (using a HEPA filter or ultraviolet germicidal irradiation [UVGI]) to increase the number of equivalent ACH.¶ The efficiency of different primary or secondary environmental controls varies; details concerning the application of these controls to prevent transmission of M. tuberculosis in health-care settings have been published previously (71). To be effective, secondary environmental controls should be used and maintained properly, and their strengths and limitations should be recognized. The engineering design and operational efficacy parameters for UVGI as a secondary control measure (i.e., portable UVGI units, upper-room air UVGI, and in-duct UVGI) continue to evolve and require special attention in their design, selection, and maintenance.
Exposure to M. tuberculosis within correctional facilities can be reduced through the effective use of environmental controls at the source of exposure (e.g., an infectious inmate) or in general areas. Source-control techniques can prevent or reduce the spread of infectious droplet nuclei into the air in situations in which the source has been identified and the generation of the contaminant is localized by collecting infectious particles as they are released. Use of these techniques is particularly prudent during procedures that are likely to generate infectious aerosols (e.g., bronchoscopy and sputum induction) and when inmates with infectious TB disease are coughing or sneezing.
Unsuspected and undiagnosed cases of infectious TB disease contribute substantially to disease transmission within correctional facilities (73). When attempting to control this type of transmission, source control is not a feasible option. Instead, general ventilation and air cleaning should be relied on for environmental control. General ventilation can be used to dilute the air and remove air contaminants and to control airflow patterns in AII rooms or other correctional facility settings. Air-cleaning technologies include mechanical air filtration to reduce the concentration of M. tuberculosis droplet nuclei and UVGI to kill or inactivate microorganisms so they no longer pose a risk for infection.
Ventilation systems for correctional facility settings should be designed, and modified when necessary, by ventilation engineers in collaboration with infection-control practitioners and occupational health staff. Recommendations for designing and operating ventilation systems in correctional facilities have been published (48,49,74-76). The multiple types of and conditions for use of ventilation systems in correctional facility settings and the individual needs of these settings preclude provision of extensive guidance in this report.
Incremental improvements in environmental controls (e.g., increasing the removal efficiency of an existing filtration system in any area) are likely to lessen the potential for TB transmission from persons with unsuspected or undiagnosed TB. This information should not be used in place of consultation with experts who can advise on ventilation system and air handling design, selection, installation, and maintenance. Because environmental controls will fail if they are not properly operated and maintained, routine training and education of infection-control and maintenance staff are key components to a successful TB infection-control program.

¶ ACH is the ratio of the volume of air entering the room or booth per hour to the volume of that room or booth. It equals the exhaust airflow (Q) in cubic feet per minute (cfm) divided by the volume of the room or booth (V) in cubic feet (ft3), multiplied by 60 minutes per hour: ACH = (Q / V) x 60.
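As a worked illustration of the footnote's formula (illustrative values only; not a design recommendation):

```python
def air_changes_per_hour(exhaust_cfm: float, room_volume_ft3: float) -> float:
    """ACH = (Q / V) x 60, with Q in cubic feet per minute and V in cubic feet."""
    return exhaust_cfm / room_volume_ft3 * 60

# Example: a 1,200 ft^3 AII cell exhausted at 240 cfm operates at 12 ACH.
print(air_changes_per_hour(240, 1200))  # 12.0
```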
# Airborne Infection Isolation Rooms
Inmates known or suspected of having TB disease should be placed in an AII room or AII cell that meets the design and operational criteria for airborne infection isolation described previously (71). Inmates deemed infectious should remain in isolation until treatment or further evaluation has ensured that they are noninfectious. Facilities without an on-site AII room should have a written plan for referring patients with suspected or confirmed TB to a facility that is equipped to isolate, evaluate, and treat TB patients.
New or renovated facilities should ensure that a sufficient number of AII rooms are available consistent with the facility risk assessment. Under rare circumstances, if an AII room is not available and the immediate transfer of the inmate with suspected infectious TB is not possible, the inmate should be housed temporarily in a room that has been modified to prevent the escape of infectious aerosols outside the TB holding area. The heating, ventilating, and air-conditioning (HVAC) system in this temporary TB holding area might have to be manipulated or augmented with auxiliary exhaust fans to create an inward flow of air that reduces the potential escape of infectious aerosols. If possible, air from these areas should be exhausted directly to the outdoors. If this is not feasible, the highest filtration efficiency compatible with the installed HVAC system should be used. Because TB droplet nuclei are approximately 1-5 micrometers in size, filtration efficiency should be evaluated for particles in that size range. Filter selection based on the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 52.2 Minimum Efficiency Reporting Value (MERV)-rating efficiency tables can help in this evaluation (77). Secondary air cleaning techniques (portable air cleaners and UVGI) also can be used in these areas to increase effective air cleaning.
# Local Exhaust Ventilation
Aerosol-producing procedures should be performed in an area with a type of local exhaust ventilation that captures and removes airborne contaminants at or near their source without exposing persons in the area to infectious agents. Local exhaust devices typically use hoods. Two types of hoods are used: enclosing devices, in which the hood either partially or fully encloses the infectious source, and exterior devices, in which the infectious source is near but outside the hood. Fully enclosed hoods, booths, or tents are always preferable to exterior devices because of their superior ability to prevent contaminants from escaping.
Enclosing devices should have sufficient airflow to remove >99% of airborne particles during the interval between the departure of one patient and the arrival of the next. The time required to remove a given percentage of airborne particles from an enclosed space depends on 1) the ACH number, 2) the location of the ventilation inlet and outlet, and 3) the physical configuration of the room or booth. The time interval required to ensure the proper level of airborne contaminant removal from enclosing devices varies according to ACH (Table 1). For example, if an enclosing device operates at six ACH, and the air inlet and exhaust locations allow for good air mixing, approximately 46 minutes would be required to remove 99% of the contaminated air after the aerosol-producing procedure has ended. Similarly, an additional 23 minutes (total time: 69 minutes) would be required to increase the removal efficiency to 99.9%. Doubling the ventilation rate decreases the waiting time by half.

* Values apply to a room or enclosure in which 1) the generation of aerosols has ceased (e.g., the infectious inmate is no longer present in the room) or 2) the aerosol procedure has been completed, and the room or booth is no longer occupied. The times provided assume perfect mixing of the air in the space; removal times will be longer in rooms or areas with imperfect mixing or air stagnation. Caution should be exercised in applying the table to such situations, and expertise from a qualified engineer or industrial hygienist should be obtained.
† Minutes required for removal of airborne contaminants from the time that generation of infectious droplet nuclei has ceased.
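The waiting times cited above (and in Table 1) follow from a standard dilution model that assumes perfect air mixing. This Python sketch reproduces the 46- and 69-minute figures for a booth operating at six ACH; it is illustrative only and does not replace the cautions in the table footnotes:

```python
import math

def removal_time_minutes(ach: float, removal_fraction: float) -> float:
    """Minutes to remove a given fraction of airborne contaminants under
    perfect mixing: t = (60 / ACH) * ln(1 / (1 - fraction))."""
    return 60 / ach * math.log(1 / (1 - removal_fraction))

print(round(removal_time_minutes(6, 0.99)))   # 46 minutes for 99% removal
print(round(removal_time_minutes(6, 0.999)))  # 69 minutes for 99.9% removal
print(round(removal_time_minutes(12, 0.99)))  # 23 minutes: doubling ACH halves the wait
```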
# General Ventilation
General ventilation is used to 1) dilute and remove contaminated air, 2) control the direction of airflow in a correctional facility setting, and 3) control airflow patterns in rooms. Recommended ventilation rates for correctional facility settings are typically expressed in ACH. Ventilation recommendations for selected areas in new or renovated correctional facility settings should be followed (Table 2). The feasibility of achieving a specific ventilation rate depends on the construction and operational requirements of the ventilation system and might differ for retrofitted and newly constructed facilities. The expense and effort of achieving a high ventilation rate might be reasonable for new construction but not be as feasible when retrofitting an existing setting.
Ventilation design guidance for correctional facilities and related areas has been published (78). This design guidance includes specific ventilation recommendations regarding total ventilation, filtration efficiency, and environmental design parameters. For minimum outdoor air supply recommendations, the guidance refers to ASHRAE Standard 62, Ventilation for Acceptable Indoor Air Quality. In 2004, ASHRAE revised and renumbered this standard to ANSI/ASHRAE Standard 62.1 (74). For areas within correctional facilities that are not intended to contain persons with infectious TB, the recommended minimum outdoor air supply rates should meet or exceed those recommended in ANSI/ASHRAE Standard 62.1-2004 (74). When risk analysis reveals an enhanced potential for undiagnosed cases of infectious TB, facility designers and owners may consider using higher supply rates of outdoor air (e.g., those recommended for areas within health-care facilities anticipated to contain infectious patients). Minimum outdoor air supply recommendations for health-care facilities have been published (71,79). Because correctional areas frequently will not have an exact equivalent area within the health-care environment, the designer or owner should identify an analogous health-care area from which to choose the outdoor air supply recommendation. This selection should be made on the basis of occupant risk factors for TB, occupant activities, and occupant density within the area. For example, the intake, holding, and processing area of a higher risk correctional facility might be considered analogous to the emergency waiting room area in a health-care facility. In that case, the recommended outdoor air supply would be at least two ACH.
The direction of air movement relative to adjacent areas is necessary for the containment of contaminated air. Air within a correctional facility should flow to minimize exposure of others within the building (Table 2). For example, air inside an AII room or cell should flow from the corridor and air-supply grille across the worker, then across that patient, and finally out of the room. To ensure that air is flowing from the corridor into an AII room or cell, smoke testing should be performed daily, even if the AII room or cell is equipped with a pressure-sensing device. Air flow (supply air and exhaust air) should be measured at least annually and compared with the designed air flow rates to ensure that optimal directional air flow and air exchange rates are being maintained (Table 2).

* [...] facilities (e.g., using health-care criteria for emergency waiting rooms for correctional intake, holding, or processing areas) can also be applied.
† Single-pass ventilation that safely exhausts all air to the outdoors is the most protective ventilation design approach and should be incorporated within areas likely to contain infectious aerosols. For general population areas in which persons with unsuspected or undiagnosed infectious tuberculosis (TB) disease might be present, single-pass ventilation should be considered where and when environmental conditions are compatible. When direct exhaust to the outdoors is not feasible, the highest filtration efficiency that is compatible with the installed heating, ventilating, and air-conditioning system should be used. Supplemental methods (e.g., ultraviolet germicidal irradiation or portable air cleaners) may be combined with mechanical filtration in areas that do not have single-pass ventilation to increase effective air cleaning.
§ Anteroom pressurization should be designed to minimize cross-contamination between patient areas and surrounding areas and should comply with local fire smoke management regulations.
¶ This determination should be made on the basis of the risk assessment conducted at each facility, with consideration given to the compatibility with a single-pass ventilation design. Exhausting all air from kitchens and laundry rooms to the outdoors is recommended for contaminant (not TB) and odor control.
# Air Cleaning Methods
Detailed information has been published regarding the selection, design, maintenance, and safety considerations associated with air cleaning methods (i.e., filtration and UVGI) (71). Designers and end users should consult this information. Air removed from areas likely to contain infectious aerosols (e.g., AII cells, sputum collection and other procedure rooms, and intake areas) should be exhausted directly to the outdoors to ensure that it cannot immediately reenter the building or pose a hazard to persons outside, in accordance with applicable federal, state, and local regulations. If discharging air to the outside is not feasible, HEPA filters should be used to clean the air before returning to the general ventilation system. Such recirculation is acceptable only if the air is recirculated back into the same general area from which it originated.
For general population areas in which infectious aerosols are not anticipated but might be present (from persons with undiagnosed TB disease), total exhaust ventilation should be considered where and when the outdoor environmental conditions (temperature and humidity) are compatible with a single-pass system without undue energy or equipment costs. When recirculating air from these areas, the minimum ASHRAE-recommended level of filtration is a MERV-8 filter (78). However, CDC encourages selection and use of filters with higher MERV ratings to provide an incremental improvement in the protection afforded by this mechanism. The filtration system should be designed to prevent filter by-pass and to allow filter leakage testing and safe filter changes. A combination of air cleaning methods (e.g., MERV-rated filters and supplemental UVGI) may be used to increase effective air cleaning.
When used, UVGI should be applied in-duct (i.e., inside the ductwork of existing HVAC systems) or in the upper room of the area to be treated to ensure that organisms are inactivated. Upper-air systems should be designed, installed, and monitored to ensure both sufficient irradiation in the upper room to inactivate M. tuberculosis and safe levels of UVGI in the occupied space.
# Environmental Control Maintenance
To be most effective, environmental controls should be installed, operated, and maintained correctly. Ongoing maintenance should be part of any written TB infection-control plan. The plan should outline the responsibility and authority for maintenance and address staff training needs.
Failure to maintain environmental control systems properly has adversely impacted TB control and prevention efforts at facilities throughout the United States. At one hospital, improperly functioning ventilation controls were believed to be a factor in the transmission of MDR TB disease to four persons (three patients and a correctional officer), three of whom died (80). In three other multihospital studies evaluating the performance of AII rooms, failure to routinely monitor air-pressure differentials (whether manually or through use of continuous monitoring devices) resulted in a substantial percentage of the rooms being under positive pressure (81-84).
Correctional facilities should schedule routine preventive maintenance that covers all components of the ventilation systems (e.g., fans, filters, ducts, supply diffusers, and exhaust grilles) and any air-cleaning devices in use. Performance monitoring should be conducted to verify that environmental controls are operating as designed. Performance monitoring should include 1) directional airflow assessments using smoke tubes and use of pressure monitoring devices sensitive to pressures at 0.001 inch of water gauge and 2) measurement of supply and exhaust airflows to compare with recommended air change rates for the respective areas of the facility. Records should be kept to document all preventive maintenance and repairs.
Standard procedures should be established to ensure that 1) maintenance staff notify infection-control personnel before performing maintenance on ventilation systems servicing inmate-care areas and 2) infection-control staff request assistance from maintenance personnel in checking the operational status of AII cells and local exhaust devices (e.g., booths, hoods, and tents) before use. A protocol that is well written and followed will help to prevent unnecessary exposures of correctional facility staff and inmates to infectious aerosols. Proper labeling of ventilation system components (e.g., ducts, fans, and filters) will help identify air-flow paths. Clearly labeling which fan services a given area will help prevent accidental shutdowns (85). In addition, provisions should be made for emergency power to avoid interruptions in the performance of essential environmental controls during a power failure.
# Respiratory Protection
# Considerations for Selection of Respirators
Respiratory protection is used when administrative (i.e., identification and isolation of infectious TB patients) and environmental controls alone have not reduced the risk for infection with M. tuberculosis to an acceptable level. The use of respiratory protection is most appropriate in specific settings and situations within correctional facilities. For example, protection is warranted for inmates and facility staff when they enter AII rooms, transport infectious inmates, and participate in cough-inducing procedures.
Respirators should be selected from those approved by CDC/National Institute for Occupational Safety and Health (NIOSH) under the provisions of Title 42, Part 84 of the Code of Federal Regulations (86). Decisions regarding which respirator is appropriate for a particular situation and setting should be made on the basis of a risk assessment of the likelihood for TB transmission. For correctional facilities, a CDC/NIOSH-approved N95 air-purifying respirator will provide adequate respiratory protection in the majority of situations that require the use of respirators. If a higher level of respiratory protection is warranted, additional information on other classes of air-purifying respirators and powered air-purifying respirators (PAPRs) is available (71). The overall effectiveness of respiratory protection is affected by 1) the level of respiratory protection selected (i.e., the assigned protection factor), 2) the fitting characteristics of the respirator model, 3) the care taken in donning the respirator, and 4) the effectiveness of the respiratory protection program, including fit testing and worker training.
# Implementing a Respiratory Protection Program
All facilities should develop, implement, and maintain a respiratory-protection program for health-care workers or other staff who use respiratory protection. Respiratory-protection programs are required for facilities covered by the U.S. Occupational Safety and Health Administration (OSHA) (71,87-89). The key elements of a respiratory protection program include 1) assignment of responsibility, 2) training, and 3) fit testing (71,87,90,91). All correctional facility staff who use respirators for protection against infection with M. tuberculosis must participate in the facility's respiratory protection program (e.g., understand their responsibilities, receive training, receive medical clearance, and engage in fit testing) (71). In addition to staff members, visitors to inmates with TB disease should be offered respirators to wear while in AII rooms and instructed on proper use. Certain regular visitors (e.g., law enforcement officials, social workers, ministers and other religious representatives, and attorneys and other legal staff) might be present in an occupational capacity. Each facility, regardless of TB risk classification (i.e., minimal or nonminimal), should develop a policy on the use of respirators by visitors of patients.
# Precautions for Transporting Patients Between Correctional or Detention Facilities
Recommended precautions to take when transporting patients between facilities have been published (71). Patients with suspected or confirmed infectious TB disease should be transported in an ambulance whenever possible. The ambulance ventilation system should be operated in the nonrecirculating mode, and the maximum amount of outdoor air should be provided to facilitate dilution. If the vehicle has a rear exhaust fan, it should be used during transport. If the vehicle is equipped with a supplemental recirculating ventilation unit that passes air through HEPA filters before returning it to the vehicle, this unit should be used to increase the number of ACH. Airflow should be from the cab (i.e., front of vehicle) over the patient and out the rear exhaust fan. If an ambulance is not used, the ventilation system for the vehicle should bring in as much outdoor air as possible, and the system should be set to nonrecirculating. If possible, the cab should be physically isolated from the rest of the vehicle, and the patient should be placed in the rear seat. Drivers or other persons who are transporting patients with suspected or confirmed infectious TB disease in an enclosed vehicle should wear at least an N95 disposable respirator. If the patient has signs or symptoms of infectious TB disease (i.e., positive AFB sputum-smear result), consideration might be given to having the patient wear a surgical or procedure mask, if possible, during transport, in waiting areas, or when others are present.

Surgical masks should never be worn in place of a respirator. Surgical masks often fit so poorly that they provide only minimal protection from any airborne hazard, including M. tuberculosis. Surgical masks are designed to protect others from the wearer; they are not designed or tested to provide respiratory protection to the wearer.
# Diagnosis and Treatment of Latent Tuberculosis Infection and Tuberculosis Disease
The principles of diagnosis and treatment of LTBI and TB disease discussed in this section are guidelines and not meant to substitute for clinical experience and judgment. Medical providers not familiar with the management of LTBI and TB disease should consult a person with expertise. All facilities' local operations procedures should include plans for consultation with and referral to persons with expertise in TB and should include criteria delineating when consultation and referral are indicated.
Although the index of suspicion for TB disease varies by individual risk factors and the prevalence of TB in the population served by the correctional facility, correctional facilities typically are considered higher-risk settings (see Screening).
A diagnosis of TB disease should be considered for any patient who has a persistent cough (i.e., one lasting >3 weeks) or other signs or symptoms compatible with TB disease (e.g., hemoptysis, night sweats, weight loss, anorexia, and fever). Diagnostic tests for TB include the TST, QFT-G, chest radiography, and laboratory examination of sputum samples or other body tissues and fluids. Persons exposed to inmates with TB disease might become latently infected with M. tuberculosis depending on host immunity and the degree and duration of exposure. Therefore, the treatment of persons with TB disease plays a key role in TB control by stopping transmission and preventing potentially infectious cases from occurring (92). LTBI is an asymptomatic condition that can be diagnosed by the TST or QFT-G.
# Interpreting TST Results
A baseline screening TST result of >10 mm induration is considered positive for the majority of correctional facility staff and inmates, and these persons should be referred for medical and diagnostic evaluation. However, for correctional facility staff and inmates who have had a known exposure in a correctional facility (i.e., close contact with an inmate or staff member with infectious TB disease) after having a previous (baseline) TST value of 0 mm, TST results of >5 mm should be considered positive and interpreted as a new infection. Correctional facility staff and inmates with a screening baseline TST result of >1 mm but <10 mm should be considered to have a new infection if the induration increases by >10 mm on retest (Table 3). For example, a baseline TST result with 8 mm induration and a repeat TST result 1 year later with 18 mm induration would indicate a new infection. However, a repeat TST result with 12 mm induration would not indicate a new infection.
When decisions are made for the diagnosis and treatment of LTBI and choosing the cut-off value for a positive reaction, certain risk factors (e.g., immunocompromising conditions and known contact with a TB patient) should be assessed. Correctional facility staff and inmates who have TST indurations of 5-9 mm should be advised that their results might be an indication for treatment under certain conditions.
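The serial-testing rules above can be summarized programmatically. This Python sketch is a hypothetical illustration of those cutoffs (using the two worked examples from the text); actual interpretation should follow Table 3, the assessed risk factors, and clinical judgment:

```python
def indicates_new_infection(baseline_mm: int, retest_mm: int,
                            known_exposure: bool = False) -> bool:
    """Apply the serial TST interpretation rules described above."""
    if known_exposure and baseline_mm == 0:
        # After a known exposure with a 0 mm baseline, >=5 mm is a new infection.
        return retest_mm >= 5
    if 1 <= baseline_mm < 10:
        # Otherwise, an increase of >=10 mm over a 1-9 mm baseline is a conversion.
        return retest_mm - baseline_mm >= 10
    return False

print(indicates_new_infection(8, 18))  # True: induration increased by 10 mm
print(indicates_new_infection(8, 12))  # False: induration increased by only 4 mm
```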
# Special Considerations in Interpreting the TST
Interpretation of the TST might be complicated by previous vaccination with BCG, anergy, and the "boosting" effect. Detailed recommendations describing how the TST should be interpreted in relation to these possible confounders have been published (64,93).
# Correctional Staff and Inmates Who Refuse Testing for M. tuberculosis Infection
A correctional facility staff member or inmate who refuses testing for M. tuberculosis infection should first be educated regarding the importance of routine screening of correctional facility staff and inmates. If the person continues to refuse a TST, the QFT-G test may be offered as an alternative (and vice versa). The decision to offer an alternative test depends on the reason for refusal and should be consistent with the patient's underlying wishes (e.g., offering QFT-G in place of TST is acceptable if the patient objects to the injection of a substance but agrees to having blood drawn).
# Interpreting the QuantiFERON®-TB Gold Test Data
Interpretation of QFT-G data is performed electronically; an approved interpretation method is applied automatically by the software supplied by the manufacturer (Table 4) (58). A complete description of the test's interpretation is included in the product insert.
Persons who have a positive QFT-G result should be referred for a medical and diagnostic evaluation. On serial testing, a person with QFT-G results changing from negative to positive should be referred for medical and diagnostic evaluation and considered to be a QFT-G converter. Risk factors (e.g., the facility's prevalence of TB disease and personal risk factors) should be assessed when making decisions about the diagnosis and treatment of LTBI.
# Interpreting Chest Radiographs
# Persons with Suspected Pulmonary TB
Multiple types of abnormalities demonstrated on chest radiographs are strongly suggestive of pulmonary TB disease, including upper-lobe infiltration, cavitation, and pleural effusion. Infiltrates can be patchy or nodular and observed in the apical or subapical posterior upper lobes or superior segment of the lower lobes. If radiographic or clinical findings are consistent with TB disease, further studies (e.g., medical evaluation, mycobacteriologic examinations of sputa or tissue, and comparison of current and prior chest radiographs) should be performed (65). Persons with TB pleural effusions might have concurrent unsuspected pulmonary or laryngeal TB disease (94). These patients should be considered infectious until pulmonary and laryngeal TB disease is excluded. Patients with suspected extrapulmonary TB disease also should be suspected of having pulmonary TB until concomitant pulmonary disease is excluded.
The radiographic presentation of pulmonary TB in HIV-infected persons might be atypical. Apical cavitary disease is less common among such patients than among HIV-negative patients. More common findings among HIV-infected persons are infiltrates in any lung zone, mediastinal or hilar adenopathy, or, in rare cases, a normal chest radiograph (65,95-97).
# Persons with LTBI
To exclude pulmonary TB disease, a chest radiograph is indicated for all persons in whom LTBI is diagnosed. If chest radiographs do not indicate pulmonary TB, and no symptoms consistent with TB disease are present, persons with positive test results for TB infection should be considered for treatment for LTBI. Persons with LTBI typically have normal chest radiographs, although they might have abnormalities suggestive of previous TB disease or other pulmonary conditions. In certain patients with TB symptoms, pulmonary infiltrates might be apparent on chest computed tomography scan or magnetic resonance imaging study but not on chest radiograph. Previous, healed TB disease typically produces radiographic findings that differ from those associated with current TB disease. These findings include nodules, fibrotic scars, calcified granulomas, and apical pleural thickening. Nevertheless, a chest radiograph by itself cannot be used to distinguish between current and healed TB. Nodules and fibrotic scars might contain slowly multiplying tubercle bacilli and pose substantial risk for progression to TB disease. Calcified nodular lesions (i.e., calcified granulomas) and apical pleural thickening indicate lower risk for progression to TB disease (65).
# Pregnant Women
Because TB disease is dangerous to both the mother and the fetus, a pregnant woman who has a positive TST or QFT-G result or who is suspected of having TB disease should receive a chest radiograph (with shielding consistent with safety guidelines) as soon as feasible. If symptoms or other high-risk conditions (e.g., HIV infection) are identified, a chest radiograph might have to be performed during the first trimester of pregnancy (64,65,98).
# Evaluation of Sputum Samples
Sputum examination is a key diagnostic procedure for pulmonary TB disease (93) and is indicated for the following inmates and correctional facility staff:
- persons suspected of having pulmonary TB disease because of a chest radiograph consistent with TB disease, particularly those with any respiratory symptoms suggestive of TB disease;
# Specimen Collection
Persons requiring smear and culture examination of sputum should submit at least three sputum specimens (collected 8-24 hours apart, with at least one specimen collected in the early morning) (71,99). Specimens should be collected in a sputum induction booth or in an AII room. In resource-limited settings without environmental containment, collection is safer when performed outdoors. Patients should be instructed how to produce an adequate sputum specimen, and a health-care professional should supervise and observe the collection of sputum, if possible (93). For patients who are unable to produce an adequate sputum specimen, expectoration might be induced by inhalation of an aerosol of warm, hypertonic saline (71).
# Laboratory Examination
Detection of AFB in stained smears by microscopy can provide the first mycobacteriologic indication of TB disease. A positive result for AFB in a sputum smear is predictive of increased infectiousness; however, negative AFB sputum-smear results do not exclude a diagnosis of TB disease if clinical suspicion is high. In 2002, only 63% of U.S. patients with reported positive sputum cultures had positive AFB sputum smears (100).
Although smears allow for the detection of mycobacteria, definitive identification, strain typing, and drug-susceptibility testing of M. tuberculosis can be performed only via culture (93). A culture of sputum or other clinical specimen that contains M. tuberculosis provides a definitive diagnosis of TB disease. In the majority of cases, identification of M. tuberculosis and drug-susceptibility results are available within 28 days using recommended rapid methods (e.g., liquid culture and DNA probes). A negative culture result is obtained in approximately 14% of patients with confirmed pulmonary TB disease (100). Testing sputum with certain techniques (e.g., nucleic acid amplification [NAA]) facilitates the rapid detection and identification of M. tuberculosis, but should not replace culture and drug-susceptibility testing in patients with suspected TB disease (88,101,102). Recommendations for use and interpretation of NAA tests in the diagnosis of TB disease have been published previously (101,102).
Laboratories should report positive smear results within 24 hours of collection and positive cultures within 24 hours of the notation of the positive culture. Drug-susceptibility tests should be performed on initial isolates from all patients to assist in the identification of an effective anti-TB regimen. Drug-susceptibility tests should be repeated if 1) sputum specimens continue to be culture-positive 3 months after initiation of treatment or 2) cultures that had converted to negative subsequently revert to positive (65,93).
# Treatment for LTBI
Treatment for LTBI is essential to controlling and eliminating TB disease in the United States because it substantially reduces the risk that TB infection will progress to TB disease (23). Certain persons are at high risk for developing TB disease once infected, and every effort should be made to begin these persons on a standard LTBI treatment regimen and to ensure that they complete the entire course of treatment for LTBI. Before treatment for LTBI is started, TB disease should be ruled out by history, medical examination, chest radiography, and when indicated, mycobacteriologic studies.
# Candidates for Treatment of LTBI
Correctional facility staff and inmates in the following high-risk groups should be given treatment for LTBI if their reaction to the TST is >5 mm, regardless of age (64,65):
- HIV-infected persons,
- recent contacts of a TB patient,
- persons with fibrotic changes on chest radiograph consistent with previous TB disease, and
- patients with organ transplants and other immunocompromising conditions who receive the equivalent of >15 mg/day of prednisone for >1 month.

All other correctional facility staff and inmates should be considered for treatment of LTBI if their TST results are >10 mm induration. If QFT-G is used, any correctional facility staff member or inmate with a positive QFT-G result should be considered for LTBI treatment. Decisions regarding initiation of LTBI treatment should include consideration of the likelihood of the patient continuing and completing LTBI treatment under supervision if released from the facility before the treatment regimen is completed.
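The two cutoffs above reduce to a simple rule. This hypothetical Python sketch illustrates the cutoff selection only; candidacy decisions also depend on the clinical factors discussed in this section:

```python
def tst_cutoff_mm(high_risk_group: bool) -> int:
    """5 mm cutoff for the high-risk groups listed above; 10 mm for all others."""
    return 5 if high_risk_group else 10

# Example: a recent contact of a TB patient with 7 mm induration meets the
# 5 mm cutoff and should be evaluated for LTBI treatment once disease is excluded.
print(7 >= tst_cutoff_mm(high_risk_group=True))  # True
```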
Persons with previously positive TST results who have previously completed treatment for LTBI (i.e., >6 months of isoniazid, 4 months of rifampin, or another regimen) do not need to be treated again unless concern exists that reinfection has occurred. Other persons who might be poor candidates for treatment of LTBI include those with a previous history of liver injury or a history of excessive alcohol consumption; active hepatitis and end-stage liver disease are relative contraindications to the use of isoniazid or pyrazinamide for treatment of LTBI (64,103). If the decision is made to treat such patients, baseline and follow-up monitoring of serum aminotransferases is recommended.
# Treatment Regimens for LTBI
Standard regimens have been developed for the treatment of LTBI (Table 5). The preferred treatment for LTBI is 9 months of isoniazid, dosed either daily or twice weekly with the latter administered by DOT. Although these regimens are broadly applicable, modifications should be considered for certain populations (e.g., patients with HIV infection) and when drug resistance is suspected.
Reports of severe liver injury and death associated with the combination of rifampin and pyrazinamide for treatment of LTBI prompted ATS and CDC to revise previous recommendations. These recommendations now state that this regimen typically should not be offered for the treatment of LTBI (64,103-107). If the potential benefits substantially outweigh the demonstrated risk for severe liver injury and death associated with this regimen, and the patient has no contraindications, this regimen may be considered; a physician with experience treating LTBI and TB disease should be consulted before use of this regimen (103). Clinicians should continue the appropriate use of rifampin and pyrazinamide in standard multidrug anti-TB regimens for the treatment of TB disease (65).
For all LTBI treatment regimens, nonadherence to intermittent dosing results in a larger proportion of total doses missed than daily dosing; therefore, all patients on intermittent treatment should receive DOT. In addition, DOT should be used with daily dosing of LTBI treatment whenever feasible. Patients with the highest priority for DOT are those at the highest risk for progression from LTBI to TB disease, including persons with HIV infection and persons who are recent contacts of infectious patients with pulmonary TB.
# Contacts of Patients with Drug-Susceptible TB Disease
Contacts of patients with drug-susceptible TB disease who once tested negative but subsequently have a positive TST result (i.e., >5 mm) should be evaluated for treatment of LTBI. The majority of persons who are infected will have a positive TST result within 6 weeks of exposure; therefore, contacts of patients with drug-susceptible TB disease who have initial negative TSTs should be retested 8-10 weeks after the end of exposure to a patient with suspected or confirmed TB disease (108). Persons with TB infection should be advised that they can be re-infected with M. tuberculosis if re-exposed (109-111). If they have not been treated previously, HIV-infected persons (regardless of TST result or previous LTBI treatment history), persons receiving immunosuppressive therapy (regardless of TST result or previous LTBI treatment history), and persons with a positive TST result known to predate the current exposure also should be considered for LTBI treatment.
Treatment of LTBI should not be started until a diagnosis of TB disease has been excluded. If the presence of TB disease is uncertain because of an equivocal chest radiograph, a standard multidrug anti-TB therapy might be started and adjusted as necessary, depending on the results of sputum cultures, drugsusceptibility tests, and clinical response (65). If cultures are obtained without initiating therapy for TB disease, treatment for LTBI should not be initiated until all cultures are reported as negative, which might take 6-8 weeks.
# Contacts of Patients with Drug-Resistant TB Disease
Treatment for LTBI caused by drug-resistant M. tuberculosis organisms is complex and should be conducted in consultation with the local health department's TB control program and persons with expertise in the medical management of drugresistant TB. Often this will require waiting for results of susceptibility testing of the isolate from the presumed source patient. Treatment should be guided by in vitro susceptibility test results from the isolate to which the patient was exposed (65,112,113).
# Pretreatment Evaluation and Monitoring of Treatment
Routine laboratory monitoring during treatment of LTBI is indicated only for patients with abnormal baseline tests and for persons at risk for hepatic disease. Baseline laboratory testing is indicated only for persons infected with HIV, pregnant women, women in the immediate postpartum period (typically within 3 months of delivery), persons with a history of liver disease, persons who use alcohol regularly, and persons who have or who are at risk for chronic liver disease (64).
All patients should undergo clinical monitoring at least monthly. This monitoring should include 1) a brief clinical assessment regarding the signs of hepatitis (i.e., nausea, vomiting, abdominal pain, jaundice, and yellow or brown urine) and 2) education about the adverse effects of the drug(s) and the need for prompt cessation of treatment and clinical evaluation should adverse effects occur. All aspects of the clinical encounter should be conducted in private and in the patient's primary language.
Severe adverse events associated with the administration of tuberculin antigen or treatment of LTBI or TB disease (e.g., those resulting in hospitalization or death) should be reported to MedWatch, FDA's Safety Information and Adverse Event Reporting Program, at telephone 800-FDA-1088, by facsimile at 800-FDA-0178, or via the Internet by sending Report Form 3500 (available at /3500.pdf). Instructions regarding the types of adverse events that should be reported are included on MedWatch report forms. In addition, severe adverse effects associated with LTBI treatment should be reported to CDC's Division of Tuberculosis Elimination at telephone 404-639-8118.
# Treatment for TB Disease
A decision to initiate treatment (i.e., combination anti-TB chemotherapy) should be made on the basis of epidemiologic information; clinical, pathological, and radiographic findings; and the results of microscopic examination of AFB-stained sputum smears and cultures for mycobacteria. A positive AFB-smear result provides strong inferential evidence for the diagnosis of TB, and combination chemotherapy should be initiated promptly unless other strong evidence against the diagnosis of TB disease is present (e.g., a negative NAA test). If the diagnosis is confirmed by isolation of M. tuberculosis or a positive NAA test, treatment should be continued until a standard course of therapy is completed. Because as many as 50% of patients with positive sputum culture results for M. tuberculosis will have negative sputum AFB-smear results (93), when initial AFB-smear results are negative, empiric therapy for TB is indicated if the clinical suspicion for TB disease is high. Regardless of the decision to begin anti-TB treatment, diagnoses other than TB should be considered and appropriate evaluations undertaken in patients with negative AFB-smear results.

A diagnosis of culture-negative pulmonary TB can be made if sputum cultures are negative, the TST result is positive (in this circumstance, a reaction of >5 mm induration is considered positive), a clinical or radiographic response is observed 2 months after the initiation of therapy, and no other diagnosis has been established. An adequate regimen for culture-negative pulmonary TB includes an additional 2 months of isoniazid and rifampin to complete 4 months of treatment (65). If no clinical or radiographic response is observed by 2 months, treatment can be stopped, and other diagnoses (including inactive TB) should be considered. If AFB-smear results are negative, and suspicion for TB disease is low, treatment can be deferred until the results of mycobacterial cultures are known and a comparison chest radiograph is available (typically at 2 months). Among persons who have not begun treatment and in whom suspicion of TB is low, treatment of LTBI should be considered if 1) cultures are negative, 2) the TST result is positive (>5 mm induration), and 3) the chest radiograph is unchanged after 2 months. A person with TB expertise should be consulted for unusual or complex situations.
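The culture-negative pulmonary TB criteria above are conjunctive; all four conditions must hold. The following Python sketch is illustrative only (hypothetical function name) and is not a diagnostic instrument:

```python
def culture_negative_pulmonary_tb(cultures_negative: bool,
                                  tst_positive_at_5mm: bool,
                                  response_at_2_months: bool,
                                  other_diagnosis_established: bool) -> bool:
    """Apply the four criteria described above for culture-negative pulmonary TB."""
    return (cultures_negative
            and tst_positive_at_5mm         # >=5 mm induration in this circumstance
            and response_at_2_months        # clinical or radiographic response to therapy
            and not other_diagnosis_established)

print(culture_negative_pulmonary_tb(True, True, True, False))  # True
```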
Individualized case management should be provided for all patients with TB disease (114-116). In addition, patient management should be coordinated with officials of the local or state health department; suspected or confirmed TB cases should be reported to the local or state health department in accordance with applicable laws and regulations. Regimens for treating TB disease should contain multiple drugs to which the organisms are susceptible. For persons with TB disease, treatment with a single drug can lead to the development of mycobacterial resistance to that drug. Similarly, adding a single drug to a failing anti-TB regimen is not recommended, because doing so can lead to resistance to the added drug (65).
For the majority of patients, the preferred regimen for treating TB disease consists of an initial 2-month phase of isoniazid, rifampin, pyrazinamide, and ethambutol, followed by a continuation phase of isoniazid and rifampin lasting ≥4 months, for a minimum total treatment period of 6 months (Tables 6 and 7). The decision to stop therapy should be made on the basis of the number of doses taken within a maximum period (not simply a 6-month calendar period) (65). Persons with cavitary pulmonary TB disease and positive cultures of sputum specimens at the completion of 2 months of therapy should receive a longer, 7-month continuation phase of therapy (total duration: 9 months) because of the substantially higher rate of relapse among persons with this type of TB disease (65).
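As a rough illustration of the phase arithmetic described above, the following sketch computes total treatment duration from the extension rule for cavitary, culture-positive disease. The function and variable names are assumptions for illustration only; this is not a prescribing tool.

```python
# Sketch of the regimen-duration logic above (illustrative only).
# Durations are in months.

INITIAL_PHASE_MONTHS = 2  # isoniazid, rifampin, pyrazinamide, ethambutol

def continuation_phase_months(cavitary_disease: bool,
                              culture_positive_at_2_months: bool) -> int:
    """Standard continuation phase (isoniazid and rifampin) is >=4 months;
    it is extended to 7 months for persons with cavitary disease whose
    sputum cultures remain positive at the end of the initial phase."""
    if cavitary_disease and culture_positive_at_2_months:
        return 7
    return 4

# Example: cavitary disease, still culture positive at 2 months -> 9 months.
total = INITIAL_PHASE_MONTHS + continuation_phase_months(True, True)
print(total)  # 9
```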
If interruptions in TB therapy occur, the decision should be made whether to restart a complete course of treatment or continue the regimen as originally intended. In the majority of instances, the earlier the break in therapy and the longer its duration, the more serious the effect and the greater the need to restart the treatment from the beginning. Continuous treatment is more important in the initial phase of therapy, when the bacillary burden is highest and the chance of developing drug resistance is greatest. Although no evidence on which to base detailed recommendations exists, examples of practical algorithms for managing interruptions in therapy have been described previously (65).
For HIV-infected persons who are receiving antiretroviral therapy, TB treatment regimens might need to be altered. Whenever possible, the care of persons with concomitant TB and HIV should be provided by or in consultation with persons with expertise in the management of both TB and HIV-related disease (65). To prevent the emergence of rifampin resistance, persons with TB, HIV, and CD4+ T-lymphocyte cell counts <100 cells/mm³ should not be treated with highly intermittent (i.e., once- or twice-weekly) regimens. These patients should instead receive daily therapy during the intensive phase (i.e., the first 2 months) and either daily dosing or 3 doses per week by DOT during the continuation phase (117). Antiretroviral therapy should not be withheld simply because the patient is being treated for TB if it is otherwise indicated. Nevertheless, beginning both antiretroviral therapy and combination chemotherapy for TB at nearly the same time is not advisable. Although data on which to base recommendations are limited, experience in the fields of HIV and TB suggests that treatment for TB should be initiated first. Delaying the initiation of antiretroviral therapy until 4-8 weeks after starting anti-TB therapy is advantageous because it 1) better enables providers to ascribe a specific cause to a drug side effect, 2) decreases the severity of paradoxical reactions, and
3) decreases adherence challenges for the patient. Until controlled studies have been conducted that evaluate the optimal time for starting antiretroviral therapy in patients with HIV infection and TB, this decision should be individualized on the basis of 1) the patient's initial response to treatment for TB, 2) the occurrence of side effects, and 3) the availability of multidrug antiretroviral therapy. Because drug-drug interactions might be less frequent with use of rifabutin, substitution of rifabutin for rifampin might be indicated with certain antiretroviral medications. Detailed information on TB treatment in HIV-infected persons has been published (65,107).

Drug-susceptibility testing should be performed on all initial isolates from patients with TB disease. When results from drug-susceptibility tests become available, the treatment regimen should be adjusted accordingly (65,113,114,118,119) (Tables 6 and 7). Medical providers treating patients with drug-resistant TB disease should seek expert consultation and collaborate with the local health department for treatment decisions (65).
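The dosing-frequency restriction for patients with advanced HIV, described above, can be stated as a simple check. Below is a minimal sketch; the function name and the doses-per-week encoding are illustrative assumptions.

```python
# Sketch of the intermittency rule above: persons with TB, HIV, and CD4+
# counts <100 cells/mm3 should not receive once- or twice-weekly regimens.
# The doses-per-week encoding is an assumption for illustration.

def continuation_dosing_options(cd4_cells_per_mm3: int) -> set:
    """Return the permissible continuation-phase dosing frequencies
    (doses per week) implied by the rule above."""
    if cd4_cells_per_mm3 < 100:
        return {7, 3}  # daily, or 3 doses per week by DOT
    # Other intermittent options may be considered per the full guideline (65,117).
    return {7, 3, 2, 1}

assert 1 not in continuation_dosing_options(50)  # once weekly not allowed
```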
The primary determinant of treatment outcome is patient adherence to the drug regimen. Thus, careful attention should be paid to measures designed to enable and foster adherence (65,119,120). DOT is the preferred treatment strategy for all persons with TB disease and for high-risk (e.g., HIV-infected) persons with LTBI. DOT should be used throughout the entire course of therapy whenever feasible. Practitioners providing treatment to inmates should coordinate DOT with the local health department on an inmate's release. The local health department also may be involved in monitoring therapy for correctional facility staff (65).
# Challenges to Treatment Completion
Achieving completion of treatment for LTBI or TB disease often is difficult, particularly in correctional facilities. Movement of inmates both within and outside of correctional systems interferes with continuity of care and might lead to treatment default (121). Comprehensive case management that includes discharge planning and coordination with other correctional facilities and health departments is needed to ensure completion of therapy for patients with TB disease and LTBI (42).
Multiple studies have demonstrated that inmates have relatively low LTBI treatment completion rates, particularly those in jails who are likely to be released before their therapy has been completed (14,28,40,122). For a substantial proportion of inmates, referrals for follow-up after release are not made; of inmates whose appointments are scheduled, 40%-60% will not attend their first clinic visit (36,40). Multiple interventions have been attempted to improve LTBI treatment completion in this population, including patient education while in jail, use of incentives, and use of DOT (61,122,123). None of these strategies has had substantial success, although patient education and use of DOT have increased completion rates modestly in certain situations (61,122). Active case management, as recommended for TB disease, should be considered as a next step in improving the completion rates for LTBI treatment (14,42).
# Discharge Planning
Correctional facilities should plan for the discharge of inmates and other detainees who have confirmed or suspected TB disease and those with LTBI who are at high risk for TB disease. Such planning is crucial to effective local TB-control efforts within the community to which released inmates return. Facilities should ensure that their discharge plan is comprehensive and effective; the process should include 1) collaborating with public health and other community health-care professionals, 2) ensuring continuity of case management, and 3) evaluating discharge-planning procedures and modifying procedures as needed to improve outcomes.
# Collaboration Between Correctional Facilities and Public Health Officials
Postconfinement follow-up is a necessary component of TB-control efforts (35,124). Effective discharge planning requires collaboration between corrections and medical staff (both intra- and inter-facility) and with public health and community-based service organizations (37). Correctional facilities and public health departments should work together to overcome multiple obstacles associated with postdetention follow-up (125), including
- short length of stay in a facility;
- unscheduled release or transfer;
- poorly defined or implemented channels of communication between correctional and public health authorities;
- limited resources (i.e., staff, equipment, and medications) available to provide recommended TB prevention, screening, treatment, and discharge-planning services;
- limited resources of the patient to make or keep appointments;
- high prevalence of mental illness and substance abuse among correctional patients;
- mistrust among inmates, which might result in the provision of aliases or incorrect contact or locating information; and
- reincarceration with disruption in treatment or termination of public benefits.

Collaboration is essential to ensure that TB-control efforts are undertaken in the most cost-effective manner. Coordination between the correctional facility and the public health department maximizes the effectiveness of any efforts begun in a correctional facility (126), and linking released detainees to the public health-care system might improve post-release adherence (35) and reduce recidivism (127,128). The types of relationships forged will depend on the assessment of the TB risk in the facility and the community.
# Comprehensive Discharge Planning
Comprehensive discharge planning is an important component of case management and is essential for ensuring the continuity of TB management and therapy among persons with TB disease and LTBI. Following release, former inmates face housing, employment, and other crises concerning basic needs that often take priority over their health. Multiple reports from the United States and other countries support the use of comprehensive discharge planning in TB control efforts (42,129,130). Comprehensive discharge planning should be implemented for inmates with confirmed TB disease, suspected TB disease, and LTBI who also are at high risk for TB disease.
Discharge planning for persons with LTBI who are considered at high risk for developing TB disease is critical if treatment is begun in the correctional facility. Starting all inmates at high risk on LTBI therapy might not be feasible while they are in the correctional facility, and the policy determining which risk groups to start on treatment should be made in collaboration with public health personnel. Collaboration ensures appropriate communication and adequate resources for treatment after transfer to another facility or after release to the community. At minimum, all inmates who have begun therapy for LTBI in a correctional facility should be given community contact information for follow-up and continuity of care. Ideally, all inmates demonstrated to be infected with TB should be considered for therapy, and discharge planning to facilitate therapy should be comprehensive (124). Because of high recidivism rates, discharge-planning efforts should begin in the detention phase and continue in the post-detention phase to ensure continuity of care as inmates move among different facilities and between correctional facilities and the community.
# Components of Discharge Planning

# Initiate Discharge Planning Early
To ensure uninterrupted treatment, discharge planning for inmates who receive a diagnosis of TB disease should begin as soon as possible after diagnosis (131). Corrections or health services administrators (or their designees) should assign staff to notify the public health department of inmates receiving treatment for TB disease or LTBI. Inmates with TB disease should be interviewed while still incarcerated (ideally by public health staff) to enable facility administrators to assess and plan for the appropriate support and referrals that will be needed after discharge (131). Such personnel also should communicate with other facilities in the event of transfers of inmates.
# Provide Case Management
To ensure continuity of care, all correctional facilities should assign personnel (preferably health-care professionals) to serve as case managers. These managers should be responsible for conducting discharge planning in the facility, which entails coordinating follow-up and communicating treatment histories with public health department and other health-care counterparts within the community (42). In addition, case managers should employ strategies (e.g., mental-illness triage and referral, substance-abuse assessment and treatment, and prerelease appointments for medical care) to help former inmates meet basic survival needs on release. The role of case manager should be assigned to a facility staff member who is capable of establishing good rapport with inmates; an effective case manager might be capable of persuading TB patients who are being released into the community to supply accurate information needed to ensure follow-up care.
The following factors should be considered when planning community discharge of an inmate receiving treatment for TB (132):
- Where will the ex-inmate reside after discharge (e.g., a permanent residence, a halfway house, or a shelter)?
- Will family or other support be available?
- Are cultural or language barriers present?
- What kind of assistance will be needed (e.g., housing, social services, substance abuse services, mental health services, medical services, and HIV/AIDS services)?
- Does the inmate understand the importance of follow-up and know how to access health-care services?
# Obtain Detailed Contact Information
To facilitate the process of locating former inmates, detailed information should be collected from all inmates with TB disease or LTBI for whom release is anticipated, including 1) names, addresses, and telephone numbers of friends, relatives, and landlords; 2) anticipated place of residence; and 3) areas typically frequented (e.g., restaurants, gyms, parks, and community centers) (61,133). Inmates also should complete a release form authorizing health department personnel to contact worksites, family members, corrections staff (parole officers), and public and private treatment centers. Inmates might give aliases or incorrect contact information because of fear of incrimination or deportation. The use of an alias can be a barrier to continuity of care on reentry to a correctional facility.
# Assess and Plan for Substance Abuse and Mental Health Treatment and for Other Social Services
Substance abuse and other comorbid mental health conditions should be considered when developing a comprehensive discharge plan. Addiction affects health care, medication adherence, housing opportunities, social relationships, and employment and might be the greatest barrier to continuity of care for TB (134). Mental illness can be a barrier when community service providers have not been trained to interact with mentally ill patients. Persons who are mentally ill might have difficulties keeping medical appointments. Collaboration between corrections and health department personnel can facilitate the placement of former inmates in substance abuse or mental-health treatment programs to improve the likelihood of social stabilization and continuity of care (134,135).
Other social issues present barriers to released inmates. Loss of health insurance benefits while incarcerated is common, and former inmates might be required to wait 30-365 days after release to become re-eligible for benefits (136,137). Certain correctional facilities have agreements with local Social Security Administration field offices to facilitate swift reactivation of these benefits (138); creation of and training in the use of such agreements are encouraged. Ideally, on entry into the correctional system, public benefits would be suspended, rather than terminated, and reactivated on release to eliminate gaps in coverage. Application for public benefits and insurance should be incorporated into the discharge planning phase whenever possible. If the inmate is likely to have limited access to care because of inability to pay for services on release, documentation should be made and another treatment mechanism identified (139).
# Make Arrangements for Postrelease Follow-Up
Before release, the inmate should be introduced (preferably face to face) to the employee from the community treatment agency who is responsible for community-based treatment and care (139). When release dates are known, setting post-release appointments has been demonstrated to improve compliance (128,134,140). Patients with TB disease should be given a supply of medication at discharge adequate to last until their next medical appointment. Discharge planners can work with advocacy groups or private or government-funded programs to facilitate a safe, supported transition into the community (61).
# Make Provisions for Unplanned Release and Unplanned Transfers
Administrative procedures should be in place for unscheduled discharge of inmates who are being managed or treated for TB (36,141). Reporting requirements for inmates with TB disease who are released or transferred to another facility vary among states and jurisdictions. Despite mandatory notification policies, notification of public health officials varies from 87%-92% for inmates with TB disease (37,126) to only 17% for inmates with LTBI (36,37). Correctional facility staff responsible for health department notification should relay information about all scheduled and unscheduled releases as it becomes available. All TB information concerning persons who are being transferred to other correctional settings should be provided to the receiving facility. In addition, inmates should be given a written summary or discharge card outlining their treatment plan to ensure continuity of care in case of unplanned and unanticipated release (131,142). Inmates with TB disease who are eligible for release or transfer to another medical or correctional facility but continue to be infectious should remain in airborne precautions during and after transfer until noninfectious (132).
# Provide Education and Counseling
Patient education and documentation of education in the correctional facility are critical; multiple misconceptions persist among inmates and facility staff regarding means of transmission, differences between infection and disease, and methods of prevention and treatment for TB (143). Persons receiving treatment should be counseled about the importance of adhering to the treatment plan (131) as a measure to improve postrelease follow-up (61). Education should be delivered in the inmate's preferred language and should be culturally sensitive with respect to ethnicity, sex, and age (135,144-147). The inmate should be actively involved in all education sessions to encourage communication regarding previous transition experiences (e.g., the inmate's treatment motivations and any positive or negative experiences with specific providers) (141). Inmates with LTBI who have not been started on therapy should be counseled on their risk factors, encouraged to visit the public health department, and provided with information about access to care after release.
# DOT
DOT for TB disease or LTBI in the correctional setting provides an opportunity for educating and counseling inmates and for establishing a routine of medication administration. The effect, if any, of DOT on postrelease behavior has not been evaluated formally, but this practice might enhance adherence (122).
# Community-Based Case Management after Release
Case-management strategies begun in the correctional facility should be continued after release for former inmates with confirmed or suspected TB disease and for those with LTBI who are at high risk for progression to TB disease. Incentives and enablers (see Glossary) have improved adherence in incarcerated (35,60,61) and nonincarcerated (148,149) populations, and incentives combined with education and counseling optimize both short- and long-term adherence (40,60,61,150). Case management that takes into account cultural differences and addresses not only TB-control matters but also patient-defined needs (particularly among foreign-born persons) results in improved completion rates for LTBI therapy (145). Case management by health department personnel after release is critical for continuity of care in the event of reincarceration. The provision of follow-up information from local health departments and community-based organizations back to corrections staff is helpful in determining whether discharge planning is effective.
# Discharge Planning for Immigration and Customs Enforcement Detainees

# Background
Persons with TB disease detained by ICE officers are a potential public health threat because they typically are highly mobile, likely to leave and reenter the United States before completion of TB therapy, and at high risk for interrupting treatment (151). Therefore, ensuring treatment of such detainees is important to the national strategy to eliminate TB in the United States (32,152).
In March 2003, the detention and removal functions of the former Immigration and Naturalization Service (INS) were transferred from the U.S. Department of Justice (DOJ) to the U.S. Department of Homeland Security (DHS). ICE is a division of DHS and detains approximately 200,000 persons annually while enforcing immigration law. ICE detainees are screened for TB disease at service processing centers, staging facilities, contract detention facilities, and local jails. Frequent transfers of ICE detainees between detention facilities are common.
ICE detention provides an opportunity to identify persons with confirmed and suspected TB disease and initiate treatment, if appropriate. ICE detainees with confirmed or suspected TB disease receive treatment while they are in custody. Presently, ICE does not deport detainees with known infectious TB, but such persons might be deported when noncontagious, even if treatment has not been completed or the final culture and susceptibility results are pending.
# Discharge Planning for ICE Detainees
In May 2004, ICE approved a policy to implement a short-term medical hold of persons with suspected or confirmed TB disease until continuity of care is arranged, which affords the ICE health services program the time needed to facilitate continuity of TB therapy arrangements before the patient's release or removal. The ICE health services program seeks to enroll all persons with confirmed or suspected TB disease in programs that facilitate the continuity of TB therapy between countries. These programs (e.g., CureTB, TB Net, and the U.S.
# Evaluation of Discharge Planning Effectiveness
Evaluation of a discharge planning program is critical and should begin with an assessment of existing programs and activities. Program evaluation should be incorporated into the overall correctional quality improvement/assurance program (153). Data from program evaluation studies should be documented and published to ensure that correctional facility and public health department staff are informed regarding effective measures and the effective translation of research findings into practice (123). Evaluation of discharge planning should include measurements of
- adherence to therapy,
- cost savings (from unduplicated testing for persons with LTBI and completion of care without re-starts and extensions),
- recidivism, and
- the effectiveness of the collaboration between medical and corrections staff (both within and among facilities) and between correctional facilities and the public health department and other community agencies.
# Contact Investigation

# Overview
Multiple outbreaks of TB, including those involving MDR TB, have been reported in prisons and jails, particularly among HIV-infected inmates (17,22,45,154). The identification of a potentially infectious case of TB in a correctional facility should always provoke a rapid response because of the potential for widespread TB transmission. A prompt public health response in a confined setting can prevent a TB outbreak or contain one that has already begun (16,21,155).
The overall goal of a TB contact investigation is to interrupt transmission of M. tuberculosis. Ongoing transmission is prevented by 1) identifying, isolating, and treating persons with TB disease (source and secondary-case patients) and 2) identifying infected contacts of the source patient and secondary patients and providing them with a complete course of treatment for LTBI. The contact investigation can serve to educate corrections staff and inmates about the risk, treatment, and prevention of TB in correctional facilities; inform staff and inmates regarding the importance of engaging in recommended TB-control practices and procedures within the correctional system; and emphasize the importance of completion of therapy for persons with TB disease and LTBI.
Because decisions involved in planning and prioritizing contact investigations in correctional facilities are seldom simple, a multidisciplinary team is preferable. Health departments often can help correctional facilities in planning, implementing, and evaluating a TB contact investigation.
Data collection and management are an essential component of a successful investigation (21,36). A systematic approach is required for collecting, organizing, and analyzing TB-associated data. As part of the contact investigation, all staff and investigation personnel should adopt a uniform approach. Investigators should have a clear understanding of how a contact is defined and what constitutes an exposure (156-158).
Two correctional information systems are critical to the efficient conduct of a contact investigation: 1) an inmate medical record system containing TST results and other relevant information and 2) an inmate tracking system. The lack of either system can lead to the unnecessary use of costly personnel time and medical evaluation resources (e.g., TSTs and chest radiographs). Without these information systems, facilities also might be forced to implement costly lockdowns and mass screenings.
# TB Transmission Factors
TB transmission is determined by the characteristics of the source patient and exposed contacts; the circumstances surrounding the exposure itself also determine whether ongoing transmission will occur. The following variables should be accounted for when planning each contact investigation.
# Characteristics of the Source Patient
Source patients who have either cavitation on chest radiograph or AFB smear-positive respiratory specimens are substantially more likely to transmit TB than persons who have neither characteristic (159-163). Delays in TB diagnosis in source patients also have been associated with an increased likelihood of transmission (164). Nonetheless, substantial variability exists in the infectiousness of any given TB source patient. Although AFB-smear status, cavitary disease, and delayed diagnosis increase the likelihood of transmission, certain persons with these characteristics infect few persons, whereas others with none of these characteristics might infect multiple persons. The best measure of the infectiousness of source patients is the documented infection rate among their contacts.
# Characteristics of Persons Who Have Been Identified as Contacts
Immunosuppression. HIV infection is the greatest single risk factor for progression to TB disease. Therefore, HIV-infected contacts should receive the highest priority for evaluation of TB infection, even if these persons had a shorter duration of exposure than other contacts. Persons receiving prolonged therapy with corticosteroids, chemotherapy for cancer, or other immunosuppressive agents (e.g., TNF-α antagonists) also should be considered high priority for investigation. In addition, persons with end-stage renal disease and diabetes mellitus should be evaluated promptly, because these conditions are associated with compromised immune function.
Age. Young children (i.e., those aged <4 years) are at high risk for rapid development of TB disease, particularly TB meningitis. If an inmate with TB identifies a young child as a community contact, a health department referral should be made immediately.
# Exposure Characteristics
Air volume. The volume of air shared between an infectious TB patient and susceptible contacts is a major determinant of the likelihood of transmission. Infectious particles become more widely distributed as air space increases, rendering them less likely to be inhaled.
Ventilation. Ventilation is another key factor in the risk for airborne transmission of disease. Airborne infectious particles disperse throughout an entire enclosed space; thus, if air is allowed to circulate from the room occupied by an infectious patient into other rooms or central corridors, the occupants of those spaces also will be exposed. Areas that have 1) confined air systems with little or no ventilation or 2) recirculated air without HEPA filtration have been associated with increased TB transmission.
Duration of exposure. Although transmission of TB has occurred after brief exposure, the likelihood of infection after exposure to an infectious patient is associated with the frequency and duration of exposure. However, defining what constitutes a substantial duration of exposure for any given contact is difficult. When conducting a contact investigation, priority should be given first to inmates and employees who were most exposed to the source patient (21,154,162).
# Decision to Initiate a Contact Investigation
The decision to initiate a contact investigation for an inmate or detainee with possible TB is made on a case-by-case basis. Each potential source patient's clinical presentation and opportunities for exposure should be evaluated. Contact investigations should be conducted in the following circumstances:
- Suspected or confirmed pulmonary, laryngeal, or pleural TB with cavitary disease on chest radiograph or positive AFB smears (sputum or other respiratory specimens). If the sputum smear is positive and the NAA is negative, TB is unlikely, and a contact investigation typically can be deferred. A negative NAA on an AFB-smear-negative specimen, however, should not influence decisions about the contact investigation (102).
- Suspected or confirmed pulmonary (noncavitary) or pleural TB with negative AFB smears (sputum or other respiratory specimens) when a decision has been made to initiate TB treatment. A more limited initial investigation may be conducted for smear-negative cases. Contact investigations typically are not indicated for extrapulmonary TB cases (except for laryngeal and pleural TB), unless pulmonary involvement is also diagnosed.
The decision as to whether the facility should conduct a contact investigation should be guided by the probability that an inmate or employee has pulmonary TB. Sputum results for AFB serve as a major determinant (165). However, in certain patients with pulmonary TB, collecting sputum samples is not feasible. In this circumstance, other types of respiratory specimens (e.g., those from bronchoscopy) may be collected for AFB smear and mycobacterial culture.
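The initiation criteria above can be read as a triage rule. The sketch below encodes one plausible reading; the parameter names and returned strings are illustrative assumptions, and edge cases should be resolved in consultation with the health department.

```python
# Illustrative triage of the contact-investigation criteria above.
# Names and return values are hypothetical.

def contact_investigation_decision(pulmonary_laryngeal_or_pleural: bool,
                                   cavitary_on_radiograph: bool,
                                   afb_smear_positive: bool,
                                   naa_result: str,  # "pos", "neg", or "not_done"
                                   treatment_initiated: bool) -> str:
    if not pulmonary_laryngeal_or_pleural:
        return "typically not indicated (extrapulmonary only)"
    if afb_smear_positive and naa_result == "neg":
        return "typically defer (smear positive but NAA negative: TB unlikely)"
    if cavitary_on_radiograph or afb_smear_positive:
        return "conduct contact investigation"
    if treatment_initiated:
        return "limited initial investigation (smear negative)"
    return "not indicated at this time"
```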
# Principles for Conducting the Contact Investigation
No simple formula has been devised for deciding which contacts to screen in a correctional facility contact investigation. However, the investigation should be guided by the following basic principles:
- Identified contacts should be stratified by their duration and intensity of exposure to the source patient.
- HIV-infected contacts should be classified as the highest-priority group for screening and initiation of LTBI therapy, regardless of duration and intensity of exposure.
- Identified groups of contacts with the greatest degree of exposure should be screened immediately, followed by repeat testing at 8-10 weeks if the initial TST or QFT-G is negative.
- The infection rate should be calculated to assess the level of TB transmission.
- Decisions to expand the contact investigation to groups with less exposure should be made on the basis of the calculated infection rate. If no evidence of transmission is observed, the investigation should not be expanded. If transmission is occurring, the investigation should be expanded incrementally to groups with less exposure. When the group screened shows minimal or no evidence of transmission, the contact investigation should not be expanded further.
- Corrections and medical staff should be included in the contact investigation depending on their exposure risks.

Ideally, decisions about structuring the contact investigation should be made collaboratively by a contact investigation team that includes input from the state or local health department. For certain investigations, screening a convenience sample before expanding the investigation is prudent. For example, in jail investigations, multiple contacts might already have been released, rendering those who remain incarcerated the only available group for screening. If a substantial number of high-priority contacts cannot be evaluated fully, a wider contact investigation should be considered.
The investigation should focus on identifying the contacts at highest risk for transmission, screening them completely, and providing a full course of LTBI treatment for persons demonstrated to be infected. In general, because wide-scale investigations divert attention away from the high priority activities necessary to interrupt transmission in the facility, mass screening of all persons who had any contact with the source patient should be avoided (166). Rarely is a person so infectious that wide-scale expansion of the contact investigation is necessary or beneficial.
# Medical Evaluation of Contacts
Appropriate medical evaluation depends on both the immunologic status (e.g., HIV infection) of the contact and previous TST or QFT-G results. Adequate knowledge of these data is possible only through use of a medical record system that is complete, up-to-date, and reliable with regard to TST or QFT-G status, testing date, and documentation of the reading in millimeters (for TST). Without an adequate medical record system (and therefore definitive information regarding prior TST or QFT-G results), the true infection and transmission rates cannot be determined. The lack of such information is likely to lead to unnecessary expansion of the contact investigation.
# All Contacts
All contacts should be interviewed for symptoms of TB disease using a standard symptom questionnaire. Symptomatic contacts should receive a chest radiograph and a complete medical evaluation by a physician, regardless of TST or QFT-G status; they also should be isolated appropriately (i.e., inmates should be placed in an AII room if infectious TB is suspected on the basis of chest radiograph or clinical findings, and staff should not be permitted to work; asymptomatic contacts with normal chest radiographs typically do not require isolation). HIV testing should be considered for all contacts whose HIV status is unknown.
# Inmates with Documented Previous Positive TST or QFT-G Results
Inmates who are asymptomatic, HIV-negative, and have previous positive TST or QFT-G results need no further follow-up, other than consideration for "routine" treatment of LTBI, if not completed in the past. However, if such an inmate has any signs or symptoms suggestive of TB, further evaluation should be conducted (e.g., a chest radiograph for persons with respiratory symptoms).
# HIV-Infected Inmates
HIV-infected contacts should be interviewed for symptoms, have a TST or QFT-G and chest radiograph performed, and initiate a complete course of treatment for LTBI (once TB disease has been ruled out), regardless of the TST or QFT-G result. Treatment should be initiated even for persons with a history of previous treatment for LTBI or TB disease because of the possibility of re-infection. Those with a history of a negative TST or QFT-G should have a TST or QFT-G placed at baseline and again in 8-10 weeks. The results of the TST or QFT-G will not affect treatment decisions, but they will provide important information for the contact investigation. Anergy testing is not recommended (52).
# Previous TST-Negative or QFT-G-Negative Inmates (HIV Negative)
Mandatory tuberculin skin or QFT-G testing of all previously TST-or QFT-G-negative inmate contacts should be conducted at baseline (unless previously tested within 1-3 months of exposure). Testing should be repeated 8-10 weeks from the most recent contact with the source patient (58,167).
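The baseline and follow-up testing windows described above are simple date arithmetic. The following sketch is illustrative; the function names, the 3-month outer bound chosen for the baseline check, and the example date are assumptions.

```python
# Sketch of the testing windows above. Names and dates are illustrative.
from datetime import date, timedelta

def baseline_test_needed(last_test: date, exposure_start: date) -> bool:
    """A baseline test is unnecessary if a test was already performed within
    1-3 months of exposure; 3 months (90 days) is used here as the bound."""
    return last_test < exposure_start - timedelta(days=90)

def follow_up_window(last_contact_with_source: date) -> tuple:
    """Repeat testing 8-10 weeks after the most recent contact."""
    return (last_contact_with_source + timedelta(weeks=8),
            last_contact_with_source + timedelta(weeks=10))

start, end = follow_up_window(date(2006, 3, 1))  # 2006-04-26 to 2006-05-10
```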
# TST and QFT-G Converters
Persons whose TSTs or QFT-Gs convert or those with newly documented, positive TST or QFT-G results should be offered treatment for LTBI unless medically contraindicated. If inmate contacts refuse medically indicated treatment for LTBI, they should be monitored regularly for symptoms. Certain facilities have chosen to monitor HIV-infected contacts with follow-up chest radiographs.
# Contact Investigation Stepwise Procedures
The following steps are involved in conducting a contact investigation and might overlap in time. As soon as a person is confirmed or suspected of having TB disease, the case should be reported to the appropriate local health authorities and contacts promptly evaluated.
- Notify correctional management officials. Identification of TB in an inmate or facility staff member can be alarming for other inmates, corrections staff, and the community. The administrator should be notified through the appropriate chain of command that a case of TB has been identified in the institution so that appropriate briefing and educational efforts can be initiated.
- Conduct a source patient chart review. The following data (with specific dates) should be collected: 1) history of previous exposure to TB, 2) history of TB symptoms (e.g., cough, fever, and night sweats), 3) weight history (particularly unexplained weight loss), 4) chest radiograph reports, 5) previous TST or QFT-G results, 6) mycobacteriology results (e.g., AFB smears, cultures, and susceptibilities), 7) NAA test results, 8) HIV status, and 9) other medical risk factors.
- Interview the source patient. A chart review and case interview should be accomplished within 1 working day for persons with AFB smear-positive respiratory specimens or cavitation on chest radiograph and within 3 days for all other persons (165). Source patients should be asked about their TB symptom history, with a particular focus on duration of cough. Source patients also should be asked about where they conduct their daily activities. Persons with confirmed or suspected TB who were detained during the course of the infectious period should be interviewed regarding potential community contacts, particularly HIV-infected persons and young children; information regarding the location of community contacts also should be obtained. Source patients should be questioned regarding contacts during a second interview conducted 7-14 days after the first.
- Define the infectious period. Defining the infectious period for a source patient helps investigators determine how far back to go when investigating potential contacts. The infectious period is typically defined as beginning 12 weeks before TB diagnosis or onset of cough (whichever is longer). If a patient has no TB symptoms, is AFB smear negative, and has a noncavitary chest radiograph, the presumed infectious period can be reduced to the 4 weeks before the date of the first positive finding consistent with TB. If the contact investigation reveals that TB transmission occurred throughout the identified infectious period, the period for contact investigation might need to be expanded beyond 12 weeks.
- Convene the contact investigation team. After TB is diagnosed, a team of professionals (e.g., infection-control, medical, nursing, custody, and local public health personnel) should be convened and charged with planning the contact investigation. A team leader should be identified, the roles and responsibilities of each team member defined, and a schedule of regular meetings (documented formally with written minutes) established. In addition, a communications plan and a plan for handling contact investigation data should be developed.
- Update correctional management officials. Administrative personnel should be kept apprised of the strategy, process, and action steps involved in conducting the contact investigation.
- Obtain the source patient's inmate traffic history. The dates and locations of the source patient's housing during the infectious period and information regarding employment and education should be obtained. Groups of contacts should be prioritized according to duration of exposure and immune status.
- Tour exposure sites. A tour should be conducted of each place the source patient lived, worked, or went to school during the infectious period. In addition, information should be obtained regarding any correctional facility that has housed the source patient during the infectious period, including 1) the number of inmates who are housed together at one time, 2) the housing arrangement (e.g., cells versus dorms), 3) the general size of the air space, 4) the basics of the ventilation system (e.g., whether air is recirculated), 5) the pattern of daily inmate movement (e.g., when eating, working, and recreating), and 6) the availability of data on other inmates housed at the same time as the source patient. The assistance of a facility engineer often is necessary to help characterize the ventilation system and airflow direction within a correctional facility.
- Prioritize contacts. Contacts should be grouped according to duration and intensity of exposure. Persons with the most exposure and HIV-infected or other immunosuppressed contacts (regardless of duration of exposure) are considered highest priority. Because progression from exposure to death can be rapid among HIV-infected persons, in a facility in which HIV-infected persons are housed or congregated separately, the entire group should be given high priority (45).
- Develop contact lists. Rosters of inmate and employee contacts from each location should be obtained and their current location investigated. Lists of exposed contacts should be generated and grouped according to current location (e.g., still incarcerated, released, and transferred).
- Conduct a medical record review on each high-priority contact. TST or QFT-G status, chest radiograph history, history of treatment for LTBI, HIV status, and other high-risk medical conditions should be recorded. Particular attention should be given to weight history and previous visits to facility health-care professionals for respiratory symptoms. Dates should be carefully recorded.
- Evaluate HIV-infected contacts for TB disease and LTBI promptly. LTBI therapy should be initiated promptly among these persons once TB disease has been excluded.
- Place and read initial TSTs or perform QFT-Gs on eligible contacts. Tuberculin skin or QFT-G testing of all previously TST- or QFT-G-negative inmate contacts should be conducted at baseline (unless previously tested within 1-3 months of exposure). Referrals should be made for persons who have been released or transferred before receiving their initial TST or QFT-G.
- Make referrals for contact evaluation. Referrals should be made to the local health department for inmate contacts of the source case who have been released or transferred to another facility. Additionally, family members or frequent visitors of the source patient should be investigated by the health department; follow-up TST or QFT-G results for a substantial percentage of contacts of released inmates have been obtained on re-arrest by matching the list of exposed contacts with the jail intake TST or QFT-G registry (21).
- Calculate the infection rate and determine the need to expand the investigation. To calculate the infection rate, the total number of inmates whose TST or QFT-G has converted from negative to positive should be divided by the total number with a TST placed and read or a QFT-G performed. Persons with a history of a prior positive TST or QFT-G should be excluded. The infection rate should be calculated by exposure site (see the sketch after this list).
In addition, if using tuberculin skin testing, separately calculating the rate for U.S.-born versus foreign-born inmates might provide useful data (33); foreign-born contacts often have a history of BCG vaccination, and a TST "conversion" among these contacts might represent a vaccination-associated "booster" TST response (168). The contact investigation team should analyze the infection rate(s) and decide whether to expand the investigation.
- Place and read follow-up TSTs or perform follow-up QFT-Gs. Follow-up TSTs or QFT-Gs for contacts who had a negative TST or QFT-G result on initial testing should be placed 8-10 weeks after exposure to the source patient has ended. Referrals should be made for persons who have been released or transferred and need a follow-up TST or QFT-G.
- Determine the infection/transmission rate. The infection rate from the second round of testing should be calculated. In addition, the need to expand the investigation should be determined.
- Write a summary report. The summary report should briefly describe the circumstances of the investigation, how it was conducted, the results of the investigation (e.g., the number of secondary cases identified and the infection and transmission rates), and any special interventions required (including follow-up plans). The report should be distributed to corrections administrators and the local health department.
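The infection-rate calculation referenced in the list above divides converters by the number of previously negative contacts who completed testing, excluding anyone with a documented prior positive result, grouped by exposure site. A minimal sketch follows; the record fields are hypothetical.

```python
# Minimal sketch of the infection-rate calculation (record fields are
# hypothetical). Rate = converters / contacts tested, excluding persons
# with a documented prior positive TST or QFT-G, grouped by exposure site.
from collections import defaultdict

def infection_rates_by_site(contacts):
    tested = defaultdict(int)      # denominators per exposure site
    converted = defaultdict(int)   # numerators per exposure site
    for c in contacts:
        if c["prior_positive"]:    # excluded entirely from the calculation
            continue
        if c["test_completed"]:    # TST placed and read, or QFT-G performed
            tested[c["site"]] += 1
            if c["converted"]:
                converted[c["site"]] += 1
    return {site: converted[site] / n for site, n in tested.items()}

rates = infection_rates_by_site([
    {"site": "dorm A", "prior_positive": False, "test_completed": True, "converted": True},
    {"site": "dorm A", "prior_positive": False, "test_completed": True, "converted": False},
])
print(rates)  # {'dorm A': 0.5}
```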
# Tuberculosis Training and Education of Correctional Workers and Inmates
TB training and education of correctional workers and other persons associated with any correctional facility (e.g., volunteers and inmates) can help lower the risk for TB transmission and disease. To ensure the effectiveness of such training and education, multiple factors should be considered. First, correctional facilities and local or state health departments should collaborate when providing TB training and education to correctional workers; specifically, facilities should routinely work with health department staff to provide them with corrections-specific training. Second, routine TB education should be provided for all persons who spend substantial time in the facility, and additional training should be given to any employee who will interact with persons at risk for TB. The ideal amount of training time and information varies by the local risk for TB transmission and by the job descriptions and characteristics of those needing training. Finally, TB training and education efforts and other TB-related events should be documented to ensure that these programs can be evaluated and updated. In-facility training and education efforts can build on existing sources of TB-related preservice education and training. Regional and national professional associations frequently provide ongoing education regarding TB and infection control, and national correctional health-care conferences and courses for medical professionals working in correctional facilities regularly include TB in their curricula.
# Training and Education in Correctional Facilities
TB-associated training should be designed to meet the needs of correctional workers with diverse job descriptions. In multiple facilities and for multiple categories of correctional workers, appropriate TB training might be accomplished through incorporation of the topic into other annual employee training sessions (e.g., bloodborne pathogen training); more extensive or topic-specific training should be developed for persons who are specifically involved in TB control. Facilities that use inmates to provide peer-to-peer TB-education programs should provide similarly tailored training to any participating inmates. Facilities located in areas with a high TB prevalence or whose inmates have lived in such areas might need to increase the time and resources dedicated to TB training.
The correctional facility health services director or designee (i.e., the staff member responsible for a facility's TB control program) should collaborate with the local public health department to establish TB education and training activities. In addition, these staff members routinely should evaluate and update the facility's TB training and education program in collaboration with the public health sector. External changes in the prevalence of TB in the community, changes in state or local public health policies, or changes in national TB control guidelines might necessitate the conduct of regular educational updates for staff.
Each facility should maintain training records to monitor correctional worker training and education. Records of TB-related adverse events (e.g., documented in-facility transmission) also should be monitored as a means of evaluating training and education outcomes. The circumstances of adverse events should be investigated, and the possibility of enhanced or altered training should be considered as an appropriate intervention.
# Initial Training and Education for All Correctional Workers
Although the level and detail of any employee's initial TB training and education session will vary according to staff members' job responsibilities, the following components should be included for all correctional workers, regardless of job function:
- communication regarding the basic concepts of M. tuberculosis transmission, signs, symptoms, diagnosis (including the difference between LTBI and TB disease), and prevention;
- provision of basic information regarding the importance of following up on inmates or correctional workers demonstrating signs or symptoms of TB disease;
- the need for initiation of airborne precautions for inmates with suspected or confirmed TB disease;
- review of the policies and indications for discontinuing AII precautions;
- discussion of basic principles of treatment for TB disease and LTBI; and
- discussion regarding TB disease in immunocompromised persons.§§

§§ Because being immunocompromised (having pathologic or iatrogenic immune suppression, e.g., HIV infection or chemotherapy) is a risk factor for TB disease, correctional workers should be educated on the relation between TB and medical conditions associated with being immunocompromised. Correctional workers should be encouraged to discuss known or possible immunocompromising conditions with their private physicians or health-care professionals.
# Required Training for Correctional Workers in Facilities with AII Rooms
Correctional workers in facilities equipped with AII rooms also should be provided clear guidelines regarding the identification and containment of persons with TB disease. Education efforts for these staff members should include 1) discussion of the use of administrative and engineering controls and personal protective equipment and 2) a respiratory protection program (including annual training) as mandated by OSHA (29 CFR 1910.134).
# Enhanced Training and Education for Correctional Workers in High-Risk Facilities
Correctional workers in facilities with a high risk for TB transmission should receive enhanced and more frequent training and education concerning
- the signs and symptoms of TB disease,
- transmission of TB disease, and
- TB infection-control policies (including instruction on and location of the facility's written infection-control policies and procedures, exposure control plan, and respiratory protection program).

If a contact investigation is being conducted because of suspected or confirmed infectious TB, the health department or designated health provider should educate facility correctional workers in all aspects of the investigation. Education should include information concerning
- contact investigation guidelines (165),
- TB transmission,
- the method used to determine a contact's risk for infection and prioritization for evaluation and treatment,
- the noninfectiousness of inmates and correctional workers with LTBI,
- the noninfectiousness of persons with TB disease who have responded to therapy and have submitted three AFB-negative sputum-smear results, and
- patient confidentiality issues.

Facility staff members who are responsible for TB-control activities should stay informed regarding current TB trends and treatment options. Conference attendance, participation in professional programs, and other off-site training are effective supplemental training strategies for correctional worker trainers and facility medical and infection-control staff.
# Training and Education of Public Health Department Staff
State and local health department staff providing consultation or direct services to a correctional facility (including those who act as liaisons) should receive training and education regarding the unique aspects of health care and TB control in the correctional facility setting. Correctional facility administrators, contracted correctional facility health-care professionals, and health department staff should collaborate to develop an appropriate training program. The use of self-study and other educational materials should be encouraged as a supplement to training. Certain TB training resources also can be accessed on the Internet (Appendix A). Education and training of health department staff should cover (but not be limited to) the following topics:
- TB-related roles of correctional facility and health department staff;
- methods of effectively collaborating with correctional facilities;
- differences between and among jails, prisons, and other forms of detention facilities;
- correctional culture and the importance of respecting the mission and purpose (i.e., custody) of correctional facilities and correctional workers;
- the health department's role in the discharge of inmates (see Discharge Planning); and
- the effect of the custody and movement of foreign detainees on local facilities.
# Training and Education of Inmates
Inmates should receive education from facility health-care professionals or other appropriately trained workers managing the screening or treatment process. Education and training should be appropriate in terms of the education level and language of the trainees. The following components should be incorporated into inmate training and education programs:
- general TB information (provided either at the time of admission or when being screened for TB);
- the meaning of a positive TST or QFT-G result and treatment options for LTBI;
- comprehensive TB education, including the infectiousness of and treatment for inmates being confined with suspected or confirmed TB disease; and
- the importance of completing treatment for inmates with LTBI or TB disease.
# Program Evaluation
Six steps should be followed to ensure successful monitoring and evaluation of a TB-prevention and -control program:
- identifying collaborators,
- describing the TB-control program,
- focusing the evaluation to assess TB risk and performance,
- collecting and organizing data,
- analyzing data and forming conclusions, and
- using the information to improve the TB program (169).
The purpose of program evaluation is to improve accountability, enable ongoing learning and problem-solving, and identify opportunities for improvement. The evaluation process should be designed to provide information relevant to the stakeholders. Measures should be simple and the communication of results meaningful.
# Identifying Collaborators
TB control requires the collaboration of correctional systems, health departments, and other community agencies; effective program evaluation also involves teamwork. Early engagement of program staff and internal and external collaborators (including custody staff) helps ensure that the evaluation will yield the information that is most useful to stakeholders. Such engagement also promotes mutual cooperation for constructive change. Although multiple parties might be involved, each TB program should have a single person designated to be responsible for data quality and program evaluation. Designating staff for these activities helps ensure that continuity and accountability are maintained.
# Describing the Program
Underlying a useful evaluation is an understanding of how the TB program currently operates within the context of the facility. Evaluators should be knowledgeable about program goals and objectives, strategies, expected program-associated results, and the way in which the program fits into the larger organization and community. This information can typically be obtained by reviewing a facility's existing TB-control plan.
In addition, all stakeholders should agree on program goals before the evaluation is undertaken (169).
# Focusing the Evaluation to Assess TB Risk and Performance
# Risk Assessment
Each facility should assess its level of TB risk at least annually (71). The TB risk assessment (see Screening) determines the types and levels of administrative and environmental controls needed. Assessment of a facility's risk level includes analysis of disease burden and facility transmission, which can be conducted by examining the following indicators:
- Burden of disease
  - community rates of TB disease (including other communities from which substantial numbers of inmates come; these data are available from local health departments),
  - the number of cases of TB disease in the facility during the preceding year, and
  - the number and percentage of inmates and staff with LTBI; and
- Facility transmission
  - the number and percentage of staff and inmates whose tests for TB infection converted and the reasons for the conversion,
  - the number of TB exposure incidents (see Contact Investigation), and
  - evidence of person-to-person transmission.

Conversion rates (as determined by annual testing) for staff and inmates should be determined and tracked over time to monitor for unsuspected transmission in the facility. In larger facilities, conversion rates for staff assigned to areas that might place them at higher risk for TB (e.g., booking and holding areas, day rooms, libraries, enclosed recreation areas, medical and dental areas, and transport vehicles) should be calculated and tracked (an illustrative calculation follows the list below). Staff should analyze contributing factors to TB exposure and transmission and plan for corrective intervention, as appropriate. The following performance measures should be considered when determining risk within all correctional facilities, including those that function as a contract facility within a larger correctional system:
- the timeliness with which patients with suspected TB disease are detected, isolated, and evaluated (see Performance Measurement for Improving Quality); and
- other factors (e.g., the total number of patients with TB housed in the facility and the number of persons housed in the facility who are at risk for TB) that will help determine the controls needed (71).
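As an illustrative aid, the conversion rates described above can be calculated directly from testing records. The following minimal Python sketch assumes hypothetical record fields ("year", "area", "converted") that are not prescribed by these guidelines; it simply restates the conversion-rate definition (conversions divided by persons tested, multiplied by 100; see Glossary).

```python
from collections import defaultdict

# Hypothetical testing records: one entry per person tested in a given year.
# "converted" marks a documented test conversion (see Glossary: Conversion rate).
records = [
    {"year": 2005, "area": "booking/holding", "converted": False},
    {"year": 2005, "area": "booking/holding", "converted": True},
    {"year": 2005, "area": "medical/dental", "converted": False},
]

def conversion_rates(records):
    """Conversion rate (%) per (year, area): conversions / persons tested x 100."""
    tested = defaultdict(int)
    converted = defaultdict(int)
    for r in records:
        key = (r["year"], r["area"])
        tested[key] += 1
        if r["converted"]:
            converted[key] += 1
    return {key: 100.0 * converted[key] / tested[key] for key in tested}

for (year, area), rate in sorted(conversion_rates(records).items()):
    print(f"{year} {area}: {rate:.1f}% conversion rate")
```

Tracking the resulting rates across successive years supports the monitoring for unsuspected transmission described above.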
# Performance Measurement for Improving Quality
The risk-assessment process enables the monitoring of risk for TB transmission (the key program indicator) and helps guide the focus and intensity of ongoing performance measurement and monitoring. Facilities at higher risk (e.g., those with cases of TB disease) benefit more from broader investigation of performance than facilities at lower risk. Risk-assessment findings should help guide the development of simple process performance measures for each pertinent area of TB prevention and control. These performance measures can then be used to monitor program implementation and intermediate outcomes. Treatment completion and continuity of care are key performance indicators. Each facility should have goals against which to measure performance in these areas (e.g., 100% of patients with TB disease will have documented treatment completion or, in the case of release or transfer, continuity of treatment on release). For LTBI, goals might be that 100% of patients released during treatment will have a documented referral for continuity of care in the community and that 90% of these patients will follow up on their referral. The following are examples of possible performance measures that can be useful as part of a TB program evaluation, depending on the level of risk:
- Timeliness of screening and isolation (a sketch of deriving these interval measures from dated records follows this list)
  - time from inmate admission to testing for TB infection,
  - time from TB testing to obtaining test results,
  - time from positive TB infection test results to obtaining a chest radiograph,
  - time from identification of a suspect TB patient (either through symptoms or abnormal chest radiograph) to placement in an AII room,
  - time from sputum collection to receipt of results, and
  - time from suspicious result (either via radiograph, smear-positive result, or smear-negative/culture-positive result) to initiation of contact investigation;
- Treatment
  - the number and percentage of patients with LTBI who initiated treatment and the percentage of persons who completed the prescribed treatment for LTBI (excluding those released from or transferred out of the facility),
  - the number and percentage of persons in whom TB disease was diagnosed who completed the prescribed treatment regimen (excluding those released from or transferred out of the facility), and
  - the reasons for treatment interruption among persons who stop therapy; and
- Continuity of care¶¶
  - the number and percentage of patients released before completing treatment for TB disease or LTBI who had documented community appointments (or referrals) for continuity of care, and
  - the number and percentage of patients with confirmed and suspected TB disease who kept their first medical appointment in the community.

Other pertinent performance measures for correctional facilities might include the adherence rates among inmates and staff who should undergo TB testing, the percentage of staff receiving TB education and training annually, and the percentage of inmates receiving TB education.

¶¶ Public health departments typically track treatment completion rates for patients referred to their care.
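The timeliness measures listed above are intervals between dated events in the medical record. The following minimal Python sketch is illustrative only; the event names and dates are hypothetical assumptions, not fields prescribed by these guidelines.

```python
from datetime import date

# Hypothetical dated events for one inmate's TB screening episode.
episode = {
    "admission": date(2006, 3, 1),
    "tb_test_placed": date(2006, 3, 2),
    "tb_test_read": date(2006, 3, 4),
    "chest_radiograph": date(2006, 3, 5),
    "aii_placement": date(2006, 3, 5),
}

# Each measure is the number of days between two recorded events.
MEASURES = [
    ("admission to testing", "admission", "tb_test_placed"),
    ("testing to results", "tb_test_placed", "tb_test_read"),
    ("positive result to chest radiograph", "tb_test_read", "chest_radiograph"),
    ("suspect identification to AII placement", "chest_radiograph", "aii_placement"),
]

for label, start, end in MEASURES:
    days = (episode[end] - episode[start]).days
    print(f"{label}: {days} day(s)")
```

In practice, each interval would be compared against the facility's time standards during record review (see Collecting and Organizing Data).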
# Assessment of Collaboration
On an annual basis, each program also should evaluate its success in working collaboratively with local and state public health departments in each area of TB control (e.g., screening).
# Collecting and Organizing Data

# Data Sources
As part of quality assessment, all facilities that house persons with confirmed or suspected TB disease should conduct periodic reviews of medical records for these patients and for a sample of patients with LTBI. In collaboration with the public health department, the review should be conducted at least annually in facilities with any confirmed or suspected cases of TB (including low-risk facilities) and quarterly in higher-risk facilities with numerous cases. The record review should compare actual performance against time standards, protocols, and goals for TB activities and outcomes (see Performance Measurement for Improving Quality). Multiple tools are available for data collection (Appendix B) (131).
Medical records should contain information regarding TB history and risk factors, treatment, and all other interventions and dates to enable performance to be monitored. Other sources of data include log books, interviews with staff, and observations. Quality controls for TST placement and reading should be checked at least annually. The quality of the data used for calculating performance also should be verified.
# Information Infrastructure
Effective program monitoring and evaluation is made possible through the reliable collection of valid data and through analysis of these data. Health-care professionals responsible for the prevention and control of TB within a correctional facility should have access to complete medical records and a database of essential TB-related activity and measurements. A retrievable aggregate record system is essential for tracking all inmates and for assessing the status of persons who have TB disease and LTBI, particularly in large jail and prison systems in which inmates are transferred frequently from one facility or unit to another. This record system should maintain, at a minimum, current information about the location, screening results, treatment status, and degree of infectiousness of these persons. In addition to facilitating case management, such a record system provides facilities with the information necessary for conducting annual TB risk assessments, monitoring TB trends, measuring performance, and assessing the effectiveness of overall TB control efforts. Information contained in health records should always be kept confidential; all staff members involved in program evaluation should receive training to maintain the confidentiality of patient information.
Although medical databases can be maintained manually, electronic databases provide additional benefits by enabling a facility to 1) better track inmates for testing and case management, 2) access information regarding tests for TB infection, 3) share medical information regarding transferred inmates with other facilities, 4) link with the local health department, and 5) measure the effectiveness of TB-control efforts.
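As an illustration of the minimum data elements named above, the following sketch creates a small aggregate record table using SQLite from Python's standard library. The table and column names are hypothetical assumptions rather than a prescribed schema, and a production system would require persistent storage and access controls to keep health information confidential.

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tb_registry (
        inmate_id        TEXT PRIMARY KEY,
        current_location TEXT,  -- facility and housing unit
        screening_result TEXT,  -- e.g., TST/QFT-G result, chest radiograph findings
        treatment_status TEXT,  -- e.g., LTBI treatment, TB disease treatment, completed
        infectiousness   TEXT   -- e.g., AII isolation, noninfectious
    )
""")
conn.execute(
    "INSERT INTO tb_registry VALUES (?, ?, ?, ?, ?)",
    ("A12345", "Unit 3B", "TST positive (12 mm)", "LTBI treatment, month 2",
     "noninfectious"),
)

# Aggregate queries of this kind support annual risk assessment and program evaluation.
for row in conn.execute(
    "SELECT treatment_status, COUNT(*) FROM tb_registry GROUP BY treatment_status"
):
    print(row)
```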
# Analyzing Data and Drawing Conclusions
In a multifacility correctional system, evaluation data should be compiled for each facility separately and in aggregate. Data should be analyzed against standards, which can be defined externally (e.g., by national organizations or CDC-defined standards) or internally as established by the program collaborators (170). Once analyzed, conclusions should be drawn from the data and recommendations for program improvement developed. The evaluation and recommendations should be shared with program staff, administrators, and partners, including the local public health department.
# Using Information to Improve the TB Program
The final step in the evaluation process is to implement the recommendations to improve the TB program. Program staff should use data to identify and remove barriers to improving performance, and administrators should make necessary revisions to policies or procedures.
Because the evaluation process is cyclical, assessing whether recommendations have been implemented and whether outcomes are improved is crucial. Existing data can be used to clearly demonstrate the effects of implemented interventions.
# Collaboration and Responsibilities
The management of TB from the time an inmate is suspected of having the disease until treatment is complete presents multiple opportunities for collaboration between correctional facilities and the public health department. For example, public health agencies can partner with correctional facilities in TB screening and treatment activities. In a study of 20 urban jail systems and their respective public health departments, only 35% reported having collaborated effectively when conducting TB-prevention and -control activities (38). Formal organizational mechanisms (e.g., designated liaisons, regular meetings, health department TB program staff providing on-site services, and written agreements) are associated with more effective collaboration between correctional facilities and health departments (37).
Correctional facilities and health departments should each designate liaisons for TB-associated efforts. Liaisons should serve as a familiar and accessible communication link between collaborating entities. The duty of liaison at the correctional facility should be assigned to the person responsible for TB control or to another staff member familiar with TB control and patient management at the facility. Regular meetings between correctional facilities and health departments are important to establish communication and collaboration on TB-related issues (37,171). Jurisdictions with regularly scheduled meetings between jails and public health staff are 13 times more likely to report having highly effective collaboration than jurisdictions that have not established such meetings (37). For example, in Florida, the state TB-control program and corrections health officials hold quarterly coordination meetings on TB issues and regularly scheduled collaborative TB case-review conferences (171), activities that have encouraged communication between facilities and local health departments.
The presence of health department staff in correctional facilities can help promote more effective collaboration (37,171). Functions provided by such personnel within the correctional facility setting include screening, surveillance, education and training, contact investigation, and follow-up after release (171). For example, New York City Department of Health and Mental Hygiene personnel assigned to the Rikers Island jail interview inmates, monitor their care, suggest interventions or changes, and work with the jail to determine discharge planning needs for continuity of care in the community. Data access links are available on site that enable health department personnel to promptly inform corrections staff regarding previous completed therapy, incomplete work-up or therapy, sputum-smear results, culture and drug-susceptibility data, and ongoing treatment for TB cases and suspects. These on-site access links diminish the time spent in AII rooms and decrease the time required for patient work-up by providing confirmatory historical documentation.
Correctional facilities and health departments should work together to agree on and delineate their respective roles and responsibilities. Establishing clear roles and responsibilities helps avoid duplication, confusion, the potential for breaching patient confidentiality, excess expenditures, and missed opportunities.
Roles and responsibilities should be clearly defined for all TB-control activities that might require collaboration between correctional facilities and health departments, including
- screening and treatment of inmates for LTBI and TB disease,
- reporting of TB disease,
- follow-up of inmates with symptoms or abnormal chest radiographs,
- medical consultation regarding persons with confirmed and suspected TB disease,
- contact investigations for reported TB cases,
- continuity of treatment and discharge planning for persons with TB disease and LTBI,
- training and education of correctional facility staff,
- evaluation of screening and case management, and
- facility risk assessment.

Agreements about roles and responsibilities may be formal or informal, but they should be recorded in writing. Formal agreements include memoranda of understanding and written policies or plans. Informal agreements may be as simple as an e-mail summary of a verbal discussion or meeting. The format for recording and communicating agreements (e.g., checklists, flow charts, algorithms, and lists of steps) may vary depending on the need. Once agreements are made, they should be reassessed periodically (see Program Evaluation).
Correctional facilities and health departments should work together to formulate agreements that specify the information to be shared in a particular time frame, who will have access to specific information or databases, and how patient confidentiality will be protected. Information systems provide the framework for recording and accessing pertinent information (see Program Evaluation). Health departments should provide correctional facilities with pertinent TB surveillance information (e.g., local rates of drug resistance, the number of TB cases occurring in correctional facilities relative to the community, and the number of TB cases identified in the community among recently incarcerated persons), which can bolster support for TB-screening activities within these facilities.
Legislation or policy statements can effectively encourage or mandate collaboration on issues (e.g., disease reporting, contact investigation, and discharge planning) when institutional barriers (e.g., time and resources) inhibit collaboration. For example, California has improved discharge planning by prohibiting the release or transfer of inmates with confirmed or suspected TB unless a written treatment plan has been received and accepted by the local health officer (172). Arizona's state administrative code places responsibility for contact investigations of TB exposures in correctional facilities on the correctional facility but requires consultation with (and reporting to) the local health department. ICE also has developed a policy memorandum requesting that ICE field office directors grant a short-term hold on the deportation of patients with TB disease to allow time for the ICE health services program to facilitate continuity of care.
# Summary of Recommendations

# Screening
Early identification and successful treatment of persons with TB disease remains the most effective means of preventing disease transmission. Inmates who are likely to have infectious TB should be identified and begin treatment before they are released into the general population. Screening programs in the correctional setting also allow for the detection of substantial numbers of persons with LTBI who are at high risk for TB disease and would likely benefit from a course of treatment.
The type of screening recommended for a particular correctional facility is determined by an assessment of the risk for TB transmission within that facility. The risk assessment should be performed annually and should be conducted in collaboration with the local or state health department. A facility's TB risk level can be defined as minimal or nonminimal. A facility should be classified as having minimal TB risk on the basis of four criteria:
- No cases of infectious TB have occurred in the facility in the last year.
- The facility does not house substantial numbers of inmates with risk factors for TB (e.g., HIV infection and injection-drug use).
- The facility does not house substantial numbers of new immigrants (i.e., persons arriving in the United States within the previous 5 years) from areas of the world with high rates of TB.
- Employees of the facility are not otherwise at risk for TB.

Any facility that does not meet all of these criteria should be categorized as being a nonminimal TB risk facility (this conjunctive rule is sketched below).
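Because the classification is conjunctive, it can be summarized as simple decision logic. The following minimal Python sketch is illustrative only; the parameter names are hypothetical paraphrases of the four criteria above.

```python
def tb_risk_classification(no_infectious_cases_past_year: bool,
                           few_inmates_with_tb_risk_factors: bool,
                           few_recent_immigrants_from_high_rate_areas: bool,
                           employees_not_otherwise_at_risk: bool) -> str:
    """A facility is minimal TB risk only if all four criteria are met."""
    if all([no_infectious_cases_past_year,
            few_inmates_with_tb_risk_factors,
            few_recent_immigrants_from_high_rate_areas,
            employees_not_otherwise_at_risk]):
        return "minimal TB risk"
    return "nonminimal TB risk"

# A facility meeting three of the four criteria is still nonminimal risk.
print(tb_risk_classification(True, True, False, True))  # nonminimal TB risk
```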
Inmates in all minimal TB risk correctional and detention facilities require an evaluation at entry for symptoms of TB. Persons with symptoms of TB require an immediate evaluation to rule out the presence of infectious disease and must be kept in an AII room until they are evaluated. All newly arrived inmates should be evaluated for clinical conditions and other factors that increase the risk for TB disease. Persons who have any of these conditions require further screening with a TST, a QFT-G, or a chest radiograph within 7 days of arrival. Regardless of TST or QFT-G result, inmates known to have HIV infection or other severe immunosuppression, as well as inmates who are at risk for HIV infection but whose HIV status is unknown, should have a chest radiograph taken as part of the initial screening. Persons who have an abnormal chest radiograph should be evaluated further to rule out TB disease; if TB disease is excluded as a diagnosis, LTBI therapy should be considered if the TST or QFT-G is positive.
In nonminimal TB risk prisons, symptom screening assessment should be performed immediately for all new inmates. Any inmate who has symptoms suggestive of TB should be placed in an AII room and evaluated promptly for TB disease. Inmates who have no symptoms require further screening with a TST, a QFT-G, or a chest radiograph within 7 days of arrival. Regardless of their TST or QFT-G status, inmates known to have HIV infection or other severe immunosuppression, and inmates who are at risk for HIV infection but whose HIV status is unknown, should have a chest radiograph taken as part of the initial screening. Persons who have an abnormal chest radiograph should be evaluated further to rule out TB disease; if TB disease is excluded as a diagnosis, LTBI therapy should be considered if the TST or QFT-G result is positive.
Symptom screening should be performed immediately on entry for all new detainees in nonminimal TB risk jails. Any detainee who has symptoms suggestive of TB should be placed in an AII room and promptly evaluated for TB disease. Detainees who are without symptoms require further screening with a TST, a QFT-G, or a chest radiograph within 7 days of arrival. Regardless of TST or QFT-G result, detainees known to have HIV infection, and detainees who are at risk for HIV infection but whose HIV status is unknown, should have a chest radiograph taken as part of the initial screening. Persons who have a positive result should be further evaluated to rule out TB disease. Screening in jails with the TST or QFT-G for purposes of initiating LTBI therapy often is not practical because of the high rate of turnover and short lengths of stay.
A medical history relating to TB should be obtained from and recorded for all new employees at the time of hiring, and a physical examination for TB disease should be required. In addition, TST or QFT-G screening should be mandatory for all employees who do not have a documented positive result. Persons who have a positive TST or QFT-G result should have a chest radiograph taken and interpreted and should be required to have a thorough medical evaluation; if TB disease is excluded as a diagnosis, such persons should be considered for LTBI therapy. All employees should be informed and instructed to seek appropriate follow-up and screening for TB if they are immunosuppressed for any reason (e.g., HIV infection, organ transplant recipient receiving immunosuppressive therapy, and treatment with TNF-α antagonist). Any employee who has symptoms suggestive of TB should not return to the workplace until a clinician has excluded a diagnosis of contagious TB disease.
In general, long-term inmates and all employees who have a negative baseline TST or QFT-G result should have follow-up testing at least annually. Persons who have a history of a positive test result should be screened annually for symptoms of TB disease. Annual chest radiographs are unnecessary for the follow-up evaluation of infected persons. Test results should be recorded in medical records and in a retrievable aggregate database of all TST or QFT-G results.
# Case Reporting
Correctional facility medical staff must report any suspected or confirmed TB cases among inmates or employees to the appropriate health agency in accordance with state and local laws and regulations, even if the inmate or detainee has already been released or transferred from the facility. Reporting cases to health departments benefits the correctional facility by allowing it to obtain health department resources for case management and contact investigation in both the facility and the community. In addition, drug-susceptibility results should be used to inform optimal therapy and sent to the state or local health department for use in monitoring the rates of drug resistance. The drug-susceptibility reports also should be sent to all health departments managing contacts of the TB case because the choice of medication for LTBI treatment is based on drug-susceptibility test results of the source case. Reports to local or state health departments should identify the agency that has custodial responsibility for the inmate.
# Airborne Infection Isolation
TB airborne precautions should be initiated for any patient who 1) has signs or symptoms of TB disease or 2) has documented TB disease and has not completed treatment or not previously been determined to be non-infectious. For patients placed in an AII room because of suspected infectious TB disease of the lungs, airways, or larynx, airborne precautions can be discontinued when infectious TB disease is considered unlikely and either 1) another diagnosis is made that explains the clinical syndrome or 2) the patient has three negative AFB sputum-smear results. Incarcerated patients in whom the suspicion of TB disease remains after the collection of three negative AFB sputum-smear results should not be released from an AII room until they are on standard multidrug anti-TB treatment and are clinically improving. A patient who has drug-susceptible TB of the lung, airways, or larynx; who is on standard multidrug anti-TB treatment; and who has had a clinical and bacteriologic response to therapy is probably no longer infectious. However, because culture and drug-susceptibility results typically are not known when the decision to discontinue airborne precautions is made, all patients in whom the probability of TB disease is high should remain in an AII room while incarcerated until they have 1) had three consecutive negative AFB sputum smear results, 2) received standard multidrug anti-TB treatment, and 3) demonstrated clinical improvement.
# Environmental Controls
Environmental controls should be implemented when the risk for TB transmission persists despite efforts to screen and treat infected inmates. Environmental controls are used to remove, inactivate, or kill M. tuberculosis in areas in which the organism could be transmitted. Primary environmental controls consist of controlling the source of infection by using local exhaust ventilation (e.g., hoods, tents, or booths) and diluting and removing contaminated air using general ventilation. Secondary environmental controls consist of controlling the airflow to prevent contamination of air in areas adjacent to the source (AII rooms) and cleaning the air using HEPA filtration and/or UVGI. The efficiency of different primary or secondary environmental controls varies. A detailed discussion concerning the application of environmental controls has been published previously (71).
# Personal Respiratory Protection
Respiratory protection is used when administrative (i.e., identification and isolation of infectious TB patients) and environmental controls alone have not reduced the risk for infection with M. tuberculosis to an acceptable level. The use of respiratory protection might be most appropriate in specific settings and situations within correctional facilities; for example, protection is warranted for inmates and facility staff when they enter AII rooms, transport infectious inmates in an enclosed vehicle, and perform or participate in cough-inducing procedures. In correctional facilities, a CDC/NIOSH-approved N95 air-purifying respirator will provide adequate respiratory protection in the majority of situations that require the use of respirators.
All correctional facility staff members who use respirators for protection against infection with M. tuberculosis must participate in the facility's respiratory protection program (e.g., understand their responsibilities, receive training, receive medical clearance, and engage in fit testing). All facilities should develop, implement, and maintain a respiratory-protection program for health-care workers or other staff who use respiratory protection. (Respiratory-protection programs are required for facilities covered by OSHA.) In addition to staff members, visitors to inmates with TB disease should be given respirators to wear while in AII rooms and instructed how to ensure their own respiratory protection by checking their respirator for a proper seal. Each facility, regardless of TB risk classification (i.e., minimal or nonminimal), should develop a policy on the use of respirators by visitors of patients.
# Diagnosis and Treatment of LTBI and TB Disease
A diagnosis of TB disease should be considered for any patient who has a persistent cough (>3 weeks) or other signs or symptoms compatible with TB disease (e.g., bloody sputum, night sweats, weight loss, anorexia, and fever). Diagnostic tests for TB include the TST, QFT-G, chest radiography, and laboratory examination of sputum samples or other body tissues and fluids. Persons exposed to inmates with TB disease might become infected with LTBI, depending on host immunity and the degree and duration of exposure. Therefore, the treatment of persons with TB disease plays a key role in TB control by stopping transmission and preventing potentially infectious cases from developing. LTBI is an asymptomatic condition that can be diagnosed by the TST or QFT-G.
Regardless of age, correctional facility staff and inmates in the following high-risk groups should be given treatment for LTBI if their reaction to the TST is >5 mm:
- HIV-infected persons,
- recent contacts of a TB patient,
- persons with fibrotic changes on chest radiograph consistent with previous TB disease, and
- patients with organ transplants and other immunocompromising conditions who receive the equivalent of >15 mg/day of prednisone for >1 month.

All other correctional facility staff and inmates should be considered for treatment of LTBI if their TST result is >10 mm induration (these cut points are illustrated in the sketch below). The preferred treatment for LTBI is 9 months of daily isoniazid or biweekly dosing administered by DOT. Although LTBI treatment regimens are broadly applicable, modifications should be considered for certain populations (e.g., patients with HIV infection) and when drug resistance is suspected.
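The TST cut points above can be summarized as simple decision logic. The following minimal Python sketch is for illustration only; clinical decisions require a full medical evaluation, and TB disease must be excluded before LTBI treatment is considered.

```python
# High-risk groups for which a TST reaction of >5 mm indicates LTBI treatment.
HIGH_RISK_GROUPS = {
    "HIV-infected",
    "recent contact of a TB patient",
    "fibrotic changes on chest radiograph",
    "organ transplant or other immunocompromising condition",
}

def ltbi_treatment_indicated(induration_mm: float, risk_groups: set) -> bool:
    """Apply the >5 mm (high-risk) and >10 mm (all others) TST cut points."""
    if risk_groups & HIGH_RISK_GROUPS:
        return induration_mm > 5
    return induration_mm > 10

# A recent contact with 8 mm induration meets the >5 mm high-risk cut point ...
print(ltbi_treatment_indicated(8, {"recent contact of a TB patient"}))  # True
# ... but 8 mm falls below the >10 mm cut point applied to all others.
print(ltbi_treatment_indicated(8, set()))  # False
```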
Individualized case management should be provided for all patients with TB disease. In addition, patient management should be coordinated with officials of the local or state health department. Regimens for treating TB disease must contain multiple drugs to which the organisms are susceptible. For the majority of patients, the preferred regimen for treating TB disease consists of an initial 2-month phase of isoniazid, rifampin, pyrazinamide, and ethambutol, followed by a continuation phase of isoniazid and rifampin lasting >4 months, for a minimum total treatment period of 6 months. The decision to stop therapy should be based on the number of doses taken within a maximum period (not simply a 6-month period), as illustrated in the sketch below. Persons with cavitary pulmonary TB disease and positive cultures of sputum specimens at the completion of 2 months of therapy should receive a longer, 7-month continuation phase of therapy (total duration: 9 months) because of the substantially higher rate of relapse among persons with this type of TB disease.
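Because completion is defined by the number of doses taken within a maximum period rather than by elapsed time alone, a completion check can be expressed as a dose count within a window. The following minimal Python sketch is illustrative; the target dose count and maximum window are hypothetical placeholders, not values from these guidelines, and should be taken from the applicable treatment guidelines for the specific regimen.

```python
from datetime import date

def treatment_complete(dose_dates, target_doses, start, max_days):
    """Complete when the target number of doses has been taken within the
    maximum allowable period after the start of therapy."""
    in_window = [d for d in dose_dates if (d - start).days <= max_days]
    return len(in_window) >= target_doses

# Hypothetical example: 10 recorded DOT doses checked against a placeholder
# target of 10 doses within a placeholder 270-day window.
doses = [date(2006, 1, 1 + i) for i in range(10)]
print(treatment_complete(doses, target_doses=10, start=date(2006, 1, 1),
                         max_days=270))  # True
```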
Drug-susceptibility testing should be performed on all initial M. tuberculosis isolates from patients with TB disease. When results from drug-susceptibility tests become available, the treatment regimen should be adjusted accordingly. Medical providers treating patients with drug-resistant TB disease should seek expert consultation and collaborate with the local health department for treatment decisions.
TB treatment regimens might need to be altered for HIV-infected persons who are receiving antiretroviral therapy. Whenever possible, the care of persons with concomitant TB and HIV should be provided by or in consultation with experts in the management of both TB and HIV-related disease.
The primary determinant of treatment outcome is patient adherence to the drug regimen. Thus, careful attention should be paid to measures designed to enable and foster adherence. DOT is the preferred treatment strategy for all persons with TB disease and high-risk (e.g., HIV-infected) persons with LTBI. DOT should be used throughout the entire course of therapy whenever feasible. Practitioners providing treatment to inmates should coordinate DOT with the local health department on an inmate's release. The local health department also may be involved in monitoring therapy for correctional facility staff.
# Discharge Planning
Postrelease follow-up is a necessary component of TB control efforts. Effective discharge planning requires collaboration between corrections and medical staff (both intra- and interfacility), as well as with public health and community-based service organizations.
To ensure uninterrupted treatment, discharge planning for inmates in whom TB disease is diagnosed must begin as soon as possible after diagnosis. Corrections or health service administrators (or their designees) should assign staff to notify the public health department of inmates receiving treatment for TB disease or LTBI. Inmates with TB disease should be interviewed while still incarcerated (ideally by public health staff) to enable facility administrators to assess and plan for the appropriate support and referrals that will be needed after discharge.
All correctional facilities should assign personnel (preferably health-care professionals) to serve as case managers. These managers should be responsible for conducting discharge planning in the facility, which entails coordinating follow-up and communicating treatment histories with the public health department and other health-care counterparts within the community.
# Contact Investigation
The overall goal of a TB contact investigation is to interrupt transmission of M. tuberculosis. Ongoing transmission is prevented by 1) identifying, isolating, and treating other persons with TB disease (e.g., secondary patients) and 2) identifying infected contacts of the source and secondary patients and providing them with a complete course of treatment for LTBI.
Because decisions involved in planning and prioritizing contact investigations in correctional facilities are seldom simple, the process benefits from the input of a larger, multidisciplinary team when possible. The best preparation for contact investigations in correctional facilities is ongoing, formal collaboration between correctional and public health officials.
The decision to initiate a contact investigation for an inmate or detainee with possible TB is made on a case-by-case basis. In general, contact investigations should be conducted in the following circumstances: 1) suspected or confirmed pulmonary, laryngeal, or pleural TB and cavitary disease on chest radiograph or positive AFB smear results (sputum or other respiratory specimens) or 2) suspected or confirmed pulmonary (noncavitary) or pleural TB and negative AFB smear results (sputum or other respiratory specimens) and a decision has been made to initiate TB treatment. A more limited initial investigation may be conducted for smear-negative cases.
Contact investigation should be conducted in a stepwise fashion that includes
1) notifying correctional management officials;
2) conducting a chart review of the source patient;
3) interviewing the source patient;
4) defining the infectious period;
5) convening the contact investigation team;
6) updating correctional management officials about the strategy, process, and action steps involved in conducting the contact investigation;
7) obtaining the source case inmate traffic history (i.e., the dates and locations of the TB source patient's housing during the infectious period);
8) touring exposure sites;
9) prioritizing contacts according to duration and intensity of exposure and risk factors for becoming infected with TB and progressing to TB disease;
10) developing contact lists;
11) conducting a medical record review on each high-priority contact;
12) evaluating HIV-infected contacts promptly;
13) placing and reading initial TSTs or QFT-Gs on eligible contacts;
14) making referrals for contact evaluation (e.g., referrals to the local health department for contacts of inmates who have been released or transferred to another facility, family members, and frequent visitors of the source patient);
15) calculating the infection rate (see the formula after this list) and determining the need to expand the investigation;
16) placing and reading follow-up TSTs or QFT-Gs;
17) determining the infection/transmission rate from the second round of testing; and
18) writing a summary report.
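Step 15 refers to the infection rate among tested contacts. Restated as a formula (an illustrative restatement of the standard calculation):

$$\text{Infection rate (\%)} = \frac{\text{number of contacts with newly positive test results}}{\text{number of contacts tested}} \times 100$$

Comparing this rate between the first and second rounds of testing (steps 15 and 17) helps determine whether transmission occurred and whether the investigation should be expanded.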
# Training and Education
Although the level and detail of any employee's initial TB training and education session will vary according to staff members' job responsibilities, the following components should be included for all correctional workers, regardless of job function: 1) communication regarding the basic concepts of M. tuberculosis transmission, signs, symptoms, diagnosis (including the difference between LTBI and TB disease), and prevention; 2) provision of basic information regarding the importance of following up on inmates or correctional workers demonstrating signs or symptoms of TB disease; 3) explanation of the need for initiation of AII of inmates with suspected or confirmed TB disease; 4) review of the policies and indications for discontinuing AII precautions; 5) discussion of basic principles of treatment for TB disease and LTBI; and 6) discussion regarding TB disease in immunocompromised persons.
Correctional workers in facilities with a high risk of TB transmission should receive enhanced and more frequent training and education regarding 1) the signs and symptoms of TB disease, 2) transmission of TB disease, and 3) infection-control policies (including instruction on and location of written infection-control policies and procedures, the facility's exposure control plan, and the respiratory protection program).
State and local health department staff providing consultation or direct services to a correctional facility (including those who act as liaisons) should receive training and education regarding the unique aspects of health care and TB control in the correctional facility setting. Correctional facility administrators, contracted correctional facility health-care professionals, and health department staff should collaborate to develop an appropriate training program. Inmates should receive education from facility health-care professionals or other appropriately trained workers managing the screening or treatment process. Education and training should be appropriate in terms of the education level and language of the trainees.
# Program Evaluation
Program evaluation should be performed based on the CDC framework. Successful monitoring and evaluation of a TB-prevention and -control program includes identifying collaborators, describing the TB-control program, focusing the evaluation to assess TB risk and performance, collecting and organizing data, analyzing data and forming conclusions, and using the information to improve the TB program.
# Collaboration and Responsibilities
The management of TB from the time an inmate is suspected of having the disease until treatment is complete presents multiple opportunities for collaboration between correctional facilities and the public health department. Formal organizational mechanisms (e.g., designated liaisons, regular meetings, health department TB-program staff providing on-site services, and written agreements) have been demonstrated to be associated with more effective collaboration between correctional facilities and health departments.
Correctional facilities and health departments should each designate liaisons for TB-associated efforts. Liaisons should serve as a familiar and accessible communication link between collaborating entities. The duty of liaison at the correctional facility should be assigned to the person responsible for TB control or to another staff member familiar with TB control and patient management at the facility.
Correctional facilities and health departments should work together to agree on and delineate their respective roles and responsibilities. Establishing clear roles and responsibilities helps avoid duplication, confusion, the potential for breaching patient confidentiality, excess expenditures, and missed opportunities. Agreements about roles and responsibilities may be formal or informal, but they should be recorded in writing to avoid misunderstandings and to give the agreement longevity beyond personal relationships.
# Glossary

Acid-fast bacilli (AFB). A laboratory test that involves microscopic examination of a stained smear (typically of sputum) to determine if mycobacteria are present. A presumptive diagnosis of pulmonary tuberculosis (TB) can be made with a positive AFB sputum-smear result; however, approximately 50% of the patients with pulmonary TB disease have negative AFB sputum-smear results. The diagnosis of TB disease typically is not confirmed until Mycobacterium tuberculosis is identified in culture. A positive nucleic acid amplification (NAA) test is useful as a confirmatory test.
Aerosol. Dispersions of particles in a gaseous medium (e.g., air). Droplet nuclei are examples of particles that are expelled by a person with an infectious disease (e.g., by coughing, sneezing, or singing). For M. tuberculosis, the droplet nuclei are approximately 1-5 µm. Because of their small size, the droplet nuclei can remain suspended in the air for substantial periods and can transmit M. tuberculosis to other persons.
Adherence. Following instructions regarding a treatment regimen (adherence to treatment).
Administrative controls. Managerial measures that reduce the risk for exposure to persons who might have TB disease. Examples include coordinating efforts with the state or local health department, conducting a TB risk assessment of the setting, developing and instituting a written TB infection-control plan to ensure prompt detection, airborne infection isolation, and treatment of persons with suspected or confirmed TB disease, and screening and evaluating persons who are at risk for TB disease or who might be exposed to M. tuberculosis.
Air change rate. Ratio of the airflow in volume units per hour to the volume of the space under consideration in identical volume units, typically expressed in air changes per hour (ACH).
Air change rate (equivalent). Ratio of the volumetric air loss rate associated with an environmental control or combination of controls (e.g., an air cleaner or ultraviolet germicidal irradiation system) to the volume of the room in which the control has been applied. The equivalent air change rate is useful for describing the rate at which bioaerosols are removed by means other than ventilation.
Air change rate (mechanical). Ratio of the airflow to the space volume per unit time, typically expressed in ACH.
Air changes per hour (ACH). Air change rate expressed as the number of air exchange units per hour.
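For illustration, the mechanical air change rate can be written as the ratio of volumetric airflow to room volume. When airflow is measured in cubic feet per minute (cfm) and room volume in cubic feet, a factor of 60 converts minutes to hours:

$$\text{ACH} = \frac{Q \times 60}{V}$$

As a worked example with assumed values (not drawn from these guidelines), supplying Q = 200 cfm to a room of V = 1,200 ft³ yields ACH = (200 × 60)/1,200 = 10, which falls within the 6-12 ACH range described for AII rooms (see Airborne infection isolation room).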
Airborne infection isolation room (AII room). Formerly called a negative pressure isolation room, an AII room is a single-occupancy patient-care room used to isolate persons with suspected or confirmed infectious TB disease.
Environmental factors are controlled in AII rooms to minimize the transmission of infectious agents that typically are spread from person to person by droplet nuclei associated with coughing or aerosolization of contaminated fluids. An AII room should have all three of the following characteristics: 1) negative pressure, so that air flows under the door gap into the room; 2) an air flow rate of 6-12 ACH; and 3) direct exhaust of air from the room to the outside of the building or recirculation of air through a high efficiency particulate air (HEPA) filter.
Anergy. Condition in which a person has a diminished ability to react to antigens because of a medical condition or situation resulting in immunosuppression. Persons who have such immunosuppression are considered to be anergic. Traditionally, anergy is identified through a tuberculin skin test (TST). Anergy skin testing has poor predictive value and is not routinely recommended.
Apical. Relating to or located at the tip (an apex).

Asymptomatic. Neither causing nor exhibiting signs or symptoms of disease.
Bacille Calmette-Guérin (BCG). An attenuated strain of Mycobacterium bovis that is used in multiple countries worldwide as a TB vaccine, named after the French scientists Calmette and Guérin. BCG has limited efficacy in preventing disease and is rarely used in the United States. The vaccine is effective in preventing disseminated and meningeal TB disease in infants and young children and is appropriately used in multiple countries in which TB disease is endemic.
Boosting. A phenomenon in which some persons who receive a TST many years after acquiring latent TB infection (LTBI) have a negative result to an initial TST followed by a positive result to a subsequent TST. The second (i.e., positive) result is caused by a boosted immune response arising from the prior sensitization rather than by a new infection. Two-step testing is used to distinguish new infections from boosted reactions in TB infection-control screening programs that utilize the TST for detecting M. tuberculosis (see Two-step skin testing). Because the QuantiFERON®-TB Gold (QFT-G) test is an in vitro method, it is not complicated by boosting.
Bronchoscopy. Procedure for examining the respiratory tract that requires inserting an instrument (bronchoscope), either flexible or rigid, through the mouth or nose and into the respiratory tree. Bronchoscopy can be used to obtain diagnostic specimens and creates a risk for transmission for exposed health-care professionals when performed on a patient with pulmonary or laryngeal TB disease.
Cavity. The radiographic appearance of a hole in the lung parenchyma, typically not involving the pleural space, resulting from the destruction of pulmonary tissue by an interaction of M. tuberculosis infection and the host response seen in TB disease (or other pulmonary infections). TB patients with cavitary disease indicated by a chest radiograph typically are more infectious than TB patients without cavitary disease.
Chest x-ray. See Radiography.

Close contact. A person who has shared the same air space in a household or other enclosed environment for a prolonged period (i.e., days or weeks, not minutes or hours) with a person with suspected or confirmed TB disease.
Contact. A person who has shared the same air space with a person in whom infectious TB disease has been diagnosed. Although sputum-smear results, the grade of the sputum-smear result if positive, and sputum culture results all influence the likelihood of infectiousness, other factors (e.g., exposure time, environmental conditions, and site of disease) also contribute to infectiousness.
Contact investigation. The process of finding, notifying, screening, and treating persons who might have LTBI or TB disease as a result of recent contact with a person diagnosed with TB disease. This process is undertaken promptly after a TB source patient is identified.
Contagious. Refers to a disease that can be transmitted from one living being to another through direct contact (e.g., measles) or indirect contact (e.g., TB or cholera). The agent responsible for the contagious character of a disease is described as being infectious; the most common infectious agents are microorganisms (e.g., M. tuberculosis) or macroorganisms (e.g., parasitic worms).
Conversion. See TST conversion.

Conversion rate. The percentage of a population with a converted test result (TST or QFT-G) for M. tuberculosis within a specified time. This is calculated by dividing the number of conversions among eligible persons in the setting in a specified period (numerator) by the number of persons who received tests in the setting over the same period (denominator), multiplied by 100.
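Restated as a formula:

$$\text{Conversion rate (\%)} = \frac{\text{number of conversions among eligible persons during the period}}{\text{number of persons tested during the same period}} \times 100$$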
Culture. Microorganisms obtained from sputum or samples of other body fluids that are grown in the laboratory to detect and identify infection. This test typically takes 2-4 days when used to detect the majority of bacteria, although cultures for mycobacteria typically must grow for 2-4 weeks.
Directly observed therapy (DOT). Adherence-enhancing strategy in which a trained health-care professional or other specially trained person watches a patient swallow each dose of medication and records the dates that the medication was taken. DOT is the standard of care for all patients with TB disease and should be used for all doses during the course of treatment for TB disease and for LTBI whenever feasible. All patients on intermittent (i.e., once-or twice-weekly) treatment for TB disease or LTBI should receive DOT. Plans for DOT should be coordinated with the state or local health department. Rates of relapse and development of drug resistance are decreased when DOT is used.
Disposable respirator. A respirator designed to be used and then discarded; also known as a filtering-facepiece respirator. Respirators should be discarded because of excessive breathing resistance, physical damage, or hygiene considerations.
Droplet nuclei. Microscopic particles produced when a person coughs, sneezes, shouts, or sings. These particles can remain suspended in the air for prolonged periods of time and can be carried on normal air currents throughout the room and to adjacent spaces or areas receiving exhaust air.
Drug-susceptibility test. Laboratory test that determines whether the M. tuberculosis bacteria cultured from a patient's isolate are susceptible or resistant to various first-line or second-line anti-TB drugs.
Enabler. An item or service that helps to remove barriers for willing (but unable) patients to adhere to anti-TB therapy (e.g., transportation, bus tokens, stable housing, driver's license, and service programs).
Environmental control measures. Physical or mechanical measures (as opposed to administrative control measures) used to reduce the risk for transmission of M. tuberculosis. Examples include ventilation, filtration, ultraviolet lamps, airborne infection isolation rooms, and local exhaust ventilation devices.
Epidemiologic cluster. A series of cases that can be closely grouped by time or place.
Erythema. Abnormal redness of the skin. Erythema might develop around a TST site, but it should not be read as part of the TST result.
Exposure. The condition of being subjected to something (e.g., an infectious agent) that could have a harmful effect. A person exposed to M. tuberculosis does not necessarily become infected (See Transmission).
Exposure period. The period when the following two events overlap: 1) the time the contact shares the same air space with the TB source patient and 2) the infectious period of the source patient.
Extrapulmonary TB. Disease in any part of the body other than the lungs (e.g., the kidneys, spine, or lymph nodes). The presence of extrapulmonary disease does not exclude pulmonary TB disease.
False-positive TST or QFT-G result. A TST or QFT-G result that is interpreted as positive for a particular purpose (i.e., infection control surveillance or medical and diagnostic evaluation) in a person who is not actually infected with M. tuberculosis. For the TST, this is more likely to occur in persons who have received BCG vaccination or who are infected with nontuberculous mycobacteria (see Bacille Calmette-Guérin and Nontuberculous mycobacteria).
Hemoptysis. The expectoration or coughing up of blood or blood-tinged sputum. Hemoptysis is one of the symptoms of pulmonary TB disease, and it can also occur in other pulmonary conditions (e.g., lung cancer).
Immunosuppression and immunocompromising conditions. A condition in which the immune system is not functioning normally. "Immunocompromised" is the broader term; "immunosuppression" is now used when referring to states induced by medical treatment or procedures (i.e., iatrogenic), including therapy for another condition. Immunocompromised persons are at increased risk for rapidly progressing to TB disease after infection with M. tuberculosis. Immunocompromising conditions also make TB disease more difficult to diagnose, increasing the likelihood of a false-negative result for a test for M. tuberculosis (e.g., TST and QFT-G).
Incentive. An item or service that rewards desired behavior (e.g., adherence to anti-TB therapy). Examples of incentives include cookies, food, food vouchers, clothing vouchers, and stickers. Incentives motivate patients to take medication and keep clinic appointments, and they should be specifically tailored to each patient.
Induration. Area of firmness produced by an immune cell infiltration in response to an injected antigen. In tuberculin skin testing (TST) or anergy testing, the diameter of the indurated area is measured 48-72 hours after the injection in a direction perpendicular to the long axis of the forearm, and the result is recorded in mm. Induration, not erythema (abnormal redness of the skin), should be measured as the TST result.
Infectious. See Contagious. Infectious droplet nuclei. Droplet nuclei produced by an infectious TB patient that can carry tubercle bacilli and be inhaled by others. Although these nuclei typically are produced from patients with pulmonary TB through coughing, they can also be generated from aerosolizing procedures (e.g., during bronchoscopy, autopsy, or wound irrigation) performed at the site of infectious tissue.
Infectious period. The time during which a person with TB disease might have transmitted M. tuberculosis organisms to others. The infectious period typically is defined as 12 weeks before TB diagnosis or onset of cough (whichever is longer). If a patient has no TB symptoms, is AFB-smear negative, and has a non-cavitary chest radiograph, the presumed infectious period can be reduced to 4 weeks before the date of diagnosis of suspected TB. If the contact investigation indicates that TB transmission occurred throughout the identified infectious period, the time for contact investigation might need to be expanded beyond the basic 12 weeks.
Isolation. Separation of a person or group of persons from others to prevent the spread of droplet nuclei. In this report, the term "airborne infection isolation" is used interchangeably with "isolation."
Isoniazid (INH). A drug used to prevent TB disease in persons who have latent TB infection (LTBI). INH is also one of the four drugs often used to treat TB disease.
Latent TB infection (LTBI). Infection with M. tuberculosis in which the bacilli are alive but inactive in the body. Persons who have LTBI but who do not have TB disease are asymptomatic, do not feel sick, and cannot spread TB to other persons. They typically have a positive TST or QFT-G result. Approximately 5%-10% of infected persons will develop TB disease at some point in their lives, but the risk is considerably higher for persons who are immunocompromised, especially persons infected with HIV. Persons with LTBI can be given treatment to prevent the infection from progressing to disease.
Mantoux method. The recommended TST method, performed by injecting 0.1 ml containing 5 tuberculin units (TU) of purified protein derivative (PPD) intradermally into the volar or dorsal surface of the forearm. The injection is made using a 1/4-1/2-inch, 27-gauge needle and a tuberculin (preferably a safety-type) syringe.
Medical evaluation. An examination conducted for the purpose of diagnosing TB disease or LTBI, selecting treatment, and assessing response to therapy. A medical evaluation might include the following components:
- medical history and TB symptom screening,
- clinical or physical examination,
- screening and diagnostic tests (e.g., TSTs, chest radiographs, bacteriologic examination, and HIV testing),
- counseling, and
- treatment referrals.

Multidrug-resistant tuberculosis (MDR TB). TB disease caused by M. tuberculosis organisms that are resistant to at least isoniazid and rifampin.
Mycobacterium tuberculosis. The bacterium that causes LTBI and TB disease.
Mycobacterium tuberculosis culture. A laboratory test to determine the presence of M. tuberculosis. In the absence of cross-contamination, a positive culture confirms the diagnosis of TB disease.
N95 disposable respirator. Air-purifying, filtering facepiece respirators certified by the National Institute for Occupational Safety and Health with filters >95% efficient at removing 0.3 micron particles; these respirators are not resistant to oil (see Respirator).
Negative pressure. The difference in air pressure between two areas in a health-care setting. A room that is under negative pressure has a lower pressure than adjacent areas, which keeps air from flowing out of the room and into adjacent rooms or areas.
Nontuberculous mycobacteria (NTM). Refers to mycobacterium species other than those included as part of M. tuberculosis complex. Although valid from a laboratory perspective, the term can be misleading because certain types of NTM cause disease with pathological and clinical manifestations similar to TB disease. Another term used interchangeably with NTM is "mycobacteria other than tuberculosis" (MOTT).
Nucleic acid amplification (NAA) test. Laboratory test used to target and amplify a single DNA or RNA sequence for identification. This technique is highly sensitive and specific for identification of M. tuberculosis, and results from these tests typically are available within 1-3 days.
Outbreak (TB). The result when transmission of M. tuberculosis continues to occur (i.e., potentially ongoing or newly recognized transmission).
Periodic fit testing. Repetition of fit testing performed in accordance with federal, state, and local regulations. Additional fit testing should be used when 1) a new model of respirator is used, 2) a physical characteristic of the user changes, or 3) the user or respiratory-protection program administrator is uncertain that the staff member is obtaining an adequate fit.
Pulmonary TB. TB disease that occurs in the lung parenchyma. The majority of TB disease is pulmonary.
Purified protein derivative (PPD) tuberculin. A material used in diagnostic tests for infection with M. tuberculosis. PPD is a purified tuberculin preparation that was developed in the 1930s and derived from old tuberculin. In the United States, it is administered as part of a TST that is given as an intradermal injection of 0.1 ml containing 5 TU (Mantoux method) and read 48-72 hours later. It also was used in the older version of QFT-G (see Tuberculin skin test).

QuantiFERON®-TB Gold test (QFT-G). An in vitro cytokine assay that assesses the cell-mediated immune response to specific antigens of M. tuberculosis (ESAT-6 and CFP-10) in whole blood used to determine M. tuberculosis infection. Unlike the TST, the QFT-G requires only a single visit. The QFT-G is more specific than the TST and is less affected by previous BCG vaccination and infection with nontuberculous mycobacteria (NTM).
QFT-G converter. A change from a negative to a positive QFT-G result.
Radiography. Method of viewing internal body structures by using radiation to project an image onto a film, computer screen, or paper. A chest radiograph is taken to view the respiratory system of a person who is being evaluated for pulmonary TB disease. Abnormalities (e.g., infiltrates or cavities in the lungs and enlarged lymph nodes) seen on a chest radiograph can indicate the presence of TB disease.
Recirculation. Ventilation in which all or most of the air exhausted from an area is returned to the same area or other areas of the setting.
Reinfection. A second infection that follows recovery from a previous infection by the same causative agent. Often used when referring to an episode of TB disease resulting from a subsequent infection with M. tuberculosis.
Resistance. The ability of certain strains of mycobacteria, including M. tuberculosis, to grow and multiply in the presence of drugs that ordinarily kill them. Such strains are referred to as drug-resistant strains and cause drug-resistant TB disease (see Multidrug-resistant tuberculosis).
Respirator. A device worn to prevent the wearer from inhaling airborne contaminants.
Respiratory protection. The third level in the hierarchy of TB infection-control measures (after administrative and environmental controls).
Risk factor. Any condition or circumstance (i.e., a causal agent) that is associated (without confounding or bias) with an increase in the frequency of disease.
Screening. Measures used to identify persons who have TB disease or LTBI (see Symptom screen).
Secondary cases. Cases of TB disease caused by transmission from the source patient.
Smear (AFB smear). Laboratory technique for visualizing mycobacteria. The specimen (direct or concentrated) is spread onto a laboratory slide, stained, and examined using a microscope. Smear results typically are available within 24 hours of specimen collection. The concentration of organisms per unit area of slide (the smear grade) correlates with the degree of infectiousness. However, a positive AFB smear result is not diagnostic of TB disease because organisms other than M. tuberculosis (e.g., nontuberculous mycobacteria) might be seen on an AFB smear (see Nontuberculous mycobacteria and Acid-fast bacilli).
Source control. Manipulation of a process preventing an emission (e.g., aerosolized M. tuberculosis) at the place of origin. Examples of source control methods include booths in which a patient coughs and produces sputum, biological safety cabinets in laboratories, and local exhaust ventilation.
Source patient (TB). The patient who was the original source of infection for secondary cases or contacts.
Specimen. Any body fluid, secretion, or tissue sent to a laboratory in which diagnostic tests, smears, and cultures for M. tuberculosis are performed.
Sputum. Mucus secretions coughed up from deep within the lungs, to be distinguished from saliva and nasal secretions. If a patient has pulmonary disease, examination of the sputum by smear and culture can be helpful in identifying the organism responsible for certain infectious diseases (e.g., TB).
Sputum induction. Method used to obtain sputum from a patient who is unable to cough up a specimen spontaneously. The patient inhales a saline mist, which stimulates coughing from deep within the lungs.
Susceptibility. See Drug-susceptibility test.

Suspect TB patient. A person in whom a diagnosis of TB disease is being considered, regardless of whether anti-TB therapy has been started. Persons should not remain in this category for >3 months. A patient might be classified as a suspect TB patient if one or more of the following criteria are satisfied:
- coughing for >3 weeks and one or more additional signs or symptoms of TB disease (e.g., loss of appetite, unexplained weight loss, night sweats, bloody sputum or hemoptysis, hoarseness, fever, fatigue, and chest pain),
- a positive TST result and signs or symptoms of infection in the lung, pleura, or airways,
- a positive AFB sputum smear result, or
- pending results from sputum culture or NAA test for M. tuberculosis.

Symptomatic. Exhibiting signs or symptoms of a particular disease or disorder. Symptoms of pulmonary TB disease (or infection in the lung, pleura, or airways) include coughing for >3 weeks, loss of appetite, unexplained weight loss, night sweats, bloody sputum or hemoptysis, hoarseness, fever, fatigue, or chest pain.
Symptom screen. A procedure used during a clinical evaluation in which the patient is asked whether they have experienced any signs or symptoms of TB disease.

TB case. A particular episode of clinical TB disease. This term refers only to the disease, not to the person with the disease. By law, TB cases and suspect TB cases must be reported to the state or local health department.

TB contact. A person who has shared the same air space with a person who has TB disease for a sufficient amount of time to allow possible transmission of M. tuberculosis.
TB disease. TB disease is caused by Mycobacterium tuberculosis. The bacteria can attack any part of the body, but they typically attack the lungs. TB disease is diagnosed by isolation of M. tuberculosis in a culture. TB disease of the lungs or larynx can be transmitted when a person with the disease coughs, sings, laughs, speaks, or breathes. A person with TB disease might be infectious.
TB infection. TB infection is the term used for persons with positive TST or QFT-G results, negative bacteriologic studies (if conducted), and no clinical, bacteriologic, or radiographic evidence of TB disease. A better term is infection with M. tuberculosis. In the majority of persons who inhale TB bacteria and become infected, the body is able to fight the bacteria and stop them from growing. The bacteria become inactive, but they remain alive in the body and can become active later. TB infection is not contagious; patients with TB infection cannot spread TB to other persons.
TB infection control program. Early detection, isolation, and treatment of persons with infectious TB through a hierarchy of control measures, including 1) administrative controls to reduce the risk for exposure to persons with infectious TB disease; 2) environmental controls to prevent the spread and reduce the concentration of infectious droplet nuclei in the air; and 3) respiratory protection in areas where the risk for exposure to M. tuberculosis is high (e.g., in airborne infection isolation rooms). A TB infection-control plan should include surveillance of inmates and correctional staff.
TB screening. Screening conducted for administrative infection-control purposes. Initial TB screening can be conducted through use of TSTs, QFT-Gs, and symptom screening, and follow-up tests can be conducted using various other testing methods (e.g., chest radiograph or sputum examination for AFB and culture) (see Symptom screen).
TB risk assessment. An initial and ongoing evaluation of the risk for transmission of M. tuberculosis in a particular correctional facility. A risk assessment considers certain factors, including the number of inmates with TB housed during the preceding year and the number of inmates housed who come from groups that are at risk for TB (e.g., HIV-infected persons and recent immigrants from high-incidence countries). The TB risk assessment determines the types of screening and infection-control measures indicated for the correctional facility.
TB skin test. See Tuberculin skin test.
Transmission. Spread of an infectious agent from one person to another. The likelihood of transmission of M. tuberculosis is directly related to the duration and intensity of exposure (see Exposure).
Treatment for LTBI. Treatment for persons with LTBI that prevents development of TB disease.
TST. See Tuberculin skin test.

TST conversion. In programs using the TST method of screening, a change from a negative test result to a positive test result. The size of the change in mm of induration needed to be considered a conversion varies based on the baseline testing results and whether the inmate or employee has a known exposure to a TB patient. In programs using QFT-G, a change from a negative to a positive result is considered a QFT-G conversion. A conversion (TST or QFT-G) typically is interpreted as presumptive evidence of new M. tuberculosis infection, which poses an increased risk for progression to TB disease.
TST conversion rate. The percentage of a population in which TST results converted within a specified time. This rate is calculated by dividing the number of TST conversions among persons in the setting in a specified period (numerator) by the number of persons who received TSTs in the setting over the same period (denominator), multiplied by 100.
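To make the arithmetic concrete, the following minimal Python sketch computes the rate exactly as defined above; the function name and the figures in the example are illustrative assumptions, not values taken from this report.

```python
def tst_conversion_rate(conversions, persons_tested):
    """TST conversion rate: (conversions / persons tested) x 100,
    per the definition above."""
    if persons_tested <= 0:
        raise ValueError("persons_tested must be positive")
    return 100.0 * conversions / persons_tested

# Hypothetical example: 6 conversions among 480 staff tested in the same period.
print(f"{tst_conversion_rate(6, 480):.2f}%")  # -> 1.25%
```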
Tuberculin. A sterile liquid containing proteins extracted from cultures of tubercle bacilli and used in tests for tuberculosis.
Tuberculin skin test (TST). Method used to assess the likelihood that a person is infected with M. tuberculosis. A small dose of PPD tuberculin is injected just beneath the surface of the skin by the Mantoux method, and the area is examined 48-72 hours after the injection. The indurated margins should be read transverse (i.e., perpendicular) to the long axis of the forearm.
Tuberculosis (TB). Clinically active disease caused by an organism in the M. tuberculosis complex (typically M. tuberculosis, but also including M. bovis, M. africanum, and others).
Two-step skin testing. Procedure used for the baseline skin testing of persons who will routinely receive TSTs to reduce the likelihood of mistaking a boosted reaction for a new infection. If an initial TST result is classified as negative, the second step of a two-step TST should be administered 1-3 weeks after the first TST was administered. If the second TST result is positive, it likely represents a boosted reaction, indicating that infection most likely occurred in the past. If the second TST result is also negative, the person is classified as not infected.
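As a plain-logic illustration of the two-step protocol described above, the sketch below encodes the classification in Python. It assumes each test has already been read as positive or negative; the function name and return strings are hypothetical, and interpretation of mm cut points is deliberately omitted.

```python
from typing import Optional

def classify_two_step_tst(first_result_positive: bool,
                          second_result_positive: Optional[bool] = None) -> str:
    """Classify a baseline two-step TST per the procedure described above."""
    if first_result_positive:
        return "baseline positive; evaluate further"
    if second_result_positive is None:
        return "administer second TST 1-3 weeks after the first"
    if second_result_positive:
        return "boosted reaction; infection most likely occurred in the past"
    return "classified as not infected"

print(classify_two_step_tst(False, True))  # -> boosted reaction; ...
```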
Ultraviolet germicidal irradiation (UVGI). Use of ultraviolet radiation to kill or inactivate microorganisms.
# CDC Advisory Council for the Elimination of Tuberculosis (ACET) Ad Hoc Working Group Membership List
# INSTRUCTIONS
You must complete and return the response form electronically or by mail by July 7, 2009, to receive continuing education credit. If you answer all of the questions, you will receive an award letter for 2.1 hours Continuing Medical Education (CME) credit; 0.2 Continuing Education Units (CEUs); 2.5 contact hours Continuing Nursing Education (CNE) credit; or 2.1 contact hours Certified Health Education Specialist (CHES) credit. If you return the form electronically, you will receive educational credit immediately. If you mail the form, you will receive educational credit in approximately 30 days. No fees are charged for participating in this continuing education activity.

# EXPIRATION - July 7, 2009
# ACCREDITATION

Continuing Medical Education (CME). CDC is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 2.1 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.

Continuing Education Unit (CEU). CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training. CDC will award 0.2 continuing education units to participants who successfully complete this activity.

Continuing Nursing Education (CNE). This activity for 2.5 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.

Certified Health Education Specialist (CHES). CDC is a designated provider of continuing education contact hours in health education by the National Commission for Health Education Credentialing, Inc. This program is a designated event for CHESs to receive 2.1 category I contact hours in health education. The CDC provider number is GA0082.
# Goal and Objectives
This report provides information regarding measures recommended to prevent and control tuberculosis (TB) in correctional and detention settings. The goal of this report is to assist public health departments, correctional medical directors and administrators, private correctional health vendors, federal and state agencies, professional organizations, health care professionals, and policymakers in reaching informed decisions regarding the prevention and control of TB in correctional and detention facilities. Upon completion of this educational activity, the reader should be able to 1) describe recommended TB screening policies for correctional and detention facilities on the basis of risk assessment; 2) describe the controls used to prevent transmission of Mycobacterium tuberculosis in correctional and detention facilities; 3) list the components of comprehensive discharge planning for inmates and detainees who have TB disease or latent tuberculosis infection (LTBI) and are released into the community; 4) describe the principles of diagnosing illness and treating patients with TB disease and LTBI in correctional and detention facilities; and 5) describe the steps involved in conducting a contact investigation in a correctional or detention facility for a patient with infectious TB.
To receive continuing education credit, please answer all of the following questions.

C. has been demonstrated to be 100% effective in preventing TB transmission even during high-risk procedures (e.g., bronchoscopy).
D. will provide adequate respiratory protection in the majority of situations that require the use of respirators.
# A tuberculosis skin test (TST)…
A. is considered positive only if the result is >10 mm induration.
B. should never be administered to pregnant women.
C. might be negative in patients with TB disease.
D. may be read by the patient in the majority of circumstances.
# Treatment of TB disease…
A. for the majority of patients initially consists of isoniazid, rifampin, ethambutol, and pyrazinamide while awaiting drug-susceptibility test results.
B. for the majority of patients consists of isoniazid, rifampin, ethambutol, and pyrazinamide for the entire course of therapy.
C. may be given only twice a week for all patients, including those infected with human immunodeficiency virus (regardless of CD4 T-lymphocyte count), after 2 weeks of daily treatment have been completed.
D. should never be extended beyond 6 months if the patient's M. tuberculosis isolate is susceptible to all first-line medications.
# For correctional and detention facilities, comprehensive discharge planning should include…
A. collaborating with public health systems and other community healthcare professionals.
B. ensuring continuity of case-management.
C. evaluating discharge-planning procedures and modifying procedures as needed to improve outcomes.
D. all of the above.
# In correctional and detention facilities, contact investigations should be conducted for…
A. patients with TB meningitis.
B. patients with suspected or confirmed pulmonary, laryngeal, or pleural TB with cavitary disease on chest radiograph or positive sputum acid-fast bacilli smears.
C. patients with tuberculin skin test results >5 mm induration.
D. all employees with a cough.
"id": "66ff305d989e0fb4c3de56615ad6ae6a2bfd30fe",
"source": "cdc",
"title": "None",
"url": "None"
} |
CDC, our planners, and our content experts disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use. CDC does not accept commercial support.

# Introduction
In 1990, CDC published guidelines for investigating clusters of health events (the 1990 Guidelines) (1). The 1990 Guidelines did not focus on any specific disease and considered noninfectious diseases, injuries, birth defects, and previously unrecognized syndromes or illnesses. Many state, local, and tribal health departments have used the 1990 Guidelines as a basis for developing and implementing protocols to investigate suspected cancer clusters, employing the four-step approach (initial response, assessment, major feasibility study, and etiologic investigation) identified in the 1990 Guidelines. Since the 1990 Guidelines were published, continued attention has been paid to suspected cancer clusters nationwide, leading CDC to publish additional details on the role of the guidelines in responding specifically to cancer clusters (2). Since 1990, many improvements have been made in the areas of data resources, investigative techniques, and analytic/statistical methods, and much has been learned from both large- and small-scale cancer cluster investigations.
This report augments the 1990 Guidelines by focusing specifically on cancer cluster investigations. The guidance provided in this report addresses additional subject areas that are deemed important by epidemiologists from state and local health departments (3). The additional subject areas include communications and resources for data and use of epidemiologic and spatial statistical methods. Useful websites, a resource not available in 1990, were added. The four-step process was retained, and more details were added. Public health personnel in state and local health departments can use these guidelines to develop a systematic approach when responding to inquiries about suspected cancer clusters in residential or community settings. In addition, these guidelines might be helpful to a wider community of responders and epidemiologists who are concerned with such inquiries. These types of inquiries often are requested by community members or medical professionals concerned about what appears to be an unusually high number of diagnosed cases of cancer in a particular community, workplace, family, or school. Upon receiving an initial inquiry, health department personnel should respond rapidly to the caller's concerns, gather relevant information about the cancer cases, make a professional judgment on the likelihood that the reported situation could be an actual increase in cancer cases over those expected in a particular population, and determine whether further investigation is warranted. If appropriate, health department personnel then will need to provide resources for investigation of the suspected cluster, working with and involving members of the community as much as possible throughout the process.
# Methods
In March 2010, the Council of State and Territorial Epidemiologists (CSTE) and CDC convened a workgroup (the authors of this report) to revise the 1990 Guidelines. The group comprised public health professionals selected by the leadership of CSTE's Environmental Epidemiology Subcommittee and by CDC's National Center for Environmental Health's (NCEH) Division of Environmental Hazards and Health Effects (EHHE). CSTE and CDC selected workgroup members with experience in responding to cancer cluster inquiries from communities and in managing cancer cluster investigations. Representatives included epidemiologists from state health departments who were selected in order to have input from states that represent a range of approaches to and capacities for cancer cluster investigations. In addition, CDC workgroup members included representatives from CDC organizations typically called upon to assist in cancer cluster investigations: NCEH/EHHE, the Agency for Toxic Substances and Disease Registry (ATSDR), and the National Center for Chronic Disease Prevention and Health Promotion's Division of Cancer Prevention and Control. CDC risk communications and statistical specialists, as well as epidemiologists at academic institutions experienced in cancer cluster investigations, participated in the workgroup.
The intent of the workgroup was to ensure a practical approach to the assessment, analysis, and investigation of, and response to, cancer cluster concerns. Through regularly scheduled conference calls and meetings from March 2010 to May 2011, the workgroup identified areas that warranted change from the 1990 Guidelines and sources of new information to incorporate in the revision of the guidelines. For these topics, the medical librarians at the CDC Public Health Library and Information Center conducted a comprehensive review of the published, peer-reviewed literature. To identify articles related to community cancer clusters, librarians conducted a structured literature search using multiple databases including PubMed (National Library of Medicine, National Institutes of Health, Bethesda, Maryland, available at nlm.nih.gov/PubMed), MEDLINE (available at nlm.nih.gov/bsd/pmresources.html), and CAB. English-language peer-reviewed articles published between 1969 and 2010 were searched by using the following medical subject heading (MeSH) terms: "cluster analysis," "cancer cluster," "neoplasm," "environmental illness," and "not occupational diseases." Through this process, 166 articles were identified. In addition, members of the workgroup recommended 26 publications, including publications on communications and statistical analysis as well as nonscientific publications related to cancer clusters, and three unpublished cancer cluster investigation reports that were relevant to topics addressed in the guidelines. All articles and reports were reviewed by the workgroup members. Regarding topics on which no new published evidence was available, expert opinion was sought from workgroup members. In October 2010, an in-person meeting of the workgroup was held to begin writing these guidelines.
In addition to convening a technical workgroup, CSTE sent a survey to all state and territorial epidemiologists to assess the needs of public health professionals when responding to cancer cluster concerns in order to direct the focus and content of the guidelines (3). The survey included questions about the most common activities in which states engage when addressing a cancer cluster inquiry and what type of information would be useful. This survey identified areas (e.g., communications, resources for data, and epidemiologic methods) in which more details would be useful. After discussion, review, and incorporation of the findings from the survey, the workgroup decided to retain and update the four-step approach first described in the 1990 Guidelines. Updates included incorporating new technological advances (e.g., use of the Internet and websites) for information on relevant data resources, statistical tests, and mapping techniques as well as lessons learned from recent cancer cluster investigations. One important update is the emphasis on the importance of developing a robust working relationship with the community as soon as possible, including clear two-way communication and transparency in all aspects of the response process, while maintaining scientific rigor.
The revised guidelines address questions about the availability of data, limitations associated with understanding cancer clusters, and decision-making about the extent to which inquiries can be followed up. For specificity, the revised guidelines are limited in scope to include only those situations in which a health department responds to an inquiry about a suspected cancer cluster in a residential or community setting. These guidelines do not address workplace cancer clusters or those related to medical treatment (e.g., cancers associated with pharmaceuticals). Workplace or occupational clusters and medically related clusters each present unique sets of circumstances, have unique and clearly defined populations at risk, and generally call for specific investigative methods, agencies, and partnerships (4,5). Similarly, these guidelines do not discuss diseases other than cancer that persons might suspect have occurred in clusters in their communities. However, some of the principles of risk communication, data analysis, and community involvement discussed in this report might be applicable to noncancer cluster investigations as well. Finally, the revised guidelines do not address routine surveillance conducted by cancer registries and programs to assess trends.
This report is divided into two sections and three appendices:
- The first section explains cancer cluster definitions, characteristics, and lessons learned from recent investigations;
- The second section outlines a systematic, four-step process for evaluating potential cancer clusters;
- Appendix A provides an overview of sources of data and other resources useful for cancer cluster investigations;
- Appendix B describes considerations for developing effective communication strategies; and
- Appendix C highlights some useful statistical and epidemiologic approaches for investigating suspected cancer clusters.
# Cancer Cluster Definitions, Characteristics, and Recent Investigations
# Definition of a Cluster
CDC defines a cancer cluster as a greater than expected number of cancer cases that occurs within a group of people in a geographic area over a defined period of time (6). This definition can be broken down as follows:
- a greater than expected number: Whether the number of observed cases is greater than one typically would observe in a similar setting (e.g., in a cohort of a similar population size and with similar demographic characteristics) depends on a comparison with the incidence of cancer cases seen normally in the population at issue or in a similar community.
- of cancer cases: The cancer cases are all of the same type. In rare situations, multiple cancer types may be considered when a known exposure (e.g., radiation or a specific chemical) is linked to more than one cancer type or when more than one contaminant or exposure type has been identified.
- that occurs within a group of people: The population in which the cancer cases are occurring is defined by its demographic factors (e.g., race/ethnicity, age, and sex).
- in a geographic area: The geographic boundaries drawn for inclusion of cancer cases and for calculating the expected rate of cancer diagnoses from available data are defined carefully. It is possible to "create" or "obscure" a cluster inadvertently by selection of a specific area.
- over a period of time: The time period chosen for analysis will affect both the total cases observed and the calculation of the expected incidence of cancer in the population.

When a health agency is investigating a suspected cancer cluster, it can use these parameters to help determine whether the reported cancer cases represent an increase in the ratio of observed to expected cases. The health agency also can use the parameters to identify characteristics that indicate whether cases might be related to each other and to determine whether the cases warrant further investigation. In the sections that follow, guidelines are provided to outline how to make this determination, including the appropriate information to collect, the necessary deliberations, the factors to take into account, and the analyses to perform.
# Characteristics of Cancer and Clusters
The National Cancer Institute (NCI) of the National Institutes of Health (NIH) defines cancer as a term for a group of diseases in which abnormal cells divide without control and can invade nearby tissues (7). As a group, cancers are very common. Cancers are the second leading cause of death in the United States, exceeded only by diseases of the heart and circulatory system (8). One of every four deaths in the United States is attributable to some form of cancer. In 2009, approximately 1.47 million persons in the United States received a cancer diagnosis, and approximately 568,000 persons died from cancer (9).
Because cancer is common, cases might appear to occur with alarming frequency within a community even when the number of cases is within the expected rate for the population. As the U.S. population ages, and as cancer survival rates continue to improve, in any given community, many residents will have had some type of cancer, thus adding to the perception of an excess of cancer cases in a community. Multiple factors affect the likelihood of developing cancer, including age, genetic factors, and such lifestyle behaviors as diet and smoking. Also, a statistically significant excess of cancer cases can occur within a given population without a discernible cause and might be a chance occurrence (10,11).
Three considerations are important for suspected cancer cluster investigations. First, types of cancers vary in etiologies, predisposing factors, target organs, and rates of occurrence. Second, cancers often are caused by a combination of factors that interact in ways that are not fully understood. Finally, for the majority of cancers, the long latency period (i.e., the time between exposure to a causal agent and the first appearance of symptoms and signs) complicates any attempt to associate cancers occurring at a given time in a community with local environmental contamination. Often decades intervene between the exposures that initiate and promote the cancer process and the development of clinically detectable disease (12).
Communicating effectively about the frequency and nature of cancer in explaining suspected cancer clusters can be difficult for public health agencies, and many of the scientific concepts involved (e.g., random fluctuation, statistical significance and latency period) might not be easy to explain to the community (13). Any number of community members, friends, or relatives with cancer is alarming and is too many from a personal perspective (11). When persons are affected personally by a case of cancer, they naturally seek an explanation of the cause of the cancer (13).
# Cancer Cluster Investigations
As the 1990 Guidelines noted, finding a causal association between environmental contaminants and cancer is rare in a community cancer cluster setting (1). Evidence reported by state and local health agencies and federal agencies since 1990 that would suggest otherwise is limited, and most investigations of suspected cancer clusters do not lead to the identification of an associated environmental contaminant (10).
State and local health agencies receive approximately 1,000 inquiries per year regarding suspected cancer clusters (14). The majority of these inquiries can be resolved during the initial response, which consists of the initial contact and follow-up contact with the caller, if needed. The resulting health education can be an important public service (14). Even if inquiries concern events that meet the statistical criteria for a cancer cluster, investigations of suspected cancer clusters are unlikely to find an associated environmental contaminant (1,11). For example, one of the largest suspected cancer clusters investigated by CDC's NCEH and by other agencies concerned cases of childhood leukemia in Fallon, Nevada. Although initial analysis demonstrated a statistically significant (p<0.05) increase in the number of cases, subsequent epidemiologic investigations did not identify a statistically significant association with environmental contaminants (15).
Suspected cancer clusters that consist of cases of one type of cancer, a rare type of cancer, or a type not usually identified in a certain demographic group are thought to be more likely to have a common cause (10). Even if these factors are present, the suspected cluster might not be associated with an environmental exposure and in fact might be a chance occurrence. A type of cancer under investigation might not be associated biologically with any environmental contaminants of concern in the community. In other words, a suspected environmental contaminant might not be in the causal pathway for a certain type of cancer. One common but false assumption held by persons not familiar with the scientific study of cancer is that a single environmental contaminant is likely to cause any or all kinds of cancer. Toxicologic and epidemiologic studies do not support this assumption. Cancer is not one disease, but rather many different diseases with different causal mechanisms (16).
In addition, two statistical issues influence the ability of the health agency to determine an association between the cancer(s) in question and environmental exposures. First, a suspected cancer cluster investigation with a small number of cases (e.g., one that involves a rare cancer type comprising only a few cases) might result in a lack of statistical power to detect an association. Second, because of the substantial number of cancer patients who might live in a community, a spurious association with an environmental contaminant can occur by chance alone, without the contaminant being a causal factor (17).
The health agency should avoid imprecise and post hoc definitions of such concepts as case, population, geographic area, or exposure period because such definitions might bias or limit an investigation. For example, case definitions that include different cancers generally are not useful, unless the environmental contaminant under consideration has been associated with multiple cancer types.
Latency and change of residence add to the complexity of these investigations. Because of the long latency period associated with cancers, behaviors and exposures that might have contributed to the development of cancer in a person typically occur years to decades before the diagnosis (e.g., malignant mesothelioma, a tumor of the tissue lining the lungs, is associated with asbestos exposure). The latent period between first exposure to asbestos and death from mesothelioma is often 30 years or longer (18). Latency needs to be considered in an investigation of a suspected cancer cluster because it influences the exposure period relevant to the investigation. If a person with cancer did not live in the suspected cancer cluster area during the relevant exposure period (possibly 20 years previously), then that person's cancer cannot be related to an environmental contaminant of concern or to any exposure in the suspected cancer cluster area. Conversely, the latency period might limit the ability to detect a cancer cluster or identify cancers related to an environmental exposure that occurred in the past. In a mobile population, a cancer cluster resulting from an environmental contamination occurring years or even decades earlier might go undetected because exposed residents have moved away from the community before the cancer develops. Thus, as persons move in and out of different communities, their cumulative exposure profile will change.
Suspected childhood cancer cluster investigations have the same limitations as their adult counterparts (19). However, because childhood cancers generally have shorter latency periods, changes of residence might be less of an issue in the investigation of suspected clusters involving childhood cancers (20). For example, in one California study of 380 children with a diagnosis of leukemia, approximately 65% of the study participants changed residence between birth and diagnosis (21), indicating that even among cancers with short latency periods, migration might be an important factor.
Because investigations rarely demonstrate a clear association with an environmental contaminant, investigations of community-based cancer clusters usually do not provide the resolution communities seek (11). Furthermore, a suspected cancer cluster investigation can have unintended consequences. An investigation can augment the existing fear and uncertainty in the community brought on by the perception that a suspected cancer cluster exists, which might have a negative social and economic impact (22). Therefore, during all stages of an inquiry or investigation, responders should not only be transparent and receive community input but also explain their decisions to the community.
# Four-Step Process for Evaluating Suspected Clusters
Because major investigations require substantial resources and might not identify the cause of cancer cases, a stepwise approach is recommended. Both the likelihood of identifying a causal factor and the feasibility of studying the relationship should be considered before proceeding to the next step. Regardless of the extent of the investigational response, the process of responding to community concerns provides opportunities to increase communities' knowledge about cancer and to encourage participation in cancer screenings and healthy behaviors. For this reason, education and consultation are advised at all steps of the process.
Four steps are recommended to respond to a report of a suspected cancer cluster, including procedures, guidance on, and considerations for closing the inquiry or proceeding to the next step:
- Step 1: Initial contact and response
- Step 2: Assessment
- Step 3: Determining the feasibility of conducting an epidemiologic study
- Step 4: Conducting an epidemiologic study to assess the association between cancers and environmental causes

These steps update the four-stage process discussed in the 1990 Guidelines but should be implemented with two qualifiers. First, the extent to which a health agency is able to follow these guidelines depends on existing resources and infrastructure. Second, the delineation between the steps is not necessarily fixed. Often, a health agency might choose to combine steps or to pursue a problem with several approaches. The four-step process is intended to be flexible, so that health agencies and their partners may use it as model guidelines and adapt it to their own existing protocols, resources, staffing, organizational systems, and policies.
# Step 1. Initial Contact and Response Description
The purpose of Step 1 is to collect information from the inquirer (i.e., the person calling, writing, or emailing the report of a suspected cancer cluster) so as to determine whether the inquirer's concern warrants further follow-up. On the basis of the information collected, the health agency will need to decide whether to pursue the inquiry further. This step focuses on obtaining and evaluating whatever information the inquirer can provide as well as relevant data available to the health agency (e.g., data from cancer registries, the census, and environmental databases).
The inquirer should be referred quickly to the responsible unit in the health agency, and the problem should not be dismissed prematurely (i.e., before information is collected). Although the majority of reports of potential clusters will be closed at the time of initial response because the inquirer's concerns are not consistent with a potential cancer cluster, the first encounter is often the health agency's best opportunity to educate the inquirer about the nature of cancer and suspected cancer clusters.
To be an effective initial responder and a successful manager of reports of a local suspected cancer cluster, the health agency needs to understand the context of the inquirer's concern, the nature of the perceived problem, the history of how it has or has not been reported to authorities, and, if applicable, how authorities have responded to date. In addition, other necessary background information should be gathered, including demographic characteristics (e.g., age group) of the persons with cancer and the population group of which they are members. Not only is this information essential to the scientific investigation, but it is also important for effective information-gathering, communication, and coordination with the inquirer or community. In addition, it is essential for the effective management of a suspected cancer cluster inquiry to be open, transparent, and thorough with respect to the evaluation of information and actions taken. It is also important to be sensitive and responsive to the inquirer's concerns.
# Procedures
- The health agency responder (the responder) should be empathetic, listen to the inquirer's concerns, and record the information received.
- The responder should gather identifying information on the inquirer: name, address, telephone number, length of residence at current location, and organization affiliation, if any. However, the responder should comply with requests for anonymity and explain that the inability to follow up with the caller might hinder further investigation.
- The responder should gather initial data on the potential cluster from the inquirer: types of cancer and number of cases of each type, age of people with cancer, geographic area of concern, period over which cancers were diagnosed, and how the person reporting learned about the supposed cluster (a minimal sketch of a structured intake record follows this list). Keep in mind that the inquirer might not know the true primary cancer diagnoses and will most likely not be aware of all cases of cancer in this area or during the period of concern.
- The responder should gather information from the inquirer about any specific environmental hazards or concerns, other risk factors (e.g., diet, infections, and family history), and other concerns in the affected area (e.g., the likely period of environmental contaminant exposures).
- On the basis of the information presented, the responder should make an initial judgment about the advisability of the health agency's pursuing an inquiry into the suspected cancer cluster. The decision might require discussions with other people in the health agency.
  - Multiple factors bear on this decision, but it is primarily based on whether the evidence as presented fits the definition of a cluster and the biologic plausibility that the cancers could share a common etiology. Such factors as reports involving a rare cancer or an atypical demographic distribution of a certain type of cancer (e.g., multiple cases of breast cancer in men) support the decision to investigate further and should be considered. If exposure to a specific environmental contaminant is a concern in the community, the consensus in the scientific literature regarding an association between the environmental contaminant and the cancer of concern should be considered.
  - Factors that do not support the need for further investigation include:
    - cancer cases within family members who are linked genetically (especially cancers known to be strongly genetically related);
    - reported disease that might not be cancer;
    - different types of cancers not known to be related to one another;
    - a few cases of very common cancers (e.g., breast, lung);
    - cancer cases among persons who did not live in the same geographic location during the relevant timeframe based on latency, and thus could not have experienced a common carcinogenic exposure; and
    - the lack of a plausible environmental cause.
- The responder should clearly and accurately explain the rationale used to determine whether an investigation will be pursued, based on the information provided about the cases as well as the health agency's procedures. For example, the rationale for not pursuing an investigation could be that the reported cancers are unlikely to be related to a plausible environmental exposure.
- If an inquirer is reporting an event that is not a suspected cancer cluster but rather one involving a known or possible environmental contamination, the caller should be referred to the appropriate environmental resources agency. The responder should work with the health agency's communication experts to assess the potential community concern and impact, and prepare a plan to address concerns.
  - The health agency should provide responders with talking points about the nature of cancer, its frequency and occurrence, how different types of cancers reported are related to separate causes, that rates of disease increase and decrease somewhat in a population over time (random fluctuations), and so forth. These points can be used to educate inquirers about cancer and to provide them with further resources that address their concerns.
  - If the information provided supports the decision to investigate the cancer concerns further, the health agency responder should notify the inquirer, explain what that entails, and outline how the agency will follow up with the inquirer and provide results. The responder should ask the inquirer if there are others in the community (e.g., other residents with this cancer type) who would like to have a report on the results of the next step.
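The intake items listed above lend themselves to a structured record that can also feed the permanent log mentioned later in Step 1. Below is a minimal Python sketch of one way to capture an inquiry; every field name and example value is an illustrative assumption rather than a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ClusterInquiry:
    """One suspected-cancer-cluster inquiry, per the Step 1 items above."""
    inquirer_name: str            # record "(anonymous)" if anonymity is requested
    contact_info: str
    cancer_types: dict            # e.g., {"leukemia": 4} -> type: case count
    case_ages: list
    geographic_area: str
    diagnosis_period: str         # e.g., "2001-2009"
    suspected_exposures: list = field(default_factory=list)
    decision_rationale: str = ""  # documented for the permanent log

# Hypothetical example:
inquiry = ClusterInquiry(
    inquirer_name="(anonymous)", contact_info="(declined)",
    cancer_types={"leukemia": 4}, case_ages=[6, 9, 11, 14],
    geographic_area="Smithville census tract 101",
    diagnosis_period="2001-2009",
    suspected_exposures=["private well water"])
```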
# Recommendations for Step 1
- The health agency responder should have expertise or training in cancer and/or environmental epidemiology.
- The responder should have training and experience in risk communication because, understandably, community residents can be extremely distressed by the perception of an excess amount of cancer in their community (22).
The ability to make a judgment on the facts presented and to communicate the factors in that judgment clearly depends on having both scientific expertise and experience in communication.
- The responder should be knowledgeable about cancer, cancer prevention, and guidelines on investigating suspected cancer clusters. The responder also should be able to offer the inquirer easily accessible resources, such as the health agency's or CDC's cancer website.
- If one person in the health agency with comprehensive expertise (i.e., in all areas described) is not available, the responder should collect initial information and tell the inquirer to expect a follow-up call. The responder should then discuss the case with colleagues who have the necessary expertise before responding to the caller with an initial judgment.
- The health agency and responder can access, at minimum, county-level cancer statistics from the state cancer registry to understand and explain the reported cases in an appropriate context. A list of state cancer registries is available at .
- If possible, the responder should be, or become, relatively familiar with the geographic area of concern, its demographic profile, and its history (e.g., industrial and residential development) in order to understand the health and environmental concerns of the community.
# Decision to Close the Investigation at Step 1
A decision at Step 1 not to pursue an investigation is based on the determination that the reported cases are unlikely to comprise a cancer cluster; therefore, conducting a statistical assessment to determine whether an excess of cancer cases exists might be unproductive because the cancers are not likely to share a common environmental etiology. This determination might involve multiple communications with the inquirer, as well as additional data-gathering. If the inquirer acknowledges and is satisfied with the decision not to move forward, the inquiry can be closed at this point. If the inquirer is not satisfied with the decision and the verbal explanation, then the health agency should consider providing a written explanation and include resources related to the decision. Regardless of the decision, the health agency should document in a permanent log all information about the inquiry and the decision.
# Decision to Continue to Step 2
The data gathered at this point might suggest the need for further evaluation. If so, the health agency should elect to proceed to Step 2 to determine whether an excess of cancer cases exists.
# Step 2. Assessment Description
The primary purpose of Step 2 is to determine whether the suspected cancer cluster is a statistically significant excess. Several components of the follow-up investigation are necessary to determine if an excess of cancer cases exists in the community. These important components include the study design, as well as the collection, analysis, and interpretation of relevant data. Decisions must be made concerning the case definition, how the population of concern (the study population) is defined, the choice of comparison cancer rates, and the choice of statistical methods. To address these components, the health agency investigation team (the investigators) leading the follow-up investigation of Step 2 (and subsequent steps) will need to have epidemiologic expertise or collaborate with an epidemiologist. The time needed to complete Step 2 varies, depending on the complexity of the suspected cancer cluster.
This step also includes identification of local environmental concerns. Depending on the circumstances, communicating with partners and identifying and communicating with key community members about the assessment might be appropriate as a part of this step. Creating a comprehensive communication plan is important, in order to identify audiences, communication needs, and communication channels. More detailed information is provided elsewhere (see Appendix B).
Calculating a standardized incidence ratio (SIR) (23,24) is recommended at this step. The SIR is generally calculated to provide an estimate of the likelihood that an excess of cases exists in the population of concern (the study population) compared with the general or reference population. The SIR is a ratio of the number of observed cases to the number of expected cases. The observed cases are the cases that actually occurred in the study population within a specific timeframe. The expected number of cases is the number that would occur in the study population if the occurrence of cancer in that population were the same as in the reference population. Because cancer rates vary with age, the expected number takes into account the age distribution of the study population: it is calculated by multiplying the age-specific cancer incidence rates of the reference population by the number of persons in the corresponding age groups of the study population and summing the results across age groups. In the calculation of the SIR, factors that must be considered include:
- the type(s) of cancer and number of cases,
- the period of concern,
- the geographic area of concern,
- the background cancer incidence in the larger reference population (available through the cancer registry), and
- the demographic characteristics of the cases and the reference population.

More detailed information is provided elsewhere (see Appendix C); a minimal computational sketch of the SIR calculation follows below.
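The sketch below illustrates, in Python, the indirect standardization just described and an SIR with an approximate 95% CI. Byar's approximation to the exact Poisson limits is used here as one common choice; it is an assumption of this sketch, and Appendix C may prescribe other methods. All rates and population figures are invented for illustration.

```python
import math

def expected_cases(ref_rates_per_100k, study_pop_by_age):
    """Indirect standardization: sum over age groups of
    (reference age-specific rate) x (study population in that age group)."""
    return sum(ref_rates_per_100k[age] / 100_000 * study_pop_by_age[age]
               for age in study_pop_by_age)

def sir_with_95ci(observed, expected):
    """SIR = observed / expected, with Byar's approximation to the
    exact Poisson 95% limits on the observed count."""
    sir = observed / expected
    lo = observed * (1 - 1 / (9 * observed)
                     - 1.96 / (3 * math.sqrt(observed))) ** 3 / expected
    hi = (observed + 1) * (1 - 1 / (9 * (observed + 1))
                           + 1.96 / (3 * math.sqrt(observed + 1))) ** 3 / expected
    return sir, lo, hi

# Invented reference rates (cases per 100,000 over the study period) and
# study-population counts by age group:
ref = {"0-19": 1.0, "20-64": 8.0, "65+": 40.0}
pop = {"0-19": 8_000, "20-64": 20_000, "65+": 4_000}
exp = expected_cases(ref, pop)   # -> 3.28 expected cases
print(sir_with_95ci(9, exp))     # ~ (2.74, 1.25, 5.21): the CI excludes 1.0
```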
# Procedures
- The investigators should define the study population by demographic characteristics, geographic area, and time period of concern. These factors, in addition to cancer type, are also included in the case definition:
  - Demographic characteristics might include age, sex, race/ethnicity, or residential location. The study population could be all-inclusive, or it could be limited to a specific demographic group. For example, the study population could include females only, adults only, or children only.
  - The appropriate geographic area (study area) and time period need to be selected. Privacy issues should be considered when collecting, analyzing, and presenting data on a few cases in a small geographic area. Statistical analysis of neighborhood-level data or data from sparsely populated areas might not be possible because of limited numbers. Limited numbers might lead to a lack of statistical power and therefore to an instability of rates. Decisions about timeframe and geographic boundaries should take into consideration the concerns of the caller or community, as well as any known or suspected environmental contamination and pathways of contamination (19).
  - The case definition includes information on the type of cancer (e.g., primary site, histology, and grade). Cancer registries collect cancer diagnoses based on ICD-O codes (International Classification of Diseases for Oncology, 3rd Edition) (25). Cases might be limited to a specific age or sex (e.g., limiting small cell lung cancer to women because the hypothesis is that there is an increase in the number of cases in women). Cases of cancer among the study population generally are identified from a state's cancer registry, using the case definition.
  - An all-cancer SIR (i.e., one calculated for all types of cancers combined) might be useful for communication and educational purposes, but it is not useful for explaining or exploring potential etiologies. If an all-cancer SIR is presented with the results, a discussion of its limitations for investigating etiologies and its usefulness for cancer education should be included.
  - The case definition, study population, study area, and period of interest will require justification. The definitions and the justification should be transparent to the community so that they understand the rationale behind the approach taken. This means sharing information that is consistent, timely, and expressed in a manner that the lay public is able to understand. Otherwise, these decisions might be seen as arbitrary and thus be rejected by the community.
- The investigators should calculate incidence rates, the SIR, the 95% confidence interval (CI), and other descriptive statistics. Procedural steps are discussed in detail elsewhere (see Appendix C).
  - An SIR of >1.0 indicates that the observed number of cases is greater than the number that would be expected for the population. The SIR increases as the number of observed cases in excess of the number of expected cases increases.
  - The CI is an indication of the statistical precision of the SIR value.
  - In addition to whether the SIR is statistically significant, the investigators should consider the suspected cluster in the context of the plausibility that the cancers could share a common etiology, based on latency, community patterns of migration in and out, known risk factors for the cancer of concern, and the potential for exposure to a contaminant of concern, as well as other factors (see "Decision to Close the Investigation at Step 2").
- The investigator should understand community concerns and identify facts about local environmental factors by:
  - reviewing the literature on risk factors for the types of cancers in question, investigating both human and animal studies using PubMed and/or other sources;
  - reviewing the literature on possible associations between the types of cancers and known or suspected environmental exposures, because results from the literature review might affect the case definition (e.g., the types of cancers considered for study); and
  - identifying whether there is concern in the community about known or suspected environmental exposures or other factors that the community suspects are related to health problems. By using community members' "local knowledge" (i.e., understanding of the community, its history, and its members, as distinct from the scientific/technical expertise that is provided by health agencies) about the hazards and risk factors in their community, as well as data from environmental and other databases, investigators can make more informed decisions.
- The investigator should communicate with the inquirer and the community as indicated (see Appendix B). In many cases, communication about the activities in this step is only with the inquirer. However, if community-wide concerns exist about the cancer cases or environmental conditions, early involvement of other community members might be appropriate.
  - The investigator should share the SIR calculation with the inquirer and other community residents and describe the process, the results, and the implications of the results.
  - The investigator should consider who else should be notified after the SIR determination that there is, or is not, an excess number of cases (e.g., the local health agency, the state environmental agency, community residents).
# Recommendations for Step 2
- Because of the variety of issues involved in this phase of the investigation, a team approach involving epidemiologists, toxicologists, communicators, and other experts might be necessary.
- Health agencies should document all decisions, communications, and processes.
- It might also be useful to examine the trend of a cancer type that is documented to be completely unrelated to the cancer type and/or exposure of concern. The purpose of this examination is to identify other factors that might affect trends or the excess of cancer cases detected. If all cancers appear elevated or depressed in a similar time frame (including those that are not related), other factors ought to be considered. These factors include the possibility that the estimated denominator might be incorrect or that the community has an unusually high proportion of persons with high-risk health behaviors (e.g., smoking).
# Decision to Close the Investigation at Step 2
The decision to close the inquiry at this step or to move forward to Step 3 is based on multiple factors. The decision to move forward is best made on the basis of a review of the statistical analysis as well as an understanding of the scientific facts presented. To interpret the SIR, the health agency must answer these questions:
- Are there enough cases and a large enough population for statistical stability (17,23,26)? In general, the population size of a typical census tract (27) is the smallest denominator that will allow reliable results to be generated.
- If there is a large enough numerator for statistical stability, how likely is it that this SIR might have occurred by chance, assuming that the underlying incidence rates were not elevated (for example, does the 95% CI exclude 1.0)?
- Are there environmental contaminants and/or events that could be related to the cases?
- Are there any population-related issues (e.g., a substantial number of persons moving into the community) that might in part explain the observed cancer excess?

Information beyond the SIR is required to estimate the likelihood that the observed cancers represent an excess, could potentially be related to one another, and share a common etiology. The additional questions include the following:
- Has there been an increase in the incidence rate of the specific cancer over time?
- How many more observed cases are there than expected (the number of excess cases)?
- Are the demographic characteristics of these cases unusual for the type of cancer (e.g., occurrence in a younger age group for a cancer that usually occurs only in older age groups)?

The investigator needs this complete picture to determine the likelihood that the observed cancers represent an actual excess, could potentially be related to one another, and share a common etiology. An SIR of limited magnitude that is not statistically significant, coupled with a lack of known association with an environmental contaminant and no trend of increasing incidence over time, justifies closing the inquiry at Step 2. However, a statistically significant SIR of great magnitude and an increasing trend in incidence rate, together with a known environmental contaminant, would argue for continuing to Step 3.
The following examples illustrate how these data can be synthesized. An SIR of 4 with a CI that excludes 1.0, together with ≥10 cases that might be etiologically linked, should encourage advancing to Step 3. Moderate elevations in an SIR, small numbers of cases, and instances of rare cancers pose the most difficult decisions for health agencies. Additional information might be needed to assist in the decision to continue the investigation to the next step.
Once it is decided to close the inquiry at Step 2, it is important to respond to the caller in writing, explaining the process and results, including the determination that the cases likely do not comprise a cancer cluster. The inquiry should then be closed, with appropriate documentation in system logs.
Even if continuing with a cluster investigation is not indicated, the inquiry might have raised other concerns, including known or suspected environmental contamination. In that case, the health agency should work with partners to facilitate other public health actions or interventions, as warranted (e.g., health screening, health risk assessments, or education on cancer prevention). In these circumstances, it is important to communicate clearly with the community about the scientific basis for the actions, being careful to set realistic expectations for the community.
Some scientific experts have recommended guidelines that direct resources toward cancer education and larger, long-term population-based studies of cancer risk factors rather than toward proceeding beyond Step 2 into cluster studies, because cluster studies almost never yield definitive answers regarding the cause of any specific cluster (28,29). Each health agency must decide on the basis of the resources it has available.
# Decision to Continue to Step 3
Step 3 consists of gathering more information to assess the feasibility of conducting an epidemiologic study to determine whether the cases are associated with a common etiological factor. This process will engage additional resources and be more visible to the community. If a decision is made to move forward, the health agency should provide a written report to the caller, as well as to any partners contacted. This report should include a description of the results of the preliminary analyses and circumstances, carefully articulating what is known and not known at this point. Finally, the report should describe the health agency's plan (i.e., next steps).
# Step 3. Determining Feasibility of Conducting an Epidemiologic Study

# Description
The purpose of Step 3 is to assess the feasibility of performing an epidemiologic study to examine the association between the cancer cluster and a particular environmental contaminant. If further study is feasible, an outcome of this step should include a recommended study design.
All activities in this step should be carried out in collaboration with community, environmental, and other partners. Decisions should reflect the concerns, interests, and expertise of all partners. The health agency should follow the communication plan created in the previous step. This communication plan needs to be tailored to the community, and it should proactively address the information needs of stakeholders. It may be adapted as needed.
Additionally, this step provides the opportunity to evaluate additional public health actions, such as smoking cessation programs, cancer screenings, health risk assessments, removal of environmental hazards, or other activities that should be conducted. If beneficial to public health, these actions should not be delayed pending the decision to conduct or complete an epidemiologic study focused on assessing the association between the cluster of cases and a suspected environmental cause.
# Procedures
- The first actions in determining the feasibility of further study of the identified cluster include determining the study hypothesis and reviewing the scientific literature and past health agency reports.
- The investigators should then:
  - determine which parameters to use for geographic scope, study timeframe, and demographics, and select a timeframe that allows for sufficient latency in the cancers of concern;
  - determine the study design, sample size, and statistical tests necessary to study the association, as well as the effect of a smaller sample size on statistical power;
  - determine the appropriateness of the plan of analyses, including the hypotheses to be tested as well as the epidemiologic and policy implications; and
  - assess the resource implications and requirements of the study and identify sources of funding.
# Recommendations for Step 3
- Investigators should maintain communication with the community.
- Investigators should support the community by recognizing that its members might have valuable information about hazards in the area.
- Investigators should use a data-driven process for decision-making.
- Investigators should be proactive in maintaining interagency coordination and involving needed experts in advisory roles.
- Investigators should carry out the feasibility assessment as broadly as possible with existing information, including assessment of previous efforts in environmental or clinical testing.
# Decision to Close the Investigation at Step 3
In some cases, despite the finding of a significantly elevated SIR, the feasibility assessment might indicate that further study will likely be unable to determine the cause of the elevated rate. In situations in which the types of cancers have no known association with an environmental contaminant, in which there are only a handful of cases, in which no suspected environmental hazard exists, or in which other factors explain the observed cancer excess (e.g., a substantial movement of residents during the study period), investigators might determine that data are insufficient or that insufficient justification exists for conducting further epidemiologic study. If the feasibility assessment suggests that little will be gained from proceeding further, the investigator should close the inquiry and summarize the results of this extensive process in a report to the initial caller and all other concerned parties.
In some circumstances, the public or the media might continue to demand further investigation, regardless of cost or biologic plausibility. Working with established community relationships, media contacts, and the advisory panel will be critical in managing and responding to expectations. If an extensive epidemiologic investigation is not carried out, it is critical to establish other possible options to support the community's health, depending on the information and resources available.
# Decision to Continue to Step 4
If the activities in Step 3 to assess the feasibility of an epidemiologic study suggest that it is warranted, the responders should proceed to Step 4. Further outreach, health assessment, interventions, or other public health actions also might be appropriate. Conducting epidemiologic investigations can take several years; the health agency should consider what can be done in the interim to help protect the community's health and keep its members informed. This level of investigation often can be seen as research rather than public health response to a community concern. Providing periodic progress reports to keep the community involved can help overcome this perception.
# Step 4. Conducting an Epidemiologic Investigation

# Description
The primary purpose of conducting an epidemiologic investigation of the suspected cancer cluster is to determine whether exposure to a specific risk factor or environmental contaminant might be associated with the suspected cluster. Demonstrating a statistically significant association does not prove causation. The scientific rigor necessary for determining causation is difficult to achieve with an epidemiologic study alone; determining causation often relies on clinical and laboratory studies as well (28). This distinction should be communicated clearly to audiences unfamiliar with these methodologies.
# Considerations
This step involves a standard epidemiologic study that tests a hypothesis of the association between putative exposures and specific cancer types, for which all the preceding effort has been preparatory. Using the feasibility assessment as a guide, responders should develop a protocol and implement the study. The circumstances of most epidemiologic studies tend to be unique. More specific guidelines are provided (see Appendix C).
The results of an investigation are expected to contribute to epidemiologic and public health knowledge. This contribution might take a number of forms, including the demonstration that an association does or does not exist between exposure and disease, or the confirmation of previous findings. It could take many years for such studies to be completed, and even then the result often provides an incomplete picture.
However, even if both a cancer cluster and environmental contamination are identified, an investigation might not demonstrate a conclusive association between the contamination and the cancers. Other risk factors (e.g., smoking, personal behavior, occupational exposures, and genetic traits) also should be explored. Conversely, even if the investigation does not identify an association between a particular suspected environmental exposure and the cancer cluster, the exposure still might be linked to the cluster; in such a case, more scientific information (e.g., toxicologic and clinical data) might be required to establish an association. Epidemiologic studies alone often are not able to detect small effects, particularly in small populations or when the number of cases is limited.
Before beginning an extensive investigation, health agency responders and the investigation team should be clear with the community, the media, and others about the inherent difficulties in undertaking such studies. Every effort should be made to set realistic expectations about the information an epidemiologic investigation will likely provide. Regardless of how exhaustive or comprehensive an investigation is, few provide definitive answers that fully address the community's concerns. Even when expectations are established before the investigation begins, such outcomes can be disappointing to all, and particularly worrisome to the potentially affected persons. Thus, the health agency's responsibilities often persist after the investigation concludes. Continued interaction and relationship-building with the community, along with transparency of process, remain vital in such circumstances. Ongoing open communication, information sharing, and public awareness efforts might be needed to help the community cope with frustrating circumstances.
# Conclusion
Public health agencies, including cancer registries, continue to receive hundreds of inquiries about suspected cancer clusters every year (10,14,29). Since publication of the 1990 Guidelines, many changes have taken place in data quality, technology, and communication. Data resources have become richer and statistical methods more refined, and many lessons have been learned from 2 decades of cancer cluster investigations.
Cancer cluster investigations continue to present many challenges. Populations at risk remain difficult to define, relevant environmental contaminants might have been present many years before the period under investigation, and epidemiologic methods that provide strong evidence of association in large studies have limited value in community settings (14). Only a small fraction of cancer cluster inquiries might meet the statistical and etiologic criteria to support a cluster investigation through all the steps outlined in this report. Because of the continuing challenges involved in investigating suspected cancer clusters, state and local health agencies continue to place an important emphasis on transparent and effective communication. The purpose of the revised guidelines contained in this report is to provide needed decision support to public health agencies in order to promote sound public health approaches, facilitate transparency, and build community trust when responding to community cancer cluster concerns.
# APPENDIX A Data and Other Resources

Since the 1990 guidelines for investigating clusters of health events (1) were published, a substantial increase has occurred in the number of sources of available data that can help public health agencies respond to cancer cluster inquiries and conduct cancer cluster investigations. These sources include data on cancer diagnoses, demographics, and environmental quality.
# Cancer Registries
The state cancer registry is a vital data source for suspected cancer cluster investigations. The state central cancer registry, which receives reports of all new cancer cases from clinical facilities in the state, will have numerator data (i.e., the number of new cancer cases) for calculating the SIR as well as data for the appropriate comparison measures for reference populations. In 1990, many states did not have a cancer registry, and the majority of states with registries lacked resources to gather complete data. Today, every state has a statewide central cancer registry for collecting, managing, and analyzing high-quality data on incident (i.e., newly diagnosed) cases of cancer and cancer mortality among residents.
Two federal programs support central cancer registries, which compile data on cancer incidence: CDC's National Program of Cancer Registries (NPCR) supports central registries covering 96% of the U.S. population, with registries in 45 states, the District of Columbia, Puerto Rico, and the Pacific Islands (2); NCI's Surveillance, Epidemiology, and End Results (SEER) Program includes five state registries and a number of regional and special population registries (3). Together, these programs collect data for the entire U.S. population (3). Uniform national data standards for all registries are developed and promoted by the North American Association of Central Cancer Registries (NAACCR) (4).
The state and national registries have data on cancer type (e.g., organ site, histology, and many other fields) as well as detailed demographic information on the individuals with cancer. Although state registries most often group cancer statistics by county, many registries are also able to characterize data on individual cases by geographic location (geocoding). Geocoded information, combined with age, sex, and race/ethnicity data, permits researchers to calculate the SIR at various geographic levels. The majority of states have internet sites for cancer statistics; SEER and NPCR also present cancer statistics online.
Completeness of the NPCR and SEER registries varies by state, although in general they have a high level of completeness and accuracy. NAACCR certifies registries annually based on completeness overall (95% and 90% for Gold and Silver, respectively) and for specific data items such as race, age, and gender (5). There might be a delay (≥1 year) between cancer diagnosis and the availability of complete data in the cancer registry. Preliminary data might be available for more recent years; however, these data might not contain all cancer cases from these years. The state registry will have information on which years have complete information.
Limitations and cautions to the use and interpretation of data from cancer registries include the following:
- Registry information generally contains the patient's address at the date of diagnosis only.
- The majority of registries do not collect information on possible risk factors (e.g., smoking history). Cancer registries do have fields for usual occupation and industry, but the data are often incomplete.
- The types of cancer most likely to be underreported occur in persons with late-stage cancers that are treated with palliative care (e.g., persons who might not be hospitalized for surgery or treatment). Other likely underreported types include cancers diagnosed in a physician's office without hospitalization (e.g., early-stage melanoma). Many hospitals routinely collect cancer data for their own purposes, and reporting to central registries is routine for most hospitals. However, reporting from nonhospital facilities is less reliable. Consequently, data for cancer patients who are never hospitalized for diagnosis and treatment tend to be less complete and might be reported later than other cases (6).
- Codes and rules for counting cancer cases do change. Some histology classifications change from benign to malignant and vice versa, depending on the coding edition; ovarian cancers and hematopoietic cancers are prominent examples. These are for the most part exceptions, and they will be known by the cancer registry personnel.
- Occasionally, changes in diagnostic criteria might change how a cancer is diagnosed, possibly changing the frequency with which the cancer is detected and reported. These types of changes are adopted at different rates by physicians and hence appear at different rates in reports to the registries.
- Data on race and ethnicity are captured in registry data; however, these data are collected inconsistently, with some providers relying on a patient's self-report and others assessing race based on observation.
- Many registries are aware of "quirks" or "anomalies" in possible mismatching of numerator and denominator data in their regions as a result of rapidly growing or shrinking areas or large population centers that straddle county or other borders.

These limitations notwithstanding, the existence of population-based cancer registries has greatly reduced the resource intensity of determining how many and what types of cancers have occurred in a given area in a state. These registries thus present efficient opportunities for answering questions that the public has about cancer concerns, including suspected cancer clusters.
# State Cancer Profiles
Cancer incidence and mortality data, compiled largely from registry data, are also available on State Cancer Profiles, a collaborative effort between CDC and NCI (7). The data on this site include state- and county-level cancer incidence and death rates. Statistical assessments are provided for upward and downward trends in rates by county, along with comparisons to state rates. A mapping capability also is provided; however, the maps do not reflect statistical differences in cancer incidence or death rates. Although the target audiences for this information are health planners, policy makers, and cancer information providers engaged in cancer control planning, the media as well as members of the public also use this site. In addition to data on cancer incidence and mortality, the site provides risk behavior data based on CDC's Behavioral Risk Factor Surveillance System (8).
# Data on Deaths
In addition to incidence data from cancer registries, data on deaths compiled by state vital records offices might be a useful supplement in identifying cancer cases. Death records are most useful for cancers with high mortality and short survival periods, such as pancreatic, liver, lung, and some types of brain cancer. However, death records are not very useful for cancers with lower mortality, such as breast, thyroid, prostate, or colon cancers, which patients are likely to survive. Death records increasingly are submitted to state health agencies online, and they are often available within weeks or even days after death. When survival is likely to be short (within 2 years), death records can help to fill gaps in the cancer registry case count, because registries might have a 1-2 year lag in ascertaining complete records.
Limitations and cautions in the use of death records in cancer cluster investigations include the following:
- Death records might be limited by the requirement that the residence of the deceased is recorded as the address at the time of death; this address might or might not be the place where the individual resided at the time of the cancer diagnosis.
- Death records are not necessarily completed by the physician who best knew the patient's medical history, meaning that the given cause(s) of death might not always be accurate.
# U.S. Census Bureau
The U.S. Census Bureau's American FactFinder can provide valuable data for determining the denominator for incidence calculations (9). State, county, census tract, and census block level data are available. Census data include total population figures, along with socioeconomic status, race/ethnicity, age, sex, and many other useful characteristics of a population.
Limitations and cautions about the use of census data include the following:
- Census numbers might be inaccurate for intercensal years when substantial population changes (rapid growth, shrinkage, or aging) occur.
- Census boundaries occasionally change, most often in rapidly growing areas that are subdivided, making comparisons between years or combining data from different years difficult. American FactFinder allows a user to see the changes between census years (e.g., between 2000 and 2010).
- The census tract is defined by the U.S. Census Bureau, and it is a relatively homogeneous unit with respect to population characteristics. A census tract generally contains between 1,000 and 8,000 persons, with an optimum size of 4,000 persons (10). Cancer clusters of concern frequently are confined to areas smaller than a census tract. Because census tracts are subdivided into census blocks and block groups, blocks and block groups might be combined if a census tract does not give the needed geographic boundaries. The number of cases occurring within a block or a block group might be far too small to allow reporting of cancer cases without privacy concerns or creating statistically unstable rates. Registries often will not release data at the block group level or even the census tract level because of privacy concerns.
- Census units might not align with contamination boundaries.
- The state demographer is the best resource for information regarding changes in population size.

Zip codes can be, and often are, used as geographic areas for cluster investigations, especially if they are a better fit for the communities at issue. There are two major limitations to using zip codes for cancer cluster investigations: 1) zip code boundaries might change more often than census boundaries, and 2) zip codes cross county and census boundaries. Moreover, a person might have a post office box or a rural route address that is in a different zip code than the actual residence. Real estate sites, such as City-Data.com, often can be useful for researching population changes and demographic information.
# National Environmental Public Health Tracking Network
One resource that was not available during the development of the 1990 Guidelines is CDC's National Environmental Public Health Tracking Network (Tracking Network), a nationwide surveillance network that provides health, environmental hazard, and exposure data and information to better inform and protect communities (11). The Tracking Network is a web-based system of integrated data and information derived from a variety of sources, including federal, state, and local agencies and registries.
Along with other selected health outcomes, the Tracking Network offers data and health messaging on several categories of cancers, including leukemia (by subtype), pediatric cancers, brain cancer, and other cancer types. The website will include additional types of cancers in the future. The cancer data are derived from a compilation of registry data, including data from NPCR and NCI's SEER Program. Cancer health outcome data for many states can be viewed in map, table, or graph format. Annual age-adjusted rates and annual numbers of cases are available for each selected cancer category for each state, and 5-year average annual rates are available by county. Other information, including demographic and socioeconomic characteristics, health behaviors, and biomonitoring data, is also available. Because of small case counts and data confidentiality and human subjects protection laws, health data cannot be viewed on the Tracking Network at higher geographic resolutions, such as by census tract. In some cases, a request for individual or identifiable data might be granted by state cancer registries directly.
Environmental data primarily derived from federal, state, and local regulatory environmental protection departments (or agencies) are available on the Tracking Network. However, state and local jurisdictions might provide more detailed environmental data, along with staff members who are knowledgeable about issues surrounding a particular situation.
# Data from State and Territorial Environmental Agencies
State and local environmental protection agencies routinely collect environmental data. Because these data are collected at places and times determined by regulatory requirements, they might be useful in identifying environmental hazards in cancer cluster investigations, or they might only approximate the environmental conditions at the site of the potential cancer cluster. Environmental agencies regularly collect data on water quality and air quality for compliance with air and water quality standards. These agencies also often permit and regulate industrial or other facilities that generate, transport, or store hazardous waste or other chemicals. The agencies will therefore have records of compliance and noncompliance that might indicate emissions into the environment. The state agencies are also involved, along with the Environmental Protection Agency (EPA), in monitoring pollution and in the oversight of the cleanup of contaminated sites. Although some states conduct surveillance on pesticide-related illness and injury, not all states regularly collect and maintain data on pesticide use or exposure; if collected, the data are usually kept at the state department of agriculture and sometimes by the state environmental protection agencies.
EPA collects environmental data for regulatory purposes, and the agency publishes the data on its website. A viewer can use tools on the EPA website to view information on air quality or water quality or to see if there are local Superfund sites, brownfields (12), or releases from manufacturing facilities (14). The information is available at the zip code level and can be displayed on a map.
The staff located within state or local environmental protection departments can be a helpful resource for providing information about local environmental conditions that might lead to exposure to contamination. The staff's assistance should be engaged in evaluating available environmental data for relevance to a cancer cluster inquiry or investigation, because the data collection areas are determined by regulatory requirements and might not provide information specific to a particular site of public health interest. EPA's list of State and Territorial Environmental Agencies is available at epa.gov/epahome/state.htm. Sources of information on the association between specific environmental contaminants and cancer are available. Weight-of-evidence evaluations of carcinogens are published by the International Agency for Research on Cancer (IARC) and by the National Toxicology Program (NTP) in its Report on Carcinogens. These evaluations tend to focus on exposures that have been of concern for some time and on which there are therefore substantial data. Not all potential carcinogens have been evaluated by these organizations. Other sources of information include PubMed, the ATSDR Toxic Substances Portal, and the ATSDR series of Toxicological Profiles on various chemicals.
By using the community members' local knowledge about the hazards and risk factors in their community as well as data from environmental and other databases, the investigator can make more informed decisions during the investigation process. For example, information provided by the concerned community members and by available databases can be useful in defining the geographic area and time period for the population at risk, increasing the accuracy and precision of the population definition. Readily available information on environmental hazards in the area of interest can be reviewed to determine if any of the hazards have a space and/or time pattern that can be related to the suspected cancer cluster. A thorough evaluation of environmental hazards with input from the community is appropriate because it might suggest some relevant public health interventions that turn out to be valuable, independent of any suspected cancer cluster. For example, in a community concerned about contaminants in private well systems, proper maintenance of private well systems might be an appropriate public health education program, regardless of whether contaminants are found, particularly if residents express confusion over how to maintain these wells.
# Biomonitoring
Biomonitoring is the measurement, usually in blood or urine, of chemical compounds, elements, or their metabolites in the body. Although biomonitoring indicates exposure to a substance at some level, it might not indicate when the exposure occurred or what effects the exposure might have on health in the future. Because of the long latency period associated with the development of cancer, the limitations of current environmental data also apply to the use or collection of current biomonitoring data. The relevant exposure might have occurred years before and might not be detectable at the time that samples for biomonitoring are collected. Even if a substance is detected in the body, it might not be a carcinogen, or it might not be present at levels known to cause disease. For the United States, CDC's National Health and Nutrition Examination Survey (NHANES) provides reference data for over 200 chemicals in blood and urine for a selection of the survey's participants (14). Biomonitoring is a relatively new field, and more research is needed to understand which substances, at what concentrations in the body, contribute to cancer.
# APPENDIX B

An overriding goal throughout the process of a cancer cluster investigation, beginning with the initial contact, is to communicate with transparency and to embrace community involvement. The health department and its process should be accessible to the community. This section provides guidance and resources on communicating during a cancer cluster response.
# Developing Communication Plans
Before responding to any inquiries concerning a possible cancer cluster, the health agency should develop a one-on-one communication strategy. Key points in such a strategy should include the importance of listening and how to ask questions that will help determine the nature of the caller's concerns. If possible, responders should try to ascertain in the first call the level of concern across the larger community. A basic communication plan should be created for answering initial inquiries about possible excess cancer cases. Such a plan will include anticipated characteristics of possible callers, questions to employ to gather the appropriate information, and talking points about cancer, clusters, and the scientific evaluation process. The plan also should define commonly used terms (e.g., cluster) in a clear and accessible way and emphasize that, when speaking to a caller, a responder should use such terms in a consistent manner. Statistical concepts such as small sample size, random fluctuation, and statistical significance are difficult for the general public to understand, and having consistent, clear talking points that address these concepts is helpful.
If and when the investigators determine that the entirety of the evidence (e.g., an elevated SIR and an environmental contaminant that is linked to the cancer of concern in the published literature) supports proceeding with an investigation, they should make a concerted effort to establish a solid communication plan with the health agency's communications office. Components of such a plan should include identification of audiences and messages, stakeholder groups, types of meetings, communications with the media, social networking possibilities, proactive versus reactive communication, and a commitment to a transparent approach.
# Communication Audience
The communication audience throughout the process of inquiry or investigation will include the initial caller, other concerned community members, community leaders, public health partners, government officials, media, physicians, real estate agents, and other groups, depending on how far the inquiry progresses. The media might approach the health agency with questions at any time, and the health agency will need to be prepared with clear statements for publication. At all stages of the process, the primary concern is the community. If community concerns include a known or suspected industrial contamination, those in the health agency taking the inquiry or handling community and media relations should interact with the community before, or at the same time as, they interact with the company responsible for the contamination, not after. The media can be important partners in conveying information to community members. However, the health agency should not underestimate the importance of meeting face-to-face with individuals with cancer, their families, and affected community members. This is especially important for sharing information about the health agency's actions or findings. The particular persons who comprise the "community" and the nature of community involvement will change during the steps of cancer cluster inquiries and investigations. The appropriate partners and stakeholders should be identified and involved.
In the initial contact, communication generally is aimed at the person reporting a concern about cancer in the community. The person might be a medical professional, a legislator, or a community resident with little or no medical expertise. After the health agency responder takes the call, the responder should communicate with agency partners (in the health agency(s) and, if necessary, in the appropriate environmental protection agency) to alert them to the community's concerns.
After the initial response and as a part of the inquiry, communication might extend to the inquirer's family and friends as part of the information gathering and sharing process. If the inquiry progresses past Step 1, the intended audience for communications will broaden to include community residents, members of the media, other agencies (state, local, or federal), and possibly elected officials. Once anyone beyond the initial inquirer is involved, the local health agency should be included in any communications, regardless of whether a statistical excess of cases can be determined.
If an excess of cancer cases is identified (Step 2) and an epidemiologic study is being considered (Step 3), two-way communication with community members is important. One method to accomplish such communication is to convene a community panel. This entity should include individuals who represent the community and, if possible, those with specific expertise that might be helpful during the process. The health agency should hold regular meetings with the panel. The panel should be well organized and have an agenda to keep the discussion on track and to conduct a useful dialogue. Participants in meetings might include concerned residents, residents with expertise, and local health, media, and elected officials. Such meetings provide a useful way to learn about the community and to build trust, credibility, and transparency. They are also useful for keeping the investigation's activities appropriate, focused, and on track. The community panel should be established early in an investigation; otherwise, other models might need to be considered. In communities where trust in government has eroded, it is particularly important to engage the community in the selection of participants of a community panel.
Health agency officials should use their best judgment and assess through personal interactions with community members, media, and internet postings whether a community panel (set up to facilitate communication around the community's cancer cluster concerns) is warranted. If not, the health agency and its investigators should work to establish relationships with existing, trusted community groups and suggest regular, structured, two-way communication with those groups.
# Communicating in Uncertain and Stressful Situations
When persons perceive a risk to their health or environment, they can feel uncertain, worried, and less trusting. Accordingly, principles of risk communication should be part of the training for anyone dealing with the process of cancer cluster inquiries or investigations (1). A few key communication concepts at any step of the inquiry, adapted for cancer clusters from previous guidance (2), include the following:
- be a credible and consistent source,
- create realistic expectations,
- raise awareness of other credible sources,
- be empathetic and have patience,
- be supportive and receptive to the information reported, and
- listen clearly and consistently.
# Proactive Community Involvement
During Step 2 (the process of determining whether an excess of cancer cases exists), obtaining community input might be useful but not vital. However, once the decision is made to proceed to Step 3, proactive community involvement is critical, not only for gathering information but also for sharing the investigation parameters and process with the community and other affected or collaborating partners.
One way to involve the community broadly is to establish advisory groups, such as a community panel (see Step 3, Procedures). Another way is to hold public meetings. If, during the process of investigation, a need is identified to have public meetings, a clear agenda and goal should be set for each meeting, including discussions of major milestones (e.g., completion of the feasibility assessment). The format and atmosphere of a public meeting can have great influence on its outcome. For example, town hall-type public meetings can allow community members to express frustrations and feelings to officials. Health agency personnel who listen well can establish credibility with the community in such meetings. However, some agencies might have difficulty in communicating well in this format. In these cases, an agency should use trained facilitators who understand the local culture. In such meetings, the health agencies should keep presentations short and use plain language. An alternative is to conduct public meetings with a series of "stations," at which data (e.g., maps) can be presented and discussed in one-to-one or small-group communication. This is one way to involve partners such as environmental agencies and community groups in this type of meeting.
Depending on the community's unique needs, one of these approaches or a combination might work best. For each type of meeting, the health agency should include resources for community members who attend, such as educational materials about cancer. Because dealing with a suspected cancer cluster can bring great stress to members of the community, potentially causing additional stress-related illness, resources about stress management also might be useful in promoting public health.
Other options for communicating on a regular basis with the community include establishing a toll-free telephone number for use by members of the community to ask questions during the entire process, providing regular (e.g., monthly) written updates between meetings, creating a website with all relevant information (including a compilation of questions and answers) or, if necessary, establishing a community office. The local health agency will be a valuable partner at this stage of communications.
Another avenue is to work with the state communications department and/or public affairs office to use social media as a communication forum about the investigation. Community members are likely to use social media to obtain information. Putting information out on social media sites and inviting questions has advantages and disadvantages. It is similar to having a toll-free number available, but it also allows for two-way communication that can be viewed by and shared with others. Members of the community also might use their own social media sites, including blogs, to ask questions and express their opinions. Monitoring such sites provides a valuable opportunity for the health agency to be aware of community concerns and to address misconceptions (3-5).
# Resource for State and Local Health Agencies
CDC and the National Public Health Information Coalition (NPHIC) have published a resource for state and local health agencies that provides detailed guidance on communicating during cancer cluster investigations. Cancer Clusters: A Toolkit for Communicators (6) includes information on working through a suspected cancer cluster scenario. It provides suggested outreach techniques for various audiences and offers answers to commonly asked questions about suspected cancer clusters. It also provides literature resources, a glossary of cancer cluster terms, a guide to education by use of social media, and case studies.
# APPENDIX C Statistical and Epidemiologic Approaches

A suspected cancer cluster investigation attempts to answer two questions: 1) is there an actual "excess" (one that meets statistical and biological plausibility criteria), and 2) is this excess associated with an environmental contaminant? Addressing these questions begins by defining the study population and locating relevant cases and then determining the appropriate geographic boundaries and time period.
This section provides an outline of the basic epidemiological and statistical analysis methods that are recommended for investigating a cancer cluster. This section focuses on the methods most relevant and most commonly used in cancer cluster investigations: the SIR and confidence interval, mapping, and descriptive and spatial statistical and epidemiologic methods.
# Standardized Incidence Ratio and Confidence Interval
The measure typically used to assess whether there is an excess number of cancer cases is the SIR. This measure is explained in many epidemiologic textbooks (sometimes under standardized mortality ratio, which uses the same method but measures mortality instead of incidence rates) (1-5). Simply stated, the SIR is a ratio of the number of observed cancer cases in the study population to the number that would be observed (often called "expected") if the study population experienced the same cancer rates as an underlying population (often called the "reference" population). The reference population could be the surrounding census tracts, other counties in the state, or the state as a whole (not including the community under study).
The SIR can be adjusted for factors such as sex, race, and/or ethnicity, but it is most commonly adjusted for differences in age between two populations. Various techniques can be used to account for these factors. For example, stratification, which is calculating an SIR by groups (e.g., by calendar year), is a commonly employed technique (6).
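To make the observed-versus-expected construction concrete, the following is a minimal Python sketch of an indirectly age-standardized SIR. All rates, person-years, and counts are hypothetical; in practice the reference rates would come from the state cancer registry for the same cancer type, period, and age groups:

```python
# Minimal sketch: indirectly age-standardized SIR (hypothetical numbers).

# Reference (e.g., statewide) annual incidence rates per 100,000, by age group.
reference_rates = {"0-39": 8.0, "40-64": 55.0, "65+": 210.0}

# Person-years at risk in the study community, by the same age groups
# (population in each group multiplied by the years in the study period).
person_years = {"0-39": 22_000, "40-64": 15_000, "65+": 6_000}

observed_cases = 31  # cases of the cancer of concern found in the community

# Expected cases = sum over age strata of (rate x person-years).
expected_cases = sum(
    reference_rates[age] / 100_000 * person_years[age] for age in reference_rates
)

sir = observed_cases / expected_cases
print(f"expected = {expected_cases:.1f}, SIR = {sir:.2f}")  # expected = 22.6, SIR = 1.37
```

The same pattern extends to additional strata (e.g., sex or calendar year) by summing rate × person-years over every stratum.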
# Confidence Interval
A confidence interval is calculated to determine the precision of the SIR estimate and its statistical significance. If the confidence interval includes 1.0, the SIR is not statistically significant. The narrower the confidence interval, the more confidence one has in the precision of the SIR estimate. One difficulty in cancer cluster investigations is that the population under study is generally a community or part of a community, typically resulting in a small denominator; such small denominators frequently yield wide confidence intervals, meaning that the SIR is not as precise as desired (1).
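One common way to compute such an interval, shown here as a sketch, uses the exact relationship between the Poisson and chi-square distributions; the counts are hypothetical, and scipy is assumed to be available:

```python
from scipy.stats import chi2

def sir_exact_ci(observed: int, expected: float, alpha: float = 0.05):
    """Exact (Poisson) confidence interval for an SIR.

    Uses the chi-square relationship to the Poisson distribution;
    the lower bound is 0 when no cases are observed.
    """
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return lower / expected, upper / expected

# Hypothetical example: 12 observed vs. 6.8 expected cases.
lo, hi = sir_exact_ci(12, 6.8)
print(f"SIR = {12 / 6.8:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# Here the interval spans roughly 0.9 to 3.1: because it includes 1.0,
# this SIR of 1.76 would not be statistically significant.
```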
# Considering Alpha and Beta Level Values
The alpha is the probability of rejecting the null hypothesis when the null hypothesis is true (no difference in cancer rates between the study population and reference population). Although there are no absolute cut-points, responders often use an alpha value of 0.05 (or equivalently a 95% confidence interval).
Selection of an alpha value larger than 0.05 (e.g., 0.10: 90% confidence interval) will increase the risk of false positive results. Selection of a smaller alpha value (e.g., 0.01: 99% confidence interval) may be considered when many SIRs are computed because the number of SIRs that will be statistically significant by chance alone increases (in other words, with a 95% confidence interval, one expects to see five statistically significant results in a group of 100 results).
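As a small worked illustration of this multiple-comparisons point (hypothetical numbers): if 100 SIRs are each tested at an alpha of 0.05, about five are expected to reach significance by chance alone even when no true elevation exists. A Bonferroni-style correction, named here only as one simple, conservative option, divides alpha by the number of tests:

```python
n_tests = 100   # hypothetical number of SIRs examined
alpha = 0.05

expected_false_positives = n_tests * alpha   # about 5 significant results by chance alone
bonferroni_alpha = alpha / n_tests           # 0.0005 per test (conservative adjustment)
print(expected_false_positives, bonferroni_alpha)
```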
Beta and power are related to each other, and both are related to the sample size of the study: the larger the sample size, the larger the power. Power, or 1-β (beta), is the probability of rejecting the null hypothesis when the null hypothesis is actually false. Like alpha, beta has no absolute cut-points; however, responders often use a beta value of 0.20 or less (or, equivalently, a power of 0.8 or more) (1).
# Power Analysis
Power analysis is useful in determining the minimum number of persons (sample size) needed in a study to test the hypothesis and detect a possible association. In most suspected cancer cluster investigations, the cases and study population are defined before the analysis. Therefore, a power analysis can be used to determine whether the number of cases in the investigation is sufficient to achieve adequate power (usually 0.8 or greater) (3).
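Closed-form power formulas do not always fit the single-area SIR setting, so simulation is one workable approach. The following sketch (hypothetical numbers; numpy and scipy assumed to be available) estimates the power of a one-sided exact Poisson test to detect a true doubling of risk:

```python
import numpy as np
from scipy.stats import poisson

def simulated_power(expected: float, true_sir: float,
                    alpha: float = 0.05, n_sim: int = 100_000,
                    seed: int = 1) -> float:
    """Estimate power to detect an elevated SIR with a one-sided exact
    Poisson test, by simulating observed counts under the alternative
    hypothesis (true rate = true_sir x expected)."""
    rng = np.random.default_rng(seed)
    observed = rng.poisson(true_sir * expected, size=n_sim)
    # One-sided p-value: P(X >= observed) under the null (mean = expected).
    p_values = poisson.sf(observed - 1, expected)
    return float(np.mean(p_values < alpha))

# Hypothetical example: 6.8 expected cases; how often would a true
# doubling of risk (SIR = 2.0) be declared statistically significant?
print(f"power = {simulated_power(6.8, 2.0):.2f}")
```

With only a handful of expected cases, even a true doubling of risk is missed a substantial fraction of the time, which illustrates why small clusters are so difficult to confirm statistically.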
# Mapping the Cancer Cluster
When considering the geographic distribution of cases, responders have various methods they can use. For example, they might develop a visual representation showing the location of each case superimposed on the underlying population density to get an approximation of the distribution of the relative rates of cancer.
It also can be useful to plot the location of suspected environmental risk factors on the map for the purpose of making a crude assessment of their proximity to the cases. However, to avoid the "Texas sharpshooter fallacy" (i.e., a situation in which cases are noticed first and then the "affected" area is selected around them, thus making there appear to be a geographical relationship, much as a sharpshooter might shoot the side of the barn first and then draw the bull's-eye around the bullet holes), responders must first outline their definitions, assumptions, and methods (7). Often, a few different spatial scales (e.g., census block, census tract, zip code, municipality, or county) or temporal scales (e.g., week, month, year, or several years) can be mapped to look for possible patterns related to specific space and/or time units that merit more careful investigation. This process is systematic, and procedures are outlined a priori. The patterns in such maps often differ dramatically, and they might suggest specific exposures that warrant further consideration. This practice is more useful when longer periods of time and larger numbers of cases (e.g., >10 cases) are under study.
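As an illustration of this kind of mapping, the following Python sketch overlays case points on tract-level population density; the file names and column names are hypothetical, and geopandas is assumed to be available. Consistent with the privacy cautions below, such point maps are for internal analysis only:

```python
import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical inputs: a census-tract layer with a population column, and a
# de-identified case list with coordinates (never publish point maps that
# could identify individual residences).
tracts = gpd.read_file("tracts.shp")   # hypothetical file
cases = pd.read_csv("cases.csv")       # hypothetical file with lon/lat columns
case_points = gpd.GeoDataFrame(
    cases,
    geometry=gpd.points_from_xy(cases["lon"], cases["lat"]),
    crs="EPSG:4326",
).to_crs(tracts.crs)

# Shade tracts by population density and overlay case locations
# (assumes the tract layer uses a projected CRS so polygon areas are meaningful).
tracts["density"] = tracts["population"] / tracts.geometry.area
ax = tracts.plot(column="density", cmap="Greys", figsize=(8, 8))
case_points.plot(ax=ax, color="red", markersize=12)
ax.set_title("Cases over population density (internal use only)")
plt.savefig("case_map.png", dpi=200)
```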
Cancer registries and state health agencies typically have criteria related to the release of data for small geographic areas. Because of privacy concerns, some data cannot be released to the public unless the privacy concerns are addressed. For example, a pin-point map of a small geographic area that identifies the residence of a cancer patient should not be made public (8). Similarly, many health agencies are prohibited from publicly releasing a table for a small geographic area with a small population, because each table cell might contain only a few cases.
# Descriptive and Spatial Statistical and Epidemiologic Methods
Frequencies, rates, and descriptive statistics are useful first steps in evaluating the suspected cancer cluster. Confidence intervals can also be calculated for rates. Epidemiologic references explain these methods (9). Other statistical approaches include Poisson regression. Often, the number of cases is limited, which limits the types of analysis possible. If an investigation progresses to a case-control study, the odds ratio can be calculated. These study designs have been discussed in detail elsewhere (1,3,4).
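As a sketch of how Poisson regression might be applied to area-level counts (all data here are hypothetical; statsmodels is assumed to be available):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical tract-level data: case counts, person-years at risk, and an
# exposure indicator (e.g., the tract intersects the contaminant plume).
df = pd.DataFrame({
    "cases":        [3, 1, 7, 2, 9, 4],
    "person_years": [9_000, 6_500, 12_000, 8_000, 15_000, 11_000],
    "exposed":      [0, 0, 1, 0, 1, 1],
})

# Poisson regression of counts on exposure, with log(person-years) as an
# offset so the model describes incidence rates rather than raw counts.
X = sm.add_constant(df["exposed"])
model = sm.GLM(df["cases"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["person_years"]))
result = model.fit()
print(result.summary())
# exp(coefficient on "exposed") estimates the rate ratio for exposed tracts.
```

The offset makes the model describe rates, and exponentiating the exposure coefficient gives an estimated rate ratio; with so few areas, the caveats above about small samples apply in full.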
Since the publication of the 1990 Guidelines, the field of spatial epidemiology has grown, especially in environmental health. This growth is influenced by the increased availability of geocoded data and statistical software. Space/time cluster analysis methods are often used to provide evidence about the existence of a suspected cluster and to define more precisely the extent of the suspected cluster in space and time.
As with any other epidemiologic analysis, there might be methodological issues with the use of clustering tools. Many of these concerns (e.g., limitations associated with small populations, environmental data quality, disease latency periods, and population migration) have been described in this report. Census data can provide the denominators for this type of analysis, and all the limitations associated with rapidly changing populations and intercensal year estimates also apply to these spatial/time cluster methods. In addition, when exposure or outcome analysis uses aggregate data and not data collected on an individual level, responders must use caution when interpreting this type of analysis, because the association with a particular environmental contaminant might not be true for individual cases, especially if there is heterogeneous distribution of the exposure over the geographic area. The related bias is known as ecological inference fallacy. Detailed information regarding methodological issues has been published previously (10).
Many methods have been developed to facilitate what is termed "space/time cluster analysis." These methods assess whether cases are closer to one another than would be observed if the cases had been distributed at random. The concept of "close" might mean closer geographically, closer in time, or closer both geographically and in time. The numeric value of "close" is determined by the responder. For a responder to make a determination of clustering, the space-time distances have to be summarized and then evaluated with any of a variety of statistical techniques. This task can be performed by summarizing where and when each case occurred, typically using the individuals' residence and the reported date of incidence. Some of the simplest methods merely compare the average distances between nearby cases to the average distances between cases and nearby noncases (or controls). If, on average, the cases are sufficiently closer to other cases (in space, time, or both space and time) than they are to noncases, the situation may be described as a cluster. Clusters can be detected by use of spatial autocorrelation techniques. Global clustering statistics, such as Geary's C (11), detect spatial clustering that occurs anywhere in a study area. They do not identify where the cluster(s) occur, nor do they identify differences in spatial patterns within the area. Local clustering statistics, such as Local Indicators of Spatial Autocorrelation (LISA) (12), identify potential clustering within smaller areas inside a study area. Often, global techniques are used first to identify potential clustering; then, local methods are used to pinpoint the clusters in the sample area. Many global statistics have local counterparts. For example, global Moran's I is the summation of local Moran's I statistics (13). Clusters reported to health agencies most often are local. It is beyond the scope of this report to describe more than a few of the most commonly used methods, and even then, these methods are described only briefly.
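As a sketch of how the global and local statistics named above might be computed with the PySAL libraries (the input file and column names are hypothetical; libpysal and esda are assumed to be installed):

```python
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran, Moran_Local
from esda.geary import Geary

# Hypothetical input: tract polygons with age-adjusted rates attached.
gdf = gpd.read_file("tract_rates.shp")   # hypothetical file
w = Queen.from_dataframe(gdf)            # contiguity-based spatial weights
w.transform = "r"                        # row-standardize the weights

rates = gdf["rate"].values

# Global statistics: is there clustering anywhere in the study area?
moran = Moran(rates, w)
geary = Geary(rates, w)
print(f"Moran's I = {moran.I:.3f} (p = {moran.p_sim:.3f})")
print(f"Geary's C = {geary.C:.3f} (p = {geary.p_sim:.3f})")

# Local statistics (LISA): where within the area is the clustering?
lisa = Moran_Local(rates, w)
gdf["hotspot"] = (lisa.p_sim < 0.05) & (lisa.q == 1)  # high-high clusters
```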
A useful summary of these techniques has been published recently (14). One of the most popular techniques for detecting clusters is the spatial scan statistic; its most commonly used implementation is the SaTScan software (15). The underlying concept for this approach is the scan statistic, which considers both spatial areas and time intervals (16). Other implementations include the nearest neighbor test (17) and the Small Area Health Statistics Unit (SAHSU) "Rapid Inquiry Facility" (RIF) (18). Additional statistical cluster methods have been discussed elsewhere (19). All of these methods have strengths and weaknesses. When choosing a statistical cluster method, it might be useful to consider several criteria, such as ease of use and availability, the clarity and transparency of the method, its statistical power to detect the cluster of interest, and the method's ability to produce the desired output (20). Comparisons and reviews have been published (21). In addition, the Appendix of the 1990 Guidelines describes additional spatial statistical methods.
"id": "2d209d694c5f2920d20f848237d8b91a9a88af97",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Hyperchlorination to Kill Cryptosporidium

Cryptosporidium (or "Crypto") is an extremely chlorine-tolerant parasite, so even well-maintained pools and interactive fountains can spread Crypto among swimmers. If an outbreak of Crypto infections occurs in your community, the health department might ask you to hyperchlorinate. Additionally, to help keep Crypto levels low, you might choose to hyperchlorinate periodically (for example, weekly). If necessary, consult an aquatics professional to determine the feasibility, practical methods, and safety considerations before attempting to hyperchlorinate at your facility.

Step 1: Close the pool to swimmers. If you have multiple pools that use the same filtration system, all of the pools will have to be closed to swimmers and hyperchlorinated. Do not allow anyone to enter the pool(s) until hyperchlorination is completed.

Step 2: Raise the water's free chlorine concentration (see Table) and maintain pH 7.5 or less and the temperature at 77°F (25°C) or higher.
Step 3: Achieve a concentration time inactivation value (Ct) of 15,300 † to kill Crypto. The Ct refers to the concentration of free chlorine in parts per million (ppm) multiplied by time in minutes at a specific pH and temperature (see footnote § for guidance if chlorine stabilizer is used). Step 4: Confirm that the filtration system is operating while the water reaches and is maintained at the proper free chlorine level for disinfection.
# Use the formula below to calculate the time required for Crypto inactivation
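The formula itself did not survive document conversion; it follows directly from the definition of Ct in Step 3: time (in minutes) equals the target Ct of 15,300 divided by the free chlorine concentration (in ppm). A minimal sketch of the arithmetic:

```python
CT_TARGET = 15_300  # required Ct for Crypto (ppm x minutes), per Step 3

def inactivation_minutes(free_chlorine_ppm):
    """Contact time needed to reach the Ct target at a given free chlorine level."""
    return CT_TARGET / free_chlorine_ppm

for ppm in (10, 20, 40):
    minutes = inactivation_minutes(ppm)
    print(f"{ppm} ppm free chlorine -> {minutes:,.0f} min ({minutes / 60:.2f} h)")
# 20 ppm gives 765 min (12.75 h), a commonly cited hyperchlorination target.
```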
Step 5: Backwash the filter thoroughly after reaching the Ct. Be sure the effluent is discharged directly to waste and in accordance with state or local regulations. Do not return the backwash through the filter. Where appropriate, replace the filter media.
Step 6: Allow swimmers back into the water only after the required Ct has been achieved and the free chlorine and pH levels have been returned to the normal operating range allowed by the state or local regulatory authority.

† The Ct value corresponds to 99.9% inactivation of Crypto.

§ This level of Crypto inactivation cannot be reached in the presence of 50 ppm chlorine stabilizer, even after 24 hours at 40 ppm free chlorine, pH 6.5, and a temperature of 77°F (25°C). Extrapolation of these data suggests it would take approximately 30 hours to kill 99.9% of Crypto in the presence of 50 ppm or less cyanuric acid, 40 ppm free chlorine, pH 6.5, and a temperature of 77°F (25°C) or higher. Shields JM, Arrowood MJ, Hill VR, Beach MJ. The effect of cyanuric acid on the chlorine inactivation of Cryptosporidium parvum in 20 ppm free chlorine. J Water Health. 2009;7(1):109-14.

¶ Many conventional test kits cannot measure free chlorine levels this high. Use chlorine test strips that can measure free chlorine in a range that includes 20-40 ppm (such as those used in the food industry) or make dilutions for use in a standard DPD test kit using chlorine-free water.
CDC does not recommend testing the water for Crypto after hyperchlorination is completed. Although hyperchlorination destroys Crypto's infectivity, it does not necessarily destroy the structure of the parasite.
"id": "77274cbcbab3baac2fa1f496da80ccd1787f1c45",
"source": "cdc",
"title": "None",
"url": "None"
} |
Figure 1. Recommended immunization schedule for persons aged 0 through 18 years - United States, 2014. (FOR THOSE WHO FALL BEHIND OR START LATE, SEE THE CATCH-UP SCHEDULE.) These recommendations must be read with the footnotes that follow. For those who fall behind or start late, provide catch-up vaccination at the earliest opportunity as indicated by the green bars in Figure 1. To determine minimum intervals between doses, see the catch-up schedule (Figure 2). School entry and adolescent vaccine age groups are in bold. (Figure legend: range of recommended ages for catch-up immunization.) NOTE: The above recommendations must be read along with the footnotes of this schedule. This schedule includes recommendations in effect as of January 1, 2014. Any dose not administered at the recommended age should be administered at a subsequent visit, when indicated and feasible. The use of a combination vaccine generally is preferred over separate injections of its equivalent component vaccines. Vaccination providers should consult the relevant Advisory Committee on Immunization Practices (ACIP) statement for detailed recommendations, available online at . Clinically significant adverse events that follow vaccination should be reported to the Vaccine Adverse Event Reporting System (VAERS) online () or by telephone (800-822-7967). Suspected cases of vaccine-preventable diseases should be reported to the state or local health department. Additional information, including precautions and contraindications for vaccination, is available from CDC online () or by telephone (800-CDC- ).

Hepatitis B (HepB) vaccine. (Minimum age: birth) Routine vaccination:
- Administer monovalent HepB vaccine to all newborns before hospital discharge.
- For infants born to hepatitis B surface antigen (HBsAg)-positive mothers, administer HepB vaccine and 0.5 mL of hepatitis B immune globulin (HBIG) within 12 hours of birth. These infants should be tested for HBsAg and antibody to HBsAg (anti-HBs) 1 to 2 months after completion of the HepB series, at age 9 through 18 months (preferably at the next well-child visit).
- If mother's HBsAg status is unknown, within 12 hours of birth administer HepB vaccine regardless of birth weight. For infants weighing less than 2,000 grams, administer HBIG in addition to HepB vaccine within 12 hours of birth. Determine mother's HBsAg status as soon as possible and, if mother is HBsAg-positive, also administer HBIG for infants weighing 2,000 grams or more as soon as possible, but no later than age 7 days.
# Doses following the birth dose:
- The second dose should be administered at age 1 or 2 months. Monovalent HepB vaccine should be used for doses administered before age 6 weeks.
- Infants who did not receive a birth dose should receive 3 doses of a HepB-containing vaccine on a schedule of 0, 1 to 2 months, and 6 months starting as soon as feasible. See Figure 2.
- Administer the second dose 1 to 2 months after the first dose (minimum interval of 4 weeks); administer the third dose at least 8 weeks after the second dose AND at least 16 weeks after the first dose. The final (third or fourth) dose in the HepB vaccine series should be administered no earlier than age 24 weeks.
- Administration of a total of 4 doses of HepB vaccine is permitted when a combination vaccine containing HepB is administered after the birth dose.
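The interval rules in these bullets are simple date arithmetic. The sketch below restates them in Python for illustration only; the function name and example dates are hypothetical, and the full footnotes and clinical judgment govern in practice.

```python
from datetime import date

def hepb_series_timing_ok(dose1, dose2, dose3, birth):
    """Check the HepB interval rules quoted above (simplified sketch)."""
    ok_dose2 = (dose2 - dose1).days >= 28        # >= 4 weeks after dose 1
    ok_dose3 = ((dose3 - dose2).days >= 56 and   # >= 8 weeks after dose 2
                (dose3 - dose1).days >= 112)     # AND >= 16 weeks after dose 1
    ok_age = (dose3 - birth).days >= 24 * 7      # final dose no earlier than age 24 weeks
    return ok_dose2 and ok_dose3 and ok_age

print(hepb_series_timing_ok(date(2014, 1, 1), date(2014, 2, 1),
                            date(2014, 7, 1), birth=date(2014, 1, 1)))  # True
```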
# Catch-up vaccination:
- Unvaccinated persons should complete a 3-dose series.

Rotavirus (RV) vaccines. (Minimum age: 6 weeks) Routine vaccination:
- Administer a series of RV vaccine to all infants as follows:
1. If Rotarix is used, administer a 2-dose series at 2 and 4 months of age.
2. If RotaTeq is used, administer a 3-dose series at ages 2, 4, and 6 months.
3. If any dose in the series was RotaTeq or vaccine product is unknown for any dose in the series, a total of 3 doses of RV vaccine should be administered.
# Catch-up vaccination:
- The maximum age for the first dose in the series is 14 weeks, 6 days; vaccination should not be initiated for infants aged 15 weeks, 0 days or older.
- The maximum age for the final dose in the series is 8 months, 0 days.
- For other catch-up guidance, see Figure 2.

Diphtheria and tetanus toxoids and acellular pertussis (DTaP) vaccine. (Minimum age: 6 weeks) Routine vaccination:
- Administer a 5-dose series of DTaP vaccine at ages 2, 4, and 6 months, 15 through 18 months, and 4 through 6 years. The fourth dose may be administered as early as age 12 months, provided at least 6 months have elapsed since the third dose.
# Catch-up vaccination:
- The fifth dose of DTaP vaccine is not necessary if the fourth dose was administered at age 4 years or older.
- For other catch-up guidance, see Figure 2.
Tetanus and diphtheria toxoids and acellular pertussis (Tdap) vaccine. (Minimum age: 10 years for Boostrix, 11 years for Adacel) Routine vaccination:
- Administer 1 dose of Tdap vaccine to all adolescents aged 11 through 12 years.
- Tdap may be administered regardless of the interval since the last tetanus and diphtheria toxoid-containing vaccine.
- Administer 1 dose of Tdap vaccine to pregnant adolescents during each pregnancy (preferred during 27 through 36 weeks gestation) regardless of time since prior Td or Tdap vaccination.

Haemophilus influenzae type b (Hib) conjugate vaccine. (Minimum age: 6 weeks) Routine vaccination:
- Administer a 2- or 3-dose Hib vaccine primary series and a booster dose (dose 3 or 4, depending on vaccine used in primary series) at age 12 through 15 months to complete a full Hib vaccine series.
- The primary series with ActHIB, MenHibrix, or Pentacel consists of 3 doses and should be administered at 2, 4, and 6 months of age. The primary series with PedvaxHib or COMVAX consists of 2 doses and should be administered at 2 and 4 months of age; a dose at age 6 months is not indicated.
- One booster dose (dose 3 or 4, depending on vaccine used in primary series) of any Hib vaccine should be administered at age 12 through 15 months. An exception is Hiberix vaccine. Hiberix should only be used for the booster (final) dose in children aged 12 months through 4 years who have received at least 1 prior dose of Hib-containing vaccine.
Footnotes - Recommended immunization schedule for persons aged 0 through 18 years - United States, 2014
For further guidance on the use of the vaccines mentioned below, see: . For vaccine recommendations for persons 19 years of age and older, see the adult immunization schedule.
# Additional information
- For contraindications and precautions to use of a vaccine and for additional information regarding that vaccine, vaccination providers should consult the relevant ACIP statement available online at .
- For purposes of calculating intervals between doses, 4 weeks = 28 days. Intervals of 4 months or greater are determined by calendar months.
- Vaccine doses administered 4 days or less before the minimum interval are considered valid. Doses of any vaccine administered ≥5 days earlier than the minimum interval or minimum age should not be counted as valid doses and should be repeated as age-appropriate. The repeat dose should be spaced after the invalid dose by the recommended minimum interval.
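The grace-period rule above reduces to a single comparison; a minimal sketch (function name hypothetical):

```python
def dose_counts_as_valid(actual_interval_days, minimum_interval_days):
    """Doses up to 4 days early still count; 5 or more days early must be repeated."""
    return actual_interval_days >= minimum_interval_days - 4

print(dose_counts_as_valid(25, 28))  # True: 3 days early, within the grace period
print(dose_counts_as_valid(23, 28))  # False: 5 days early, repeat the dose
```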
Haemophilus influenzae type b (Hib) conjugate vaccine (cont'd)
- For recommendations on the use of MenHibrix in patients at increased risk for meningococcal disease, please refer to the meningococcal vaccine footnotes and also to MMWR March 22, 2013; 62(RR02);1-22, available at .
# Catch-up vaccination:
- If dose 1 was administered at ages 12 through 14 months, administer a second (final) dose at least 8 weeks after dose 1, regardless of Hib vaccine used in the primary series.
- If the first 2 doses were PRP-OMP (PedvaxHIB or COMVAX) and were administered at age 11 months or younger, the third (and final) dose should be administered at age 12 through 15 months and at least 8 weeks after the second dose.
- If the first dose was administered at age 7 through 11 months, administer the second dose at least 4 weeks later and a third (and final) dose at age 12 through 15 months or 8 weeks after the second dose, whichever is later, regardless of Hib vaccine used for the first dose.
- If the first dose is administered at younger than 12 months of age and the second dose is given between 12 through 14 months of age, a third (and final) dose should be given 8 weeks later.

Pneumococcal vaccines. (Minimum age: 6 weeks for PCV13; 2 years for PPSV23) Routine vaccination:
- Administer a 4-dose series of PCV13 vaccine at ages 2, 4, and 6 months and at age 12 through 15 months.
- For children aged 14 through 59 months who have received an age-appropriate series of 7-valent PCV (PCV7), administer a single supplemental dose of 13-valent PCV (PCV13).
# Catch-up vaccination with PCV13:
- Administer 1 dose of PCV13 to all healthy children aged 24 through 59 months who are not completely vaccinated for their age. - For other catch-up guidance, see Figure 2.
# Vaccination of persons with high-risk conditions with PCV13 and PPSV23:
- All recommended PCV13 doses should be administered prior to PPSV23 vaccination if possible.
- For children 2 through 5 years of age with any of the following conditions: chronic heart disease (particularly cyanotic congenital heart disease and cardiac failure); chronic lung disease (including asthma if treated with high-dose oral corticosteroid therapy); diabetes mellitus; cerebrospinal fluid leak; cochlear implant; sickle cell disease and other hemoglobinopathies; anatomic or functional asplenia; HIV infection; chronic renal failure; nephrotic syndrome; diseases associated with treatment with immunosuppressive drugs or radiation therapy, including malignant neoplasms, leukemias, lymphomas, and Hodgkin disease; solid organ transplantation; or congenital immunodeficiency: 1. Administer 1 dose of PCV13 if 3 doses of PCV (PCV7 and/or PCV13) were received previously. 2. Administer 2 doses of PCV13 at least 8 weeks apart if fewer than 3 doses of PCV (PCV7 and/or PCV13) were received previously.
3. Administer 1 supplemental dose of PCV13 if 4 doses of PCV7 or other age-appropriate complete PCV7 series was received previously.
4. The minimum interval between doses of PCV (PCV7 or PCV13) is 8 weeks.
5. For children with no history of PPSV23 vaccination, administer PPSV23 at least 8 weeks after the most recent dose of PCV13.
- For children aged 6 through 18 years who have cerebrospinal fluid leak; cochlear implant; sickle cell disease and other hemoglobinopathies; anatomic or functional asplenia; congenital or acquired immunodeficiencies; HIV infection; chronic renal failure; nephrotic syndrome; diseases associated with treatment with immunosuppressive drugs or radiation therapy, including malignant neoplasms, leukemias, lymphomas, and Hodgkin disease; generalized malignancy; solid organ transplantation; or multiple myeloma:
1. If neither PCV13 nor PPSV23 has been received previously, administer 1 dose of PCV13 now and 1 dose of PPSV23 at least 8 weeks later.
2. If PCV13 has been received previously but PPSV23 has not, administer 1 dose of PPSV23 at least 8 weeks after the most recent dose of PCV13.
3. If PPSV23 has been received but PCV13 has not, administer 1 dose of PCV13 at least 8 weeks after the most recent dose of PPSV23.
- For children aged 6 through 18 years with chronic heart disease (particularly cyanotic congenital heart disease and cardiac failure), chronic lung disease (including asthma if treated with high-dose oral corticosteroid therapy), diabetes mellitus, alcoholism, or chronic liver disease, who have not received PPSV23, administer 1 dose of PPSV23. If PCV13 has been received previously, then PPSV23 should be administered at least 8 weeks after any prior PCV13 dose.
- A single revaccination with PPSV23 should be administered 5 years after the first dose to children with sickle cell disease or other hemoglobinopathies; anatomic or functional asplenia; congenital or acquired immunodeficiencies; HIV infection; chronic renal failure; nephrotic syndrome; diseases associated with treatment with immunosuppressive drugs or radiation therapy, including malignant neoplasms, leukemias, lymphomas, and Hodgkin disease; generalized malignancy; solid organ transplantation; or multiple myeloma.
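The PCV13/PPSV23 sequencing rules for children aged 6 through 18 years with the high-risk conditions listed above reduce to a small decision table. The sketch below restates items 1-3 for illustration (function name hypothetical; it deliberately ignores the revaccination rules in the final bullet):

```python
def next_pneumococcal_dose(had_pcv13, had_ppsv23):
    """Sequencing for high-risk children aged 6-18 years (items 1-3 above)."""
    if not had_pcv13 and not had_ppsv23:
        return "PCV13 now; PPSV23 at least 8 weeks later"
    if had_pcv13 and not had_ppsv23:
        return "PPSV23 at least 8 weeks after the most recent PCV13"
    if had_ppsv23 and not had_pcv13:
        return "PCV13 at least 8 weeks after the most recent PPSV23"
    return "both received; see the PPSV23 revaccination bullet above"

print(next_pneumococcal_dose(had_pcv13=True, had_ppsv23=False))
```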
# Inactivated poliovirus vaccine (IPV). (Minimum age: 6 weeks) Routine vaccination:
- Administer a 4-dose series of IPV at ages 2, 4, 6 through 18 months, and 4 through 6 years. The final dose in the series should be administered on or after the fourth birthday and at least 6 months after the previous dose.
# Catch-up vaccination:
- In the first 6 months of life, minimum age and minimum intervals are only recommended if the person is at risk for imminent exposure to circulating poliovirus (i.e., travel to a polio-endemic region or during an outbreak).
- If 4 or more doses are administered before age 4 years, an additional dose should be administered at age 4 through 6 years and at least 6 months after the previous dose.
- A fourth dose is not necessary if the third dose was administered at age 4 years or older and at least 6 months after the previous dose.
- If both OPV and IPV were administered as part of a series, a total of 4 doses should be administered, regardless of the child's current age. IPV is not routinely recommended for U.S. residents aged 18 years or older.

Influenza vaccines. (Minimum age: 6 months for inactivated influenza vaccine [IIV]; 2 years for live, attenuated influenza vaccine [LAIV]) Routine vaccination:
- Administer influenza vaccine annually to all children beginning at age 6 months. For most healthy, nonpregnant persons aged 2 through 49 years, either LAIV or IIV may be used. However, LAIV should NOT be administered to some persons, including 1) those with asthma, 2) children 2 through 4 years who had wheezing in the past 12 months, or 3) those who have any other underlying medical conditions that predispose them to influenza complications. For all other contraindications to use of LAIV, see MMWR 2013; 62 (No. RR-7):1-43, available at .

For children aged 6 months through 8 years:
- For the 2013-14 season, administer 2 doses (separated by at least 4 weeks) to children who are receiving influenza vaccine for the first time. Some children in this age group who have been vaccinated previously will also need 2 doses. For additional guidance, follow dosing guidelines in the 2013-14 ACIP influenza vaccine recommendations, MMWR 2013; 62 (No. RR-7):1-43, available at . - For the 2014-15 season, follow dosing guidelines in the 2014 ACIP influenza vaccine recommendations. For persons aged 9 years and older:
- Administer 1 dose.
Measles, mumps, and rubella (MMR) vaccine. (Minimum age: 12 months for routine vaccination) Routine vaccination:
- Administer a 2-dose series of MMR vaccine at ages 12 through 15 months and 4 through 6 years. The second dose may be administered before age 4 years, provided at least 4 weeks have elapsed since the first dose.
- Administer 1 dose of MMR vaccine to infants aged 6 through 11 months before departure from the United States for international travel. These children should be revaccinated with 2 doses of MMR vaccine, the first at age 12 through 15 months (12 months if the child remains in an area where disease risk is high) and the second dose at least 4 weeks later.
- Administer 2 doses of MMR vaccine to children aged 12 months and older before departure from the United States for international travel. The first dose should be administered on or after age 12 months and the second dose at least 4 weeks later.
# Catch-up vaccination:
- Ensure that all school-aged children and adolescents have had 2 doses of MMR vaccine; the minimum interval between the 2 doses is 4 weeks.
Varicella (VAR) vaccine. (Minimum age: 12 months) Routine vaccination:
- Administer a 2-dose series of VAR vaccine at ages 12 through 15 months and 4 through 6 years. The second dose may be administered before age 4 years, provided at least 3 months have elapsed since the first dose. If the second dose was administered at least 4 weeks after the first dose, it can be accepted as valid.
# Catch-up vaccination:
- Ensure that all persons aged 7 through 18 years without evidence of immunity (see MMWR 2007;56, available at ) have 2 doses of varicella vaccine. For children aged 7 through 12 years, the recommended minimum interval between doses is 3 months (if the second dose was administered at least 4 weeks after the first dose, it can be accepted as valid); for persons aged 13 years and older, the minimum interval between doses is 4 weeks.
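The age-dependent intervals in the varicella catch-up bullet can be restated as a small lookup; a sketch for illustration (function name hypothetical):

```python
def varicella_catch_up_intervals(age_years):
    """Return (recommended_days, accepted_as_valid_days) between the 2 doses."""
    if 7 <= age_years <= 12:
        return 90, 28   # 3 months recommended; >= 4 weeks still accepted as valid
    if 13 <= age_years <= 18:
        return 28, 28   # minimum interval is 4 weeks
    raise ValueError("sketch covers catch-up ages 7 through 18 only")

print(varicella_catch_up_intervals(10))  # (90, 28)
```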
Hepatitis A (HepA) vaccine. (Minimum age: 12 months) Routine vaccination:
- Initiate the 2-dose HepA vaccine series at 12 through 23 months; separate the 2 doses by 6 to 18 months.
- Children who have received 1 dose of HepA vaccine before age 24 months should receive a second dose 6 to 18 months after the first dose.
- For any person aged 2 years and older who has not already received the HepA vaccine series, 2 doses of HepA vaccine separated by 6 to 18 months may be administered if immunity against hepatitis A virus infection is desired.
# Catch-up vaccination:
- The minimum interval between the two doses is 6 months.
# Special populations:
- Administer 2 doses of HepA vaccine at least 6 months apart to previously unvaccinated persons who live in areas where vaccination programs target older children, or who are at increased risk for infection.

Human papillomavirus (HPV) vaccines. (Minimum age: 9 years) Routine vaccination:
- Administer a 3-dose series of HPV vaccine on a schedule of 0, 1-2, and 6 months to all adolescents aged 11 through 12 years. Either HPV4 or HPV2 may be used for females, and only HPV4 may be used for males.
- The vaccine series may be started at age 9 years.
- Administer the second dose 1 to 2 months after the first dose (minimum interval of 4 weeks); administer the third dose 24 weeks after the first dose and 16 weeks after the second dose (minimum interval of 12 weeks).
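The HPV dose-timing rules in the last bullet, restated as date arithmetic for illustration (function name and dates hypothetical; 16 weeks after dose 2 is the routine spacing, 12 weeks the minimum):

```python
from datetime import date

def hpv_series_timing_ok(dose1, dose2, dose3):
    """Check the HPV dose-timing rules quoted above (simplified sketch)."""
    return ((dose2 - dose1).days >= 28 and      # dose 2: >= 4 weeks after dose 1
            (dose3 - dose1).days >= 24 * 7 and  # dose 3: >= 24 weeks after dose 1
            (dose3 - dose2).days >= 12 * 7)     # and >= 12 weeks after dose 2

print(hpv_series_timing_ok(date(2014, 1, 1), date(2014, 3, 1), date(2014, 7, 1)))  # True
```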
# Catch-up vaccination:
- Administer the vaccine series to females (either HPV2 or HPV4) and males (HPV4) at age 13 through 18 years if not previously vaccinated. - Use recommended routine dosing intervals (see above) for vaccine series catch-up.
Meningococcal conjugate vaccines:
1. If MenHibrix is administered to achieve protection against meningococcal disease, a complete age-appropriate series of MenHibrix should be administered.
2. If the first dose of MenHibrix is given at or after 12 months of age, a total of 2 doses should be given at least 8 weeks apart to ensure protection against serogroups C and Y meningococcal disease.
3. For children who initiate vaccination with Menveo at 7 months through 9 months of age, a 2-dose series should be administered with the second dose after 12 months of age and at least 3 months after the first dose.
4. For other catch-up recommendations for these persons, refer to MMWR 2013; 62(RR02);1-22, available at .
"id": "e72ae4a50cb407b43b17c85275c6c8fdcf0ff590",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Recommendations for Blood Lead Screening of Young Children Enrolled in Medicaid:
Targeting a Group at High Risk

Advisory Committee on Childhood Lead Poisoning Prevention (ACCLPP)

# Summary

Children aged 1-5 years enrolled in Medicaid are at increased risk for having elevated blood lead levels (BLLs). According to estimates from the National Health and Nutrition Examination Survey (NHANES) (1991-1994), Medicaid enrollees accounted for 83% of U.S. children aged 1-5 years who had BLLs ≥20 µg/dL. Despite longstanding requirements for blood lead screening in the Medicaid program, an estimated 81% of young children enrolled in Medicaid had not been screened with a blood lead test. As a result, most children with elevated BLLs are not identified and, therefore, do not receive appropriate treatment or environmental intervention.
To ensure delivery of blood lead screening and follow-up services for young children enrolled in Medicaid, the Advisory Committee on Childhood Lead Poisoning Prevention (ACCLPP) recommends specific steps for health-care providers and states. Health-care providers and health plans should provide blood lead screening and diagnostic and treatment services for children enrolled in Medicaid, consistent with federal law, and refer children with elevated BLLs for environmental and public health follow-up services.
States should change policies and programs to ensure that young children enrolled in Medicaid receive the screening and follow-up services to which they are legally entitled. Toward this end, states should a) ensure that their own Medicaid policies comply with federal requirements, b) support health-care providers and health plans in delivering screening and follow-up services, and c) ensure that children identified with elevated BLLs receive essential, yet often overlooked, environmental follow-up care. States should also monitor screening performance and BLLs among young children enrolled in Medicaid. Finally, states should implement innovative blood lead screening strategies in areas where conventional screening services have been insufficient. This report provides recommendations for improved screening strategies and relevant background information for health-care providers, state health officials, and other persons interested in improving the delivery of lead-related services to young children served by Medicaid.
# BACKGROUND
High blood lead levels (i.e., ≥70 µg/dL) can cause serious health effects, including seizures, coma, and death (1). Blood lead levels (BLLs) as low as 10 µg/dL have been associated with adverse effects on cognitive development, growth, and behavior among children aged 1-5 years (1). Since the virtual elimination of lead from gasoline and other consumer products in the United States, lead-based paint in homes remains the major source of lead exposure among U.S. children (1). Most commonly, children are exposed through chronic ingestion of lead-contaminated dust (2).
Because children with elevated BLLs in the 10-25 µg/dL range do not develop clinical symptoms, screening is necessary to identify children who need environmental or medical intervention to reduce their BLLs. CDC has recommended specific interventions to reduce elevated BLLs (2,3 ). To ensure delivery of blood lead screening and follow-up services for young children enrolled in Medicaid, the Advisory Committee on Childhood Lead Poisoning Prevention (ACCLPP) has recommended specific steps for health-care providers and states (Box).
ACCLPP also is developing updated recommendations of specific guidelines for environmental, medical, developmental, nutritional, and educational interventions for children with elevated BLLs. ACCLPP regularly advises CDC regarding new scientific knowledge and technological developments and their practical implications for childhood lead poisoning prevention efforts.
# Advisory Committee on Childhood Lead Poisoning Prevention (ACCLPP) Recommendations
# INTRODUCTION
# Change in the Epidemiology of Lead Poisoning
Despite the decline in average BLLs among the U.S. population, childhood lead exposure remains a major environmental health problem in the United States (4). During 1991-1994, CDC estimated that 890,000 (4.4%) children aged 1-5 years had elevated BLLs (≥10 µg/dL), based on data from Phase 2 of the National Health and Nutrition Examination Survey (NHANES) III (4). The prevalence of elevated BLLs was 5.9% among children aged 1-2 years and 3.5% among children aged 3-5 years (4). Children aged 1-5 years were more likely to have elevated BLLs if they were poor, of non-Hispanic black race, or lived in older housing (4). The prevalence of elevated BLLs was higher among non-Hispanic black children (21.9%) and Mexican-American children (13.0%) living in housing built before 1946 than among non-Hispanic white children (5.6%) living in such older housing. Risk for an elevated BLL was higher among low-income children living in housing built before 1946 (16.4%) than among high-income children living in older housing (0.9%) (4).
In response to NHANES III information regarding the distribution and prevalence of lead poisoning among U.S. children, CDC changed its national blood lead screening recommendations to a state-based approach. In Screening Young Children for Lead Poisoning: Guidance for State and Local Public Health Officials, issued in 1997, CDC called on state health departments to develop plans to ensure screening of all children at high risk for having elevated BLLs (2 ). To develop such plans, CDC recommended that state health departments assess local data on BLLs and risk factors. If no statewide plan exists, states should screen virtually all young children, as recommended in the 1991 edition of Preventing Lead Poisoning in Young Children (2,3 ). Because young children living in poverty are at high risk for elevated BLLs, CDC recommended various strategies for increasing blood lead screening for all such children, including young children enrolled in Medicaid (2 ). Specifically, CDC recommended that children who receive Medicaid benefits should be screened unless there are reliable, representative blood lead data that demonstrate the absence of lead exposure among this population.
# Medicaid Children at High Risk for Having Elevated Blood Lead Levels
After publication of CDC's 1997 guidelines (2), CDC and the U.S. General Accounting Office (GAO) further analyzed data from Phase 2 of NHANES III, confirming that children enrolled in Medicaid are at high risk for having elevated BLLs (≥10 µg/dL) (5). An estimated 535,000 children enrolled in Medicaid had elevated BLLs (Table 1), with a prevalence among children aged 1-5 years (9%) three times greater than that among young children not enrolled in Medicaid (3%) (5). Medicaid enrollees accounted for 60% of children aged 1-5 years who had BLLs ≥10 µg/dL and 83% of young children with levels ≥20 µg/dL (5,6).
This analysis also documented low screening rates among young children enrolled in Medicaid (5 ), with 81% of those aged 1-5 years and 79% of those aged 1-2 years not receiving a blood lead test (5,7 ). Of an estimated 535,000 children aged 1-5 years who were enrolled in Medicaid and had elevated BLLs, 352,000 (65%) had not been screened with a blood lead test and, therefore, did not receive appropriate medical and public health case management, follow-up care, and environmental services to reduce their BLLs (Table 2) (5 ). Several states have also reported low screening rates for children enrolled in Medicaid (8 ).
# Health Care Financing Administration (HCFA) Policies for Blood Lead Screening of Children Enrolled in Medicaid
Current HCFA policies require that all young children enrolled in Medicaid be screened with a blood lead test (i.e., federal Medicaid requirements). In December 1999, the American Academy of Pediatrics (AAP) supported this policy, emphasizing the higher risk for elevated BLLs among children enrolled in Medicaid (9 ).
Since 1989, federal law has required states to screen children enrolled in Medicaid for elevated BLLs as part of prevention services provided through the Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) program. The EPSDT program provides screening and entitles children to any federally allowable diagnostic and treatment service necessary to correct the condition found by the screening (10 ). Details of blood lead screening requirements are periodically revised by HCFA, which administers the Medicaid program at the federal level.
Federal Medicaid regulations were updated in 1998 to require that all children receive a blood lead screening test at ages 12 and 24 months. All children aged 36-72 months who have not previously been screened must also receive a blood lead test (11). A blood lead test is the only required screening element. There is no waiver to this Medicaid requirement for blood lead screening at this time.
# RECOMMENDATIONS TO ENSURE SCREENING AND FOLLOW-UP CARE FOR CHILDREN ENROLLED IN MEDICAID
To ensure blood lead screening and appropriate follow-up care for young children at risk for lead poisoning and enrolled in Medicaid, ACCLPP makes the following recommendations for health-care providers and states, as well as other agencies that administer Medicaid programs (e.g., those serving Medicaid-eligible Native Americans). According to CDC recommendations, if there are no reliable blood lead data demonstrating the absence of lead exposure among this population, health-care providers should a) screen all young children enrolled in Medicaid with a blood lead test in accordance with HCFA policy, b) provide medical management and care, and c) refer children with elevated BLLs for environmental and public health case management.
# ACCLPP Recommendations for Health-Care Providers
- All children enrolled in Medicaid should be screened with a blood lead test at ages 12 and 24 months or at ages 36-72 months if they have not previously been screened.
ACCLPP recommends administration of a blood lead screening test for all children enrolled in Medicaid at ages 12 and 24 months; children who have not previously been screened should be tested at ages 36-72 months (11). Administering a risk-assessment questionnaire instead of a blood lead test does not meet Medicaid requirements.
If children are exposed to lead, their BLLs tend to increase during ages 0-2 years and peak at ages 18-24 months (12). Therefore, screening is recommended at both ages 1 and 2 years to identify children who need medical management and environmental and public health case management (2). Identifying a child with an elevated BLL at age 1 year might prevent additional increases during ages 1-2 years. In addition, a child with a BLL <10 µg/dL at age 1 year might have an elevated level by age 2 years, underscoring the importance of rescreening at age 2 years. For example, among children at selected clinics in high-risk areas of Chicago in 1997, the prevalence of elevated BLLs (≥10 µg/dL) was 17%. Thirty-nine percent of children whose BLLs were <10 µg/dL at age 1 year (during 1995-1996) were retested at age ≥2 years (during 1996-1997), and 21% had developed elevated BLLs since their initial screening. Screening is recommended for previously untested children aged <6 years to rule out subclinically elevated BLLs during critical stages of development.
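The screening schedule in this recommendation reduces to a simple rule; a sketch for illustration (function name hypothetical):

```python
def blood_lead_test_due(age_months, previously_screened):
    """Medicaid screening schedule described above (simplified sketch)."""
    if age_months in (12, 24):
        return True   # screen all Medicaid-enrolled children at 12 and 24 months
    if 36 <= age_months <= 72 and not previously_screened:
        return True   # catch-up test for children never screened before
    return False

print(blood_lead_test_due(24, previously_screened=True))   # True: test again at 24 months
print(blood_lead_test_due(48, previously_screened=False))  # True: never screened
```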
- Children identified with elevated BLLs require evaluation and referral for appropriate follow-up services.
Children identified with elevated BLLs should be evaluated and treated in accordance with CDC guidelines for follow-up care, including care coordination and public health, medical, and environmental management (2,3,13 ). Few children will have BLLs high enough to warrant intensive medical treatment (e.g., chelation therapy) (13 ). However, many children with elevated BLLs will need follow-up services, including more frequent blood lead testing, environmental investigation, case management, and lead hazard control (2,3 ). In many jurisdictions, public health or environmental agencies are available to provide or coordinate follow-up care for children with elevated BLLs who are referred by health-care providers. ACCLPP is developing updated recommendations for environmental, medical, developmental, nutritional, and educational interventions for children with elevated BLLs.
# ACCLPP Recommendations for States and Other Agencies That Administer Medicaid Programs
The actions recommended by ACCLPP for states (and other agencies administering Medicaid programs) establish the framework necessary to support and, in some cases, help health-care providers and administrators of managed-care plans provide the required blood lead screening and follow-up services to children enrolled in Medicaid. (The considerable variation in the state-by-state design and administration of Medicaid programs precludes assignment of specific agency responsibility.) Implementing some of the following strategies will require establishing new roles and partnerships for Medicaid agencies and health departments.
- Ensure that state Medicaid policies and program materials on blood lead screening are in compliance with federal Medicaid requirements.
According to an audit by GAO, 24 of 51 state Medicaid program policies were less rigorous than HCFA requirements (6 ). States should review their EPSDT policies and program documentation, particularly health-care provider manuals and EPSDT screening schedules, to ensure they comply with HCFA policy.
- Ensure that state Medicaid managed-care contracts explicitly include federal blood lead screening requirements and provide for follow-up services for children identified with elevated BLLs.
In 1997, of 42 state contracts with Medicaid managed care organizations (MCOs) evaluated by George Washington University, 20 (48%) discussed lead-related services, and 15 (36%) discussed blood lead screening (14 ). Few contracts specified a recommended frequency for screening services or addressed the obligation to provide medical and environmental services for children with elevated BLLs. Contracts that explicitly describe mandated health-care services create legally enforceable duties of the contractor more effectively than contracts that refer readers to the underlying statutory provision (14 ).
In states where young Medicaid beneficiaries are receiving care from MCOs, state Medicaid agencies should review existing contracts to ensure explicit inclusion of blood lead screening and follow-up services for children with elevated BLLs. These contracts also present an opportunity to require reporting of blood lead screening test results and to establish quality assurance measures. Particularly important are provisions for state oversight and feedback to the health-care provider regarding performance. To help states develop Medicaid managed-care contracts that promote blood lead screening and lead poisoning prevention, sample purchasing specifications are available for childhood lead poisoning prevention services (15). In developing their managed-care contracts, states should decide whether to permit health-care providers to refer Medicaid-enrolled children to off-site laboratories to have their blood drawn, a practice that imposes an additional burden on families and could cause lower screening rates.
- Provide information to health-care providers regarding Medicaid blood lead screening policies and the data that justify them.
Health-care providers are more likely to implement clinical practice guidelines if they perceive the guidelines are based on scientific evidence on how to improve care (16 ). Physicians' perceptions regarding the importance of lead poisoning also influence implementation of screening guidelines (6,17 ). In addition, because CDC, AAP, and HCFA policies have been revised multiple times in the recent past, some health-care providers might be unaware of blood lead screening recommendations. State Medicaid and public health agencies should collaborate with medical professional associations and other stakeholders to develop healthcare provider education initiatives. Such educational programs should include information regarding a) the content of and scientific basis for blood lead screening recommendations, including differences between federal regulations, policies, and requirements; b) state Medicaid policy and contracts; c) state laws; and d) state screening plans. Educational initiatives also could promote reporting of blood lead test results by health-care providers and build community support for childhood lead poisoning prevention.
- Ensure that health-care providers receive adequate Medicaid EPSDT program reimbursement and capitation rates for blood lead screening and follow-up services.
Health-care providers need adequate reimbursement for their medical services, as do MCOs, which monitor their expenditures closely (18 ). Medicaid blood lead screening services are usually provided by physicians and MCOs as part of a larger package of prevention services for children (i.e., the EPSDT program) and are reimbursed as a package. In states where the list of required EPSDT services has been expanded without compensatory increases in reimbursement rates, there are substantial disincentives to providing the full range of EPSDT services or participating in the Medicaid program. All states should review the reimbursement rates and capitation rates for EPSDT services and blood lead screening and treatment services to ensure that reasonable compensation is provided to health-care providers and MCOs. In addition, other resources could be made available to health-care providers to promote blood lead screening. For example, health-care providers working in medically underserved areas with children at high risk for elevated BLLs could receive hand-held lead screening devices at no charge, and arrangements should be made for screening results to be reported to public health authorities.
- Ensure that children identified with elevated BLLs receive environmental followup in addition to other components of case management.
For blood lead screening to be a meaningful prevention service, identification of a child with an elevated BLL must trigger services that will lower the child's BLL. Any treatment regimen that does not eliminate lead exposure is inadequate (19 ). Services needed by a child with an elevated BLL can include environmental investigation to identify the source of the exposure and lead hazard control to eliminate its pathway, along with case management services to ensure that the child receives all necessary public health, environmental, medical, and social services (2,3 ).
Children enrolled in Medicaid are entitled by federal law to all necessary follow-up services allowable under the Medicaid program (10). Current HCFA policy requires that all state Medicaid programs cover a one-time environmental investigation to determine the source of lead and the necessary case-management services (Timothy M. Westmoreland, HCFA, personal communication, October 22, 1999) (11). Yet many states have failed to establish reimbursement mechanisms for these covered services (20). As of early 1999, only 22 state Medicaid agencies reported covering environmental investigation, whereas 20 reported covering case management (6,20).
HCFA policy on coverage of a one-time environmental investigation to determine the source of lead is limited to the health professional's time, as well as activities during an on-site investigation of the child's home or primary residence. This policy effectively allows activities such as visual assessment of the home, interview of occupants, and on-site X-ray fluorescence (XRF) analysis of lead paint content, when analyzers are available (Timothy M. Westmoreland, HCFA, personal communication, October 22, 1999). HCFA policy prohibits state Medicaid programs from covering the costs of environmental laboratory analyses (e.g., testing paint, dust, or water samples for lead content). To receive these HUD funds, jurisdictions must develop plans and submit applications; information is available on the Internet at .
- Measure health-care provider performance on blood lead screening, give feedback to providers, and consider incentives and other quality-control measures to promote lead screening and ensure follow-up care.
Measuring performance and providing feedback on the delivery of health-care services affect the patterns of both health-care provider and health plan practices, including increasing screening rates (16,18 ). The widely used Health Plan Employer Data and Information Set (HEDIS) is based on the premise that measurement and reporting of plan performance will increase commitment to the measured services (22 ). In 1997, of 42 state contracts with Medicaid MCOs evaluated by George Washington University, 11 (26%) contracts discussed quality-control or performance measures related specifically to lead, and 10 (24%) contained lead-specific reporting requirements (14 ). State Medicaid agencies should measure the blood lead screening performance of participating health plans and health-care providers, provide feedback on their performance, and develop collaborative approaches for improving performance. State Medicaid agencies should consider focused quality-control or incentive measures to promote federally mandated clinical practices. Independent chart audits, automated reminder systems, visible enforcement actions, and task-specific financial incentives or penalties might be appropriate in some instances to improve performance.
For example, screening rates in Iowa increased after reminders were sent to health-care providers (Rita Gergely, Iowa Department of Public Health, personal communication, December 1999). In addition, the Iowa Department of Public Health is considering a plan to identify health-care providers' claims for Medicaid reimbursement for EPSDT screening visits for which there are no associated claims for blood lead tests. Local programs and federal Title V Maternal and Child Health programs would receive this information, which would be used to inform identified health-care providers of the Medicaid policy on blood lead screening.
- Ensure that state information systems allow tracking of blood lead screening and prevalence of elevated BLLs among young children enrolled in Medicaid.
In late 1997, GAO reported that only 12 states could readily provide information regarding the number of children enrolled in Medicaid, as well as those who had been screened for and identified as having elevated BLLs (6 ). HCFA policy now requires states to report the annual number of blood lead screening tests provided to Medicaid-enrolled children, beginning FY 1999 (revised HCFA form 416). State information systems should be developed or enhanced to a) monitor blood lead screening rates, b) meet the HCFA policy reporting requirement, c) assess the prevalence of elevated BLLs among children enrolled in Medicaid, and d) ensure that blood lead tests are reported systematically to public health agencies. Some states are shifting from information systems for fee-for-service claims to systems for managed care; other states must work with both systems. Some states do not have public health reporting mechanisms to monitor blood lead screening results, and most states have not linked Medicaid enrollment information and blood lead test results.
Information systems are being enhanced in some states. For example, Illinois, Iowa, Connecticut, North Carolina, Wisconsin, and Utah are developing systems to link Medicaid records and blood lead screening data. Iowa has developed a method for the Title V program to import blood lead screening data from the state's childhood lead poisoning prevention program. Rhode Island has developed an integrated pediatric public health tracking and information system (i.e., KidsNet) for pediatric preventive health services (e.g., blood lead screening and vaccination) (23 ).
- Establish partnerships between Medicaid agencies and other programs that serve children enrolled in Medicaid to ensure these children receive appropriate services.
Some obstacles to blood lead screening for children enrolled in Medicaid are not unique to blood lead screening but reflect the challenge of delivering preventive care to hard-to-reach segments of this population. To increase screening rates, some state and local programs are developing blood lead screening initiatives with other public programs. Some states are collaborating with the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), Head Start, or other programs for families receiving government assistance or with programs delivering preventive health services to Medicaid-enrolled children.
For example, Iowa is working to establish partnerships with its Title V program and the WIC program. The concerted efforts and copious resources dedicated by immunization programs to increase vaccination coverage among young children in recent years are showing impressive results, including for children living in poverty. In 1997, vaccination coverage rates for U.S. children aged 19-35 months living in poverty ranged from 86% for measles-containing vaccine to 93% for three doses of diphtheria and tetanus toxoids and pertussis vaccine (including 80% for the newer hepatitis B vaccine) (24). Public health agencies should review the literature in this field, as well as their own program successes, to identify models and links with other programs that could be adapted to improve blood lead screening performance for Medicaid-enrolled children.
- Use new blood lead screening technologies to improve blood lead screening services.
In 1997, the U.S. Food and Drug Administration (FDA) cleared for marketing a hand-held blood lead testing device for health-care facilities and physician laboratories certified under the Clinical Laboratory Improvement Amendments (CLIA)* (25). This device provides "real-time" blood lead screening results, and other portable devices are in development. Use of these portable lead testing devices can improve access to blood lead screening. These devices allow immediate feedback to families and eliminate the delay associated with a follow-up visit. If the test result shows an elevated BLL, the result can be confirmed by immediate retesting, and the family can be provided lead education and help to limit lead exposure. State Medicaid and public health agencies should collaborate to develop innovative ways to use this and other new screening technologies to enhance lead poisoning prevention services.

*In 1988, CLIA established minimum quality standards for all laboratories. Based on the complexity of the testing performed, laboratories must comply with various quality-control regulations. CLIA categorizes the hand-held lead screening device as "moderately complex." This designation limits the device's use to certified laboratories participating in proficiency testing programs and meeting other federal criteria. Thus, most physicians' offices cannot use this device because most are not certified to conduct this type of testing.
For public health facilities, CLIA requirements for use of this device can be met through collaboration with state public health laboratories, which can oversee quality control, coordinate proficiency testing, and provide training and certification of personnel. When hand-held devices move blood lead analysis from traditional laboratories to the field, information systems should be established to ensure that blood test results are reported systematically to the appropriate public health agencies so that valuable screening data are included in state tracking systems. Ideally, new blood lead testing devices for field or office use would provide automatic collection and reporting of blood lead test results.
# FUTURE CONSIDERATIONS
HCFA policy requires blood lead screening for all young children enrolled in Medicaid and does not currently permit any variation from this requirement. However, HCFA will be working with ACCLPP to develop an approach that would permit targeted screening of Medicaid-enrolled children in states where adequate data support such a policy. ACCLPP, in conjunction with CDC, has agreed to assist HCFA in considering this approach by developing scientifically based criteria for targeted screening. Targeted screening should be considered only on the basis of reliable and representative blood lead data (e.g., from screening and population surveys).
# CONCLUSION
During 1991-1994, an estimated 535,000 U.S. children aged 1-5 years in the Medicaid program had elevated BLLs (≥10 µg/dL). Of children aged 1-5 years with BLLs ≥20 µg/dL, 83% were enrolled in Medicaid. Because most young children enrolled in Medicaid have not been screened with a blood lead test as required by law, an estimated 352,000 children with elevated BLLs have never been identified or treated. Failure to comply with Medicaid blood lead screening requirements forfeits the opportunity to use this targeted risk group to efficiently identify children with elevated BLLs who could benefit from medical and public health follow-up services.
To improve performance in this area, health-care providers and health plans should provide blood lead screening and diagnostic and treatment services for children enrolled in Medicaid and refer children with elevated BLLs for environmental and public health follow-up services. At the same time, states should ensure that young children enrolled in Medicaid receive the appropriate blood lead screening and follow-up care to which they are legally entitled.
"id": "d8ea08207d71e4a470c0741d6293966173242bc6",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Introduction
Surveillance guidelines that include standardized case definitions for reporting of notifiable infectious diseases are important public health tools that contribute to the assessment of disease trends, measurement of intervention effectiveness, and detection of disease outbreaks (1). Comparable surveillance guidelines for the classification and reporting of adverse reactions after vaccination are nominal and have not commonly included standardized case definitions (2,3). The term vaccine-related "complication" is often used interchangeably with the terms "side effects" or "adverse reaction" and should be distinguished from the term "adverse event." An adverse reaction is an untoward effect that occurs after a vaccination and is extraneous to the vaccine's primary purpose of producing immunity (e.g., eczema vaccinatum). Adverse reactions have been demonstrated to be caused by the vaccination. In contrast, adverse events are untoward effects observed or reported after vaccinations, but a causal relation between the two has yet to be established. This report focuses on adverse reactions known to be caused by smallpox vaccine (with the exception of dilated cardiomyopathy, which has not been shown to have a causal relation) on the basis of scientific evidence. Uniform criteria for classification of adverse reaction reports after smallpox (vaccinia) vaccination have been established. Criteria for dilated cardiomyopathy, an adverse event (not shown to have a causal relation with smallpox vaccination), also have been established. These case definitions and reporting guidelines were used by CDC and the Office of the Assistant Secretary of Defense for Health Affairs during the mandatory Department of Defense (DoD) and voluntary U.S. DHHS smallpox vaccination programs that were designed to increase national preparedness in the event of a biologic terrorism attack (4-6).
Adverse reactions caused by smallpox vaccination range from mild and self-limited to severe and life-threatening. During the recent smallpox vaccination programs, CDC, DoD, and the joint Advisory Committee on Immunization Practices (ACIP)-Armed Forces Epidemiological Board (AFEB) Smallpox Vaccine Safety Working Group (SVS WG) relied on surveillance data from the smallpox pre-eradication era to estimate frequencies of adverse reactions expected during these vaccination programs. These estimates might be limited because the targeted population during the 1960s was mostly children who had never been previously vaccinated; the recent program targeted healthy adults, some of whom had received smallpox vaccines (6). Furthermore, adverse reactions during the 1960s were classified and reported by providers on the basis of subjective clinical diagnosis, and standard collection or analytical tools were not applied to the clinical data (7-11). Without explicit criteria for identifying cases for public health surveillance, state health departments and individual practitioners often apply different criteria for reporting similar cases (1). Surveillance data for adverse reactions after smallpox vaccination must be aggressively pursued and standardized to assess accurately the frequency of adverse events after smallpox vaccination.
This report describes the case definitions used to classify reported adverse events during the DHHS smallpox vaccination program. The overall safety surveillance system and related findings are reported elsewhere (12).
# Reporting Guidelines
These surveillance case definitions establish reporting criteria for prospective or retrospective classification of cases. Clinical, laboratory, and epidemiologic information are necessary for accurate case classification, which could not be obtained without cooperation and information exchange among treating health-care providers, state health officials, laboratorians, and CDC. Any adverse event after smallpox vaccination should be reported to state health departments and the Vaccine Adverse Events Reporting System (VAERS), particularly those events known to be adverse reactions (Table 1). Any adverse reaction that requires treatment with vaccine immune globulin (VIG) or cidofovir should be reported immediately, and adverse events that meet the regulatory criteria for "serious" (i.e., those resulting in hospitalization, permanent disability, life-threatening illness, or death) (13) should be reported within 48 hours; all other events should be reported within 1 week (14). Reports can be submitted to VAERS at http://www.vaers.hhs.gov, 877-721-0366, or P.O. Box 1100, Rockville, MD 20849-1100. Report forms and assistance with reporting are available from VAERS (800-822-7967).
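The reporting time frames above amount to a three-way triage; restated for illustration (function name hypothetical):

```python
def vaers_reporting_deadline(requires_vig_or_cidofovir, is_serious):
    """Reporting time frames described above (simplified sketch).

    `is_serious` follows the regulatory definition: hospitalization,
    permanent disability, life-threatening illness, or death.
    """
    if requires_vig_or_cidofovir:
        return "report immediately"
    if is_serious:
        return "report within 48 hours"
    return "report within 1 week"

print(vaers_reporting_deadline(False, is_serious=True))  # report within 48 hours
```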
# Case Definition and Classification
ACIP-AFEB SVS WG was responsible for safety oversight of the DHHS and DoD smallpox preparedness programs. The majority of the case definitions for vaccinia adverse reactions were drafted by the Vaccinia Case Definition Development working group in collaboration with ACIP-AFEB SVS WG. The Vaccinia Case Definition Development working group membership included CDC and DoD medical epidemiologists, smallpox eradication experts, ophthalmologists, dermatologists, cardiologists, and infectious-disease specialists. These work groups contributed to the development of case definitions by completing literature searches, translating publications, coordinating or participating in meetings, collecting or analyzing data, investigating cases, providing subject-matter expertise, and drafting and revising case definitions. The case definition for fetal vaccinia was developed by CDC and DoD for use in the development of the National Smallpox Vaccine in Pregnancy Registry (15).
For all cases, exposure to vaccinia is required; vaccination, close contact with a recent vaccinee, or intrauterine exposure can fulfill this criterion. Vaccinia virus can be transmitted from the vaccination site to close contacts of persons who received smallpox vaccine, and these contacts can experience the same adverse reactions as vaccinees.
Smallpox vaccine adverse events can be divided into several categories. Localized reactions include a superinfection of the vaccination site or regional lymph nodes and robust take (RT). Unintentional transfer of vaccinia virus includes transfer from the vaccination site to elsewhere on the vaccinee's body, which is called inadvertent autoinoculation. When the virus is transferred from the vaccinee to a close contact, it is called contact transmission. In either case, if the virus is transferred to the eye and surrounding orbit, it is referred to as ocular vaccinia. Diffuse dermatologic complications include two groups. The first group (hypersensitivity rashes) includes nonspecific postvaccination rash, erythema multiforme minor, and Stevens-Johnson syndrome. These lesions are not thought to contain vaccinia virus, and because these terms are defined elsewhere in the dermatologic literature, they are not included in this report. The second group of diffuse dermatologic complications is thought to be caused by replicating vaccinia virus that can be recovered from the rash of generalized vaccinia (GV) (usually a benign, self-limiting condition), eczema vaccinatum (EV) (often associated with substantial morbidity), and progressive vaccinia (PV) (which is generally fatal). Rare adverse reactions include fetal vaccinia and postvaccinial central nervous system diseases such as postvaccinial encephalitis or encephalomyelitis. Other reactions previously reported but not well described include the newly characterized cardiac adverse reaction myo/pericarditis (M/P) and the newly described cardiac adverse event dilated cardiomyopathy (DCM), which has not yet been demonstrated to be etiologically linked to vaccination.
# Localized Reactions
# Superinfection of the Vaccination Site or Regional Lymph Nodes
Vaccination progression and normal local reactions are difficult to distinguish from a superinfection of the vaccination site or regional lymph nodes. Secondary infections (i.e., superinfections) of the vaccination site are uncommon (rate: 0.55 per 10,000 vaccinees) (16) and are typically mild to moderate in clinical severity (Box 1). Persons at greatest risk are children and those who frequently manipulate and contaminate the vaccination site. Occlusive dressings might lead to maceration and increased risk for infection. Secondary streptococcal bacterial infection has been reported (9), but anaerobic organisms and mixed infections also might be expected.
Distinguishing superinfection of the vaccination site or regional lymph nodes from normal reactions can be particularly challenging because a bacterial cellulitis and a variant of the normal major reaction (an RT) have similar signs and symptoms.
# Robust Take
An RT is a vaccinial cellulitis and is defined as >3 inches (7.5 cm) of redness with swelling, pain, and warmth at the vaccination site. These symptoms peak on days 6-12 postvaccination and regress within the following 24-72 hours. RTs can occur in up to 16% of smallpox vaccinees (16,17). Suspected bacterial cellulitis after smallpox vaccination is often treated empirically with antibiotics without a period of observation, and bacterial or other cultures are rarely obtained. As clinicians have gained experience with smallpox vaccination, some have ceased treating empirically with antibiotics in favor of close observation. Clinical observations suggest that the majority of vaccinees' local symptoms resolved without intervention, leading providers to conclude that these cases were RTs (CDC, unpublished data, 2002). In contrast to an RT, superinfections refer to cellulitis caused by agents other than vaccinia.
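Because an RT is defined by lesion size and a narrow time course, the distinction from suspected bacterial cellulitis can be framed as a consistency check against those parameters. A minimal illustrative sketch follows (hypothetical function and argument names; not a clinical decision tool):

```python
def consistent_with_robust_take(redness_cm: float,
                                peak_day: int,
                                regression_hours: int) -> bool:
    """Check local findings against the RT definition above: >7.5 cm (3 in)
    of redness peaking on days 6-12 postvaccination and regressing within
    the following 24-72 hours."""
    return (redness_cm > 7.5
            and 6 <= peak_day <= 12
            and 24 <= regression_hours <= 72)


print(consistent_with_robust_take(9.0, peak_day=8, regression_hours=48))   # True
print(consistent_with_robust_take(9.0, peak_day=3, regression_hours=48))   # False
```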
# Unintentional Transfer of Vaccinia Virus

# Inadvertent Autoinoculation
Unintentional transfer of vaccinia virus includes transfer from the vaccination site (or probable site of inoculation in a person infected with vaccinia through contact transmission) to elsewhere on the vaccinee's (or contact's) body, which is called inadvertent autoinoculation (Box 2). Smallpox vaccinees or contacts can transfer vaccinia virus to their hands or fomites, which become a source for infection elsewhere on the body. The most common nonocular sites are the face, nose, mouth, lips, genitalia, and anus. Lesions at autoinoculation sites progress through the same stages (e.g., papular, vesicular, pustular, crusting, and scab) as the vaccination site. When autoinoculation occurs >5 days postvaccination, the developing immune response might attenuate the lesions and their progression. Persons at highest risk for inadvertent autoinoculation are children aged 1-4 years and those with disruption of the epidermis, including but not limited to abrasions and burns (17).
# Contact Transmission
When the virus is transferred from the vaccinee to a close contact, this transmission is termed contact transmission. Persons in close contact with a recent vaccinee or associated vectors (e.g., distant lesions on a vaccinee resulting from inadvertent autoinoculation, clothing, bedding, or bandages contaminated by vaccinia) might acquire vaccinia infection. Vaccinia virus is shed from the vaccination site or from distant lesions caused by autoinoculation, GV, EV, or PV (Box 3). Viral shedding might occur until the scab detaches from the vaccination site or distant lesions; virus can survive for several days on clothing, bedding, or other fomites (18). Although virus exists in the scab, it is bound in the fibrinous matrix, and the scab is not thought to be highly infectious (17). Infection acquired through contact transmission can result in the same adverse events observed after smallpox vaccination.
# Ocular Vaccinia
In the case of either contact transmission or inadvertent autoinoculation, if the virus is transferred to the eye and surrounding orbit, this transmission is referred to as ocular vaccinia. Ocular vaccinial infections result from the transfer of vaccinia from the vaccination site or other lesion containing vaccinia to or near the eye. These infections account for the majority of inadvertent inoculations (11) (Box 4). Infections can be clinically mild to severe and can lead to vision loss. When suspected, ocular vaccinia infections should be evaluated with a thorough eye examination, including use of a slit lamp. These cases should be managed in consultation with an ophthalmologist.
# Diffuse Dermatologic Complications
Diffuse dermatologic complications include two groups. The first includes erythema multiforme minor and Stevens-Johnson syndrome, which are clinically defined elsewhere in the dermatologic literature (19,20), and other nonspecific postvaccination rashes with lesions that are thought to be free of vaccinia virus. For surveillance purposes, clinical diagnosis is adequate for case classification. The second group includes adverse reactions thought to be caused by replicating vaccinia virus recovered from skin lesions, which can be associated with risk for autoinoculation or contact transmission (21).

# BOX 1. Surveillance case definition for superinfection of the vaccination site or regional lymph nodes after smallpox vaccination for use in smallpox vaccine adverse event monitoring and response

Superinfection of the vaccination site or regional lymph nodes is defined as a nonvaccinial superinfection (e.g., superinfection caused by bacterial, fungal, atypical, or viral organisms) that produces a local inflammatory response at the site of vaccination and can present with the same signs and symptoms as vaccinia virus replication at the vaccination site.

# Case definition for superinfection of the vaccination site or regional lymph nodes

A suspected case of superinfection of the vaccination site or regional lymph nodes is defined by the following criteria:
- vaccination site or regional lymph nodes with three or more of the following findings: dolor (pain and/or tenderness), calor (warmth), rubor (redness), and other (regional lymphadenopathy; lymphangitic streaking; edema, induration, and/or swelling; fluctuance; and blister with pus or honey-crusted plaque); and
- temporal criterion: onset or peak symptoms occur from the day of vaccination to day 5 after vaccination and/or day 13-60 after vaccination (excludes days 6-12 after vaccination); and
- clinical course: clinical criteria persist or worsen for hours to days after vaccination; patient report is adequate.

A confirmed case of superinfection of the vaccination site or regional lymph nodes is defined by the following criteria:
- vaccination site or regional lymph nodes with three or more of the following findings: dolor (pain and/or tenderness), calor (warmth), rubor (redness), and other (regional lymphadenopathy; lymphangitic streaking; edema, induration, and/or swelling; fluctuance; and blister with pus or honey-crusted plaque); and
- temporal criterion: symptoms occur from the day of vaccination to 60 days after vaccination (inclusive); and
- laboratory criteria with one or more of the following findings: positive results of pathogenic culture (e.g., bacterial, fungal, atypical, or nonvaccinial viral culture), positive microscopy results (e.g., Gram stain, silver stain, acid-fast bacillus stain, or darkfield), and positive result of bioburden testing of the vaccinia vaccine vial; or
- radiographic findings consistent with superinfection (e.g., lymphadenopathy or abscess) by magnetic resonance imaging, computed tomography scan, or ultrasound.
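The temporal windows in Box 1 are the main distinction between the suspected and confirmed categories and are easy to misread; the following sketch makes them explicit (illustrative only; day counts assume day 0 is the day of vaccination):

```python
def in_suspected_superinfection_window(onset_day: int) -> bool:
    """Suspected cases: onset or peak on days 0-5 or 13-60 after vaccination.
    Days 6-12 are excluded because they overlap the expected peak of a
    normal major reaction or robust take."""
    return 0 <= onset_day <= 5 or 13 <= onset_day <= 60


def in_confirmed_superinfection_window(onset_day: int) -> bool:
    """Confirmed cases (laboratory or radiographic support): symptoms from
    the day of vaccination to 60 days after vaccination, inclusive."""
    return 0 <= onset_day <= 60


print(in_suspected_superinfection_window(8))   # False: excluded 6-12 window
print(in_confirmed_superinfection_window(8))   # True: lab evidence can still confirm
```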
# Generalized Vaccinia
GV is a disseminated vesicular or pustular rash and is usually benign and self-limited among immunocompetent hosts (Box 5). GV might be accompanied by fever and can produce skin lesions anywhere on the body. GV also can appear as a regional form that is characterized by extensive vesiculation around the vaccination site or as an eruption localized to a single body region (e.g., arm or leg). The skin lesions of GV are thought to contain virus spread by the hematogenous route. First-time vaccinees are at higher risk for GV than revaccinees (22). GV is often more severe among persons with underlying immunodeficiency who might have been inadvertently vaccinated; these patients might benefit from early intervention with VIG. GV should not be confused with multiple inadvertent inoculations that might occur in the presence of acute or chronic exfoliative, erosive, or blistering skin disease, including Darier's disease. GV also should be differentiated from EV, which typically occurs in persons with a history of atopic dermatitis and is often associated with systemic illness.
# Eczema Vaccinatum
Persons with a history of atopic dermatitis (i.e., eczema) are at highest risk for EV (Box 6). Onset of the characteristic lesions can occur concurrently with or shortly after the occurrence of the reaction at the vaccination site. EV cases resulting from secondary transmission usually appear approximately 5-19 days after the suspected exposure (Box 6). Early diagnosis of EV and administration of VIG are helpful to reduce associated morbidity and mortality. Two thirds of potential smallpox vaccinees failed to recall an exclusionary dermatologic condition such as atopic dermatitis (eczema) in themselves or their close contacts (23). Poor recall and inconsistent diagnosis of atopic dermatitis contribute to a challenging screening program to exclude persons at risk for EV (24). Therefore, when evaluating vaccinees or close contacts of recent vaccinees with a clinical presentation consistent with EV, despite a negative self-reported history of atopic dermatitis or Darier's disease, clinicians should consider EV and assess for treatment with VIG.

# BOX 2. Surveillance case definition for inadvertent autoinoculation (nonocular) for use in smallpox vaccine adverse event monitoring and response

Inadvertent autoinoculation occurs when a person who has received smallpox vaccine or experienced inoculation from contact physically transfers vaccinia virus from the vaccination or contact site to another part of the body through scratching or through inanimate objects such as clothing, dressings, or bedding. The most common nonocular sites of inadvertent autoinoculation are the face, nose, mouth, lips, genitalia, and anus. Lesions at autoinoculation sites progress through the same papular, vesicular, and pustular stages as the vaccination site. When autoinoculation occurs more than 5 days postvaccination, the developing immune response might attenuate the lesions and their progression. Persons at highest risk for inadvertent autoinoculation are children aged 1-4 years and those with disruption of the epidermis, including, but not limited to, abrasions or burns.

# Case definition for inadvertent autoinoculation (nonocular)

A suspected case of inadvertent autoinoculation is defined by the following criteria:
- the affected person has been recently vaccinated and had one or more lesions at one or more sites beyond the boundaries of the dressing that was used; lesions progress morphologically through papule, vesicle, pustule, and scab; and
- lesions appear up to 10 days after the period beginning with initial vaccination or contact through final resolution and scarring of lesions at the vaccination or contact inoculation site.

A probable case of inadvertent autoinoculation meets the criteria for a suspected case and
- does not meet the case definition for generalized vaccinia, eczema vaccinatum, or progressive vaccinia; and
- other likely etiologies (e.g., bacterial or viral infection) have been excluded.

A confirmed case of inadvertent autoinoculation meets the criteria for a suspected or probable case of inadvertent autoinoculation and has the following laboratory evidence of vaccinia infection (on the basis of testing skin lesions distant from the vaccination site in a vaccinee):
- positive test results for vaccinia polymerase chain reaction (PCR) assay or antigen detection techniques (e.g., direct fluorescent assay or direct fluorescent antibody), or
- demonstration of vaccinia virus by culture.
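The suspected/probable/confirmed ladder in Box 2 builds each tier on the one below it; a minimal sketch of that logic follows (all parameter names are hypothetical):

```python
def classify_autoinoculation(recently_vaccinated_or_contact: bool,
                             lesions_beyond_dressing: bool,
                             within_time_window: bool,
                             meets_gv_ev_pv_definition: bool,
                             other_etiologies_excluded: bool,
                             vaccinia_lab_positive: bool) -> str:
    """Classify a report against the Box 2 ladder for inadvertent
    autoinoculation (nonocular); illustrative only."""
    suspected = (recently_vaccinated_or_contact and lesions_beyond_dressing
                 and within_time_window)
    if not suspected:
        return "not a case"
    if vaccinia_lab_positive:  # PCR, antigen detection, or culture of a distant lesion
        return "confirmed"
    if (not meets_gv_ev_pv_definition) and other_etiologies_excluded:
        return "probable"
    return "suspected"


print(classify_autoinoculation(True, True, True, False, True, False))  # probable
```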
# Progressive Vaccinia
PV is rare, severe, and often fatal and results when a vaccination site fails to heal and vaccinia virus replication persists. The skin surrounding the vaccination site becomes infected with vaccinia, and secondary metastatic vaccinia lesions can occur (Box 7). Lesions can appear necrotic, fungated, piled-up, or well demarcated. Concomitant bacterial superinfection also can occur. PV typically occurs in persons with an underlying humoral or cellular immune deficit. Management of PV should include aggressive therapy with VIG or the second-line agent cidofovir, intensive monitoring, and tertiary-level supportive care (17).
# Rare Reactions

# Fetal Vaccinia

Rarely, smallpox vaccination of a pregnant woman can result in fetal vaccinia (Box 8). Transmission to the fetus can occur any time during pregnancy. The route of transmission is unknown but is presumed to be through viremia. Abortion, stillbirth, or live birth (usually premature and followed by death) or birth of a surviving but pox-scarred infant can occur after the mother's exposure to vaccinia. Fetal or newborn skin lesions have been described as macular, papular, vesicular, pustular, or as scars or areas of epidermolysis (15).

# BOX 3. Surveillance case definition for contact transmission (nonocular) for use in smallpox vaccine adverse event monitoring and response

Contact transmission of vaccinia virus occurs when virus shed from smallpox vaccination sites or from distant lesions in persons with inadvertent autoinoculation, generalized vaccinia (GV), eczema vaccinatum (EV), or progressive vaccinia (PV) is transferred to another person. Virus might be shed until the scab heals. The virus can survive for several days on clothing, bedding, or other inanimate surfaces. An unvaccinated or not recently vaccinated person in close contact (i.e., touching a person's lesions or vaccination site, clothing, bedding, or bandages) with a vaccinee or the vaccinee's inanimate objects might acquire vaccinia infection. Infection acquired through contact transmission can result in inadvertent autoinoculation from the exposure site to additional sites (including ocular vaccinia) or can result in other adverse reactions.

# Case definition for contact transmission (nonocular)

A suspected case of contact transmission is defined as
- the development of one or more lesions that progress through papule, vesicle, or pustule stages;
- history of close contact with someone who received the vaccine <3 weeks before the exposure or with someone who has had autoinoculation, GV, EV, or PV diagnosed; and
- lesions appear 3-9 days after vaccinia exposure.

A probable case of contact transmission meets the case definition for a suspected case, and other likely etiologies (e.g., bacterial or viral infection) have been excluded.

For a confirmed case of contact transmission, laboratory evidence of vaccinia infection exists on the basis of testing skin lesions in a close contact of a known vaccinee. Laboratory evidence of vaccinia infection includes
- positive test results for vaccinia polymerase chain reaction (PCR) assay or antigen detection techniques (e.g., direct fluorescent antibody), or
- demonstration of vaccinia virus by culture.

Note: Histopathologic examination showing typical orthopox cytopathic changes or electron microscopy of biopsy specimens revealing orthopox virus is strongly suggestive of infection with vaccinia and should be confirmed by subsequent PCR or culture.
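Both temporal criteria for a suspected case in Box 3 concern the exposure rather than the contact's own vaccination. A small sketch of the two checks (illustrative; the day counts are hypothetical inputs):

```python
def plausible_contact_transmission(source_vaccinated_days_before_exposure: int,
                                   lesion_onset_days_after_exposure: int) -> bool:
    """Check the Box 3 temporal criteria: the source received the vaccine
    <3 weeks (21 days) before the exposure, and lesions appeared 3-9 days
    after the exposure."""
    return (source_vaccinated_days_before_exposure < 21
            and 3 <= lesion_onset_days_after_exposure <= 9)


print(plausible_contact_transmission(10, 5))   # True
print(plausible_contact_transmission(30, 5))   # False: source vaccinated too early
```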
# Postvaccinial Central Nervous System Disease
Another rare adverse reaction is postvaccinial central nervous system (CNS) disease such as postvaccinial encephalitis (PVE) or encephalomyelitis (PVEM). CNS disease after smallpox vaccination is most common among infants aged <12 months (10) (Box 9). Clinical symptoms reflect cerebral or cerebellar dysfunction with headache, fever, vomiting, altered mental status, lethargy, seizures, and coma. CNS lesions have been reported in the cerebrum, medulla, and spinal cord. Both PVE and PVEM have been described (11,25). No clinical criteria, radiologic findings, or laboratory tests exist that are diagnostic for PVE or PVEM. Other infectious or toxic etiologies should be considered and ruled out; the diagnosis of PVE or PVEM after smallpox vaccination is a diagnosis of exclusion.
# Cardiac Myo/pericarditis
An adverse reaction previously reported but not well described is myo/pericarditis. During 1950-1970, both myocarditis and pericarditis were reported after smallpox vaccination in Europe and Australia, where the vaccinia strains used are considered more reactogenic than the New York City Board of Health (NYCBOH) vaccine used in the United States (26)(27)(28).

# BOX 4. Surveillance case definition for ocular vaccinia for use in smallpox vaccine adverse event monitoring and response

Ocular vaccinia is the appearance of lesions suspicious for vaccinia in or near the eye in a vaccinee (or close contact of a vaccinee) up to 10 days after the period beginning with initial vaccinia exposure through final resolution and scarring of lesions at the vaccination or exposure site, to include periocular* involvement, lid involvement (blepharitis†), conjunctival involvement (conjunctivitis§), and/or corneal involvement (keratitis¶).

# Case definition for ocular vaccinia

A suspected case of ocular vaccinia is defined as the new onset of erythema or edema of the conjunctiva (conjunctivitis), eyelid (blepharitis), or periocular area, or inflammation of the cornea (keratitis), in a recent vaccinee (or close contact of a vaccinee) that cannot be ascribed to another ocular diagnosis; and
- temporal criterion: onset after vaccinia exposure but not more than 10 days after the period beginning with initial vaccinia exposure through final resolution and scarring of lesions at the vaccination or exposure site, or onset during the presence of visible vaccinia lesions before scab separation.

A probable case of ocular vaccinia is the presentation in or near the eye of lesions consistent with vaccinia infection, to include formation of vesicles that progress to pustules that umbilicate and indurate in a manner similar to a normal vaccinia reaction (Note: see the exceptions/differences in conjunctival and corneal clinical presentation, footnotes § and ¶); and
- temporal criterion: onset after vaccinia exposure but not more than 10 days after the period beginning with initial vaccinia exposure through final resolution and scarring of lesions at the vaccination or exposure site, or onset during the presence of visible vaccinia lesions before scab separation.

A confirmed case of ocular vaccinia meets the criteria for a probable or suspected case of ocular vaccinia with laboratory evidence of vaccinia infection (on the basis of testing lesions on or near the eye). Laboratory evidence includes
- positive test results for vaccinia polymerase chain reaction assay or antigen detection techniques (e.g., direct fluorescent antibody), or
- demonstration of vaccinia virus by culture.

* Periocular involvement (generally above the brow or below the inferior orbital rim): papules, vesicles, or pustules not involving the ocular adnexa, lids, lid margins, or canthi.
† Blepharitis (lid involvement): Mild: few pustules, mild edema, and no fever. Severe: pustules, edema, hyperemia, lymphadenopathy (preauricular or submandibular), cellulitis, and fever.
§ Conjunctivitis (involvement of the membrane that lines the inner surface of the eyelid and the exposed surface of the eyeball, excluding the cornea): The conjunctiva might be inflamed (red) with serous or mucopurulent discharge if lesions involve the conjunctiva or cornea. Symptoms of ocular irritation (foreign body sensation) might be present with onset of erythema. Conjunctival lesions typically form vesicles that rapidly ulcerate and form raised "moist appearing" white lesions (rather than pustules that scab) before final resolution. Mild: mild hyperemia or edema, no membranes or focal lesions. Severe: marked hyperemia, edema, membranes, focal lesions, lymphadenopathy (preauricular and/or submandibular), and fever.
¶ Keratitis (corneal involvement): Corneal lesions might present as a grey-appearing superficial punctate keratitis that might later coalesce to form a geographic epithelial defect resembling herpes simplex keratitis. Stromal corneal lesions might present as small subepithelial opacities resembling those observed in epidemic keratoconjunctivitis, might be associated with an epithelial defect, and might progress to corneal haze/clouding. Mild: grey epitheliitis, no epithelial defect, and no stromal haze or infiltrate (cloudy cornea). Moderate: epithelial defect. Severe: ulcer, stromal haze, or infiltrate.
# BOX 5. Surveillance case definition for generalized vaccinia after smallpox vaccination for use in smallpox vaccine adverse event monitoring and response

Generalized vaccinia (GV) is a disseminated vesicular or pustular rash appearing anywhere on the body >4 days after smallpox vaccination and might be accompanied by fever. GV also can appear as a regional form that is characterized by extensive vesiculation around the vaccination site or as an eruption localized to a single body region. The skin lesions of GV are thought to contain virus spread by the hematogenous route. Primary vaccinees are at higher risk for GV than revaccinees. GV is usually self-limited among immunocompetent hosts. Vaccinia immune globulin (VIG) might be beneficial in the rare case in which an immunocompetent person appears systemically ill. GV is often more severe among persons with underlying immunodeficiency, and these patients might benefit from early intervention with VIG.

Notes: 1) Systemic symptoms might be present. 2) At early onset of some cases, skin lesions might be macules or slightly elevated papules; in late cases, lesions might have developed scabs. 3) History or clinical signs of eczema/atopic dermatitis or Darier's disease or severe illness should prompt evaluation for eczema vaccinatum. 4) Presence of acute or chronic exfoliative, erosive, or blistering skin disease (e.g., acute burn and epidermolytic hyperkeratosis) should prompt consideration of multiple inadvertent inoculations. 5) A vaccinial skin eruption characterized by grouped vesicles or pustules close to or surrounding the vaccination site that do not appear to be satellite lesions (e.g., on the basis of the presence of a large number of lesions and evidence that the lesions are caused by hematogenous spread of vaccinia) might constitute a regional form of generalized vaccinia.

# Case definition for generalized vaccinia

A probable case of generalized vaccinia occurs in persons recently vaccinated or in a close contact of a recent vaccinee and meets the following criteria:
- a vesicular or pustular eruption at one or more body areas distant from the vaccination site or inadvertent inoculation site,
- skin eruption occurring approximately 4-19 days after smallpox vaccination or contact with someone vaccinated against smallpox,
- lesions follow approximately the same morphologic progression as a primary vaccination site (i.e., papule, vesicle, pustule, scab, and scar),
- autoinoculation is unlikely to account for the skin eruption, and
- other likely etiologies have been excluded.

A confirmed case of generalized vaccinia can occur in a recent vaccinee, a known close contact of a recent vaccinee, or someone with no known contact but who otherwise meets the criteria for a probable case, and laboratory evidence of vaccinia infection exists (on the basis of testing skin lesions distal from the vaccination site in a vaccinee, or distal to the likely inoculation site in a close contact of a known vaccinee or in a patient who is not known to be a close contact).
- Laboratory evidence of vaccinia infection includes demonstration of vaccinia virus by culture, or histopathologic examination showing typical orthopox cytopathic changes together with polymerase chain reaction assay or antigen detection techniques (e.g., direct fluorescent antibody) revealing vaccinia; electron microscopy of biopsy specimens revealing orthopox virus is strongly suggestive of infection with vaccinia and should be confirmed by subsequent culture.
In the United States, six cases were reported before the resumption of smallpox vaccination in late 2002 (29)(30)(31)(32)(33)(34).
Findings from the DHHS and DoD smallpox programs support a causal relation between smallpox vaccination with the NYCBOH strain and myo/pericarditis (35)(36)(37)(38). Myo/pericarditis refers to inflammatory disease of the myocardium, pericardium, or both. The clinical presentation of inflammatory heart disease can include pain, dyspnea, and palpitations that range from subtle to severe. Results of specific cardiac diagnostic testing are variable. The case definition (Box 10) was designed to include the spectrum of abnormalities found in inflammatory heart disease (39).
# Dilated Cardiomyopathy
An adverse event noted in temporal association with smallpox vaccination but not demonstrated to be linked etiologically to smallpox vaccination is dilated cardiomyopathy (DCM). DCM is a known sequela of viral myocarditis and can present weeks to months after acute infection (40). Although DCM had not previously been reported in association with vaccinia vaccination, three DCM cases with symptom onset after smallpox vaccination were identified among DHHS vaccinees (12,41,42). The causal relation between smallpox vaccination and these cases of DCM is unclear. However, because vaccinia might induce myo/pericarditis and DCM is a rare but recognized outcome of viral myocarditis, an etiologic association between smallpox vaccination and the occurrence of DCM is biologically plausible. The case definition for DCM should be used for surveillance in the context of smallpox preparedness programs (Box 11).
# BOX 6. Surveillance case definition for eczema vaccinatum after smallpox vaccination for use in smallpox vaccine adverse event monitoring and response

Eczema vaccinatum (EV) is a localized or generalized papular, vesicular, pustular, or erosive rash syndrome that can occur anywhere on the body, with a predilection for areas currently or previously affected by atopic dermatitis lesions. Persons with a history of atopic dermatitis are at highest risk for EV. Onset of the characteristic lesions can be noted either concurrently with or shortly after the development of the local vaccinial lesion in vaccinees. EV cases resulting from secondary transmission usually appear with skin eruptions approximately 5-19 days after the suspected exposure. EV lesions follow the same dermatologic course (progression) as the vaccination site in a vaccinee, and confluent or erosive lesions can occur. The rash is often accompanied by fever and lymphadenopathy, and affected persons are frequently systemically ill. EV tends to be most severe among first-time vaccinees, young children, and unvaccinated close contacts of vaccinees. Before the availability of vaccinia immune globulin (VIG), this condition had a high mortality. Establishing the diagnosis early and treating with VIG are crucial in reducing mortality.
Notes: 1) Although a history consistent with eczema/atopic dermatitis or Darier's disease (i.e., keratosis follicularis) is included in the surveillance definition for EV, clinicians evaluating vaccinees or close contacts of recent vaccinees with a presentation consistent with EV who do not report having one of these dermatologic conditions should still consider EV as a clinical diagnosis and assess for treatment with VIG. 2) Lesions of EV are in approximately the same stage of morphologic development as each other and progress together.
# Case definition for eczema vaccinatum
A probable case of EV occurs in persons recently vaccinated or in a known close contact of a recent vaccinee and meets the following criteria:
- a history of or current exfoliative skin condition consistent with a diagnosis of eczema/atopic dermatitis or Darier's disease; and
- multiple skin lesions that developed in a vaccinated person concurrently with or soon after the lesion at the vaccination site, or in a close contact of a recent vaccinee up to 3 weeks after exposure (if the time of relevant exposure is known); that are distant from the vaccination or likely inoculation site (i.e., are unlikely to be satellite lesions); and that are or have become vesicular/pustular sometime during their evolution (i.e., do not remain macular or papular). Erosive or ulcerative lesions might be observed; and
- other likely etiologies have been excluded, such as eczema herpeticum (which can be particularly difficult to distinguish), smallpox, chickenpox, disseminated herpes zoster, or pustular (bacterial) impetigo.

A confirmed case of EV can occur in a recent vaccinee, a known close contact of a recent vaccinee, or someone with no known contact but who otherwise meets the criteria for a probable case, and laboratory evidence of vaccinia infection exists (on the basis of testing skin lesions distal from the vaccination site in a vaccinee, or distal to the likely inoculation site, if identifiable, in a close contact of a known vaccinee or in a patient who is not known to be a close contact).
- Laboratory evidence of vaccinia infection includes demonstration of vaccinia virus by culture, or polymerase chain reaction assay or antigen detection techniques (e.g., direct fluorescent antibody) revealing vaccinia together with histopathologic examination showing typical orthopox cytopathic changes; electron microscopy of biopsy specimens revealing orthopox virus is strongly suggestive of infection with vaccinia and should be confirmed by subsequent culture.
# Case Classification
Case definitions are designed to identify the entities under surveillance, not to define the certainty of an etiologic relation between the entities under surveillance and vaccinia exposure. Thus, cases are classified as suspected if they have compatible clinical features but either further investigation is required or investigation of the case did not provide enough supporting evidence for the diagnosis. Cases are classified as probable if they have compatible clinical features and information is supportive of, but not definitive for, the diagnosis. Cases are classified as confirmed if pathognomonic findings or other evidence definitely supporting the diagnosis is documented. In certain instances, confirmation is made on the basis of verification of the presence of vaccinia or of orthopox virus DNA by culture or polymerase chain reaction (PCR) detection. Confirmation also might be determined on the basis of other evidence in instances in which vaccinia presence is not a pathognomonic feature of the entity under surveillance (e.g., myocarditis or pericarditis, both of which are believed to be an immune-mediated response to vaccination rather than mediated through vaccinia viral infection).
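Because every definition in this report reuses the same three ordered certainty tiers, a surveillance data system can represent them once. A minimal sketch follows (illustrative only; the enum and flags are hypothetical):

```python
from enum import IntEnum


class CaseStatus(IntEnum):
    """Ordered certainty tiers used throughout these case definitions."""
    NOT_A_CASE = 0
    SUSPECTED = 1  # compatible features; investigation pending or inconclusive
    PROBABLE = 2   # compatible features plus supportive, nondefinitive evidence
    CONFIRMED = 3  # pathognomonic or otherwise definitive evidence documented


def classify(compatible_clinical_features: bool,
             supportive_evidence: bool,
             definitive_evidence: bool) -> CaseStatus:
    if not compatible_clinical_features:
        return CaseStatus.NOT_A_CASE
    if definitive_evidence:
        return CaseStatus.CONFIRMED
    return CaseStatus.PROBABLE if supportive_evidence else CaseStatus.SUSPECTED


print(classify(True, True, False).name)  # -> PROBABLE
```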
# BOX 7. Surveillance case definition for progressive vaccinia for use in smallpox vaccine adverse event monitoring and response

Progressive vaccinia (PV) refers to continued vaccinia virus replication with progressive infection of skin surrounding the vaccination site or inadvertent inoculation site and sometimes the occurrence of secondary metastatic lesions in a person with an underlying immune deficit (humoral or cellular). The condition is rare, severe, and often lethal. The vaccination site lesion is usually described as necrotic; however, this is not the only presentation described with PV. Lesions can appear "clean," fungated, piled-up, or well-demarcated, or can have bacterial superinfection.

# Case definition for progressive vaccinia

A suspected case of PV occurs in persons recently vaccinated or in a known close contact of a recent vaccinee and meets the following criteria:
- a known or suspected depressed or defective immune system (suspicion might arise as a result of clinical suspicion of PV); and
- a vaccination site lesion or inadvertent inoculation site with one of the following criteria: no or minimal inflammatory response around the lesion associated with a nonhealing or enlarging vaccination lesion, progressive expansion at or after 15 days after vaccination, or failure of the lesion to heal or regress at or after 15 days after vaccination; and
- other likely etiologies (e.g., bacterial superinfection) have been excluded.

A probable case of PV occurs in persons recently vaccinated or in a known close contact of a recent vaccinee and meets the following criteria:
- a known or suspected depressed or defective immune system; and
- a vaccination site lesion or inadvertent inoculation site with one of the following criteria: no or minimal inflammatory response around the lesion associated with a nonhealing or enlarging vaccination lesion, progressive expansion at or after 21 days after vaccination, or failure of the lesion to heal or regress at or after 21 days after vaccination; and
- other likely etiologies (e.g., bacterial superinfection) have been excluded.

A confirmed case of PV can occur in a recent vaccinee, a known close contact of a recent vaccinee, or someone with no known contact but who otherwise meets the criteria for a suspected case, and laboratory evidence of vaccinia infection exists (on the basis of testing skin lesions at least 15 days after vaccination or after the likely time of inoculation in a close contact of a recent vaccinee or in persons with no known contact with a vaccinee). Laboratory evidence of vaccinia infection includes
- demonstration of vaccinia virus by culture, or
- histopathologic examination showing typical orthopox cytopathic changes together with polymerase chain reaction assay or antigen detection techniques (e.g., direct fluorescent antibody) revealing vaccinia; electron microscopy of biopsy specimens revealing orthopox virus is strongly suggestive of infection with vaccinia and should be confirmed by subsequent culture.
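The only difference between the suspected and probable tiers in Box 7 is the day threshold for a nonhealing or expanding lesion (15 versus 21 days); laboratory evidence at 15 days or later confirms. A minimal sketch (hypothetical names, illustrative only):

```python
def classify_progressive_vaccinia(immune_deficit: bool,
                                  nonhealing_or_expanding_lesion: bool,
                                  days_since_vaccination: int,
                                  other_etiologies_excluded: bool,
                                  vaccinia_lab_positive: bool) -> str:
    """Apply the Box 7 day thresholds for progressive vaccinia."""
    if not (immune_deficit and nonhealing_or_expanding_lesion
            and other_etiologies_excluded):
        return "not a case"
    if days_since_vaccination >= 15 and vaccinia_lab_positive:
        return "confirmed"
    if days_since_vaccination >= 21:
        return "probable"
    if days_since_vaccination >= 15:
        return "suspected"
    return "not a case"


print(classify_progressive_vaccinia(True, True, 18, True, False))  # suspected
```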
# BOX 8. Surveillance case definition for fetal vaccinia for use in smallpox vaccine adverse event monitoring and response

Fetal vaccinia is a rare but serious complication resulting from vaccinia infection in utero that can occur in any trimester of pregnancy. It has been characterized by the presence of multiple skin lesions, including macules, papules, vesicles, pustules, scars, ulcers, areas of maceration, and epidermolysis (blisters or bullae). When fetal vaccinia occurs, the outcome is usually fetal death, stillbirth, or premature birth of a neonate that dies shortly after birth. Survival of infants with evidence of apparent in utero infection, such as scarring, also has been described. Vaccinia infection in products of conception occurs rarely.
# Case definition for fetal vaccinia
A suspected case of fetal vaccinia is the presence of any skin lesion in a fetus or newborn exposed to vaccinia virus in utero and no other attributable cause.
A probable case of fetal vaccinia is the presence of multiple skin lesions that might include macules, papules, vesicles, pustules, scars, ulcers, areas of maceration, or epidermolysis (blisters/bullae) in a fetus or newborn exposed to vaccinia in utero and no other attributable cause.
A confirmed case of fetal vaccinia meets the criteria for a probable case and has laboratory evidence of vaccinial infection. Laboratory criteria for diagnosis include
- positive test results for vaccinia virus by polymerase chain reaction assay or antigen detection techniques (e.g., direct fluorescent antibody), or
- demonstration of vaccinia virus by culture.

Vaccinia infection: a fetus, newborn, or product of conception with laboratory evidence of infection and without any clinical symptoms or signs.
Classification of certain smallpox vaccine adverse reactions can be confounded by a lack of information or the absence of pathognomonic findings. This is illustrated by the limited understanding of the pathogenesis of vaccinia virus and the relevance of vaccinia testing in conditions such as postvaccinial CNS diseases and fetal vaccinia. No large-scale study examining the cerebrospinal fluid (CSF) of smallpox vaccinees exists; therefore, the significance of the presence or absence of vaccinia neutralizing antibodies or vaccinia virus recovered from the CSF of a vaccinee with CNS findings is not fully understood. Testing for the presence or absence of vaccinia virus cannot confirm or refute a smallpox vaccine-associated etiology for these conditions. Conversely, the inability to recover vaccinia virus from burnt-out lesions of an infant exposed to vaccinia in utero and born with skin lesions compatible with fetal vaccinia does not mean that intrauterine infection did not occur. To address these limitations, the suspected category for these adverse reactions allows a clinically compatible case with indeterminate or no testing to remain under consideration.
# Vaccinia Laboratory Diagnostics
The smallpox vaccine is made from live vaccinia virus, a species of the Orthopoxvirus genus, and protects against smallpox disease. It does not contain the related Orthopoxvirus variola, which is the causative agent of smallpox disease (25). When evaluating a reported adverse event after smallpox vaccination, standard laboratory testing should be conducted to rule out other infections, including viral infections (e.g., herpes zoster, varicella, enteroviruses, and herpes simplex). During an outbreak of other orthopoxviruses (e.g., monkeypox and smallpox), specific testing also should be completed for these viruses.
Laboratory testing for vaccinia is still largely a research tool assisting the evaluation, diagnosis, and treatment of adverse reactions after smallpox vaccination. Testing is available through the Laboratory Response Network (LRN) (43), which can be accessed through state and local health departments, with confirmatory testing at CDC. Diagnostic techniques that can aid in the detection of vaccinia include electron microscopy (EM), viral culture, and PCR (17). Although these tests can identify orthopoxviruses, only certain PCR tests or biologic characterization of viral growth on chick chorioallantoic membrane specifically identifies the presence of vaccinia virus. Positive results for EM, PCR, and viral culture should be interpreted with caution. EM or culture results compatible with orthopox virus and presumed to be vaccinia might represent another zoonotic orthopox virus or, in the worst-case scenario, variola itself. Experience with vaccinia diagnostics is limited. Molecular contamination resulting in false-positive PCR results can occur; therefore, use of appropriate controls is essential. PCR techniques at LRN, which test for the presence of orthopoxvirus nucleic acid, have undergone multicenter validation studies, and these data, along with clinical experience with these assays, are being compiled to enable the U.S. Food and Drug Administration to review the test reagents and assay for wider diagnostic use (17). Serologic testing of single serum samples for vaccinia is of limited value because it cannot discern existing immunity from recent infection. Testing of paired acute and convalescent sera antibody titers is rarely available during initial assessment of a suspected vaccinia adverse event (17).
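The specificity caveats above can be summarized as a simple lookup: a generic orthopoxvirus-positive result is not proof of vaccinia. A minimal illustrative sketch (the method labels are hypothetical strings, not LRN assay names):

```python
# Methods described above as identifying vaccinia specifically (assumption:
# labels are illustrative placeholders, not official assay names).
VACCINIA_SPECIFIC_METHODS = {
    "vaccinia-specific PCR",
    "chick chorioallantoic membrane characterization",
}


def interpret_positive_result(method: str) -> str:
    """Summarize how far a positive result identifies the agent: electron
    microscopy, generic culture, and generic orthopox PCR detect an
    orthopoxvirus but do not distinguish vaccinia from related viruses."""
    if method in VACCINIA_SPECIFIC_METHODS:
        return "vaccinia virus identified"
    return "orthopoxvirus detected; further testing needed to identify vaccinia"


print(interpret_positive_result("electron microscopy"))
# -> orthopoxvirus detected; further testing needed to identify vaccinia
```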
# Conclusions
Surveillance case definitions rely on a constellation of clinical, laboratory, and epidemiologic criteria for classification. They are not intended to replace clinical judgment and should not be used to direct individual patient care, assess causality, or determine disability compensation or reimbursement for medical care. The definitions have been developed specifically for the surveillance of adverse events during the voluntary DHHS civilian smallpox preparedness and response program and might not apply to vaccinees in other settings (e.g., clinical trials). These surveillance case definitions might not apply to the international community, which administers non-NYCBOH vaccinia strains and faces different considerations in health-care use and surveillance systems. These case definitions are a component of a dynamic surveillance process; as knowledge and experience increase, they might be modified or improved. Ongoing input from health-care providers and health departments is important for the successful implementation and use of these case definitions.
# BOX 10. Surveillance case definition for myo/pericarditis for use in smallpox vaccine adverse event monitoring and response
# Myo/pericarditis
Myo/pericarditis is defined as a spectrum of disease caused by inflammation of the myocardium and/or pericardium. Patients might have symptoms and signs consistent with myocarditis, pericarditis, or both. For the purpose of surveillance reporting, patients with myocarditis or pericarditis will be reported as having myo/pericarditis. These categories are intended for surveillance purposes and not for use in individual diagnosis or treatment decisions.
# Case definition for acute myocarditis
A suspected case of acute myocarditis is defined by the following criteria and the absence of evidence of any other likely cause of symptoms or findings below:
- presence of dyspnea, palpitations, or chest pain of probable cardiac origin in a patient with either one of the following:
- electrocardiogram (ECG) abnormalities beyond normal variants, not documented previously, including ST-segment or T-wave abnormalities, paroxysmal or sustained atrial or ventricular arrhythmias, AV nodal conduction delays or intraventricular conduction defects, or frequent atrial or ventricular ectopy detected by continuous ambulatory electrocardiographic monitoring; or
- evidence of focal or diffuse depressed left-ventricular (LV) function of indeterminate age identified by an imaging study (e.g., echocardiography or radionuclide ventriculography).

A probable case of acute myocarditis, in addition to the above symptoms and in the absence of evidence of any other likely cause of symptoms, has one of the following:
- elevated cardiac enzymes, specifically, abnormal levels of cardiac troponin I, troponin T, or creatine kinase myocardial band (a troponin test is preferred);
- evidence of focal or diffuse depressed LV function identified by an imaging study (e.g., echocardiography or radionuclide ventriculography) that is documented to be of new onset or of increased degree of severity (in the absence of a previous study, findings of depressed LV function are considered of new onset if, on followup studies, these findings resolve, improve, or worsen); or - abnormal result of cardiac radionuclide imaging (e.g., cardiac MRI with gadolinium or gallium-67 imaging) indicating myocardial inflammation. A case of acute myocarditis is confirmed if histopathologic evidence of myocardial inflammation is found at endomyocardial biopsy or autopsy.
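Read as decision logic, the myocarditis tiers above nest as follows. This sketch is illustrative only and compresses the criteria into boolean flags (all names are hypothetical):

```python
def classify_acute_myocarditis(cardiac_symptoms: bool,
                               other_causes_excluded: bool,
                               ecg_abnormalities: bool,
                               lv_dysfunction_indeterminate_age: bool,
                               elevated_cardiac_enzymes: bool,
                               new_onset_lv_dysfunction: bool,
                               inflammation_on_cardiac_imaging: bool,
                               histopathologic_inflammation: bool) -> str:
    """Histopathology at biopsy or autopsy confirms; symptoms plus enzyme
    elevation, documented new LV dysfunction, or imaging evidence of
    myocardial inflammation make a probable case; symptoms plus ECG
    abnormalities or LV dysfunction of indeterminate age make a
    suspected case."""
    if histopathologic_inflammation:
        return "confirmed"
    if not (cardiac_symptoms and other_causes_excluded):
        return "not a case"
    if (elevated_cardiac_enzymes or new_onset_lv_dysfunction
            or inflammation_on_cardiac_imaging):
        return "probable"
    if ecg_abnormalities or lv_dysfunction_indeterminate_age:
        return "suspected"
    return "not a case"


print(classify_acute_myocarditis(True, True, True, False,
                                 True, False, False, False))  # -> probable
```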
# Case definition for acute pericarditis
A suspected case of acute pericarditis is defined by the presence of - typical chest pain (i.e., pain made worse by lying down and relieved by sitting up and/or leaning forward) and - no evidence of any other likely cause of such chest pain. A probable case of acute pericarditis is a suspected case of pericarditis, or a case in a person with pleuritic or other chest pain not characteristic of any other disease, that, in addition, has one or more of the following:
- pericardial rub, an auscultatory sign with one to three components per beat, - ECG with diffuse ST-segment elevations or PR depressions without reciprocal ST depressions that are not previously documented, or - echocardiogram indicating the presence of an abnormal collection of pericardial fluid (e.g., anterior and posterior pericardial effusion or a large posterior pericardial effusion alone). Note: A case of acute pericarditis is confirmed if histopathologic evidence of pericardial inflammation is evident from pericardial tissue obtained at surgery or autopsy.
# Surveillance Results and Outcome
The voluntary DHHS civilian smallpox preparedness and response program established adverse event case monitoring capacity and response within CDC and state and local health departments. Data collected were derived from the standardized case definitions and enabled rapid classification, reporting, and the ability to compare adverse reaction surveillance data from various sources. Accurate classification of vaccinia adverse reactions is necessary for appropriate use of VIG and cidofovir for the treatment of select vaccinia reactions.

# BOX 11. Surveillance case definition for dilated cardiomyopathy after smallpox vaccination for use in smallpox vaccine adverse event monitoring and response

Dilated cardiomyopathy (DCM) is defined by the World Health Organization as a disease of the heart muscle characterized by dilatation and impaired contraction of the left ventricle or both ventricles. It might be idiopathic, familial/genetic, viral, and/or immune, alcoholic/toxic, or associated with recognized cardiovascular disease in which the degree of myocardial dysfunction is not explained by the abnormal loading conditions or the extent of ischemic damage. Histology is nonspecific. Presentation is usually with heart failure, which is often progressive. Arrhythmias, thromboembolism, and sudden death are common and can occur at any stage. Despite full cardiac workup, the etiology of DCM often cannot be determined. Because other viruses are known to cause DCM, the occurrence of DCM after smallpox vaccination is plausible, although not previously described. Because histologic findings of DCM are often nonspecific, endomyocardial biopsy is not likely to confirm an etiologic role for vaccinia but might rule out other known etiologies of DCM (e.g., sarcoidosis and amyloidosis). The following case definition describes the structural and functional cardiac criteria and clinical conditions required to define a case of DCM for use in the smallpox adverse events monitoring and response activity.
# Case definition for dilated cardiomyopathy after smallpox vaccination
Smallpox vaccinees are defined as having DCM if they meet all of the following criteria:
- cardiac muscle dysfunction exists, characterized by ventricular dilatation (e.g., left ventricular enddiastolic dimension >55 mm) and impaired contraction of one or both ventricles (e.
# BOX 9. Surveillance case definition for postvaccinial central nervous system disease after smallpox vaccination for use in smallpox vaccine adverse event monitoring and response

Postvaccinial central nervous system disease is an inflammation of the parenchyma of the central nervous system after smallpox vaccination. When the inflammation occurs in the brain, it is called "encephalitis," and when it occurs in the spinal cord, it is called "myelitis." Confirmation of diagnosis is made only on the basis of the demonstration of central nervous system (CNS) inflammation by histopathology or neuroimaging but might be suggested by clinical features.*
# Case definition for encephalitis
A suspected case of encephalitis is defined as the presence of the acute onset of
- encephalopathy (e.g., depressed or altered level of consciousness, lethargy, or personality change lasting >24 hours); and
- clinical evidence suggestive of cerebral inflammation, to include one of the following: fever (temperature >100°F) or hypothermia, cerebrospinal fluid (CSF) pleocytosis (>5 white blood cells/mm3), presence of focal neurologic deficit, electroencephalography findings consistent with encephalitis, neuroimaging findings on magnetic resonance imaging consistent with acute inflammation (with or without meninges) or demyelination of the nervous system, or seizures (either new onset or exacerbation of previously controlled seizures); and
- no alternative (investigated) etiologies are found for the presenting signs and symptoms.

A probable case of encephalitis is defined by the acute onset of
- encephalopathy as outlined for a suspected case, and
- two or more of the criteria listed for suspected encephalitis as clinical evidence suggestive of cerebral inflammation, and
- no alternative (investigated) etiologies are found for the presenting signs and symptoms.

A confirmed case of encephalitis is defined as
- demonstration of acute cerebral inflammation (with or without meninges) or demyelination by histopathology, and
- no alternative (investigated) etiologies are found for the presenting signs and symptoms.
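The suspected and probable tiers differ only in how many supporting criteria accompany the encephalopathy (one versus two or more), and histopathology confirms. A counting sketch follows (illustrative only; flags are hypothetical):

```python
def classify_postvaccinial_encephalitis(encephalopathy: bool,
                                        supporting_criteria_count: int,
                                        histopathology_positive: bool,
                                        alternatives_excluded: bool) -> str:
    """Supporting criteria: fever or hypothermia, CSF pleocytosis, focal
    neurologic deficit, compatible EEG or MRI findings, or seizures."""
    if not alternatives_excluded:
        return "not classifiable: alternative etiologies not yet excluded"
    if histopathology_positive:
        return "confirmed"
    if encephalopathy and supporting_criteria_count >= 2:
        return "probable"
    if encephalopathy and supporting_criteria_count >= 1:
        return "suspected"
    return "not a case"


print(classify_postvaccinial_encephalitis(True, 1, False, True))  # -> suspected
```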
# Case definition for acute myelitis
A suspected case of myelitis is defined as the presence of the acute onset of
- myelopathy (development of sensory, motor, or autonomic dysfunction attributable to the spinal cord, including upper- and lower-motor neuron weakness, sensory level, and bowel or bladder dysfunction); and
- additional evidence suggestive of spinal cord inflammation, to include one of the following: fever (temperature >100°F) or hypothermia, cerebrospinal fluid (CSF) pleocytosis (>5 white blood cells/mm3), presence of focal neurologic deficit, electromyographic (EMG) studies suggestive of central (spinal cord) dysfunction, or neuroimaging findings on MRI demonstrating acute inflammation (with or without meninges) or demyelination of the spinal cord; and
- no alternative (investigated) etiologies are found for the presenting signs and symptoms.

A probable case of myelitis is defined by the acute onset of
- myelopathy as outlined for a suspected case, and
- two or more of the criteria listed for suspected myelitis as evidence suggestive of spinal cord inflammation, and
- no alternative (investigated) etiologies are found for the presenting signs and symptoms.

A confirmed case of myelitis is defined by
- demonstration of acute spinal cord inflammation (with or without meninges) or demyelination by histopathology, and
- no alternative (investigated) etiologies are found for the presenting signs and symptoms.
Note: Cases fulfilling the criteria for both encephalitis and myelitis in any category would be classified as encephalomyelitis.
* Some cases of postvaccinial encephalomyelitis might be caused by direct infection of the CNS by vaccinia virus, resulting in acute cytotoxic neuronal damage and inflammation. However, laboratory evidence of virus replication is lacking in the majority of cases, which might be attributable to immunopathologic mechanisms instead. In the majority of cases, histopathologic findings similar to those of other "postinfectious" encephalitides are found, suggestive of an inflammatory demyelinating condition (acute disseminated encephalitis/encephalomyelitis [ADEM]). The distinction between these two pathologic mechanisms might be difficult to make clinically in the early stages of illness. A diagnosis of ADEM might be favored by a longer interval of onset after vaccination; magnetic resonance imaging findings of multifocal areas of increased signal on T2, fluid attenuation inversion recovery, and diffusion-weighted imaging sequences, suggestive of acute demyelination; and an absence of CSF pleocytosis.
# The Vaccinia Case Definition Development Working Group
Continuing Nursing Education (CNE). This activity for 1.9 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.
# Goal and Objectives
This MMWR provides surveillance guidelines for adverse reactions after smallpox vaccination. The case definitions and classifications were developed by CDC, in conjunction with staff of the Department of Defense, the Advisory Committee on Immunization Practices-Armed Forces Epidemiological Board Smallpox Vaccine Safety Working Group, and smallpox subject-matter experts. The goal of this report is to provide the case definitions used to classify reported adverse events after smallpox vaccination during the 2003 Department of Health and Human Services (DHHS) smallpox vaccination program. Upon completion of this educational activity, the reader should be able to 1) identify the known adverse reactions to smallpox vaccination and be familiar with their case definitions, 2) recognize the importance of reporting adverse events after smallpox vaccination, 3) describe those adverse reactions after smallpox vaccination that require laboratory confirmatory testing for vaccinia, and 4) describe when and how to report all adverse events after vaccination.
To receive continuing education credit, please answer all of the following questions.
# Normal vaccination reactions are often confused with…
A. superinfection of the vaccination site. B. generalized vaccinia. C. eczema vaccinatum. D. none of the above.
# Contact transmission can occur as a result of which of the following?
A. generalized vaccinia. B. progressive vaccinia. C. eczema vaccinatum. D. All of the above.
# When evaluating a reported adverse event after smallpox vaccination, standard laboratory testing should include…
A. herpes zoster virus. B. enteroviruses. C. herpes simplex virus. D. other orthopox viruses during specific outbreaks. E. all of the above.
# Eczema vaccinatum and progressive vaccinia are adverse reactions after smallpox vaccination and…
A. thought to be associated with replicating vaccinia virus recovered from skin lesions. B. are benign and self-limited. C. usually require vaccinia immune globulin (VIG). D. often can be prevented by screening persons before smallpox vaccination. E. A, C, and D.
# Reports to the Vaccine Adverse Event Reporting
"id": "7e1f9a08a1cbdbd70e4450a4a93df91a7d17fdd2",
"source": "cdc",
"title": "None",
"url": "None"
} |
The following CDC staff members prepared this report:

This report provides updated recommendations for the use of anergy skin testing in conjunction with purified protein derivative (PPD)-tuberculin skin testing of persons infected with human immunodeficiency virus (HIV). In February 1997, CDC convened a meeting of consultants to discuss current information regarding anergy skin testing, PPD skin testing, and tuberculosis (TB) preventive therapy for HIV-infected persons. In formulating these recommendations, CDC considered the results of this meeting, as well as a review of published studies pertaining to PPD and anergy skin testing of persons who are infected with HIV.
Isoniazid preventive therapy is effective in reducing the incidence of active TB among persons who have HIV infection and latent TB. Because of the complications associated with TB disease in HIV-infected persons, these persons must be screened for tuberculous infection. HIV-infected persons who have positive reactions to skin testing with PPD tuberculin should be evaluated to exclude active TB and offered preventive therapy with isoniazid if indicated. However, HIV-infected persons may have compromised ability to react to PPD-tuberculin skin testing, because HIV infection is associated with an elevated risk for cutaneous anergy.
Anergy testing is a diagnostic procedure used to obtain information regarding the competence of the cellular immune system. When a clinician elects to use anergy testing as part of a multifactorial assessment of a person's risk for TB, the two Food and Drug Administration-approved Mantoux-method tests (mumps and Candida), used together, with cut-off diameters of 5 mm of induration, are recommended. Efforts to apply the results of anergy testing to preventive therapy decisions must be supplemented with information concerning the person's risk for infection with Mycobacterium tuberculosis.
Factors limiting the usefulness of anergy skin testing include problems with standardization and reproducibility, the low risk for TB associated with a diagnosis of anergy, and the lack of apparent benefit of preventive therapy for groups of anergic HIV-infected persons. Therefore, the use of anergy testing in conjunction with PPD testing is no longer recommended routinely for screening programs for M. tuberculosis infection conducted among HIV-infected persons in the United States.
# INTRODUCTION
Persons infected with human immunodeficiency virus (HIV) are at risk for having active tuberculosis (TB) disease (1-3) because of either reactivation of latent infection with Mycobacterium tuberculosis (4) or rapid progression of newly acquired infection (5). Active TB, in turn, may hasten the evolution of HIV-related disease, possibly through mechanisms involving increased cytokine production and accelerated HIV replication (6,7). Isoniazid preventive therapy administered to persons who have positive reactions to purified protein derivative (PPD) tuberculin is important in preventing active TB in the United States. However, HIV-infected persons may have compromised ability to react to PPD-tuberculin skin testing, because HIV infection is associated with an elevated risk for cutaneous anergy (8,9).
In 1991, CDC published guidelines recommending that anergy skin testing be performed in conjunction with PPD-tuberculin skin testing for HIV-infected persons who are being evaluated for latent infection with M. tuberculosis (10). Demonstration of anergy in an HIV-infected, PPD-negative person at high risk for infection with M. tuberculosis was recommended as an indication for isoniazid preventive therapy. Since the publication of these guidelines, several studies have been conducted to examine the results of anergy and PPD skin testing in HIV-infected persons and the effect of isoniazid for the prevention of TB in anergic HIV-infected persons. In February 1997, CDC convened a meeting of consultants to discuss these recent publications and other available data. CDC has used both the results of discussions at this meeting and a review of published literature to prepare this updated report, which provides recommendations concerning the use of anergy testing for HIV-infected persons in the United States.
# BACKGROUND
# Anergy in HIV Disease and Risk for Active TB in HIV-Infected Persons
Anergy skin testing assesses the responses to skin-test antigens to which a cell-mediated, delayed-type hypersensitivity (DTH) response is expected. Anergy or DTH tests placed by using the Mantoux method of intradermal injection have conventionally been classified as positive if an induration measuring ≥5 mm is observed at the injection site within 48-72 hours. Persons who have positive skin tests are considered to have relatively intact cell-mediated immunity. Persons who do not mount a DTH response are considered to be anergic and to be at elevated risk for complications of deficient cell-mediated immunity. PPD-tuberculin skin testing itself elicits a DTH reaction, so persons who have positive PPD responses are not anergic. Responsiveness to DTH antigens may be decreased in HIV-infected persons, and the 1991 guidelines (10) recommended the use of companion or "control" antigens in conjunction with PPD testing to provide additional information about a person's ability to mount a DTH response.
Impaired DTH response, which is directly related to decreasing CD4+ T-lymphocyte count, is a predictive factor for progression of acquired immunodeficiency syndrome and mortality in HIV-infected persons (11-14). In some studies, after the data were stratified by CD4+ count, anergy skin-test results have provided additional prognostic information about HIV-related complications and death (11-14). The results of several studies have suggested that HIV-infected persons diagnosed as anergic have a greater risk for active TB than do nonanergic, PPD-negative, HIV-infected persons from the same population (15-19). In addition, among HIV-infected persons who have positive PPD-tuberculin skin-test results, data from one study demonstrated that those who did not respond to testing with a control antigen had a higher risk for active TB than did PPD-positive persons who reacted to such testing (19). Although CD4+ counts in HIV-infected persons have been reported as inversely related to their risk for active TB, being anergic has in itself been associated with an elevated risk for TB, even after the data were stratified by CD4+ count (16,19). Two studies suggest that mortality may be increased in HIV-infected persons who have active TB and who do not respond to testing with PPD compared with patients who have TB and HIV infection from the same population who respond to PPD testing (20,21).
# Anergy and Interpretation of PPD-Tuberculin Skin Tests
The results from supplemental anergy testing, in conjunction with PPD-tuberculin skin testing, have been interpreted in two ways (10). A positive DTH response to anergy testing, in conjunction with a negative PPD skin-test result, has been interpreted as evidence that the negative PPD test result is a true negative and the person tested is not infected with M. tuberculosis. Lack of DTH response to anergy skin testing, in conjunction with a negative PPD skin-test result, has been interpreted as evidence that the person is unable to mount a positive response to PPD even if infected with M. tuberculosis.
Certain issues, however, compromise the validity of both of these interpretations. Selective nonreactivity to PPD is a recognized phenomenon in some patients with active culture-positive TB (22,23). The observations that mumps reactivity may remain after loss of PPD reactivity (24) and that PPD boosting can occur in persons who have an initial positive reaction to control antigens (25-27) suggest the possibility that DTH response to other antigens may be preserved after loss of PPD reactivity. Therefore, a DTH response is not proof that a negative PPD applied at the same time indicates absence of infection with M. tuberculosis. Lack of response to one or more control antigens, however, does not always mean inability to respond to PPD. In populations in which the prevalence of tuberculin reactivity is high, the percentage of persons who react to PPD may be higher than the percentage reacting to several other antigens (28,29). Even in populations in which the prevalence of PPD positivity is low, some persons respond to PPD testing despite lack of response to a companion antigen (19). Furthermore, a valid demonstration of anergy does not predict infection with M. tuberculosis; instead, it indicates that, for the anergic person, the PPD test results may not be useful in judging the likelihood of infection with M. tuberculosis and the need for TB preventive therapy.
# ANERGY SKIN TESTING AND DECISIONS REGARDING TB PREVENTIVE THERAPY
Because of the complications associated with active TB in HIV-infected persons, these persons must be screened for latent TB infection and receive complete preventive treatment with isoniazid if indicated. Several factors limit the usefulness of anergy skin testing for making decisions regarding TB preventive therapy for HIV-infected persons in the United States. These factors include problems with the standardization and reproducibility of anergy skin-testing methods, the variable risk for TB associated with a diagnosis of anergy, and the lack of documented benefit of anergy skin testing as part of screening programs for M. tuberculosis infection among HIV-infected persons.
# Difficulties in Interpreting Anergy Skin-Testing Results
The decision of whether to perform anergy testing cannot be separated from determining how to perform it. Lack of standardization and lack of outcome data based on uniform antigens and tests are among the greatest obstacles to evaluating the effectiveness of anergy testing and making decisions concerning TB preventive therapy. Studies have been based on a variety of control antigen preparations, skin-test administration techniques, and methods for reading test results. Studies involving multiple-DTH antigen panels have demonstrated that several antigens may be necessary to maximize the likelihood that all persons able to respond are identified (29,30). DTH responses may vary in different populations of immunocompetent groups (31). DTH reactions of <5 mm of induration have been reported in young children (32), who might not be expected to have fully developed cellular immunity, and reactions of <5 mm of induration also have been noted in response to diluent without antigen (33-35).
The variability in test readings noted for the PPD-tuberculin skin test (36,37) is likely to be associated with other DTH tests. Data from one study of variation between duplicate PPD tests (37) indicated that more than half of the discordant readings occurred in persons with one test measured as zero and the other as 1-4 mm of induration. Serial anergy testing among HIV-infected persons has shown unpredictable differences over time (24,38). This variation may result from changes in host immune competence or from characteristics of the tests themselves. Furthermore, the choice and number of companion antigens and the criteria used for the interpretation of results of anergy testing may lead to false classification of a) persons with intact cell-mediated immunity as anergic or b) anergic persons as nonanergic.
The applicability of skin-testing methods to pediatric populations is uncertain. Children who have HIV infection have had DTH responses, and lack of response has been associated with the stage of HIV-related disease (39,40). However, no clear utility of anergy testing for the evaluation for TB among children has been established (41). Skin testing for DTH reactions is an important tool in diagnosing a variety of primary immunodeficiency diseases (i.e., non-HIV-related immunodeficiency diseases). Therefore, any recommendations regarding anergy testing should take into account its value in patients who have primary immunodeficiency diseases.
# Anergy and Risk for Active TB
In studies conducted both in the United States and elsewhere, no definite association has been determined between anergy and the risk for active TB in HIV-infected persons; the results of these studies indicated that the magnitude of the risk is variable and the reason for the variation uncertain. Rates of TB in groups defined as anergic have ranged from zero to >12 per 100 person-years. The risk for active TB in anergic HIV-infected persons may be associated with ongoing risk for M. tuberculosis transmission (i.e., residence in areas of high TB case rates), rather than with a high probability of latent M. tuberculosis infection alone (15-19,42-45). This finding implies that any effect of isoniazid preventive therapy might be attributable not only to prevention of reactivation of latent infection, but also (or instead) to primary prophylaxis against new acquisition of infection.
# Effect of TB Preventive Therapy in Anergic Persons
Two recent studies of 6 months of isoniazid preventive therapy among anergic persons at risk for infection with M. tuberculosis have been conducted. One study involving HIV-positive anergic patients in the United States demonstrated no statistically significant effect of therapy, despite a 56% reduction in rates of TB from 0.9 per 100 person-years in placebo recipients to 0.4 per 100 person-years in isoniazid recipients (42). The failure to find statistical significance with a 56% point estimate for protection may result from a lower-than-expected TB case rate in placebo recipients. Researchers concluded that, because the TB rate in the untreated group was low (0.9 per 100 person-years), preventive therapy would have minimal impact in reducing the number of incident TB cases among HIV-positive anergic persons but would result in a substantial number of uninfected persons being treated with isoniazid. The other study, involving HIV-positive anergic patients in Kampala, Uganda, demonstrated a high TB case rate (three per 100 person-years) in placebo recipients, but only a statistically insignificant (17%) reduction in isoniazid recipients (43). In summary, even if anergic HIV-infected persons are assumed to be at high risk for active TB and are administered isoniazid preventive therapy, the effectiveness of this intervention has not been established for this population.
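To make the arithmetic behind these point estimates explicit, the short sketch below recomputes the percent reduction from the quoted incidence rates. It is illustrative only: the two rates come from the U.S. trial cited above, but the helper function and its name are ours, and no person-year denominators from the study are used.

```python
# Recompute the percent reduction quoted above from incidence rates
# expressed per 100 person-years (illustrative only).

def percent_reduction(rate_placebo: float, rate_treated: float) -> float:
    """Percent reduction in incidence rate relative to placebo."""
    return 100.0 * (rate_placebo - rate_treated) / rate_placebo

placebo_rate = 0.9    # TB cases per 100 person-years, placebo arm (cited U.S. trial)
isoniazid_rate = 0.4  # TB cases per 100 person-years, isoniazid arm

print(f"rate ratio: {isoniazid_rate / placebo_rate:.2f}")                             # 0.44
print(f"percent reduction: {percent_reduction(placebo_rate, isoniazid_rate):.0f}%")   # 56%
```

The 56% output matches the point estimate in the text; as the paragraph notes, a large relative reduction can still fail to reach statistical significance when the absolute case rate is this low.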
# METHODS AND USES FOR ANERGY SKIN TESTING
Mumps and Candida antigens have been approved by the Food and Drug Administration (FDA) for intradermal DTH testing to assess cell-mediated immunity. Mumps skin-test antigen has been available for a longer time; lack of response to mumps antigen in HIV-infected persons has been associated with risk for TB (19), but some patients who have lost PPD reactivity with progression of HIV disease may still react to mumps (24). Candida DTH skin-test antigen was approved more recently. Data linking DTH response to this Candida antigen and risk for TB are limited, and published studies describing Candida skin-test responses have used different products marketed as allergenic extracts.
Both mumps and Candida antigens are applied by using the Mantoux method. The number of control antigens and the method of reading may affect the usefulness of Mantoux-method skin tests. More than two control antigens may be needed to avoid misclassifying immunocompetent persons as anergic (29,30). Limited information is available regarding the sensitivity and specificity of the two FDA-approved DTH tests used together. Other antigens (e.g., fluid tetanus toxoid and Candida extracts marketed for allergy testing) frequently are used for anergy testing, but with differing preparations, dosages, and dilutions.
The conventional cut-off measurement of induration diameter for interpreting a Mantoux-method skin-test result as positive is 5 mm of induration. In recent years, attempts have been made to detect DTH in HIV-infected patients by using smaller cut-off diameters (3 mm, 2 mm, 1 mm, or "any induration"). In addition to the validity concerns already noted, the use of smaller cut-offs is subject to technical difficulties (46) and has not improved predictive value. A multiple-puncture test battery (MULTITEST CMI®), which includes seven antigens and a diluent control, has been approved by the FDA for DTH testing. Results have been associated with clinical outcomes in several studies of HIV-infected persons (12,15-17). The use of this product, however, is likely to be limited by cost and availability.
Because several studies suggest a relationship between anergy and risk for TB, health-care providers may find the results of anergy testing useful in individual situations, despite the lack of consensus on how to perform anergy skin testing. Efforts to apply the results of anergy testing to evaluation of latent TB infection and preventive therapy decisions should be supplemented by information concerning the person's risk for exposure to and infection with M. tuberculosis. Used in this way, anergy tests, in conjunction with PPD testing, may assist in estimating TB risk for selected HIV-infected patients in specific situations. If a decision is made to perform anergy testing, technical expertise, feasibility, and cost may be important factors in choosing which test(s) to employ. The purpose of anergy testing also may be a factor: if the primary concern is to avoid misclassifying anergic persons as nonanergic, the use of two Mantoux-method tests with a 5-mm induration cut-off may be appropriate. Use of more antigens may be indicated, however, if the primary concern is to avoid misclassifying immunocompetent persons as anergic. The expertise of the health-care provider and a clear understanding of the limitations of anergy testing are critical to appropriate use.
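As a concrete illustration of the classification rule described above, here is a minimal sketch in Python. It assumes induration diameters in millimeters read at 48-72 hours for the two FDA-approved Mantoux-method control antigens (mumps and Candida); the function name and structure are ours, not part of any CDC protocol.

```python
# Classification logic for the two-antigen, 5-mm cut-off approach described
# in the text. Illustrative only; not a substitute for clinical judgment.

CUTOFF_MM = 5  # conventional cut-off for a positive Mantoux-method reading

def is_anergic(mumps_mm: float, candida_mm: float) -> bool:
    """Anergic = no DTH response (>=5 mm induration) to either control antigen."""
    return mumps_mm < CUTOFF_MM and candida_mm < CUTOFF_MM

print(is_anergic(0, 3))  # True  -> no response to either antigen, classified anergic
print(is_anergic(6, 0))  # False -> DTH response to mumps present, not anergic
```

Note that requiring failure on both antigens reflects the stated purpose of avoiding misclassification of anergic persons as nonanergic; a panel with more antigens would instead guard against the opposite error.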
# ROLE OF ANERGY SKIN TESTING IN TB PREVENTION AND CONTROL PROGRAMS
Isoniazid preventive therapy administered to HIV-positive persons who have positive reactions to PPD tuberculin is important both as a personal health intervention and as part of efforts to prevent active TB disease in the United States. The prevalence of M. tuberculosis infection and active TB disease differs among different groups of the U.S. population infected with HIV. To reach a community's groups at high risk for TB, CDC has recommended that the design of tuberculin screening programs be based on local data regarding the prevalence and incidence of M. tuberculosis infection and the sociodemographic characteristics of patients with TB and M. tuberculosis infection (47).
In studies conducted in the United States in which preventive therapy was administered principally to PPD-positive persons (44,45), no cases of TB were observed in anergic persons. (In one of these reports, selected anergic persons also received isoniazid.) In a multicenter study (19), the effect of area of residence on risk for TB was much greater than that of anergy. The precise risk for TB in HIV-positive anergic persons in the United States cannot be determined; however, the overall risk seems to be low. In addition, there are no simple skin-testing protocols that can reliably identify persons as either anergic or nonanergic and that have been proven to be feasible for application in public health tuberculosis screening programs. In the United States, the public health impact of finding and treating patients who have infectious TB to prevent further transmission and of providing preventive therapy to PPD-positive, HIV-infected persons to prevent additional infectious TB cases should be greater than the effect of preventive therapy for HIV-positive anergic persons.
# RECOMMENDATIONS
# Programmatic Use of Anergy Testing
Since the publication of guidelines in 1991, additional information has documented limitations in the usefulness of anergy testing in public health tuberculosis screening programs. These limiting factors include the variability in the available anergy testing methods, their lack of reproducibility, the variation in absolute risk for TB among different anergic groups, and the lack of demonstrated efficacy of a preventive therapy program in anergic HIV-infected groups. Therefore, anergy testing in conjunction with PPD-tuberculin testing is no longer routinely recommended for inclusion in screening programs for M. tuberculosis infection among HIV-infected persons. However, DTH evaluation may assist in guiding individual decisions regarding preventive therapy in selected situations.
# TB Preventive Therapy Among HIV-Positive, PPD-Positive Persons
Unless specifically contraindicated, HIV-positive persons a) who have positive reactions to PPD tuberculin (≥5 mm of induration), b) who have not already been treated for TB infection, and c) whose test results exclude active TB should be considered for 12 months of preventive therapy with isoniazid (48). This preventive therapy is indicated even if the date of PPD skin-test conversion cannot be determined.
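The criteria a)-c) above reduce to a simple conjunction, sketched below for illustration. This is hedged decision logic under stated assumptions, not clinical software; the function name and parameters are ours, and no screening program should be reduced to four booleans in practice.

```python
# Eligibility criteria a)-c) for 12 months of isoniazid preventive therapy,
# as stated in the text (illustrative sketch only).

def consider_isoniazid(ppd_induration_mm: float,
                       already_treated_for_tb_infection: bool,
                       active_tb_excluded: bool,
                       contraindicated: bool) -> bool:
    return (not contraindicated
            and ppd_induration_mm >= 5                # a) positive PPD reaction
            and not already_treated_for_tb_infection  # b) not previously treated
            and active_tb_excluded)                   # c) active TB ruled out

print(consider_isoniazid(7, False, True, False))  # True  -> consider 12 months of isoniazid
print(consider_isoniazid(3, False, True, False))  # False -> PPD reaction below 5 mm
```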
# TB Preventive Therapy Among HIV-Positive, PPD-Negative Persons
When assessing HIV-infected persons who have negative PPD-tuberculin skin-test results or who are known to be anergic, the most important factors in considering TB preventive therapy are the likelihood of exposure to transmissible active TB and the likelihood of latent M. tuberculosis infection. Preventive therapy should be considered for HIV-infected persons who do not have a documented positive PPD-tuberculin response but who have had recent contact with patients who have infectious pulmonary TB. Repeat PPD testing of initially PPD-negative contacts 3 months after cessation of contact with infectious TB is sometimes used to assist in decisions about duration of preventive therapy (49). However, most of these patients should complete a full 12-month course of isoniazid preventive therapy.
In certain cases, preventive therapy with isoniazid for persons who are not PPD positive also may be considered. Such therapy may be beneficial for a) children who are born to HIV-infected women and are close contacts of a person who has infectious TB and b) HIV-infected adults who reside or work in institutions and are continually and unavoidably exposed to patients who have infectious TB. Some experts recommend continuing isoniazid preventive therapy indefinitely for HIV-infected persons who have an ongoing high risk for exposure to M. tuberculosis (e.g., inmates of prisons in which the prevalence of TB is high). In these situations, the results of anergy testing may be useful for deciding which persons should be offered prolonged preventive therapy in settings in which a) exposure is likely but PPD conversion has not occurred, b) the consideration of primary prophylaxis may arise, and c) the most vulnerable persons in immunologic terms may have high priority for preventive therapy.
# Future Programmatic Directions
For formulating future recommendations regarding programmatic uses of anergy testing, results from systematic studies of the two FDA-approved Mantoux-method tests used together with a cut-off diameter of 5 mm of induration would be useful, as would comparisons between results with this combination and with the seven-antigen multipuncture battery. Ultimately, development of a simpler standard skin test or of another method for measuring the same components of immune responses more reliably is desirable.
# CONCLUSIONS
Recent studies suggest that impaired DTH is related to risk for active TB in some HIV-infected populations, despite variability in anergy skin-testing procedures. When a clinician elects to use anergy testing as part of a multifactorial assessment of a person's risk for TB, the two FDA-approved Mantoux-method tests (mumps and Candida), used together and with cut-off diameters of 5 mm of induration, are recommended. Studies based on these approaches may result in data useful for formulating guidelines regarding future programmatic uses of anergy testing. Improvements in TB screening and preventive therapy practices in HIV-infected persons require better standardization of anergy testing methods and validation of their predictive value or development of an adequate alternative measure of cellular immunity or of an alternative test for the detection of latent TB infection.
In selected situations, anergy testing may assist in guiding individual decisions regarding preventive therapy. However, results of currently available anergy-testing methods in U.S. populations have not been demonstrated to make a useful contribution to most decisions about isoniazid preventive therapy. Therefore, anergy testing is no longer recommended as a routine component of TB screening among HIV-infected persons in the United States.
# Introduction
The most effective methods for preventing human immunodeficiency virus (HIV) infection are those that protect against exposure to HIV. Antiretroviral therapy cannot replace behaviors that help avoid HIV exposure (e.g., sexual abstinence, sex only in a mutually monogamous relationship with a noninfected partner, consistent and correct condom use, abstinence from injection-drug use, and consistent use of sterile equipment by those unable to cease injection-drug use). Medical treatment after sexual, injection-drug-use, or other nonoccupational HIV exposure* is less effective than preventing HIV infection by avoiding exposure.
In July 1997, CDC sponsored the External Consultants Meeting on Antiretroviral Therapy for Potential Nonoccupational Exposures to HIV. This panel of scientists, public health specialists, clinicians, ethicists, members of affected communities, and representatives from professional associations and industry evaluated the available evidence related to use of antiretroviral medications after nonoccupational HIV exposure. In 1998, DHHS issued a statement that outlined the available information and concluded that the evidence regarding the efficacy of nonoccupational postexposure prophylaxis (nPEP) was insufficient to recommend either for or against its use (1).
Since 1998, additional data about the potential efficacy of nPEP have accumulated from human, animal, and laboratory studies. Clinicians and organizations have begun providing nPEP to patients they believe might benefit. In certain instances, health departments have issued advisories or recommendations or otherwise supported the establishment of nPEP treatment programs in their jurisdictions (2-6). In May 2001, CDC convened the second external consultants meeting on nonoccupational post-exposure prophylaxis to review and discuss the available data. This report summarizes knowledge about the use and potential efficacy of nPEP and details guidelines for its use in the United States.† The recommendations are intended for nonoccupational exposures and are not applicable for occupational exposures.
# Evidence of Possible Benefits from nPEP
For ethical and logistical reasons, a randomized, placebo-controlled clinical trial of nPEP probably will not be performed. However, data are available from animal transmission models, perinatal clinical trials, studies of health-care workers receiving prophylaxis after occupational exposures, and from observational studies. These data indicate that nPEP might sometimes reduce the risk for HIV infection after nonoccupational exposures.
# Animal Studies
Animal studies have demonstrated mixed results (1,7). In macaques, PMPA (tenofovir) blocked simian immunodeficiency virus (SIV) infection after intravenous challenge if administered within 24 hours of exposure and continued for 28 days. PMPA was not as effective if initiated 48 or 72 hours postexposure or if continued for only 3 or 10 days (8). Two macaque studies of combination antiretroviral therapy (zidovudine, lamivudine, and indinavir) initiated 4 hours after simian/human immunodeficiency virus (SHIV) challenge and continued for 28 days did not protect against infection but did result in reduced viral load among the animals infected (9). In a macaque study designed to model nPEP for mucosal HIV exposure, all animals administered PMPA for 28 days, beginning 12 hours (four animals) or 36 hours (four animals) after vaginal HIV-2 exposure, were protected. Three of four animals treated 72 hours after exposure were also protected; the fourth animal had delayed seroconversion and maintained a low viral load after treatment (10).
These findings are consistent with those of macaque studies of the biology of vaginal SIV transmission. After atraumatic vaginal inoculation, lamina propria cells of the cervicovaginal subepithelium were infected first, virus was present in draining lymph nodes within 2 days, and virus was disseminated to the blood stream by 5 days (11). Similarly, in another study, SIV-RNA was detected in dendritic cells from the vaginal epithelium within 1 hour of intravaginal viral exposure, and SIV-infected cells were detected in the lymph nodes within 18 hours (12). These data indicate a small window of opportunity during which it might be possible to interrupt either the initial infection of cells in the cervicovaginal mucosa or the dissemination of local infection by the prompt administration of antiretroviral medications.
# Postnatal Prophylaxis
Abbreviated regimens for reducing mother-to-child HIV transmission have been studied extensively. Certain regimens have included a postexposure component (antiretroviral medications given to the neonate). Although reduction in maternal viral load during late pregnancy, labor, and delivery seems to be a major factor in the effectiveness of these regimens, an additional effect is believed to occur because the neonate receives prophylaxis, which protects against infection from exposure to maternal HIV during labor and delivery (13,14). In a Ugandan perinatal trial, the rate of transmission at 14-16 weeks postpartum was substantially lower for women who received a single dose of nevirapine at the beginning of labor followed by a single dose of nevirapine to the neonate within 72 hours of birth (transmission rate: 13.1%) than for the women who received intrapartum zidovudine followed by 1 week of zidovudine to the neonate (transmission rate: 25.1%) (15). Similarly low transmission rates were noted in a study in South Africa in which intrapartum and postpartum antiretroviral medications were used. At 8 weeks postpartum, the transmission rate was 9.3% after intrapartum zidovudine and lamivudine followed by 1 week of zidovudine and lamivudine to mother and neonate, and the transmission rate was 12.3% after a single dose of nevirapine administered to the mother during labor and then to the neonate within 72 hours of birth (16). Although these studies lacked control groups, these dosing schedules could not have substantially reduced HIV exposure of the neonate through reducing maternal viral load, demonstrating that a combination of preexposure and postexposure prophylaxis for the neonate reduces HIV transmission. A study in Malawi among women who did not receive intrapartum antiretrovirals compared postnatal prophylaxis with single-dose nevirapine with and without zidovudine for 1 week. The transmission rate at 6-8 weeks was 7.7% among infants who received zidovudine plus nevirapine compared with 12.1% among those who received nevirapine alone (17). Although this study did not have a placebo or no-prophylaxis arm, the transmission rate for the zidovudine-nevirapine arm compares favorably with the rate of 21% at 4 weeks, noted in the placebo arm of a study of zidovudine prophylaxis conducted in Cote d'Ivoire (18). Two observational studies with relatively limited numbers documented a potential effect of postnatal zidovudine prophylaxis alone (without intrapartum medication). A review of medical records in New York indicated that zidovudine monotherapy administered to the mother intrapartum or to the infant within 72 hours of birth reduced perinatal transmission >50%; initiating monotherapy for the infant >72 hours after birth was less effective (19). Similarly, an analysis of births in the PACTS study demonstrated that zidovudine administered to infants within 24 hours of birth, when mothers had not been treated either antepartum or intrapartum, compared with no treatment for mothers or infants, reduced perinatal transmission by 48% (20).
* In this report, a nonoccupational exposure is any direct mucosal, percutaneous, or intravenous contact with potentially infectious body fluids that occurs outside perinatal or occupational situations (e.g., health-care, sanitation, public safety, or laboratory employment). Potentially infectious body fluids are blood, semen, vaginal secretions, rectal secretions, breast milk, or other body fluid that is contaminated with visible blood.
† Information included in these recommendations might not represent Food and Drug Administration (FDA) approval or approved labeling for the particular products or indications in question. Specifically, the terms safe and effective might not be synonymous with the FDA-defined legal standards for product approval.
# Observational Studies of nPEP
The most direct evidence supporting the efficacy of postexposure prophylaxis is a case-control study of needlestick injuries to health-care workers. In this study, the prompt initiation of zidovudine was associated with an 81% decrease in the risk for acquiring HIV (21). Although analogous clinical studies of nPEP have not been conducted, data are available from observational studies and registries.
In a high-risk HIV incidence cohort in Brazil, nPEP instruction and 4-day starter packs of zidovudine and lamivudine were administered to 200 homosexual and bisexual men. Men who began taking nPEP after a self-identified high-risk exposure were evaluated within 96 hours; 92% met the event eligibility criteria (clinician-defined high-risk exposure). Seroincidence was 0.7 per 100 person-years (one seroconversion) among men who took nPEP and 4.1 per 100 person-years among men who did not take nPEP (11 seroconversions) (22,23). Subsequent analysis of data from patients who took nPEP and had been followed for a median of 24.2 months indicated 11 seroconversions and a seroincidence of 2.9 per 100 person-years, compared with an expected seroincidence of 3.1 per 100 person-years (p>0.97) (24). In a study of sexual assault survivors in Sao Paulo, Brazil, women who sought care within 72 hours after exposure were treated for 28 days with either zidovudine and lamivudine (for those without mucosal trauma) or zidovudine, lamivudine, and indinavir (for those with mucosal trauma or those subjected to unprotected anal sex). Women were not treated if they sought care >72 hours after assault, if the assailant was HIV-negative, or if a condom was used and no mucosal trauma was seen. Of 180 women treated, none seroconverted. Of 145 women not treated, four (2.7%) seroconverted (25). Although these studies demonstrate that nPEP might reduce the risk for infection after sexual HIV exposures, participants were not randomly assigned, and sample sizes were too small for statistically significant conclusions.
In a study of rape survivors in South Africa, of 480 initially seronegative survivors begun on zidovudine and lamivudine and followed up for at least 6 weeks, one woman seroconverted. She had started taking medications 96 hours after the assault. An additional woman, who sought treatment 12 days after assault, was seronegative at that time but not offered nPEP. At retesting 6 weeks after the assault, she had seroconverted and had a positive polymerase chain reaction result (Personal communication, A. Wulfsohn, MD, Sunninghill Hospital, Gauteng, South Africa).
In a feasibility trial of nPEP conducted in San Francisco, 401 persons with eligible sexual and injection-drug-use exposures were enrolled. No seroconversions were observed among those who completed treatment, those who interrupted treatment, or those who did not receive nPEP (26). In a study in British Columbia of 590 persons who completed a course of nPEP, no seroconversions were observed (27). In registries from four countries (Australia, France, Switzerland, and the United States), including approximately 2,000 nonoccupational exposure case reports, no confirmed seroconversions have been attributed to a failure of nPEP in approximately 350 nPEP-treated persons reported to have been exposed to HIV-infected sources. However, the absence of seroconversions might not be attributed to receipt of nPEP but rather to the low per-act risk for infection and incomplete follow-up in the registries.
# Case Reports
In addition to these studies, two case reports are of note. In one, a patient who received a transfusion of red blood cells from a person subsequently determined to have early HIV infection began taking combination PEP 1 week after transfusion and continued for 9 months. The patient did not become infected despite the high risk associated with the transfusion of HIV-infected blood (28). In the other case, nPEP was initiated 10 days after self-insemination with semen from a homosexual man later determined to have early HIV infection. The woman did not become infected but did become pregnant and gave birth to a healthy infant (29).
Although data from the studies and case reports do not provide definitive evidence of the efficacy of nPEP after sexual, injection-drug-use, and other nonoccupational exposures to HIV, the cumulative data demonstrate that antiretroviral therapy initiated soon after exposure and continued for 28 days might reduce the risk for acquiring HIV.
# Evidence of Possible Risks from nPEP
Concerns about the potential risks from nPEP as a public health intervention include possible decrease in risk-reduction behaviors resulting from a perception that postexposure treatment is available, the occurrence of serious adverse effects from antiretroviral treatment in otherwise healthy persons, and potential selection for resistant virus (particularly if adherence is poor during the nPEP course). Evidence indicates that these theoretical risks might not be major problems.
# Effects on Risk-Reduction Behaviors
The availability or use of nPEP might not lead to increases in risk behavior. Of participants in the nPEP feasibility study in San Francisco, 72% reported a decrease in risk behavior over the next 12 months relative to baseline reported risk behavior, 14% reported no change, and 14% reported an increase (30). However, 17% of participants requested a second course of nPEP during the year after the first course, indicating that although participants did not increase risk behaviors, a substantial proportion of the participants did not eliminate risk behaviors. A similar proportion of participants (14%) requested a second course of nPEP at the Fenway Clinic in Boston (31). In the Brazil nPEP study of homosexual and bisexual men followed up for a median of 24 months, all groups, including those who elected to take nPEP, reported decreases in risk behavior (24,32). Among highly educated (75% with >4 years of college), predominantly white (74%) homosexual men who completed a street-outreach interviewer-administered survey in San Francisco, those who reported that they were aware of the availability of nPEP did not report more risk behavior than those who were not aware (33). In a study of discordant heterosexual couples, none reported decreased condom use because of the availability of nPEP (34).
# Antiretroviral Side Effects and Toxicity
Initial concerns about severe side effects and toxicities have been ameliorated by experience with health-care workers who have taken PEP after occupational exposures. Of 492 health-care workers reported to the occupational PEP registry, 63% took at least three medications. Overall, 76% of workers who received PEP and had 6 weeks of follow-up reported certain symptoms (i.e., nausea and fatigue or malaise). Only 8% of these workers had laboratory abnormalities, few of which were serious and all of which resolved promptly at the end of antiretroviral treatment (35). Six (1.3%) reported severe adverse events, and four stopped taking PEP because of them. Of 68 workers who stopped taking PEP despite exposure to a source person known to be HIV-positive, 29 (43%) stopped because of side effects. According to the U.S. nPEP surveillance registry, among 107 exposures for which nPEP was taken, the regimen initially prescribed was stopped or modified in 22%; modifications or stops were reported because of side effects in half of these instances (36). In addition to reports in these registries, serious side effects (e.g., nephrolithiasis and hepatitis) have been reported in the literature.
During 1997-2000, a total of 22 severe adverse events in persons who had taken nevirapine-containing regimens for occupational or nonoccupational postexposure prophylaxis were reported to FDA (37,38). Severe hepatotoxicity occurred in 12 (one requiring liver transplantation), severe skin reactions in 14, and both hepatic and cutaneous manifestations occurred in four. Because the majority of occupational exposures do not lead to HIV infection, the risk for using a nevirapine-containing regimen for occupational PEP outweighs the potential benefits. The same rationale indicates that nevirapine should not be used for nPEP.
# Selection of Resistant Virus
Antiretroviral PEP does not prevent all infections in occupational and perinatal settings. Similarly, PEP is not expected to have complete efficacy after nonoccupational exposures. In instances where nPEP fails to prevent infection, selection of resistant virus by the antiretroviral drugs is theoretically possible. However, because of the relative paucity of documented nPEP failures for which resistance testing was performed, the likelihood of this occurring is unknown.
PEP failures have been documented after at least one sexual (39) and 21 occupational (38,40) exposures. Three fourths of these patients were treated with zidovudine monotherapy. Only three received three or more antiretroviral medications for PEP. Among the patients tested, several were infected with strains that were resistant to antiretroviral medications. In a study in Brazil (24), virus obtained on day 28 of therapy from the only treated person who seroconverted (whose regimen included 3TC) had a 3TC-resistance mutation. However, the source person could not be tested. Therefore, whether the mutation was present when the virus was transmitted or whether it developed during nPEP could not be determined.
Selection of resistant virus might occasionally result from the use of nPEP. However, because the majority of nonoccupational exposures do not lead to HIV infection and because the use of combination antiretroviral therapy might reduce further the transmission rate, such occurrences are probably rare. For patients who seroconvert despite nPEP, resistance testing should be considered to guide early and subsequent treatment decisions.
# Cost-Effectiveness of nPEP
Although the potential benefits of nPEP to persons are measured by balancing its anticipated efficacy after a given exposure against individual health risks, the value of nPEP as a public health intervention is best addressed at the population level by using techniques such as cost-benefit analysis. Such analyses have been published. One cost-effectiveness evaluation of nPEP in different potential exposure scenarios in the United States reported it to be cost-effective only in situations in which the sex partner source was known to be HIV-infected or after unprotected receptive anal intercourse with a homosexual or bisexual man of unknown serostatus (41,42). A similar analysis in France reported that nPEP was cost-saving for unprotected receptive anal intercourse with a partner known to be HIV-infected and cost-effective for receptive anal intercourse with a homosexual or bisexual partner of unknown serostatus. It was not cost-effective for penile-vaginal sex, insertive anal intercourse, or other exposures considered (43).
Another study and anecdotal reports indicate difficulty limiting nPEP to the exposures most likely to benefit from it. In British Columbia, where guidelines for nPEP use have been implemented (5), an analysis indicated that >50% of those receiving nPEP should not, according to the guidelines, have been treated (e.g., for exposure to intact skin). The use of nPEP in these circumstances doubled the estimated cost per HIV infection prevented ($530,000 versus $230,000) (44).
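The logic of these analyses can be made concrete with a back-of-the-envelope calculation. The sketch below is a minimal illustration under stated assumptions: every number in it (course cost, per-exposure transmission risk, nPEP efficacy, cohort size) is a hypothetical placeholder, not a figure from the cited studies; only the general relationship, that cost per infection prevented scales inversely with the underlying transmission risk, reflects the text.

```python
# Cost per infection prevented = total treatment cost / infections averted.
# All inputs are hypothetical placeholders for illustration.

def cost_per_infection_prevented(n_treated: int,
                                 cost_per_course: float,
                                 per_exposure_risk: float,
                                 npep_efficacy: float) -> float:
    infections_averted = n_treated * per_exposure_risk * npep_efficacy
    return (n_treated * cost_per_course) / infections_averted

# Higher-risk exposure scenario (placeholder per-act risk of 0.5%)
print(f"${cost_per_infection_prevented(1000, 1000.0, 0.005, 0.8):,.0f}")   # $250,000
# Lower-risk exposure scenario: same spending, one tenth the risk
print(f"${cost_per_infection_prevented(1000, 1000.0, 0.0005, 0.8):,.0f}")  # $2,500,000
```

The tenfold difference between the two outputs illustrates why extending nPEP to exposures unlikely to transmit HIV, as occurred in British Columbia, sharply inflates the cost per infection prevented.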
Even if nPEP is cost-effective for the highest risk exposures, behavioral interventions are more cost-effective (41,45). This emphasizes the importance, when considering nPEP, of providing risk-avoidance and risk-reduction counseling to reduce the occurrence of future HIV exposures.
# Evidence of Current Practice
Although 40,000 new HIV infections occur in the United States each year, relatively few exposed persons seek care after nonoccupational exposure. Certain exposures are unrecognized. Certain patients have frequently recurring exposures and would not benefit from nPEP because 4 weeks of potential protection cannot substantially reduce their overall risk for acquiring HIV infection. In addition, certain clinicians and exposed patients are unaware of the availability of nPEP or unconvinced of its efficacy and safety. Finally, access to knowledgeable clinicians or a means of paying for nPEP might constrain its use.
Certain populations in the United States remain at high risk for exposure. In a cohort study of homosexual and bisexual men, 17% reported at least one condom failure during the 6 months preceding study enrollment (46). Other studies indicate that increasing use of highly active antiretroviral therapy (HAART) by HIV-infected persons might be leading some persons to have unprotected sex more frequently, in part because of the belief that lowered viral load substantially reduces infectivity (47-50). This finding is supported by increased rates of sexually transmitted infections among HIV-infected patients (51). In a California study, 69% of discordant heterosexual couples reported having had unprotected sex during the preceding 6 months (34).
Since 1998, certain clinicians have recommended wider availability and use of nPEP (52-58), and others have been more cautious about implementing it in the absence of definitive evidence of efficacy (59,60). Multiple public health jurisdictions, including the New York State AIDS Institute, the San Francisco County Health Department, the Massachusetts Department of Public Health, the Rhode Island Department of Health, and the California State Office of AIDS, have issued policies or advisories for nPEP use. Some of these recommendations have focused on sexual assault survivors, who account for a small proportion of the estimated 40,000 new HIV infections that occur annually in the United States.
Surveys of clinicians and facilities indicate a need for more widespread implementation of guidelines and protocols for nPEP use (61). In a survey of Massachusetts emergency department directors, 52% of facilities had received nPEP requests during the preceding year, but only 15% had written nPEP protocols (62). Similarly, in a survey of Massachusetts clinicians, approximately 20% had a written nPEP protocol (63). Among pediatric emergency medicine specialists surveyed throughout the United States and Canada, approximately 20% had a written policy about nPEP use, but 33% had prescribed it for children and adolescents; different prescribing practices were reported (64). In a survey of 27 European Union countries, 23 had guidelines for occupational PEP use, but only six had guidelines for nPEP use (65).
Evidence indicates considerable awareness of nPEP and interest in its use among potential patients. In a cohort study of homosexual and bisexual men, 60% were willing to participate in a study of nPEP if it involved a single daily dose of medication; 30% were willing to take 3 doses daily (66). Among men surveyed at a "gay pride" festival in Atlanta, although only 3% had used nPEP, 26% planned to if exposed in the future (67). When nPEP studies were established in San Francisco, approximately 400 persons sought treatment in 2½ years (24). At a clinic primarily serving homosexual and bisexual men in Boston, 71 requests for nPEP were evaluated in 1½ years (30). In a California study of heterosexual discordant couples, 28% had heard of nPEP, 55% of seronegative partners believed that it was effective, and 78% reported they would take it if exposed (34).
No nationally representative data exist on nPEP use in the United States. In 1998, CDC established a national nPEP surveillance registry that accepts voluntary reports by clinicians. Although approximately 800 reports have been received, the majority of clinicians prescribing nPEP do not report to the registry. Similarly, low reporting rates were obtained in attempts to establish voluntary registries to monitor occupational PEP and antiretroviral use during pregnancy. No national surveys of clinicians have been reported. However, one multisite HIV vaccine trial largely conducted in the United States has assessed nPEP use by 5,418 participants, who included men who have sex with men (94%) and heterosexual women at high risk (6%). Two percent of trial participants from 27 study sites reported having taken nPEP during the trial. Supplementary data from six U.S. sites indicated that 46% of participants had heard of nPEP. Enrollment at one of seven California sites (odds ratio [OR] = 3.2), having a known positive partner (OR = 2.0), higher educational level (OR = 1.4), and greater recreational drug use (OR = 1.2) were significant predictors of having used nPEP (p<0.05) (68).
# Evaluation of Persons Seeking Care After Potential Nonoccupational Exposure to HIV
The effective delivery of nPEP after exposures that have a substantial risk for HIV infection requires prompt evaluation of patients and consideration of biomedical and behavioral interventions to address current and ongoing health risks. This evaluation should include determination of the HIV status of the potentially exposed person, the timing and characteristics of the most recent exposure, the frequency of exposures to HIV, the HIV status of the source, and the likelihood of concomitant infection with other pathogens or negative health consequences of the exposure event.
# HIV Status of the Potentially Exposed Person
Because persons who are infected with HIV might not be aware they are infected, baseline HIV testing should be performed on all persons seeking evaluation for potential nonoccupational HIV exposure. If possible, this should be done with an FDA-approved rapid test kit (with results available within an hour). If rapid tests are not available, an initial treatment decision should be made based on the assumption that the potentially exposed patient is not infected, pending HIV test results.
# Timing and Frequency of Exposure
Available data indicate that nPEP is less likely to be effective if initiated >72 hours after HIV exposure. If initiation of nPEP is delayed, the likelihood of benefit might not outweigh the risks inherent in taking antiretroviral medications.
Because nPEP is not 100% effective in preventing transmission and because antiretroviral medications carry a certain risk for adverse effects and serious toxicities, nPEP should be used only for infrequent exposures. Persons who engage in behaviors that result in frequent, recurrent exposures that would require sequential or near-continuous courses of antiretroviral medications (e.g., discordant sex partners who rarely use condoms or injection-drug users who often share injection equipment) should not take nPEP. In these instances, exposed persons should instead be provided with intensive risk-reduction interventions.
# HIV Status of Source
Patients who have had sexual, injection-drug-use, or other nonoccupational exposures to potentially infectious fluids of persons known to be HIV infected are at risk for acquiring HIV infection and should be considered for nPEP if they seek treatment within 72 hours of exposure. If possible, the source person should be interviewed to determine his or her history of antiretroviral use and most recent viral load, because this information might inform the choice of nPEP medications.
Persons with exposures to potentially infectious fluids of persons of unknown HIV status might or might not be at risk for acquiring HIV infection. When the source is known to be from a group with a high prevalence of HIV infection (e.g., a homosexual or bisexual man, an injection-drug user, or a commercial sex worker), the risk for transmission might be increased. The risk for transmission might be especially great if the source person has been infected recently, when viral burden in blood and semen might be particularly high (69,70). However, ascertaining this in the short time available for nPEP evaluation is rarely possible. When the HIV status of the source is unknown, it should be determined whether the source is available for HIV testing. If the risk associated with the exposure is considered substantial, nPEP can be started pending determination of the HIV status of the source and then stopped if the source is determined to be noninfected.
# Transmission Risk from the Exposure
Although the estimated per-act transmission risk from unprotected exposure to a partner known to be HIV infected is relatively low for different types of exposure (Table 1), different nonoccupational exposures are associated with different levels of risk (71-79). The highest levels of estimated per-act risk for HIV transmission are associated with blood transfusion, needle sharing by injection-drug users, receptive anal intercourse, and percutaneous needlestick injuries. Insertive anal intercourse, penile-vaginal exposures, and oral sex represent substantially less per-act risk.
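Because per-act risks are low but accumulate over repeated exposures (one reason, noted earlier, that nPEP is reserved for infrequent exposures), a worked example may help. The sketch below applies the standard independent-exposures formula, 1 - (1 - p)^n; the per-act probability used is a hypothetical placeholder, not a value taken from Table 1.

```python
# Cumulative transmission risk over n independent exposures with
# per-act probability p: 1 - (1 - p)**n. Illustrative values only.

def cumulative_risk(per_act_risk: float, n_exposures: int) -> float:
    return 1.0 - (1.0 - per_act_risk) ** n_exposures

p = 0.005  # hypothetical per-act transmission probability
for n in (1, 10, 100):
    print(f"{n:>3} exposures -> cumulative risk {cumulative_risk(p, n):.3f}")
# 1 -> 0.005, 10 -> 0.049, 100 -> 0.394
```

The rapid growth of cumulative risk with repeated exposures illustrates why a single 28-day nPEP course cannot meaningfully protect persons with frequent, recurring exposures.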
A history should be taken of the specific sexual, injection-drug-use, or other behaviors that might have led to, or modified, a risk for acquiring HIV infection. Eliciting a complete description of the exposure and information about the HIV status of the partner(s) can substantially lower (e.g., if the patient was the insertive partner or a condom was used) or increase (e.g., if the partner is known to be HIV-positive) the estimate of risk for HIV transmission resulting from a specific exposure.
In addition to sexual and injection-drug-use exposures, percutaneous injuries from needles discarded in public settings (e.g., parks and buses) result in requests for nPEP with a certain frequency. Although no HIV infections from such injuries have been documented, concern exists that syringes discarded by injection-drug users (among whom the HIV infection rate is higher than among persons with diabetes) might pose a substantial risk. However, these injuries typically involve small-bore needles that contain only limited amounts of blood, and the viability of any virus present is limited. In a study of syringes used to administer medications to HIV-infected persons, only 3.8% had detectable HIV RNA (72). In a study of the viability of virus in needles, viable virus was recovered from 8% at 21 days when the needles had been stored at room temperature; <1% had viable virus after 1 week of storage at higher temperatures (73).
Bite injuries represent another potential means of transmitting HIV. However, HIV transmission by this route has been reported rarely (80-82). Transmission might theoretically occur either through biting or receiving a bite from an HIV-infected person. Biting an HIV-infected person, resulting in a break in the skin, exposes the oral mucous membranes to infected blood; being bitten by an HIV-infected person exposes nonintact skin to saliva. Saliva that is contaminated with infected blood poses a substantial exposure risk. Saliva that is not contaminated with blood contains HIV in much lower titers and constitutes a negligible exposure risk (83).
# Evaluation for Sexually Transmitted Infections, Hepatitis, and Emergency Contraception
Evaluation for sexually transmitted infections is important because these infections might increase the risk for acquiring HIV infection from a sexual exposure. In 1996, an estimated 5,042 new HIV infections were attributable to sexually transmitted infection at the time of HIV exposure (84). In addition, any sexual exposure that presents a risk for HIV infection might also place a patient at risk for acquiring other sexually transmitted infections, including hepatitis B. Prophylaxis for sexually transmitted disease, testing for hepatitis, and vaccination for hepatitis B (for those not immune) should be considered (85).
For women of reproductive capacity who have had genital exposure to semen, the risk for pregnancy also exists. In these instances, emergency contraception should be discussed with the potentially exposed patient.
# Recommendations for Use of Antiretroviral nPEP
A 28-day course of HAART is recommended for persons who have had nonoccupational exposure to blood, genital secretions, or other potentially infectious body fluids of a person known to be HIV infected when that exposure represents a substantial risk for HIV transmission (Figure 1) and when the person seeks care within 72 hours of exposure. When indicated, antiretroviral nPEP should be initiated promptly for the best chance of success.
Evidence from animal studies and human observational studies demonstrates that nPEP administered within 48-72 hours and continued for 28 days might reduce the risk for acquiring HIV infection after mucosal and other nonoccupational exposures. The sooner nPEP is administered after exposure, the more likely it is to interrupt transmission. Because HIV is an incurable transmissible infection that affects the quality and duration of life, HAART should be used to maximally suppress local viral replication that otherwise might occur in the days after exposure and potentially lead to a disseminated, established infection (11,86). One of the HAART combinations recommended for the treatment of persons with established HIV infection should be selected on the basis of adherence, toxicity, and cost considerations (Tables 2 and 3) (87,88).
No evidence indicates that any specific antiretroviral medication or combination of medications is optimal for use as nPEP. However, on the basis of the degree of experience with individual agents in the treatment of HIV-infected persons, certain agents and combinations are preferred. Preferred regimens are efavirenz plus lamivudine or emtricitabine, with zidovudine or tenofovir (a nonnucleoside-based regimen), and lopinavir/ritonavir (coformulated in one tablet as Kaletra®) plus zidovudine, with either lamivudine or emtricitabine (a protease inhibitor-based regimen). Several alternative regimens are possible (Table 2).
No evidence indicates that a three-drug HAART regimen is more likely to be effective than a two-drug regimen. The recommendation for a three-drug HAART regimen is based on the assumption that the maximal suppression of viral replication afforded by HAART (the goal in treating HIV-infected persons) will provide the best chance of preventing infection in a person who has been exposed. Clinicians and patients who are concerned about potential adherence and toxicity issues associated with a three-drug HAART regimen might consider the use of a two-drug regimen (i.e., a combination of two reverse transcriptase inhibitors). Regardless of the regimen chosen, the exposed person should be counseled about the potential associated side effects and adverse events that require immediate medical attention. The use of medications to treat symptoms (e.g., antiemetics or antimotility agents) might improve adherence in certain instances.
Although certain preliminary studies have evaluated the penetration of antiretroviral medications into genital tract secretions and tissues (89,90), evidence is insufficient to recommend a specific antiretroviral medication as most effective for nPEP. In addition, new antiretroviral medications might become available. As new medications and new information become available, these recommendations will be amended and updated.
When the source-person is available for interview, his or her history of antiretroviral medication use and most recent viral load measurement should be considered when selecting antiretroviral medications for nPEP. This information might help avoid prescribing antiretroviral medications to which the source-virus is likely to be resistant. If the source-person is willing, the clinician might consider drawing blood for viral load and resistance testing, the results of which might be useful in modifying the initial nPEP medications if they can be obtained promptly (91).
For persons who have had nonoccupational exposure to potentially infected body fluids of a person of unknown HIV infection status, when that exposure represents a substantial risk for HIV transmission (Figure 1) and when care is sought within 72 hours of exposure, no recommendations are made either for or against the use of antiretroviral nPEP. Clinicians should evaluate the risk for and benefits of this intervention on a case-by-case basis.
When a source-person is not known to be infected with HIV, the risk that the exposure will transmit HIV (and therefore the potential benefit of nPEP) is unknown. Prescribing antiretroviral medication in these cases might subject patients to risks that are not balanced by the potential benefit of preventing the acquisition of HIV infection. Judging whether the balance is appropriate depends entirely on the circumstances of the possible exposure (i.e., the risk that the source is HIV infected and the risk for transmission if the source is HIV infected) and is best determined through discussion between the clinician and the patient.
If the source-person is available for interview, additional information about risk history can be obtained and permission for an HIV test requested to assist in determining the likelihood of HIV exposure. When available, FDA-approved rapid HIV tests are preferable for obtaining this information as quickly as possible. These additional factors might assist in the decision whether to start or complete a course of nPEP. If data to clearly determine risk are not immediately available, clinicians might consider initiating nPEP while further assessments are being made and then stopping it when other information is available (e.g., the source-person is determined to be noninfected).
For persons whose exposure histories represent no substantial risk for HIV transmission (Figure 1) or who seek care >72 hours after potential nonoccupational HIV exposure, the use of antiretroviral nPEP is not recommended. When the risk for HIV transmission is negligible, limited benefit is anticipated from the use of nPEP. In addition, animal and human study data demonstrate that nPEP is less likely to prevent HIV transmission when initiated >72 hours after exposure. Because the risks associated with antiretroviral medications are likely to outweigh the potential benefit of nPEP in these circumstances, nPEP is not recommended for such exposures, regardless of the HIV status of the source. However, it cannot be concluded on the basis of the available data that nPEP will be completely ineffective when initiated >72 hours after exposure. Moreover, data do not indicate an absolute time after exposure beyond which nPEP will not be effective. When safer and more tolerable drugs are used, the risk-benefit ratio of providing nPEP >72 hours postexposure is more favorable. Therefore, clinicians might consider prescribing nPEP after exposures that confer a serious risk for transmission, even if the exposed person seeks care >72 hours postexposure if, in the clinician's judgment, the diminished potential benefit of nPEP outweighs the potential risk for adverse events from antiretroviral drugs.
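The three decision points above (substantiality of risk, time since exposure, and source HIV status) can be read as a simple triage algorithm. The sketch below is illustrative only and is not part of the DHHS recommendations; all names are hypothetical, and the clinician-judgment provisions discussed above (e.g., prescribing after 72 hours for selected serious exposures) are deliberately left out of the mechanical logic.

```python
# Illustrative sketch of the nPEP triage logic described above.
# Not part of the DHHS recommendations; names are hypothetical, and
# clinician judgment (e.g., nPEP >72 hours after a serious exposure)
# is intentionally not encoded here.

from enum import Enum

class Npep(Enum):
    RECOMMENDED = "28-day HAART course recommended"
    CASE_BY_CASE = "no recommendation for or against; weigh risks and benefits"
    NOT_RECOMMENDED = "nPEP not recommended"

def triage(substantial_risk: bool,
           hours_since_exposure: float,
           source_known_hiv_positive: bool) -> Npep:
    """Map one exposure to a recommendation category per the text above."""
    if not substantial_risk or hours_since_exposure > 72:
        # Negligible-risk exposures, or care sought >72 hours after
        # exposure, fall outside the recommended window.
        return Npep.NOT_RECOMMENDED
    if source_known_hiv_positive:
        return Npep.RECOMMENDED
    # Substantial-risk exposure within 72 hours, source status unknown.
    return Npep.CASE_BY_CASE
```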
# Considerations for All Patients Treated with Antiretroviral nPEP
# Use of Starter Packs
Patients might be under considerable emotional stress when seeking care after a potential HIV exposure and might not attend to, or retain, all the information relevant to making a decision about nPEP. Clinicians should give an initial prescription for 3-5 days of medication and schedule a follow-up visit to review the results of baseline HIV testing (if rapid tests are not used), provide additional counseling and support, assess medication side effects and adherence, and provide additional medication if appropriate (with an altered regimen if indicated by side effects or laboratory test results).
# Scientific Consultation
When clinicians are not experienced with using HAART, or when information from source-persons indicates the possibility of antiretroviral resistance, consultation with infectious disease or other HIV-care specialists, if immediately available, might be warranted before prescribing nPEP. Similarly, when considering prescribing nPEP for pregnant women or children, consultation with obstetricians or pediatricians might be advisable. However, if such consultation is not immediately available, initiation of nPEP should not be delayed; an initial nPEP regimen should be started and, if necessary, revised after consultation is obtained. Patients who seek nPEP might benefit from referral for psychological counseling that helps ease anxiety about possible HIV exposure, strengthens risk-reduction behaviors, and promotes adherence to nPEP regimens if prescribed.
# Facilitating Adherence
Adherence to antiretroviral medications can be challenging, even for 28 days. In addition to common side effects such as nausea and fatigue, each dose reminds the patient of his or her risk for acquiring HIV infection. Adherence has been reported to be especially poor among sexual assault survivors (92)(93)(94)(95)(96). Steps to maximize medication adherence include prescribing medications with fewer doses and fewer pills per dose, educating patients about the importance of adherence and about potential side effects, offering ancillary medications for side effects (e.g., anti-emetics) if they occur, and providing access to ongoing encouragement and consultation by phone or office visit.
# Follow-up Testing and Care
All patients seeking care after HIV exposure should be tested for the presence of HIV antibodies at baseline and at 4-6 weeks, 3 months, and 6 months after exposure to determine whether HIV infection has occurred. In addition, testing for sexually transmitted diseases, hepatitis B and C, and pregnancy should be offered (Table 4).
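As a worked example of this schedule, the sketch below computes target dates for the antibody tests from the exposure date. It is illustrative only: the function name and the exact day counts for "3 months" and "6 months" are assumptions, and the 4-6-week window is represented by its 6-week endpoint.

```python
# Illustrative only: target dates for follow-up HIV antibody testing,
# per the schedule above (baseline, 4-6 weeks, 3 months, 6 months).
# Function name and exact day counts are assumptions.

from datetime import date, timedelta

def followup_test_dates(exposure: date) -> dict:
    return {
        "baseline": exposure,
        "4-6 weeks": exposure + timedelta(weeks=6),   # later edge of window
        "3 months": exposure + timedelta(days=91),
        "6 months": exposure + timedelta(days=182),
    }

# Example: exposure on January 21, 2005
for label, when in followup_test_dates(date(2005, 1, 21)).items():
    print(f"{label}: {when.isoformat()}")
```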
Patients should be instructed about the signs and symptoms associated with acute retroviral infection (Table 5), especially fever and rash (97), and asked to return for evaluation if these occur during or after nPEP. Acute HIV infection is associated with high viral loads. However, clinicians should be aware that available assays might yield low viral-load results (e.g., <3,000 copies/mL) in noninfected persons. Such false-positive results can lead to misdiagnosis of HIV infection (98).
Transient, low-grade viremia has been observed both in macaques exposed to SIV (99) and in humans exposed to HIV (100) who were administered antiretroviral PEP and did not become infected. In certain cases, this outcome might represent aborted infection rather than false-positive test results, but this can be determined only through further study. For patients with clinical or laboratory evidence of acute HIV infection, continuing antiretroviral therapy for >28 days might be prudent because such early treatment (no longer prophylaxis) might slow the progression of HIV disease (101). Patients with acute HIV infection should be transferred to the care of HIV treatment specialists.
In addition, clinicians who provide nPEP should monitor liver function, renal function, and hematologic parameters as indicated by the prescribing information found in antiretroviral treatment guidelines (87,102,103), package inserts, and the Physicians' Desk Reference (Table 3). Unusual or severe toxicities from antiretroviral drugs should be reported to the manufacturer or FDA.
# HIV Prevention Counseling
The majority of persons who seek care after a possible HIV exposure do so because of failure to initiate or maintain effective risk-reduction behaviors. Notable exceptions are sexual assault survivors and children with community-acquired needlestick injuries.
Although nPEP might reduce the risk for HIV infection, it is not believed to be 100% effective. Therefore, patients should practice protective behaviors with sex partners (e.g., abstinence or consistent use of male condoms) or drug-use partners (e.g., avoidance of shared injection equipment) throughout the course of nPEP to avoid transmission to others if they become infected, and after nPEP to avoid future HIV exposures.
At follow-up visits, clinicians should assess their patients' needs for behavioral intervention, education, and services. This assessment should include frank, nonjudgmental questions about sexual behaviors, alcohol use, and illicit drug use. Clinicians should help patients identify ongoing risk issues and develop plans for improving their use of protective behaviors (104).
To help patients obtain indicated interventions and services, clinicians should be aware of local resources for high-quality HIV education and ongoing behavioral risk reduction, counseling and support, inpatient and outpatient alcohol and drug-treatment services, substance abuse treatment programs, and family and mental health counseling services, as well as support programs for HIV-infected persons. Information about publicly funded HIV prevention programs can be obtained from state or local health departments.
# Management of Source Persons
When source-persons are seen during the course of evaluating a patient for potential HIV exposure, clinicians should also assess the source-person's access to relevant medical care, behavioral intervention, and social support services. If needed care cannot be provided directly, clinicians should help source-persons obtain care in the community.
If a new diagnosis of HIV infection is made or evidence of another sexually transmitted infection is identified, the patient should be assisted in notifying his or her sexual and drug-use contacts. Assistance with confidential partner notification (without revealing the patient's identity) is available through local health departments.
# Reporting and Confidentiality
Because of the emotional, social, and potential financial consequences of possible HIV infection, clinicians should handle nPEP evaluations with the highest level of confidentiality. Confidential reporting of sexually transmitted infections and newly diagnosed HIV infections to health departments should take place as dictated by local law and regulations.
For cases of sexual assault, clinicians should document their findings and assist patients with notifying local authorities. HIV test results should be recorded separately from the findings of the sexual assault examination to protect patients' confidentiality in the event that medical records are later released for legal proceedings. Certain states and localities have special programs to provide reimbursement for medical therapy, including antiretroviral medication after sexual assault, and these areas might have specific reporting requirements. When the sexual abuse of a child is suspected or documented, the clinician should report it in compliance with state and local law and regulations.
# Considerations for Vulnerable Populations
# Pregnant Women and Women of Childbearing Potential
Considerable experience has been gained in recent years in the safe and appropriate use of antiretroviral medications during pregnancy, either for the benefit of the HIV-infected woman's health or to prevent transmission to newborns. To facilitate the selection of antiretroviral medications likely to be both effective and safe for the developing fetus, clinicians should consult DHHS guidelines (102) before prescribing nPEP for a woman who is or might be pregnant.
Because of potential teratogenicity, efavirenz should not be used in any nPEP regimen during pregnancy or among women of childbearing age at risk for becoming pregnant during the course of antiretroviral prophylaxis (Table 3). A protease inhibitor- or nucleoside reverse transcriptase inhibitor-based regimen should be considered in these circumstances. When efavirenz is prescribed to women of childbearing potential, they should be instructed about the need to avoid pregnancy. Because the effect of efavirenz on hormonal contraception is unknown, women using such contraception should be informed of the need to use an additional method (e.g., barrier contraception). In addition, because of reports of maternal and fetal mortality attributed to lactic acidosis associated with prolonged use of d4T in combination with ddI in HIV-infected pregnant women, this combination is not recommended for use in an nPEP regimen (105).
# Children
Potential HIV exposures in children occur most often by accident (e.g., needlesticks in the community or fights and playground incidents resulting in bleeding by an HIV-infected child) or by sexual abuse or assault (106). In a 1-year chart review from one hospital's pediatric emergency department, 10 children considered for nPEP were identified (six because of sexual assault and four because of needlestick injury). Eight began taking nPEP, but only two completed the 4-week course (63,107). An analysis of 9,136 reported acquired immunodeficiency syndrome cases in children identified 26 who were sexually abused with confirmed or suspected exposure to HIV infection (108).
The American Academy of Pediatrics has issued nPEP guidelines for pediatric patients (109). In addition, DHHS pediatric antiretroviral treatment guidelines (103) provide information about the use of antiretroviral agents in children. For young children who cannot swallow capsules or tablets, and to ensure appropriate dosing, clinicians might need to prescribe drugs for which pediatric formulations are available (Table 3). Adherence to the prescribed medications will depend on the involvement of, and support provided to, parents or guardians.
# Sexual Assault Survivors
Use of nPEP for sexual assault survivors has been widely encouraged both in the United States and elsewhere (56,94,110,111). Sexual assault is relatively common among women: 13% of a national sample of adult women reported having ever been raped (60% before age 18), and 5% reported having been raped more than once (112). Sexual assault is not uncommon among men. In one series from an emergency department, 5% of reported rapes involved men sexually assaulted by men (113). Males accounted for 11.6% of rapes reported among persons aged >12 years who responded to the National Crime Victimization Survey in 1999 (114). However, only three documented cases of HIV infection resulting from sexual assault have been published (94,115,116). In observational studies, HIV infections have been temporally associated with sexual assault (Personal communication, A. Wulfsohn, MD, Sunninghill Hospital, Gauteng, South Africa).
Studies have examined HIV infection rates for sexual assailants (117,118). The largest of these, an evaluation of men incarcerated in Rhode Island, determined that 1% of those convicted of sexual assault were HIV infected when they entered prison, compared with 3% of all prisoners and 0.3% of the general male population (119).
Sexual assault typically has multiple characteristics that increase the risk for HIV transmission if the assailant is infected. In one prospective study of 1,076 sexual assault cases, 20% were attacked by multiple assailants, 39% were assaulted by strangers, 83% of females were vaginally penetrated, and 17% overall were sodomized. Genital trauma was documented in 53% of those assaulted, and sperm or semen was detected in 48% (120). In another study, in which toluidine blue dye was used as an adjunct to naked-eye examination, 40% of assaulted women (70% of nulliparas) had detectable vaginal lacerations, compared with 5% of women examined after consensual sex (121).
Despite these risks and the establishment of multidisciplinary support services, sexual assault survivors often decline nPEP, and many who do take it do not complete the 28-day course. This pattern has been reported in several countries and several programs in North America. In Vancouver, 71 of 258 assault survivors accepted the 5-day starter pack of nPEP, 29 returned for additional doses, and eight completed 4 weeks of therapy (96). Those with the highest risk for HIV exposure (i.e., source known to be HIV infected, a homosexual or bisexual man, or an injection-drug user) were more likely to begin and complete nPEP.
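Expressed as step-to-step retention, the Vancouver figures above trace a steep drop-off at every stage; the short calculation below (illustrative only) makes the proportions explicit.

```python
# Step-to-step retention in the Vancouver nPEP cascade reported above.
cascade = [
    ("assault survivors seen", 258),
    ("accepted 5-day starter pack", 71),
    ("returned for additional doses", 29),
    ("completed 4 weeks of therapy", 8),
]
for (prev_label, prev_n), (label, n) in zip(cascade, cascade[1:]):
    print(f"{label}: {n}/{prev_n} ({n / prev_n:.0%})")
# accepted 5-day starter pack: 71/258 (28%)
# returned for additional doses: 29/71 (41%)
# completed 4 weeks of therapy: 8/29 (28%)
```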
Patients who have been sexually assaulted will benefit from supportive services to improve adherence to nPEP if it is prescribed, and from psychological and other support provided by sexual assault crisis centers. All sexually assaulted patients should be tested and administered prophylaxis for sexually transmitted infections (85), and women who might become pregnant should be offered emergency contraception (122).
# Inmates
Certain illegal behaviors that result in imprisonment (e.g., prostitution and injection-drug use) also might be associated with a higher prevalence of HIV infection among prison inmates than among the general population (119). However, studies indicate that the risk for becoming infected in prison is probably less than the risk outside prison (122)(123)(124)(125). Because sexual contact and injection-drug use are prohibited in jails and prisons, inmates who do experience such exposures might be unable or unwilling to report the behaviors to health-care providers.
Administrators and health-care providers working in correctional settings should develop and implement systems to make HIV education and risk-reduction counseling, nPEP, voluntary HIV testing, and HIV care confidentially available to inmates. Such programs will allow inmates to benefit from nPEP when indicated, facilitate treatment services for those with drug addiction, and assist in the identification and treatment of sexual assault survivors.
# Injection-Drug Users
A history of injection-drug use should not deter clinicians from prescribing nPEP if the exposure provides an opportunity to reduce the risk for consequent HIV infection. A survey of clinicians serving injection-drug users determined a high degree of willingness to provide nPEP after different types of potential HIV exposure (126).
In judging whether exposures are isolated, episodic, or ongoing, clinicians should consider that persons who continue to engage in risk behaviors (e.g., commercial sex workers or users of illicit drugs) might be practicing risk reduction (e.g., using condoms with every client, not sharing syringes, and using a new sterile syringe for each injection). Therefore, a high-risk exposure might represent an exceptional occurrence for such persons despite their ongoing risk behavior.
Injection-drug users should be assessed for their interest in substance abuse treatment and their knowledge and use of safe injection and sex practices. Patients desiring substance abuse treatment should be referred for such treatment. Persons who continue to inject or who are at risk for relapse to injection-drug use should be instructed in the use of a new sterile syringe for each injection and the importance of avoiding the sharing of injection equipment. In areas where programs are available, health-care providers should refer such patients to appropriate sources of sterile injection equipment.
# Conclusion
Accumulated data from animal and human clinical and observational studies demonstrate that antiretroviral therapy initiated as soon as possible within 48-72 hours of sexual, injection-drug-use, and other substantial nonoccupational HIV exposure and continued for 28 days might reduce the likelihood of transmission. Because of these findings, DHHS recommends the prompt initiation of nPEP with HAART when persons seek care within 72 hours after exposure, the source is known to be HIV infected, and the exposure event presents a substantial risk for transmission. When the HIV status of the source is not known and the patient seeks care within 72 hours after exposure, DHHS does not recommend for or against nPEP but encourages clinicians and patients to weigh the risks and benefits on a case-by-case basis. When the transmission risk is negligible or when patients seek care >72 hours after a substantial exposure, nPEP is not recommended; however, clinicians might consider prescribing nPEP for patients who seek care >72 hours after a substantial exposure if, in their judgment, the diminished potential benefit of nPEP outweighs the potential risk for adverse events from antiretroviral medications. These recommendations are intended for the United States and might not apply in other countries.
# Continuing Education Activity Sponsored by CDC
# Goal and Objectives
This MMWR describes the potential risks and benefits of antiretroviral postexposure prophylaxis after nonoccupational exposures to human immunodeficiency virus (HIV). These recommendations were developed by CDC staff in collaboration with scientists, public health officials, clinicians, ethicists, members of affected communities, and representatives from professional associations and industry. The goal of this report is to provide information on which to base decisions regarding postexposure prophylaxis after a nonoccupational exposure to HIV. After completing this educational activity, the reader should be able to 1) describe the characteristics of a potential HIV exposure; 2) describe situations in which postexposure prophylaxis is most likely to be beneficial; 3) describe sources for obtaining information on antiretroviral regimens; and 4) describe appropriate follow-up schedules for persons who are prescribed antiretroviral postexposure prophylaxis.
To receive continuing education credit, please answer all of the following questions.
# Which of the following lists of exposure types are correctly ordered from greatest risk of infection to least risk of infection?
A. Insertive anal is greater than insertive oral, which is greater than insertive vaginal.
B. Receptive anal is greater than receptive vaginal, which is greater than receptive oral.
C. Insertive anal is greater than receptive anal, which is greater than receptive oral.
D. None of the above.
"id": "523154fde65d8693dbaf17514e0bf1841ba6f3af",
"source": "cdc",
"title": "None",
"url": "None"
} |
These revised recommendations of the Immunization Practices Advisory Committee update previous recommendations (MMWR 1978;27:231-3). They include information on a newly licensed oral live-attenuated typhoid vaccine that was not available when the previous recommendations were published.

# INTRODUCTION
The incidence of typhoid fever declined steadily in the United States from 1900 to 1960 and has since remained at a low level. From 1975 through 1984, the average number of cases reported annually was 464. During that period, greater than 50% of cases occurred among patients greater than or equal to 20 years of age; 62% of reported cases occurred among persons who had traveled to other countries, compared with 33% of reported cases from 1967 through 1976.

TYPHOID AND PARATYPHOID A AND B VACCINES

Two vaccines are generally available for civilian use in the United States: 1) a newly licensed oral live-attenuated vaccine (Vivotif, manufactured from the Ty21a strain of Salmonella typhi by the Swiss Serum and Vaccine Institute) and 2) a parenteral heat-phenol-inactivated vaccine (manufactured by Wyeth) that has been widely used for many years. In controlled field trials conducted among Chilean schoolchildren, three doses of the Ty21a vaccine were shown to reduce laboratory-confirmed infection by 67% for at least 4 years (95% confidence interval = 47%-79%). In a subsequent trial, a statistically significant decrease in the incidence of clinical typhoid fever occurred among persons receiving four doses of vaccine compared with two or three doses. Since no placebo group was included in this trial, vaccine efficacy could not be calculated. The mechanism by which Ty21a vaccine confers protection is unknown; however, the vaccine does elicit a humoral response. Secondary transmission of vaccine organisms does not occur because viable organisms are not shed in the stool of vaccinees.
In two field trials involving a primary series of two doses of heat-phenol-inactivated typhoid vaccine, similar to the currently available parenteral vaccine, vaccine efficacy ranged from 51%-76%. Vaccine efficacy for an acetone-inactivated parenteral vaccine, available only to the armed forces, ranges from 66%-94%.
Parenteral heat-phenol-inactivated vaccine and oral live-attenuated Ty21a vaccine have never been directly compared in a field trial, but the live-attenuated vaccine has similar efficacy to the heat-phenol-inactivated vaccine and results in fewer adverse reactions. Experience is limited with the use of the Ty21a vaccine for persons from areas without endemic disease who travel to endemic-disease regions, and for children less than 5 years of age. Also, no experience has been reported regarding its use for persons previously vaccinated with parenteral vaccine.
Vaccines against paratyphoid A and B are not licensed for use in the United States. The effectiveness of paratyphoid A vaccine has never been established, and field trials have shown that the small amount of paratyphoid B antigens contained in "TAB" vaccines (vaccines combining typhoid and paratyphoid A and B antigens) is not effective. Combining paratyphoid A and B antigens with typhoid vaccine increases the risk of vaccine reaction. For these reasons, only monovalent preparations of typhoid vaccine containing S. typhi antigens should be used.

VACCINE USAGE

Routine typhoid vaccination is no longer recommended in the United States. However, selective vaccination is indicated for the following groups:
--Travelers to areas that have a recognized risk of exposure to S. typhi. Risk is greatest for travelers to developing countries (especially Latin America, Asia, and Africa) who have prolonged exposure to potentially contaminated food and drink. Such travelers need to be cautioned that typhoid vaccination is not a substitute for careful selection of food and drink, since typhoid vaccines are not 100% effective, and the protection the vaccine offers can be overwhelmed by large inocula of S. typhi.
--Persons with intimate exposure to a documented typhoid fever carrier, such as occurs with continued household contact.
--Workers in microbiology laboratories who frequently work with S. typhi.
Routine vaccination of sewage sanitation workers is warranted only in areas with endemic typhoid fever. No evidence has shown that typhoid vaccine is useful in controlling common-source outbreaks. Also, the use of typhoid vaccine is not indicated for persons attending rural summer camps or in areas in which natural disasters, such as floods, have occurred.

Primary Vaccination
The following dosages of typhoid vaccines are recommended, based on the experience in field trials:

Adults and children greater than or equal to 10 years of age
--Oral live-attenuated Ty21a vaccine: one enteric-coated capsule taken on alternate days, to a total of four capsules. Each capsule should be taken with cool liquid no warmer than 37 C, approximately 1 hour before a meal. The capsules must be kept refrigerated, and all four doses must be taken to achieve maximum efficacy.
Or
--Parenteral inactivated vaccine: 0.5 ml subcutaneously, given on two occasions, separated by greater than or equal to 4 weeks.

Children less than 10 years of age
--Oral live-attenuated Ty21a vaccine*: one enteric-coated capsule taken on alternate days, to a total of four capsules. Each capsule should be taken with cool liquid no warmer than 37 C, approximately 1 hour before a meal. The capsules must be kept refrigerated, and all four doses must be taken to achieve maximum efficacy.
Or
--Parenteral inactivated vaccine: 0.25 ml subcutaneously, given on two occasions, separated by greater than or equal to 4 weeks.
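For quick reference, the primary-series dosing above can be collected into a lookup table, for example as the basis of a clinic scheduling aid. The sketch below transcribes the text; the data structure itself and its key names are assumptions, not part of the recommendations.

```python
# Primary vaccination series from the text above, as a lookup table.
# Illustrative only; structure and key names are assumptions.

PRIMARY_SERIES = {
    ("oral Ty21a", "any age group listed"): (
        "1 enteric-coated capsule on alternate days x 4 capsules; take with "
        "cool liquid (no warmer than 37 C) ~1 hour before a meal; keep "
        "capsules refrigerated; all 4 doses required for maximum efficacy"
    ),
    ("parenteral inactivated", ">=10 years"): (
        "0.5 ml subcutaneously x 2 doses, >=4 weeks apart"
    ),
    ("parenteral inactivated", "<10 years"): (
        "0.25 ml subcutaneously x 2 doses, >=4 weeks apart"
    ),
}

print(PRIMARY_SERIES[("parenteral inactivated", "<10 years")])
```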
If parenteral vaccine is used and there is insufficient time for two doses of vaccine separated by greater than or equal to 4 weeks, common practice has been to give three doses of the parenteral vaccine in the volumes already listed at weekly intervals, although this schedule may be less effective.

Booster Doses

Under conditions of continued or repeated exposure to S. typhi, booster doses of vaccine are required to maintain immunity after vaccination with parenteral typhoid vaccines. If parenteral vaccine is used, booster doses should be given every 3 years. Even if greater than 3 years have elapsed since the prior vaccination, a single booster dose of parenteral vaccine is sufficient. When the heat-phenol-inactivated vaccine is used, less reaction follows booster vaccination by the intradermal route than by the subcutaneous route. (The acetone-inactivated vaccine should not be given by the intradermal route because of the potential for severe local reactions.) No experience has been reported using oral live-attenuated vaccine as a booster; however, using the primary series of four doses of Ty21a as a booster for persons previously vaccinated with parenteral vaccine is a reasonable alternative to administration of a parenteral booster dose. The following routes and dosages of parenteral vaccine for booster vaccination can be expected to produce similar booster antibody responses:

Adults and children greater than or equal to 10 years of age
One dose, 0.5 ml subcutaneously or 0.1 ml intradermally.

Children 6 months to 10 years of age
One dose, 0.25 ml subcutaneously or 0.1 ml intradermally.

The optimal booster schedule for persons who have received Ty21a vaccine has not yet been determined; however, the longest reported follow-up study of vaccine trial subjects showed continued efficacy 5 years after vaccination. The manufacturer of Ty21a recommends revaccination with the entire four-dose series every 5 years. This recommendation may change as more data become available on the duration of protection produced by the Ty21a vaccine.

PRECAUTIONS AND CONTRAINDICATIONS

During volunteer studies and field trials with oral live-attenuated Ty21a vaccine in enteric-coated tablets, side effects were rare and consisted of abdominal discomfort, nausea, vomiting, and rash or urticaria. In safety trials, monitored adverse reactions occurred with equal frequency among groups receiving vaccine and placebo.
Parenteral inactivated vaccines produce several systemic and local adverse reactions, including fever (occurring among 14%-29% of vaccinees), headache (9%-30% of vaccinees), and severe local pain and/or swelling (6%-40% of vaccinees); 13%-24% of vaccinees missed work or school because of adverse reactions. More severe reactions have been sporadically reported, including hypotension, chest pain, and shock. Administration of the acetone-inactivated vaccine by jet-injector gun results in a greater incidence of local reactions and is not recommended.
The only contraindication to parenteral typhoid vaccination is a history of severe local or systemic reactions following a previous dose. No experience has been reported with either parenteral inactivated vaccine or oral live-attenuated Ty21a vaccine among pregnant women. Live-attenuated Ty21a should not be used among immunocompromised persons, including those known to be infected with human immunodeficiency virus. Parenteral inactivated vaccine presents a theoretically safer alternative for this group.
"id": "d852aa4116baaca414d3ebb2fbbc0d96b2d6f628",
"source": "cdc",
"title": "None",
"url": "None"
} |
CDC recognizes the important benefits of breastfeeding for both the mother and child and considered the adverse health and developmental effects associated with lead exposure compared with those associated with not breastfeeding. The adverse developmental effects of an infant blood lead level ≥5 µg/dL were of greater concern than the risks of not breastfeeding. Thus, CDC encourages mothers with blood lead levels <40 µg/dL to breastfeed; mothers with higher blood lead levels are encouraged to pump and discard their breast milk until their blood lead levels drop below 40 µg/dL. These recommendations are made for the U.S. population and are not appropriate in countries where infant mortality from infectious diseases is high. Specific recommendations regarding appropriate follow-up blood lead testing of the mother and infant are provided.

[Table: Summary of Public Health Actions Based on Maternal and Infant Blood Lead Levels, by blood lead level (µg/dL), for all women of childbearing age, pregnant women, lactating women, neonates (<1 month of age), and infants (1-6 months). Actions range from anticipatory guidance and follow-up testing at defined intervals (24 hours to 3 months) through environmental assessment and abatement of lead paint hazards, medical removal from occupational exposure, and health department notification, to consideration of chelation therapy in consultation with an expert in lead poisoning.]
# PREFACE
Lead exposure during pregnancy and breastfeeding can result in lasting adverse health effects independent of lead exposure during other life stages. However, to date there has been limited guidance available for clinicians and the public health community regarding the screening and management of pregnant and lactating women exposed to high levels of lead. Recognizing the need for national recommendations, the Centers for Disease Control and Prevention and the Advisory Committee on Childhood Lead Poisoning Prevention convened a workgroup of recognized experts to review the existing evidence for adverse effects of past and current maternal lead exposure on maternal health and fertility and on the developing fetus, infant, and child in prenatal and postnatal states and to propose evidence-based strategies for intervention.
These Guidelines for the Identification and Management of Lead Exposure in Pregnant and Lactating Women are based on scientific data and practical considerations regarding preventing lead exposure during pregnancy, assessment and blood lead testing during pregnancy, medical and environmental management to reduce fetal exposure, breastfeeding, and follow-up of infants and children exposed to lead in utero.
The guidelines also outline a research agenda that will provide crucial information for future efforts to prevent and treat lead exposure during pregnancy and lactation. Further research is needed for a better understanding of lead's effect on pregnancy outcomes and infant development; lead kinetics across the placenta and in breast milk and their relationship to long-term health effects; genetic susceptibility to damage from lead; and the pharmacokinetics, effectiveness, and safety of chelating agents in the pregnant woman. Research is also needed to address important clinical and public health needs, including validation of risk questionnaires for pregnant women, optimal timing of blood lead testing, and effective strategies for identification and treatment of pica in pregnant women.
I wish to thank the members of the Advisory Committee on Childhood Lead Poisoning Prevention, members of the Lead in Pregnancy Workgroup, and consultants who developed this document and acknowledge their contribution to the health of the nation's children. This document was voted on and approved, with one abstention, at the October 21-22, 2009, meeting of the Advisory Committee on Childhood Lead Poisoning Prevention. I believe this document represents a major advance in our efforts to prevent lead exposure in those most vulnerable.
# EXECUTIVE SUMMARY
Despite improvements in environmental policies and significant reductions in U.S. average blood lead levels, lead exposure remains a concern for pregnant and lactating women, particularly among certain population subgroups at increased risk for exposure.
Recent National Health and Nutrition Examination Survey (NHANES) estimates suggest that almost 1% of women of childbearing age (15-44 years) have blood lead levels greater than or equal to 5 µg/dL (Centers for Disease Control and Prevention 2008, unpublished data). As documented in these guidelines, there is good evidence that maternal lead exposure during pregnancy can cause fetal lead exposure and can adversely affect both maternal and child health across a wide range of maternal exposure levels.
However, guidance for clinicians regarding screening and managing pregnant and lactating women exposed to lead has not kept pace with the scientific evidence. There are currently no national recommendations by any medical or nursing professional association that cover lead risk assessment and management during pregnancy and lactation. Currently, New York State, New York City, and Minnesota are the only jurisdictions that have issued lead screening guidelines and follow-up requirements for pregnant women by physicians or other providers of medical care. The lack of national recommendations about testing pregnant women and managing those identified with lead exposure above background levels has created confusion in the clinical and public health sectors. In response to this need, the Centers for Disease Control and Prevention (CDC) Advisory Committee on Childhood Lead Poisoning Prevention (ACCLPP) convened the Lead and Pregnancy Work Group to review the existing evidence for adverse effects of past and current maternal lead exposure on maternal health and fertility and on the developing fetus, infant, and child in prenatal and postnatal states. This document presents ACCLPP's summary of the evidence to date from human studies, conclusions, and CDC recommendations regarding
- prevention of lead exposure for pregnant and lactating women,
- risk assessment and blood lead testing of pregnant women,
- medical and environmental management,
- breastfeeding, and
- follow-up of infants and children of mothers with blood lead levels exceeding national norms.
In instances where there is an absence of clear and convincing evidence, recommendations are based on the combined clinical, practical, and research experience of ACCLPP and work group members. This document also identifies research, policy, and health education needs to inform policy and improve care of pregnant and lactating women with lead exposure above background levels. The guidelines do not address all women of childbearing age, nor do they address male reproductive health issues associated with lead exposure.
The evidence that prenatal lead exposure impairs children's neurodevelopment, placing them at increased risk for developmental delay, reduced IQ, and behavioral problems, is convincing. The research also suggests, though inconclusively, that fetal lead exposure at levels found in the United States results in low birth weight and in adverse health conditions in adults who were exposed to lead in utero. Further research is needed for a better understanding of several biomedical issues, including pregnancy outcomes and infant development associated with maternal lead exposure during pregnancy; lead kinetics across the placenta and in breast milk and their relationship to long-term health effects; genetic susceptibility to damage from lead; and the pharmacokinetics and effectiveness of chelating agents in the pregnant woman. Research is also needed to address important clinical and public health needs, such as validation of risk questionnaires for pregnant women, optimal timing of blood lead testing during pregnancy, and effective strategies for identification and treatment of pica in pregnant women.
This document provides guidance based on current knowledge regarding blood lead testing and follow-up care for pregnant and lactating women with lead exposure above background levels. Because there is no apparent threshold below which adverse effects of lead do not occur, CDC has not identified an allowable exposure level, level of concern, or any other bright line intended to connote a safe or unsafe level of exposure for either mother or fetus. Instead, CDC is applying public health principles of prevention in recommending follow-up blood lead testing and interventions when prudent. These guidelines recommend follow-up activities and interventions beginning at blood lead levels (BLLs) ≥5 µg/dL in pregnant women. Unlike the BLL level of concern of 10 µg/dL for children, which is a communitywide action level, a BLL of 5 µg/dL in pregnant women serves a different purpose: it flags the occurrence of prior or ongoing lead exposure above background levels, which may not otherwise be recognized. The vulnerability of a developing fetus to adverse effects and the possibility of preventing additional exposures postnatally justify intervention for pregnant women showing evidence of lead exposure above background levels.
CDC does not recommend blood lead testing of all pregnant women in the United States. State or local public health departments should identify populations at increased risk for lead exposure and provide community-specific risk factors to guide clinicians in determining the need for population-based blood lead testing. Routine blood lead testing of pregnant women is recommended in clinical settings that serve populations with specific risk factors for lead exposure. Health care providers serving lower-risk communities should consider the possibility of lead exposure in individual pregnant women by evaluating risk factors for exposure as part of a comprehensive occupational, environmental, and lifestyle health risk assessment, and should perform blood lead testing if a single risk factor is identified. Assessment for lead exposure, based on risk factor questionnaires or blood lead testing, should take place at the earliest contact with the pregnant patient.
For all patients, but especially those with known lead exposures, health care providers should provide guidance regarding sources of lead and help identify potential sources of lead in the patient's environment. Risk factors for lead exposure above background levels in pregnant women differ from those described in young children. Important risk factors for lead exposure in pregnant women include recent immigration, pica practices, occupational exposure, nutritional status, culturally specific practices such as the use of traditional remedies or imported cosmetics, and the use of traditional lead-glazed pottery for cooking and storing food. Lead-based paint is less likely to be an important exposure source for pregnant women than it is for children, except during renovation or remodeling in older homes. Pregnant women with blood lead concentrations of 10 µg/dL or higher should be removed from occupational lead exposure.

Follow-up testing; increased patient education; and environmental, nutritional, and behavioral interventions are indicated for all pregnant women with blood lead levels greater than or equal to 5 µg/dL in order to prevent undue exposure to the fetus and newborn. Since lead exposure at these levels affects only approximately 1% of U.S. women of childbearing age, the recommendations in this guidance document should not significantly affect most individuals or clinical practices.
The essential activity in management of pregnant women with blood lead levels ≥5 µg/dL is removal of the lead source, disruption of the route of exposure, or avoidance of the lead-containing substance or activity. Source identification beyond obtaining a thorough environmental and occupational history should be conducted for BLLs ≥15 µg/dL in collaboration with the local health department, which will conduct an environmental investigation of the home environment in most jurisdictions and an investigation of the work environment (in some jurisdictions). Women who engage in pica behavior, regardless of the substance consumed, may benefit from nutritional counseling. Pregnant and lactating women with a current or past BLL ≥5 µg/dL should be assessed for the adequacy of their diet and provided with prenatal vitamins and nutritional advice emphasizing adequate calcium and iron intake. Chelation therapy during pregnancy or early infancy may be warranted in certain circumstances where the maternal or neonatal blood lead level is ≥45 µg/dL, in consultation with an expert in lead poisoning. Insufficient data exist regarding the advisability of chelation for pregnant women with BLLs <45 µg/dL.
- Identifying pregnant women with a history of lead poisoning or who are currently exposed to lead above background levels and preventing additional lead exposure can help prevent adverse health outcomes in these children.
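Pulling together the cut-points from the preceding paragraphs (including the occupational-removal threshold noted earlier), the sketch below lays out the tiered actions keyed to a pregnant woman's blood lead level. It is illustrative only: the names are hypothetical, the action strings compress the text, and the sketch is no substitute for the full guidelines or clinical judgment.

```python
# Tiered actions by maternal blood lead level (µg/dL), compressed from
# the text above. Illustrative only; not a clinical decision tool.

def actions_for_pregnant_bll(bll: float) -> list:
    actions = []
    if bll >= 5:
        actions += [
            "follow-up blood lead testing",
            "patient education; environmental, nutritional, and behavioral "
            "interventions",
            "assess diet; prenatal vitamins emphasizing calcium and iron",
        ]
    if bll >= 10:
        actions.append("removal from occupational lead exposure")
    if bll >= 15:
        actions.append("source investigation in collaboration with the "
                       "local health department")
    if bll >= 45:
        actions.append("consider chelation therapy in consultation with "
                       "an expert in lead poisoning")
    return actions

print(actions_for_pregnant_bll(18))  # mid-tier example
```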
Despite improvements in environmental policies and significant reductions in U.S. average population blood lead levels, lead exposure remains a concern for pregnant and lactating women in certain population subgroups at increased risk for exposure. There is increasing awareness that unintended exposures to environmental contaminants, such as lead, adversely affect maternal and infant health, including the ability to become pregnant, maintain a healthy pregnancy, and have a healthy baby. In the United States, women of childbearing age represent approximately 42% of the total population (American Community Survey 2004), and at any given time almost 9% are pregnant (Crocetti et al. 1990). In the NHANES survey, the 95th percentile for blood lead levels among women aged 15-49 was 2.4 micrograms per deciliter (µg/dL). As Figure 1-1 indicates, blood lead levels among women aged 15-49 have dropped substantially since the 1976-1980 NHANES. Recent NHANES estimates suggest that almost 1% of women of childbearing age (15-49 years) have blood lead levels greater than or equal to 5 µg/dL (Centers for Disease Control and Prevention 2008, unpublished data).
Lead exposure remains a public health problem for subpopulations of women of childbearing age and for the developing fetus and nursing infant for several important reasons. First, prenatal lead exposure has known influences on maternal health and on infant birth and neurodevelopmental outcomes (Bellinger 2005). Research findings suggest that prenatal lead exposure can adversely affect maternal and child health across a wide range of maternal exposure levels. In addition, adverse effects of lead are being identified at lower levels of exposure than previously recognized in both child and adult populations.
Second, bone lead stores are mobilized during periods of increased bone turnover, such as pregnancy and lactation. Over 90% of lead in the adult human body is stored in bone, and cumulative lead stores may be redistributed from bone into blood during these periods of heightened bone turnover. Because bone lead stores persist for decades, women and their infants may be at risk for continued exposure long after exposure to external environmental sources has been terminated.
For the purposes of the review of existing scientific literature, the work group was divided into three subgroups: Prevalence, Risk, and Screening; Maternal, Pregnancy, and Child Outcomes; and Management, Treatment, and Other Interventions. The subgroups were asked to review the literature, summarize findings, and address the issues outlined in Appendix II. These guidelines do not include findings from animal studies, except when there are limited human data and consistent findings confirmed from multiple animal studies. This document presents ACCLPP's summary of the evidence, provides guidance for preventing and treating lead exposure in pregnant and lactating women, and identifies research, policy, and education needs to improve health outcomes and care provided to pregnant women and their infants. These guidelines do not address all women of childbearing age, nor do they address male reproductive health issues associated with lead exposure.

[Figure 1-1: Blood lead levels among U.S. women of childbearing age, by survey period (NHANES II 1976-1980; NHANES III 1988 and 1991-1994; NHANES 1999 onward).]

- For centuries, exposure to high concentrations of lead has been known to pose health hazards. Recent evidence suggests that chronic low-level lead exposure also has adverse health effects in both adults and children, and no blood lead threshold level for these effects has been identified.
- CDC has not identified an allowable exposure level, level of concern, or any other bright line intended to connote a safe or unsafe level of exposure for either mother or fetus. Instead, CDC is applying public health principles of prevention to intervene when prudent.
- Epidemiologic and experimental evidence suggest that lead is a potent developmental toxicant, but many details regarding lead's mechanism of action have not been determined.
- Recent epidemiologic cohort studies suggest that prenatal lead exposure, even with maternal blood lead levels below 10 µg/dL, is inversely related to fetal growth and neurodevelopment independent of the effects of postnatal exposure, though the exact mechanism(s) by which low-level lead exposure, whether incurred prenatally or postnatally, might adversely affect child development remains uncertain.
- Lead may adversely impact sexual maturation in the developing female and may reduce fertility, but the scientific evidence is limited.
- Lead exposure has been associated with increased risk for gestational hypertension, but the magnitude of the effect, the exposure level at which risk begins to increase, and whether risk is more associated with acute or cumulative exposure, remain uncertain.
- Evidence is limited to support an association between blood lead levels from 10-30 µg/dL and spontaneous abortion. There are also few and inconsistent studies on the association between blood lead levels and preterm delivery.
- The available data are inadequate to establish the presence or absence of an association between maternal lead exposure and major congenital anomalies in the fetus.
# INTRODUCTION
For centuries, exposure to high concentrations of lead has been known to pose health hazards. High levels of exposure can result in delirium, seizures, stupor, coma, or even death. Other overt signs and symptoms may include hypertension, peripheral neuropathy, ataxia, tremor, headache, loss of appetite, weight loss, fatigue, muscle and joint aches, changes in behavior and concentration, gout, nephropathy, lead colic, and anemia. In general, symptoms tend to increase with increasing blood lead levels. A substantial body of recent epidemiologic and toxicologic research demonstrates that multiple health effects can occur at low to moderate blood lead levels at which harm was previously not recognized. Health effects of chronic low-level exposure in adults include cognitive decline, hypertension and other cardiovascular effects, decrements in renal function, and adverse reproductive outcomes (Agency for Toxic Substances and Disease Registry 2007).
This chapter focuses on the effects of maternal lead exposure on reproductive health, maternal health, pregnancy outcome, infant growth, and child neurodevelopment. Although the studies described in this chapter focus on maternal exposures, paternal exposures may also influence reproductive outcomes. Issues related to male-mediated reproductive toxicity of lead have been reviewed elsewhere (Apostoli et al. 1998; Jensen et al. 2006). In these guidelines, the discussion of scientific literature focuses on findings in humans. However, there also exists an extensive body of literature on the health effects of lead in experimental animals, which, while not cited, generally supports the human data. The reader is referred to other sources (Agency for Toxic Substances and Disease Registry 2007; U.S. Environmental Protection Agency 2006) for recent reviews of the experimental animal data.
An area of active study is the relationship between toxic exposures (such as lead) and fetal programming of growth and chronic disease. According to the Barker hypothesis (Barker 1990), now known more broadly as "fetal origins of adult disease," poor development in utero (for example, low birth weight) increases the risk for obesity, hypertension, and cardiovascular disease during adulthood (Barker 1995; Khan et al. 2003). These epidemiologic findings highlight the importance of the intrauterine environment and are consistent with experimental evidence of long-term "programming" in early life. Because exposure to developmental toxicants, including lead, is associated with low birth weight, lead exposure to the fetus may increase the risk for later cardiovascular disease. Evidence supporting the fetal origins hypothesis is mounting rapidly (Ingelfinger and Schnaper 2005). However, evidence of effects of in utero lead exposure on adult disease is currently too limited to provide conclusive information.
# IMPACT OF LEAD EXPOSURE ON SEXUAL MATURATION AND FERTILITY
Few studies have examined possible lead-related effects on sexual maturation and fertility. Delay in puberty is an important yet understudied health outcome that may be associated with relatively low blood lead levels. Two studies have examined this outcome using cross-sectional data from the third NHANES (NHANES III). Selevan et al. (2003) analyzed blood lead and pubertal development by race in girls 8-18 years of age. Blood lead levels as low as 3 µg/dL were associated with 2- to 6-month delays in Tanner stage measurements (breast and pubic-hair development) and menarche in African-American and Mexican-American girls, while non-Hispanic white girls experienced non-statistically significant delays in all pubertal measures. The second analysis found that higher blood lead levels were significantly associated with delayed attainment of menarche and pubic hair development, but not breast development, even after adjustment for race/ethnicity, age, family size, residence, income, and body mass index. The cross-sectional design of NHANES III limits the ability to assess the temporal relation between blood lead and markers of puberty.
The studies on time-to-pregnancy associated with lead exposure have not been conclusive. One study of time-to-pregnancy did not suggest adverse effects of lead on fecundity at maternal blood lead concentrations less than 29 µg/dL. However, above this level, an association with longer time-to-pregnancy was found, although this finding was based on only eight subjects (Sallmen et al. 1995). In a study of environmental lead exposure and reproductive health in Mexico City, no association was observed between maternal blood lead levels (mean = 9 µg/dL) and time-to-pregnancy in the first year (Guerra-Tamayo et al. 2003). However, in the subset of women with blood lead levels above 10 µg/dL, the likelihood of not achieving pregnancy after one year was five times higher (95% confidence interval 1.9-19.1) compared to women with blood lead levels below 10 µg/dL.
# Summary of Evidence: Sexual Maturation and Fertility
Although studies are limited, there is some suggestion that blood lead at relatively low levels may lead to alterations in onset of sexual maturation and reduced fertility. These findings underscore the importance of considering sensitive markers of human fecundity in relation to lead exposure and should be confirmed in studies that can address the methodologic limitations of previous research.
# IMPACT OF LEAD EXPOSURE ON MATERNAL HYPERTENSION DURING PREGNANCY
There is some evidence that maternal physiologic parameters in pregnancy can be modulated by low levels of lead exposure (Tabacova et al. 1994;Takser et al. 2005). However, the definitive relationship between lead exposure and maternal health outcomes in pregnancy is unclear. Lead is an established risk factor for hypertension in adults (Hertz-Picciotto and Croft 1993). Hypertension is one of the most common complications of pregnancy. There is substantial evidence that lead damages the vascular endothelium (Vaziri and Sica 2004) and that endothelial dysfunction is an important mediator of hypertension and preeclampsia in pregnancy (Karumanchi et al. 2005).
The most widely used classification of high blood pressure in pregnancy is that of the National High Blood Pressure Education Program Working Group (2000). This classification distinguishes between new hypertension arising during the pregnancy after 20 weeks (gestational hypertension) and preexisting hypertension (chronic hypertension).
It is important to differentiate between non-proteinuric hypertension and hypertension plus proteinuria (preeclampsia), as adverse clinical outcomes are more closely related to the latter. Severe hypertension, usually defined as a systolic blood pressure of ≥180 mm Hg or a diastolic blood pressure of ≥110 mm Hg, has been associated with adverse maternal and perinatal outcomes even in the absence of proteinuria.
# Gestational Hypertension
Hypertension in pregnancy is defined as a systolic blood pressure of 140 mm Hg or higher or a diastolic pressure of 90 mm Hg or higher that occurs after 20 weeks gestation in a woman with previously normal blood pressure. Increasing levels of lead in blood have been associated with gestational hypertension. Among 3,851 women delivering at a Boston hospital from 1979-1981, the incidence of pregnancy hypertension and elevated blood pressure at delivery increased significantly as blood lead increased (mean blood lead 6.9 ± 3.3 µg/dL). During delivery, lead levels correlated with both systolic (Pearson r = 0.081, p = 0.0001) and diastolic (r = 0.051, p = 0.002) blood pressure. Using a reference level of 0.7 µg/dL, the relative risk doubled as the blood lead level approached 15 µg/dL. There was no association, however, between blood lead level and risk for preeclampsia in this study (Rabinowitz et al. 1987). Rothenberg et al. (1999a) found that blood lead was a statistically significant predictor of maternal blood pressure among 1,627 women immigrants (mean blood lead 2.3 µg/dL) but not among nonimmigrants (mean blood lead 1.9 µg/dL).
In a cross-sectional analysis of third trimester primigravid women in Malta (N = 143), investigators compared normotensive women to those with gestational hypertension (Magri et al. 2003). Those with hypertension (mean blood lead 9.6 ± 6 µg/dL, N = 30) had significantly higher blood lead levels compared to normotensive controls (mean blood lead 5.8 ± 3 µg/dL, N = 93). A study of women at gestational ages ranging from 30-41 weeks in Tehran, Iran, assessed the relationship between blood lead levels and gestational hypertension (Vigeh et al. 2004). Postpartum blood lead levels were significantly higher among 55 cases with hypertension (mean 5.7 ± 2.0 µg/dL) in comparison to 55 age-matched normotensive controls (mean 4.8 ± 1.9 µg/dL).
The prevalence of gestational hypertension has been shown to be increased even at blood lead levels less than 5 µg/dL. One study followed a cohort of 705 women aged 12-34 years who presented for prenatal care at one of three clinics in New Jersey, with a mean (standard error) blood lead level of 1.2 ± 0.03 μg/dL, and found maternal blood lead significantly associated with gestational hypertension.
Associations have also been found between gestational hypertension and bone lead. A prospective cohort study enrolled 1,006 women aged 16-44 years during their third trimester in south central Los Angeles. This study included postpartum measures of tibia and calcaneus bone lead in addition to maternal blood lead levels. The investigators found that each 10 µg/g increase in calcaneus bone lead (range -30.6 to 49.9 μg/g) was associated with an almost two-fold increased risk for third-trimester hypertension, a 0.70-mm Hg increase in third-trimester systolic blood pressure, and a 0.54-mm Hg increase in third-trimester diastolic blood pressure.
# Preeclampsia
Preeclampsia, a pregnancy-specific disorder associated with increased maternal and perinatal morbidity and mortality, is defined as a) systolic blood pressure ≥140 mm Hg and/or diastolic blood pressure ≥90 mm Hg beginning after the 20th week of gestation and b) proteinuria ≥300 mg per 24 hours. Preeclampsia is usually associated with edema, hyperuricemia, and a fall in glomerular filtration rate. Blood lead levels have been associated with the risk for preeclampsia, although the evidence is less clear than for gestational hypertension. Dawson et al. (2000) observed significant differences between normotensive (N = 20) and hypertensive or preeclamptic (N = 19) pregnancies with respect to red blood cell lead content. They found maternal blood pressure to be directly proportional to red blood cell lead content; however, the selection criteria and study population in this small group at increased risk are not well-defined, so selection bias and confounding cannot be ruled out.
In the 2004 study by Vigeh et al. noted above, there were no significant differences in blood lead concentrations between hypertensive subjects with proteinuria (N = 30) and those without proteinuria (N = 25). In another study among 396 postpartum women in Tehran, 31 women with preeclampsia had significantly higher blood lead levels (mean 5.09 ± 2.01 µg/dL) compared to 365 normotensive controls (mean 4.82 ± 2.22 µg/dL) and significantly higher umbilical cord blood lead levels (mean 4.30 ± 2.49 µg/dL compared to 3.5 ± 2.09 µg/dL) (Vigeh et al. 2006). A 13-fold increased risk for preeclampsia compared to normotensive controls was observed for every log-unit increase (~3 µg/dL) in blood lead. The 1987 study by Rabinowitz et al. of 3,851 women delivering in Boston found no association between blood lead level and risk for preeclampsia.
# Summary of the Evidence: Effects on Maternal Hypertension
Gestational hypertension and preeclampsia have been associated with adverse maternal and perinatal outcomes. Lead exposure has been associated with increased risk for gestational hypertension, but the magnitude of the effect, the exposure level at which risk begins to increase, and whether risk is most associated with acute or cumulative exposure remain uncertain. It is unclear whether lead-induced increases in blood pressure during pregnancy lead to severe hypertension or preeclampsia. However, even mild gestational hypertension can be expected to lead to increased maternal and fetal monitoring, medical interventions, and additional health care costs. Also, causality is unclear since preexisting hypertension reduces renal function, which in turn could result in the retention of lead.
# IMPACT OF LEAD EXPOSURE ON PREGNANCY OUTCOMES
# Spontaneous Abortion
There is consistent evidence that the risk for spontaneous abortion is increased by maternal exposure to high levels of lead. In her review of studies on the association between elevated blood lead levels and spontaneous abortion, Hertz-Picciotto (2000) includes a detailed summary of studies involving high blood lead levels, which come primarily from the literature on industrial exposures in Europe during the 19th century. Yet few studies have addressed the risk for spontaneous abortion at lower levels of exposure. Of those studies that have addressed this issue, most reports provide limited evidence to support an association between maternal blood lead levels of 0 to 30 µg/dL and increased risk for spontaneous abortion (Laudanski et al. 1991;Lindbohm et al. 1992;McMichael et al. 1986;Murphy et al. 1990;Tabacova and Balabaeva 1993). However, the lack of evidence for an association at these low-to-moderate blood lead levels may be due to methodologic deficiencies in these studies, such as small sample sizes, lack of control for confounding, problems in case ascertainment, and/or limitations in exposure assessment (Hertz-Picciotto 2000).
The strongest evidence to date is a prospective study of pregnant women in Mexico City, which addressed most of the deficiencies of the prior studies and demonstrated a statistically significant dose-response relationship between maternal blood lead levels (average 11.0 µg/dL) and risk for spontaneous abortion. Odds ratios for spontaneous abortion for the blood lead groups 5-9, 10-14, and >15 µg/dL were 2.3, 5.4, and 12.2, respectively, in comparison to the reference group (<5 µg/dL) (p for trend = 0.03), with an estimated increased odds for spontaneous abortion of 1.8 (95% CI = 1.1-3.1) for every 5 µg/dL increase in blood lead. In another study of pregnant women (N = 207) from Mexico City (mean blood lead level 6.2 μg/dL), a 0.1% increment in the maternal plasma-to-blood lead ratio was associated with a 12% greater incidence of reported history of spontaneous abortion (p = 0.02) (Lamadrid-Figueroa et al. 2007). On average, women with no spontaneous abortions had higher blood lead levels than women with one or more reported spontaneous abortions (6.5 vs. 5.8 µg/dL); however, with each additional abortion experienced, women had an 18% greater plasma-to-blood lead ratio on average (p < 0.01). Women with a larger plasma-to-whole blood lead ratio may be at higher risk for miscarriage due to a greater availability of lead in plasma, which more readily crosses the placental barrier.
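To see what the trend estimate implies (an illustrative calculation of ours, not a result from the cited study), note that an odds ratio expressed per fixed increment compounds multiplicatively under the log-linear model typically used for such trends:

$$ \mathrm{OR}(\Delta = 10\ \mu\text{g/dL}) = 1.8^{10/5} \approx 3.2, \qquad \mathrm{OR}(\Delta = 15\ \mu\text{g/dL}) = 1.8^{3} \approx 5.8 $$

The category-specific odds ratios reported above (2.3, 5.4, and 12.2) rise somewhat more steeply than this extrapolation, consistent with the per-increment estimate being an average over the observed exposure range.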
# Preterm Delivery, Low Birth Weight, Length, and Head Circumference
Andrews et al. (1994) reviewed the epidemiologic literature through the early 1990s on prenatal lead exposure in relation to gestational age and birth weight. These studies are somewhat contradictory, most likely due to methodologic differences in study design, sample size, and/or degree of control for confounding. The more recent and well-designed studies suggest that maternal lead exposure during pregnancy is inversely related to fetal growth, as reflected by duration of pregnancy and infant size. Irgens et al. (1998), using a registry-based approach, found that women occupationally exposed to lead were more likely to deliver a low birth weight infant than women not exposed to lead (odds ratio = 1.1, 95% CI = 0.98-1.29). A case-control study in Mexico City found cord blood lead to be higher in preterm infants (mean 9.8 µg/dL) compared to term infants (mean 8.4 µg/dL) (Torres-Sanchez et al. 1999). A birth cohort study, also conducted in Mexico City, found maternal bone lead burden to be inversely related to birth weight and to birth length and head circumference at birth. A study by Rothenberg et al. (1999) among Mexican-Americans found that over the 1-35 μg/dL range of maternal blood lead at 36 weeks of pregnancy, the estimated reduction in 6-month infant head circumference was 1.9 cm (95% CI = 0.9-3.0 cm).
# Congenital Anomalies
Very few studies have examined maternal lead exposure and risk for congenital malformations and, with one exception, none included biologic measures of lead exposure. Needleman et al. (1984) conducted a record review and reported an association between cord blood lead and minor congenital anomalies, but major anomalies did not show a similar association. In a case-control study, Bound et al. (1997) found an association between living in an area with water lead levels greater than 10 µg/L (ppb) and delivering a child with a neural tube defect. Irgens et al. (1998) found, in a registry-based study, that women occupationally exposed to lead were more likely to deliver an infant with a neural tube defect than women not exposed to lead (OR = 2.87, 95% CI = 1.1-6.4). In a case-control study conducted within the Baltimore-Washington Infant Study (Jackson et al. 2004), an association was observed between maternal occupational lead exposure and total anomalous pulmonary venous return, although this relationship was not statistically significant (OR = 1.57, 95% CI = 0.64-3.47).
# Summary of the Evidence: Pregnancy Outcomes
Overall, increased risk for spontaneous abortion appears to be associated with blood lead levels ≥30 µg/dL. Limited evidence suggests that maternal blood lead levels less than 30 µg/dL could also increase the risk for spontaneous abortion, although these findings remain to be confirmed in further research. Maternal lead exposure may increase the risk for preterm delivery and low birth weight, although data are limited and a blood lead level at which the risks begin to increase has not been determined. The available data are inadequate to establish the presence or absence of an association between maternal lead exposure and major congenital anomalies in the fetus.
# IMPACT OF LEAD EXPOSURE ON INFANT GROWTH AND NEURODEVELOPMENT
# Infant Growth
Few studies have investigated the effects of prenatal lead exposure on infant growth. Two studies suggest an association between maternal lead exposure and decreased growth. In one study, maternal bone lead levels were negatively associated with infant weight at one month of age and with postnatal weight gain between birth and 1 month. In another study, postnatal linear growth rate was negatively related to prenatal blood lead level, although only when infants' postnatal lead exposure was elevated (Shukla et al. 1989). Infants born to a mother with a prenatal blood lead concentration greater than 7.7 µg/dL (the median level in the cohort) and whose blood lead increased 10 µg/dL between 3 and 15 months of age were about 2 cm shorter at 15 months of age (p = 0.01). Greene and Ernhart (1991) also reported negative associations between prenatal lead level and birth weight, birth length, and head circumference, although none were statistically significant. Data on the association between prenatal lead exposure and infant growth are limited and thus inconclusive.
# Lead and Neurodevelopment
Neurotoxic effects of lead are observed during episodes of acute lead poisoning in both children and adults. It remains unclear, however, whether prenatal or postnatal lead exposure is more detrimental to neurodevelopment. A number of chemicals, including lead, have been shown, in experimental animal models as well as in humans, to cause morphological changes in the developing nervous system (Costa et al. 2004). Given the incomplete blood-brain barrier in the developing nervous system, children might be more susceptible to insults during the prenatal and early postnatal periods (Bearer 1995;Rodier 1995;Weiss and Landrigan 2000).
Animal research indicates that the central nervous system is the organ system most vulnerable to developmental chemical injury (Rodier 2004), with vulnerabilities that pertain to processes critical to neurodevelopment, such as the establishment of neuron numbers; migration of neurons; establishment of synaptic connections, neurotransmitter activity, and receptor numbers; and deposition of myelin. Neurons begin forming even before the neural tube closes. Most cerebral neurons form during the second trimester of gestation and migrate to their adult location well before birth (Goldstein 1990). Neuronal connections, however, are sparse at birth compared to adulthood. During the first 24 months of life, synaptic density and cerebral metabolic rate increase dramatically and by age 3 years are two-fold greater than those in the adult. The proliferation of synapses (synaptogenesis) is critical for the formation of basic circuitry of the nervous system (Rodier 1995). Synaptic "pruning" during early childhood establishes the final number of synaptic connections.
Lead is known to interfere with synaptogenesis and, perhaps, with pruning (Goldstein 1992). It interferes with stimulated neurotransmitter release at synapses in the cholinergic, dopaminergic, noradrenergic, and GABAergic systems (Cory-Slechta 1997;Guilarte et al. 1994). It substitutes for calcium and zinc as a second messenger in ion-dependent events. These disturbances in neurotransmitter release would thus be expected to disrupt the normal organization of synaptic connections (Bressler and Goldstein 1991).
The brain is protected from large molecular compounds in the blood by the blood-brain barrier, created by tight junctions between endothelial cells in cerebral blood vessels (Goldstein 1990). The development of this barrier function begins in utero and continues through the first year of life (Goldstein 1990). The brain is a target organ for lead, and lead exposure in utero and during the first year of life may disrupt the development of the blood-brain barrier.
These lead-induced biochemical disturbances in the brain are accompanied by impaired performance on a wide variety of tests of learning and memory in a variety of animal models, and no threshold for these impairments has been identified (White et al. 2007).
# Epidemiologic Evidence for Neurodevelopmental Effects of Lead
A large number of studies provide convincing evidence that prenatal lead exposure impairs children's neurodevelopment (Table 2-1). In most of the early prospective studies, many children had prenatal exposures exceeding 10 µg/dL. Several studies reported significant inverse associations with neurobehavior (Dietrich et al. 1987a,b;Ernhart et al. 1987;Shen et al. 1998). One study found that the early developmental delays were largely overcome if postnatal lead exposures were low in the preschool years, but appeared to be more persistent among children whose postnatal blood lead levels were also greater than 10 µg/dL. Other studies found that the effects of prenatal exposure were independent of changes in postnatal blood lead levels. These inverse associations persisted into adolescence and beyond, as maternal blood lead levels during pregnancy predicted teenage attention and visuoconstruction abilities (Ris et al. 2004), teenage self-reported delinquent behaviors, and increased arrest rates between the ages of 19 and 24 (Wright et al. 2008). A relationship between prenatal blood lead levels and the onset of schizophrenia between the late teens and early 20s has also been seen (Opler et al. 2004, 2008). Some studies, however, did not find evidence of prenatal lead effects (e.g., Baghurst et al. 1992;Bellinger et al. 1992;Cooney et al. 1989a, 1989b;Dietrich et al. 1990, 1993;Ernhart et al. 1989;McMichael et al. 1988).
More-recent prospective studies have included children with lower prenatal exposures and continue to detect inverse associations with neurodevelopment. One study found independent adverse effects of both prenatal and postnatal blood lead on IQ among Yugoslavian children age 3-7 years. Prenatal lead exposure was associated with a deficit of 1.8 IQ points for every doubling of prenatal maternal blood lead after controlling for postnatal exposure and other covariates. In a study conducted in Mexico City, investigators found that umbilical cord blood lead and maternal bone lead levels were independently associated with covariate-adjusted scores at 2 years of age on the Mental Development Index of the Bayley Scales of Infant Development, with no evidence of a threshold. Maternal blood lead level early in the second trimester and in the third trimester was a significant predictor of some measures of mental and psychomotor development at age 2 years (Wigg et al. 1988). In another study in Mexico City, maternal plasma lead level in the first trimester was a particularly strong predictor of neurodevelopment at age 2 years. When this cohort was assessed at 24 months, inclusion of umbilical cord blood lead level in the model indicated that it was a significant predictor of psychomotor development even when analyses were restricted to children whose lead levels never exceeded 10 µg/dL. Another analysis found that prenatal lead exposure around 28-36 weeks gestation (third trimester) was a stronger predictor of reduced intellectual development at ages 6-10 years than second trimester (12-20 weeks) exposure, but that study did not measure prenatal exposure in the first trimester of pregnancy. Jedrychowski et al. (2008) found a higher risk of scoring in the high-risk group on the Fagan Test of Infant Intelligence at age 6 months when umbilical cord blood lead was higher. Low-level umbilical cord blood lead levels can also negatively affect responses to acute stress (Gump et al. 2008).
In another study conducted in Mexico City, third trimester increases in maternal blood lead levels were associated with decreased ability of newborns to self-quiet and be consoled during the first 30 days of life (Rothenberg et al. 1989). In addition, greater prenatal and perinatal lead exposure was associated with altered brain stem auditory evoked responses.
# Threshold Levels and Persistence of Effects
No threshold has been found for the adverse effects of lead on neurodevelopment (Centers for Disease Control and Prevention 2004). Recent evidence, in fact, suggests that the dose-effect relationship might be supralinear, with steeper dose responses at levels below 10 µg/dL than above 10 µg/dL (Bellinger and Needleman 2003;Kordas et al. 2006;Lanphear et al. 2000). In the largest study of this issue, investigators pooled data on 1,333 children who participated in seven international population-based longitudinal cohort studies and were followed from birth or infancy until 5-10 years of age. Among children with a maximal blood lead level <7.5 µg/dL, the decline in full-scale IQ for a given increase in blood lead was significantly greater than the decline observed among children with a maximal level ≥7.5 µg/dL. Nonlinear relationships were also detected in the Yugoslavia and Mexico City studies, which suggest that the effects of prenatal exposure may also be more pronounced at blood lead levels less than 10 µg/dL.
Evidence from several of the prospective studies suggests that the adverse effects of early childhood lead exposure on neurodevelopment persist into the second decade of life (Bellinger et al. 1992;Fergusson et al. 1997;Ris et al. 2004) and are unrelated to changes in later blood lead level (Burns et al. 1999;Tong et al. 1998). Administration of the chelating agent succimer to children with blood lead levels of 20-44 µg/dL did not prevent or reverse neurodevelopmental toxicity (Dietrich et al. 2004).
# Summary of the Evidence: Infant Growth and Neurodevelopment
Data on the association between prenatal lead exposure and infant growth are limited and thus inconclusive. The findings of recent cohort studies suggest that prenatal lead exposure at maternal blood lead levels below 10 µg/dL is inversely related to neurobehavioral development, independent of the effects of postnatal exposure. While the lead-associated differences in test score are small when viewed as a potential change in an individual child's score, they acquire substantially greater importance when viewed as a shift in the mean score within a population (Bellinger 2004). The mechanism(s) by which low-level lead exposure, whether incurred prenatally or postnatally, might adversely affect neurobehavioral development remains uncertain, although experimental data support the involvement of many pathways.
Because there is no apparent threshold below which adverse effects of lead do not occur, CDC has not identified an allowable exposure level, level of concern, or any other bright line intended to connote a safe or unsafe level of exposure for either mother or fetus. Instead, CDC is applying public health principles of prevention to intervene when prudent. Specific recommendations are presented throughout the rest of these guidelines.
# CHAPTER 3. BIOKINETICS AND BIOMARKERS OF LEAD IN PREGNANCY AND LACTATION
# Key Points
- No single test is available to establish total body lead burden; biological markers (biomarkers) must be used to estimate maternal lead body burden and to assess lead dose to the fetus or infant during pregnancy or breastfeeding.
- Blood lead is the most well-validated and widely available measure of lead exposure. However, a single blood lead test may not reflect cumulative lead exposure and may not be sufficient to establish the full nature of the developmental risk to the fetus/infant. Repeat testing may be necessary.
- Bone is a potential endogenous source of lead exposure, and studies have demonstrated that some previously acquired maternal bone lead stores are mobilized during pregnancy and lactation. However, bone lead measurement is almost exclusively a research tool.
- Lead readily crosses the placenta by passive diffusion and has been measured in the fetal brain as early as the end of the first trimester, so primary prevention of exposure is particularly important to reduce risk.
- Lead has been detected in the breast milk of women in population-based studies; however, the availability of high-quality data to assess the risk for toxicity to the breastfeeding infant is limited.
- Given the difficulty of accurately and precisely measuring trace amounts of lead in human breast milk, measurement of breast milk lead is not warranted for routine clinical application at this time.
# INTRODUCTION
The purpose of this chapter is to discuss biological markers (biomarkers) that have been proposed to assess lead body burden and to summarize our present understanding of the biokinetics of lead during pregnancy and lactation. There is no single test available to establish total body lead burden, since lead may be present in all body fluids and tissues, including bone. Biomarkers must be used to estimate lead body burden and to assess lead dose to the fetus during pregnancy and to the infant during lactation. Figure 3-1 shows the major lead exposure pathways from mother to infant.
# BIOLOGICAL MARKERS OF LEAD EXPOSURE
Certain biomarkers of lead dose to the fetus during pregnancy have been validated as measures of exposure. These include measurement of lead collected from maternal venous blood during pregnancy and umbilical cord blood at delivery, and measurement of lead in maternal bone using the noninvasive technique of K-x-ray fluorescence. Each of these biomarkers provides an independent level of information regarding fetal lead exposure; together, they are critical to understanding whether lead toxicity varies based on timing of exposure, cumulative versus acute dose, and partitioning of lead between red cells and plasma.
Variability in individual blood lead levels and limitations in the accuracy of measurement techniques, including limits of detection, rounding, analytical methods, and regression to the mean, pose challenges to reliable assessment of blood lead levels, particularly when blood lead levels are low. Laboratory instruments introduce measurement error, as do certain blood lead sampling methods (e.g., capillary samples may be prone to contamination due to lead dust on the skin surface). Venous blood lead tests produce the most reliable results. Capillary samples have a high level of sensitivity but lower specificity and may produce a higher number of false positives.
Other biomarkers have been used or proposed, usually because of the relative ease and noninvasiveness of collection procedures. These include hair, nails, teeth, saliva, urine, feces, meconium, placenta, and sperm. However, the utility of these alternatives as biomarkers for internal dose has not been demonstrated. In addition to the absence of consistent, validated analytic methods and standard reference materials for these biomarkers, they would also have to overcome the challenge of external contamination (Barbosa et al. 2005).
# Whole Blood Lead
Blood lead has been the most commonly used and readily available biomarker of exposure to date, with standard units of measurement in micrograms per deciliter (1 µg/dL = 0.0484 µmol/L). Following removal of the subject from environmental exposure, the decline in blood lead concentration occurs relatively rapidly at first; the initial half-life of lead in blood is about 35 days (Rabinowitz et al. 1976). This initial rapid drop is followed by a slow continuing decline over several months to years. In addition to lead from exogenous sources, blood lead represents the contribution of past environmental exposure being mobilized from endogenous bone stores. It is this reservoir of lead that determines the slow decline in blood lead after the first few weeks following removal from exposure.
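To illustrate the initial kinetics (a simplified first-order approximation of ours; as the text notes, the true kinetics are multicompartmental because of the bone reservoir), the early decline can be written as

$$ C(t) = C_0\, e^{-(\ln 2 / t_{1/2})\, t}, \qquad t_{1/2} \approx 35\ \text{days} $$

so, absent ongoing exposure, a blood lead level of 10 µg/dL (10 × 0.0484 ≈ 0.48 µmol/L) would be expected to fall to roughly 5 µg/dL after about 35 days and 2.5 µg/dL after about 70 days, before the slower bone-driven phase dominates.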
Umbilical cord whole blood lead collected at delivery has been widely used as a measure of fetal exposure (Harville et al. 2005;Satin et al. 1991;Scanlon 1971;Rothenberg et al. 1996). Lead readily crosses the placenta by passive diffusion (Goyer 1990;Silbergeld 1986), and fetal blood lead concentration is highly correlated with maternal blood lead concentration (Goyer 1990).
However, a single blood lead test may not reflect cumulative lead exposure and may not be sufficient to establish the full nature of the developmental risk to the fetus/infant. Physiologic changes, such as decreasing hematocrit, saturation of red cell lead-binding capacity, and increased bone resorption or intestinal absorption of lead, may influence the interpretation of blood lead levels during pregnancy. In addition, it is well known from the experimental literature that the vulnerability of developing organ systems, including the brain, to environmental toxicants can vary widely over the course of pregnancy (Mendola 2002). Thus, it is plausible that lead exposure may be particularly neurotoxic during a specific trimester.
# Plasma Lead
The overwhelming majority of lead in blood is bound to erythrocytes (DeSilva 1981), but plasma is the blood compartment from which lead is available to cross cell membranes (Cavalleri et al. 1978). An understanding of how plasma lead concentration is related to whole blood lead concentration is therefore important. Plasma lead concentrations in the range of 0.1%-5.0% of whole blood lead concentration have been reported (DeSilva 1981;Ong et al. 1986). Although whole blood lead levels are highly correlated with plasma lead levels, lead levels in bone and other tissues (particularly trabecular bone) exert an additional independent influence on plasma lead levels. Recent data suggest that the plasma-to-whole blood lead ratio may vary quite widely among and within individuals (Hu 1998;Lamadrid-Figueroa et al. 2006), raising questions about the use of maternal whole blood lead as a proxy for plasma lead and fetal exposure (Goyer 1990).
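A rough calculation (ours, for illustration) shows why plasma lead is analytically demanding. At the reported plasma fractions of 0.1%-5.0%, a whole blood lead level of 10 µg/dL corresponds to a plasma lead concentration of only

$$ 10\ \mu\text{g/dL} \times (0.001\ \text{to}\ 0.05) = 0.01\ \text{to}\ 0.5\ \mu\text{g/dL}, $$

a range in which even trace contamination or instrument noise can dominate the signal, consistent with the measurement challenges discussed below.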
However, the measurement of maternal plasma lead is not likely to become a clinically useful tool. The methods required to measure plasma lead accurately are laborious and require specialized equipment and ultraclean techniques (Smith et al. 1998). Moreover, recent data suggest that the gain in using measurements of plasma lead during pregnancy to predict fetal/infant outcomes is only modest. Consequently, this biomarker may be a useful research tool in efforts to understand and detect the health impacts of environmental lead exposure, but cannot be recommended at this time as a clinical tool.
# Bone Lead
Bone is a dynamic reservoir for lead, in constant exchange with blood and soft tissue elements (Rabinowitz 1991;Tsaih et al. 1999). Lead is incorporated into the hydroxyapatite crystalline structure of bone, much like calcium, and may also transfer into bone matrix exclusive of incorporation into hydroxyapatite (Marcus 1985).
Because over 90% of lead in the adult human body is stored in bone, there is the possibility of redistribution of cumulative lead stores from bone into blood during periods of heightened bone turnover, such as pregnancy and lactation. Lead in bone has a half-life of years to decades and therefore reflects cumulative lead exposure (Hu et al. 1998). Measurement of lead in bone using a noninvasive, in vivo X-ray fluorescence (XRF) technique makes epidemiologic evaluation of the impact of retained body burden of lead possible (Hu 1998).
The amount of lead in bone depends on the individual's lead exposure history. Smith et al. (1996) determined that bone contributed 40%-70% of the lead in blood of environmentally exposed subjects who were undergoing total hip or knee joint replacement, indicating that the skeleton can be an important endogenous source of lead exposure. By examining the lead isotopic ratio in a small number of pregnant women who were recent immigrants to Australia (and pregnant Australian controls), Gulson and his colleagues (1997) were able to show that the skeletal contribution to maternal blood lead increased during pregnancy and lactation. Lead in maternal diet and bone lead were the main contributors to circulating maternal blood lead levels (Gulson 1998a). The relative contribution of bone lead to blood lead will vary depending on the exposure history of the individual.
The measurement of bone lead requires special equipment and trained operators and is used mainly in research settings. Therefore, it is unlikely that this method will have widespread clinical application. However, this biomarker is a useful tool in research efforts to understand and detect the health impacts of cumulative lead exposure.
# Breast Milk Lead
Detectable levels of lead in breast milk have been documented in population studies of community-dwelling women with no known source of occupational or elevated environmental lead exposure (Anderson and Wolff 2000). Given the correlation of breast milk lead levels with maternal and infant blood lead levels, milk lead can be used as an indicator of both maternal and neonatal exposures (Hallén et al. 1995). In studies of lead in human breast milk, concentrations have been observed ranging over three orders of magnitude, from <1 to greater than 100 µg/L (ppb) (Chatranon et al. 1978;Gulson et al. 1997;Larsson et al. 1981;Murthy and Rhea 1971). These differences are partially attributable to true differences in population exposures across time and geographic location (Solomon and Weiss 2002). However, it is also likely that a variety of methodological factors affect the analytic variability and validity of the reported results. Breast milk lead levels from published studies with extremely high values should be reviewed with caution due to the high potential for environmental contamination during sample collection, storage, and analysis. Documented sources of breast milk contamination include the use of lead acetate ointment (Knowles 1974), lead in nipple shields (Knowles 1974;Newman 1997), foil from alcohol wipes used in sample collection, and latex laboratory gloves (Friel et al. 1996). Pretreatment of biological materials is also subject to unintentional addition of contaminants from chemical reagents, digestion devices, and atmospheric particles (Coni et al. 1990;Stacchini et al. 1989).
Inaccuracies of the laboratory analytic methods, particularly poor analytic sensitivity at low concentrations, also affect measurement of trace lead in human milk. Measurement of lead in breast milk is complicated by the fat content of human milk, which changes during feeding and over the course of lactation (Sim and McNeil 1992). Any partitioning of lead into the fat layer of milk must be accounted for in the analysis, which leads to the problem of either further contamination or loss during the intensive dry ashing procedure frequently used to prepare milk samples for analysis. Precise and accurate analysis is challenging due to the difficulty of identifying a method that will digest samples with 100% efficiency. Gulson et al. (1998b) reviewed and compared the results of a number of studies of the relationship of breast milk lead to maternal blood lead published over the past 15 years and concluded that the line of best fit through the data "that are considered to represent the realistic relationships between lead in maternal blood and breast milk" defines a slope of less than 3%. The implication is that those studies yielding ratios greater than 3% suffered from significant contamination.
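As a worked example of what a <3% milk-to-blood slope implies (our illustration, not a clinical cutoff): for a maternal blood lead level of 10 µg/dL, the expected breast milk lead concentration would be at most about

$$ 0.03 \times 10\ \mu\text{g/dL} = 0.3\ \mu\text{g/dL} = 3\ \mu\text{g/L}\ \text{(ppb)}, $$

which is one reason that reported milk lead values approaching 100 µg/L in women without extreme exposure are more plausibly attributed to sample contamination than to transfer from maternal blood.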
Given the difficulty of accurately and precisely measuring trace lead in human breast milk, routine measurement of breast milk lead is not warranted for clinical application. It will only be practical in research settings or in certain extenuating circumstances, assuming that a qualified laboratory can be identified.
# BIOKINETICS OF LEAD DURING PREGNANCY
# Changes in Maternal Blood Lead Levels During Pregnancy
There are several case reports of elevated blood lead measurements in pregnancy (Mayer-Popken et al. 1986;Ryu et al. 1978). Most cross-sectional studies investigating blood lead levels during pregnancy have shown a tendency for blood lead levels to decrease at least through the first half of pregnancy (Alexander and Delves 1981;Bonithon-Kopp et al. 1986;Gershanik et al. 1974). One longitudinal study found no difference in BLLs between different stages of pregnancy (weeks 14-20, weeks 30-36, and delivery). However, another study found BLLs were associated with gestational week of measurement, with levels declining after week 12. A study attempting to model kinetics over the course of pregnancy showed a significant drop in blood lead levels from weeks 12 to 20; however, from 20 weeks to delivery, an analysis for linear trend confirmed a significant increase in blood lead levels in the later part of pregnancy. Schell et al. (2000) also reported changes in hematocrit-corrected blood lead levels over the course of pregnancy: blood lead levels declined between the first and second trimesters and increased over the remaining course of pregnancy through delivery. Another study followed 195 women over the course of pregnancy and also found a U-shaped pattern of maternal blood lead concentration across pregnancy. The late pregnancy increases were steeper among women with low dietary calcium intake in both the younger and older age groups.
Most recently, Lamadrid-Figueroa and colleagues (2006) found increased plasma lead levels for a given whole-blood lead value as pregnancy progressed for whole-blood lead levels greater than approximately 11.0 µg/dL, but not for those less than 10.0 µg/dL.
# Transfer of Lead to the Fetus
That lead reaches human fetal tissues has been known for many years (Barltrop 1969;Kehoe et al. 1933;Thompsett and Anderson 1935). Barltrop (1969) collected serial fetal blood lead measurements from each trimester throughout pregnancy and found no recognizable pattern but was able to show that maternal blood lead concentration was highly correlated with umbilical cord lead, suggesting transplacental movement of lead to the fetus. In fact, lead readily crosses the placenta by passive diffusion (Goyer 1990;Silbergeld 1986) and lead has been measured in the fetal brain as early as the end of the first trimester (13 weeks) (Goyer 1990).
# Bone Lead as an Endogenous Source of Exposure
Two early studies implicated bone lead as an endogenous source of exposure during pregnancy. Thompson et al. (1985) documented a case of increased maternal and infant blood lead in a woman with a history of child hood lead poisoning, but no exposure during pregnancy or for 30 years prior. Manton (1985) reported a rise in his wife's blood lead levels over the course of her pregnancy along with changes in the specific lead-isotopic ratios, indicating that contributions to her blood lead during pregnancy did not correspond to an external source.
Recent studies have documented that bone lead stores are mobilized during pregnancy and lactation (Gulson et al. 1997). By examining the lead isotopic ratio in a small number of pregnant women who were recent immigrants to Australia (and pregnant Australian controls), Gulson and colleagues (1997) were able to show that the skeletal contribution to blood lead increased over pregnancy. Another study followed over 300 Hispanic-American women with serial blood lead levels over the course of pregnancy and found that whole blood lead concentrations were significantly influenced by bone lead. Markowitz and Shen (2001) reported a case of declining bone lead concentration in conjunction with an increase in blood lead levels over the course of pregnancy and the early postpartum period. Another case report suggested that bone lead stores, when high, can lead to an increase in BLL.
Animal studies support the human data. Using stable lead isotopes in monkeys, researchers found that a 29%-56% decrease in bone lead mobilization in the first trimester was followed by an increase in the second and third trimesters (Franklin et al. 1997). The increases were up to 44% over baseline levels. Further analysis of maternal bone and fetal bone and tissues revealed that from 7%-39% of lead in the fetal skeleton originated from maternal bone.
# BIOKINETICS OF LEAD DURING LACTATION
Maternal bone turnover increases during lactation, which has raised the concern that maternal blood lead concentrations might increase significantly during lactation. It has been estimated that 5% or more of bone mass is mobilized during lactation (Hayslip et al. 1989;Sowers 1996); therefore, the possibility exists for redistribution of cumulative lead stores from bone into plasma, thus returning lead to the maternal circulation. Gulson et al. (1998a) found that mobilization of lead from bone continued after pregnancy into the postpartum period for up to 6 months during lactation and occurred at levels higher than during pregnancy. They concluded that the major sources of lead in breast milk were maternal bone and diet. One study observed sustained elevations of 1 to 4 µg/dL in maternal blood lead concentration during the first 6 to 8 months of lactation, after the expected normal postpartum reduction in plasma volume, in 6 nursing mothers with prepregnancy blood lead concentrations of less than 2 µg/dL. These elevations were followed by gradual declines over the next year in the two women who continued to breastfeed to 18 months postpartum. Isotope ratio analysis suggested that the additional lead originated from maternal bone. Another study found no relationship between decreasing vertebral or femoral neck bone densities and the changes in maternal blood lead concentration at intervals over 6 months of lactation in 58 mainly poor Hispanic mothers with low mean blood lead concentrations of 2.35 µg/dL at enrollment in the study (32 to 38 weeks of gestation). However, at higher blood lead concentrations, Téllez-Rojo et al. (2002) observed an incremental increase of 1.4 µg/dL in blood lead concentration in women who were breastfeeding exclusively relative to women who had stopped lactation. These women had blood lead concentrations up to 23.4 µg/dL at delivery and were followed through 7 months postpartum. Bonithon-Kopp et al. (1986) found that women over 30 had significantly higher levels of breast milk lead than women between 20 and 30 years of age. Since bone accumulates lead with age, it is possible that the higher breast milk lead levels in the older women were associated with higher bone lead levels. Maternal bone lead levels have since been shown to be positively associated with breast milk lead concentrations.
# PREDICTORS OF UMBILICAL CORD BLOOD LEAD LEVELS
Umbilical cord blood lead has been widely used as a measure of fetal exposure (Rabinowitz et al. 1984;Rothenberg et al. 1996;Scanlon 1971). Numerous studies suggest that maternal blood lead and umbilical cord lead levels, measured concurrently at delivery, are highly correlated, approaching a linear relationship (Baghurst et al. 1991;Harville et al. 2005;Rothenberg et al. 1996). Most data indicate that umbilical cord lead is approximately 0.85 of maternal blood lead at parturition (Carbonne et al. 1998;Goyer 1990). Thus, fetal-infant lead level, as measured in umbilical cord blood, is often lower than the maternal blood lead at delivery. However, some studies have shown umbilical cord lead to be higher than maternal blood lead levels at delivery and investigated the determinants of such differences (Harville et al. 2005;Rothenberg et al. 1996). Rothenberg et al. (1996) studied Mexican women of low-to-middle socioeconomic status from 12 weeks of pregnancy to delivery to determine factors that explain the relationship between cord and maternal blood lead. They found from 245 paired maternal-cord blood lead samples that mothers with occasional alcohol use during pregnancy, high milk intake, and more spontaneous abortions delivered babies with lower cord blood lead, and that older maternal age, use of lead-glazed pottery, and use of canned foods were associated with increased cord blood lead. Cord blood lead levels were higher than maternal blood lead levels at delivery in 33% of the cases, predominantly influenced by older maternal age and lower milk consumption. The authors suggested that the measurable influence of maternal blood lead on delivery cord blood lead is limited to the four to eight weeks prior to delivery. Also, many factors suspected of influencing bone lead also influenced cord blood lead, some of them independently of their effect on maternal delivery blood lead.
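As a simple illustration of the reported ratio (approximate, since individual ratios vary, as the studies above make clear):

$$ \text{Pb}_{\text{cord}} \approx 0.85 \times \text{Pb}_{\text{maternal}}, \qquad \text{e.g.,}\ 0.85 \times 10\ \mu\text{g/dL} \approx 8.5\ \mu\text{g/dL} $$

so a mother with a blood lead level of 10 µg/dL at delivery would typically be expected to have a cord blood lead level near 8.5 µg/dL.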
Harville et al. (2005) studied factors influencing the difference between maternal and cord blood lead levels to determine why some infants receive higher exposures relative to their mother's body burden than do others. They found that higher maternal blood pressure and alcohol consumption were associated with higher cord lead relative to the lead of the mother. Higher maternal hemoglobin and presence of the sickle cell trait were associated with lower cord blood lead in comparison to mother's blood lead, suggesting that iron status may be an important factor in the maternal-fetal transfer of lead across the placenta. Another study modeled the interrelations of lead levels in bone, venous blood, and umbilical cord blood with exogenous lead exposure through maternal plasma lead in peripartum women. An interquartile range increase in either patella (trabecular) or tibia (cortical) bone lead was associated with an increase in cord blood lead of about 1 µg/dL. An increase of 0.1 µg/m³ in air lead was associated with an increase in the mean level of fetal cord blood lead of 0.67 µg/dL. With 1 additional day of lead-glazed ceramic use per week in the peripartum period, the mean cord blood lead level increased by 0.27 µg/dL. The models suggested that the contributions from endogenous (bone) and exogenous (environmental) sources were relatively equal, and that maternal plasma lead varies independently from maternal whole blood lead.
# CHAPTER 4. RISK FACTORS AND SOURCES OF LEAD EXPOSURE
# Key Points
- Risk factors for lead exposure in pregnant women differ from those described for young children.
- Common risk factors for pregnant women include recent immigration status, practicing pica, occupational exposure, use of alternative remedies or cosmetics, use of traditional lead-glazed pottery, and nutritional status.
- Pica during pregnancy appears to occur more frequently in sections of the South and in immigrant communities where this behavior is a culturally acceptable practice.
- Lead-based paint is less likely to be an important exposure source for pregnant women than it is for children, except during renovation or remodeling of homes built before 1978.
- Sources of lead exposure in the United States vary by population subgroup and geography; therefore, public health agencies should be consulted for community-specific risk data.
- Fetal exposure to lead through maternal bone lead mobilization is possible for women with significant prior lead exposure; however, most women with blood lead levels typical in the United States are unlikely to contribute substantial burdens to their infants.
# INTRODUCTION
This chapter discusses the distribution of blood lead levels in women of childbearing age, risk factors relevant to this population, and sources of lead exposure. Information on the distribution of blood lead levels in pregnant women in the United States is derived from cross-sectional surveys, case reports, and epidemiological studies. From the direct, albeit limited, information on the distribution of blood lead levels in pregnant women, along with the available complementary information on blood lead levels in women of childbearing age and in occupational settings, it is evident that the risk factors for lead exposure in pregnant women differ from those described in young children. Health care providers and public health departments need to understand the risk factors specific to pregnant women in order to identify sources of lead in pregnant women, provide patient education and counseling, and intervene to prevent or reduce exposures.
For pregnant women, recent immigration and practicing pica are major risk factors for blood lead levels ≥5 µg/dL. Occupational lead exposure and nutritional status are also important risk factors warranting assessment. Certain culturally specific practices, such as the use of alternative remedies or imported cosmetics and the use of traditional lead-glazed pottery for cooking and storing food, are important risk factors for lead exposure in pregnant women (Centers for Disease Control and Prevention 2004). Some population groups, such as immigrants, are more likely to be at risk for exposure from these sources. One case series identified seven severely lead-poisoned women who were exposed to sources of lead including ingestion of soil, pottery, or paint chips; household renovations; and use of herbal remedies. Lead-based paint is less likely to be an important exposure source for pregnant women than it is for children, except during renovation or remodeling in homes built before 1978.
Additionally, recent evidence has shown that bone resorption increases during pregnancy in all women (see Chapter 3). Although not an issue for most women with blood lead levels typical in the United States, fetal exposure to lead through maternal bone lead mobilization may be a concern for women with significant lead exposure earlier in life, either in the United States or in their countries of origin.
# EPIDEMIOLOGY OF BLOOD LEAD LEVELS IN U.S. WOMEN
# Distribution of Blood Lead Levels Among U.S. Women of Childbearing Age
One analysis studied determinants of blood lead in U.S. women of childbearing age using data from NHANES III (1988-1994). The geometric mean blood lead level among women aged 20-49 years (N = 4,393) was 1.78 µg/dL (range 0.7-31.1). Approximately 30%, 6%, and <1% of the women had blood lead levels ≥2.5 µg/dL, ≥5 µg/dL, and ≥10 µg/dL, respectively. A number of factors were associated with higher blood lead levels, including higher maternal age, Black or Hispanic race/ethnicity, living in the Northeast region or in urban areas, lower educational level, poverty, lower hematocrit, alcohol use, cigarette smoking, and higher serum protoporphyrin level. Number of live births, breastfeeding history, year house was built, and type of drinking water were not significantly associated with differences in blood lead. Subjects in the first phase of the survey (1988-1991) had higher blood lead levels than those in the second phase (1991-1994), consistent with the continuing decline in lead exposure in the U.S. population.
# RISK FACTORS FOR LEAD EXPOSURE IN U.S. WOMEN OF CHILDBEARING AGE
Recent immigration to the United States and pica behavior are risk factors that have been shown to be associated with lead exposure above background levels in pregnant women, although they are actually behaviors that serve as proxies for other sources of lead. Women with a friend or relative identified with lead exposure above background levels are also more likely to have increased blood lead levels (Handley et al. 2007). In addition, the unique physiology of pregnancy and lactation has been shown to result in increased bone turnover and, thus, higher maternal BLLs. Nutrition may play a role in the extent to which lead is absorbed and the extent of bone turnover. An understanding of these factors is useful in assessing the sources of lead exposure in pregnant women and in developing interventions to prevent and/or interrupt lead exposure. Figure 4-1 presents common risk factors for lead exposure in pregnant women in the United States.
# Recent Immigration to the United States
A number of studies have identified immigrant status as a primary risk factor for lead poisoning in women and young children in the United States (Tehranifar et al. 2008). Immigrant status is a risk factor for blood lead levels much higher than concurrent blood lead levels in U.S. women of childbearing age in at least three ways. First, women from countries where relatively high lead exposure is endemic may carry high cumulative body burdens of lead. (Appendices III and V provide information about lead sources and culturally specific products associated with specific countries or regions.) Brown et al. (2000) investigated determinants of bone and blood lead concentrations in women in Mexico City during the early postpartum period and found that maternal age and time spent living in Mexico City, an area with high ambient lead contamination, were strong predictors of bone lead levels. Second, immigrants may transport lead-containing products, cultural practices, and behaviors with them from their countries of origin. Third, some recent immigrants may live in poor conditions that increase their risk for exposure to lead-based paint and other lead hazards from renovation and repair. In addition, since immigrant women may face cultural, linguistic, economic, and legal barriers to early prenatal care, these risk factors may be compounded by delays in identification and management of lead poisoning.
Data on 75 pregnant women identified with blood lead levels ≥15 µg/dL were provided in the Annual Report 2006 for Preventing Lead Poisoning in New York City. Of these 75 women, 99% were foreign born (68% were from Mexico) and 73% reported using imported products during pregnancy, including foods, spices, herbal medicines, pottery, and cosmetics. None of the women were exposed to lead at work. An earlier report described 33 pregnant women in New York City with blood lead levels of 20 µg/dL or higher identified from 1996-1999 by the New York City Department of Health and Mental Hygiene blood lead surveillance program. Ninety percent of these individuals were foreign born, the majority being from Mexico (57%), with a median time in the United States of 6 years (range 1 month to 20 years). Two-thirds of the women had levels between 20 and 29 µg/dL, and possible sources of exposure were identified in 97% of these cases. Overall, 13 (39%) reported pica behavior; 7 (21%) reported using imported pottery for cooking; and 8 (24%) reported consuming imported spices, tea, and/or food. Other sources identified included vitamins and supplements, lead-based paint hazards, and previous history of exposure to lead. A retrospective record review examined pregnant women seeking prenatal care from January 2003 to June 2005 at an inner-city women's health center serving a largely immigrant population at Elmhurst Hospital, Queens, New York City. Of the 4,814 women seeking care, 91% were foreign born and 9% were U.S. born. These data from an inner-city medical clinic suggest that prenatal lead exposure disproportionately occurs during the pregnancies of immigrant women from certain countries and occurs at a prevalence high enough to warrant universal blood lead testing (see Case Study 4-1). Handley et al. (2007) studied 214 women in 2002-2003 who were enrolled in health department clinics in Monterey, California, for their prenatal care. The study population was 95% Latina, and 87% were born in Mexico. Sixty-six of the women were born in Oaxaca, Mexico. The prevalence of blood lead levels ≥10 µg/dL in the study population was 12%, much higher than concurrent levels in the U.S. population in general. Women with blood lead levels ≥10 µg/dL were more likely to be born in Oaxaca (96%), more likely to eat foods imported from Mexico (84%), and more likely to report having a friend or relative with "lead in their blood" (28%). This study identified home-prepared grasshoppers (chapulines) sent from Oaxaca as a source of lead exposure.
Case Study 4-1. Prenatal Lead Exposure in New York City Immigrant Communities: The Elmhurst Queens Experience
Of the 124,345 babies born in New York City in 2003, 52% were to mothers who were born outside of the United States (New York Vital Statistics). Since many of the sources of lead for pregnant women are related to cultural practices, past exposures, and certain occupations, the prevalence of elevated blood lead levels among pregnant women in New York City is likely to be higher than U.S. averages. A retrospective record review of pregnant women seeking prenatal care at an inner-city women's health center was conducted in order to describe the epidemiology of blood lead levels among pregnant women in an inner-city, primarily immigrant population.
# Pica
Although formal definitions of pica vary, the behavior common to all of them is a pattern of deliberate ingestion of nonfood items. Some definitions focus solely on the eating behavior; for example, Medline defines pica as "a pattern of eating non-food materials (such as dirt or paper)" (Medline Plus 2009). Western medicine has viewed pica as aberrant and unhealthy behavior, an eating disorder, or a psychiatric diagnosis. For example, the American Psychiatric Association defines pica as the "compulsive eating or appetite for nonnutritive substances, either non-food items (e.g., clay, soil) or some food ingredients (e.g., starch, ice), which persists for more than one month" (American Psychiatric Association 1994). However, respected non-Western community institutions have historically accepted pica as a way to improve health. Pica has been practiced by people worldwide for medicinal, religious, and cultural reasons since antiquity (Abrahams and Parsons 1996; Hunter and de Kleine 1984). The Greeks and Romans used clay to treat various medical conditions. Certain Catholic sects in Central America have sold clay tablets inscribed with Christian scenes for centuries. These clay tablets, known as tierra santa, are blessed before sale, believed to have health-giving properties, and available throughout Mexico and Central America. Clay tablets are also produced and sold throughout many parts of Africa and are eaten, generally not for religious or health-related purposes but for their taste and texture.
While pica appears to be relatively rare in the United States, it is a common practice in many parts of the world, particularly in Africa, Asia, and Central America. Prevalence rates have been reported to be as high as 50% to 74% in parts of Africa (Sule and Madugu 2001) and 23% to 44% in Latin America (Lopez et al. 2004). In the United States, pica appears to occur more frequently in sections of the South and in immigrant communities where this behavior is a culturally acceptable practice. Prevalence studies of pica in U.S. subpopulations have found rates of 34% in Mexican-born women living in California (Simpson et al. 2000) and 14% to 38% in low-income rural African-American women.
In some studies, women felt that not giving in to pica cravings could harm their fetus and lead to miscarriage, illness, or an unhappy baby (Simpson et al. 2000). Since pica is viewed negatively by the Western medical community (American Psychiatric Association 1994), individuals who engage in pica may be reluctant to disclose that they consume nonfood items if asked directly about the practice. A review of 13 studies published between 1950 and 1987 (Horner et al. 1991) estimated that the risk for pica increases if pica is practiced by other family members and increases six-fold if the woman had a prepregnancy history of pica. Other factors that may influence whether women are comfortable disclosing pica include being able to converse in their native language, being able to discuss the practice in private, and being questioned about the practice in an accepting manner by someone from their own community (Simpson et al. 2000).
Materials ingested as pica can be benign or potentially harmful and include ice, paper, dirt, clay, starch, ashes, and small stones, as well as substances contaminated with lead or other toxic substances. Pica behavior has been associated with anemia and other nutritional deficiencies in cross-sectional studies, although pica has not been confirmed to be caused by nutritional deficiencies; pica has rarely been associated with more serious side effects, such as gastrointestinal blockages. Cases of lead poisoning have also been reported when the substances consumed are contaminated with lead; most commonly, these substances have been lead-contaminated soil and pottery.
A case study of one Hispanic pregnant woman in California found blood lead levels of 119.4 µg/dL in the woman and 113.6 µg/dL in the cord blood at delivery. The woman practiced a form of pica in which she broke a lead-glazed clay pot from Mexico into small pieces and ate several pieces daily. The researchers found that this practice was apparently not uncommon among Mexican women. Another report reviewed seven cases of severely lead-poisoned women (BLLs ≥45 µg/dL) over a 3-year period and identified an additional eight cases from the medical literature. Severe lead poisoning in these mainly Hispanic women occurred most often because of ingestion of lead-contaminated clay, soil, and pottery. Presenting features were mostly subtle, consisting of only malaise and anemia.
# Mobilization of Endogenous Bone Lead in Pregnancy
Although the majority of U.S. women of childbearing age are unlikely to have bone lead stores large enough to result in large elevations in maternal blood lead concentrations, at least one recent case report suggests that high-level bone sources can lead to an increase in BLL. There is also evidence that with closely spaced multiple pregnancies, maternal blood lead levels in subsequent pregnancies are lower, and the increases in maternal blood lead occurring during late pregnancy and lactation are lower relative to those in the first pregnancy. This finding is consonant with observations from the lead industry in the nineteenth and early twentieth centuries (Legge 1901; Legge and Goadby 1912; Paul 1860), which held that if a lead-poisoned woman had a child, her symptoms would be assuaged. This limited evidence suggests that the greatest concern about lead exposure may be during the first pregnancy, although the observation is probably meaningful only at very high lead levels.
# Dietary and Lifestyle Factors
Nutritional status may make women more susceptible to lead exposure. Adequate dietary intake of certain key nutrients (calcium; iron; zinc; vitamins C, D, and E) is known to decrease lead absorption (Mahaffey 1990).
Iron deficiency anemia is associated with elevated blood lead levels, may increase lead absorption, and also has an independent negative impact on fetal development. Calcium deficiency may increase bone turnover, since maternal bone is a major source of calcium for the developing fetus and nursing infant.
Chapter 7 provides a fuller discussion of nutritional issues. Both alcohol use and cigarette smoking have also been associated with higher lead levels and should be avoided during pregnancy and lactation.
# SOURCES OF LEAD EXPOSURE
While various sources of lead exposure in pregnant women in the United States have been identified, these sources vary according to population subgroup and geography. Thus, assessment and reduction of sources must be specific to the community. The sources discussed below are those that have been identified in previous research and should be used as a guide for clinical and public health interventions. Figure 4-2 summarizes general advice for pregnant women to avoid lead exposure, although additional advice may be warranted due to specific local risk factors.
# Occupational Sources
Lead is used in more than 100 industries. Occupations in which workers may be directly exposed to lead at significant levels include construction; smelting; auto repair; work on firing ranges; painting; manufacturing of ceramics, electrical components, batteries, wire and cable, plastics, pottery, and stained glass; battery and scrap metal recycling; mining; and all types of ferrous and nonferrous metals production. In addition, women and children may be exposed to lead through the inadvertent carriage of lead dust from the workplace on workers' clothing, shoes, or bodies, also known as take-home exposure. Lead dust carried from work settles on surfaces in the vehicle and home, where it can be ingested or inhaled by young children with normal mouthing behavior and by household members handling workers' clothing. Automobile battery manufacturing and lead battery recovery industries pose the highest risks, although workers in the construction trades can also have significant exposures to lead (Centers for Disease Control and Prevention 2007). Laborers and painters have been found to have higher BLLs than other construction trade groups such as plumbers and electricians (Reynold et al. 1999). Construction work associated with higher BLLs includes bridge renovation; residential remodeling; and activities such as welding, cutting, and rivet busting (Reynold et al. 1999). Currently, under the OSHA standards, a worker must be included in a lead medical surveillance program if he/she is exposed to airborne lead levels of 30 µg/m3 or higher (8-hour time-weighted average) for more than 30 days per year. Partly to diminish the risk that a lead worker takes lead home on his clothing or body (take-home exposure), the lead standards contain provisions requiring access to showers, work clothes, and changing rooms at the workplace. Some workers potentially exposed to high levels of lead may not receive adequate medical surveillance because their work does not result in air lead levels that trigger the required surveillance. Additionally, certain occupations are exempt from the workplace protections established by OSHA. Exempted workers include some public employees and the self-employed. Self-employed workers might include those in cottage industries such as battery reclamation, automobile/radiator repair, pottery and ceramics, and stained glass. These job categories may not be monitored for lead exposures. In some cases, the home itself may function as a cottage industry workplace, increasing the potential for lead exposure to all family members. Undocumented immigrant workers are a particularly vulnerable group in that their access to lead exposure monitoring and protective measures may be limited.
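To make the OSHA surveillance trigger concrete, the minimal sketch below checks whether a worker's air monitoring results meet the 30 µg/m3 (8-hour time-weighted average) for more than 30 days per year criterion; the function name and data layout are hypothetical illustrations, not part of any OSHA tool.

```python
# Minimal sketch (hypothetical helper, not OSHA software): decide whether a
# worker's air-lead exposure profile triggers OSHA medical surveillance.
OSHA_ACTION_TWA = 30.0   # airborne lead, ug/m3, 8-hour time-weighted average
OSHA_ACTION_DAYS = 30    # days per year above the TWA threshold

def needs_medical_surveillance(daily_twa_ug_m3: list[float]) -> bool:
    """daily_twa_ug_m3: one 8-hour TWA measurement per monitored work day."""
    days_over = sum(1 for twa in daily_twa_ug_m3 if twa >= OSHA_ACTION_TWA)
    return days_over > OSHA_ACTION_DAYS

# Example: 45 work days measured at 35 ug/m3 exceeds the 30-day criterion.
print(needs_medical_surveillance([35.0] * 45))  # True
```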
Occupational exposure to lead also remains a problem in developing countries, where industries are less likely to be regulated and little environmental monitoring is done. Studies have documented the impact of cottage industries on lead exposure in international settings, including backyard battery repair and recycling of batteries (Matte et al. 1989) and radiators (Dykeman et al. 2002), and the production of low-temperature fired lead-glazed ceramics (Fernandez et al. 1997; Hibbert et al. 1999) and tiles (Vahter et al. 1997).
# Lead-glazed Ceramic Pottery
Of all the culturally specific practices and products that may put pregnant women at risk for lead exposure, the use of traditional lead-glazed ceramic pottery for cooking and storing food is perhaps the best documented in the literature (Hernandez-Avila et al. 1991; Romieu et al. 1994). Lead-glazed ceramics production is a mostly home-based or cottage industry in Mexico, where lead monoxide (greta; 93% lead by weight) is used to make a glaze that is often set in low-temperature (<1,000 degrees), wood-fired kilns. Pottery produced in this manner can leach large amounts of lead into food and beverages being cooked, served, or stored. This traditional pottery is used throughout Mexico across all levels of socioeconomic status. Acute high-dose exposures from food and beverages contaminated by traditional Mexican pottery have been reported (Matte et al. 1994), and long-term use of lead-glazed ceramics may result in chronic low-to-moderate lead poisoning and elevated body burden of lead (Hernandez-Avila et al. 1991). Cases of lead poisoning have been reported after the consumption of crushed lead-glazed pottery, mainly among Hispanic women.
# Herbal and Alternative Remedies
Lead has been found in some alternative medicines and therapeutic herbs traditionally used by East Indian, Indian, Middle Eastern, West Asian, and Hispanic cultures (Garvey et al. 2001). These alternative medicines can contain herbs, minerals, metals, or animal products. Lead and other heavy metals are put into certain folk medicines intentionally because these metals are thought to be useful in treating some ailments. They have also been reported to be added to increase the weight of products sold by weight. Sometimes lead unintentionally gets into the folk medicine during grinding, coloring, or other methods of preparation. Lead has been found in powders and tablets given for arthritis, infertility, upset stomach, menstrual cramps, colic, and other illnesses. Case Study 4-2 describes lead poisoning associated with ayurvedic medicines.
Most of the published literature relating herbal therapies and alternative medicines to elevated BLLs consists of case reports of children or adults (Ernst 2002; Lynch and Braithwaite 2005), not specifically of women of childbearing age or pregnant or lactating women. Many of the alternative therapies used were self-administered rather than recommended by a traditional healer or health care provider. In a case study of a 45-year-old Korean man who drank Chinese herbal tea for medicinal purposes, Markowitz et al. (1994) found a blood lead level of 76 µg/dL. The lead exposure was traced to hai ge fen (clamshell powder), one of 36 ingredients in the tea, which had become contaminated with lead. In another report, Cheng et al. (1998) described six of eight children found to be taking herbal medicines who had BLLs >10 µg/dL. Use of greta was described in a 2-year-old boy identified with a blood lead level of 83 µg/dL in a CDC report (1993). A review of 1991-1992 California data yielded 40 cases with BLLs >20 µg/dL in which children had received ethnic remedies; over 80% of these children had Hispanic surnames (Centers for Disease Control and Prevention 1993). Another report described a 24-year-old woman from India who immigrated to Australia and delivered a child there with a neonatal BLL that was the highest recorded for a surviving infant in the country (cord BLL was 158.3 µg/dL). An exposure assessment revealed the mother's long-term ingestion of lead-contaminated herbal tablets as the source.
Use of herbal and alternative remedies is not confined to immigrant communities, and reported use is substantial in the general population, as documented in several studies. Eisenberg found in a 1990 national U.S. survey that 34% of English-speaking adults >18 years of age reported use of at least one unconventional therapy (Eisenberg et al. 1993). Only 10% reported receiving these alternative therapies from a traditional healer or health care provider, and 72% did not tell their medical doctor that they used unconventional therapy. Use was highest among non-Black individuals between the ages of 25 and 49 with relatively higher education and income. A follow-up survey conducted in 1997 found that use had increased to 42% (Eisenberg et al. 1998). More than 60% did not tell their medical doctors that they used alternative therapies. A 2001 New York City study found that 47% of women used medicinal therapies (Factor-Litvak et al. 2001). In a questionnaire survey of herbal medicine use among 734 women who had recently given birth or were about to give birth in Massachusetts, Hepner et al. (2002) found that 7.1% reported the use of herbal remedies, mostly on the advice of their health care provider.
Although the rates of reported use of herbal and alternative remedies vary, symptomatic cases of lead poison ing have been reported from these sources.
# Case Study 4-2. Lead Poisoning Associated with Ayurvedic Medications, California (2003)
Lead poisoning can occur from use of alternative or folk remedies. Ayurveda is a traditional form of medicine practiced in India and other South Asian countries. Ayurvedic medications can contain herbs, minerals, metals, or animal products and are made in standardized and nonstandardized formulations.
A woman aged 31 years visited an emergency department with nausea, vomiting, and lower abdominal pain 2 weeks after a spontaneous abortion. One week later, she was hospitalized for severe, persistent microcytic anemia with prominent basophilic stippling that was not improving with iron supplementation. A heavy metals screen revealed a BLL of 112 µg/dL; a repeat BLL 10 days later was 71 µg/dL, before initiation of oral chelation therapy. A zinc protoporphyrin measurement performed at that time was >400 µg/dL. Her husband's BLL was 6 µg/dL. No residential or occupational lead sources were identified, but the woman reported taking nine different ayurvedic medications prescribed by a practitioner in India for fertility during a 2-month period, including one pill four times daily. She discontinued the medications after an abnormal fetal ultrasound 1 month before her initial BLL. Analysis of her medications revealed 73,900 ppm lead in the pill taken four times daily and 21, 65, and 285 ppm lead in three other remedies. Her BLL was 22 µg/dL when she was tested 9.5 months after the initial BLL testing.
# Imported Cosmetics
Kohl, also known as 'al kohl' or 'surma', is a gray or black eye cosmetic applied to the conjunctival margins of the eyes that can contain up to 83% lead. It is used in the Middle East, India, Pakistan, and some parts of Africa for medicinal and cosmetic reasons (Parry and Eaton 1991). It is believed to strengthen and protect the eyes against disease. These cosmetics have been associated with elevated lead levels in children (Mojdehi and Gurtner 1996; Sprinkle 1995) and may also be used by women of childbearing age (Moghraby et al. 1989), especially those who are recent immigrants to the United States.
# Foods and Other Consumer Products
Lead can enter the food chain from contaminated soil or water, deposition from the air, or contact with food containers and processing. In the United States, dietary intakes of lead have been reduced due to the removal of lead from gasoline; the elimination of lead-soldered cans and lead-based printing ink on candy wrappers and bread bags; and changes to agricultural practices, such as the banning of lead-arsenate pesticides (Bolger et al. 1991). The U.S. Food and Drug Administration (FDA) maximum total tolerable daily intake (TTDI) of lead is 6 µg/day for children under 6 years of age, 25 µg/day for pregnant women, and 75 µg/day for other adults (Carrington et al. 1996; U.S. Food and Drug Administration 1993). These values were established when dietary intake levels were higher than current estimates. Several scientists have suggested that this standard be revised and made more rigorous, which would lower the TTDI for children to 1 µg/day (Carrington et al. 1996). Nonetheless, results from the Total Diet Study (sometimes called the market basket study), an ongoing assessment by FDA that determines levels of various contaminants and nutrients in foods, indicate that current levels of lead in the U.S. food supply are quite low (available at http://www.cfsan.fda.gov/~comm/tds-toc.html). The daily dietary intake of lead in the United States is currently estimated to be in the range of 2 to 10 µg.
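As a rough illustration of how the TTDI values above compare with an estimated dietary intake, the following sketch uses hypothetical function and group names; it is not an FDA tool.

```python
# Minimal sketch comparing an estimated daily dietary lead intake against the
# FDA total tolerable daily intake (TTDI) values quoted above.
TTDI_UG_PER_DAY = {
    "child_under_6": 6.0,
    "pregnant_woman": 25.0,
    "other_adult": 75.0,
}

def exceeds_ttdi(intake_ug_per_day: float, group: str) -> bool:
    """True when an intake estimate exceeds the TTDI for the given group."""
    return intake_ug_per_day > TTDI_UG_PER_DAY[group]

# The current U.S. dietary estimate of 2-10 ug/day stays below every TTDI:
print(exceeds_ttdi(10.0, "pregnant_woman"))  # False
```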
On occasion, imported foods and food products brought to the United States have been identified with elevated levels of lead. For instance, Lozeena, an orange powder used to color rice and meat, has been found to contain 7.8%-8.9% lead (U.S. Centers for Disease Control and Prevention 1998). FDA has issued warnings about tamarind candy lollipops (labeled Dulmex brand "Bolirindo") imported from Mexico due to high levels of lead that may be associated with the product, especially in the wrapper (U.S. Food and Drug Administration 1993, 2001). Analysis of these wrappers, which children may chew on or lick, showed between 21,000 and 22,000 parts per million (ppm) of lead, while the lollipop sticks contained more than 400 ppm of lead and the candy itself contained approximately 0.2 ppm of lead. Recently, FDA revised the recommended maximum level for lead in candy to 0.1 ppm (U.S. Food and Drug Administration 2006). Traditional food products that are contaminated with lead may be brought into the U.S. through unregulated routes (Handley et al. 2007). Chapulines (grasshoppers) from Mexico, for example, have been found to contain high levels of lead and have been the subject of a health alert by the California Department of Health Services (California Department of Health Services 2003). "Natural" calcium supplements derived from animal bone may contain lead. Waterfowl may ingest lead shot, become contaminated, and possibly be consumed by unsuspecting hunters and their families (Levesque et al. 2003). In addition, regular ingestion of game meat harvested with lead ammunition may be a source of lead exposure.
# Lead in Drinking Water
Control measures taken during the last two decades, including actions taken under the requirements of the 1986 and 1996 amendments to the Safe Drinking Water Act and the Environmental Protection Agency's (EPA) Lead and Copper Rule (U.S. Environmental Protection Agency 1991, 1997a), have greatly reduced exposures to lead in tap water. Even so, lead still can be found in some metal water taps, interior water pipes, or pipes connecting a house to the main water pipe in the street (Centers for Disease Control and Prevention 2004b). Lead found in tap water usually comes from the corrosion of older fixtures or from the solder that connects pipes. When water sits in leaded pipes for several hours, lead can leach into the water supply. Most studies show that consumption of lead-contaminated water alone would not be likely to elevate blood lead levels in most adults to a toxicologically significant level, even with exposure to water with a lead content close to the EPA action level for lead of 15 parts per billion (ppb) (U.S. Environmental Protection Agency 1991). Risk will vary, however, depending upon the individual, the circumstances, and the amount of water consumed. For example, infants who drink formula prepared with lead-contaminated water may be at higher risk because of the large volume of water they consume relative to their body size and the higher percentage of lead they absorb. Officials in communities that are considering changes in water additives or that have implemented such changes in water disinfection should assess whether these changes might result in increased lead in residential tap water (Centers for Disease Control and Prevention 2004b; Miranda 2007). EPA has asked all state health and environmental officials to monitor lead in drinking water at schools and day care centers.
# Lead Paint: Home Repair, Renovation, and Remodeling Activities
Lead-based paint was commonly used in homes built before 1950 and was not banned from sale for residential use in the United States until 1978. Recent studies estimate that more than 38 million U.S. homes still contain some lead-based paint, with two-thirds of the houses built before 1960 containing lead-based paint hazards. Lead-based paint hazards were concentrated in households with incomes less than $30,000 (35% vs. 19% in households with incomes >$30,000) and in the Northeast and Midwest, where the prevalence was twice as high as in the South and West.
Lead in paint and house dust are the most common sources of exposure in U.S. children (Lanphear et al. 1998). Among adults, however, exposure to lead-based paint and construction-related lead hazards occurs mainly during home repair, renovation, and remodeling activities conducted by the residents themselves or due to improper work practices of tradesmen and contractors (Centers for Disease Control and Prevention 2009; Feldman 1978; Fischbein et al. 1981; Jacobs 1998; Jacobs et al. 2003; Reisman et al. 2002; U.S. Department of Housing and Urban Development 1995).
Two basic circumstances increase the risk for an adult's exposure to lead-based paint: deterioration of the paint, and disturbance of the paint during remodeling or renovation. Paint deterioration can be caused by moisture problems, poor maintenance, or other problems. The paint on moveable building components (friction and impact surfaces such as windows and doors) poses higher risks because routine opening and closing can damage the paint on these surfaces over time, and lead-based paint was historically used on these components. Property owners should take precautions when repainting surfaces with deteriorated paint or performing any remodeling or renovation work that disturbs painted surfaces (such as scraping off paint or tearing out walls) (U.S. Environmental Protection Agency 1997b).
The U.S. government defines lead-based paint hazards as encompassing not only lead-based paint but also dangerous levels of lead in settled dust and bare soil. Testing for lead-based paint hazards can be done either by obtaining dust wipe samples from the floor and window sills or by using a portable x-ray fluorescence (XRF) analyzer to document the presence of lead in paint. EPA's 2001 hazard standard (40 CFR 745) set the benchmark for floor dust lead at 40 µg/ft2 and at 250 µg/ft2 for interior window sills.
# Lead-contaminated Soil
Soil may contain lead from deteriorating exterior lead-based paint or other sources such as deposition from years of leaded gasoline use or industrial emissions. Lead-contaminated soil can be tracked into the home and mixed with household dust, which may also contain lead from interior paint sources. In the United States, lead-contaminated soil is defined as a hazard if there is 400 ppm of lead in bare soil in children's play areas or an average of 1,200 ppm for bare soil in the rest of the yard (U.S. EPA 2001). Poisoning from lead-contaminated soil is most common among young children, who play on the floor and commonly mouth objects, but cases of lead poisoning have also been reported in women who consume lead-contaminated soil (Case Study 4-3). Exposure to lead from food grown in lead-contaminated soil in urban gardens has also been noted (Finster et al. 2004).
# Case Study 4-3. A Case of Lead Poisoning from Soil Ingestion During Pregnancy
S.N. is a 33-year-old woman who was pregnant four times with three living children. She was born in Jamaica, West Indies, and immigrated to the United States during her most recent pregnancy. Her obstetric history included three prior uncomplicated full-term vaginal deliveries. She registered for prenatal care at 19 weeks' gestation with no significant historical problems. On questioning, she revealed a history of pica, eating soil from near her house. Her initial tests included a blood lead level of 26 µg/dL, free erythrocyte protoporphyrin 48 (normal <35 µg/dL), Hgb 9.5 g/dL, and ferritin 4.9 ng/dL. She was counseled to stop the pica behavior and referred for genetic and nutritional counseling and to a special lead clinic. Her repeat blood lead level at 23 weeks' gestation was 13 µg/dL. Environmental lead tests of the water were negative. Soil tests were negative, except for areas around the garage door of her house. The patient had no knowledge of lead levels or lead testing during her other pregnancies in Jamaica. She was admitted for induction of labor at 38 weeks' gestation due to preeclampsia. She had a normal spontaneous vaginal delivery of a girl, 3,395 grams. Apgar scores were 9 at 1 minute and 9 at 5 minutes. She was discharged on day 3 and followed postpartum as her blood pressure gradually decreased to normal levels. Her postpartum blood lead level was 13 µg/dL and free erythrocyte protoporphyrin 60. S.N. decided to breastfeed and bottle-feed. At 6 weeks, the baby was noted to have a blood lead level of 20 µg/dL.
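Environmental results like the soil tests in this case study are screened against the EPA benchmarks quoted in the two preceding sections (40 µg/ft2 for floor dust, 250 µg/ft2 for interior window sills, 400 ppm for bare soil in play areas, and 1,200 ppm elsewhere in the yard). A minimal screening sketch, with hypothetical helper names:

```python
# Minimal sketch (hypothetical helpers) applying the EPA hazard benchmarks
# cited above for dust wipes and bare soil.
DUST_LIMITS_UG_FT2 = {"floor": 40.0, "window_sill": 250.0}
SOIL_LIMITS_PPM = {"play_area": 400.0, "rest_of_yard": 1200.0}

def dust_hazard(surface: str, lead_ug_ft2: float) -> bool:
    """True when a dust-wipe result meets or exceeds the EPA benchmark."""
    return lead_ug_ft2 >= DUST_LIMITS_UG_FT2[surface]

def soil_hazard(location: str, lead_ppm: float) -> bool:
    """True when a bare-soil result meets or exceeds the EPA benchmark."""
    return lead_ppm >= SOIL_LIMITS_PPM[location]

print(dust_hazard("floor", 55.0))       # True: above the 40 ug/ft2 benchmark
print(soil_hazard("play_area", 250.0))  # False: below the 400 ppm benchmark
```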
# Point Sources of Lead
Point sources of lead exposure include active mining and smelting operations, lead contamination at former mining and smelting sites, and industrial emissions, such as those from battery-manufacturing and recycling activities, particularly in international settings where environmental regulations and monitoring programs may not be in place. Studies have documented the impact of lead mining and smelting activities both in the United States and elsewhere (Benin et al. 1999), and women who live near active or former lead mines and smelters may be exposed to high levels of lead contamination.
# Leaded Gasoline
Recognition of the toxic effects of lead has prompted interventions that have reduced lead exposure in many countries. In the United States, standards to phase out leaded gasoline were first implemented in 1973 (U.S. Environmental Protection Agency 1973). In 1995, leaded fuel accounted for only 0.6% of total gasoline sales in the United States, and in 1996 the Clean Air Act banned the sale of leaded fuel for use in on-road vehicles (U.S. Environmental Protection Agency 1996). A worldwide initiative to phase out lead in gasoline has already stimulated important reductions in ambient air lead levels and population blood lead levels in some countries (Cortez-Lugo et al. 2003; Romieu et al. 1992). The phase-out of leaded gasoline was completed throughout the Latin American and Caribbean region by 2005 (Burke 2004; Walsh 2007). However, in some parts of Africa, Asia, and the Middle East, leaded gasoline is still common (Partnership for Clean Fuels and Vehicles 2007). The impact of leaded fuel is greater in urban settings, given their higher vehicular density.
# Hobbies and Recreational Activities
Hobbies and recreational activities that may cause exposure to lead include, but are not limited to, creating stained glass; enameling copper; casting bronze; making pottery with certain leaded glazes and paints; casting ammunition, fishing weights, or lead figurines; jewelry making and electronics work (with lead solder); glassblowing with leaded glass; printmaking; refinishing old furniture; distilling liquor; hunting; and target shooting.

In summary, risk factors for lead exposure in pregnant and lactating women include the following:

- Living near a point source of lead, such as lead mines, smelters, or battery recycling plants (even if the establishment is closed).
- Working with lead or living with someone who does: women who work in, or who have family members who work in, a lead industry (take-home exposures).
- Using lead-glazed ceramic pottery: women who cook, store, or serve food in lead-glazed ceramic pottery made in a traditional process and usually imported by individuals outside the normal commercial channels.
- Eating nonfood substances (pica): women who eat or mouth nonfood items that may be contaminated with lead, such as soil or lead-glazed ceramic pottery.
- Using alternative or complementary medicines, herbs, or therapies: women who use imported home remedies or certain traditional herbs that may be contaminated with lead.
- Using imported cosmetics or certain food products: women who use imported cosmetics, such as kohl or surma, or certain imported foods or spices that may be contaminated with lead.
- Engaging in certain high-risk hobbies or recreational activities: women who engage in high-risk activities or have family members who do.
- Renovating or remodeling older homes without lead hazard controls in place: women who have been disturbing lead paint, creating lead dust, or spending time in such a home environment.
- Consuming lead-contaminated drinking water: women whose homes have leaded pipes or source lines with lead.
- Having a history of previous lead exposure or evidence of an elevated body burden of lead: women who may have high body burdens of lead from past exposures, particularly those who are deficient in certain key nutrients (calcium, iron).
- Living with someone identified with an elevated lead level: women who may share exposures with a child, close friend, or other relative living in the same environment.

Figure 4-2, referenced earlier in this chapter, summarizes the following general advice for pregnant women to avoid lead exposure:

- Avoid jobs or hobbies that may involve lead exposure, and take precautions to avoid take-home lead dust if a household member works with lead. Such work includes construction or home renovation/repair in pre-1978 homes and lead battery manufacturing or recycling.
- Avoid using imported lead-glazed ceramic pottery produced in cottage industries (described elsewhere in this chapter) and pewter or brass containers or utensils to cook, serve, or store food.
- Avoid using leaded crystal to serve or store beverages.
- Do not use dishes that are chipped or cracked.
- Stay away from repair, repainting, renovation, and remodeling work being done in homes built before 1978, in order to avoid possible exposure to lead-contaminated dust from old lead-based paint, and avoid exposure to deteriorated lead-based paint in older homes.
- Avoid alternative cosmetics, food additives, and medicines imported from overseas that may contain lead, such as azarcon, kohl, kajal, surma, and many others listed in Appendix V.
- Use caution when consuming candies, spices, and other foods that have been brought into the country by travelers from abroad, especially if they appear to be noncommercial products of unknown safety.
- Eat a balanced diet with adequate intakes of iron and calcium, and avoid the use of cigarettes and alcohol.
# CHAPTER 5. BLOOD LEAD TESTING IN PREGNANCY AND EARLY INFANCY
# Key Recommendations for Initial Blood Lead Testing
- Blood lead testing of all pregnant women in the United States is not recommended.
- State or local public health departments should identify populations at increased risk for lead exposure and provide guidance about community-specific risk factors to assist clinicians in determining the need for blood lead testing for identified populations or for individuals at risk.
- Routine blood lead testing of pregnant women is recommended in clinical settings that serve populations with identified risk factors for lead exposure.
- In clinical settings where routine blood lead testing of pregnant women is not indicated on the basis of community-specific risk factors, health care providers should consider the possibility of lead exposure in individual pregnant women by evaluating risk factors for exposure as part of a comprehensive occupational, environmental, and lifestyle health risk assessment of the pregnant woman. Blood lead testing should be performed if a single risk factor is identified at any point during pregnancy.
- When indicated, blood lead testing should take place at the earliest contact with the patient, ideally pre-conceptionally or at the first prenatal visit, and be conducted using venous blood lead tests only.
- Both maternal and infant blood lead level test results, along with relevant environmental findings, should be incorporated into both the mother's and the infant's medical records in a timely fashion. Even though such records are likely to be maintained separately, these data are necessary for proper medical management of mother and infant.
# Key Recommendations for Follow-up Blood Lead Testing
- A toxicological threshold for adverse health effects has not been identified. Thus, follow-up blood lead testing is recommended for pregnant women with BLL ≥5 µg/dL and their newborn infants to inform environmental and clinical decision-making.
- Pregnant women with confirmed BLLs ≥45 µg/dL should be considered high-risk pregnancies and managed in consultation with experts in lead poisoning and high-risk pregnancy.
# INTRODUCTION
This chapter describes considerations for initial and follow-up blood lead testing during pregnancy and early infancy. It provides information for providers, public health agencies, and communities to guide the approach to the testing and follow-up of blood lead levels where lead exposure above background levels is either known or thought to be a concern or where there is no information on the epidemiology of blood lead levels among the target groups (pregnant women and infants less than 6 months of age). The tables outlining the frequency of follow-up blood lead testing of newborns and infants exposed in utero fill a gap left by the CDC recommendation for the follow-up testing of lead-exposed children, which begins at age 6 months (Centers for Disease Control and Prevention 1991, 2002).
The strategy described in this chapter for secondary prevention of lead toxicity through testing and identification of lead-exposed pregnant women is focused on the individual. However, a primary prevention strategy of community-focused reduction of lead sources is crucial to prevent the adverse consequences of lead exposure. Secondary prevention strategies, such as testing and follow up of lead exposure above background levels in individual women, do not adequately prevent exposure or the resultant adverse health outcomes. An understanding of the community characteristics, ethnicity, cultural practices, local industry and common occupations, and alternative medicine use practices will assist in identifying groups of women at risk for lead exposure. This strategy may be successful in primary prevention of exposure to the developing fetus and infant if it guides health education and outreach activities in high-risk communities.
# IDENTIFICATION OF PREGNANT WOMEN WITH ELEVATED BLOOD LEAD LEVELS
# Screening for Elevated Blood Lead Levels
The purpose of screening pregnant women is to identify women exposed to lead who can reasonably be expected to benefit from the knowledge of their lead exposures above background levels and subsequent actions to prevent additional lead exposure or adverse effects to themselves or their fetuses. In this report, screening refers to a laboratory test that is performed on a blood sample from an asymptomatic person to determine if that person has evidence of lead exposure above background levels. One goal of identifying pregnant women at risk is to prevent the potential adverse health outcomes for mother and infant associated with lead exposure during pregnancy. As described in Chapter 2, evidence suggests that no threshold exists for the impacts of lead on maternal health or on the birth, growth, and neurodevelopmental outcomes of the offspring. NHANES data on the blood lead levels of U.S. women of childbearing age indicate that a BLL ≥5 µg/dL is higher than the 98th percentile (or 3 standard deviations) for this population (Centers for Disease Control and Prevention 2009, unpublished data). Thus, a BLL ≥5 µg/dL indicates that a pregnant woman has been exposed to lead well above the U.S. average exposure.
The U.S. Preventive Services Task Force (USPSTF) recently completed a review of the evidence for lead screening in pregnancy. USPSTF found no studies examining the effectiveness of screening or interventions on improving health outcomes in asymptomatic pregnant women and a lack of available evidence for interventions to reduce blood lead levels in this population. The potential harms of screening cited include false-positive test results, anxiety, inconvenience, work or school absenteeism, and financial costs associated with repeated testing (Rischitelli et al. 2006; U.S. Preventive Services Task Force 2006). However, the USPSTF review did not assess the health impact on subpopulations exposed to lead prenatally or during breastfeeding or the benefits of screening only such subgroups.
CDC has determined that there is evidence for health effects in asymptomatic pregnant women at the population level and that a threshold for these effects has not been established. However, there is currently a lack of evidence of improved outcomes from interventions provided to pregnant women with a BLL ≥5 µg/dL, since no studies on this point exist. Therefore, the traditional model for medical decision-making of a case definition linked directly to a proven clinical treatment is not useful in this context. Until such research data are available, and given the convincing evidence of neurodevelopmental effects of lead in the prenatal period, CDC recommends a precautionary approach, noting that a BLL ≥5 µg/dL in a pregnant woman indicates that she has or has recently had exposure to lead well above that for most women of childbearing age in the U.S. population. Since there are still many potential lead sources that a pregnant woman can encounter, a blood lead test is a simple and inexpensive way to identify pregnant women with lead exposures above background levels, so that lead sources can be identified and further exposure can be prevented in the best interests of the mother and child. In addition, source identification and remediation activities may benefit other household and community members, depending upon the source in question, as well as the mother and fetus/infant in subsequent pregnancies. Finally, in contrast to abstract and generalized anticipatory guidance, blood lead test results above background levels are concrete and actionable data points that may help an expectant woman focus attention on the challenge of identifying and reducing lead exposure.
# Blood Lead Testing in the General Population and High-risk Subgroups
Universal blood lead testing of all pregnant women in the United States is not warranted (Rischitelli et al. 2006; U.S. Preventive Services Task Force 2006) considering the current estimated prevalence of elevated blood lead levels is less than 1% of women of childbearing age (Centers for Disease Control and Prevention 2009, unpublished data). However, routine blood lead testing may be warranted in specific U.S. subpopulations at increased risk for lead exposure due to local, community-specific factors, such as environmental sources of lead or the demographics of the population. In addition, individual characteristics and behaviors put certain populations of women of childbearing age at increased risk for lead exposure relative to that of the general population.
The presence of risk factors in a subpopulation of pregnant women, for example in a particular clinic population, is an indication for routine blood lead testing among all pregnant women in this subpopulation. State or local public health departments should provide clinicians with information on community-specific risk factors appropriate for use in determining the need for routine blood lead testing, including data describing the distribution of blood lead levels in the community and local knowledge of immigration patterns and ethnicity, common occupations, alternative medicine use, cultural practices, local industries, and idiosyncratic sources. Routine testing should continue in subpopulations known to be at increased risk until the specific risk factors within that population are better understood and more targeted methods for identifying women at increased risk can be employed.
The presence of a large industry in a community, such as a battery recycling plant or a lead smelter, is also an indication for blood lead testing of the local pregnant population. A list of occupations with the potential for lead exposure can be found in Appendix IV. When the prevalence of lead exposure above background levels is known to be high in certain communities, it may benefit providers to develop a centralized blood lead testing program at a local hospital, clinic, or community center.
# Individual Risk Factor Assessments
Blood lead testing of individual pregnant women based on individual risk factors may be warranted even when blood lead testing of population subgroups is not warranted. Identification of women who may be at increased risk for lead exposure consists of a comprehensive occupational, environmental, and lifestyle history to assess individual risk. However, validated risk factor questionnaires do not currently exist to predict who would benefit from blood lead testing. Local variation in lead exposure patterns makes national development of such a tool impractical. Instead, development (or adaptation) and validation of a risk factor questionnaire should occur at the local level, under the leadership of local public health authorities, after local risk factors for lead poisoning in pregnancy have been ascertained.
Assessments of risk factor questionnaires, developed primarily for children, have been conducted, including one adapted for use with pregnant women. Stefanak et al. (1996) assessed the accuracy of the CDC childhood lead poisoning risk questionnaire (Centers for Disease Control and Prevention 1991) administered as a screening tool to 314 pregnant women. In this study, which included both rural and urban areas, questions associated with elevated blood lead in pregnant women included the following: home built before 1960 with chipping or peeling paint, current smoker, and consumption of more than nine servings of canned food per week. Women who answered "yes" to any one of those questions were five times more likely to have an elevated blood lead level (BLL ≥10 µg/dL) (p<0.001). The authors calculated the sensitivity and negative predictive value of the CDC questionnaire to be 75.7% and 93.1%, respectively, in this population, suggesting high confidence that a negative response would classify a respondent correctly. However, the positive predictive value was only 46%, suggesting less confidence that a positive response would correctly classify individuals. When a full 19-question survey was administered, the sensitivity and negative predictive value increased to 89.2% and 96.4%, respectively. The performance difference between the questionnaires is most likely because the CDC childhood lead poisoning risk factor questionnaire was developed for children and does not target the major sources of lead exposure in pregnant women. (A worked example of these screening statistics appears below.)
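For readers unfamiliar with these screening statistics, the sketch below computes sensitivity, specificity, and positive and negative predictive values from a standard 2x2 table; the counts are invented for illustration and do not reproduce the Stefanak et al. data.

```python
# Worked example of standard screening statistics from 2x2 table counts.
def screening_stats(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    return {
        "sensitivity": tp / (tp + fn),  # elevated BLLs correctly flagged
        "specificity": tn / (tn + fp),  # non-elevated BLLs correctly passed
        "ppv": tp / (tp + fp),          # positive answers truly elevated
        "npv": tn / (tn + fn),          # negative answers truly not elevated
    }

# Hypothetical counts: 30 true positives, 35 false positives,
# 10 false negatives, 240 true negatives.
print(screening_stats(tp=30, fp=35, fn=10, tn=240))
# {'sensitivity': 0.75, 'specificity': 0.87..., 'ppv': 0.46..., 'npv': 0.96}
```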
# Clinical Indicators for Blood Lead Testing
Clinical indications for measuring a blood lead level include the presence of a risk factor for exposure, physical signs or symptoms, or the presence of a household member with known lead exposure above background levels. Most individuals with measurable lead exposure above background levels are asymptomatic. When symptoms or physical findings of lead poisoning are present, they are often difficult to differentiate because they are generally nonspecific and quite common. These include constipation, abdominal pain, anemia, headache, fatigue, myalgias and arthralgias, anorexia, sleep disturbance, difficulty concentrating, and hypertension, among others. Blood lead levels should be measured when these symptoms are present and a source of lead is suspected. Blood lead levels should also be measured in the work-up of acutely ill pregnant women presenting with severe abdominal colic, seizure, or coma, and considered in the differential diagnosis of consistent constitutional symptoms (e.g., persistent headache, myalgias, fatigue) and anemia.
# Timing of Blood Lead Testing During Pregnancy
Identifying maternal lead exposure prior to conception or early in the pregnancy potentially offers the most benefit to the developing fetus. Unfortunately, lead poisoning is frequently identified late in pregnancy. One case series reports a median gestational age at diagnosis of 25.4 weeks (range, 6 to 39 weeks), while another reports that lead poisoning was discovered in the third trimester in 12 of 15 (80%) subjects after the women presented with subtle but characteristic findings of severe lead poisoning, including malaise, anemia, or basophilic stippling on blood smear. Early blood lead testing may not always identify lead-poisoned women sooner in cases where the exposure first occurs during pregnancy, such as pregnancy-related pica behavior. In these cases, the measurement of a BLL preconceptionally or early in the first trimester may precede the patient's exposure. Earlier testing, however, does have the benefit of early identification in pregnant women with chronic, ongoing, or historical cumulative exposures (Hu 1991). Therefore, it is recommended that blood lead testing of women at increased risk take place at the earliest contact with the patient, ideally preconceptionally or at the first prenatal visit.
# Methods to Collect Blood Samples for Testing
Although blood lead levels can be measured in both capillary and venous samples, the preferred method for adults is a venous blood sample collected in a vacuum tube. Venous samples are more reliable than capillary samples, which can be inaccurate due to environmental contamination or dilution of the specimen from finger squeezing. Capillary samples can be used if strict protocols are employed to reduce the risk for contamination; however, even if obtained under these conditions, a capillary BLL ≥5 µg/dL requires confirmation with a venous blood lead test.
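A minimal sketch of this confirmation rule, with a hypothetical function name:

```python
# Minimal sketch (not a clinical decision tool): a capillary result at or
# above 5 ug/dL needs a venous draw to confirm.
def needs_venous_confirmation(bll_ug_dl: float, method: str) -> bool:
    return method == "capillary" and bll_ug_dl >= 5.0

print(needs_venous_confirmation(7.2, "capillary"))  # True
print(needs_venous_confirmation(7.2, "venous"))     # False: already venous
```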
# Methods to Analyze Lead Levels in Blood
Blood lead levels from venous samples should be analyzed by a certified laboratory using one of the approved methods, such as inductively coupled plasma mass spectrometry (ICP-MS), graphite furnace atomic absorption spectrophotometry (GFAAS), or anodic stripping voltammetry (ASV). Specimen tubes for collection should be lead-free, and laboratories should be consulted about the preferred specimen tube and collection procedures. For details about laboratory analytic procedures, see Analytical Procedures for the Determination of Lead in Blood and Urine, Approved Guideline (available at /orders/free/c40-a.pdf).
Using a centralized laboratory ensures the accuracy of testing and enables better compliance with local reporting requirements. Clinical Laboratory Improvement Act (CLIA) certification of laboratories and participation in national proficiency testing programs help assure that the methods employed for blood lead testing are accurate and precise within a specified range of BLLs. Regardless of the method employed, CLIA-mandated proficiency testing programs require accuracy to within ±4 µg/dL (or ±10%). The limits of detection, accuracy, and precision of BLL determination vary with the type of method used and among laboratories using the same method. Of the three commonly used methods, ICP-MS and GFAAS have limits of detection of about 0.3-1.0 µg/dL, with values reported to two significant figures. ASV has a detection limit of 1-3 µg/dL and is less precise, usually reporting values as whole numbers, but is adequate for BLL testing above the limit of detection. For medical interpretation and decisions on management, BLLs can be rounded to a whole number.
Measurement of the BLL of patients at risk for lead exposure can also be done at the point of care using a portable blood lead analyzer (Pineau et al. 2002; Shannon and Rifai 1997). Although this method offers the benefit of an immediate result and intervention, point-of-care measurements for pregnant women should be limited to situations where sending specimens to a centralized, certified laboratory is not feasible due to logistics, lack of refrigeration, or cost limitations. Any blood lead measurement ≥5 µg/dL obtained by this method should be confirmed by a certified laboratory with a venous blood lead test (as noted in the point-of-care instrument's use guidance).
# Interpretation of Blood Lead Test Results
Analytical variability must be considered when interpreting blood lead results. Changes in successive blood lead measurements on an individual can be considered significant only if the net difference of results exceeds the analytic variance of the method. The degree of analytical variability between laboratories that employ different analytic methods usually exceeds that within a single laboratory. Therefore, a single laboratory using one analytical method should be used to best compare multiple blood lead results from an individual or a population (Centers for Disease Control and Prevention 2005). As a practical matter, ACCLPP therefore recommends that trends in blood lead levels for an individual should not be considered clinically significant until the magnitude of the change is ≥5 µg/dL.
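The CLIA acceptance criterion described in the preceding section and the ACCLPP rule for a significant change can both be expressed compactly; the sketch below uses illustrative function names and is not a laboratory QC tool.

```python
# Minimal sketch of two rules quoted above: CLIA proficiency requires
# accuracy within +/-4 ug/dL or +/-10% (whichever is larger), and a trend in
# an individual's BLLs is clinically significant only at a change >= 5 ug/dL.
def clia_acceptance_range(target_ug_dl: float) -> float:
    """Half-width of the acceptable reporting window around a target value."""
    return max(4.0, 0.10 * target_ug_dl)

def significant_change(previous_ug_dl: float, current_ug_dl: float) -> bool:
    return abs(current_ug_dl - previous_ug_dl) >= 5.0

print(clia_acceptance_range(50.0))    # 5.0 -> a 50 ug/dL sample may read 45-55
print(significant_change(12.0, 15.0)) # False: a 3 ug/dL shift is within noise
```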
As described above, a BLL ≥5 µg/dL indicates that a pregnant woman has been exposed to lead well above the average U.S. exposure. Separate scientific studies indicate that adverse effects at BLLs ≥5 µg/dL are likely in pregnant women and likely to increase with increasing blood lead levels. Therefore, additional actions on the part of health care providers and public health are indicated for pregnant women with BLLs ≥5 µg/dL and their infants (see Table 6-1).
As described in Chapter 6, occupationally exposed women should be referred to an occupational physician or center treating occupationally exposed adults.
Steps to minimize lead exposure should be undertaken if the BLL is ≥5 µg/dL, and medical removal from workplace exposure should be undertaken if the BLL is ≥10 µg/dL.
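Pulling together the action thresholds from this chapter's key recommendations and the two preceding paragraphs, a minimal sketch follows; it is illustrative only, not clinical software.

```python
# Minimal sketch of BLL action thresholds stated in this chapter:
# >= 5 ug/dL minimize exposure and follow up; >= 10 ug/dL medical removal
# from workplace lead exposure; >= 45 ug/dL manage as a high-risk pregnancy.
def management_actions(bll_ug_dl: float) -> list[str]:
    actions = []
    if bll_ug_dl >= 5:
        actions.append("minimize exposure; schedule follow-up testing")
    if bll_ug_dl >= 10:
        actions.append("medical removal from workplace lead exposure")
    if bll_ug_dl >= 45:
        actions.append("manage as high-risk pregnancy; consult lead experts")
    return actions

print(management_actions(12.0))
# ['minimize exposure; schedule follow-up testing',
#  'medical removal from workplace lead exposure']
```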
# Transmission of Blood Lead Test Results
Health care providers have two important responsibilities with respect to sharing laboratory reports of blood lead levels. First, blood lead levels of both mother and child should be transmitted and entered into both the mother's and the infant's medical records in a timely fashion. For instance, the infant's initial and sequential BLLs should be in the mother's chart, and vice versa, as these data should inform decisions about additional blood lead testing, breastfeeding, and environmental interventions, among other actions. In addition, information about identified lead exposure sources can be clinically useful and should be shared.
In jurisdictions where reporting of BLLs is not done by laboratories, health care providers should also notify the Lead Poisoning Prevention Program of the local or state health department of confirmed BLLs ≥10 µg/dL in a pregnant woman to ensure that health department data are complete and that women receive appropriate services from public health. The report should include complete demographic information on the patient, the health care provider's name and phone number, and the method of sample collection (venous or capillary).
# FOLLOW-UP TESTING IN THE PREGNANT WOMAN
Once a blood lead level ≥5 µg/dL has been identified, an important component in the management of lead-exposed individuals is follow-up blood lead testing to assess trends. After the source of exposure has been identified and removed, it is expected that the BLL will decline. However, there is no clear formula to estimate the expected rate of decline of BLLs in exposed women or their offspring. Several factors play a role, including duration of the exposure, presence of physiological stressors affecting bone turnover rates, nutritional status, and medical and environmental interventions.
Follow-up blood lead testing is indicated for pregnant women with a BLL ≥5 µg/dL according to the schedules in Table 5-3. At higher BLLs, a follow-up confirmatory BLL might be indicated earlier than the schedule provides. Even a single BLL ≥5 µg/dL should prompt certain risk-related questions as soon as possible. Depending on the answers, it may be important to take immediate action. For example, if a pregnant woman from India has a BLL of 10 µg/dL and is taking ayurvedic supplements, she should be advised to stop taking the supplements immediately rather than waiting weeks for another BLL.
When the patient's BLL does not fall after several months, the various factors that may affect the rate of decline (i.e., duration of exposure, physiological stressors, nutritional status, and medical and environmental interventions) should be reconsidered. In some cases, further environmental investigation may be needed. A continuing increase in the measured venous BLL during the follow-up period may indicate continuing or possibly increased exposure to lead and indicates a need for further environmental investigation. Potential causes of rising BLLs in pregnant women include failure to address the source of the lead or inappropriate management of the lead source; continued use of lead-contaminated products such as spices, foods, cosmetics, folk remedies, or lead-glazed ceramics that were not revealed during the initial investigation; and increased mobilization of bone lead stores from past high-dose exposures. Additionally, prevention of exposure to lead from occupational sources may not be adequate to maintain a BLL below the level of concern. (See Chapter 6 for medical management guidelines for occupationally exposed women.) Measurement of follow-up BLLs is the main method for determining how urgently additional intervention is needed and whether blood lead levels are declining once interventions, such as removal from the source of exposure, have taken place.
As described in Chapter 3, blood lead levels generally follow a U-shaped curve over the course of pregnancy, with the peak blood lead level appearing at or near delivery. Assuming unchanging lead intake, the combination of hemodilution, increased weight of organs, and enhanced metabolic activity may account for much of the observed decrease in whole blood lead between 12 and 20 weeks gestation. Accelerated absorption of dietary lead, decreased elimination of lead from the body, and release of bone lead, perhaps following the calcium conservation strategies of late pregnancy, may all operate to yield the observed pattern of lead during pregnancy. Bone resorption dynamics change throughout pregnancy, and the implications for follow-up testing are twofold. Pregnant women with an initial blood lead level ≥5 µg/dL in the first trimester may have a lower BLL on repeat testing during the second trimester, regardless of interventions; this level may increase prior to delivery and may, in fact, exceed the initial level. Additionally, the BLL measured in the late first and second trimesters may underestimate the actual body lead burden. However, the magnitude of this change is uncertain, and it is unclear whether it is clinically significant for deciding whether a follow-up BLL <5 µg/dL measured in the first or second trimester should be repeated at or near delivery. In addition, a single blood lead level cannot be used to establish a woman's risk for her entire pregnancy.
# FOLLOW-UP TESTING IN NEWBORNS AND INFANTS <6 MONTHS OF AGE
Maternal and umbilical cord lead levels at delivery are, in most cases, highly correlated. However, in a woman with a known BLL ≥5 µg/dL during pregnancy, umbilical cord or neonatal lead levels should be measured to establish a baseline for clinical management. Follow-up blood lead testing is indicated for neonates and infants with a BLL ≥5 µg/dL according to the schedules in Tables 5-1 and 5-2. Potential causes of rising BLLs in newborns and infants under the age of 6 months include environmental sources of lead exposure, such as contamination from lead dust and lead in the diet. Not enough is known about the kinetics of lead in the prenatally exposed newborn to make reliable projections about the rate of change of infant BLLs after birth.
# FOLLOW-UP TESTING IN THE LACTATING MOTHER AND NURSING INFANT
Postpartum maternal blood lead levels are expected to increase during the first month after delivery (Osterloh and Kelly 1999). This increase is thought to be due partially to postpartum hemoconcentration from fluid loss. It is also greater in lactating women than in women who bottle-feed their infants, suggesting that lactation stimulates the release of lead from bone and that bone lead mobilization may actually be higher during lactation than during pregnancy. These findings illustrate the importance of understanding that an increase in maternal blood lead level after delivery is not necessarily associated with a new source of exogenous exposure and may, in fact, result from endogenous release of cumulative bone lead stores. However, it is difficult to draw a conclusion from the scientific literature about the magnitude of change warranting concern. The higher the BLL on the initial screening test, the more urgent the need for confirmatory testing.
If possible, obtain a maternal BLL prior to delivery, since BLLs tend to rise over the course of pregnancy. Health care providers should use a blood lead test to screen pregnant women if they answer "yes" or "don't know" to any of the following questions, or if they have moved to Minnesota from a major metropolitan area or another country within the last 12 months:
1. Do you or others in your household have an occupation that involves lead exposure?
2. Sometimes pregnant women have the urge to eat things that are not food, such as clay, soil, plaster, or paint chips. Do you ever eat any of these things-even accidentally?
3. Do you live in a house built before 1978 with ongoing renovations that generate a lot of dust (for example, sanding and scraping)?
4. To your knowledge, has your home been tested for lead in the water and if so, were you told that the level was high?
5. Do you use any traditional folk remedies or cosmetics that are not sold in a regular drug store or are homemade?

For women with prenatal blood lead levels of 5-9 µg/dL:

- Attempt to determine source(s) of lead exposure and counsel patients on avoiding further exposure, including identification and assessment of pica behavior (see Chapter 4).
- Assess nutritional adequacy and counsel on eating a balanced diet with adequate intakes of iron and calcium (see Chapter 7).
- Perform confirmatory and follow-up blood lead testing according to the recommended schedules (see Chapter 5).
- For occupationally exposed women, review the proper use of personal protective equipment and consider contacting the employer to encourage reducing exposure.
- Encourage breastfeeding consistent with the provisos in Chapter 9.
For women with prenatal blood lead levels of 10-14 µg/dL, ALL OF THE ABOVE, PLUS:
- Notify the Lead Poisoning Prevention Program of the local health department if BLLs ≥10 µg/dL are not reported by the laboratory.
- Refer occupationally exposed women to occupational medicine specialists and remove from workplace lead exposure.
For women with prenatal blood lead levels of 15-44 µg/dL, ALL OF THE ABOVE, PLUS:
- Support environmental risk assessment by the corresponding local or state health department with subsequent source reduction and case management.
For women with prenatal blood lead levels ≥45 µg/dL, ALL OF THE ABOVE, PLUS:
- Treat as high-risk pregnancy and consult with an expert in lead poisoning on chelation and other treatment decisions (see Chapter 8).
Note: Women of childbearing age with BLLs ≥5 µg/dL who are not currently pregnant or breastfeeding should be followed according to the OSHA medical surveillance guidelines in Appendix C.
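The tiers above are cumulative ("ALL OF THE ABOVE, PLUS"). Purely as an illustration of that structure, and not as a clinical decision tool, the logic can be summarized in a short Python sketch; the thresholds are taken from the tiers above, but the abbreviated action strings and all names are hypothetical.

```python
# Illustrative summary of the cumulative action tiers above; not a clinical
# decision tool. Actions are abbreviated; all names are hypothetical.

TIERS = [
    (5.0, ["Identify sources; counsel on avoiding exposure and assess pica",
           "Assess nutrition; counsel on iron and calcium intake",
           "Perform confirmatory and follow-up blood lead testing",
           "Review PPE use for occupationally exposed women",
           "Encourage breastfeeding per Chapter 9 provisos"]),
    (10.0, ["Notify the local Lead Poisoning Prevention Program if the lab does not report",
            "Refer occupationally exposed women to occupational medicine; remove from exposure"]),
    (15.0, ["Support health department risk assessment, source reduction, and case management"]),
    (45.0, ["Treat as high-risk pregnancy; consult a lead poisoning expert on chelation"]),
]

def recommended_actions(prenatal_bll_ug_dl: float) -> list[str]:
    """Cumulative 'ALL OF THE ABOVE, PLUS' actions for a confirmed prenatal BLL."""
    actions: list[str] = []
    for threshold, tier_actions in TIERS:
        if prenatal_bll_ug_dl >= threshold:
            actions.extend(tier_actions)
    return actions

print(len(recommended_actions(12.0)))  # 7 actions: the 5-9 tier plus the 10-14 tier
```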
# INTRODUCTION
This chapter summarizes actions to be undertaken by health care providers, in coordination with local and state health departments, in providing clinical and environmental services to pregnant and lactating women with BLLs ≥5 µg/dL. Both the health department and the health care provider have roles to play in keeping pregnant and lactating women and their offspring safe from further lead exposure. The chapter also describes how public health case management can coordinate actions between health departments and health care providers to optimize health and prevent lead exposure for both the affected mother and her fetus or infant.
This report recommends follow-up activities and interventions beginning at BLLs ≥5 µg/dL in pregnant women; Table 6-1 presents specific CDC recommendations for medical and public health actions according to the blood lead level of the pregnant or lactating woman receiving intervention. Because the prevalence of BLLs ≥5 µg/dL, and especially ≥15 µg/dL, is low in the United States, the frequency of follow-up testing recommended herein should not place an undue burden on the health care system. Although the BLL at which particular elements of case management are initiated varies by jurisdiction, education and follow-up BLL monitoring should be available for any pregnant woman with a confirmed BLL ≥5 µg/dL. More intense management, including home environmental and source investigation, should be available to any pregnant woman with a BLL ≥15 µg/dL.
Unlike the blood lead level of concern of 10 µg/dL for children, which is a communitywide action level, a BLL of 5 µg/dL in pregnant women serves a different purpose: it flags the occurrence of prior (or ongoing) lead exposure above background levels, which might not otherwise be recognized. Given the vulnerability of a developing fetus to adverse effects and the possibility of preventing additional exposures, and despite the lack of proven interventions linked to improved outcomes, CDC considers it prudent to initiate prevention and screening activities for pregnant women showing any evidence of lead exposure above background levels. As noted earlier, in contrast to abstract and generalized anticipatory guidance, blood lead test results above background levels are concrete, actionable data points that may help focus an expectant woman's attention on the challenge of identifying and reducing lead exposure.
This chapter describes the role of clinicians in the medical management of pregnant and lactating women with BLLs ≥5 µg/dL, including both clinical interventions, with references to detailed chapters, and environmental counseling to reduce lead exposures. This chapter also reviews the role of public health agencies in providing environmental investigations and case management. These essential activities complement those provided by health care providers to ensure that pregnant women receive the full spectrum of appropriate services to identify and reduce exposures to lead.
# MEDICAL MANAGEMENT: ROLE OF THE HEALTH CARE PROVIDER
Medical management of pregnant women with BLLs ≥5 µg/dL consists of two parallel tracks: environmental management and clinical services. The mainstay of management for pregnant women with blood lead levels ≥5 µg/dL is removal of the source, disruption of the route of exposure, or avoidance of the lead-containing substance or activity. Recommendations for reducing lead exposure are presented below.
Recommended clinical care is described throughout this report in the chapters presenting the research base on blood lead testing, nutrition, chelation, breastfeeding, and other issues. For the convenience of readers, a brief overview of important aspects of clinical care to accompany environmental risk reduction is provided in Box 6-1. Each topic presented is discussed in detail in separate chapters, as noted.
Reducing lead exposure can be a complex challenge, which does not always lend itself to straightforward interventions. Lead exposure can occur in the home, community, or workplace, so identifying specific sources of lead and exposure pathway(s) for an individual is essential to reducing exposure for a particular woman. Any or all of the following strategies may need to be applied depending on a woman's residence, lifestyle, or occupation. This section describes the essential actions recommended for health care providers to assess lead exposure and counsel on its reduction.
Source identification beyond obtaining a thorough environmental and occupational history should be conducted in collaboration with the local health department when BLLs are ≥15 µg/dL. During this process, local or state health departments will visit the home to conduct in-person interviews and collect samples that allow a more thorough understanding of the risk factors, lead sources, and pathways of exposure. In some jurisdictions, an investigation of the workplace may take place as well. This information should be shared with the health care providers for both the mother and the infant. Health care providers can assist in the investigation by providing information to health departments on suspected sources identified during the care of the patient. Blood lead testing of all family and household members can reveal whether the source is common to the household or unique to the patient.
# Evaluate Occupational Exposure and Make Appropriate Notifications
While blood lead levels in occupationally exposed individuals have fallen dramatically since lead industry standards were revised in 1978, occupational exposures are still a source of lead exposure in women (Centers for Disease Control and Prevention 2007). Public health departments and health care providers should evaluate occupation as a possible source of lead exposure in pregnant or lactating women with BLLs ≥5 µg/dL and, if occupational exposure exists, refer these women to an occupational physician or occupational medicine center that treats occupationally exposed adults. Appendix IV lists major lead-using industries.
Under current OSHA standards, workplace protections to reduce lead exposure include medical surveillance, periodic air monitoring, and provision of change and shower facilities to reduce take-home exposure. Medical removal is required by OSHA only when blood lead concentrations exceed 50 µg/dL (for construction) or 60 µg/dL (for general industry). BLLs of 40 µg/dL trigger a medical evaluation. However, the OSHA standards are out of date and inadequate for protecting the health of lead-exposed workers, especially pregnant women and their offspring. Adverse health effects have been associated with blood lead levels much lower than those currently set as benchmarks for OSHA enforcement.
New evidence has emerged over the last 20 years showing that both cumulative and acute lead exposures pose significant health risks. As discussed in Chapter 2 of this document, lead exposure during pregnancy has been associated with an increased risk for spontaneous abortion and adverse effects on fetal growth and neurodevelopment. In response to current research findings, recent recommendations, including those of the Association of Occupational and Environmental Clinics (2007), call for setting the general lead industry blood lead level of concern at 10 µg/dL; for occupationally exposed women who are or may become pregnant, the goal is to maintain a BLL <5 µg/dL.
From a clinical perspective, it is important to note that the OSHA Medical Surveillance Guidelines included as Appendix C to the 1977 Lead Standard (www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=STANDARDS&p_id=10033) explicitly state:
"Recommendations may be more stringent than the specific provisions of the standard. The examining physician, therefore, is given broad flexibility to tailor special protective procedures to the needs of individual employees. This flexibility extends to the evaluation and management of pregnant workers and male and female workers who are planning to raise chil dren. Based on the history, physical examination, and laboratory studies, the physician might recom mend special protective measures or medical removal for an employee who is pregnant or who is plan ning to conceive a child when, in the physician's judgment, continued exposure to lead at the current job would pose a significant risk. "
The appendix goes on to state: "The adverse effects of lead on reproduction are being actively researched and OSHA encourages the physician to remain abreast of recent developments in the area to best advise pregnant workers or workers planning to conceive children."
Since substantial research developments have occurred since the 1970s, when the OSHA standards were developed, occupationally exposed women who are or may become pregnant should be removed from lead exposure if their blood lead level is ≥10 µg/dL. If the blood lead level is in the range of 5 to 9 µg/dL, the health care provider should ask about potential sources of lead exposure on the job and review appropriate use of personal protective equipment in an effort to reduce exposure. Workplace hygiene should be emphasized in order to keep exposure as low as possible and to prevent take-home exposures for other household members (a schematic summary of these thresholds follows the checklist below). Specifically, patients should be advised to
- Wear a respirator and keep it clean.
- Use wet cleaning methods and HEPA vacuums to clean work areas. Never dry sweep or use compressed air.
- Wash hands and face before eating and drinking. Never eat or drink in the work area.
- Normal handwashing and cleaning of eating surfaces may not remove all surface lead; 'lead visualization' wipes are available that can help determine whether lead has been removed to an adequate degree.
- When possible, wash or shower and change clothes and shoes before leaving work. Keep all work items away from family areas in the home, and wash and dry work clothes separately from other laundry.
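As a schematic summary of the thresholds discussed above, the following hypothetical Python sketch contrasts the OSHA triggers with the more protective thresholds this report recommends for workers who are or may become pregnant; the function, names, and return strings are illustrative, not drawn from any standard.

```python
# Hypothetical sketch contrasting OSHA triggers with the thresholds this
# report recommends for pregnant workers. Names are illustrative only.

OSHA_MEDICAL_REMOVAL_UG_DL = {"construction": 50.0, "general_industry": 60.0}
OSHA_MEDICAL_EVALUATION_UG_DL = 40.0

def occupational_guidance(bll_ug_dl: float, pregnant_or_may_become: bool,
                          sector: str = "general_industry") -> str:
    if pregnant_or_may_become:
        if bll_ug_dl >= 10.0:
            return "Remove from workplace lead exposure; refer to occupational medicine"
        if bll_ug_dl >= 5.0:
            return "Ask about job lead sources; review PPE use and workplace hygiene"
        return "Routine prevention counseling"
    # OSHA floor for other workers; this report notes these triggers are out of date.
    if bll_ug_dl >= OSHA_MEDICAL_REMOVAL_UG_DL[sector]:
        return "Medical removal required under the OSHA lead standard"
    if bll_ug_dl >= OSHA_MEDICAL_EVALUATION_UG_DL:
        return "OSHA-triggered medical evaluation"
    return "Continue medical surveillance per the OSHA standard"
```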
Where feasible, the occupational medicine provider should consider contacting the woman's employer with recommended best practices to monitor and reduce lead exposure in the workplace. An example of such a letter is provided in Appendix XIV. Appendix XV contains the California Department of Public Health Workplace Hazard Alert. Prior to issuing such a letter, the health care provider should discuss its contents with the affected employee and obtain her authorization. Although the letter in Appendix XIV refers to the medical removal protection provisions of the OSHA lead standards, the provider and the employee should be aware that some patients (e.g., employees of government agencies, mines, railroads, and airlines) may work for a business that does not fall under OSHA jurisdiction. For these employees, the reference to the OSHA standard should be omitted and the employee should give explicit consent for release of medical information to her employer.
# Identify and Discourage Pica Behavior
As discussed in Chapter 4, the behavior common to all definitions of pica is a pattern of deliberate ingestion of nonfood items, which can cause lead exposure if the substances consumed are contaminated with lead. All pregnant women, but especially those with blood lead levels ≥5 µg/dL, should be counseled to never eat nonfood items that may contain lead, such as clay, soil, pottery, or paint chips. Appendix III lists commonly reported pica substances.
Once pica is identified, the specific behavior must be characterized in order to determine how best to intervene. Clinicians are encouraged to follow a standardized history outline to obtain a more complete picture of pica behavior for an individual woman. Table 6-2 provides suggested factors to assess and characterize pica behavior, including such issues as the reason(s) for the behavior (if known) and the substance(s) being consumed. Only a few studies have evaluated the effectiveness of interventions designed to reduce or eliminate pica behavior. Most of these studies evaluated the impact of interventions on pica behavior in developmentally delayed individuals or those with obsessive-compulsive disorders (Goh et al. 1999; McAdam et al. 2004; Piazza et al. 1998). Other studies have attempted to reduce pica behavior by providing vitamin supplements and improving the quality of the diet. While this approach appears to be effective in some case reports (Bugle and Rubin 1993; Pace and Toyer 2000), a randomized, double-blind, placebo-controlled, two-by-two factorial study found that micronutrient supplementation did not affect geophagy (eating earth) in 220 school-aged children in Zambia. The authors concluded that the results supported the premise that geophagy is a learned activity and that nutritional deficiencies associated with geophagy are more likely to be a result, not a cause, of the practice. No intervention studies were found that included pregnant women.
Therefore, until further research is available to guide clinical practice, interventions should promote alternative, healthier strategies in response to the patient's apparent reasons for pica. The approach depends upon eliciting accurate information from the patient about the behavior. In the clinical setting, it may be useful to ask women specifically about the discomforts of pregnancy and the techniques being used to minimize them. Pica has commonly been reported in pregnancy as a way to relieve abdominal pain, diarrhea, and nausea; to assuage cravings and improve appetite; and to impart a sense of well-being. Obstetrical providers also should inquire about cravings. Ice pica is particularly common and is often accompanied by pica of less benign substances. Inquiring first about general cravings in pregnancy, then about specific cravings for ice, and finally about cravings for other less commonly ingested nonfood items may be more likely to uncover pica behavior. Follow-up questions about the ingestion of other substances commonly used by members of a woman's community may also help elicit a history of pica.
If the substance is consumed due to cravings, substitution with a similar, but uncontaminated, substance could be suggested. If a woman is experiencing stomach upset, nausea, or lack of appetite, more appropriate interventions should be followed. Current recommendations of the American College of Obstetricians and Gynecologists (ACOG) for the effective management of nausea and vomiting in pregnancy include vitamin B6 supplementation, use of antiemetic medications, and nonpharmacological approaches such as "sea-bands," which apply pressure to points on the wrist to suppress nausea. These interventions can reduce the discomfort associated with nausea and vomiting of pregnancy by 70% (American College of Obstetricians and Gynecologists 2004). When pica is associated with a psychiatric disorder, appropriate referrals for counseling and behavior modification are warranted.
Descriptive studies have found associations between nutritional deficiencies and pica. Several studies reported lower serum ferritin levels, lower hemoglobin or hematocrit levels (Rainville 1998), or higher rates of anemia (Kettaneh et al. 2005) in those who engage in pica, while others have found no health effects associated with pica. Therefore, women who engage in pica behavior, regardless of the substance consumed, require nutritional counseling.
# Counsel Women About Avoiding Sources of Lead Exposure
# Avoid alternative products that may contain lead and stay informed of new risks
As discussed in Chapter 4, certain products have been found to be contaminated with lead. Some products have been clearly associated with lead poisoning cases, and women should be counseled to avoid them. These include alternative cosmetics, food additives, and medicines imported from overseas that may contain lead, such as azarcon, kohl, kajal, surma, and many others listed in Appendix V. Pregnant women should be informed that herbal medicines and alternative remedies imported personally or ordered from other countries by mail or online are not subject to FDA premarket approval, and therefore their safety cannot be assured, even if the product is professionally packaged and labeled. Pregnant women should be cautioned against consuming candies, spices, and other foods brought into the country by travelers from abroad, especially if they appear to be noncommercial products, since their safety is unknown.
Obstetrical providers should advise pregnant women not to expose their fetuses to the risks of herbal medicines and supplements (Marcus and Snodgrass 2005). Herbal medicines and supplements are often regarded as safe by the public and some health care providers, but there is no scientific basis for that belief. In addition, certain herbal medicines and supplements are known to be contaminated with lead and, therefore, should be avoided. There are no rigorous scientific studies of the safety of herbal medicines and supplements during pregnancy, and the Teratology Society has stated that it should not be assumed that they are safe for the embryo or fetus (Friedman 2000).
The literature also contains numerous reports of excessive lead intake associated with the use of lead-glazed ceramic pottery produced by artisans or small manufacturers overseas. As noted earlier, pregnant women should be warned that lead leaches out of these products if they are used for food preparation or storage, especially if used to store acidic liquids such as wine or juice. Leaded crystal glassware is another potential lead source, but one that has not been linked with lead poisoning cases.
On occasion, products available through domestic channels of commerce are found to cause lead exposures to consumers. Recent exposures of concern reported by the media have included jewelry, toys, and other products. In some cases, federal agencies have authority to issue recalls of contaminated products, but sometimes they can only issue warnings. For instance, dietary supplements sold in the United States are not subject to FDA premarket approval, but FDA has authority to act if products are adulterated (e.g., lead contaminated) or misbranded. In either instance, consumer education is essential to avoiding these exposures. Consumers and health care providers can monitor FDA and CPSC recalls and CDC alerts in order to be apprised of newly recognized products of concern. Local health departments can also communicate this information to communities and medical providers. Pregnant women should be given an updated list of products found to be contaminated with lead at their prenatal visits. The Consumer Product Safety Improvement Act (CPSIA), which took effect in February 2009, lowered the allowable lead content of consumer products intended for children 12 and younger, setting the standard at 600 ppm of lead in any accessible part. Beginning in August 2009, the allowable concentration declined to 300 ppm, and in August 2011 it will decline to 100 ppm. Starting in 2010, manufacturers must test their products and certify that they meet CPSIA standards. In the meantime, products exceeding the new standard remain prohibited and are subject to recall.
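To make the CPSIA phase-in concrete, here is a minimal illustrative sketch of the limit that applies on a given date; the month-level effective dates are approximations from the paragraph above, and the function names are hypothetical.

```python
# Sketch of the CPSIA lead-content phase-in described above:
# 600 ppm from February 2009, 300 ppm from August 2009, 100 ppm from August 2011.

from datetime import date
from typing import Optional

def cpsia_lead_limit_ppm(on: date) -> Optional[int]:
    """Allowable lead content (ppm) in accessible parts of children's products."""
    if on >= date(2011, 8, 1):
        return 100
    if on >= date(2009, 8, 1):
        return 300
    if on >= date(2009, 2, 1):
        return 600
    return None  # before the CPSIA limits took effect

def is_compliant(measured_ppm: float, on: date) -> bool:
    limit = cpsia_lead_limit_ppm(on)
    return limit is not None and measured_ppm <= limit

print(is_compliant(250.0, date(2010, 1, 15)))  # True: the 300 ppm limit applies
```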
# Avoid using lead-contaminated drinking water
Lead found in drinking water is usually due to corrosion that causes lead to leach out of plumbing pipes. The Safe Drinking Water Act (1991) prohibited the sale of lead-containing pipe for residential use (U.S. Environmental Protection Agency 1991). Homes built before 1986 are more likely to have lead in pipes, fittings, solder, fixtures, or faucets. Therefore, owners of older homes may want to test their water for lead. Certain attributes of the water, such as its temperature and pH, as well as the presence of additives, can all affect lead levels.
Families with private wells as a water source will need to test their water to determine if lead contamination is a problem, as this is not regulated by EPA.
The EPA's community action level for lead in tap water is 15 ppb. If test results exceed this level, public water systems must comply with public education requirements, as well as conduct additional testing. Such public water systems may also be required to conduct source water treatment and/or lead service line replacement.
Reducing lead levels in water may also require replacing internal plumbing such as pipes, fixtures, or both. Until the source(s) of lead is removed, homeowners should employ several strategies to minimize their exposure to lead in tap water. Flushing the system for several minutes after nonuse discards water that has been standing in the system and is more likely to contain lead. All tap water used for consumption (whether for drinking, cooking, or particularly for preparing infant formula) should be flushed before use. Use of bottled or filtered water is another alternative, although not all filtration systems remove lead and not all bottled water is guaranteed to be lead-free. For detailed instructions for flushing water, along with testing information and federal regulations, see the EPA Lead in Drinking Water Web page.
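As a minimal sketch of the guidance above, the following illustrative Python fragment compares a test result against EPA's 15 ppb action level and lists interim precautions; it is not a substitute for testing or professional assessment, and the names are hypothetical.

```python
# Illustrative comparison of a tap-water lead result to EPA's action level.

EPA_ACTION_LEVEL_PPB = 15.0

def tap_water_advice(lead_ppb: float) -> list[str]:
    # Flushing applies generally, since standing water picks up the most lead.
    advice = ["Flush taps after periods of nonuse before drawing water for consumption"]
    if lead_ppb > EPA_ACTION_LEVEL_PPB:
        advice += [
            "Result exceeds the EPA action level of 15 ppb",
            "Use flushed, bottled, or lead-certified filtered water, especially for infant formula",
            "Investigate pipes, solder, fittings, and faucets as potential lead sources",
        ]
    return advice

print(tap_water_advice(22.0))
```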
# Avoid exposure to lead hazards in housing (paint, dust, soil)
As noted in Chapter 4, lead-based paint hazards are a major source of exposure for young children. In contrast, the research literature suggests that pregnant women are more likely to be exposed to lead-based paint hazards associated with renovations in older homes. Nevertheless, pregnant women should be educated about the potential risks associated with lead-based paint in older housing for several important reasons. First, pregnant women should understand the importance of using lead-safe work practices in older homes during repair, renovation, repainting, or remodeling work. Failing to minimize and contain dust generated by any activity that disturbs paint can increase or create exposure risk. Dangerous paint removal and repair techniques that generate lead dust or fumes, such as dry scraping, sanding, burning of paint with a torch, or using a high-temperature heat gun, should be avoided and may be illegal in some jurisdictions or in federally subsidized housing. Without appropriate education, there is a risk that families (or renovation workers) will inadvertently create or worsen lead-based paint hazards as they work diligently to prepare the baby's room or make other home improvements, thereby exposing the pregnant woman and fetus to lead during the pregnancy or afterward, when the baby comes home.
Federal law requires that property owners disclose known lead-based paint and lead hazards to prospective buyers and renters of older homes and that remodeling contractors give lead information to residents before renovating homes built before 1978.

Families should also understand the importance of maintaining painted surfaces in older homes. While intact lead-based paint poses little risk, peeling paint or other signs of paint deterioration in a pre-1978 home can result in lead exposure hazards. If there is peeling paint (or any other indication of a problem) in the home of a family that is expecting or has young children, and lead-based paint is suspected to be present, the homeowner or tenant should contact the local health department for advice on options such as testing and remediation.
Appropriate remediation strategies vary according to the location and condition of the lead-based paint and the extent of the contamination. Interventions can include a range of activities, such as professional cleaning; thorough repair or replacement of components (e.g., entire windows, window sashes, trim/molding, or door jambs); paint stabilization; complete repainting; or complete paint removal. All of these interventions have been found to significantly reduce lead dust levels for at least 3 years postintervention, with the more intensive treatments associated with greater postintervention reductions (Dixon et al. 2005). In cases where heavy soil contamination has occurred, the soil may need to be removed. However, when less contamination is present, techniques such as planting ground covers or installing gravel pathways, drip line boxes, or raised planting beds and play areas may be sufficient (Binns et al. 2004; Dixon et al. 2006).
After interventions have been completed in the home, the home should not be reoccupied until it has passed lead dust clearance testing, indicating that the home has been adequately cleaned and that invisible lead dust has not been left behind. Numerous resources are available for the general public. For more information, see Chapter 11, Resources and Referral Information.
# Minimize lead exposure from point sources
Women who live close to active lead mines, smelters, or battery recycling plants should take precautions to avoid exposure to lead via inhalation exposures or ingestion of hazardous waste (e.g., mine tailings, acid mine drainage) through contamination of the home environment from industrial lead dust or fumes.
# Avoid hobbies and recreational activities that may involve lead exposure
Numerous recreational activities can result in exposure to lead. These include crafts (print making, stained glass, ceramics), outdoor sports (hunting and fishing), and liquor distillation, among others. Since women may not know that these activities carry a risk for lead exposure, consumer education is critical. General safety procedures, such as performing these activities in well-ventilated spaces, washing hands frequently, and using jacketed ammunition at shooting ranges, can all minimize the risk of lead exposure from recreational activities. Under some circumstances, consumption of game meat (e.g., venison, wild fowl) harvested with lead ammunition may pose a risk for excess lead exposure. Health care facilities providing care to pregnant women should provide informational brochures on the risks associated with these activities.
# ENVIRONMENTAL AND CASE MANAGEMENT: ROLE OF PUBLIC HEALTH AGENCIES
This section describes the essential role of public health agencies in assuring appropriate services for pregnant women needing intervention for lead exposure above background levels. Such public health services are recommended at various blood lead levels to complement ongoing medical management by the woman's health care provider (see Table 6-1). Specifically, public health agencies ensure that lead hazards in the home environment are assessed and remediated, and they provide case management services to ensure that all appropriate services are delivered. Public health agencies can also provide guidance about reimbursement for environmental investigation or case management and make referrals to private providers, such as lead risk assessors, if necessary.
# Environmental Investigation and Management
As previously noted, the critical element in the prevention of lead exposure is the control or elimination of all sources of lead, which must include the home environment to be effective. The goal of environmental management is to ensure a lead-safe home for mothers and babies. To this end, it is recommended that an investigation of the home environment, which is variously called an environmental investigation, exposure assessment, or risk assessment, be conducted for all women and newborn infants with BLLs of ≥15 µg/dL in order to identify potential sources of lead and pathways of exposure and to identify appropriate activities to reduce or prevent further lead exposure. This investigation and subsequent control activities should be carried out by the local or state health department, or under its supervision, as part of case management activities for pregnant and lactating women identified with blood lead levels ≥15 µg/dL.
The investigation should include questions about potential lead exposure pathways, a visual inspection of the home and other relevant environments, and testing of specific media for the presence of lead (such as water, household dust, soil, paint chips, foods, ethnic remedies, spices, ceramic ware, or other suspected sources of lead), as indicated. Examples of environmental management protocols for pregnant women are found in Appendix VIII (New York City Department of Health Pregnancy Risk Assessment Form, used for all women with prenatal BLLs ≥15 µg/dL); Appendix IX (Minnesota DOH Assessment Interview Form); and Appendix X (Minnesota DOH Lead-Based Paint Risk Assessment Form, used both for pregnant women and for children with elevated blood lead levels). An example of an environmental management protocol for infants of mothers with elevated prenatal blood lead levels who do not themselves have elevated BLLs is found in Appendix XI (NYC DOH Primary Prevention Information Form). An example of a protocol for environmental risk assessment for case management of young infants and children with blood lead levels of 15 µg/dL or higher is provided in Appendix XII (NYC DOH Child Risk Assessment Form).
At a minimum, environmental management should include isolating the expectant mother from known exposure sources: by workplace removal for occupational exposures, and through temporary relocation until hazard remediation has been completed and clearance achieved for lead-based paint hazards in the home. Local and state health departments may also utilize information and resources provided by the Centers for Disease Control and Prevention's National Center for Environmental Health (NCEH) and other agencies and organizations to provide the most current case management services to their constituents (Centers for Disease Control and Prevention 2002).
# Case Management
This section is intended to facilitate the management of pregnant and lactating women and newborn infants with lead exposure above background levels by providing information and guidance to health department personnel who provide or oversee care coordination and follow-up activities.
Case management of pregnant women with lead exposure involves coordinating, providing, and overseeing the services required to reduce their BLLs and prevent harm to the developing fetus. It is based on the efforts of an organized team that includes the pregnant woman and the health care providers of both mother and infant. A hallmark of effective case management is ongoing communication with the health care and other service providers and a cooperative approach to solving any problems that may arise during efforts to decrease the mother-infant pair's BLLs and eliminate lead hazards in their environment.
CDC recommends that public health agencies provide case management services for pregnant women with blood lead levels ≥15 µg/dL (see Table 6-1). These services are adapted from the current model of case management adopted by CDC for young children, which has eight components: a) client identification and outreach, b) individual assessment and diagnosis, c) service planning and resource identification, d) the linking of clients to needed services, e) service implementation and coordination, f) the monitoring of service delivery, g) advocacy, and h) evaluation (Centers for Disease Control and Prevention 2002). Typical case management activities could include the following, depending upon the patient's needs and local resources:
- Assess factors that may impact the woman's BLL (including sources of lead, nutritional status, access to services, family interaction, and understanding).
- Visit the woman's residence and other sites where the woman spends significant amounts of time, such as a job site, to conduct a visual investigation of the site and identify sources of environmental lead exposure. Such visits may be made by a case manager and/or by certified environmental investigators or risk assessors.
- Develop a written plan for intervention.
- Oversee the activities of the case management team.
- Coordinate implementation of the plan, including collaboration with the primary health care provider(s) and other specialists.
- Evaluate compliance with the plan and the success of the plan.
- Ensure that a woman receives services in a timely fashion consistent with guidance.
Another variable, the duration of management, will depend on when the blood lead level ≥5 µg/dL is identified: during the prenatal period, at birth, or while the mother-infant pair is nursing. The interventions recommended in this report are for the secondary prevention of adverse health effects from lead exposure; that is, to prevent further lead exposure and to reduce BLLs in pregnant women who have been identified as having lead exposure. However, the ultimate goal is primary prevention of any lead exposure of the developing fetus or newborn infant. Of course, primary prevention is also indicated for women of reproductive age who may become pregnant, not only those who are already pregnant. The importance of primary prevention should not be overlooked, since the behavioral and cognitive effects of lead exposure in young children may be irreversible.
Practices and resources for case management of lead exposure vary markedly among states, cities, and jurisdictions. (In some communities, case management is called care coordination.) The sources of exposure and the prevalence of blood lead levels above background levels among pregnant and lactating women and newborn infants also vary by geographic location and community-specific risk factors and may not be readily identifiable. Therefore, users of these guidelines may need to modify them to meet the needs unique to their specific communities. CDC provides technical assistance for the development and implementation of case management protocols.

This box provides clinicians with a concise reference on general considerations in the medical management of patients with confirmed lead exposure above background levels. Readers are encouraged to refer to the relevant chapters for additional information.
# Counseling patients on identifying and avoiding lead sources (Chapter 4)
For all patients, but especially those with known lead exposures, health care providers should provide guidance regarding sources of lead and help identify potential sources of lead in patients' environments. Public health agencies may have additional information on community-specific risk factors based on geographic location, occupation, or ethnic background. If not completed prior to the initial blood lead determination, providers should take a complete occupational and environmental history, including questions that may identify the presence of risk factors for lead exposure.
# Identification and counseling regarding pica behavior (Chapters 4 and 6)
Many studies agree that pica behavior is likely to be underreported. Identifying pica in a clinical setting may best be accomplished by treating it as a sensitive issue: proceeding from general to more-specific questions and from less-intrusive to more-intrusive questions. A recommended approach is to ask women specifically about techniques being used to minimize the discomforts of pregnancy and about cravings, inquiring first about general cravings in pregnancy, then about specific cravings for ice, and finally about cravings for other less commonly ingested nonfood items. Follow-up questions about the ingestion of other substances commonly used by members of a woman's community may also help elicit a history of pica. If a substance is consumed due to cravings, substitution with a similar, but uncontaminated, substance could be suggested. When pica is associated with a psychiatric disorder, appropriate referrals for counseling and behavior modification are warranted. Women who engage in pica behavior, regardless of the substance consumed, may benefit from nutritional counseling due to the documented associations between nutritional deficiencies and pica.
# Nutritional assessment and referrals (Chapter 7)
Pregnant and lactating women with a current or past BLL ≥5 µg/dL should be assessed for the adequacy of their diet and provided with prenatal vitamins and nutritional advice emphasizing calcium and iron intake. A balanced diet with a dietary calcium intake of 2,000 milligrams daily should be maintained, through diet, supplementation, or a combination of both. Additionally, iron status should be evaluated and supplementation provided in order to correct and prevent iron deficiency. Women with anemia (defined in pregnancy as a hemoglobin level <11 g/dL in the first and third trimesters and <10.5 g/dL in the second trimester) require higher dosing (Institute of Medicine 1990). Generally, pregnant women with iron deficiency anemia should be prescribed 60 to 120 mg of iron daily in divided doses; the dosage can be reduced to 30 mg daily once the anemia is corrected. Women receiving supplemental iron or calcium should be encouraged to split the dose, taking no more than 500 mg of calcium or 60 mg of iron at one time, as only small amounts of these nutrients can be absorbed at any one time. Obstetrical providers should advise pregnant women not to expose their fetuses to the risks of herbal medicines, since there is no evidence of their safety and some are known to be lead-contaminated.
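Purely to illustrate the dose-splitting arithmetic above (no more than 500 mg of calcium or 60 mg of iron per dose), here is a minimal sketch; it is not a dosing tool, and all names are hypothetical. Dosing decisions belong to the clinician.

```python
# Illustrative arithmetic for splitting daily calcium or iron into doses
# that respect the per-dose absorption caps noted above.

import math

MAX_PER_DOSE_MG = {"calcium": 500.0, "iron": 60.0}

def split_doses(nutrient: str, daily_mg: float) -> tuple[int, float]:
    """Return (number of doses, mg per dose) respecting the per-dose cap."""
    cap = MAX_PER_DOSE_MG[nutrient]
    n_doses = max(1, math.ceil(daily_mg / cap))
    return n_doses, daily_mg / n_doses

print(split_doses("calcium", 2000))  # (4, 500.0): four 500-mg doses
print(split_doses("iron", 120))      # (2, 60.0): e.g., 60 mg twice daily
```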
# Interpretation and follow-up of blood lead tests (Chapter 5)
Pregnant women identified with blood lead levels ≥5 µg/dL should be tested per Table 5-3. Follow-up blood lead testing should be performed according to the schedules provided in Table 5-1 for newborns and Table 5-2 for infants under 6 months of age. Adjust the frequency of follow-up tests according to the chronicity of exposure; risk factors for continued, repeat, or future exposure; and the types of clinical interventions. Occupationally exposed women should be referred to an occupational physician or a center treating occupationally exposed adults and removed from workplace lead exposure at a BLL ≥10 µg/dL. If the result is not reported directly by the clinical laboratory, the health care provider should notify the Lead Poisoning Prevention Program of the local or state health department of BLLs ≥10 µg/dL. Communication with the local or state health department and the pediatric health care provider is crucial in ensuring appropriate follow-up care, developmental monitoring, and referrals. Pregnant women identified with blood lead levels ≥5 µg/dL should be tested at the time of birth to establish a baseline to guide postnatal care for mother and child, and followed up according to the testing schedule in Table 9-1. If past exposure to lead was higher than for most people, maternal blood lead levels may increase slightly during lactation due to the liberation of lead from bone stores.
# Assisting with identification of lead sources in the environment (Chapters 4 and 6)
The essential activity in the management of pregnant women with blood lead levels ≥5 µg/dL is removal of the lead source, disruption of the route of exposure, or avoidance of the lead-containing substance or activity. Source identification beyond obtaining a thorough environmental and occupational history should be conducted in collaboration with the local health department when BLLs are ≥15 µg/dL; in most jurisdictions, the health department will conduct an environmental investigation of the home. This process usually includes in-home interviews and collection of environmental samples to confirm lead sources and pathways of exposure. Health care providers can assist by providing information to health departments on suspected sources identified during patient care. Findings should be shared with the health care providers of the mother and infant.
# Chelation therapy (Chapter 8)
In consultation with a lead poisoning expert, pregnant women with confirmed BLLs ≥45 µg/dL may be considered for chelation therapy, and such pregnancies should be considered high risk. Immediate removal from the lead source is still the first priority. In some cases, women may need hospitalization. Reserving the use of chelating agents for later in pregnancy is consistent with the general concern about the use of unusual drugs during the period of organogenesis (National Research Council 2000). However, BLLs ≥70 µg/dL may result in significant maternal toxicity, and chelation therapy should be considered, regardless of trimester, in consultation with experts in lead poisoning and high-risk pregnancies. Chelation therapy should also be considered in neonates and infants less than 6 months of age with a confirmed BLL ≥45 µg/dL.
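As a hypothetical schematic of the thresholds above (and only that; the actual decision point is expert consultation, not a threshold function), the considerations can be sketched as follows; all names and return strings are illustrative.

```python
# Hypothetical schematic of the chelation considerations above.

from typing import Optional

def chelation_consideration(confirmed_bll_ug_dl: float,
                            trimester: Optional[int] = None) -> str:
    if confirmed_bll_ug_dl >= 70:
        # Significant maternal toxicity is possible at this level.
        return ("Consider chelation regardless of trimester, in consultation "
                "with experts in lead poisoning and high-risk pregnancies")
    if confirmed_bll_ug_dl >= 45:
        if trimester == 1:
            # Organogenesis concerns favor reserving chelation for later in pregnancy.
            return ("High-risk pregnancy: consult a lead poisoning expert; "
                    "chelation generally reserved for later in pregnancy")
        return ("High-risk pregnancy: consider chelation in consultation with "
                "a lead poisoning expert; source removal remains the first priority")
    return "Chelation not indicated at this level; continue source removal and follow-up"
```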
# Counseling on breastfeeding (Chapter 9)
Initiation of breastfeeding should be encouraged for mothers with BLLs <40 µg/dL.

- The human body's nutritional status affects the absorption, deposition, and excretion of lead and may also affect lead toxicity.
- Lead exposure can also modify the body's ability to utilize nutrients.
- Avoidance of lead exposure remains the primary preventive strategy for reducing adverse health effects. However, the existence of nutrient-lead interactions suggests that optimizing nutritional status during pregnancy and lactation may assist in preventing the adverse consequences of lead exposure.
# General Nutritional Recommendations for Pregnant and Lactating Women
- All pregnant and lactating women should eat a balanced diet in order to maintain adequate amounts of vitamins, nutrients, and minerals.
- All pregnant and lactating women should be evaluated for iron status and be provided with supplementation in order to correct iron deficiency.
- All pregnant and lactating women should be evaluated for the adequacy of their diets and be provided with appropriate nutritional advice and prenatal vitamins.
- Women in need of assistance should be referred to programs such as WIC or the Supplemental Nutrition Assistance Program (SNAP) (formerly food stamps).
- All pregnant and lactating women should avoid the use of alcohol, cigarettes, herbal medicines, and any other substance that may adversely affect the developing fetus or infant.
# Recommendations for Pregnant and Lactating Women with Blood Lead Levels ≥5 µg/dL
- In pregnant and lactating women with BLLs ≥5 µg/dL or with a history of lead exposure, a dietary calcium intake of 2,000 milligrams daily should be maintained, through diet, supplementation, or a combination of both.
# OVERVIEW OF THE RELATION BETWEEN NUTRITION AND LEAD
Pregnancy and the first 2 years of life are exceptionally important intervals with respect to adequate maternal and child nutrition (Horton 2008). Pregnancy and lactation are also critically important periods from a toxicological perspective because of the potential for adverse effects of toxic exposures on early human development. If inadequate nutritional status increases susceptibility to the toxic effects of lead, lifelong adverse effects are more likely. In addition, lead exposure can interfere with the metabolism of nutrients, an especially important consideration when nutritional status is marginal. This chapter provides an overview of the information on dietary intake and lead levels in pregnant women. These data are limited. Any beneficial effects of dietary supplementation must be demonstrated in well-designed (randomized, placebo-controlled) clinical trials. However, given the importance of basic nutrition in normal pregnancy and lactation, this chapter provides practical recommendations based on the limited suggestive data available for primary and secondary prevention of lead exposure. Recommended dietary intakes (dietary reference intakes, formerly called RDAs) are from the Institute of Medicine, Food and Nutrition Board, unless specifically noted otherwise, and are provided for reference as Appendix XIII (Institute of Medicine 1997, 2001).
Decades of laboratory and clinical investigation have confirmed that the body's nutritional condition affects lead absorption, deposition, metabolism, and excretion (for reviews see Ahamed and Siddiqui 2007; Bogden et al. 2001; Mahaffey 1980, 1985; Mahaffey et al. 1992; Ros and Mwanri 2003). The physiological mechanisms underlying nutrition-lead interactions are multiple and include nutrients binding lead in the gut, competing with lead for absorption, altering intestinal cell avidity for lead, or altering the affinity of target tissues for lead (Ballew and Bowman 2001). Lead can also modify the metabolism of nutrients (Pounds 1991; Sauk and Somerman 1991). For example, changes in iron metabolism and in the formation of the metabolically active forms of vitamin D occur with lead exposure. As understanding of cellular biology has advanced, the mechanisms through which nutritional status (at least for the divalent cations calcium and iron) alters the metabolic response to lead are becoming clarified (Godwin 2001).
Avoidance of lead exposure remains the primary preventive strategy for reducing adverse effects of lead exposure. However, the existence of nutrient-lead interactions suggests that optimizing nutritional status during pregnancy and lactation may reduce the adverse consequences once lead exposure has occurred. Although the lead-nutrient interaction data are limited and somewhat inconsistent, ensuring adequate intakes of minerals such as calcium, iron, selenium, and zinc, and of vitamins C, D, and E, is a strategy that is generally health promoting, is associated with few risks, and may confer additional benefits to lead-exposed pregnant and lactating women.
Whether lead-poisoned pregnant and lactating women benefit from ingestion of dietary supplements in excess of nutritional requirements is not clear, and super-supplementation is not recommended. Differences in response between marginally adequate and super-nutritional status may be physiological. For example, the physiological mechanisms that foster adaptation to low dietary intakes (e.g., increased production of binding proteins in the gastrointestinal tract that can transport lead as well as calcium or iron) may differ significantly from those that occur when nutrient intakes are higher than required. Dietary supplementation with nutrients at levels higher than those required by nonexposed women may constitute a secondary prevention effort aimed at reducing circulating levels of lead in the mother and at reducing lead exposure to the developing fetus and nursing infant.
Studies of the effects of nutrition on blood lead levels are complicated by a number of factors. A general problem is that variability in the nutritional status of subjects can determine whether there is a response to changes in nutrient level. For example, iron absorption is increased when the body is deficient in iron, but when the body is iron-replete, absorption of additional iron is inhibited (Finch 1994). These same mechanisms also influence the percentage of lead that is absorbed. Specific problems related to observational studies are discussed below.
# OBSERVATIONAL STUDIES OF ASSOCIATIONS BETWEEN BLOOD LEAD AND MATERNAL DIET
The majority of research on the influence of nutrition on lead status during pregnancy and lactation has been observational. Such studies can only determine associations between nutritional status and lead poisoning, not whether these associations are causal. Observational studies are further complicated because intercorrelations between nutrients in the diet limit the identification of the effects of specific dietary components. Observational studies on the association of maternal diet and lead have shown varying results.
In an observational study of maternal diet during pregnancy, higher intakes of calcium, iron, and vitamin D were associated with lower neonatal blood lead levels (Schell et al. 2003). More than 50% of the mothers had dietary intakes below the recommended dietary allowances for zinc, calcium, iron, vitamin D, and kilocalories. Maternal and neonatal blood lead levels were correlated, and all of the neonatal blood lead levels were low (geometric mean = 1.58 µg/dL). West (1994) investigated the relationship between prenatal vitamin supplement use and maternal blood lead levels and pregnancy outcomes in 349 African American women. Supplement users had significantly lower blood lead levels than nonusers (p = 0.0001). This study did not describe the content of the supplements consumed or provide adherence data, but levels of calcium and vitamins C and E were confirmed by blood analysis and were higher among the reported supplement users, suggesting that the self-reports were accurate.
Among postpartum women in Mexico City, lower levels of bone lead were associated with higher intakes of calcium, vitamin D, phosphorus, magnesium, iron, zinc, and vitamin C, though these relationships showed inconsistent trends (Ettinger et al. 2004). Gulson et al. (2006) measured daily intakes of the micronutrients calcium, magnesium, sodium, potassium, barium, strontium, phosphorus, zinc, iron, and copper from 6-day duplicate diets (2-13 collections per individual) and blood lead concentrations in a small number of mother-child pairs (a total of 21 pregnant and 15 nonpregnant subjects in one cohort, nine pregnant subjects in a second cohort, and one group of ten 6- to 11-year-old children) to evaluate the association between dietary intakes of selected micronutrients and blood lead. They found no statistically significant relationship between blood lead concentration and intake of specific micronutrients (Gulson et al. 2006).
# ROLE OF SPECIFIC NUTRIENTS WITH RESPECT TO LEAD
# Calcium
# Association of dietary calcium intake and lead
Increased lead absorption and tissue retention among overtly calcium-deficient experimental animals have been confirmed in multiple species. As shown in experimental animal studies reported in the 1970s, a diet deficient only in calcium, when fed to rats for several months, produced much higher tissue stores of lead than occurred in animals fed comparable amounts of lead plus a calcium-adequate diet (Mahaffey et al. 1973; Mahaffey-Six and Goyer 1972). Unusually high deposition of lead in nonosseous tissues (including the kidneys) occurred, in contrast with less dramatic elevations of bone lead (Mahaffey et al. 1973). This difference likely reflects impaired bone formation and deposition of lead into bone in the high-lead, low-calcium animals (Mahaffey et al. 1973). The increased absorption and retention of lead by calcium-deficient animals have been confirmed in other species, including dogs (Hamir et al. 1982; Stowe and Vandevelde 1979) and horses (Willoughby et al. 1972). Generally, the major calcium effects on lead absorption and distribution occur when dietary calcium is deficient. Little influence of calcium on lead metabolism is observed when calcium intake is increased above required levels in animal studies, i.e., the equivalent of super-supplementation (e.g., Barton et al. 1978; Mahaffey et al. 1973).
Confirmation of the impact of low dietary calcium intakes has also been found among human subjects, who showed increased lead absorption when their diets were low in calcium (Heard and Chamberlain 1982). Several cross-sectional studies of calcium intake and blood lead levels in women of childbearing age and pregnant women have shown an inverse relationship between calcium-rich foods or calcium intake and blood lead levels. Lacasana-Navarro et al. (1996) observed a statistically significant trend among women of reproductive age between increasing calcium intake and decreased risk of elevated blood lead levels (>10 µg/dL). Consumption of foods providing calcium (corn tortillas and milk products) has also been shown to be associated with reduced blood lead levels, and higher milk intake during pregnancy has been associated with lower maternal and umbilical cord lead levels in postpartum women in Mexico (Hernandez-Avila et al. 1997).
# Dietary calcium supplementation and lead levels
During pregnancy and lactation, lead accumulated in the maternal skeleton is released (Gulson et al. 1999), with greater mobilization of lead during lactation than during pregnancy. Calcium supplements have been suggested as a means of reducing mobilization of skeletal mineral. Observations of the variability in release of skeletal lead reinforced the suggestion that low calcium intake may contribute to mobilization of skeletal lead during pregnancy (Gulson et al. 1999). Use of calcium supplements to meet fetal demand for calcium and thereby reduce maternal bone mobilization has been described. Results from Gulson et al. (2004) indicated that calcium supplements were ineffective in minimizing the mobilization of lead from the skeleton during lactation; however, this small observational study lacked a control group and was not designed to properly account for other potential confounding factors.
Calcium supplementation (1,200 mg at bedtime) during the third trimester of pregnancy has been shown, in a randomized crossover trial, to reduce maternal bone resorption by 14% on average in comparison to placebo, suggesting that calcium supplements may reduce maternal bone lead mobilization during the third trimester of pregnancy.
Two large randomized clinical trials have been conducted to assess whether calcium supplements reduce blood lead levels during pregnancy and lactation. In a randomized, double-blind, placebo-controlled trial of calcium supplementation during lactation, Hernandez-Avila et al. (2003) showed that 1,200-mg daily dietary supplementation with calcium carbonate among lactating women reduced maternal blood lead levels 15%-20% over the course of lactation. Compared with women who received the placebo, those who took supplements had a modest decrease in their blood lead levels of -0.12 µg/dL at 3 months (95% CI = -0.71 to 0.46 µg/dL) and -0.22 µg/dL at 6 months (95% CI = -0.77 to 0.34 µg/dL). The effect was more apparent among women who were most compliant with supplement use and had patella bone lead >5 µg/g bone (-1.16 µg/dL; 95% CI = -2.08 to -0.23). During the second and third trimesters of pregnancy, calcium supplementation (1,200 mg) was associated with an average reduction of 19% in blood lead concentration relative to placebo. The effect was larger among women who were most compliant with supplement use (those consuming >75% of pills; -24%). Among women who were most compliant, had patella lead >5 µg/g, and reported use of lead-glazed ceramics, the reduction in blood lead of 31% corresponds to an average reduction of 1.95 µg/dL (95% CI = -2.87 to -0.78). Bone resorption was also reduced by 13% in the supplement group compared with the placebo group (p = 0.002). Calcium supplementation was also associated with 5%-10% lower breast milk lead levels among these women over the course of lactation, suggesting that calcium supplementation may also be an intervention strategy to reduce lead in breast milk from both current and previously accumulated sources. Such data support the role of calcium supplementation in decreasing bone resorption, which can release bone lead stores. Calcium supplementation may also decrease intestinal absorption of lead.
Overall, calcium supplementation has been associated with modest reductions in blood lead levels when administered during either pregnancy or lactation. Suppression of bone resorption appears to be the most likely mechanism, although reduced absorption of lead from the gastrointestinal tract may also contribute. It has also been suggested that high levels of calcium are needed to supply the nutritional needs of the developing fetus (Johnson 2001).
# Calcium status in U.S. women
Calcium requirements during pregnancy and lactation have been investigated extensively. The increased fetal/infant demand for calcium is met by increasing maternal gastrointestinal absorption, decreasing renal excretion, and increasing bone mineral mobilization (Kovacs and Kronenberg 1997). These physiological adaptations (including endocrine responses) help explain why there is no simple relationship between dietary calcium intake and calcium availability to mother, fetus, or infant (Prentice 2000a). In general, however, Americans do not meet dietary recommendations for calcium (Ma et al. 2007), and ethnic minorities and socially disadvantaged groups are even less likely to meet them (Affenito et al. 2007). The recommended intakes for calcium are 1,300 mg for pregnant and lactating women 18 years and younger and 1,000 mg for pregnant and lactating women 19 years of age and older.
Estimated calcium intake during pregnancy in the United States varies substantially. Based on data from the 1999-2000 NHANES, average calcium consumption for women of childbearing age was between 820 and 940 mg from both diet and supplements. Earlier data from NHANES II showed that mean calcium intake from food was 642 mg/day for white women in the 18-through-39-year age group, contrasted with 467 mg/day among black, non-Hispanic women (Looker et al. 1993). African Americans in all age groups have been shown to consume fewer mean servings of total dairy, milk, cheese, and yogurt than non-African-Americans and have lower calcium intakes (Fulgoni et al. 2007; Weinberg et al. 2004). Meeting dietary recommendations for calcium on a dairy-free diet is difficult (Gao et al. 2006), but can be made easier through the use of calcium-fortified foods such as citrus juices (Gao et al. 2006) and consumption of ready-to-eat cereals, which facilitate milk intake (Song et al. 2006). In contrast to several of the studies cited above, the assessment by Harville et al. (2004) evaluated total oral calcium intake, including both food and antacids. Although median oral calcium intake exceeded 1,200 mg/day, more than 10% of the youngest women consumed <600 mg calcium/day. Within the overall group, 10% of African-American women and 6% of white women reported being either lactose intolerant or allergic to milk. However, there was no difference in calcium intake (both approximately 1,200 mg/day) between women reporting lactose intolerance and those not reporting it. It should be noted that many of the women in this particular study were enrolled in the Women, Infants, and Children (WIC) program, which supplies milk and cheese. In this study, racial differences in calcium intake were not significant.
Calcium requirements are increased substantially during pregnancy and lactation to meet the demands of the developing fetus and nursing infant (Prentice 2000b). Approximately 25 to 30 grams of calcium are transferred to the fetus during pregnancy, with the majority of this transfer occurring during the third trimester (Institute of Medicine 1990). The major physiological adaptation of the mother to meet this increased calcium requirement is increased efficiency in intestinal absorption of calcium. Decreased renal excretion of calcium and increased bone mineral mobilization are other maternal mechanisms used to meet the needs of the fetus. The Institute of Medicine (IOM) currently recommends 1,000 mg calcium per day for pregnant and lactating women 19-50 years (and 1,300 mg per day for pregnant and lactating women <19 years) (Institute of Medicine 1997). Optimal calcium intake may be achieved through diet, calcium-fortified foods, calcium supplements, or various combinations of these.
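To make the arithmetic behind these figures concrete, the following minimal Python sketch estimates the implied average daily transfer, under the simplifying assumption that essentially all of the 25-30 g is transferred over a roughly 13-week third trimester; the trimester length and the all-in-one-trimester simplification are illustrative assumptions, not figures from the studies cited above.

```python
# Back-of-the-envelope estimate of average daily fetal calcium transfer,
# assuming (for illustration only) the full 25-30 g moves during a
# ~13-week third trimester.

THIRD_TRIMESTER_DAYS = 13 * 7  # roughly 91 days

for total_g in (25, 30):
    mg_per_day = total_g * 1000 / THIRD_TRIMESTER_DAYS
    print(f"{total_g} g total -> ~{mg_per_day:.0f} mg/day in the third trimester")
# -> ~275 and ~330 mg/day, which illustrates why maternal intestinal
#    absorption must become more efficient late in pregnancy
```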
NIH has articulated several challenges to optimal calcium intake (National Institutes of Health 1994). High oxalate and phytate content in a limited number of foods can reduce the availability of calcium from those foods. Other factors, such as drugs (e.g., glucocorticoids), can decrease calcium absorption. Genetic factors may also significantly influence many aspects of calcium metabolism. Vitamin D metabolites enhance calcium absorption. Sources of vitamin D, besides supplements, include sunlight, vitamin D-fortified liquid dairy products, cod liver oil, and fatty fish. Calcium and vitamin D need not be taken together to be effective. Excessive doses of vitamin D may introduce risks such as hypercalciuria and hypercalcemia and should be avoided. High levels of calcium intake have several potential adverse effects, but adaptive mechanisms protect against calcium intoxication at intakes less than approximately 4 g/day. Even at intake levels less than 4 g/day, some individuals may be more susceptible to developing hypercalcemia or hypercalciuria, and high blood calcium levels may produce renal damage. There is also some concern that increased calcium intake might interfere with absorption of other nutrients, such as iron, or of medications. Ingestion of some forms of calcium supplements or milk may reduce iron absorption by as much as 50%. However, calcium formulations that contain citrate and ascorbic acid enhance iron absorption.
Two randomized placebo-controlled trials aimed to decrease lead exposure to the fetus and nursing infant by adding 1,200 milligrams of daily calcium supplementation to the maternal diet during pregnancy (Ettinger et al. 2008) and lactation (Hernandez-Avila et al. 2003). Both studies found that, on average, women in the calcium supplement group had 20% lower maternal blood lead levels than the placebo group at the end of follow-up, suggesting decreased potential for exposure to the fetus and nursing infant. These studies were carried out in Mexico City, Mexico, where the estimated average dietary calcium intake was about 800 milligrams per day, similar to estimates in the United States. NHANES data on dietary intake of selected minerals in 1999-2000 indicate that for women aged 20-39, the average dietary intake of calcium is 797 mg (Ervin et al. 2004). In pregnant women with exposure to lead, high calcium intake (2,000 mg/day) may diminish pregnancy-induced increases in blood lead levels by decreasing intestinal absorption of lead or by decreasing maternal bone resorption (mobilization), thereby reducing exposures to the fetus (Johnson 2001). Thus, the amount of calcium supplement should be adjusted by combining estimated average dietary intake and supplementation in order to achieve the recommended calcium intake of 2,000 mg per day. Care should be taken, as some calcium supplements, particularly those derived from natural sources (bonemeal, dolomite, or oyster shell), have been found to contain high levels of lead (Bourgoin et al. 1993).
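As a worked illustration of this dose adjustment, the short Python sketch below subtracts estimated dietary intake from the 2,000 mg/day target and splits the remainder into divided doses no larger than 500 mg, the per-dose cap suggested in the Nutritional Assessment section later in this chapter. The function name and output format are illustrative assumptions, not a clinical algorithm.

```python
# Illustrative sketch (not clinical software): supplement needed to
# reach the 2,000 mg/day total calcium intake discussed above.

TARGET_CALCIUM_MG = 2000  # recommended total daily intake for this group
MAX_MG_PER_DOSE = 500     # per-dose cap from the dose-splitting guidance

def supplement_plan(dietary_intake_mg: float) -> tuple[float, int]:
    """Return (supplement mg/day, number of divided doses)."""
    gap = max(0.0, TARGET_CALCIUM_MG - dietary_intake_mg)
    doses = -(-int(gap) // MAX_MG_PER_DOSE)  # ceiling division
    return gap, doses

print(supplement_plan(800))   # -> (1200.0, 3): e.g., three 400-mg doses
print(supplement_plan(2100))  # -> (0.0, 0): no supplement needed
```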
# Summary
In summary, calcium supplementation in pregnant women with elevated blood lead levels may be beneficial in reducing blood lead levels. For pregnant and lactating women with BLLs ≥5 µg/dL or a history of lead exposure above background levels, a dietary calcium intake of 2,000 mg daily should be maintained either through diet or in combination with supplements.
# Iron
# Association of dietary iron intake and iron status with lead levels
Both low iron status and elevated lead exposure impair hematopoiesis and intellectual development during gestation and infancy (Black et al. 2008). Combined lead exposure and reduced iron status result in greater impairment of heme biosynthesis than lead exposure alone (Kwong et al. 2004; Mahaffey-Six and Goyer 1973). Such findings have been confirmed in humans as well as experimental animals (Barton et al. 1978; Mahaffey 1983).
Iron absorption is highly regulated physiologically, and absorption is reduced when iron stores are enlarged (Finch 1994). Overall, variation of iron stores within the normal range does not increase lead absorption, but iron deficiency raises the level of divalent metal transporter proteins, which carry lead as well as iron (Morgan and Oates 2002). The ability to control iron absorption through regulation of the molecular mechanisms of iron absorption appears during late infancy (Leong et al. 2003).
Iron deficiency is associated with increases in absorption and deposition of lead (Barton et al. 1978). Several cross-sectional studies in children showed an inverse relationship between iron status and blood lead (Bradman et al. 2003; Choi and Kim 2003; Hammad et al. 1996). Consistent with the pediatric studies, cross-sectional studies of lead-exposed adults have found that lower serum iron and dietary iron intake, as well as increased rates of iron deficiency anemia, were associated with higher blood lead levels, while better iron status was associated with lower blood lead levels (Kim et al. 2003). These studies have generally used dietary intake or laboratory tests (e.g., serum iron or ferritin) to determine iron status.
Few studies have investigated the association between iron intake or iron status and blood lead levels during pregnancy, and these studies do not provide consistent findings. Schell et al. (2003) studied the effect of maternal diet during pregnancy on neonatal blood lead levels. Among the nutrients studied, iron had the largest impact on newborn lead levels: a two-standard-deviation decrease in maternal iron intake (from 30.2 to 11.8 mg/day) was associated with a 0.51 µg/dL increase in newborn lead (29% of the mean newborn lead level of 1.72 µg/dL). More than 50% of mothers in this study had intakes below the recommended dietary allowance for iron in pregnancy. However, data from a nationally representative population survey that included reproductive-aged women (N = 4,394 women aged 20-49 years) found a positive association between dietary iron intake and blood lead levels.
# Dietary iron supplementation and lead
Studies of the association between iron status and blood lead levels found that children with iron deficiency had higher blood lead levels than iron-replete children (Markowitz et al. 1990; Wright et al. 1999). Consequently, many experts recommend that iron supplementation be prescribed only to iron-deficient children, irrespective of lead exposure, and do not recommend universal iron supplementation for the prevention or treatment of lead poisoning in children (Wright et al. 1999).
Iron supplementation has been shown to prevent lead-induced disruption of the blood-brain barrier during rat development (Wang et al. 2007a). The supplemental iron protected the blood-brain barrier from changes in permeability caused by lead (Wang et al. 2007a) and was also protective against lead-induced apoptosis (Wang et al. 2007b). A prospective study of the effects of prenatal lead exposure on child development was carried out in Yugoslavia, with outcomes assessed at age 4 years (Wasserman et al. 1994). Because 34% of the cohort was iron deficient at age 2 years (hemoglobin concentrations <10.5 g/dL and serum ferritin concentrations <12 ng/mL), iron supplements were provided when children were 18 to 38 months of age. Treatment of iron deficiency improved the hematological profile. Low iron status and elevated lead exposure both affect infants' intellectual development, and lead exposure was associated with cumulative losses in cognitive function during the preschool years. Deficits attributable to iron-deficiency anemia at age 2 (Wasserman et al. 1992) appear to have been reversed by age 4 in response to iron supplementation.
The effectiveness of strategies for iron supplementation during pregnancy has been evaluated, indicating that the efficacy of a supplement intervention depends on the following: composition of the diet; presence of a condition, such as pregnancy, that alters iron absorption or loss; composition of the supplement; severity of the iron deficiency at baseline; and the duration of the intervention (Beard 2000). There have been no supplementation trials addressing the effects of iron on lead levels in pregnancy, and the research data are too sparse to determine the relationship between maternal iron intake and maternal or neonatal blood lead levels. However, given that iron deficiency is common among pregnant women (Kraemer and Zimmermann 2007), until further data are available, all women should be evaluated for the adequacy of their iron status and intake and be provided with appropriate nutritional advice and supplements if deficiencies exist.
# Iron status in U.S. women
Pregnancy is the period of greatest risk for developing iron-deficiency anemia (American College of Obstetrics and Gynecology 2008; Beard 2000). The current recommended intakes for iron are 27 mg in pregnant women and 10 mg in lactating women (Institute of Medicine 2001). While there is some uncertainty regarding the most useful indicators of iron status during pregnancy, cell indices (including mean cell volume, percent hypochromic red blood cells, percent reticulocytes, and cellular hemoglobin in reticulocytes) have been recommended as indicators of iron status (Ervasti et al. 2007), but their usefulness in diagnosing iron deficiency longitudinally needs to be confirmed.
Based on NHANES III data, 9% to 11% of adolescent girls and women of childbearing age were iron deficient (Looker et al. 1997). Iron-deficiency anemia was found in 5% of women, which corresponds to an estimated 3.3 million U.S. women. Iron deficiency was more common among women who were from minority, low-income, and multiparous groups (Looker et al. 1997, 1999). Among women ages 19 through 50 years who participated in NHANES during the years 1988 through 1994, 72 ± 4% of pregnant women and 60 ± 4% of lactating women used supplements containing iron (Cogswell et al. 2003). Use of supplements containing iron was associated with a significant reduction in the prevalence of iron deficiency among women ages 19 through 50 years, but the study lacked statistical power to make this assessment for pregnant and lactating women (Cogswell et al. 2003). Low-income women and minority women were less likely to consume supplements (Cogswell et al. 2003). Analyses of data from the Special Supplemental Nutrition Program for Women, Infants, and Children in 12 U.S. states indicated that the prevalence of postpartum anemia was 27%, reaching 48% among non-Hispanic black women (Bodnar et al. 2001).
Using NHANES III data, Bodnar et al. (2001) estimated that, among women with a poverty index ratio >130%, postpartum women (up to 12 months postpartum) had the highest rates of iron deficiency, between 12% and 13%. Mexican-American females have a higher prevalence of iron-deficiency anemia than do non-Hispanic white females (Frith-Terhune et al. 2000).
# Summary
Studies of the effect of iron supplementation in lead-poisoned women are not available. Thus, recommendations for iron supplementation in pregnant and lactating women with elevated blood lead levels should be consistent with those given for pregnancy and lactation generally.
No additional iron supplementation is recommended for women with elevated BLLs. However, the iron status of all pregnant women should be evaluated and supplementation should be provided to correct any deficiency.
# Zinc
Deficiencies of other trace elements, such as zinc, may increase both lead absorption and lead toxicity (Cerklewski and Forbes 1976). Although of substantial importance worldwide (Black et al. 2008), zinc deficiency is not common in the United States (Hotz et al. 2003). Suboptimal zinc status may be caused by lack of zinc in the diet, but is more likely caused by inhibition of zinc absorption by factors such as other trace metals (e.g., iron, copper, lead, cadmium) (Lonnerdal 2000). Serum zinc concentration is influenced by multiple covariables and declines during pregnancy, presumably reflecting the hemodilution that occurs during pregnancy (Hotz et al. 2003). In general, dietary protein is associated with increased zinc absorption, and the U.S. population generally receives sufficient protein from dietary sources. Hence, zinc deficiency is not considered of major importance in altering susceptibility to lead toxicity in the U.S. population.
# Ascorbic Acid (Vitamin C)
Another category of nutrient-lead interactions involves nutrients noted for their antioxidant properties (e.g., ascorbic acid, vitamin E, selenium, thiamine). Antioxidants are involved in the prevention of cellular damage from free radicals (atoms or groups of atoms that can be formed when oxygen interacts with certain molecules). The role of the antioxidant nutrients in altering the outcomes of lead exposure is not well established. Supplementation with vitamin C and other antioxidants (such as vitamin E and selenium) may prevent lead-induced oxidative damage and bolster the body's antioxidant defense system. Unfortunately, the research conducted to date is insufficient in both quality and quantity to evaluate many of these hypotheses.
In addition to its antioxidant properties, vitamin C has been suggested to act as a natural chelating agent that enhances the urinary elimination of lead from the body (Simon and Hudes 1999). Two large cross-sectional studies in adults have found associations between blood lead levels and dietary intake or serum levels of vitamin C (Simon and Hudes 1999). In an analysis of nutritional data provided by over 15,000 adult participants in NHANES III, Simon and Hudes (1999) found that adults in the highest two serum vitamin C tertiles had a 65% to 68% lower prevalence of elevated blood lead levels compared to adults in the lowest tertile (p = 0.03). Another analysis of NHANES III data described the relationship between serum vitamin C and blood lead levels in over 4,000 reproductive-aged women (20-49 years); women with high serum vitamin C levels had 2.5-fold lower odds of having blood lead levels in the highest decile (>4 µg/dL). Among postpartum women in Mexico City, higher intakes of vitamin C were associated with lower levels of breast milk lead (Ettinger et al. 2004).
Studies with human subjects have also found that supplementation with vitamin C reduced lead levels (Dawson et al. 1999). One study randomly assigned nonoccupationally exposed male smokers to three treatment groups (placebo, N = 25; vitamin C 200 mg daily, N = 25; and vitamin C 1,000 mg daily, N = 25). Baseline blood lead levels were low and similar to those reported by other studies of the general population. Supplementation with 1,000 mg of vitamin C (but not 200 mg) reduced blood lead levels by 81% (Dawson et al. 1999). However, according to a literature review by Hsu and Guo (2002), the benefit of vitamin C supplementation is found most consistently in studies of subjects with lower lead levels; human and animal studies involving higher blood lead levels generally show minimal to no improvement with vitamin C supplementation.
The dose of vitamin C needed to lower blood lead levels remains unclear, as a dose-response relationship was not typically observed in these studies. Blood lead levels were lowered only in those studies in which vitamin C intake exceeded nutritionally recommended intakes, and the safety of exceeding these levels is unclear. In summary, the research to date suggests that vitamin C may lower blood lead levels. However, further research is needed to confirm these conclusions, since the studies conducted to date have relatively small numbers of subjects and do not include pregnant or lactating women.
# Vitamin D
A final category of nutritional interactions with lead is interference by lead with formation of metabolites of the nutrient. The primary example of this is the severe compromise in formation of the metabolites of vitamin D (i.e., the endocrine function of vitamin D) as lead exposure increases (Mahaffey et al. 1983; Rosen et al. 1980; Smith et al. 1981). Lead is well established as inhibiting the renal synthesis of 1,25-dihydroxyvitamin D in rats (Smith et al. 1981), chicks (Fullmer 1995), and young children (Rosen et al. 1980). As the body burden of lead increases (across children's blood lead concentrations of 12 to 120 µg/dL), there is a linear decline in 1,25-dihydroxyvitamin D (Mahaffey et al. 1983). To date, this interaction has not been evaluated among pregnant or lactating women.
Important sources of vitamin D are synthesis through sunlight activation of pro-vitamin D present in skin and dietary intake (Holick 2007). Many factors influence the efficiency of cutaneous production of vitamin D. In winter months, the ultraviolet B rays needed to promote cutaneous vitamin D production are absent at latitudes above 35° N (i.e., north of Memphis, Tennessee). Dark-skinned individuals require exposures about 5-10 times as long as light-skinned individuals to achieve similar levels of cutaneous vitamin D production (Holick 2004). Even in summer months, sun exposures outside the peak sun hours of 10:00 AM to 3:00 PM have limited impact on cutaneous vitamin D synthesis (Holick 2003). Application of sunscreen blocks production of vitamin D (Holick 2007). Higher prepregnancy body mass index is associated with lower vitamin D status (Bodnar et al. 2007a). Additionally, women who wear concealing clothing or are house-bound may have low vitamin D. Clinicians should therefore be aware of the potential for multiple risk factors for inadequate vitamin D status among certain recent immigrants who may not receive adequate exposure to sunlight.
# Vitamin D status in U.S. women
The recommended adequate intake of vitamin D in both pregnant and lactating women is 200 IU. However, only about half of U.S. women ages 19-50 years get this amount of vitamin D daily from diet or supplement sources (Moore et al. 2004). The lowest mean dietary intakes of vitamin D in the U.S. population (based on food consumption patterns identified in NHANES III and multiple years of the Continuing Survey of Food Intakes by Individuals) were among teenage girls and women (Moore et al. 2004). The American Academy of Pediatrics (AAP) recommends that all children and adolescents receiving <400 IU/day from foods receive a supplement of 400 IU vitamin D daily (Wagner et al. 2008). In adults, daily supplementation with 400 IU vitamin D increases 25(OH)D by 7.0 nmol/L (Heaney 2003). Supplementation of a pregnant woman with 400 IU vitamin D, as in prenatal vitamins, has little effect on her 25(OH)D concentration (Wagner et al. 2008).
Inadequate vitamin D status is common among women in the United States (Bodnar et al. 2007a,b; Hollis 2005; Hollis and Wagner 2004; Looker et al. 2008; Specker 2004; Specker et al. 1994). There is no universal consensus on adequate levels of 25-hydroxyvitamin D, but 75-80 nmol/L (Calvo and Whiting 2006) is a common benchmark. The AAP recommends that pregnant women maintain a 25(OH)D level of ≥80 nmol/L (32 ng/mL) (Wagner et al. 2008).
Based on NHANES 2000-2004 data, lower-than-optimal serum 25-hydroxyvitamin D levels were frequent (Looker et al. 2008): 49.1% of non-Hispanic white pregnant women, 76.4% of Mexican-American pregnant women, and 92.2% of non-Hispanic black pregnant women had serum 25-hydroxyvitamin D <75 nmol/L, and 8.5%, 41.6%, and 74.6%, respectively, had serum 25-hydroxyvitamin D <50 nmol/L (Looker et al. 2008). Among a sample of pregnant women residing in the northern United States, 25(OH)D levels were ≤80 nmol/L in 83.3% of black women and 47.1% of white women, even though more than 90% of these women used prenatal vitamins (Bodnar et al. 2007b).
# Summary
Because data on the association of lead and vitamin D are limited, no specific recommendation is made for supplementation of vitamin D in lead-poisoned pregnant or lactating women. Adequate levels of vitamin D should be maintained.
# NUTRITIONAL ASSESSMENT AND REFERRALS
All pregnant women should be assessed for the adequacy of their diets and be provided with appropriate nutritional advice and prenatal vitamins. This should be reinforced and maintained throughout pregnancy and lactation. General nutritional guidance is readily available; for example, see Dunlop et al. (2008) and Gardiner et al. (2008). Nutritional assessment of pregnant and lactating women with blood lead levels ≥5 µg/dL should be, at a minimum, consistent with anticipatory guidance, evaluation, and nutritional recommendations for all pregnant and lactating women. However, in pregnant and lactating women with a current or past BLL ≥5 µg/dL, certain nutritional recommendations should be particularly reinforced. Calcium and iron are of particular focus here because both influence blood lead levels and pregnancy outcomes. A balanced diet with a dietary calcium intake of 2,000 milligrams daily should be maintained through diet, supplementation, or a combination of both. Additionally, iron status should be evaluated and supplementation provided in order to correct and prevent any iron deficiency. Anemia is the most easily identifiable indicator of functional iron deficiency. The Institute of Medicine (IOM) recommends starting iron supplementation after 12 weeks of pregnancy with the lowest dose needed. Women with anemia (defined in pregnancy as a hemoglobin level less than 11 g/dL in the first and third trimesters and less than 10.5 g/dL in the second trimester) require higher dosing (Institute of Medicine 1990). Generally, pregnant women with iron-deficiency anemia should be prescribed 60 to 120 mg of iron daily in divided doses. Dosage can be reduced to 30 mg daily once anemia is corrected. Women receiving supplemental iron or calcium should be encouraged to split the dose, taking no more than 500 mg of calcium or 60 mg of iron at one time, as only small amounts of these nutrients can be absorbed at any one time.
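A minimal sketch of the trimester-specific anemia screen described above follows, using the hemoglobin cutoffs quoted from the IOM definition. The function name is an illustrative assumption, and this is not a substitute for clinical laboratory interpretation.

```python
# Illustrative sketch of the trimester-specific anemia definition
# quoted above: hemoglobin <11 g/dL in the first and third trimesters,
# <10.5 g/dL in the second.

def is_anemic(hemoglobin_g_dl: float, trimester: int) -> bool:
    """Apply the pregnancy anemia cutoffs quoted in the text."""
    cutoff = 10.5 if trimester == 2 else 11.0
    return hemoglobin_g_dl < cutoff

# Example: Hb 10.8 g/dL meets the anemia definition in the first
# trimester but not in the second.
print(is_anemic(10.8, 1), is_anemic(10.8, 2))  # -> True False
```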
# Referrals and Resources
Practitioners who interact with pregnant or lactating women should routinely screen for the presence of nutrient deficiencies such as iron deficiency. Although comprehensive assessment of dietary adequacy is not routinely conducted in medical office visits, all pregnant and lactating women should be screened for the adequacy of their diets. If dietary inadequacy is suspected, women should be provided appropriate nutritional advice and referred to resources designed to improve knowledge and/or access. Appendix XIII contains nutritional reference information, including dietary reference intakes (recommended vitamin and element intakes for individuals and tolerable upper intake levels), food sources for key nutrients, dietary assessment tools, and other background information. Resources that might be useful for referrals or interactions with patients are summarized in this section.
# Registered dietitian
A registered dietitian (RD) is a health professional who has received specialty training in food and nutrition. Using various dietary assessment tools, an RD can conduct a thorough assessment of an individual's dietary intake and can identify dietary inadequacies. Local RDs can be located by contacting local health care facilities, such as hospitals or health centers, or by using the Find A Nutrition Professional link of the American Dietetic Association Web site.
# WIC
The Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) is a federal grant program that provides nutritious foods, nutrition education, and referrals to low-income (at or below 185% of the U.S. poverty income guidelines) pregnant and lactating women (in addition to infants and children) who are at nutritional risk. The two major types of nutrition risk recognized for WIC eligibility are medically based risks (such as anemia, underweight, overweight, history of pregnancy complications, or poor pregnancy outcomes) and dietary risks (such as failure to meet the dietary guidelines or inappropriate nutrition practices).
In most WIC state agencies, WIC participants receive checks or vouchers each month to purchase specific foods designed to supplement their diets. The foods provided are high in one or more of the following nutrients: protein, calcium, iron, and vitamins A and C. These are the nutrients frequently lacking in the diets of the program's target population. Detailed information about WIC, including eligibility criteria, contact information, and instructions for applying, can be found on the WIC Web site.
# Supplemental Nutrition Assistance Program (formerly the Food Stamp Program)
The Supplemental Nutrition Assistance Program (SNAP) is a federal program that provides low-income households with subsidies they can use like cash at most grocery stores. The assistance can be used to buy breads and cereals; fruits and vegetables; protein foods such as meat, fish, and poultry; and dairy products. For additional information, call 1-800-221-5689 or visit the SNAP Web site.
# MyPyramid
MyPyramid, the U.S. Department of Agriculture's food guidance system, translates the Dietary Guidelines for Americans into individualized eating plans and interactive tools for planning and assessing food choices, including guidance tailored for pregnant and breastfeeding women, available on the MyPyramid Web site.
# CHAPTER 8. CHELATION OF PREGNANT WOMEN, FETUSES, AND NEWBORN INFANTS
# Key Recommendations for Chelation Therapy
- Chelation therapy should be considered for pregnant women with confirmed blood lead levels ≥45 µg/dL on a case-by-case basis, in consultation with an expert in lead poisoning.
- Pregnant women with confirmed BLLs ≥45 µg/dL should be considered as having high-risk pregnancies and managed in consultation with an expert in high-risk pregnancy.
- Pregnant women with life-threatening lead encephalopathy should be chelated regardless of trimester.
- Insufficient data exist regarding the advisability of chelation for pregnant women with BLLs <45 µg/dL.
- Infants (0-6 months of age) with a confirmed BLL of ≥45 µg/dL should be considered as candidates for chelation in consultation with an expert in pediatric lead chelation therapy.
- Before considering chelation therapy for a pregnant woman (or infant), blood lead levels should be repeated and confirmed using an additional venous blood lead sample collected within 24 hours.
- Chelation therapy must occur in a lead-safe environment; therefore, prior to initiating chelation therapy, the patient should be removed from further lead exposure (see Chapter 6).
# INTRODUCTION
There is a potential role for chelation therapy to treat pregnant women and newborns, and, in some cases, chelation may be life-saving. However, the scientific evidence to support its use is very limited, and chelation during pregnancy and in the early postpartum period should be initiated only in consultation with an expert in the treatment of lead poisoning.
# OVERVIEW OF CHELATION
Chelation therapy utilizes the chemical characteristics of a chelating agent to remove lead from participation in biological reactions in the body by binding the agent with the metal (lead) to form a chelate. A chelate is defined as a "complex formation involving a metal ion and two or more polar groupings of a single molecule" (Stedman's 2008). Note that this definition does not indicate the fate of the chelated metal: possibilities include excretion of the chelate, persistence in the tissue where the binding occurred, or redistribution to other tissues. Ideally, the drug should effectively increase lead excretion, be easily administered, be affordable, and be safe. The consequences of lead removal should be to halt further toxicity and to reverse previous lead effects (Markowitz 2000).
# DRUGS AVAILABLE IN THE UNITED STATES
There are four drugs (CaNa2EDTA, DMSA, BAL, PCA) in use for lead chelation in the United States (Table 8-1), and others are in use elsewhere. None of these drugs binds lead exclusively, so some loss of essential elements also occurs. The toxicity profiles of these drugs differ. Two of the drugs are administered orally (DMSA, PCA) and two must be given parenterally (BAL intramuscularly only; CaNa2EDTA intramuscularly or intravenously). The latter two require expert nursing care and are always used in the hospital; the former two are used in both inpatient and outpatient settings. All of these drugs increase lead excretion, primarily through the kidneys (Aposhian 1982; Graziano et al. 1999). There may also be tissue redistribution during or as a consequence of chelation.
The introduction of chelating agents for the treatment of severe lead poisoning (blood lead ≥70 µg/dL) was associated with a marked decline in lead-related mortality in children, from 30% to <1% (Chisolm 1968). Chelation treatment at lower blood lead levels, where mortality is not a major concern, is associated with a fall in blood lead levels and an improvement in biochemical markers of lead toxicity, such as erythrocyte protoporphyrin (EP) levels and delta-aminolevulinic acid dehydratase (ALAD) activity (Graziano et al. 1991; Piomelli 1996). Depending on the amount of lead in the body prior to chelation, the effect of treatment on blood lead is generally temporary, with levels increasing within 2 weeks after the conclusion of a course of treatment in many patients. The effect on the biochemical markers of toxicity is disparate: ALAD activity declines as blood lead rebounds, whereas EP levels tend to fall if no further lead absorption occurs, despite the rebound in blood lead. All of the drugs increase the excretion of essential metals, but to differing degrees; DMSA appears to be the most specific for binding heavy metals such as lead and mercury. The excessive loss of essential metals has been postulated to account for the observed teratogenicity associated with all of the agents tested in animal studies.
# Utility of These Drugs in Other Populations
Candidates for chelation therapy differ by age group. Previous CDC guidelines (1991) established a blood lead level of ≥45 µg/dL as the indication for treatment of children regardless of symptoms. At these levels, gastrointestinal symptoms may occur in a large number of children; biochemical toxicity is demonstrable in the majority of children (elevated EP level, decreased ALAD activity); and, subclinically, cognitive scores are likely lower. Additionally, and of importance, such children are very likely to excrete large amounts of lead in response to chelation treatment, much greater amounts than they would spontaneously excrete over periods of time comparable to a course of chelation. However, the amount excreted is only a small fraction of the total lead in the body. Though symptoms and biochemical markers of toxicity may improve after chelation, there is no documentation of cognitive improvements in nonencephalopathic children. For blood lead levels <45 µg/dL, chelation treatment can also lower blood lead levels and improve biochemical markers of toxicity temporarily. However, there is no evidence that lead excretion is substantially increased for the majority of these children. A randomized placebo-controlled trial of succimer for children with initial BLLs of 20-44 µg/dL also failed to demonstrate any difference in mean cognitive scores when tested 2 years later. There are no published guidelines identifying a specific blood lead level as requiring chelation therapy in adults, nor is there a universal protocol for which agents to use, dose, or duration of treatment.
# CONCERNS ABOUT CHELATION THERAPY DURING PREGNANCY
Consideration of chelation therapy during pregnancy requires identification of the targeted beneficiary and estimation of the anticipated benefits and risks. The limited availability of research findings on comparable patients means that extrapolation from data on other types of patients is necessary to make treatment decisions. Since the correlation between maternal and newborn blood lead levels, as measured by cord and maternal blood lead levels determined at delivery, is high, maternal blood lead level can be used as a proxy for the fetus's blood lead level. Therefore, if the known risks and benefits of chelation treatment for lead-poisoned children are extrapolated to fetuses, then a blood lead level ≥45 µg/dL in the mother's blood would trigger chelation treatment of the fetus in situations where the fetus is the intended beneficiary of the treatment. If the intended beneficiary of chelation therapy is the pregnant woman, then there are insufficient clinical data to guide decisions about treatment by blood lead level in the absence of symptoms.
No chelation-attributable toxicities have been reported in the existing published case reports. However, very limited information is available to understand any potential short- or long-term effects. Use of chelating agents should therefore only be considered in consultation with experts in lead poisoning and high-risk pregnancies.
# CLINICAL EVIDENCE IN PREGNANCY AND IN THE NEWBORN
The literature search identified only case reports of chelation therapy during pregnancy (see Table 8-2) and the early postpartum period. In general, maternal blood lead levels declined after a course of chelation, and neonatal blood lead levels at birth were also lower than peak maternal levels during the pregnancy. However, very limited information is available to determine whether any long-term benefit is derived from in utero treatment or whether adverse effects occur from chelation. In the few case reports, babies did not appear to have gross developmental delays.
The women in the case reports were selected for chelation therapy based on their blood lead levels, with the lowest pretreatment level reported as 44 µg/dL, although in that case a prior blood lead of 62 µg/dL had been observed. All women appear to have been treated during the second half of pregnancy. All but one of the women were treated with varying amounts of CaNa2EDTA for varying durations. A single patient also received BAL in addition to CaNa2EDTA, and a single case reported the exclusive use of DMSA. In all cases, CaNa2EDTA therapy was associated with a decline in maternal blood lead levels. There was no change in maternal blood lead in the one case of treatment with DMSA (an 18-day course); however, that patient was treated as an outpatient without apparent oversight for either compliance or ongoing lead exposure. In all but one case, a healthy newborn was delivered. The exception occurred in a case where the maternal blood lead before treatment was 104 µg/dL. The woman received CaNa2EDTA and BAL, and the 1.6 kg infant was born prematurely after antepartum hemorrhage 36 hours into treatment. This baby was later noted to have developmental delay and a hearing deficit. No consistent pattern in cord blood lead levels was apparent in the few cases where they were reported, and the interval between chelation and delivery varied from months to minutes. Cord blood lead levels were higher than maternal blood lead in the case treated with DMSA and in that of the sick premature infant described above; in the other cases, cord blood lead levels were lower than maternal prechelation levels. In several reports, chelation treatment was not initiated until shortly before or soon after delivery and was directed toward the newborns. Various drugs at full dosages have been used singly or in combination: CaNa2EDTA alone, CaNa2EDTA and BAL, CaNa2EDTA and DMSA, and DMSA alone. In general, chelation therapy was well tolerated by the infants.
Exchange transfusion has been used, in combination with chelation therapy, to successfully lower blood lead levels in neonates (Mycyk and Leikin 2004). In one case report, after a single-volume exchange transfusion, an infant with a cord blood lead level of 100 µg/dL was chelated on day 2 with a combination of BAL and CaNa2EDTA for 5 days, at the end of which the blood lead was 37 µg/dL (Mycyk and Leikin 2004). Chelation was continued for 19 days with DMSA, at the end of which the infant's blood lead was 38 µg/dL. Both the exchange and chelation treatments were described as "well tolerated." Of particular interest in this case is that the maternal blood lead at preconception was 117 µg/dL and declined to 72 µg/dL by the third trimester. The mother was not chelated during her pregnancy. The baby was delivered at 40 weeks with a blood lead level of 100 µg/dL, weighed 3.7 kg, and achieved normal developmental milestones at 1 month of age. Another case report describes a double-volume exchange transfusion plus 5 days of intravenous CaNa2EDTA, in which the infant's blood lead of 114 µg/dL fell to 12.8 µg/dL immediately following the exchange transfusion. Caution is advised, however, as Bearer et al. (2000, 2003) report on blood transfusions in newborn premature infants as an unexpected source of lead exposure. The relative benefits and risks of chelation versus exchange transfusion have not been investigated.
# SUMMARY AND RECOMMENDATIONS REGARDING CHELATION THERAPY
While chelation may be beneficial, especially in protecting the mother with very elevated blood lead levels, given the lack of controlled studies and the paucity of even published case reports or series, chelation therapy should be undertaken only with advice from experts in this field. Such decision making should weigh the lack of definitive evidence of safety for the fetus (especially in the first trimester) against the extensive safety profile and experience with these drugs in children and adults. Recommendations for chelation therapy prenatally and postnatally are presented below:
# Prenatal Chelation of the Mother
BLLs ≥70 µg/dL may result in significant maternal toxicity, and chelation therapy should be considered, regardless of trimester, in consultation with experts in the management of lead poisoning, high-risk pregnancies, and neonatology. Lead poisoning may be life-threatening at levels greater than 100 µg/dL, though many cases have been described in which patients with such levels were asymptomatic. Encephalopathic pregnant women should be chelated regardless of trimester.
Pregnant women with confirmed BLLs ≥45 µg/dL (repeated on at least two venous blood samples collected within 24 hours) may be considered for chelation therapy and should be managed in conjunction with experts in high-risk pregnancy and lead poisoning. Immediate removal from the lead source is still the first priority, and, in some cases, pregnant women may require hospitalization. When chelation is being considered, it should be performed in an inpatient setting only, with close monitoring of the patient and in consultation with a physician with expertise in the field of lead chelation therapy. Data regarding the reproductive risk associated with chelation during pregnancy are sparse. Most case reports of infant outcomes describe the use of chelating agents after the first trimester (see Table 8-2). Reserving the use of chelating agents for later in pregnancy is consistent with the general concern about the use of unusual drugs during the period of organogenesis (National Research Council 2000). However, severe maternal lead intoxication, such as encephalopathy, warrants chelation regardless of the stage of pregnancy.
# Neonatal Chelation of the Infant
Chelation should be considered for neonates and infants less than 6 months of age with a confirmed BLL ≥45 µg/dL, in consultation with a pediatric expert in lead chelation therapy. The limited data published suggest that toxicities for 0- to 6-month-olds are no different from those of 6- to 12-month-olds. Chelation treatment must occur in an environment free of lead hazards; therefore, prior to initiating chelation therapy, the patient should be removed from further lead exposure. Very limited data are available on the use of exchange transfusion as an alternative in this age group.
# Chelating Agents
Three of the four available chelating agents (CaNa2EDTA, BAL, DMSA) have been used during pregnancy and may be considered. Data for penicillamine use in pregnancy are unavailable. (This drug is FDA-approved for use in children, but its use in pregnancy is not approved.) The most experience, limited as it is, has been with CaNa2EDTA, which may be used intravenously at regular doses for 5 days.
# CHAPTER 9. BREASTFEEDING
Breastfeeding is the optimal infant feeding practice; other infant feeding practices carry risks. With regard to short-term risks, lack of breastfeeding is associated with increases in common childhood infections, such as diarrhea (Chien and Howie 2001) and ear infections (Ip et al. 2007), with potentially serious complications such as meningitis, dehydration, and hearing impairment. Lack of breastfeeding also increases the risk for some relatively rare but severe infections and diseases, such as severe lower respiratory infections (Bachrach et al. 2003; Ip et al. 2007), leukemia (Ip et al. 2007; Kwan et al. 2004), and, especially important for preterm infants, necrotizing enterocolitis (Ip et al. 2007). The risk of hospitalization for lower respiratory tract disease in the first year of life is more than 250% higher among babies who are formula fed compared with those who are exclusively breastfed for at least 4 months (Bachrach et al. 2003). Furthermore, the risk for Sudden Infant Death Syndrome is 56% higher among formula-fed versus breastfed infants (Ip et al. 2007). The Agency for Healthcare Research and Quality (2007) report also concludes that formula feeding has long-term health effects related to increased risks for certain chronic diseases and conditions, such as type 2 diabetes (Owen 2006) and childhood obesity (Arenz et al. 2004), both of which have increased among U.S. children over time.
Decisions made with regard to breastfeeding by a mother whose blood lead levels exceed background levels should be based on scientific evidence suggesting undue risk for the child. Scientific observations have consistently shown that biologically significant elevations in milk lead concentration do not occur in lactating women at the blood lead concentrations typical of women with long-term residence in developed countries.
Only a small number of American women will meet the criteria to defer breastfeeding, though more will be subject to additional follow-up out of an abundance of caution. Lead transfers from maternal plasma to breast milk at roughly similar concentrations. This chapter describes recommendations for breastfeeding by women with blood lead levels above background levels and summarizes the scientific evidence supporting these recommendations.
# INTRODUCTION
The overall goal in counseling a woman on whether or not to breastfeed is to provide the best possible nutritional and nurturing environment for the infant. Any decision either not to initiate or to discontinue breastfeeding must be made only after careful consideration of all the factors involved. The initial decision-making process should include a thorough discussion between the mother and her health care provider of the factors to be considered, and this discussion should ideally take place before the baby is born. Many factors have an impact on whether or not a woman with a blood lead level ≥5 µg/dL chooses to breastfeed her child. Many of these factors are poorly quantified and others are not readily quantifiable. Thus, a detailed and balanced discussion is essential.
# THE IMPORTANCE OF BREASTFEEDING
Due to the unique nutritional characteristics of human milk, breastfeeding is understood to be the optimal mode of nutrient delivery to term infants. The U.S. Department of Health and Human Services' Blueprint for Action on Breastfeeding (U.S. Department of Health and Human Services 2000) emphasizes the value of breastfeeding, as does the AAP (American Academy of Pediatrics 2005). Human breast milk is specific to the needs of the human infant. It provides the ideal nutrients for human growth and development in the first year of life, in a form that is readily transferred into the infant's bloodstream. Human milk also protects the breastfed infant against certain common infections and reduces the incidence of certain chronic diseases as well as symptoms of allergy (U.S. Department of Health and Human Services 2000). Women who breastfeed experience less postpartum bleeding, earlier return to prepregnancy weight, and a reduced risk for ovarian cancer and premenopausal breast cancer (U.S. Department of Health and Human Services 2000). Breastfeeding also provides the added benefit of mother-child bonding during nursing sessions.
The decision to breastfeed in the presence of a possible contraindication should be made on an individual basis, considering the risk of the complication to the infant and mother versus the tremendous benefits of breastfeeding (Lawrence 1997;Lawrence and Lawrence 2005).
The current AAP statement on breastfeeding does not address the issue of breastfeeding by mothers with lead exposure above background levels (American Academy of Pediatrics 2005). An earlier statement specifically addressing the transfer of toxic environmental agents through breast milk and the risk of infant exposure to environmental toxicants by this route suggests that before advising against breastfeeding, the practitioner should weigh the benefits of breastfeeding against the risks of not receiving human milk (American Academy of Pediatrics 2001).
Specifically with regard to lead, a technical information bulletin published by the Health Resources and Services Administration in 1997 held that breastfeeding is not contraindicated unless the concentration of lead in maternal blood exceeds 40 µg/dL (Lawrence 1997). This recommendation was one small section of a larger review of the evidence then available on breastfeeding benefits and contraindications. It has not been updated since publication.
# LEAD IN BREAST MILK
Since maternal blood is the medium from which lead is transferred to breast milk and ultimately to the nursing infant, the relationship of lead in maternal blood to lead in breast milk is of key importance. Early studies supported the belief that milk lead levels were one-tenth to one-fifth the levels of lead in maternal whole blood. These high values were due in part to contamination and analytical inaccuracies in the laboratory measurement of lead in breast milk. (See Chapter 3 for discussion of issues associated with laboratory analysis of lead in human milk.)
Recent, carefully conducted studies of lead in breast milk consistently show breast milk lead to maternal blood lead ratios of approximately 3% or less; that is, a milk lead concentration of 3 µg/dL (or 30 µg/L) would be associated with a maternal blood lead concentration of 100 µg/dL, and a milk lead concentration of 0.3 µg/dL (3 µg/L) would be associated with a maternal blood lead concentration of 10 µg/dL. One study found that the breast milk lead to blood lead ratio was less than 3% in 15 adult female immigrants to Australia with blood lead concentrations up to 34 µg/dL. Li et al. (2000) evaluated 119 nonoccupationally exposed women in Shanghai, reporting a mean maternal blood lead concentration of 14.3 µg/dL and a mean milk lead to blood lead ratio of 3.9%. Counter et al. (2004) reported ratios of milk lead concentration to maternal blood lead concentration in 13 nursing mothers from Ecuadorian Andean villages. The ratios ranged from 0.4% to 3.3% in 12 of the subjects, appearing to increase with increasing blood lead level; the thirteenth subject, with a blood lead concentration of 27.4 µg/dL, had a milk lead to blood lead ratio of 7.5%. Another study showed that breast milk lead was significantly correlated with maternal blood lead at one month postpartum in 310 lactating women in Mexico City; the ratio of the geometric mean milk lead concentration to the geometric mean maternal blood lead concentration was 0.013, or 1.3%, and the highest observed blood lead concentration was 29.9 µg/dL.
There is limited evidence that, with closely spaced multiple pregnancies, baseline maternal blood lead concentrations are lower and the increases in maternal blood lead concentrations occurring during late pregnancy and lactation are reduced relative to those in the first pregnancy. However, for most women in the United States, more than 98% of whom have blood lead levels <5 µg/dL, this has no practical implications.
# INFANT LEAD EXPOSURE FROM BREAST MILK
Limited experimental observations suggest that breast milk lead has a relatively small impact on infant blood lead. It is generally agreed that biologically significant elevations in milk lead concentration do not occur in lactating women at the blood lead concentrations typical of women with long-term residence in developed countries. Other sources of lead also contribute to the nursing infant's blood lead level. Lead isotope analyses have indicated that the principal source of lead exposure in very young children, irrespective of whether they are breast- or bottle-fed, is hand-to-mouth activity; however, the relative importance of early hand-to-mouth activity depends on the child's environment. Neonatal bone turnover is another potential source of lead in infant blood that should be factored into expectations about infant blood lead levels. Bone turnover is very high in the newborn because both bone accretion and bone loss during reshaping of the growing bone are high. The rapid turnover of bone lead is reflected in a short blood lead half-life in very young children compared with older children, with bone turnover varying by age rather than by the length of exposure (O'Flaherty 1995).
Although levels of lead in breast milk are generally low, they can influence infant blood lead levels over and above the influence of maternal blood to which the infant was exposed in utero. In a large-scale study of breast milk and infant blood lead levels, milk lead was found to account for 10% of the variance in 6-month blood lead, and there was a linear dose-response relationship between breast milk lead and infant blood lead at age 6 months (Rabinowitz et al. 1985). In another study, breast milk lead accounted for 12% of the variance of infant blood lead levels at 1 month of age, and levels of breast milk lead were significantly correlated with infant blood lead (Ettinger et al. 2004b).
It is possible to estimate milk lead concentrations associated with various maternal blood lead concentrations.
As discussed above, the most probable value of the maternal milk lead to blood lead ratio is substantially less than 3%. Table 9-2 illustrates calculated milk lead concentrations at various maternal blood lead concentrations, assuming breast milk lead concentration to be 3% of maternal blood lead concentration. Because 3% is roughly ten times the most probable ratio, this calculation can be thought of as providing an upper limit on the milk lead associated with a given maternal blood lead. It also partly offsets the effect of binding of lead to milk casein at very low concentrations.
From the breast milk lead, that portion of the nursing infant's blood lead originating from maternal milk can be estimated. One study reported that an increase of about 2 µg/L in breast milk lead was associated with a 0.82 µg/dL increase in the blood lead of breast-fed infants at 1 month of age, adjusting for cord blood lead, infant weight change, and reported breastfeeding status. Based on this observed relationship, the increase in infant blood lead concentration associated with different maternal blood lead concentrations can be estimated (Table 9-3). Based on this calculation, the predicted contribution of breast milk lead to infant blood lead at 1 month of age would be about 3.7 µg/dL at a maternal blood lead concentration of 30 µg/dL, 2.5 µg/dL at a maternal blood lead concentration of 20 µg/dL, or 0.25-0.5 µg/dL at maternal blood lead concentrations of 2-4 µg/dL. This calculation is based on a data set whose values did not exceed 30 µg/dL; its application outside this range represents an extrapolation and becomes progressively less certain as maternal blood lead increases above 30 µg/dL. These calculations are supported by observational data only in infants about 1 month old, but they do not suggest undue concern for lead exposure of nursing infants at maternal blood lead and breast milk lead concentrations typical of those found in the United States.
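The two-step estimate just described (maternal blood lead to milk lead via the 3% upper-bound ratio, then milk lead to infant blood lead via the observed 0.82 µg/dL per 2 µg/L slope) can be sketched in a few lines of code. This is a minimal illustration of the arithmetic only, not a clinical tool; the function and constant names are ours, not from the guidelines.

```python
# Minimal sketch of the two-step estimate described in the text.
RATIO_MILK_TO_BLOOD = 0.03      # assumed upper-bound milk:blood ratio (3%)
UG_PER_DL_TO_UG_PER_L = 10      # 1 ug/dL = 10 ug/L
SLOPE_UG_DL_PER_2_UG_L = 0.82   # observed infant BLL rise per 2 ug/L milk lead

def milk_lead_ug_per_l(maternal_bll_ug_per_dl: float) -> float:
    """Upper-bound breast milk lead (ug/L) at a given maternal BLL (ug/dL)."""
    return maternal_bll_ug_per_dl * RATIO_MILK_TO_BLOOD * UG_PER_DL_TO_UG_PER_L

def infant_bll_contribution(maternal_bll_ug_per_dl: float) -> float:
    """Estimated contribution of breast milk lead to infant BLL (ug/dL).

    Maternal BLLs above ~30 ug/dL fall outside the observed data range,
    so results there are extrapolations.
    """
    return milk_lead_ug_per_l(maternal_bll_ug_per_dl) / 2.0 * SLOPE_UG_DL_PER_2_UG_L

for bll in (2, 4, 20, 30):
    print(f"maternal BLL {bll:>2} ug/dL -> ~{infant_bll_contribution(bll):.2f} ug/dL")
```

Run as written, the sketch reproduces the values cited above (about 0.25, 0.49, 2.46, and 3.69 µg/dL).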
Evidence also suggests that the breast milk lead to maternal blood lead ratio may increase in a nonlinear fashion when maternal blood lead concentrations exceed about 40 µg/dL. This hypothesis is supported both by observational data on women with very high breast milk lead concentrations (Li et al. 2000; Namihara et al. 1993) and by studies on the components of the blood (e.g., plasma) and breast milk as they relate to maternal lead exposure (Manton et al. 2001; O'Flaherty 1993; Schutz et al. 1996). A finding that breast milk contains proportionally more maternal lead at higher blood lead levels suggests possible risk associated with breastfeeding at maternal blood lead levels above 40 µg/dL. Epidemiological evidence is not entirely consistent about the extent to which maternal blood lead concentrations increase during lactation.
The breastfeeding recommendations developed herein are intended for women living in the United States. Insufficient data are available to guide clinical decisions regarding women with extremely high breast milk lead concentrations or women living or working in lead-polluted areas outside the United States. Some evidence suggests different rates of transfer of lead into breast milk for maternal blood lead concentrations less than and greater than about 40 µg/dL (Li et al. 2000), but available human data are insufficient to make reliable estimates.
# RECOMMENDATIONS FOR BREASTFEEDING
On the basis of the health and developmental benefits to infants of breastfeeding and consideration of the available research on the contribution of breast milk lead to infant blood lead, CDC has developed clinical guidance for breastfeeding by women exposed to lead. Initial criteria for breastfeeding are maternal blood lead levels, but ongoing monitoring of infant blood lead levels (described in Chapter 5) provides the additional feedback loop needed for clinical decision making about continuing breastfeeding. Specifically, a rise in infant BLL of 5 µg/dL or more is regarded as clinically significant and affects breastfeeding recommendations.
Testing recommendations for women with BLL ≥5 µg/dL identified during pregnancy or at delivery are presented in Table 9-1 and for infants in Tables 5-1 and 5-2. Measurement of breast milk lead is not recommended given current laboratory methods and the availability of maternal blood lead as a proxy.
An important practical challenge to clinicians in implementing these recommendations is ensuring that the recommended laboratory and other findings are entered into both the mother's and the infant's medical records in a timely fashion, as noted in Chapter 5. For instance, the mother's initial and sequential blood lead levels should be in the infant's chart. Without these data, clinicians lack the information needed to provide appropriate and real-time guidance about breastfeeding.
# Initiating Breastfeeding
Initiation of breastfeeding should be encouraged for all mothers with blood lead levels <40 µg/dL, with follow-up recommendations varying by blood lead level. Initial maternal BLLs <20 µg/dL are unlikely to be associated with a detectable increase in infant blood lead, even using a ratio of breast milk to maternal blood ten times the most likely value, as in the above calculations. In women with BLLs of 5-19 µg/dL, an initial infant blood lead level is warranted to establish a baseline.
At maternal blood lead levels of 20-39 µg/dL, data do not exist to weigh accurately the risks of lead exposure from breast milk against the benefits of breastfeeding. Thus, a prudent course of action is for these women to initiate breastfeeding accompanied by sequential mother and infant blood lead levels to monitor trends, so that adjustments can be made if indicated. Mothers with BLLs of 20-39 µg/dL should be retested 2 weeks postpartum and then at 1- to 3-month intervals, depending on the direction and magnitude of the trend in infant blood lead levels (Table 9-1).
CDC compared the adverse health and developmental effects associated with lead exposure to those associated with not breastfeeding and, based on the available information, determined that at maternal blood lead levels ≥40 µg/dL the adverse developmental effects of a ≥5 µg/dL increase in an infant's blood lead level were of greater concern than the risks of not breastfeeding until the maternal blood lead level dropped below 40 µg/dL. Mothers with blood lead levels ≥40 µg/dL should not initiate breastfeeding immediately. They should be advised to pump and discard their breast milk until their blood lead levels drop below 40 µg/dL. In such cases, infants' blood lead levels should be monitored after the initiation of breastfeeding. This recommendation reaffirms the prevailing guidance about deferring breastfeeding at maternal BLL ≥40 µg/dL.
# Continuing Breastfeeding
All infants born to mothers with BLL ≥5 µg/dL should have blood lead tests at birth and be followed according to the schedule in Chapter 5. Breastfeeding should continue for all infants with BLLs below 5 µg/dL or trending downward.
For breastfed infants whose blood lead levels are rising or failing to decline by 5 µg/dL or more, environmental and other sources of lead exposure should be evaluated. If no external source is identified, and maternal BLLs are >20 µg/dL and infant BLL is ≥5 µg/dL, then breast milk should be suspected as the source, and temporary interruption of breastfeeding until maternal blood lead levels decline should be considered. There are insufficient data to estimate how many mother-child pairs would meet these criteria, but anecdotal evidence suggests that it would apply to a very small number in the United States.
Follow-up testing of women with BLL ≥5 µg/dL identified during pregnancy or at delivery should follow the schedule outlined in Table 9-1. This should include women with known risk factors that are not controlled, regardless of the BLL of the women or their infants.
# Lead in Infant Formula
Since breast milk may not be provided exclusively, for an extended period of time, or even at all, many infants are likely to be nourished, at least in part, by commercially available infant formula. Therefore, it is important to characterize the contribution of non-breast milk sources to total potential lead exposure from dietary intake in infants and young children.
Over the past several decades, the FDA and other federal agencies have worked to reduce dietary and other lead exposures of the general population, and in particular of vulnerable subpopulations such as infants, children, and pregnant women. Lead-lined and lead-soldered cans are no longer used for commercial infant formula produced in the United States, and the most recent Total Diet Study confirms that currently marketed milk-based ready-to-feed infant formulas in the United States contain no appreciable amounts of lead. Only one of 88 samples of high- and low-iron infant formula (a high-iron sample) contained any measurable lead (trace lead detected = 0.007 mg/kg) (U.S. Food and Drug Administration 2007).
To the extent that lead can be found in infant formula, the relative bioavailability of such lead may be less than that of lead in breast milk. For example, it has been documented that iron is more readily absorbed from breast milk than from infant formula (Lonnerdal 1985). Rabinowitz et al. (1985) found breast milk to be the strongest correlate of 6-month blood lead levels, while formula lead correlated poorly with infant blood lead levels. However, one study showed that the contribution of formula to infant blood lead varied from 24% to 68% in exclusively formula-fed infants. The same investigators later estimated average daily intake of lead at age 6 months to be 0.73 µg for infants in their Australian study group fed exclusively breast milk (subjects = 17; observations = 78) and 1.8 µg for infants fed exclusively infant formula (subjects = 11; observations = 42). Another study also found that infants fed exclusively with breast milk had lower blood lead levels than those fed partially with breast milk, suggesting that formula or other dietary sources may contribute more lead to infant diets than breast milk does. In that study, an interquartile range increase in breast milk lead (~2 ppb) increased infant blood lead by 25%, or approximately 1 µg/dL.
There are published reports of lead entering formula through lead in tap water used to prepare infant formula (Shannon and Graef 1989) or the use of leaded storage containers (Shannon 1998). For instance, in a convenience sample of home-prepared reconstituted infant formula collected in a pediatrics department in metropolitan Boston, two of forty samples were found to have lead concentrations above 15 µg/L, which is the EPA lead action level for water. It is recommended that infant formula requiring reconstitution be made only with bottled or filtered tap water, or with cold water after flushing the tap for at least 3 minutes before use. Water authorities, in conjunction with state and local public health authorities, should consider issuing recommendations for the use of tap water in preparing infant formula based on lead levels in local tap water.

Table 9-1. Follow-up blood lead testing schedule for lactating women (a,b,c,d,e)

| Maternal BLL (µg/dL) | Follow-up maternal blood lead testing |
| --- | --- |
| 20-39 | 2 weeks postpartum and then at 1- to 3-month intervals, depending on the direction/magnitude of the trend in infant BLLs. |
| ≥40 | Within 24 hours postpartum and then at frequent intervals, depending on clinical interventions and the trend in BLLs. Consultation with a clinician experienced in the management of lead poisoning is advised. |

a If a woman becomes pregnant while lactating, she should be followed according to the schedule for pregnancy.
b Care should be coordinated between mother and infant in the postpartum period.
c Last blood lead level measured in pregnancy or at delivery (maternal or cord BLL).
d A venous blood sample is recommended for maternal blood lead testing.
e The infant should be monitored according to the schedules in Tables 5-1 and 5-2.

Footnotes to Tables 9-2 and 9-3 (table bodies not reproduced here):
b Numerically equal to maternal plasma lead concentration, but expressed per liter rather than per deciliter.
c Assuming the upper ingestion limit of 1,000 mL milk per day at these ages (U.S. Environmental Protection Agency 1997).
d Assuming the upper ingestion limit of 900 mL milk per day at this age (U.S. Environmental Protection Agency 1997). Calculated based on the observation that a 2 µg/L increase in breast milk lead is associated with an increase of 0.82 µg/dL in the blood lead of the nursing infant.
e Extrapolation beyond the range of observed data (maternal BLLs ranged from 1-30 µg/dL).
# CHAPTER 10. RESEARCH, POLICY, AND HEALTH EDUCATION RECOMMENDATIONS
# INTRODUCTION
The clinical and public health recommendations presented throughout these guidelines are based on current research findings where available; however, research has not been published to provide definitive guidance on all issues of interest. On other topics, the research base is clear, but existing policy is not consistent with research findings. For some topics, existing training and continuing education mechanisms are not working to deliver key findings to health professionals in critical fields, like obstetrics, pediatrics, family practice, and nursing. Together, these gaps in research, policy, and health education create an infrastructure that fails to reinforce optimal clinical and public health practice. This chapter presents specific research, policy, and health education needs identified by CDC to improve current service delivery and to inform development of future practice guidelines and policy with respect to lead exposure above background levels in pregnancy and lactation.
# RESEARCH NEEDS

# Biomedical Research
# Long-term prospective studies of the effect of lead exposure during fetal development and disease risks later in life
Given the immaturity of the blood-brain barrier in the developing nervous system, children might be more susceptible to morphologic changes in the nervous system during the prenatal and early postnatal periods. Further research is needed on
- Lead kinetics across the placenta and in breast milk, and their relationship to development and disease risk across the lifespan for children exposed to lead in utero or as nurslings.
- Specific health outcomes of interest, other than neurodevelopmental effects, such as pregnancy outcome and cardiovascular disease in adulthood following in utero exposure.
# Follow-up studies of pregnancy outcomes and infant development in women with a history of lead exposure above background levels during pregnancy
Research is needed to better characterize health outcomes for mothers and infants associated with maternal lead exposure during pregnancy, both at the low blood lead elevations typical for the U.S. population of women of childbearing age and in more heavily exposed subgroups. Research is needed on
- Specific health outcomes of interest, including pregnancy-related hypertension, low birth weight, and preterm birth.
- Possible association between maternal lead exposure and spontaneous abortion, particularly at BLLs <30 µg/dL.
- Epidemiology of lead exposure during pregnancy and health outcomes.
- Experimental investigation of the biological mechanisms.
# Genetic susceptibility to adverse effects of lead exposure (gene-environment interactions)
Some studies have suggested that specific genes may render certain individuals more vulnerable to the adverse effects of lead exposure. Research is needed to
- Characterize whether and how the bioaccumulation and toxicokinetics of lead are associated with genetic variation, such as ALAD phenotype or HFE gene variants.
- Investigate other potential gene-environment interactions.
# Value of maternal biomarkers to predict later infant and childhood blood lead levels
While research has shown that maternal blood lead level is closely associated with infant/cord blood lead level at birth, the kinetics of lead in the newborn exposed in utero are not well understood. In addition, it is not clear whether tissue stores built up during gestation may be a significant source of lead as children age. Studies are needed to determine whether maternal biomarkers (maternal or umbilical blood lead levels) are useful to predict postnatal blood lead levels throughout infancy and childhood.
# Biokinetics of lead in breast milk
More information is needed on the biokinetics and cumulative dose of lead to the breastfeeding infant at various maternal blood lead levels. Research is needed to determine how breast milk lead levels change over the course of lactation, and whether there are factors in breast milk or maternal diet that would enhance or retard the absorption of lead from breast milk by the infant.
# Biokinetics of lead with nutritional supplementation or super-supplementation during pregnancy
- Large randomized clinical trials are needed to determine if nutritional supplements, diet modification, or a combination of diet and supplements may be a means of secondary prevention of exposure to lead during pregnancy.
- Research is needed to determine whether the impact of nutritional factors differs for women prepregnancy, during pregnancy, or during lactation, or depending on the woman's lead burden or prior chelation therapy. Extrapolation from animal studies may be necessary.
# Pharmacokinetics and effectiveness of chelating agents during pregnancy and lactation
Minimal clinical data are available to inform decisions regarding the use of chelating agents in pregnant women, such as data on toxicity, treatment regimen, and timing of treatment. Studies are needed on
- The effects of prenatal chelation on mothers and infants and on lead kinetics across the placenta; however, since this type of research is often not possible in humans due to ethical concerns about research on human subjects, extrapolation from animal studies may be necessary.
- The effectiveness of chelation therapy on mitigation of adverse health outcomes other than neurodevelopment.
# Use of educational and developmental support and intellectual stimulation to improve academic/life performance of children exposed to lead in utero
Current research shows that lead exposure is associated with lifelong health and developmental effects in humans; however, questions have been raised from animal studies and clinical experience about whether and the extent to which certain cognitive effects can be mitigated by educational interventions during childhood. Long-term follow-up studies of children exposed to lead in utero are needed to evaluate whether specific educational or developmental interventions can improve cognitive outcomes. To be useful, such studies must carefully control for factors that may confound the relationship between educational strategies and cognitive outcomes.
# Identification and development of new therapeutic agents or mechanisms to remove lead from breast milk and bone or tissue storage sites in women of childbearing age.
Since bone lead stores persist for decades, women and their infants may be at risk for exposure long after environmental sources have been abated. At present, no interventions are available to remove lead from breast milk or from bone or tissue storage sites in women of childbearing age. Identification and development of prepregnancy interventions that decrease bone lead stores, or render them less mobilizable, may prove beneficial.
# Health Services Research
# Develop estimates for the number and distribution of pregnant women in the United States who should have blood lead tests, and the costs and benefits associated with testing and follow-up care
Limited data are available on the numbers of pregnant women who meet the criteria for blood lead testing recommended in these guidelines. Research is needed to
- Estimate the number of pregnant women in the US who should be tested for lead exposure, the costs for such testing, and the costs for recommended follow-up care. This research should include an assessment of the ability of high-risk women to access blood lead testing and follow-up services, including environmental intervention, as well as determine who bears the burden of these costs.
- Estimate the societal benefits expected to be derived from testing and treating pregnant women for lead exposure as recommended herein.
# Develop guidance for validation of risk questionnaires for pregnant women in specific clinical settings and subpopulations
Only a few communities have developed risk questionnaires to inform decisions about blood lead testing of pregnant women; however, these guidelines recommend their use. Practical methods for adapting and validating risk questionnaires at the local level should be developed and disseminated by CDC and state and local health departments. Such guidance would allow local health agencies and health care providers to develop reliable risk questionnaires that are responsive to local conditions.
# Optimal timing for blood lead testing during pregnancy
Identification of lead-exposed pregnant women potentially offers the most benefit to women and their infants; however, there are no studies that identify when in pregnancy blood lead testing should be done. Given the curvilinear trajectory of blood lead levels over the course of pregnancy, blood lead testing done in different trimesters may either over- or underestimate the woman's true lead exposure.
# Characterize risk factors for pica and clinical strategies to identify pica in pregnant and lactating women
While pica behavior is relatively uncommon in the general population, pica is observed in some populations of pregnant women in the United States, particularly those who have recently immigrated. Research is needed on how clinicians can more effectively identify pica, particularly those factors (age, race, country of origin, nutritional or health status, etc.) that may predispose a woman to pica.
# Effectiveness of interventions to reduce pica among pregnant women
Only a few studies are available that evaluate the effectiveness of interventions designed to reduce or eliminate pica behavior; none of these include pregnant women. Studies are needed on the effectiveness of behavior modification strategies for specific types of pica. Given the frequency of pica among some immigrant populations, culturally specific interventions should be a priority for investigation.
# HEALTH POLICY NEEDS

# Stronger Occupational Standards for Lead Exposure, Especially for Pregnant Women
Current OSHA policy requires medical evaluations at blood lead levels of 40 µg/dL, and removal from the workplace when blood lead levels exceed 50 µg/dL (for construction) or 60 µg/dL (for general industry). Some industries where workers may be exposed to high levels of lead are not covered by OSHA. Current occupational standards were developed over 30 years ago and have not been updated to reflect research findings that lead exposure during pregnancy is associated with adverse effects on fetal growth and neurodevelopment, maternal health, and an increased risk for spontaneous abortion. Updated standards consistent with current knowledge about the health effects of lead exposure are needed to provide clear guidance to industry, policy makers, and workers, particularly because medical judgments may be influenced by existing regulations.
- The Occupational Safety and Health Administration Standard for lead exposure should be updated to require that occupationally exposed women who are pregnant be removed from lead exposure if their blood lead level is 10 µg/dL or higher.
- If the blood lead level is in the range of 5 to 9 µg/dL, efforts should be made to identify and reduce lead exposure on the job and review appropriate use of personal protective equipment.
- All lead-exposed workers who have the potential to be exposed by lead ingestion, even in the absence of documented elevations in air lead levels, should be under medical surveillance.
- Lead exposure should be regulated in categories of workers currently not covered by the OSHA standard.
# Regulation of Alternative Medicines and Dietary Supplements to Ensure Product Safety and Accuracy in Labeling and Marketing
National policy is needed to establish regulatory mechanisms to control the safety and quality of alternative medicines and dietary supplements sold commercially in the United States.
- Health claims for alternative medicines and dietary supplements should meet the same rigorous criteria as claims by drugs used to prevent or treat disease.
- Regulatory standards for the content, labeling, and marketing of such products should be established and enforced.
- The Federal Trade Commission, in cooperation with FDA, should ensure that advertising for dietary supplements is accurate and not misleading.
# Regulatory Authority to Require Lead Safety in Dwellings Occupied by Pregnant Women and Resources to Control Lead Hazards in These Units
State and local health or housing agencies should have the statutory authority to require and enforce lead paint hazard abatement in rental housing where pregnant women reside, to allow parents to bring their babies home to safe housing. Such statutes should also have provisions to protect pregnant tenants from retaliatory eviction by property owners unwilling to comply. Jurisdictions should also have public resources available to control lead hazards in those units where private resources are unattainable.
# Mandatory Reporting of All Adult Blood Lead Levels
Public health agencies need to be informed of blood lead testing results on adults in order to identify and investigate new community exposure sources, monitor epidemiological trends, and assure appropriate interventions for identified cases, including environmental inspection and case management services. Laboratories should be required to report all blood lead level test results on adults to the health department, preferably in standard electronic form. Such reporting could enable health departments to identify pregnant women with lead exposure above background levels for priority interventions.
# Reimbursement for Blood Lead Testing and Follow-Up Care for Uninsured Pregnant and Lactating Women and Their Infants
Blood lead testing and follow-up services (including case management, nutritional interventions, chelation therapy, and environmental investigation) are essential to appropriate medical management of pregnant and lactating women with lead exposure above background levels. However, a lack of insurance can be prohibitive to proper care for many women. In addition, such services may not be covered by insurance for documented immigrants during their first 5 years of residence in the United States, or at all for undocumented immigrants.
The State Children's Health Insurance Program allows the use of federal funds for prenatal services to women regardless of immigration status in order to ensure the health of the fetus. States should use these funds for services necessary to reduce or treat lead exposure above background levels during the woman's pregnancy and lactation.
# Sharing of Clinical Data Via Electronic Health Records
Proper medical management of pregnant or lactating women with lead exposure above background levels and their infants requires that the medical records of both mother and child contain relevant data related to lead. For example, the infant's chart should contain information about the mother's blood lead level at birth and about identified environmental sources. Likewise, the mother's chart should contain information about the infant's blood lead level. However, such records are likely to be maintained by different health care providers and complicated by differing records systems, the possibility of different maternal/child surnames, etc. The adoption of electronic medical records would permit an automated linkage of the two charts to ensure that appropriate data can be transmitted from one chart to the other.
# HEALTH EDUCATION NEEDS
# Continuing Medical Education on Lead and Pregnancy
Continuing Medical Education (CME) training on lead and pregnancy is needed to familiarize health care providers with the current research base and clinical recommendations. CDC, in consultation and cooperation with medical specialty associations (e.g., ACOG, AAP, American Academy of Family Physicians), nursing associations (e.g., American Nurses' Association, American College of Nurse Midwives), and environmental health associations should develop a training course module on lead and pregnancy or, alternatively, incorporate a discussion of lead exposure and pregnancy into preexisting educational materials, such as the Agency for Toxic Substances and Disease Registry's Case Studies in Environmental Medicine, which can be taken for continuing education credit. The training should include information on evaluating risk factors for lead exposure as part of an occupational, environmental, and lifestyle health risk assessment.
# Environmental Health Requirement in Basic Practitioner's Curriculum
Pediatric medical and nursing education currently lacks sufficient environmental health content necessary to prepare pediatric health care professionals to prevent, recognize, manage, and treat environmental exposure-related disease, including lead exposure during pregnancy. Thus, educational opportunities for physicians, nurses, environmental engineers, and other practitioners during their training are needed. Such courses should also incorporate material on cultural competency and health literacy. The Pediatric Environmental Health Specialty Units (PEHSUs) and CDC's provider education series are appropriate vehicles for these courses. CDC and the PEHSUs should coordinate publications and educational offerings with ACOG, AAP, the American Academy of Family Physicians, and the American College of Nurse Midwives.
# Preconceptional Counseling on Lead Exposure for Adults of Childbearing Age
Primary and reproductive health care providers should provide counseling to patients of childbearing age about the effects of lead on fertility, pregnancy, and infant outcomes. They should educate their patients about possible lead exposure sources and how to reduce exposure in advance of conception. Such counseling should include referrals to appropriate sources for further assistance in assessing and reducing environmental or occupational lead exposures. CDC should collaborate with national professional health organizations, such as the American College of Obstetricians and Gynecologists, American Medical Association, and the American Academy of Family Physicians, and nonprofit organizations, such as the March of Dimes, to develop and disseminate educational materials to convey these messages.
# Expand Resources for National Centralized Data Collection and Management Facility
A comprehensive online system is needed to improve dissemination of data on various sources of lead to medical and public health providers and the community. Such a system would provide real-time product identification information to alert providers and communities at risk for exposure. It would also allow agencies that test products (e.g., CPSC, FDA, State of California) to enter information on tainted products into one easily accessible database.
# Evaluate the Effectiveness of Currently Available Personal Protective Equipment
The capacity of available personal protective equipment to keep BLLs below 5 µg/dL is an area of needed research. Such studies should also inform the creation of more sophisticated equipment that can ensure that BLLs of workers remain below 5 µg/dL.
# CHAPTER 11. RESOURCES AND REFERRAL INFORMATION
Contact information is provided here for key information sources for topics covered in this report. While not an exhaustive list, these resources provide a useful starting point for readers interested in updates, publications, referrals, or additional information.
For information on lead poisoning prevention, including screening, case management, and referrals to state and local lead poisoning prevention programs, contact CDC's Childhood Lead Poisoning Prevention Program.

# SCREENING LAW(S):

New York Public Health Law §1370-a(2) Summary: Requires dept to promulgate and enforce regulations for screening children and pregnant women for lead poisoning, and for follow-up treatment for those with positive results.
New York Public Health Law §1370-c Summary: Authorizes dept to establish screening intervals and methods, which shall be followed by every physician or other provider of medical care to children or pregnant women.
Connecticut Gen. Stat. §19a-111
The commissioner shall establish, in conjunction with recognized professional medical groups, guidelines consistent with the CDC for assessment of the risk of lead poisoning, screening for lead poisoning and treatment and follow-up care of individuals including children with lead poisoning, women who are pregnant and women who are planning pregnancy.
# RISK REDUCTION LAW(S):
Maryland Code §6-801-6-852; Article 48A §734-737; Real Property §8-208.2 Summary: Comply with specific risk reduction standards when notified of certain conditions, such as chipping paint or the presence in the unit of a child or pregnant woman with an elevated blood lead level of 15 µg/dL or higher.
Minnesota Statute §144.9504 Summary: Lead risk assessment. (a) An assessing agency shall conduct a lead risk assessment of a residence according to the venous blood lead level and time frame set forth in clauses (1) to (4) for purposes of secondary prevention: within ten working days of a pregnant female in the residence being identified to the agency as having a venous blood lead level equal to or greater than ten micrograms of lead per deciliter of whole blood. Subd. 5. Lead orders. (a) An assessing agency, after conducting a lead risk assessment, shall order a property owner to perform lead hazard reduction on all lead sources that exceed a standard adopted according to section 144.9508.
# EDUCATION LAW(S):
Michigan Comp. Laws §333.5473a (2)(3) Summary: Requires department to establish and conduct educational programs to educate homeowners and remodelers of lead-safe practices and methods of lead-hazard reduction activities; (4): requires department to recommend appropriate maintenance practices for owners of residential property and day care facilities designed to prevent lead poisoning in children 6 years or younger and pregnant women.
# PROPOSED LEGISLATION (NOT ENACTED):
California - Requires the Department to make available to all health care providers that administer perinatal care services informational materials on lead and requires providers to make this information available to pregnant women.

New York - Bill aimed at eliminating lead hazards in housing which is or will be occupied by pregnant women or children 7 years of age or less.
Ohio - Requires the Director to produce an educational audio-video recording on lead poisoning prevention for at-risk pregnant women.

# RECOMMENDATIONS FOR MEDICAL MANAGEMENT OF ADULT LEAD EXPOSURE

As a likely consequence of its capacity to interfere with biochemical events in cells throughout the body, inorganic lead exerts a wide spectrum of multisystemic adverse effects. These health impacts range from subtle, subclinical changes in function to symptomatic, life-threatening intoxication.
In recent years, research conducted on lead-exposed adults has increased public health concern over the toxicity of lead at low dose. These findings support a reappraisal of the levels of lead exposure, sustained for either short or extended periods of time, that may be safely tolerated in the workplace. In this article we offer health-based recommendations on the management of lead-exposed adults aimed at primary and secondary prevention of lead-associated health problems.
As noted in the introduction to this mini-monograph (Schwartz and Hu 2007), in setting forth our perspective on the recommended medical management of adult lead exposure, the narrative of this article focuses on four categories of health effects (hypertension, decrement in renal function, cognitive dysfunction, and adverse reproductive outcome) that have been the subject of much recent research. The discussion of these end points highlights those studies that, by virtue of their design and scope, were particularly influential in establishing the authors' concerns regarding the potential for adverse health effects at low to moderate levels of lead exposure in adults. Collectively, these effects support the preventive medical management strategies that are recommended in the tables. A review of the extensive literature on the health effects of lead is beyond the scope of this article, but the reader is referred to reviews of the cardiovascular and cognitive impacts of lead on adults that appear elsewhere in this mini-monograph (Shih et al. 2007). The recommended interventions range from reduction of lead exposure at low levels to removal from lead exposure, accompanied by probable chelation therapy, at the highest levels.

The designation of risks as either "short-term" or "long-term," depending on whether the risks are associated with exposure lasting less than or more than 1 year, reflects a qualitative understanding of the duration of lead exposure that may be required to elicit certain adverse health effects of lead. For some of the long-term risks, such as hypertension, research employing noninvasive K-shell X-ray fluorescence measurement of lead in bone, a biomarker of long-term cumulative exposure, suggests that several years of sustained elevations in blood lead may be necessary for a significant risk to emerge. The use of 1 year as a cut-point in the table is not intended to represent a sharp division, in terms of cumulative dose, between what might constitute a short-term versus a long-term risk, nor does it imply that a significant long-term risk begins to exist as soon as 1 year is surpassed. Blood lead, a measure of the amount of lead circulating in the tissues, reflects both recent exogenous exposure and endogenous redistribution of lead stored in bone.
The categorization of risks in Table 1 by discrete intervals of blood lead concentration is a qualitative assessment. In clinical practice, substantial interindividual variability in the susceptibility to symptomatic adverse effects of lead is commonly observed. Factors that might influence the risk of lead toxicity in adults include preexisting disease affecting relevant target organs (e.g., hypertension, renal disease, or neurologic dysfunction), nutritional deficiencies that modify the absorption or distribution of lead (e.g., low dietary calcium or iron deficiency), advanced age, and genetic susceptibility. Although recent studies suggest that polymorphisms in specific genes may modify the toxicokinetics and renal effects of lead (Theppeang et al. 2004; Weaver et al. 2006), research findings at present are insufficient to conclusively identify genotypes that confer increased risk.
Table 1. Recommendations for medical management of adult lead exposure (table not reproduced here).
# Health Effects at Low Dose
Hypertension. Animal investigations support a pressor effect of lead at low dose (Fine et al. 1988; Gonick et al. 1997; Vaziri 2002). Epidemiologic investigations conducted in large general population samples (e.g., Harlan 1988; Nash et al. 2003; Pocock et al. 1988; Schwartz 1988) suggest lead may elevate blood pressure in adults at blood lead concentrations < 20 µg/dL. In some human studies of the link between blood lead and blood pressure, the relationship appeared to be influenced by subjects' sex or race (e.g., Den Hond et al. 2002; Staessen et al. 1996; Vupputuri et al. 2003). Three meta-analyses of studies examining the relationship between blood lead and blood pressure found relatively consistent effects of blood lead on blood pressure. The studies showed statistically significant coefficients for a 2-fold increase in blood lead of 1.0 mmHg (Nawrot et al. 2002; Staessen et al. 1994) or 1.25 mmHg (Schwartz 1995) for systolic blood pressure, and 0.6 mmHg for diastolic blood pressure (Nawrot et al. 2002; Staessen et al. 1994).
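Read as a dose-response relationship, these meta-analytic coefficients can be restated as (our formulation, not one given explicitly by the cited authors):

$$
\Delta\mathrm{SBP} \approx 1.0\;\mathrm{mmHg} \times \log_2\!\left(\frac{\mathrm{PbB}_2}{\mathrm{PbB}_1}\right), \qquad
\Delta\mathrm{DBP} \approx 0.6\;\mathrm{mmHg} \times \log_2\!\left(\frac{\mathrm{PbB}_2}{\mathrm{PbB}_1}\right)
$$

so a fourfold rise in blood lead (two doublings, e.g., from 5 to 20 µg/dL) would predict an increase of roughly 2 mmHg in systolic pressure.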
The study populations analyzed in these meta-analyses included many with blood lead concentrations < 20 µg/dL. Further support for the impact of low-level lead exposure on blood pressure has emerged from studies employing K-shell X-ray fluorescence measurement of lead in bone, a biomarker of long-term cumulative lead exposure. In two major studies drawn from samples of the general population, bone lead concentration was a significant predictor of the risk of hypertension (Korrick et al. 1999). Findings from one of these studies illustrate the associated risk. In that general population sample of middle-aged to elderly men (n = 590), the average blood lead concentration was 6.3 µg/dL. On the basis of the subjects' ages (mean, 67 ± 7.2 years), it may be expected that they lived most of their adult lives at a time when the blood lead concentration of the general population ranged from 10 to 25 µg/dL (Hofreuter et al. 1961; Mahaffey et al. 1982; Minot 1938). Comparing the lowest with the highest quintile of bone lead among that cohort, a tibia bone lead increment of 29 µg/g was associated with an odds ratio (OR) of 1.5 for hypertension. Given the slope of 0.05 that has described the linear relationship between tibia bone lead concentration and cumulative blood lead index in subjects with chronic lead exposure in many studies (Hu et al. 2007), this increment in bone lead is roughly equivalent to a cumulative blood lead index of 580 µg/dL·years (i.e., 29 ÷ 0.05 = 580). Considered in the context of a 40-year working lifetime, the risk of lead-associated hypertension may be significantly reduced by preventive measures that lower chronic workplace blood lead concentrations from the 20s and 30s µg/dL range to < 10 µg/dL. For example, a change in average workplace blood lead concentration from 25 to 10 µg/dL over a 40-year working lifetime would reduce a worker's cumulative blood lead index by 600 µg/dL·years, slightly more than the 580 µg/dL·years cited above.
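The conversion from bone lead to cumulative blood lead index (CBLI) used in this comparison can be written out explicitly (the slope of 0.05 carries units of µg/g of tibia lead per µg/dL·year of CBLI):

$$
\mathrm{CBLI} \approx \frac{\text{tibia lead}}{0.05} = \frac{29\;\mu\mathrm{g/g}}{0.05} = 580\;\mu\mathrm{g/dL}\cdot\mathrm{yr}
$$

$$
\Delta\mathrm{CBLI} = (25 - 10)\;\mu\mathrm{g/dL} \times 40\;\mathrm{yr} = 600\;\mu\mathrm{g/dL}\cdot\mathrm{yr}
$$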
Hypertension is a significant risk factor for cardiovascular and cerebrovascular mortality. As reviewed in an accompanying article in this mini-monograph, studies conducted in general population cohorts have consistently observed a positive association between lead exposure and cardiovascular disease. Because of their size and design, studies derived from the National Health and Nutrition Examination Surveys (NHANES) are particularly notable. A 16-year longitudinal analysis of the general population cohort studied between 1976 and 1980 as part of NHANES II found that blood lead concentrations of 20-29 µg/dL at baseline were associated with 39% increased mortality from circulatory system disease compared with subjects with blood lead < 10 µg/dL (Lustberg and Silbergeld 2002). Two studies recently examined the longitudinal relationship between blood lead concentration and cardiovascular mortality among participants in NHANES III. In a 12-year longitudinal study of participants in NHANES III ≥ 40 years of age (n = 9,757), the subgroup with blood lead concentration ≥ 10 µg/dL (median, 11.8) had a relative risk of cardiovascular mortality of 1.59 (95% CI, 1.28-1.98) compared with subjects with blood lead < 5 µg/dL (Schober et al. 2006). In a 12-year longitudinal analysis of subjects ≥ 17 years of age (n = 13,946), the relative risk for cardiovascular mortality was 1.53, comparing a blood lead of 4.92 µg/dL (80th percentile of the distribution) with a blood lead of 1.46 µg/dL (20th percentile of the distribution).
Renal effects. Renal injury that appears after acute high-dose lead exposure may include reversible deficits in proximal tubular reabsorption and prerenal azotemia induced by renal vasoconstriction and/or volume depletion (Wedeen et al. 1979). In a minority of exposed individuals, years of chronic, high-dose lead exposure may result in chronic lead nephropathy, a slowly progressive interstitial fibrosis characterized by scant proteinuria (Lilis et al. 1968). Epidemiologic investigations of renal function in workers with lower levels of chronic lead exposure have yielded variable findings. For example, in a cohort of approximately 800 current and former lead workers with mean blood lead of 32 ± 15 µg/dL, there was no significant linear relationship between blood lead concentration and two measures of renal function, serum creatinine and creatinine clearance (Weaver et al. 2003). There was an interaction between age and tibia lead concentration, a biomarker of cumulative lead exposure, on these same biomarkers, resulting in a trend toward worse renal function with increasing bone lead in the oldest tercile of workers (> 46 years of age) but improved renal function with increasing bone lead in the youngest workers (≤ 36 years of age). The authors suggested that lead-induced hyperfiltration, a finding noted in other studies, might presage the eventual development of lead-induced renal insufficiency. Both blood lead and tibia lead were correlated with increased urinary N-acetyl-β-D-glucosaminidase (NAG), a biomarker of early biological effect on the renal tubule, but in an analysis of a smaller subset of the lead workers (n = 190) that controlled for the relatively low levels of urinary cadmium (1.1 ± 0.78 µg/g creatinine), only the relationship between tibia lead and NAG remained significant (Weaver et al. 2003). Among a cohort of 70 active lead workers with a median blood lead concentration of 32 µg/dL (range, 5-47), there were modest correlations between blood lead and urinary β-2-microglobulin (r = 0.27; p = 0.02), and between cumulative blood lead index and NAG (r = 0.25; p = 0.04) (Gerhardsson et al. 1992).
Several studies conducted in general population samples have reported an association between blood lead concentration and common biomarkers of renal function (serum creatinine and creatinine clearance). In a cross-sectional investigation of a subcohort of middle-aged to elderly men enrolled in the Normative Aging Study (n = 744), there was a negative correlation between blood lead (mean, 8.1 ± 3.9 µg/dL; range, < 4.0-26.0 µg/dL) and measured creatinine clearance, after natural log transformation of both variables and adjustment for other covariates (Payton et al. 1994). Among an adult population that included subjects with environmental cadmium exposure, log-transformed blood lead concentration was inversely correlated with measured creatinine clearance (Staessen et al. 1992). In a population-based study of Swedish women 50-59 years of age (n = 820), low levels of blood lead (mean, 2.2 µg/dL; 5th-95th percentiles, 1.1-4.6 µg/dL) were inversely correlated with creatinine clearance and glomerular filtration rate, after adjusting for age, body mass index, urinary or blood cadmium, hypertension, diabetes, and regular use of nonsteroidal anti-inflammatory drug (NSAID) medication (Akesson et al. 2005).
Individuals with other risk factors for renal disease, notably hypertension and diabetes, may be more susceptible to an adverse impact of low-level lead exposure on renal function. Among adults participating in NHANES III (n = 15,211), blood lead was a risk factor for elevated serum creatinine (defined as ≥ 99th percentile of the analyte's race- and sex-specific distributions, generally > 1.2-1.5 mg/dL) and "chronic kidney disease" (defined as an estimated glomerular filtration rate < 60 mL/min) only among subjects with hypertension (n = 4,813) (Muntner et al. 2003). Compared with hypertensives in the lowest quartile of blood lead (range, 0.7-2.4 µg/dL), hypertensive subjects in the next highest quartile of blood lead (range, 2.5-3.8 µg/dL) had a covariate-adjusted OR for elevated serum creatinine of 1.47 (95% CI, 1.03-2.10) and for chronic kidney disease of 1.44 (95% CI, 1.00-2.09). At the next highest quartile of blood lead (range, 3.9-5.9 µg/dL), the covariate-adjusted OR for elevated serum creatinine was 1.80 (95% CI, 1.34-2.42), and for chronic kidney disease it was 1.85 (95% CI, 1.32-2.59). In a subcohort of middle-aged to elderly men participating in the Normative Aging Study (n = 427; blood lead, 4.5 ± 2.5 µg/dL), multiple regression analysis revealed that log-transformed blood lead was positively correlated with serum creatinine in hypertensive but not normotensive subjects (Tsaih et al. 2004). In a longitudinal study of this cohort over a mean of 6 years, an interaction between lead and diabetes yielded a positive association between baseline blood lead concentration and change in serum creatinine that was strongest in diabetic subjects (Tsaih et al. 2004). An interaction with diabetes was also present in the association of tibial lead concentration with longitudinal change in serum creatinine (Tsaih et al. 2004). Although these general population studies are consistent with an adverse effect of lead exposure on renal function at notably low levels, the extent to which diminished renal function may itself result in increased body lead burden has not been fully elucidated.
Cognitive dysfunction. A few studies examining relatively small numbers of workers (n ≤ 100) with blood lead concentrations ranging approximately 20-40 µg/dL have associated lead exposure with subclinical decrements in selective domains of neurocognitive function (Barth et al. 2002; Hänninen et al. 1998; Mantere et al. 1984; Stollery 1996). Among a large cohort of current and former inorganic lead workers studied in Korea, a cross-sectional analysis (n = 803 workers) (Schwartz et al. 2001) and a 3-year longitudinal analysis (n = 576 workers) (Schwartz et al. 2005) found that blood lead concentrations across the approximate range of 20-50 µg/dL were associated with subclinical neurocognitive deficits. Among a small population of former lead workers (n = 48) and age-matched controls with similar blood lead concentrations (approximately 5 µg/dL in both groups; range, 1.6-14.5 µg/dL; mean age, 39.8 years), increases in current blood lead concentration within the entire study population were correlated with poorer performance on several tests of neurocognitive function, but on only one measure was cumulative lead exposure (measured in the workers) associated with poorer performance (Winker et al. 2005).
In the population-based sample of adults 20-59 years of age participating in the NHANES III study (n = 4,937), there was no relationship between blood lead concentration (geometric mean, 2.51 µg/dL) and covariate-adjusted performance on tests of neurocognitive function (Krieg et al. 2005). However, significant associations have emerged in some studies of older adults with slightly higher blood lead concentrations. In a rural subset of elderly women (mean age, 71.1 ± 4.7 years; n = 325) with background community lead exposure (geometric mean blood lead concentration, 4.8 µg/dL; range, 1-21 µg/dL), certain measures of neuropsychologic function (Trailmaking part B and Digit Symbol test) were performed more poorly by women in the upper 15th percentile of blood lead (blood lead ≥ 8 µg/dL, n = 38; Muldoon et al. 1996). However, in the slightly younger subset of elderly women who resided in an urban area (mean age, 69.4 ± 3.8 years; n = 205), no relationship between blood lead (geometric mean, 5.4 µg/dL) and neuropsychologic performance was discernible (Muldoon et al. 1996). In a general population sample of middle-aged to elderly men (n = 141; mean age, 66.8 ± 6.8 years) with a mean blood lead concentration of 5.5 ± 3.5 µg/dL examined as part of the Normative Aging Study, increased blood lead concentration was associated with poorer performance on neuropsychologic assessment of memory, verbal ability, and mental processing speed (Payton et al. 1998). In a larger subset of men (n = 736; mean age, 68.2 ± 6.9 years) from the Normative Aging Study assessed with the Mini-Mental Status Examination (MMSE), the OR for having a test score associated with an increased risk of dementia was 3.4 (95% CI, 1.6-7.2) comparing the mean blood lead of the highest quartile (mean, 8.9 µg/dL) to that of the lowest quartile (mean, 2.5 µg/dL). There was a positive interaction between age and blood lead, which is consistent with a lead-associated acceleration in age-related neurodegeneration.
As reviewed in an accompanying article in this mini-monograph (Shih et al. 2007), there is evidence that at low levels of lead exposure, biomarkers of cumulative lead exposure, such as lead in bone, may be associated with an adverse impact on neurocognitive function that is not reflected by measurement of lead in blood. Among subjects from the Normative Aging Study (n = 466; mean age, 67.4 ± 6.6 years) examined for longitudinal change in MMSE score over an average of 3.5 ± 1.1 years, higher patella bone lead concentrations, a biomarker of cumulative lead exposure, predicted a steeper decline in performance (Weisskopf et al. 2004). By comparison, baseline blood lead concentration (median, 4 µg/dL; interquartile range, 3-5) did not predict change in MMSE score. In a longitudinal analysis of performance on a battery of cognitive tests in a subset of the Normative Aging Study, bone lead measurements were predictive of worsening performance over time on tests of visuospatial/visuomotor ability (Weisskopf et al. 2007). In a cross-sectional analysis of 985 community-dwelling residents 50-70 years of age, increasing tibia bone lead concentrations were significantly associated with decrements in cognitive function, whereas an impact of blood lead (mean, 3.46 ± 2.23 µg/dL) was not apparent (Shih et al. 2006).
Reproductive outcome in women. Adverse effects on reproductive outcome constitute a special risk of lead exposure to women of reproductive age. A nested case-control study examined the association of blood lead concentration with spontaneous abortion in a cohort of 668 pregnant women seeking prenatal care in Mexico City. After matching for maternal age, education, gestational age at study entry, and other covariates, the OR for spontaneous abortion before 21 weeks' gestation was 1.13 (95% CI, 1.01-1.30) for every 1 µg/dL increase in blood lead across the blood lead range of 1.4-29 µg/dL. Compared with the reference category (the lowest blood lead stratum), women in successively higher blood lead categories, the highest being >15 µg/dL, had ORs for spontaneous abortion of 2.3, 5.4, and 12.2, respectively (test for trend, p = 0.03). Although several earlier studies failed to detect this substantial impact, they may have been subject to methodologic limitations not present in the Mexico City investigation.
Several studies have found that lead exposure during pregnancy affects child physical development measured during the neonatal period and early childhood. In an extensively studied cohort of 272 full-term, parturient women from Mexico City with environmental lead exposure common to the region (mean maternal blood lead, 8.9 ± 4.1 µg/dL; mean tibia bone lead, 9.8 ± 8.9 µg/g), every increase of 10 µg/g in maternal tibia lead was associated with a 73-g (95% CI, 25-121) decrease in birth weight. The impact of tibia bone lead on birth weight was nonlinear and was most pronounced in mothers in the highest quartile of bone lead (> 15-38 µg/g), where the decrement relative to the lowest quartile was estimated to be 156 g. Primarily in the same cohort, a maternal patella lead concentration > 24.7 µg/g was associated with an OR of 2.35 for a neonate with a one-category smaller head circumference at birth, assessed as a five-category ordered variable. In a different Mexico City cohort, each doubling of maternal blood lead at 36 weeks of pregnancy (geometric mean, 8.1 µg/dL; 25th-75th percentile, 5-12 µg/dL) was associated with a decrease of 0.37 cm (95% CI, 0.17-0.57) in the head circumference of a 6-month-old infant (Rothenberg et al. 1999b).
Prenatal lead exposure assessed by umbilical cord blood lead concentration has been inconsistently associated with an adverse effect on neurobehavioral development in childhood. However, recent studies suggest that mobilization of maternal bone lead during pregnancy may contribute to fetal lead exposure in ways that may be incompletely reflected by a single measurement of umbilical cord whole-blood lead. In a prospective study conducted in Mexico City of 197 mother-infant pairs, a statistically significant adverse effect of umbilical cord blood lead (mean, 6.7 ± 3.4 µg/dL; range, 1.2-21.6 µg/dL) was accompanied by an independent adverse effect of maternal bone lead burden on the 24-month Mental Development Index (MDI) of the Bayley Scales of Infant Development, which decreased 1.6 points (95% CI, 0.2-3.0) for every 10-µg/g increase in maternal patellar lead (mean, 17.9 ± 15.2 µg/g; range, < 1-76.6 µg/g).
A prospective study that measured maternal plasma lead and maternal whole-blood lead during pregnancy found that maternal plasma lead during the first trimester was the stronger predictor of infant mental development at 24 months of age. In this cohort, first-trimester maternal plasma lead was 0.016 ± 0.014 µg/dL and first-trimester maternal whole-blood lead was 7.07 ± 5.10 µg/dL (n = 119). Adjusting for covariates that included maternal age, maternal IQ, child sex, childhood weight and height for age, and childhood whole-blood lead at 24 months, an increase of one SD in natural-log-transformed plasma lead in the first trimester was associated with a 3.5-point decrease in score on the 24-month MDI of the Bayley Scales of Infant Development. The corresponding impact of a one-SD increase in natural-log-transformed maternal whole-blood lead during the first trimester was a 2.4-point decrease in the 24-month MDI. The logarithmic relationship between maternal plasma and blood lead concentrations and infant MDI indicates that, per unit increase in lead, the strongest effects occurred among mothers with the lowest plasma and blood lead concentrations.
Two long-term prospective studies that conducted multiple measurements of maternal blood lead during pregnancy and childhood have identified an adverse impact of low-level prenatal lead exposure on postnatal neurobehavioral development extending beyond infancy. Applying a repeated-measures linear regression technique to analysis of age-appropriate IQ test data obtained in 390 children 3-7 years of age, the Yugoslavia Prospective Lead Study found independent adverse effects of both prenatal and postnatal blood lead. After controlling for the pattern of change in postnatal blood lead and other covariates, IQ decreased 1.8 points (95% CI, 1.0-2.6) for every doubling of prenatal blood lead, which was assessed as the average of maternal blood lead at midpregnancy and delivery (mean, 10.2 ± 14.4 µg/dL; n = 390). The Mexico City Prospective Lead Study used generalized linear mixed models with random intercept and slope to assess the impact on IQ measured at 6-10 years of age of blood lead measurements systematically obtained during weeks 12, 20, 24, and 36 of pregnancy, at delivery, and at multiple points throughout childhood. Geometric mean blood lead during pregnancy was 8.0 µg/dL (range, 1-33 µg/dL; n = 150); from 1 through 5 years of age it was 9.8 µg/dL (range, 2.8-36.4 µg/dL), and from 6 through 10 years it was 6.2 µg/dL (range, 2.2-18.6 µg/dL). IQ at 6-10 years of age, assessed by the Wechsler Intelligence Scale for Children-Revised, decreased significantly only with increasing natural-log third-trimester blood lead, controlling for other blood lead measurements and covariates. Every doubling of third-trimester blood lead (geometric mean of maternal blood lead at weeks 28 and 36 = 7.8 µg/dL; 5th-95th percentile, 2.5-24.6 µg/dL) was associated with an IQ decrement of 2.7 points (95% CI, 0.9-4.4). Notably, the nonlinear (i.e., log-linear) relationships detected in the Yugoslavia and Mexico City studies indicate that across a maternal blood lead range of 1-30 µg/dL, an increase in blood lead from 1 to 10 µg/dL will account for more than half the IQ decrement.
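To make the final point concrete, here is a worked calculation (ours, not from the source) using the Mexico City estimate of 2.7 IQ points per doubling. Under a log-linear model the decrement is proportional to the number of doublings, i.e., to the base-2 logarithm of blood lead:

$$\Delta\mathrm{IQ}_{1\to10} = 2.7\,\log_2(10) \approx 9.0 \qquad \Delta\mathrm{IQ}_{1\to30} = 2.7\,\log_2(30) \approx 13.3$$

The increase from 1 to 10 µg/dL thus accounts for roughly 68% of the decrement across the full 1-30 µg/dL range, consistent with the "more than half" statement above.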
Two independent cohorts have provided evidence that maternal lead burden during pregnancy may be associated with increased risk of pregnancy hypertension and/or elevated blood pressure during pregnancy. In a retrospective study of 3,210 women during labor and delivery, increasing umbilical cord blood lead levels (mean, 6.9 ± 3.3 µg/dL; range, 0-35 µg/dL) were associated with increased systolic blood pressure during labor (1.0 mmHg for every doubling of blood lead) and increased odds of hypertension (not further defined) recorded any time during pregnancy (OR = 1.3; 95% CI, 1.1-1.5) for every doubling of blood lead. A prospective study of third-trimester blood lead (geometric mean, 2.3 ± 1.4 µg/dL; range, 0.5-36.5 µg/dL) in 1,188 predominantly Latina immigrants showed that every doubling in blood lead was associated with increased third-trimester systolic blood pressure (1.2 mmHg; 95% CI, 0.5-1.9) and diastolic blood pressure (1.0 mmHg; 95% CI, 0.4-1.5) (Rothenberg et al. 1999a). A study of a subset of the same cohort (n = 637), without regard to immigration status, found that every 10-µg/g increase in calcaneus (heel) bone lead increased the OR of third-trimester pregnancy hypertension (systolic > 140 and/or diastolic > 90 mmHg) by 1.86 (95% CI, 1.04-3.32).
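As a quick illustration of these "per doubling" effect sizes (our arithmetic, not from the source): a rise in third-trimester blood lead from the geometric mean of 2.3 µg/dL to 9.2 µg/dL is two doublings, so the associated systolic increase in the Rothenberg cohort would be

$$\Delta\mathrm{SBP} = 1.2 \times \log_2\!\left(\frac{9.2}{2.3}\right) = 1.2 \times 2 = 2.4\ \mathrm{mmHg}.$$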
# Medical Surveillance for Lead-Exposed Workers
The OSHA workplace standard for lead exposure in general industry (adopted in 1978) and a corresponding standard for lead exposure in construction trades (adopted in 1993) set forth medical surveillance requirements that include baseline and periodic medical examinations and laboratory testing. Details of the two standards, which establish distinct criteria for the implementation of surveillance, can be found on the OSHA website (OSHA 2002). Because of the concern regarding adverse health effects of lead associated with the lower levels of exposure discussed in this article, we recommend a revised schedule of medical surveillance activities (Table 2). Unlike the OSHA medical surveillance requirements, which apply only to workers exposed to airborne lead levels ≥ 30 µg/m³ as an 8-hr time-weighted average, the recommendations in Table 2 are intended to apply to all lead-exposed workers who have the potential to be exposed by lead ingestion, even in the absence of documented elevations in air lead levels (Sen et al. 2002). As shown in Table 2, the level of a worker's current blood lead measurement, as well as possible changes in lead-related exposure, influences the recommended time interval for subsequent blood lead measurements. Blood lead measurements should be obtained from a clinical laboratory that has been designated by OSHA as meeting the specific proficiency requirements of the OSHA lead standards. OSHA maintains a list of these laboratories on its website (OSHA 2005). Venous blood should be used for biological monitoring of adult lead exposure, except where precluded for medical or other reasons. Routine measurement of zinc protoporphyrin, a requirement of the OSHA lead standards, is not recommended in Table 2 because it is an insensitive biomarker of lead exposure in individuals with blood lead concentrations < 25 µg/dL (Parsons et al. 1991).
The content of the baseline or preplacement history and physical examination for lead-exposed workers should continue to follow the comprehensive scope set forth in the OSHA lead standard for general industry. Measurement of serum creatinine will identify individuals with chronic renal dysfunction who may be subject to increased health risks from lead exposure. With the potential exception of an annual blood pressure measurement and a brief questionnaire regarding the presence of medical conditions (such as renal insufficiency) that might increase the risk of adverse health effects of lead exposure, medical evaluations for lead-exposed workers should be unnecessary as long as blood lead concentrations are maintained < 20 µg/dL. Annual education of lead workers regarding the nature and control of lead hazards, and ongoing access to health counseling regarding lead-related health risks, are recommended as preventive measures.
# Lead Exposure during Pregnancy and Lactation
As summarized earlier in this article, the recent findings concerning lead-related adverse reproductive outcomes render it advisable for pregnant women to avoid occupational or avocational lead exposure that would result in blood lead concentrations > 5 µg/dL. Calcium supplementation during pregnancy may be especially important for women with past exposure to lead. Calcium decreases bone resorption during pregnancy and may minimize release of lead from bone stores. A trial among Mexican women with mean blood lead concentrations of approximately 9 µg/dL found that calcium supplementation during lactation may reduce the lead concentration of breast milk by 5-10%. Breast feeding should be encouraged for almost all women (Sinks and Jackson 1999), with decisions concerning women with very high lead exposure addressed on an individual basis.
# Medical Treatment of Elevated Blood Lead Concentration and Overt Lead Intoxication
Removal from all sources of hazardous lead exposure, whether occupational or nonoccupational, constitutes the first and most fundamental step in the treatment of an individual with an elevated blood lead concentration. A careful history that inquires about a broad spectrum of potential lead sources is recommended (Occupational Lead Poisoning Prevention Program 2006). Removal from occupational lead exposure will usually require transfer of the individual out of any environment or task that might be expected to raise the blood lead concentration of a person not using personal protective equipment above background levels (i.e., 5 µg/dL). If there has been a history of an affected individual bringing lead-contaminated shoes, work clothes, or equipment home from the workplace, evaluation of vehicles and the home environment for significant levels of lead-containing dust might be considered (Piacitelli et al. 1995). Although such "take-home" exposure might contribute to further lead exposure of the worker, it ordinarily poses more of a potential risk to young children and pregnant or nursing women who share the worker's home environment (Roscoe et al. 1999).
Medical treatment of individuals with overt lead intoxication involves decontamination, supportive care, and judicious use of chelating agents. Comprehensive discussion of such treatment is beyond the scope of this article but has been reviewed in recent medical toxicology texts (Kosnett 2001, 2005). A variety of chelating agents has been demonstrated to decrease blood lead concentrations and increase urinary lead excretion. A recent double-blind randomized clinical trial of oral chelation in young children with blood lead concentrations ranging from 22 to 44 µg/dL found that the drug succimer lowered blood lead concentrations transiently but did not improve cognitive function (Dietrich et al. 2004). Although anecdotal evidence suggests that chelation has been associated with improvement in symptoms and decreased mortality in patients with lead encephalopathy, controlled clinical trials demonstrating efficacy are lacking. Treatment recommendations are therefore mostly empiric, and decisions regarding the initiation of chelation therapy for lead intoxication have occasionally engendered controversy.
In our experience, adults with blood lead concentrations ≥ 100 µg/dL almost always warrant chelation, as levels of this magnitude are often associated with significant symptoms and may be associated with an incipient risk of encephalopathy or seizures. Occasionally, patients with very high blood lead concentrations may have no overt symptoms. Patients with blood lead concentrations of 80-99 µg/dL, with or without symptoms, can be considered for chelation treatment, as may some symptomatic individuals with blood lead concentrations of 50-79 µg/dL. These demarcations are imprecise, however, and decisions on chelation should be made on a case-by-case basis after consultation with an experienced specialist in occupational medicine or medical toxicology.
Hair lead analysis and measurement of urine lead concentration seldom provide exposure information of clinical value beyond that provided by the history and the measurement of blood lead concentration. Chelation initiated exclusively on the basis of hair or urine lead levels, or chelation of asymptomatic individuals with low blood lead concentrations, is not recommended.
Adults with overt lead intoxication will generally experience improvement in symptoms after removal from lead exposure and decline in blood lead concentration. This clinical observation of improvement in overt symptoms finds some support from the relatively limited number of studies that have examined the impact of naturally declining blood lead concentrations on cognitive function in occupationally exposed subjects (Chuang et al. 2005; Lindgren et al. 2003; Winker et al. 2006). Improvement or resolution of neurocognitive or neurobehavioral symptoms may sometimes lag the decline in blood lead concentration, possibly because of the relatively slower removal of lead from the central nervous system (Cremin et al. 1999; Goldstein et al. 1974). The pace of improvement can be highly variable and may range from weeks to a year or more depending on the magnitude of intoxication. Anecdotal experience and analogy to other forms of brain injury suggest a potential role for rehabilitative services (e.g., physical therapy, cognitive rehabilitation) in enhancing the prospect for recovery and in demonstrating the capacity for safe return to work. Short-term improvement in neurocognitive function associated with a decline in blood lead concentration does not obviate concern that long-term cumulative lead exposure may nonetheless have a deleterious effect on cognitive reserve and may accelerate age-related decline in cognitive function (Schwartz et al. 2005; Weisskopf et al. 2004).
# Additional Management Considerations
With appropriate engineering controls, safe work practices, and personal protective equipment, workers without a previous history of substantial lead exposure should be able to work with lead in a manner that minimizes the potential for hazardous levels of exposure. For such workers, elevations in blood lead concentration that result from unforeseen transient increases in exposure will often decline promptly once the exposure is controlled. However, in a worker with a long history of high exposure, redistribution of lead from a large internal skeletal burden may result in a prolonged elevation of blood lead concentration despite marked reductions in external lead dose.
The recommendations for management of adult lead exposure contained in this article are derived from consideration of risks to health and have not been the subject of a cost-benefit analysis examining economic feasibility or social impacts. Nonmedical, socioeconomic factors will likely influence how workers, employers, and clinicians respond to the recommendations. In particular, the blood lead concentrations at which some major interventions, such as removal from lead exposure, are recommended are considerably lower than those explicitly specified in the current OSHA lead standards (OSHA 2002). The OSHA standards do require an employer to implement reductions in exposure recommended by a physician who determines that an employee has a "detected medical condition" that places him or her at increased risk of "material impairment to health." This nonspecific provision could form the basis for implementation of protective workplace action at the lower blood lead concentrations recommended by the authors. Nonetheless, clinicians should inform patients that such recommendations may be contested by an employer or an insurer and could potentially jeopardize their job benefits or work status. Prudent case management that considers the worker's perspective on his or her unique health risks and employment situation will usually be advisable.
# Interpretative Guidance for Clinical Laboratory Report Forms
Clinical laboratories routinely offer brief interpretative guidance on the forms that report the results of blood lead determinations. There is considerable variability among laboratories regarding the content of such guidance, and laboratories exercise their own discretion regarding the source and detail of the information they provide.

Overexposure to inorganic lead continues to be an important problem worldwide. The reduction of lead in the U.S. environment, largely accomplished through effective EPA regulatory efforts, has lowered the overall geometric mean whole blood lead level (BLL) for the general population in the United States from approximately 13 µg/dL (0.63 µmol/L) in the 1970s to less than 2 µg/dL (0.10 µmol/L) (CDC 2005; NCHS 1984). Lead exposure remains a significant public health and medical concern for thousands of children and adults, exposed primarily through remaining lead-based paint in older housing stock and through workplace exposures, although other sources occur. For children and adults alike, environmental investigation and the identification and reduction or elimination of sources of exposure remain of primary importance. Although the clinical care of lead-exposed children is well established in the pediatric and public health communities, similar clinical recommendations for adults have not been widely available.
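Because both µg/dL and µmol/L appear throughout this document, the conversion between them is worth making explicit. This is standard unit arithmetic (not from the source); the factor follows from lead's atomic weight of 207.2 g/mol:

$$\mathrm{BLL}\ [\mu\mathrm{mol/L}] = \mathrm{BLL}\ [\mu\mathrm{g/dL}] \times \frac{10}{207.2} \approx \mathrm{BLL}\ [\mu\mathrm{g/dL}] \times 0.0483$$

For example, 13 µg/dL × 0.0483 ≈ 0.63 µmol/L, matching the parenthetical values given above.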
The purpose of this document is to provide useful advice to clinicians caring for adult patients who have been exposed to lead, whether at work, at home, through hobbies, in the community, through consumer products, retained bullets, or other sources. This document is derived, in part, from the input of an expert panel convened by the Association of Occupational and Environmental Clinics (AOEC). Three clinical scholars then considered the medical evidence submitted by the expert panel and incorporated many of the conclusions it reached. This paper therefore reflects a general consensus of the clinical views of AOEC members, not necessarily those of the expert panel, particularly in areas where the expert panel had been unable to come to consensus. The following points are emphasized:
1) Medical care serves as an adjunct to public health and industrial hygiene exposure control.
Clinicians who evaluate patients with potential lead exposure should have appropriate referral mechanisms in place for prevention of further exposure to lead. Although one goal of health care is to remove the patient from exposure, the social consequences of potential disruption of housing or of income may be important and must be considered by the clinician.
2) Current occupational standards are not sufficiently protective and should be strengthened. Although the federal Occupational Safety and Health Administration's (OSHA) lead standards have provided guidance that has been beneficial for lead-exposed workers, these regulations have not been substantially changed since the late 1970s and thus are primarily based on health effects studies that are well over three decades old. There is an urgent need to revise them.
3) The clinical guidelines presented here are appropriate for adults, recognizing that younger adults, particularly those in workplace settings, may share developmental risks that place them closer to pediatric populations, and that maternal exposure, whether in the workplace or in the general environment, places the developing fetus at risk for exposure.
4) Clinicians should feel free to contact any of the member AOEC clinics for additional telephone advice, and are encouraged to refer patients when appropriate.
# Background
Lead is used in over 100 industries. Job activities known to involve the use or disturbance of lead include: handling of lead-containing powders, liquids, or pastes; production of dust or fumes by melting, burning, cutting, drilling, machining, sanding, scraping, grinding, polishing, etching, blasting, torching, or welding lead-containing solids; and dry sweeping of lead-containing dust and debris. Adults also encounter lead in environmental settings and through activities such as home remodeling (particularly in homes built before 1978 that contain lead-based paint), lead-contaminated consumer products, traditional remedies, moonshine whiskey, hobbies such as melting lead sinkers or use of target ranges, retained bullets, and other sources.
Lead is not an essential element and serves no useful purpose in the body. A substantial body of recent research demonstrates that multiple health effects can occur at levels once considered safe. The routes of exposure for inorganic lead are inhalation and ingestion. Once absorbed, lead is found in all tissues, but eventually 90% or more of the body burden is accumulated (or redistributed) into bone with a biological half-life of years to decades. Lead is excreted primarily in the urine. Lead does not remain in the bone permanently but is slowly released back into the blood.
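A simple first-order decay model illustrates what a half-life of this magnitude implies. The calculation below is ours, for illustration only; the 10-year half-life is an assumed value within the "years to decades" range given above:

$$\text{fraction remaining} = \left(\tfrac{1}{2}\right)^{t/T_{1/2}}, \qquad T_{1/2} = 10\ \text{yr}:\ \ t = 20\ \text{yr} \Rightarrow 25\%\ \text{of the skeletal burden remains}$$

This slow release is why a worker's BLL can stay elevated long after external exposure ends.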
The "dose" or quantity of lead that a person receives will be determined by the concentration of lead in the air and/or the amount ingested as well as the duration of such exposure. The BLL remains the predominant biological marker used in clinical assessment, workplace monitoring, public health surveillance, and regulatory decisions regarding removal from exposure under the OSHA lead standards.
Research tools capable of measuring cumulative lead exposure, such as the use of in-vivo K-shell X-ray fluorescence (K-XRF) instruments for the rapid, non-invasive measurement of lead in bone, have expanded recent understanding of the long-term consequences of lead exposure on a population basis. These studies have demonstrated adverse effects of lead exposure across populations, including on neurologic, reproductive, and renal function and on blood pressure, that occur at extremely low levels of exposure and appear not to have a threshold. However, because inter-individual differences are greater than population differences at lower lead levels, these effects are less important for clinical evaluation than they are for public health policy. The preponderance of the evidence for adverse effects at levels of exposure far below those currently permitted by OSHA speaks forcefully for an immediate reduction in permissible exposure levels in the workplace and for enhanced public health attention to those sources, including among self-employed individuals, not currently subject to OSHA regulation.
# Health Effects of Lead
Because lead interferes with biochemical processes occurring in cells throughout the body, adverse effects occur in multiple organ systems. The non-uniformity of symptoms that appear in exposed individuals, as well as a growing body of epidemiologic studies, suggests that wide variation exists in individual susceptibility to lead poisoning. Early overt symptoms in adults are often subtle and nonspecific, involving the nervous, gastrointestinal, or musculoskeletal systems. High levels of exposure can result in delirium, seizures, stupor, coma, or lead colic.
Other overt signs and symptoms include hypertension, peripheral neuropathy, ataxia, tremor, gout, nephropathy, and anemia. In general, symptoms increase with increasing BLLs.
In addition to exposure that occurs from external sources, carefully performed lead isotope studies have demonstrated that pregnancy and lactation are both associated with large increases in the release of lead from the maternal skeleton. High levels of lead in women's bones at the time of childbirth have corresponded to lower birth weight, lower weight gain from birth to one month of age, and reduced head circumference and birth length.
In males, abnormal sperm morphology and decreased sperm count have been observed at BLLs of approximately 40 µg/dL (1.93 µmol/L) or less (Telisman et al. 2000). In the absence of effects on sperm count or concentration, the impact of paternal lead exposure on reproductive outcome is uncertain.
Recent research has examined several genetic polymorphisms that may influence lead uptake, distribution, and target organ toxicity. However, at this point in time, research findings are insufficient to conclusively identify subpopulations that may have increased susceptibility to lead toxicity based on specific genotypes. Other factors that might modify the risk of lead toxicity include pre-existing disease affecting relevant target organs (such as diabetic nephropathy or borderline hypertension), nutritional deficiencies (particularly of dietary cations such as iron and calcium), ethnicity, and aging.
# CLINICAL ASSESSMENT OF LEAD EXPOSURE
Taking a detailed medical and occupational/environmental history is a fundamental step in the assessment of a person with lead exposure. It is important to ask about exposure to lead in current and previous jobs (Table 1), protections used, biological and air monitoring data, hygiene practices, knowledge and training, hobbies, traditional medications, moonshine use and other non-occupational sources (Table 2). A medical and reproductive history is essential in identifying individuals at increased risk of adverse health effects from lead exposure. Table 3 summarizes symptoms and target organ toxicity of lead at progressive BLLs. Physical exam findings in lead poisoning are frequently lacking. Gingival lead lines and wrist or foot drop are rarely seen.
# Blood Lead Level and Zinc Protoporphyrin
The BLL is the most convenient and readily interpretable of the available lead biomarkers. It is mainly an estimate of recent external exposure to lead, but it is also in equilibrium with bone lead stores. The BLL alone is not a reliable indicator of prior or cumulative dose or total body burden; nor can a single BLL be used to confirm or deny the presence of chronic health effects thought due to lead exposure. The "normal" or "reference range" BLL is less than 5 µg/dL (0.24 µmol/L) for more than 90% (CDC 2005) of the adult population. When interpreting the BLL, key questions are whether the exposure has been 1) of short-term or long-term duration;
2) recent or in the remote past; and 3) of high or low intensity.
Erythrocyte protoporphyrin IX (EP), which can be measured as free EP (FEP) or zinc protoporphyrin (ZPP), is a measurement of biological effect and an indirect reflection of lead exposure. Lead affects the heme synthesis pathway. Increases in EP or ZPP are not detectable until BLLs reach 20 to 25 µg/dL (0.97-1.21 µmol/L), followed by an exponential rise relative to increasing BLLs. An increase in EP or ZPP usually lags behind an increase in BLL by two to six weeks.
Periodic testing of BLL and ZPP, called biological monitoring, is required by the OSHA lead standards for workers exposed to significant levels of airborne lead.
# Other Laboratory Tests
Depending on the magnitude of lead exposure, a complete blood count, serum creatinine, blood urea nitrogen, and complete urinalysis may be indicated. Evaluation of reproductive status may be pertinent for some lead-exposed adults.
It is important to check BLLs of family members, particularly children, of lead-exposed individuals. Lead workers may unwittingly expose their families to lead dust brought home on clothes, shoes and in cars.
Except for rare circumstances, there is little or no value in measuring lead in urine or hair.
Because of the pharmacokinetics of lead clearance, urine lead changes more rapidly and may vary independently of BLL. Urine lead is less validated than BLL as a biomarker of external exposure, or as a predictor of health effects. Lead in hair may be a reflection of external contamination rather than internal lead dose; laboratory analysis is not standardized.
# EXPOSURE INVESTIGATION
The occupational and environmental exposure history is the first step in identifying the source of the lead exposure. Identifying the source is essential both because the cornerstone of intervention is source removal or reduction and because others may be at risk from the same exposure.
# HEALTH-BASED MEDICAL MANAGEMENT
The single most important aspect of treating lead poisoning is removal from exposure, yet there may be important socioeconomic constraints for a given individual that limit this approach. For this reason, the panel and the AOEC petition OSHA to update the requirements of the current lead standards and urge clinicians to engage public health and industrial hygiene professionals whenever lead exposure is suspected.
Documented health risks and medical management recommendations are summarized in Table 4. The table presents recommendations for a broad range of BLLs. Although the BLL range is categorized in discrete steps, outcomes will not neatly conform to these arbitrary divisions, and the expectation of health effects in the BLL categories will also be influenced by cumulative dose. For example, clinical peripheral neuropathy can be present at the high end of the BLL 40 to 79 μg/dL (1.93-3.81 µmol/L) range, while it would not be expected to occur from lead exposure at the low end of the same range. The table is intended to assist clinicians in discussing the short-term and long-term health risks of lead exposure with their patients.
There are other instances where removal from lead exposure is warranted that are consistent with the OSHA lead standards. In addition to the specific "trigger" BLLs for medical removal protection (MRP) under the OSHA lead standards (e.g., BLL 50 µg/dL (2.41 µmol/L) or greater), the physician can remove an individual from lead work due to a medical condition that places the employee "at increased risk of material impairment to health from exposure to lead." Such conditions include chronic renal dysfunction (serum creatinine > 1.5 mg/dL (133 µmol/L) for men, > 1.3 mg/dL (115 µmol/L) for women, or proteinuria), hypertension, neurological disorders, cognitive dysfunction, and pregnancy.
Central nervous system effects may have a delayed onset and may sometimes persist well after the BLL has dropped below the levels at which the OSHA lead standards permit return to work. These persistent effects could negatively impact work performance and safety in certain jobs. Anecdotal evidence, and analogy to other neurotoxic injury, suggests that individuals who develop overt neurological signs and symptoms from lead exposure above that permissible under current OSHA regulations may benefit from rehabilitative measures (e.g., physical therapy, cognitive rehabilitation) that have been used effectively in patients with other brain injuries, such as traumatic brain injury or stroke. Participation in a rehabilitation program may enhance the prospect for recovery and may demonstrate the worker's capacity to safely return to work.
# Medical Surveillance
Medical surveillance is an essential part of an employer's lead safety program and includes biological monitoring with periodic BLL testing, medical evaluation and treatment if needed, and intervention to prevent or control identified exposure. The BLL is the best available measure of total exposure from both inhalation and ingestion. Biological monitoring provides feedback to the employer and worker about the efficacy of workplace controls, helps avoid surprises, and saves costs such as those associated with medical removal.
Currently, under the OSHA standards, a worker must be included in a lead medical surveillance program if his/her airborne lead exposure is 30 µg/m³ (eight-hour time-weighted average) or higher for more than 30 days per year. The panel believes that the trigger for medical surveillance should not rely solely on air monitoring results; instead, workers should be included in a medical surveillance program whenever they are handling or disturbing materials with a significant lead content in a manner that could reasonably be expected to cause potentially harmful exposure through inhalation or ingestion.
A medical surveillance program with increased frequency of BLL testing and early intervention for all lead-exposed workers is recommended to reduce health risks. The panel does not recommend routine ZPP testing as an early biomarker of lead toxicity; however, ZPP measurement is required by OSHA for certain levels of lead exposure. New employees and those newly assigned to lead work should have a preplacement lead medical examination and BLL test, followed by periodic BLL testing, blood pressure measurement, and health status review. Monthly BLL testing is recommended for the first three months of employment for an initial assessment of the adequacy of exposure control measures. Subsequently, testing frequency can be reduced to every six months as long as BLLs remain below 10 µg/dL (0.48 µmol/L). Any increase in BLL of 5 µg/dL (0.24 µmol/L) or greater should be addressed by re-examining control measures to see where improvements should be made and by increasing BLL monitoring if needed. If the task assignment changes to work with significantly higher exposures, the initial BLL testing schedule of monthly tests for the first three months at the new task should be repeated.
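The testing-interval logic described above can be summarized in a short sketch. This is our illustrative encoding of the schedule, not part of the guideline; the function name is invented, and the choice to keep testing monthly until BLLs fall below 10 µg/dL is an assumption, since the text does not state an interval for that case.

```python
# Illustrative sketch (not from the guideline text) of the recommended
# BLL testing schedule. Thresholds follow the text above.

def next_bll_test_interval_months(months_at_task, latest_bll, previous_bll=None):
    """Suggested months until the next blood lead test."""
    # New employees, or workers newly assigned to a significantly more
    # exposed task: monthly testing for the first three months.
    if months_at_task < 3:
        return 1
    # A rise of >= 5 ug/dL calls for re-examining controls and
    # increased monitoring.
    if previous_bll is not None and latest_bll - previous_bll >= 5:
        return 1
    # Testing can be reduced to every six months while BLLs stay
    # below 10 ug/dL.
    if latest_bll < 10:
        return 6
    # Assumption: remain on monthly testing until control improves.
    return 1

print(next_bll_test_interval_months(12, 8))                    # -> 6
print(next_bll_test_interval_months(12, 14, previous_bll=8))   # -> 1
```

As the next paragraph notes, a fixed rule like this would still need tailoring for highly variable exposures such as construction work.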
The above schedule for BLL testing may be inadequate for certain situations where the exposures are very high and/or highly variable. In these situations, the BLL testing schedule should be tailored to address the special risks of different types of work and exposures. For example, a construction worker may have very high, intermittent exposures in contrast to someone working in a battery plant or other general industry setting with significant exposures but less day-to-day variability. Employees assigned to tasks where exposures are extremely high (e.g., abrasive blasting) should be tested more frequently than recommended above, i.e., at least monthly. In general, it is a good idea to do BLL testing at peak exposures to assess controls and, specifically for the construction trades, to test pre-, mid-, and post-job.
Because of the significant reduction of lead in the general environment, new workers enter lead jobs with very low BLLs while others who have worked with lead often have much higher BLLs and body burdens. With increased biological monitoring frequency to ensure that low BLLs are maintained, it is possible that some workers with lead-related health risks may be able to work safely in a lead-exposed environment. All lead-exposed workers should receive education about the health effects of lead and prevention information from the clinician and the employer, and they should be provided necessary protections including protective clothing, clean eating areas, and hygiene measures such as wash-up facilities and/or showers to prevent both ingestion of lead and take-home exposures.
# Chelation Therapy
Primary management for adult lead poisoning is identification of the lead source and cessation of exposure. In adults, chelation therapy generally should be reserved for individuals with high BLLs and/or significant symptoms or signs of toxicity; there is no evidence-based guidance in this regard because appropriate studies are lacking. On a population basis it is important to reduce fetal exposure to lead, and maternal lead levels less than 5 µg/dL are optimal. However, laboratory measures are not absolutely precise, and clinical judgment is needed in every patient encounter. Chelation should be used during pregnancy ONLY to protect the life and health of the mother and ONLY if the potential benefit to the mother justifies the potential risk to the fetus. This decision will need to be made on a case-by-case basis by the attending physician. Because of the increase in lead mobilized from maternal bone during pregnancy, clinicians should be aware that maternal blood lead levels may exhibit an upward trend in the second and third trimesters even in the absence of further exposure. Women with a history of long-term lead exposure or prior elevated BLLs should be monitored regularly during pregnancy for BLL elevation. If the occupational history or clinical evaluation suggests elevated bone lead stores, clinicians may wish to counsel patients on delaying conception until the risk of mobilization of lead from bone depots has been reduced.
Prophylactic chelation therapy of lead-exposed workers, to prevent elevated BLLs or to routinely lower BLLs to pre-designated concentrations believed to be "safe," is prohibited by OSHA. Non-traditional uses of chelation therapy are not advised. There is no established basis to initiate chelation based on results of hair analysis or, in most cases, urine lead levels, nor to chelate asymptomatic individuals with low blood lead concentrations. Chelation should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus. Breast feeding during chelation therapy is not recommended; the effect of chelating agents on the fetus and newborn is unknown.
# Pregnancy and Breast Feeding Concerns
Prevention of fetal and postnatal lead exposure of breastfed infants requires identification and control of sources of environmental and occupational lead exposure (both endogenous and exogenous) for pregnant and lactating women. The CDC has established 10 µg/dL (0.48 µmol/L) as a BLL of concern in children.
Because fetal blood contains approximately 80% of the blood lead concentration of the mother, and because of the risk of spontaneous abortion, the panel recommends that the mother's BLL be kept below 5 µg/dL (0.24 µmol/L) from the time of conception through pregnancy. For women with a history of lead exposure, calcium supplementation during pregnancy may be especially important because it may minimize release of lead from bone stores and thereby reduce fetal lead exposure.
In a recent prospective study, umbilical cord BLL and maternal bone lead measured shortly postpartum were independent risk factors for impaired mental development of the infants assessed at 24 months of age, even after controlling for contemporaneous BLL. Long-term prospective studies suggest that the adverse neurodevelopmental effects of prenatal lead exposure may not persist into adolescence if early postnatal exposure falls to background levels (Bellinger et al. 1992). However, maternal BLL measured during pregnancy has been associated with alterations in brainstem auditory response in the offspring at age 5 and in retinal response at age 10 (Rothenberg et al. 2002b).
Lead does not concentrate in breast milk because it neither binds to nor dissolves in fat; thus, levels of lead are generally higher in a mother's blood than in her milk. Lead in human breast milk nonetheless appears to be well absorbed by breastfed infants. Breast feeding should still be encouraged in most situations, since the benefits generally outweigh the risks. Decisions relating to lactating women with evidence of very high lead exposure should be made on an individual basis.
If elevated maternal blood lead is suspected or demonstrated, the source(s) of lead exposure in the mother's diet, home, and work environment should be identified and mitigated. Also, the clinician should monitor infant BLLs during the early weeks of breast feeding. Only upon detection and elimination of all other suspected lead sources without corresponding reduction of infant BLL should cessation of breast feeding be advised.
# Retained Bullet
Gunshot injuries to the head, face, and neck may be associated with swallowed bullets, fragments, or pellets, which can result in a rapid increase in blood lead in the first days following injury. After detection of bullet fragments in the gut with X-rays, efforts to promote gastrointestinal decontamination may result in a gradual reduction of blood lead over the following weeks. Retained bullets or fragments, particularly those in joint spaces, are risk factors for elevated BLL after injury. Decisions to remove bullet fragments embedded in tissue should be made in consultation between the treating physician and the surgeon. Individuals with retained bullets should receive baseline and periodic blood lead testing to monitor their lead status. Follow-up blood lead levels may not be needed if the bullets are in muscle tissue and physicians are sure the lead fragments have not migrated from muscle into tissues more likely to allow lead uptake.
# CONCLUSIONS
AOEC offers these Guidelines as a resource for health care providers, public health professionals, employers, and others to utilize in providing medical management of lead-exposed adults. In this document, the panel has summarized the current scientific evidence concerning the non-carcinogenic adverse health effects in adults from exposure to inorganic lead.
The toxic effects of lead can occur without overt symptoms. A substantial body of recent research demonstrates a high probability that lead exposure at levels previously thought to be of little concern can result in an increased risk of adverse chronic health effects if the exposure is maintained for many years, thereby resulting in a progressively larger cumulative dose. Such effects may include elevations in blood pressure and increased risk of hypertension, kidney disease, cognitive dysfunction and/or accelerated declines in cognitive function, and reproductive risks.
Prevention of lead exposure should remain the primary goal of health care providers, public health professionals, and employers. Biological monitoring, mainly by periodic measurement of blood lead levels (BLLs) for adults engaged in activity with potential exposure to lead, should be conducted routinely to assess the efficacy of primary prevention and to guide the clinician in determining whether exposure has become excessive. Clinicians are encouraged to advise patients of the risks associated with any elevation of lead level and to advocate strongly for environmental controls that would maintain BLLs below 10 μg/dL (0.48 µmol/L) wherever feasible.
# Counseling and Education
# Follow up with your doctor
How often you will need a blood lead test is based on the results of your previous blood tests as well as your risk for further exposure.
Discuss breastfeeding with your doctor. Breastfeeding is generally considered safe in most cases.
# Eat a healthy diet during pregnancy
It is important to eat foods with enough calcium, iron and vitamin C.
Talk to your doctor to make sure you are getting enough of these nutrients. Your doctor may suggest changes to your diet or may prescribe a supplement to help you get enough of these nutrients.
# Reduce your exposure to lead
Avoid using medicines, spices, foods or cosmetics from other countries. They are more likely to contain lead than products made in the United States.
Avoid using clay pots and dishes from other countries to cook, store or serve food. Do not use pottery that is chipped or cracked.
Never eat non-food items such as clay, soil, pottery or paint chips.
Stay away from any repair work being done in your home.
Avoid jobs and hobbies that may involve contact with lead.
# Get other household members tested for lead
This is especially important for children younger than 6 years of age, children with developmental problems and pregnant women.
Older children and adults should be tested if they may have had contact with lead.
# For more information about lead poisoning
Speak with your doctor.
You can contact me at 212-676-6379.
Call 311 and ask for the BAN-LEAD information line.

3. Leaded solder is typically used to hold the stained glass together at the seams.
4. Some older pesticides may contain lead arsenate, usually in powder form.
5. Antique toys or those produced in another country may have lead paint.
6. Some brands of green pool cue chalk may contain lead.
7. Colored newsprint, more likely glossy print, may be printed with ink containing lead.
8. Bullets and shot used for reloading are made of lead, and the dust from reloading may also be a hazard.
9. Pewter contains lead.
10. Old paint and varnish may contain lead.
11. Paint and glaze used on ceramics and pottery may contain lead.
# F. IMPORTED PRODUCTS
Now I am going to ask you about some products you may have used or come in contact with, such as medications and health remedies, foods, or spices. Some of these products may be made in other countries and may contain lead. They could be products:
- sent by friends and family
- bought in local stores
- brought back from trips you may have taken
- or given to you by friends or family

I want to find out if you used or were given any of these products during the past 12 months.

Blood hemoglobin levels are routinely measured to screen for iron deficiency anemia (IOM 2001), and serum concentration of 25(OH)D is the best indicator of vitamin D status (IOM 1997). Biochemical assessment of nutrient status is not routinely performed for all vitamins and minerals, however, because reliable and valid laboratory tests of nutritional status are not available for many nutrients, such as zinc and calcium (reference needed).
In the absence of biochemical assessment options, dietary assessment methods are utilized to estimate usual dietary intake and to screen for possible dietary inadequacies. The most commonly used dietary assessment methods include: 24-hour recalls, food (diary) records, and food frequency questionnaires.
# 24-Hour Recall
During a 24-hour recall, an individual is asked to report food and fluid intake for the previous day (Hu 2008). The individual is probed for additional information about each food or beverage consumed, including preparation method and the portion size eaten. A brief qualitative assessment of the recall is usually conducted by the clinician performing the assessment; common qualitative assessments include estimation of the number of servings of fruits and vegetables eaten or checking for the inclusion of iron-rich or calcium-rich foods. The 24-hour recall is the dietary assessment method most commonly utilized in clinical settings because it can be conducted in a short amount of time and does not require advance preparation or complicated scoring. Limitations of the 24-hour recall include reliance on memory, difficulty in estimating portion sizes, underreporting of food intake, and intentional omission of nutrient-poor foods. In addition, food consumed on the previous day may not be representative of usual dietary intake.
# Food Record
The food record (diary) method requires that an individual record in detail all the foods and beverages consumed over one or more days (typically between 3 and 7 days) (Hu 2008). The individual completing the food record is typically taught recording procedures, such as portion size estimation, before starting. Quantitative analysis of completed food records is typically performed by a Registered Dietitian so that dietary inadequacies and excesses can be identified. Although food records are often considered the "gold standard" of dietary assessment, this approach has a number of limitations. Food records are time- and labor-intensive for both the individual completing the record and the individual conducting the dietary analysis, and individuals may alter their food intake while completing the record.
# Food Frequency Questionnaire
Food Frequency Questionnaires (FFQs) were developed to assess long-term dietary intake and are often used in epidemiologic studies (Hu 2008). Numerous food frequency questionnaires have been developed and validated for use in specific populations. A food frequency questionnaire consists of a list of food items and beverages. Individuals are asked to report their usual consumption over a specified period of time from a list of frequency categories. The average intake over the designated time period of an assortment of nutrients is calculated based on the individual's responses. FFQs have a number of advantages in epidemiologic studies such as minimal respondent burden and low costs. However, since FFQs lack the detail of dietary records or 24-hour recalls, they provide less accurate estimates of absolute intake.
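As a concrete illustration of the calculation described above, the sketch below computes an average daily calcium intake from FFQ responses. All food items, frequency categories, and nutrient values are invented for illustration; real FFQs rely on validated food lists and nutrient databases.

```python
# Illustrative FFQ scoring sketch: average daily nutrient intake is the
# sum over foods of (frequency per day) x (nutrient per standard serving).

FREQ_PER_DAY = {
    "never": 0.0,
    "1-3 per month": 2.0 / 30,
    "1 per week": 1.0 / 7,
    "2-4 per week": 3.0 / 7,
    "1 per day": 1.0,
    "2+ per day": 2.5,
}

# mg of calcium per standard serving (illustrative values only)
CALCIUM_PER_SERVING = {"milk": 300, "yogurt": 415, "spinach": 120}

def average_daily_calcium(responses):
    """responses maps food item -> chosen frequency category."""
    return sum(
        FREQ_PER_DAY[freq] * CALCIUM_PER_SERVING[item]
        for item, freq in responses.items()
    )

print(average_daily_calcium(
    {"milk": "1 per day", "yogurt": "2-4 per week", "spinach": "1 per week"}
))  # approximately 495 mg/day
```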
# Template for Letter to Construction Employer re: Occupational Exposure
# TEMPLATE FOR HEALTH CARE PROVIDER LETTER TO EMPLOYER
Prior to issuing such a letter, the health care provider should discuss the contents with the affected employee and obtain her authorization.

My evaluation of Ms. ________ indicates that she is pregnant or planning to conceive. Lead exposure has been associated with adverse reproductive outcomes, including an increased risk of miscarriage, hypertension during pregnancy, decreased fetal growth, and developmental problems in children born to lead-exposed mothers. The U.S. Centers for Disease Control and Prevention recommends that women who are or may become pregnant limit their exposure to lead.
# Physician
In accordance with the OSHA Lead Standards, this letter represents my medical opinion that Ms. ________ should be removed from lead exposure at your company. This removal should remain in effect until she is no longer pregnant or no longer trying to conceive a child. In the interim, Ms. ________ is capable of continuing to work at a job task or location associated with her employment that would not be expected to result in a blood lead concentration ≥ 5 μg/dL. I am available to discuss the acceptability of any alternative work assignments for the patient with you or one of your representatives.
I have also attached a brochure that discusses the health effects of lead exposure and outlines steps that may be taken to reduce workplace exposure.

Health damage from lead:
- Can be permanent.
- Can be occurring even if you have no symptoms.
- May not show up until many years later.
# If you work with lead you need to:
- Find out how much lead is in your blood.
- Talk to your doctor about lead and your health.
- Take steps to protect yourself at work.
# What health damage can low levels of lead cause?
Studies in recent years show that low levels of lead in adults can:
- increase blood pressure, which may increase your chances of having a heart attack or stroke.
- decrease brain function, making it more difficult to think, learn, and remember.
- decrease kidney function, making it more difficult to get rid of toxic waste products through your urine.
Levels of lead once thought harmless are now shown to be toxic.

# How does lead get into my body?
Lead gets into the body through the air you breathe. You can also swallow lead without knowing it if lead dust gets onto your hands or face or on food you eat.
# How do I know how much lead is in my body?
Get a blood lead level test. This test measures the amount of lead in a person's blood. Blood lead test results are reported as micrograms of lead per deciliter of blood (µg/dL or mcg/dL). The typical blood lead level for adults in the U.S. is less than 2 µg/dL. Even if you feel fine, you should get tested.
# What level of lead is harmful?
Some of the harmful effects of lead have been seen at very low levels. Scientists and doctors now recommend that blood lead levels be kept below 10 µg/dL. Pregnant women or women considering pregnancy should not have a blood lead level above 5 µg/dL.
# Will my health be damaged?
No one can predict for sure whether your health will be damaged at a low blood lead level. Your risk (chance) of suffering from health damage increases with the amount of lead in your blood and the length of time you have been exposed. It will also depend on whether you have any health conditions that place you at higher risk of damage from lead.
If your blood lead level has been above 10 µg/dL for more than a year, the most important thing you can do is take steps to lower your exposure in the future. Information on how you can protect yourself is on pages 4 and 5.
You should also talk to your personal doctor about whether you have any medical conditions that may make you more sensitive to the harmful effects of lead.
# What should I tell my doctor?
Your doctor needs to know if you work with lead. Your doctor can order a blood lead level test if you need one. Also, you may have a medical condition that makes you more sensitive to the harmful effects of lead.
# Tell the doctor:
- What you do at work.
- How long you have been at your job.
- Any lead jobs you've had in the past.
- If you've ever had a blood lead level test.
- If you've had to be moved to a different job or be off work because your lead level was high.
- If you think working with lead is making you sick.
Women should also tell their doctor if they are pregnant or considering becoming pregnant.
# Ask the doctor if you:
- Have any medical conditions that may make you more sensitive to the effects of lead.
# My blood lead level has been high for years. Should I find other work?
Whether you continue to work with lead is a personal decision. It is often a tough decision to make. When making this decision, consider:
- Are there steps you can take to lower your exposure to lead? See pages 4 and 5 for steps you can take to protect yourself.
- Do you have any health conditions that may make you more sensitive to the harmful effects of lead?
- If you have a medical condition that places you at higher risk, can you transfer to another job without lead at the same company?
- Ask your employer for a respirator to wear while you work with lead. If you already wear a respirator, ask whether there is another type of respirator that will protect you better. If you use a respirator, your employer has to pay for a doctor to evaluate whether you can wear one safely. Your employer must also provide you with a fit-test to make sure that the respirator fits you well.
Get a blood lead level test at least every 6 months.
- Ask your employer for a blood lead level test. If you have significant lead exposure at work, your employer must provide you with a test and pay for it.
- Ask your personal doctor for a test if your employer doesn't provide one.
# For industrial workers
# What can I do to protect myself?
Get tested at least every 6 months.
- Wash your hands and face with soap and water before eating or drinking and before leaving work. Use a portable plastic container with a spigot if running water is not available.
- Ask your employer for a respirator to wear while you work with lead. If you already wear a respirator, ask whether there is another type of respirator that will protect you better. If you use a respirator, your employer has to pay for a doctor to evaluate whether you can wear one safely. Your employer must also provide you with a fit-test to make sure that the respirator fits you well.
Get a blood lead level test at least every 6 months.
- Ask your employer for a blood lead level test. If you have significant lead exposure at work, your employer must provide you with a test and pay for it.
- Ask your personal doctor for a test if your employer doesn't provide one.
# For construction workers
# What can I do to protect myself?
Strip back paint before cutting or welding.
Attach power tools to a HEPA vacuum.
Use a long-handled torch and stand upwind.
Get tested at least every 6 months.
# Worksite Evaluation Form
# What your employer should do to protect you
The best thing that your employer can do is to get rid of lead and lead-containing materials. If it's not possible to get rid of the lead, your employer should take steps to keep the amount of lead in the workplace as low as possible. Your employer should:
Train you to work safely with lead.
Provide wash-up and shower facilities.
- If you work in construction, these may be portable wash stations and portable showers.
- Your employer should provide you sufficient time to wash up before breaks, lunch, and going home.
Provide clean areas for eating and changing.
Provide work clothes and work shoes that stay at the job site.
Provide a HEPA vacuum or tools for wet cleaning the work area.
Install local exhaust ventilation whenever possible.
- If there is already local exhaust ventilation, your employer should check it regularly to make sure it works well.
Provide you with the right tools to keep lead dust and fume levels down, such as power tools attached to a HEPA vacuum and long-handled torches.
Separate lead work areas from non-lead work areas.
- In construction, plastic sheeting can be used to isolate dusty work from the surrounding area.
Provide you with a respirator to give you even more protection.
- If you use a respirator, your employer has to pay for a doctor to evaluate whether you can wear one safely. Your employer must also provide you with a fit-test to make sure that the respirator fits you well.
Provide you with a blood lead level test at least every six months.
# OLPPP Occupational Lead Poisoning Prevention Program
California Department of Public Health, Occupational Health Branch, 850 Marina Bay Parkway, Building P, Third Floor, Richmond, CA 94804
"id": "a30401e698342e054cc0feaf8df65fcffdd1ed44",
"source": "cdc",
"title": "None",
"url": "None"
} |
During the 2010 influenza season in Australia, administration of a 2010 Southern Hemisphere seasonal influenza trivalent inactivated vaccine (TIV) (Fluvax Junior and Fluvax) manufactured by CSL Biotherapies was associated with increased frequency of fever and febrile seizures in children aged 6 months through 4 years (1). Postmarketing surveillance indicated increased reports of fever in children aged 5--8 years after vaccination with Fluvax compared to previous seasons. An antigenically equivalent 2010--11 Northern Hemisphere seasonal influenza TIV (Afluria) manufactured by CSL Biotherapies is approved by the Food and Drug Administration (FDA) for persons aged ≥6 months in the United States. Prescribing information for the 2010--11 Afluria formulation includes a warning that "Administration of CSL's Southern Hemisphere influenza vaccine has been associated with increased postmarketing reports of fever and febrile seizures in children predominantly below the age of 5 years as compared to previous years" (2). In the United States, annual influenza vaccination is recommended for all persons aged ≥6 months. On August 5, 2010, the Advisory Committee on Immunization Practices (ACIP) recommended that the 2010--11 Afluria vaccine not be administered to children aged 6 months through 8 years. Other age-appropriate, licensed seasonal influenza vaccine formulations should be used for prevention of influenza in these children. If no other age-appropriate, licensed inactivated seasonal influenza vaccine is available for a child aged 5--8 years who has a medical condition that increases their risk for influenza complications (3), Afluria can be used; however, providers should discuss with the parents or caregivers the benefits and risks of Afluria use before administering this vaccine to children aged 5--8 years.
# Background
In Australia and New Zealand, use of 2010 Fluvax Junior (0.25 mL preparation) and Fluvax (0.5 mL preparation) was suspended in children aged <5 years because of reports of fever and febrile seizures occurring after receipt of these vaccines in children aged 6 months through 4 years (1,4--7). Australia and New Zealand are the only Southern Hemisphere countries in which Fluvax Junior and Fluvax have been used during 2010.

Investigations in Australia indicated that administration of 2010 Fluvax or Fluvax Junior was associated with higher rates of fever in young children 4--24 hours after vaccination when compared with rates observed with TIV during previous years (1). A retrospective cohort study among children aged <5 years who received TIV in 2010 reported that the risk for fever following receipt of Fluvax was 6.5 times greater than for Influvac (Solvay/Abbott), a different TIV (1). Other data indicated that the rate of fever in 2010 was eight times greater after receipt of Fluvax Junior versus Influvac among children aged <3 years, and 10 times greater for Fluvax versus Influvac among children aged 3--4 years (1). A follow-up New Zealand study among more than 300 children aged <5 years found substantially increased febrile reactions in the 24 hours after receipt of Fluvax, but not with Vaxigrip (sanofi pasteur), another TIV (6). Postmarketing surveillance found increased reports of fever in children aged 5--8 years after receipt of 2010 Fluvax compared with reports for the same product in three previous seasons (unpublished data, CSL; 2010). An increased frequency of fever after receipt of 2009 CSL seasonal TIV compared with TIV from another manufacturer among children aged 6 months through 8 years also was reported in a U.S. clinical trial (2).

Additional investigations determined that the higher frequencies of fever with Fluvax and Fluvax Junior in Australia during 2010 were associated with substantially higher rates of febrile seizures in children aged 6 months through 4 years; febrile seizures occurred a mean of 7.2 hours (range: 5.9--8.4 hours) after vaccination (1). Overall, the rate of febrile seizures following Fluvax and Fluvax Junior was estimated at ≤9 per 1,000 doses administered, approximately nine times higher than expected (1). Among children aged 6 months through 2 years, the rate of febrile seizures after vaccination with Fluvax Junior was approximately 10 per 1,000 doses administered; among children aged 3--4 years, rates were 1.5 (Fluvax) to 14 (Fluvax Junior) per 1,000 doses administered, versus zero for Influvac in both age groups (1).

Before Fluvax use in New Zealand was suspended in young children on April 26, 2010, nine cases of febrile seizures were reported in children aged <5 years after receiving Fluvax, and one case was reported after vaccination with an unknown influenza vaccine that was strongly suspected to be Fluvax (6). No febrile seizures were reported in an estimated 5,000 to 7,000 children aged <5 years who received approximately 10,000 to 12,000 doses of Vaxigrip, and no febrile seizures were reported after Influvac in New Zealand (6). To date, despite extensive investigations, no biological cause (e.g., contamination or incomplete virus inactivation or disruption) has been identified to explain the increase in febrile reactions and febrile seizures associated with Fluvax Junior and Fluvax among children in 2010.
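The rates and ratios above follow from simple arithmetic on doses and events. The Python sketch below shows that computation; the counts are hypothetical placeholders, not the Australian or New Zealand surveillance data.

```python
# Hypothetical counts, used only to illustrate the rate arithmetic.

def rate_per_1000(events: int, doses: int) -> float:
    """Crude adverse-event rate per 1,000 doses administered."""
    return 1000.0 * events / doses

observed = rate_per_1000(45, 4500)  # e.g., 45 febrile seizures / 4,500 doses
expected = rate_per_1000(5, 4500)   # assumed background for the same doses
rate_ratio = observed / expected    # the "times higher than expected" figure

print(f"observed {observed:.1f}/1,000; expected {expected:.1f}/1,000; "
      f"ratio {rate_ratio:.1f}x")
```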
In the United States, annual influenza vaccination is recommended for all persons aged ≥6 months (3). Alternative, age-appropriate, approved TIV formulations are available for children aged ≥6 months, and live attenuated influenza virus vaccine (LAIV) is approved for healthy children aged ≥2 years (Table ). Studies that assessed adverse events after receipt of TIV or LAIV in the United States during past influenza seasons (8--10) and unpublished surveillance data have not demonstrated an association between TIV administration and febrile seizures.
Afluria was approved by FDA in 2007 for persons aged ≥18 years. Since November 2009, Afluria has been approved by FDA for persons aged ≥6 months. The manufacturing process for 2010 Fluvax and Fluvax Junior is the same as for 2010--11 Afluria, and the vaccine strains are antigenically equivalent, although the influenza A (H3N2) virus strains are different. For the 2010--11 influenza season, the warnings and precautions section of the Afluria package insert was revised to include the increased incidence of fever and febrile seizures in young children, predominantly among those aged <5 years, based on postmarketing reports from Australia and New Zealand (2). Limited information is available about seasonal influenza vaccine coverage or the risk of febrile seizures or fever in children aged ≥5 years from Australia and New Zealand. However, available data to date suggest that children aged 5--8 years might experience a higher incidence of fever after vaccination with Fluvax. No information is available on the risk of febrile seizures in children aged 5--8 years, although febrile seizures from any cause are uncommon in this age group.
# Recommendations
Based on the available information, ACIP recommendations for the 2010--11 influenza season in the United States include the following (restated as decision logic in the sketch after this list):

- Afluria should not be used in children aged 6 months through 8 years.
- Other age-appropriate, licensed seasonal influenza vaccine formulations, including other TIVs and LAIV, have not been associated with an increased risk of fever or febrile seizures, are safe, and should be used for prevention of influenza in children aged 6 months through 8 years.
- If no other age-appropriate, licensed inactivated seasonal influenza vaccine is available for a child aged 5--8 years who has a medical condition that increases the child's risk for influenza complications (3), Afluria can be used; however, providers should discuss with the parents or caregivers the benefits and risks of influenza vaccination with Afluria before administering this vaccine.
- Afluria may be used in persons aged ≥9 years.
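The age and availability logic of these recommendations can be restated as a small decision function. The Python sketch below is illustrative only; the function name and the month-based encoding of the age bands are assumptions, not part of any CDC guidance tool.

```python
# Illustrative restatement of the 2010-11 ACIP Afluria recommendations.
# Ages are encoded in months: 6 mo through 8 yr = 6-107 months,
# 5-8 yr = 60-107 months, >=9 yr = 108+ months.

def afluria_guidance(age_months: int, high_risk_condition: bool,
                     other_inactivated_vaccine_available: bool) -> str:
    if age_months >= 108:
        return "Afluria may be used."
    if (60 <= age_months <= 107 and high_risk_condition
            and not other_inactivated_vaccine_available):
        return ("Afluria can be used after discussing its benefits and "
                "risks with the parents or caregivers.")
    if 6 <= age_months <= 107:
        return ("Do not use Afluria; use another age-appropriate, "
                "licensed seasonal influenza vaccine.")
    return "No seasonal influenza vaccine is recommended under age 6 months."

print(afluria_guidance(age_months=72, high_risk_condition=True,
                       other_inactivated_vaccine_available=False))
```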
# Safety Monitoring
Although CSL's Southern Hemisphere 2010 seasonal influenza vaccine is the only influenza vaccine that has been associated with increased reports of fever and febrile seizures in young children, CDC, FDA, and other federal agencies will, as in previous seasons, closely monitor the safety of seasonal influenza vaccines during 2010--11. CDC will rely primarily on the Vaccine Adverse Event Reporting System (VAERS)† and the Vaccine Safety Datalink (VSD)§ to conduct safety monitoring. VAERS is a passive reporting system, co-managed by CDC and FDA, that identifies potential vaccine safety problems in the United States. VAERS reports following 2010--11 influenza vaccinations will be reviewed regularly, with special attention to reports of febrile seizures in children aged <9 years. VSD is a collaboration of eight managed-care organizations with more than 9 million members that links computerized vaccination and health-care encounter data. VSD will be used for rapid, ongoing analyses to monitor for serious adverse events associated with vaccination against seasonal influenza, including seizures in young children. VSD also is available to evaluate possible associations detected by VAERS or other sources, as needed.
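The observed-versus-expected comparisons these systems perform can be illustrated with a toy example. VAERS and VSD use more sophisticated sequential methods; the stdlib-only Python sketch below simply asks how surprising an observed weekly report count is under a Poisson background, using hypothetical numbers.

```python
import math

def poisson_tail(observed: int, expected: float) -> float:
    """P(X >= observed) for X ~ Poisson(expected)."""
    return 1.0 - sum(math.exp(-expected) * expected**k / math.factorial(k)
                     for k in range(observed))

# Hypothetical week: 12 febrile-seizure reports where ~4 would be expected.
p = poisson_tail(observed=12, expected=4.0)
print(f"P(X >= 12 | lambda = 4) = {p:.1e}")
if p < 0.001:
    print("possible signal; review case reports and confirm in VSD")
```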
# Reported by

Advisory Committee on Immunization Practices (ACIP).
In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2--4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record within the past 12 months should not receive FluMist.
Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.
"id": "d3e264b4f1f87ed845259ed107e511ed1be956d5",
"source": "cdc",
"title": "None",
"url": "None"
} |
epfl-llm/guidelines | # Introduction
Worldwide, tuberculosis is the most common opportunistic infection among people with HIV infection. In addition to its frequency, tuberculosis is also associated with substantial morbidity and mortality. Despite the complexities of treating two infections requiring multidrug therapy at the same time, antiretroviral therapy can be life-saving among patients with tuberculosis and advanced HIV disease. Observational studies in a variety of settings have shown that use of antiretroviral therapy during tuberculosis treatment results in marked decreases in the risk of death or other opportunistic infections among persons with tuberculosis and advanced HIV disease (1,2).
Concomitant use of treatment for tuberculosis and antiretroviral therapy is complicated by the adherence challenge of polypharmacy, overlapping side effect profiles of antituberculosis drugs and antiretroviral drugs, immune reconstitution inflammatory syndrome, and drug-drug interactions (3). The key interactions, and the focus of this document, are those between the rifamycin antibiotics and four classes of antiretroviral drugs: protease inhibitors, non-nucleoside reverse-transcriptase inhibitors (NNRTIs), CCR5-receptor antagonists, and integrase inhibitors (3). Only two of the currently available antiretroviral drug classes, the nucleoside analogues (other than zidovudine (4)) and enfuvirtide (a parenteral entry inhibitor), do not have significant interactions with the rifamycins.
The purpose of this summary is to provide the clinician with updated recommendations for managing the drug-drug interactions that occur when using antiretroviral therapy during tuberculosis treatment. Changes from previous versions of these guidelines include: an effort to obtain and summarize the clinical experience of using specific antiretroviral regimens during tuberculosis treatment (not just pharmacokinetic data), a table summarizing the clinical experience with key antiretroviral regimens and providing recommended regimens (Table 1), and sections on treatment for special populations (young children, pregnant women, patients with drug-resistant tuberculosis). We include drug-drug interaction data for antiretroviral drugs that have been approved or are currently available through expanded access programs in the United States; these recommendations will be updated as additional antiretroviral drugs become available.
# The Role of Rifamycins in Tuberculosis Treatment
Despite the complexity of these drug interactions, the key role of the rifamycins in the success of tuberculosis treatment mandates that the drug-drug interactions between the rifamycins and antiretroviral drugs be managed, not avoided by using tuberculosis treatment regimens that do not include a rifamycin or by withholding antiretroviral therapy until completion of anti-tuberculosis therapy among patients with advanced immunodeficiency. In randomized trials, regimens without rifampin or in which rifampin was only used for the first two months of therapy resulted in higher rates of tuberculosis treatment failure and relapse (5,6). The sub-optimal performance of the regimen of two months of rifampin (with isoniazid, pyrazinamide, and ethambutol) followed by 6 months of isoniazid + ethambutol was particularly notable among participants with HIV co-infection (5). Therefore, patients with HIV-related tuberculosis should be treated with a regimen including a rifamycin for the full course of tuberculosis treatment, unless the isolate is resistant to the rifamycins or the patient has a severe side effect that is clearly due to the rifamycins. Furthermore, patients with advanced HIV disease (CD4 cell count < 100 cells/mm3) have an increased risk of acquired rifamycin resistance if treated with a rifamycin-containing regimen administered once or twice-weekly (1,7). The rifamycin-based regimen should be administered daily (5-7 days per week) for at least the first 2 months of treatment among patients with advanced HIV disease (8,9).
# Predicting Drug Interactions Involving Rifamycins
Knowledge of the mechanisms of drug interactions can help predict the likelihood of an interaction, if that specific combination of drugs has not been formally evaluated. The rifamycin class upregulates (induces) the synthesis of several classes of drug-transporting and drug-metabolizing enzymes. With increased synthesis, there is increased total activity of the enzyme (or enzyme system), thereby decreasing the serum half-life and serum concentrations of drugs that are metabolized by that system. The most common locus of rifamycin interactions is the cytochrome P450 enzyme system, particularly the CYP3A4 and CYP2C8/9 isozymes. To a lesser extent, rifampin induces the activity of the CYP2C19 and CYP2D6 isozymes. The rifamycins vary in their potential as CYP450 inducers, with rifampin being most potent, rifapentine intermediate, and rifabutin much less active. Rifampin also upregulates the synthesis of cytosolic drug-metabolizing enzymes, including glucuronosyl transferase, an enzyme involved in the metabolism of zidovudine (10) and raltegravir.
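The consequence of induction can be made concrete with two standard pharmacokinetic identities: the average steady-state concentration is F x Dose / (CL x tau), and the half-life is ln(2) x V / CL, so doubling clearance halves both. The Python sketch below uses arbitrary placeholder parameters, not values for any particular drug.

```python
import math

def steady_state_avg(dose_mg: float, bioavailability: float,
                     clearance_l_per_h: float, interval_h: float) -> float:
    """Css,avg (mg/L) = F * Dose / (CL * tau)."""
    return bioavailability * dose_mg / (clearance_l_per_h * interval_h)

def half_life_h(volume_l: float, clearance_l_per_h: float) -> float:
    """t1/2 (h) = ln(2) * V / CL."""
    return math.log(2) * volume_l / clearance_l_per_h

CL_BASELINE = 10.0               # L/h, placeholder value
CL_INDUCED = 2.0 * CL_BASELINE   # induction doubles metabolizing activity

for label, cl in (("baseline", CL_BASELINE), ("induced", CL_INDUCED)):
    css = steady_state_avg(600.0, 0.8, cl, 24.0)
    t_half = half_life_h(250.0, cl)
    print(f"{label}: Css,avg = {css:.2f} mg/L, t1/2 = {t_half:.1f} h")
```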
# Rifampin and Antiretroviral Therapy
The most important drug-drug interactions in the treatment of HIV-related tuberculosis are those between rifampin and the NNRTIs, efavirenz and nevirapine. Rifampin is the only rifamycin available in most of the world, and initial antiretroviral regimens in areas with high rates of tuberculosis consist of efavirenz or nevirapine (in combination with nucleoside analogues). Furthermore, because of its potency and durability in randomized clinical trials, efavirenz-based therapy is a preferred option for initial antiretroviral therapy in developed countries.
# Rifampin and Efavirenz
Rifampin causes a measurable, though modest, decrease in efavirenz concentrations (11,12) (Table 2). Increasing the dose of efavirenz from 600 mg daily to 800 mg daily compensates for the effect of rifampin (11,12), but it does not appear that this dose increase is necessary to achieve excellent virological outcomes of therapy (12). Trough concentrations of efavirenz, the best predictor of its virological activity, remain well above the concentration necessary to suppress HIV in vitro among patients on concomitant rifampin (13). A testament to the potency of efavirenz against HIV is that the standard dose of efavirenz results in very high rates of complete viral suppression despite 10-fold interpatient differences in trough concentrations. Therefore, it is unlikely that the 20% decrease in serum concentrations resulting from rifampin will have a clinically significant effect on antiretroviral activity. In several cohort studies, antiretroviral therapy of standard-dose efavirenz + 2 nucleosides was well-tolerated and highly efficacious in achieving complete viral suppression among patients receiving concomitant rifampin-based tuberculosis treatment (14,15). Furthermore, there was no apparent benefit from a higher dose of efavirenz (800 mg daily) in one randomized trial (12), and a small observational study documented high serum concentrations and neurotoxicity among 7 of 9 patients receiving the 800 mg dose with rifampin (16). Therefore, this combination (efavirenz-based antiretroviral therapy and rifampin-based tuberculosis treatment, at their standard doses) is the preferred treatment for HIV-related tuberculosis (Table 1). Some experts recommend the 800 mg dose of efavirenz for patients weighing > 60 kg.
# Alternatives to Efavirenz-Based Antiretroviral Therapy
Alternatives to efavirenz-based antiretroviral therapy are needed for patients with HIV-related tuberculosis: efavirenz cannot be used during pregnancy (at least during the first trimester), some patients are intolerant of efavirenz, and some are infected with NNRTI-resistant strains of HIV.
# Rifampin and Nevirapine
Rifampin decreases serum concentrations of nevirapine by 20-55% (17,18) (Table 1). The common toxicities of nevirapine (skin rash and hepatitis) overlap common toxicities of some first-line antituberculosis drugs. Furthermore, nevirapine-based regimens are not recommended for patients with higher CD4 cell counts (> 350 cells/mm3 for men, > 250 cells/mm3 for women) because of increased risk of severe hypersensitivity reactions. Therefore, there are concerns about the efficacy and safety of using nevirapine-based antiretroviral therapy during rifampin-based tuberculosis treatment. At present, there have been no studies comparing efavirenz- vs. nevirapine-based antiretroviral therapy among patients being treated for tuberculosis. Trough serum concentrations of nevirapine among patients on concomitant rifampin often exceed the concentration necessary to suppress HIV in vitro (17,19). Several cohort studies have shown high rates of viral suppression among patients receiving nevirapine-based antiretroviral therapy (17,20). The risk of hepatitis among such patients was also comparable to that among patients receiving first-line tuberculosis treatment without antiretroviral therapy (20). Despite the interaction with rifampin, nevirapine-based antiretroviral therapy appears to be reasonably effective and well-tolerated among patients being treated for tuberculosis.
These studies are neither adequately powered nor reported in sufficient detail to fully answer the concerns about the efficacy and safety of nevirapine-based antiretroviral therapy during tuberculosis treatment. However, the collected experience is sufficient to make nevirapine an alternative for patients who are unable to take efavirenz and who do not have access to rifabutin. Some investigators have suggested using an increased dose of nevirapine among patients on rifampin (18). However, a recent randomized trial comparing standard-dose nevirapine (200 mg twice-daily) to a higher dose (300 mg twice-daily) among patients on rifampin demonstrated an increased risk of nevirapine hypersensitivity among patients randomized to the higher dose (21). Therefore, the standard dose of nevirapine should be used among patients on rifampin (200 mg daily for 2 weeks, followed by 200 mg twice-daily).
# Other Antiretroviral Regimens for use with Rifampin
For patients who are infected with NNRTI-resistant HIV, neither efavirenz nor nevirapine will be effective. Unfortunately, there is little clinical experience with alternatives to NNRTI-based therapy among patients being treated with rifampin. Standard doses of protease inhibitors cannot be given with rifampin (Table 1); the > 90% decreases in trough concentrations of the protease inhibitors would surely make them ineffective. Most protease inhibitors are given with low-dose ritonavir (100-200 mg per dose of the other protease inhibitor). However, low-dose ritonavir does not overcome the effects of rifampin; serum concentrations of indinavir, lopinavir, and atazanavir were decreased by > 90% when given with the standard ritonavir boosting dose (100 mg) in the presence of rifampin, and a once-daily regimen of ritonavir-boosted saquinavir (saquinavir 1600 mg + ritonavir 200 mg) resulted in inadequate concentrations of saquinavir (26,27). Therefore, standard protease inhibitor regimens, whether boosted or not, cannot be given with rifampin.
The dramatic effects of rifampin on serum concentrations of other protease inhibitors can be overcome with high doses of ritonavir (400 mg twice-daily, "super-boosted protease inhibitors") or by doubling the dose of the co-formulated form of lopinavir/ritonavir (23). However, high rates of hepatotoxicity occurred among healthy volunteers treated with rifampin and ritonavir-boosted saquinavir (saquinavir 1000 mg + ritonavir 100 mg twice-daily) (28) and those treated with rifampin and lopinavir/ritonavir (either as lopinavir 400 mg + ritonavir 400 mg twice-daily or as lopinavir 800 mg + ritonavir 200 mg twice-daily) (23,29).
Whether patients with HIV-related tuberculosis will have the same high rates of hepatotoxicity when treated with super-boosted protease inhibitors or double-dose lopinavir/ritonavir has not been adequately studied. Among patients receiving rifampin-based tuberculosis treatment, the combination of ritonavir-boosted saquinavir (400 mg of each, twice daily) was not well-tolerated (30). The initial positive experience with super-boosted lopinavir among young children (see below) suggests that these regimens may be tolerable and effective among at least some patients with HIV-related tuberculosis. However, these regimens should only be used with close clinical and laboratory monitoring for possible hepatotoxicity, and when there is a pressing need to start antiretroviral therapy.
Regimens composed entirely of nucleoside analogues are less active than combinations of two classes of antiretroviral drugs (e.g., NNRTI + nucleosides) (31). A regimen of zidovudine, lamivudine, and the nucleotide agent tenofovir has been reported to be active among patients on rifampin-based tuberculosis treatment (32). However, this regimen has not been compared to standard initial antiretroviral therapy (e.g., efavirenz + 2 nucleosides). Finally, a quadruple regimen of zidovudine, lamivudine, abacavir, and tenofovir has been reported to be as active as an efavirenz-based regimen in an initial small trial (33). While these regimens of nucleosides and nucleotides cannot be recommended as preferred therapy among patients receiving rifampin, their lack of predicted clinically significant interactions with rifampin makes them an acceptable alternative for patients unable to take NNRTIs or those with NNRTI-resistant HIV (32,34).
Rifampin has substantial interactions with the recently approved CCR5-receptor antagonist maraviroc (35). An increased dose of maraviroc has been recommended to allow concomitant use of rifampin and maraviroc, but there is no reported clinical experience with this combination. Rifampin decreases the trough concentrations of raltegravir, the recently approved integrase inhibitor, by approximately 60% (36). Because the antiviral activity of raltegravir 200 mg twice-daily was very similar to the activity of the licensed dose (400 mg twice-daily), the current recommendation is to use the standard dose of raltegravir in a patient receiving concomitant rifampin. However, this combination should be used with caution; there is very little clinical experience with using concomitant raltegravir and rifampin. Finally, rifampin is predicted to substantially decrease the concentrations of etravirine (a second-generation NNRTI (37) currently available through an expanded access program). Additional drug-interaction studies will be needed to further evaluate whether these new agents can be used among patients receiving rifampin-based tuberculosis treatment.
# Rifabutin and Protease Inhibitors
Rifabutin has little, if any, effect on the serum concentrations of protease inhibitors (other than unboosted saquinavir) (22). Cohort studies have shown favorable virological and immunological outcomes of protease-inhibitor-based antiretroviral therapy in the setting of rifabutin-based tuberculosis treatment (1,41). Though no comparative studies have been done, the combination of rifabutin (if available) with protease-inhibitor-based antiretroviral therapy is the preferred form of therapy for patients unable to take NNRTI-based antiretroviral therapy (Table 1). As above, there are concerns about the safety of super-boosted protease inhibitors and the efficacy of nucleoside-only regimens in the setting of rifampin-based tuberculosis treatment. The protease inhibitors, particularly if pharmacologically boosted with ritonavir, markedly increase serum concentrations and toxicity of rifabutin (42). Therefore, the dose of rifabutin should be decreased when used with protease inhibitors (Table 3). As above, the decreased dose of rifabutin would be sub-therapeutic if the patient stopped taking the protease inhibitor without adjusting the rifabutin dose. Therefore, adherence to the protease inhibitor should be assessed with each dose of directly observed tuberculosis treatment; one convenient way to do so is to give a supervised dose of the protease inhibitor at the same time as the directly observed dose of tuberculosis treatment.
# Special Populations
# Pregnant women
A number of issues complicate the treatment of the HIV-infected woman who is pregnant and has active tuberculosis. Efavirenz is contraindicated during at least the first 1-2 trimesters. Furthermore, pregnant women have an increased risk of severe toxicity from didanosine and stavudine (43), and women with CD4 cell counts > 250 cells/mm3 have an increased risk of nevirapine-related hepatitis (44). Therefore, the choice of antiretroviral agents is limited among pregnant women.
Pregnancy alters the distribution and metabolism of a number of drugs, including antiretroviral drugs (45) (there is very little information on whether the metabolism of anti-tuberculosis drugs is altered during pregnancy). Notably, the serum concentrations of protease inhibitors are decreased during the latter stages of pregnancy (46,47). There are no published data on drug-drug interactions between anti-tuberculosis and antiretroviral drugs among pregnant women. However, it is likely that the effects of rifampin on protease inhibitors are exacerbated during pregnancy.
In the absence of pharmacokinetic data and published clinical experience, it is difficult to formulate guidelines for the management of drug-drug interactions during the treatment of HIV-related tuberculosis among pregnant women. Nevirapine-based therapy could be used among women on rifampin-based tuberculosis treatment, with the caveat that there be a good monitoring system for symptoms and laboratory tests for hepatotoxicity. Efavirenz-based therapy may be an option during the later stages of pregnancy. The quadruple nucleoside/nucleotide regimen (zidovudine, lamivudine, abacavir, and tenofovir) is an alternative, though additional experience is required, particularly during pregnancy. Finally, despite their sub-optimal activity, triple nucleoside or nucleoside/nucleotide regimens are an alternative during pregnancy. Where rifabutin is available, the preferred option is protease-inhibitor-based antiretroviral therapy.
# Children
HIV-infected children in high-burden countries have very high rates of tuberculosis, often with severe, life-threatening manifestations (e.g., disseminated disease, meningitis). Such children may also have advanced and rapidly progressive HIV disease, so there are pressing reasons to assure potent treatment for both tuberculosis and AIDS. In addition to the complexities raised by the drug interactions discussed above, children with HIV-related tuberculosis raise other challenges. There are very limited data on the absorption, metabolism, and elimination of anti-tuberculosis drugs among children, particularly among very young children (< 2 years of age). Some antiretroviral agents are not yet available in suspension formulations, and there are limited pharmacokinetic data for all antiretroviral drugs among young children. The use of single-dose nevirapine selects for NNRTI-resistant strains among those infants who are infected despite perinatal prophylaxis, and such children have inferior outcomes if subsequently treated with nevirapine-based combination antiretroviral therapy (48). Therefore, there is understandable reluctance to use NNRTI-based therapy among perinatally infected infants who were exposed to single-dose nevirapine. As above, the inability to use NNRTI-based antiretroviral therapy limits options for antiretroviral therapy among children receiving rifampin-based tuberculosis treatment.
There are emerging, though unpublished, pharmacokinetic data and clinical experience with using protease-inhibitor-based antiretroviral therapy among young children (< 5 years of age) with HIV-related tuberculosis. Children treated with super-boosted lopinavir (ritonavir in addition to doses of co-formulated lopinavir/ritonavir) while on rifampin-based tuberculosis treatment had serum concentrations of lopinavir comparable to those of children treated with standard-dose lopinavir/ritonavir in the absence of rifampin (49). Furthermore, a cohort study found similar virological and immunological outcomes of antiretroviral therapy among children treated with super-boosted lopinavir and rifampin-based tuberculosis treatment compared with children treated with standard-dose lopinavir/ritonavir (50). Therefore, super-boosted lopinavir plus appropriate nucleoside agents is the preferred antiretroviral regimen among children on rifampin-based tuberculosis treatment.
The triple nucleoside regimen of zidovudine, lamivudine, and abacavir has been suggested for young children who are taking rifampin-based tuberculosis treatment (51). However, there is limited published clinical experience with this regimen among young children, with or without concomitant tuberculosis. Furthermore, young children often have very high HIV RNA levels, suggesting the need for highly potent antiretroviral regimens. While awaiting additional studies, the triple-nucleoside regimen is an alternative for young children receiving rifampin-based tuberculosis treatment.
In an initial pharmacokinetic study, efavirenz concentrations were not significantly different among children on rifampin compared to children without tuberculosis (49). However, efavirenz concentrations were suboptimal in both groups, raising concerns about the adequacy of current efavirenz dosing recommendations among children (52). Nevertheless, efavirenz-based antiretroviral therapy is highly active among older children (53,54) and can be used with rifampin-based tuberculosis treatment.
# Patients with Multidrug-Resistant Tuberculosis
Outbreaks of multidrug-resistant tuberculosis among HIV-infected patients have been documented since the 1980s. Recently, an outbreak of highly lethal multidrug-resistant tuberculosis was discovered in South Africa, primarily involving HIV-infected patients (55). Prompt initiation of antiretroviral therapy may be one way to decrease the alarmingly high death rate among HIV-infected patients with multidrug-resistant tuberculosis.
Most of the drugs used to treat multidrug-resistant tuberculosis (the "second-line drugs": fluoroquinolone antibiotics, ethionamide, cycloserine, kanamycin, amikacin, capreomycin, para-aminosalicylate) were developed and approved nearly 40 years ago, prior to the development of modern laboratory techniques to determine pathways of drug metabolism. Furthermore, there are no published studies of possible drug-drug interactions between second-line antituberculosis drugs and antiretroviral drugs. Based on the existing, albeit incomplete, knowledge of the metabolism of the second-line drugs, only ethionamide has a significant possibility of an interaction with antiretroviral drugs (22) (ethionamide is thought to be metabolized by the CYP450 system, though it is not known which of the CYP isozymes are responsible). Whether doses of ethionamide and/or certain antiretroviral drugs should be modified during the co-treatment of multidrug-resistant tuberculosis and HIV disease is completely unknown.
# Limitations of these Guidelines
The limitations of the information available for writing these guidelines should be appreciated. First, drug-drug interaction studies are often done among healthy volunteers. While such studies reliably predict the nature of a drug-drug interaction (e.g., that rifampin decreases the serum concentrations of efavirenz), they seldom provide the optimal management of that interaction among patients with HIV-related tuberculosis (in cases of extreme interactions, such as that between rifampin and unboosted protease inhibitors, the data from healthy volunteers can be definitive). In this update of the guidelines we emphasize studies done among patients with HIV-related tuberculosis, particularly those that evaluate treatment outcomes of the two diseases. However, such studies often had small sample sizes, limiting the generalizability of their findings. Second, rates of drug metabolism often differ markedly between individuals, and part of that variance may be due to genetic polymorphisms in drug-metabolizing enzymes. Therefore, drug interactions and their relevance may not be the same in different populations. Third, in the attempt to provide the most up-to-date information, we include studies that have been presented at international conferences but that have not yet completed the peer review process and been published. Fourth, it is very difficult to predict the outcome of complex drug interactions, such as those that might occur when three drugs with CYP3A activity are used together (e.g., rifabutin, atazanavir, and efavirenz). Therapeutic drug monitoring, if available, may be helpful in such situations. Finally, in the Special Populations section, we highlighted the lack of pharmacokinetic data on two key populations of patients with HIV-related tuberculosis: pregnant women and children. We provide recommendations for these key populations, but these are based primarily on expert opinion because of the lack of pharmacokinetic data.
| Drug class / drug | Recommended change in dose of antiretroviral drug | Recommended change in dose of rifampin | Comments |
|---|---|---|---|
| Dual protease-inhibitor combinations | | | |
| CCR5-receptor antagonists: maraviroc | No change | No change | No clinical experience; a significant interaction is unlikely, but this has not yet been studied |
| Integrase inhibitors: raltegravir | No change | No change | No clinical experience; a significant interaction is unlikely, but this has not yet been studied |
"id": "3037002e377cafe3cf784908b47bc5c38e6f55d8",
"source": "cdc",
"title": "None",
"url": "None"
} |
epfl-llm/guidelines | This sign shall also be printed in the predominant language of non-English-speaking workers. All employees shall be trained and informed of the hazardous areas, with special instructions given to illiterate workers.
(b) This sign shall also be posted at or near entrances to areas in which there is occupational exposure to 1,1,1-trichloroethane.

(3) Only appropriate respirators as described in Table 1-1 shall be used pursuant to the following requirements (a simplified selection sketch follows this section):
(A) To determine the class of respirator to be used, the employer shall measure the atmospheric concentration of 1,1,1-trichloroethane in the workplace initially and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the 1,1,1-trichloroethane concentration. This requirement shall not apply when only self-contained or combination supplied-air and self-contained positive pressure respirators are used.
(B) The employer shall ensure that no worker is being exposed to 1,1,1-trichloroethane in excess of the exposure limit because of improper respirator selection, fit, use, or maintenance.
(C) When respirators are required, the employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employee uses the respirator provided.
(D) Respiratory protective devices described in Table 1-1 shall be those approved under the provisions of 30 CFR 11.
(E) Respirators specified for use in higher concentrations of 1,1,1-trichloroethane are permitted in atmospheres of lower concentrations.
(F) The employer shall ensure that respirators are adequately cleaned, maintained, and stored, and that employees are instructed on the use of respirators and on testing for leakage.
(4) Chemical cartridges and canisters shall not be used for periods of time in excess of those indicated in Table 1-1. In any case, chemical cartridges and canisters should be replaced after each day of use.
Where an emergency may develop that could result in employee injury from overexposure to 1,1,1-trichloroethane, the employer shall provide respiratory protection as listed in Table 1-1.
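One way to see the selection logic of requirement (3)(A): the required protection factor is the measured workplace concentration divided by the exposure limit, and an acceptable respirator class must provide at least that factor. The Python sketch below is illustrative only; the exposure limit and the class-to-factor entries are assumed stand-ins for the contents of Table 1-1, not the table itself.

```python
# Illustrative respirator-class selection by required protection factor.
# The exposure limit and the class/factor entries below are assumed
# placeholders, not the values of Table 1-1.

EXPOSURE_LIMIT_PPM = 350.0  # assumed limit, for the example only

CLASSES = [  # (maximum protection factor provided, respirator class)
    (10.0, "half-mask chemical-cartridge respirator"),
    (50.0, "full-facepiece chemical-cartridge or gas-mask respirator"),
    (1000.0, "supplied-air respirator"),
    (float("inf"), "self-contained positive-pressure respirator"),
]

def select_respirator(measured_ppm: float) -> str:
    """Pick the least protective class whose factor covers the hazard ratio."""
    required = measured_ppm / EXPOSURE_LIMIT_PPM
    if required <= 1.0:
        return "respirator not required at this concentration"
    for factor, name in CLASSES:
        if required <= factor:
            return name
    return CLASSES[-1][1]  # unreachable; the inf entry always matches

print(select_respirator(2800.0))  # hazard ratio 8 -> half-mask class
```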
# (b)
Protective Clothing
In any operation where the worker may come into direct contact with liquid 1,1,1-trichloroethane, protective clothing shall be worn. The clothing should be resistant to 1,1,1-trichloroethane. Gloves, boots, overshoes, and bib-type aprons that cover boot tops shall be provided when necessary. Impervious supplied-air hoods or suits shall be worn when entering confined spaces such as pits or tanks unless they are known to be safe. In situations where heat stress is likely to occur, air-supplied suits shall be used. All protective clothing shall be well-aired and inspected for defects prior to reuse. Hands placed in liquid 1,1,1-trichloroethane shall be protected by impervious gloves. Any liquid 1,1,1-trichloroethane that contacts the skin should be promptly removed.

(1) Containers delivered by closed truck or rail shall not be unloaded until the vehicle in which they arrived has been ventilated.
The absence of any odor of 1,1,1-trichloroethane should not be used as a criterion of adequate ventilation.
(2) Storage containers, piping, and valves shall be inspected periodically for leakage.
(3) Storage facilities shall be designed to contain spills and prevent contamination of workroom air.
(4) Processes and storage facilities shall not be located near open flames or high-temperature operations, unless precautions are taken to prevent fire and explosion hazards.
# (b) Contaminant Controls
(1) Suitable engineering controls designed to limit exposure to 1,1,1-trichloroethane shall be utilized if needed. Ventilation systems, if used, shall be designed to prevent the accumulation or recirculation of 1,1,1-trichloroethane in the workroom and to effectively remove 1,1,1-trichloroethane from the breathing zones of workers.
Adequate, uncontaminated make-up air shall be provided. Ventilation systems shall be subjected to regular preventive maintenance and cleaning to ensure maximum effectiveness, which shall be verified by periodic airflow measurements.
(2) Portable exhaust ventilation or suitable general ventilation shall be provided, if necessary, to limit environmental concentrations for nonroutine operations that require the application of 1,1,1-trichloroethane. Personnel entering confined spaces shall be furnished with appropriate personal protective equipment and protected by a lifeline tended by another worker outside the space, who shall also be equipped for entry with approved respiratory, eye, and skin protection and a lifeline, and who shall have contact with a third party.
(E) Written operating instructions and emergency medical procedures shall be formulated and posted in conspicuous locations where accidental exposure to concentrations of 1,1,1-trichloroethane in excess of the environmental limit may occur. These instructions and procedures shall be printed both in English and in the predominant language of non-English-speaking workers, if any. Special instructions shall be given to illiterate workers.
# (d) Showers and Eye Wash Fountains
Showers and eye wash facilities shall be provided and so located as to be readily accessible to workers in all areas where skin or eye splash with 1,1,1-trichloroethane is likely. If 1,1,1-trichloroethane is splashed on the worker, contaminated clothing shall be promptly removed and the skin washed with soap and water. If liquid 1,1,1-trichloroethane contacts the eyes, they shall be thoroughly irrigated with clean water, promptly followed by medical assistance. Such incidents shall be reported to the immediate supervisor by the affected employee or by a fellow worker.

Where monitoring indicates exposure in excess of the recommended environmental limit, additional monitoring shall be promptly initiated. If confirmed, control procedures shall be instituted as soon as possible; these may precede and obviate confirmatory monitoring if the employer desires. Affected employees shall be advised that exposures have been excessive and be notified of the control procedures being implemented. Monitoring of these employees shall be conducted at least as often as every 30 days and shall continue until 2 successive samplings at least a week apart confirm that exposure no longer exceeds recommended limits. Normal monitoring may then be resumed.
For each TWA concentration determination, a sufficient number of samples to characterize each worker's exposure during each workshift shall be taken and analyzed.
The number of TWA and ceiling concentration determinations for an operation shall be based on such factors as the variations in location and job functions of workers in that operation.
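A TWA determination of this kind reduces to the standard formula TWA = sum(Ci x ti) / T, where Ci and ti are the concentration and duration of each sample and T is the shift length. A minimal worked example in Python, with made-up sample values:

```python
# Worked 8-hour TWA from partial-shift samples; values are made up.
samples = [  # (concentration_ppm, duration_hours)
    (120.0, 3.0),
    (480.0, 1.0),  # short, higher-exposure task
    (90.0, 4.0),
]

SHIFT_HOURS = 8.0
twa = sum(c * t for c, t in samples) / SHIFT_HOURS
print(f"8-hour TWA = {twa:.0f} ppm")  # (120*3 + 480*1 + 90*4) / 8 = 150 ppm
```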
# (c) Recordkeeping
Environmental monitoring records shall be maintained for at least 20 years. These records shall include methods of sampling and analysis used, types of respiratory protection used, and TWA and ceiling concentrations found. Each employee shall be able to obtain information on his own environmental exposures.
Pertinent medical records shall be retained for 20 years after the last occupational exposure to 1,1,1-trichloroethane. Records of environmental exposures applicable to an employee should be included in that employee's medical records. These medical records shall be made available to the designated medical representatives of the Secretary of Labor, of the Secretary of Health, Education, and Welfare, of the employer, and of the employee or former employee.

Further research is needed to more completely characterize the effects of certain levels of 1,1,1-trichloroethane on workers, especially epidemiological, chronic, and teratological studies. Animal studies are underway at the National Cancer Institute to determine if 1,1,1-trichloroethane is carcinogenic.
Similar studies are being performed by the Dow Chemical Company.
# III. BIOLOGIC EFFECTS OF EXPOSURE
# Extent of Exposure

1,1,1-Trichloroethane (CH3CCl3) is also known as methyl chloroform and alpha-trichloroethane.
There are two isomers of trichloroethane. The 1,1,2 isomer is also known as ethane trichloride, vinyl trichloride, beta-trichloroethane, and monochloroethylenechloride. Some reports do not clearly distinguish between the isomers, and some confusion exists in the literature because of indiscriminate use of the term trichloroethane.
The odor threshold of 1,1,1-trichloroethane was reported by the American National Standards Institute to be around 100 ppm.
Stewart found a sex difference in odor "acceptance" at 350 ppm. May reported that the odor threshold for 1,1,1-trichloroethane of his subjects was 400 ppm, and that they perceived the odor more clearly at 700 ppm.
Arthur D. Little Inc. reported the odor recognition threshold, detected by four expert panel members when 1,1,1-trichloroethane was in room air, to be 16 ppm. These data reflect the variability in odor threshold values and highlight the danger of using odor as a criterion for detection of harmful levels of 1,1,1-trichloroethane. Some physical data for 1,1,1-trichloroethane are presented in ...

Production has increased steadily, and in 1973 production of 548,394,000 lbs was reported. There are many uses of 1,1,1-trichloroethane as a solvent and cleaning agent. In 1969, Gleason et al tabulated over 40 products, marketed by 30 companies, which contained it. Among the products were type cleaners, color film cleaners, insecticides, spot removers, cements and adhesives, and fabric cleaning solutions. One of the most common industrial uses is as a degreasing agent.
Workers involved in the manufacture of 1,1,1-trichloroethane, in its formulation into the many products containing it, and in their final uses, are potentially exposed to 1,1,1-trichloroethane.
In addition to job-related exposures, workers may be exposed to 1,1,1-trichloroethane by home use of the many products which contain it.

... 2-bromo-2-chloro-1,1,1-trifluoroethane (halothane) anesthesia. The investigators found 1,1,1-trichloroethane to be clinically less potent than either chloroform or 2-bromo-2-chloro-1,1,1-trifluoroethane, and even less potent than trichloroethylene in supplementing nitrous oxide-oxygen for anesthesia.
In an experiment reported in 1961 by Stewart et al, the exposure chamber concentration was increased continuously from 0 to 2,650 ppm over 15 minutes (total exposure). One of seven exposed subjects became very lightheaded when the concentration reached 2,600 ppm, and at 2,650 ppm two could not stand and three others became very lightheaded. Two of the seven subjects did not become lightheaded. One subject maintained the ability to perform a normal Romberg test (loss of proprioceptive control evidenced by unsteadiness of the standing patient when the eyes are closed) throughout the exposure. The other six subjects regained their equilibrium within 5 minutes after cessation of exposure. The five subjects who became lightheaded reported a feeling of malaise for 5 hours after the exposure. The inhibited 1,1,1-trichloroethane used in this experiment contained 94-97% 1,1,1-trichloroethane, 2.4-3.0% dioxane, 0.12-0.3% butanol, and small amounts of 1,2-dichloroethane, water, and other materials.

The only factor that was reported to be statistically significant with regard to 1,1,1-trichloroethane was the interaction between perception of mental strain and 1,1,1-trichloroethane exposure at 450 ppm. Under stress conditions, exposure to 1,1,1-trichloroethane at 450 ppm decreased perceptive capabilities.
Because of the choice of subjects (healthy students) as well as the lack of controls on intervening variables (food and drinking habits), it would seem that further work is needed to justify a dose-response relationship.

... repeatedly in four of five subjects exposed 6.5 to 7 hours daily for 5 consecutive days at 500 ppm of a commercial 1,1,1-trichloroethane preparation.
Other subjective symptoms of central nervous system response that were occasionally experienced by the subjects were lightheadedness and mild headache.
The ability of two subjects to perform a modified Romberg test was impaired during the last 5-6 hours of each daily exposure. Their ability to perform the test normally was regained 5-10 minutes after removal from exposure.
The 1,1,1-trichloroethane used in this experiment contained (in liquid form) 4 vol% of 1,4-dioxane, 0.5 vol% of butylene oxide, and 0.5 vol% of nitromethane as inhibitors. The liquid also contained traces of 1,2- ...

Although there were slight increases noted in two patients, the authors concluded that anesthesia with 1,1,1-trichloroethane of up to 2 hours duration would not be hepatotoxic.
Positive urinary urobilinogen was found 7 hours after exposure in two of seven male subjects exposed for 15 minutes to inhibited 1,1,1-trichloroethane in a concentration increased continuously from 0-2,650 ppm during the 15-minute exposure.
On examination of centrifuged urine, five subjects were found to have 1-2 red blood cells per high-power field, compared to none before exposure.
Elevated urinary urobilinogen was reported by Stewart et al in one of three male subjects 20 hours after removal from a 20-minute exposure at 900 ppm of inhibited 1,1,1-trichloroethane.
In one experimental exposure of six male subjects at 500 ppm of inhibited 1,1,1-trichloroethane for 78 minutes, 3-6 red blood cells per high-power field and a trace of albumin were found in the urine of one subject 20 hours after the exposure.

A worker who experienced a sudden onset of nausea, vomiting, "explosive diarrhea", and dizziness 2 hours after leaving work was the subject of a fourth case reported by Stewart. The previous day the man had worked in the vicinity of an engine cleaning operation where 1,1,1-trichloroethane had been used, and he had been aware of its odor throughout the day. His illness, however, was not attributed by the author to the effects of 1,1,1-trichloroethane. The illness lasted for 6 hours.

# Results

When subjects were exposed by Stewart et al to inhibited 1,1,1-trichloroethane 6.5 to 7 hours/day on 5 consecutive days at an average concentration of 507 ppm (420-612 ppm), the concentrations in the alveolar air 16 hours after removal from each daily exposure increased on successive days.
They did not present the data, but reported that 1,1,1-trichloroethane was found in alveolar air for 1 month after the last exposure.
1,1,1-Trichloroethane and tetrachloroethylene were both still present in the breath of one individual at 0.1 ppm 1 month after removal from an exposure to the vapors of these two compounds. The individual was ...

In a second experiment reported by Tada, the same two subjects were exposed at an average of 420 ppm 1,1,1-trichloroethane, 2 hours/day for three days.
With this exposure, both the TCA concentration in the urine and the total amount excreted per day continued to increase on the 2 days following the last exposure, before beginning to decline on the third and fourth days after exposure. The maximum amount excreted per day was 7.2 mg, averaged for the two subjects.
Excretion of TCA in the urine of two subjects was studied for 8 days and reported by Fukabori in 1970. These subjects were exposed 2 hours/day to 1,1,1-trichloroethane at average concentrations of 195 ppm on the first day, 376 ppm on the second, 558 ppm on the third, and 832 ppm on the fourth day. The maximum amount of TCA (7.5 mg/day) was excreted on the day following the last exposure. Four days after the last exposure, an average of 2.5 mg TCA/liter of urine was found, compared to a normal value in Japanese men of 0-0.9 mg/liter. TCA was measured by Fukabori by the alkali-pyridine-benzidine method.
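The delayed excretion peak (maximum urinary TCA one to two days after the last exposure) is what a simple two-step first-order model predicts when a slowly eliminated parent compound is converted gradually into the measured metabolite. The Python sketch below uses assumed half-lives and arbitrary dose units, chosen only to reproduce the qualitative pattern reported by Tada and Fukabori, not fitted values.

```python
import math

K_CONV = math.log(2) / 2.0  # parent -> TCA conversion; assumed t1/2 = 2 days
K_EXC = math.log(2) / 3.0   # TCA urinary elimination; assumed t1/2 = 3 days

parent, tca = 0.0, 0.0
daily_doses = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]  # rising exposures, then none

for day, dose in enumerate(daily_doses, start=1):
    parent += dose                                # absorbed parent compound
    converted = parent * (1 - math.exp(-K_CONV))  # fraction converted this day
    parent -= converted
    tca += converted
    excreted = tca * (1 - math.exp(-K_EXC))       # fraction excreted in urine
    tca -= excreted
    print(f"day {day}: urinary TCA {excreted:.2f} (arbitrary units)")

# Daily excretion keeps rising through day 6 (two days after the last
# exposure on day 4), then declines, matching the pattern described above.
```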
Trichloroacetic acid has been studied in the urine of workers occupationally exposed to 1,1,1-trichloroethane. Chemical analyses or other information about the inhibitors or impurities in the 1,1,1-trichloroethane to which these workers were exposed were not reported.
Marked accumulation of white, frothy, slightly bloody edema fluid was found in the lungs and a small amount of mucoid material was found in the bronchi of one of the fatalities reported by Hall and Hine.
The man had inhaled (and possibly aspirated) a 1,1,1-trichloroethane cleaning fluid containing dioxane. The other fatality reported by Hall and Hine had habitually inhaled another cleaning fluid containing 1,1,1-trichloroethane.
In this case of chronic exposure, passive congestion was found throughout the lungs and there were considerable amounts of thick, dark red blood and thin, frothy fluid in the dependent areas. Thick, yellowish-brown secretions were found in the bronchi. Microscopic findings included some atelectasis in the lung parenchyma, edema and congestion in the lungs and desquamated epithelium in the bronchi.
Slight eye irritation was experienced by one of four subjects during the experimental 30-minute exposure to uninhibited 1,1,1-trichloroethane at 900-1,000 ppm reported by Torkelson et al. Eye, nose, and throat irritation have been experienced during experimental exposures to inhibited 1,1,1-trichloroethane at 400-500 ppm.

All employees in the exposed group had 1,1,1-trichloroethane (and other solvent) exposures, in varying concentrations, for up to 6 years. The concentration range was 11-838 ppm, with a mean of 115 ppm 1,1,1-trichloroethane. Whether this period of exposure would result in toxic symptoms is questionable, however. Since only healthy, active workers were selected, and the average length of exposure for the study population was less than 1 year at the stated TWA, no conclusions can be drawn about chronic effects.
The control group had only minimal exposure to nonchlorinated solvents.
Pairs were matched with regard to age, race, sex, work shift, job description, and socioeconomic status, and examined within a 10-week period.
... and reported in 1950. The 1,1,1-trichloroethane used in this investigation ...

Trichloroethane was administered at an average concentration of about 125,000 ppm. Duration of exposure varied from 1.5-6.0 minutes, and the total amount of 1,1,1-trichloroethane administered was 10 to 40 ml.
Ventricular fibrillation, followed by cardiac arrest, occurred in one dog during its second 3-minute exposure; the first exposure had taken place 2 weeks earlier.
Cardiac arrhythmias, myocardial depression, and tachycardia were reported by Belej et al to have developed in three anesthetized Rhesus monkeys exposed for 5 minutes at 25,000 ppm of 1,1,1-trichloroethane, and again 10 minutes later when they were similarly exposed at 50,000 ppm.

... hours daily for 50 days to 1,1,1-trichloroethane at 73 ppm. After 120 days of exposure, the emphysematous condition was much more pronounced, the interalveolar walls were thin, and in some places they had broken down. The vascular walls were thickened and swollen, and around many of them there were accumulations of lymphoid and histiocyte cell elements and isolated plasma cells.
The mucous membrane of the bronchi was swollen, and there were small amounts of mucus and detached epithelial cells in the lumen.
The peribronchial lymphatic nodules were hyperplastic.
In this chronic study, even though conditioned reflex activity was not disrupted in the rats, structural changes in cortical and subcortical areas were noted.
The authors did not report microscopic studies of cat brains in which changes in differential reflexes were found with chronic exposure. In the absence of other information, and because of the nature of the conditioned reflexes they were studying and the methods they used, it is difficult to assess the significance of the behavioral aspects of this study.
Chemosis and hyperemia of the conjunctivas of rabbits after instillation of 1,1,1-trichloroethane (5% in corn oil) were found by Krantz et al to be similar to those produced by chloroform administered similarly and simultaneously in the other eye.
A single undiluted application of either uninhibited or inhibited 1,1,1-trichloroethane to the eyes of rabbits was reported by Torkelson et al ...

Increased liver weights were reported by Torkelson et al in 5 male rats exposed 70 times in 99 days to inhibited 1,1,1-trichloroethane at 10,000 ppm, 1 hour/day, 5 days/week. Liver weights did not increase in other rats exposed for 0.5, 0.2, or 0.05 hours/day in the same experiment. Increased liver weights were found in female guinea pigs exposed 69 times in the same experiment at 2,000 ppm for 0.5 or 0.2 hours/day, and in others exposed 69 times at 1,000 ppm for 3.0 or 1.2 hours/day. Exposures of shorter daily duration at 2,000 or 1,000 ppm did not result in increased liver weights. In the guinea pigs with increased liver weights, fatty changes were found in the livers.
No liver or kidney effects were reported by Torkelson et al in 126-130 exposures of rats, guinea pigs, rabbits, or monkeys to inhibited 1,1,1-trichloroethane at 500 ppm. The exposures were 7 hours/day, 5 days/week.
Fatty degeneration of the liver was reported by Adams et al for guinea pigs, but not rats or rabbits, exposed 31 times in 44 days, for 7 hours/day, to redistilled 1,1,1-trichloroethane at 5,000 ppm. Fatty degeneration was also found in livers of guinea pigs exposed 20 times for 7 hours/day at 3,000 ppm. No liver or kidney effects were reported for rats or monkeys repeatedly exposed (about 50 times) 7 hours/day, 5 days/week at 3,000 ppm. Liver or kidney effects were not reported for guinea pigs similarly exposed at 1,500 ppm over 2 months or at 650 ppm over 3 months. Liver to body weight ratios were also significantly increased in rats exposed to 1,1,1-trichloroethane at 1,000 ppm, but at 250 ppm they were similar to controls.
Cytoplasmic alterations found by electron microscopy were most severe in centrilobular hepatocytes of the 1,000 ppm group and were mild to minimal in the 250 ppm group. These alterations consisted of vesiculation of the rough endoplasmic reticulum with loss of attached polyribosomes, increased smooth endoplasmic reticulum, microbodies and triglyceride droplets. Some cells had swollen cisternae of the endoplasmic reticulum.
The authors also observed chronic respiratory disease in 12 of 40 control rats, 28 of 40 exposed at 250 ppm, and 7 of 40 exposed at 1,000 ppm. It appears that the respiratory disease was intercurrent rather than related to exposure.
No lesions in dogs or monkeys were ascribed to the exposure by the investigators. In the rats examined after 120 days of exposure, these effects were more prominent, and protein dystrophy of the liver parenchymal cells was found.
All major organs and glands were taken for microscopic examination.
Behavioral signs indicative of CNS depression, such as hyperactivity, were sought.
The authors reported no signs or indices attributable to 1,1,1-trichloroethane exposure. Several "spontaneous lesions" were found, but on comparison of frequencies between control and experimental groups they were not associated with exposure. The portion of the diurnal cycle during which the animals were exposed was not stated. These findings were reported after 24 months; 18-23% of the males and 32-37% of the females were still alive at that time.

The doses administered to three rats were 727, 642, and 705 mg/kg (approximately 0.5 ml/kg). For two rats in which it could be measured, an average of 97.6% of the administered dose was excreted unchanged in the exhaled air during 25 hours, and in three rats an average of 0.85% (0.55, 0.86, and 1.14%) of the administered radioactivity was found in the urine.

The "behavior" of 1,1,1-trichloroethane and its metabolites was investigated in the expired air, blood, and urine of male Wistar rats by Eben and Kimmerle.
To quantitate concentrations of 1,1,1-trichloroethane and its metabolites, a gas chromatograph equipped with an Ni-63 electron capture detector was used. The major metabolites studied were TCE and TCA in the urine, 24 hours after exposure for 3-4 days in the acute group, and 16 hours after exposure, daily, in the subchronic group.
Chloral hydrate concentrations were also determined in the blood, and 1,1,1-trichloroethane was determined in the blood and breath. It was concluded that the results gave an explanation for 1,1,1-trichloroethane-induced depression of myocardial respiration.

# (g) Drug Interactions and Potentiation of 1,1,1-Trichloroethane Toxicity

1,1,1-Trichloroethane was found by Van Dyke and Rikans to stimulate aniline hydroxylase activity when added to the incubation medium of rat liver microsomes.
In the same experiment, it had no effect on aminopyrine demethylase.
Metabolism of hexobarbital, meprobamate, and zoxazolamine, based on loss of righting reflex, was studied by Fuller et al in male rats and mice, 24 hours after removal from 24 hours of continuous exposure to reagent grade 1,1,1-trichloroethane at 3,000 ppm. The sleeping times induced by all three drugs were reduced in rats and mice exposed to 1,1,1-trichloroethane, indicating that 1,1,1-trichloroethane stimulates the activity of the hepatic microsomal enzymes that metabolize these drugs.
Other studies were conducted with hexobarbital to determine the mechanism by which 1,1,1-trichloroethane reduced the sleeping time.
To functionally block the hypophysis, rats were treated with morphine sulfate (20 mg/kg ip) for 4 days before exposure. This treatment did not alter the effect of 1,1,1-trichloroethane on hexobarbital sleeping time.
In another experiment, adrenalectomized rats retained the 1,1,1-trichloroethane effect on hexobarbital sleeping time.
Other groups of rats were treated with either cycloheximide or actinomycin D to block protein synthesis before they were exposed to 1,1,1trichloroethane.
Both these drugs blocked the effect of 1,1,1-trichloroethane inhalation on hexobarbital sleeping time, preventing both the reduction of hexobarbital narcosis and the increase in hexobarbital metabolism.
In vitro studies showed that metabolism of both hexobarbital and zoxazolamine by rat livers was increased by exposure of the donor rat to 1,1,1-trichloroethane.

Other experiments were conducted at the 3,000 ppm exposure level.
When 24 hours of total exposure time were completed in three to six exposure periods, each separated by 18-21 hours, a cumulative effect was shown by progressive decreases in sleeping time. The cause of the decreased hexobarbital sleeping times following 1,1,1-trichloroethane exposures was studied. It was found that neither barbital- nor chloral hydrate-induced sleeping times were affected by exposure to 1,1,1-trichloroethane. Oxidation of hexobarbital by 9,000 g supernatant fractions from livers of exposed mice was increased. Reduction of p-nitrobenzoic acid by the same liver fractions was not affected by exposure of the mice to 1,1,1-trichloroethane.
Treatment of mice with atropine, chlorpromazine, or tolazoline immediately before and after 12 hours of exposure to 1,1,1-trichloroethane did not block the effect of exposure on hexobarbital sleeping time. Administration of two doses of phenobarbital (50 mg/kg ip) by Cornish et al 1 and 2 days before injection of reagent grade 1,1,1-trichloroethane at doses of 0.3, 0.5, 1.0, and 2.0 ml/kg did not enhance the hepatotoxicity of 1,1,1-trichloroethane, evaluated by SGOT determinations. Similar increases in SGOT levels were found in control (no phenobarbital treatment) and treated animals at each 1,1,1-trichloroethane dose level. The SGOT levels were significantly increased by 1,1,1-trichloroethane with or without phenobarbital pretreatment.
Enhancement of 1,1,1-trichloroethane hepatotoxicity by phenobarbital was reported in 1973 by Carlson.
In this experiment, the male rats were pretreated with phenobarbital (50 mg/kg/day) for 4 days before exposure.

Exposures of rats to redistilled reagent grade 1,1,1-trichloroethane for 2 hours at 10,000 or 15,000 ppm, or for 6 hours at 5,000 or 10,000 ppm, did not increase serum enzyme levels whether or not there had been ethanol pretreatment. The serum enzymes studied in this 1966 report by Cornish and Adefuin were SGOT, SGPT, and isocitric dehydrogenase.
The ethanol (50%) was administered by stomach tube at 5 g/kg, 16-18 hours before the vapor exposures.
Isopropyl alcohol or acetone administered by gavage to male Swiss-Webster mice 18 hours before ip injection of 1,1,1-trichloroethane did not alter the response of SGPT activity to the administered 1,1,1-trichloroethane. The doses of 1,1,1-trichloroethane used in this experiment, by Traiger and Plaa in 1974, were 1.0, 2.0, and 2.5 ml/kg. The latter dose caused increases in SGPT activity, but the increases were not affected by isopropyl alcohol or acetone pretreatment.
# Correlation of Exposure and Effect
Summaries of inhalation exposures and effects are presented in Tables XII-6 to XII-10. The most significant findings concerning the effects of 1,1,1-trichloroethane in man and animals are the depression of the CNS, cardiovascular toxicity, and hepatic toxicity. Irritation of the lungs and mucous membranes also has been reported. Information on experimental exposures of more than 6 months' duration, or at 1,1,1-trichloroethane concentrations below about 75 ppm, was not found in the literature.
Both experimental studies and occupational experiences indicate that 1,1,1-trichloroethane is irritating to the skin and mucous membranes and that the nervous system, cardiovascular system, and the liver are affected by exposures.
# (a) Central Nervous System Effects
The first reported biologic study of 1,1,1-trichloroethane, by Tauber
in 1880, established that it had anesthetic properties. Clinical trials in 1958-1960 established that it was not very effective as a surgical anesthetic, and its use for this purpose was discontinued.
The anesthetic properties of 1,1,1-trichloroethane have had occupational significance, and will continue to be of significance to work practices and requirements for respiratory protective devices.
Concentrations of 1,1,1-trichloroethane required to induce anesthesia under working conditions have not been determined. The clinical studies are difficult to extrapolate to occupational situations because the patients were usually given sedatives before administration of anesthetic gas, and nitrous oxide was used in the 1,1,1-trichloroethane carrier gas.
Changes in cardiovascular function found in dogs exposed to 1,1,1-trichloroethane at 8,000 ppm for no more than 5 minutes included an abrupt drop in total peripheral resistance with compensatory cardiac responses. Within seconds, the compensatory cardiac responses were dissipated, and stroke volume, heart rate, and myocardial contractility decreased.
Elevated urinary urobilinogen was also found in one subject following a 20-minute exposure at 900 ppm, and some evidence of possible kidney injury was found in six subjects after exposure at 500 ppm for 78 minutes.
Serum enzymes were not elevated.
Evidence of kidney injury (red blood cells and protein in the urine)
and elevated serum bilirubin were also found in a man following ingestion of 1,1,1-trichloroethane.
Autopsy findings in a woman with a history of chronically sniffing 1,1,1-trichloroethane were limited to the respiratory system, stomach, and brain.
Elevated urinary urobilinogen was found in one worker after he had worked with 1,1,1-trichloroethane for 1 hour in a closed room, and in two of nine women after they had worked with 1,1,1-trichloroethane for several months.
# IV. ENVIRONMENTAL DATA AND BIOLOGIC EVALUATION

# Environmental Concentrations and Engineering Controls
There is little information available about the concentrations of 1,1,1-trichloroethane to which workers have been routinely exposed.
Concentrations of 1,1,1-trichloroethane developed in cleaning electrical equipment were reported by Burkatskaya et al in 1973. The sampling and analytical method were not described. The greasy parts were sprayed with or dipped into the solvent. Parts were sprayed for 4-5 minutes and dried with a jet of air for 2-3 minutes. The room had an exhaust fan and a general two-way exhaust that provided 35-40 air changes (time interval not given).
Concentrations of 1,1,1-trichloroethane in the shop area during the spraying period were 27-70 ppm. These concentrations declined rapidly when the spraying and drying process was completed and, after 30 minutes, the room air concentration was not measurable.
Concentrations found in different parts of the room when the dipping tanks were opened were 25-55 ppm. When the parts were taken from the bath, about 250 ppm of 1,1,1-trichloroethane were found in the workroom air.

were exposed in excess of the TLV (100 ppm) throughout their working day.
During the survey, concentrations of 1,1,1-trichloroethane were "well below toxic levels" (apparently meaning 350 ppm) during normal vapor degreasing.
In this same study, methods of operation and ventilation were highlighted as important safeguards. The authors reported samples taken near vapor degreasing plants at 21 factories in England. They found that lip exhaust ventilation was provided at three-fourths of the open top tanks surveyed.
Extraction rates varied from tank to tank, from 0 to more than 100 cu ft/min. Lip slots were commonly found to be closed off by dropped heavy objects or deposits of dirt. Conditions were found to be poorest at tanks without lip exhausts, and over half of the operators were receiving concentrations above the TLV. This was aggravated in some conditions by poor general ventilation. Also, downward drafts caused solvent vapor to be blown out of the tanks and into the workers' breathing zone. The authors suggested that this effect could be limited by the use of covers and screens at the tank.
The authors found that manual unloading of tanks, as well as preparation of work for the tanks, caused a sharp increase of 1,1,1-trichloroethane in the workers' breathing zones.
The following recommendations were made: (1) degreasing tanks should be sited in well-ventilated areas, giving particular attention to tanks in confined areas; (2) vapor degreasing tanks should be provided with efficient lip exhaust systems (35 cu ft/min was suggested as an adequate extraction rate) and covered by protective screens to prevent escape of 1,1,1-trichloroethane vapor; (3) work should be arranged so that it can be contained in the freeboard zone of the tank during the removal of excess solvent and stacked to ensure complete drainage of the degreasing solvent.

Activated charcoal has been used as a collection medium, followed by analysis by gas chromatography.
Charcoal is nonpolar and will generally adsorb organic vapors in preference to water vapor, resulting in less interference from atmospheric moisture than with silica gel. The dechlorination method (alkaline hydrolysis) requires collection of the 1,1,1-trichloroethane-contaminated atmosphere by a suitable collection medium, followed by alkaline hydrolysis in isopropyl alcohol and titration of the liberated chloride with silver nitrate.
The percentage of chlorine hydrolyzed is determined by comparison between samples and known controls. A disadvantage of this method is that chlorine is not easily removed from 1,1,1-trichloroethane and the amount removed depends on the duration of the dechlorination process. Another disadvantage is that it is not specific for 1,1,1-trichloroethane.
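To illustrate the arithmetic behind the dechlorination method, a minimal sketch follows. The function name, the example figures, and the assumed hydrolysis fraction are illustrative only; an actual determination relies on comparison with known controls, as described above.

```python
# Minimal sketch of the dechlorination (alkaline hydrolysis) arithmetic.
# Example figures and the hydrolysis fraction are illustrative only.

MW_TCE = 133.40       # g/mol, 1,1,1-trichloroethane (C2H3Cl3)
EQ_WT_CL = 35.45      # g/equivalent, chloride
CL_PER_MOLECULE = 3   # chlorine atoms per molecule

def tce_mass_mg(titrant_ml, titrant_normality, hydrolysis_fraction):
    """Estimate mg of 1,1,1-trichloroethane from the chloride titrated.

    hydrolysis_fraction is the portion of total chlorine actually
    liberated; it is determined empirically against known controls
    because it varies with the duration of the dechlorination step.
    """
    meq_cl = titrant_ml * titrant_normality      # milliequivalents of Cl-
    mg_cl = meq_cl * EQ_WT_CL                    # mg chloride liberated
    mg_cl_total = mg_cl / hydrolysis_fraction    # correct for partial hydrolysis
    return mg_cl_total * MW_TCE / (CL_PER_MOLECULE * EQ_WT_CL)

# Example: 2.0 ml of 0.1 N AgNO3 consumed, 60% of the chlorine hydrolyzed.
print(round(tce_mass_mg(2.0, 0.1, 0.60), 1), "mg")
```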
In the colorimetric analytical method based on the Fujiwara reaction, a stream of air containing 1,1,1-trichloroethane is passed through a bottle containing pyridine.
Potassium hydroxide is then added to a portion of the sample, and this mixture is heated in a boiling water bath and cooled during a fixed time period. A portion of the potassium hydroxide solution, to serve as a blank, is similarly heated and cooled. Absorption coefficients of the pyridine layer are determined with a spectrophotometer.
This method requires less time than the dechlorination method, but the problem of specificity with mixtures of chlorinated hydrocarbons remains.
The third chemical method utilizes direct reading detector tubes.
These are glass tubes packed with chemicals that change color when a measured and controlled flow of air containing 1,1,1-trichloroethane passes through the chemical. Depending on the type of detector tube, the air may be drawn directly through the tube and compared with a calibration chart, or the air may be drawn into a pyrolyzer accessory prior to the detection tube. In either case, the analysis is not specific for 1,1,1-trichloroethane, since liberated halide ions produce the stain and any other halogenated compound present can respond.

The atmosphere of relevant working stations must be sampled and must correspond to the breathing zone of the workers at the working stations.

Infrared analysis is subject to interferences from other air contaminants, and these interferences are not easily detected or resolved without substantial knowledge of infrared spectrophotometry.
Gas chromatography provides a quantitative analytical method which can be specific for different chlorinated hydrocarbons.
Every compound has a specific retention time in a given chromatography column, but several compounds in a mixture may have similar retention times.
This problem can be overcome by altering the stationary phase of the chromatography column or by changing the column temperature or other analytical parameters.
Altering conditions usually will change the retention times and separate the components.
A mass spectrometer can be used after gas chromatography to identify more positively the substance present in a gas chromatographic peak. Linked gas chromatograph-mass spectrometer instruments perform this identification automatically. When only unlinked units are available, a charcoal capillary tube has been used to trap and transfer the material associated with a gas chromatographic peak to a mass spectrometer for qualitative identification.
A comparative study of a colorimetric method, a gas chromatographic method, and colorimetric detection tubes for analysis of 1,1,1-trichloroethane was reported in 1970 by Fukabori.
The data are presented in Table XII-11. They suggest that the detector tubes give higher values than the other two methods used.

Although considerable data have been collected, they have not been synthesized into usable form for quantitative evaluation of exposure to 1,1,1-trichloroethane by either breath or urine analysis.
# V. DEVELOPMENT OF STANDARD
# Basis for Previous Standards
The first Threshold Limit Value (TLV) for 1,1,1-trichloroethane was published by the American Conference of Governmental Industrial Hygienists. The USSR MAC is 3.66 ppm (20.0 mg/cu m). The present US federal standard was adopted from "Threshold Limit Values."

Autopsy findings of gross congestion and pulmonary edema have been reported in workers overcome by 1,1,1-trichloroethane exposures. Respiratory irritation has been reported in man and several other species. At 400 ppm, eye, nose, and throat irritation have been experienced by subjects during exposure to 1,1,1-trichloroethane. Varying degrees of lung congestion were found in all species exposed continuously for 90 days to 1,1,1-trichloroethane at 135 ppm.
The authors stated, however, in view of pneumonitis in the surviving animals and in the control group, that no positive conclusions could be drawn connecting the 1,1,1-trichloroethane exposure to the effects. Irritation of the upper respiratory tract was reported among women during occupational exposure to 1,1,1-trichloroethane. No other studies of respiratory disease associated with chronic occupational exposure to 1,1,1-trichloroethane have been reported. The recommended ceiling limit should protect workers from acute irritation effects, but it is not known if it will protect them from chronic respiratory effects.
From the data presented above, it is evident that a ceiling should be placed on occupational exposure to 1,1,1-trichloroethane. Evidence of CNS response at 450 ppm and minimal to no response at 250 to 350 ppm leads to the conclusion that 350 ppm is a reasonable ceiling concentration.
This ceiling will assure a safe TWA as excursions above the action level will not lead to the chronic effects described in humans and animals.
Although information on workers exposed to 1,1,1-trichloroethane for over 6 years is sparse, workers who had experienced TWAs of 217 ppm for up to 6 years showed no adverse effects; thus it is unnecessary to recommend a TWA limit below 350 ppm to prevent chronic effects.
To provide some assurance that the environmental limit is not exceeded, an action level of 200 ppm is recommended.
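A minimal sketch of how the recommended 350-ppm ceiling and 200-ppm action level might be applied to a set of breathing-zone measurements follows; the function, the example figures, and the equal-duration assumption are illustrative, not part of the recommended standard.

```python
CEILING_PPM = 350.0       # recommended ceiling concentration
ACTION_LEVEL_PPM = 200.0  # recommended action level

def evaluate_exposure(samples_ppm):
    """Classify one worker-day of breathing-zone samples.

    Assumes samples of equal duration, so the TWA is a simple mean.
    """
    peak = max(samples_ppm)
    twa = sum(samples_ppm) / len(samples_ppm)
    if peak > CEILING_PPM:
        return "ceiling exceeded"
    if twa > ACTION_LEVEL_PPM:
        return "above action level; full provisions of the standard apply"
    return "below action level"

print(evaluate_exposure([150, 220, 310, 260]))  # TWA 235 ppm, peak 310 ppm
```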
It is recognized that many workers handle small amounts of 1,1,1-trichloroethane or work in situations where, regardless of the amounts used, there is only negligible contact with the substance. Under these conditions, it should not be necessary to comply with all the provisions of this recommended standard. However, concern for worker health requires that protective measures be instituted below the enforceable limit to ensure that exposures stay below that limit.

Airflow through the pump should be within ±5% of the desired rate. New calibration curves should be established for each sampling pump after making any repairs or modifications to the sampling system.
The volumetric flowrate through the sampling system should be spot-checked, and the proper adjustments made, before and during each study to ensure accurate airflow data.

The area of the sample peak is determined, and preliminary sample results are read from a standard curve prepared as discussed below.
# Determination of Desorption Efficiency
It is necessary to determine the percentage of 1,1,1-trichloroethane on the charcoal that is removed in the desorption process. This desorption efficiency is determined once for a given compound provided the same batch of charcoal is always used.
Activated charcoal, equivalent to the amount in the first section of the sampling tube, is used for this determination.

It is convenient to prepare standards in terms of mg 1,1,1-trichloroethane/0.5 ml of carbon disulfide, because samples are desorbed in this amount of carbon disulfide. To minimize error due to the volatility of carbon disulfide, 20 times the weight can be injected into 10 ml of carbon disulfide. For example, to prepare a 0.3 mg/0.5 ml standard, 6.0 mg of 1,1,1-trichloroethane is injected into exactly 10 ml of carbon disulfide in a glass-stoppered flask.
The density of 1,1,1-trichloroethane (1.2528 g/ml) is used to convert 6.0 mg into microliters for easy measurement with a microliter syringe.
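The mass-to-volume conversion is simple arithmetic; a minimal sketch using the density given above:

```python
DENSITY_TCE = 1.2528  # g/ml; equivalently mg/ul

def injection_volume_ul(target_mg):
    """Microliters of 1,1,1-trichloroethane that deliver target_mg."""
    return target_mg / DENSITY_TCE

# 6.0 mg for a 0.3 mg/0.5 ml standard in 10 ml of carbon disulfide:
print(round(injection_volume_ul(6.0), 2), "ul")  # about 4.79 ul
```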
A series of standards is prepared, varying in concentration over the range of interest and analyzed under the same gas chromatographic conditions and during the same time period as the unknown samples. Curves are established by plotting concentration versus average peak area.
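A minimal sketch of establishing such a curve by a least-squares fit; numpy is assumed, and the peak areas shown are invented for illustration. In practice the curve is simply plotted and read as described.

```python
import numpy as np

# Standards: mg of 1,1,1-trichloroethane per 0.5 ml of carbon disulfide,
# with the average peak area measured for each (areas are invented).
conc_mg = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
peak_area = np.array([1020.0, 2010.0, 3050.0, 4000.0, 5080.0])

# Fit area = slope * concentration + intercept, then invert to read unknowns.
slope, intercept = np.polyfit(conc_mg, peak_area, 1)

def mg_from_area(area):
    """Weight (mg/0.5 ml CS2) corresponding to a sample peak area."""
    return (area - intercept) / slope

print(round(mg_from_area(2500.0), 3), "mg")
```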
# Calculations
(a) The weight in mg corresponding to the peak area is read from the standard curve. No volume corrections are needed, because the standard curve is based on mg 1,1,1-trichloroethane/0.5 ml carbon disulfide, and the volume of sample injected is identical to the volume of the standards injected.
(b) Separately determine the weights of 1,1,1-trichloroethane on the front and reserve sections of the charcoal tube.
(c) Corrections must be made to the 1,1,1-trichloroethane weights determined on both the front and reserve sections for the weights of the respective sections of the blank charcoal tube.
(1) Subtract the weight of 1,1,1-trichloroethane found on the front section of the blank charcoal tube from the weight of 1,1,1-trichloroethane found on the front section of the sample charcoal tube.
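The steps that follow item (1) are not reproduced above. The sketch below completes the calculation along the usual pattern for charcoal-tube methods (blank-correct both sections, sum, divide by the desorption efficiency, then divide by the volume of air sampled), so everything past the front-section blank correction should be read as an assumption. The ppm conversion uses 24.45 l/mole at 25 C and 760 mmHg with a molecular weight of 133.40 for 1,1,1-trichloroethane.

```python
MW_TCE = 133.40    # g/mol, 1,1,1-trichloroethane
MOLAR_VOL = 24.45  # l/mol at 25 C and 760 mmHg

def concentration(front_mg, reserve_mg, blank_front_mg, blank_reserve_mg,
                  desorption_eff, air_volume_l):
    """Return (mg/cu m, ppm) for one charcoal-tube sample."""
    corrected_mg = (front_mg - blank_front_mg) + (reserve_mg - blank_reserve_mg)
    total_mg = corrected_mg / desorption_eff      # correct for incomplete desorption
    mg_per_cu_m = total_mg / (air_volume_l / 1000.0)
    ppm = mg_per_cu_m * MOLAR_VOL / MW_TCE
    return mg_per_cu_m, ppm

# Illustrative figures: 10 l of air sampled, 95% desorption efficiency.
mg_cu_m, ppm = concentration(1.80, 0.05, 0.02, 0.01, 0.95, 10.0)
print(f"{mg_cu_m:.0f} mg/cu m, {ppm:.0f} ppm")
```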
The authors did not give any additional information about these incidents. However, Irish considered that the concentrations may have been near saturation, approximately 167,000 ppm. He gave the exposure time of one
Immediately before sampling, break both ends of the tube to provide openings at least one-half the internal diameter of the tube (2 mm).
The smaller section of charcoal is used as a reserve and should be positioned nearest the sampling pump.
(c) A recorder and some method for determining peak area.
(d) Glass-stoppered microtubes of 2.5-ml capacity or 2-ml vials that can be sealed with inert caps.
(e) Microsyringe of 10-µl capacity, and convenient sizes for making standards.
(f) Pipets, 0.5-ml delivery pipets or 1.0-ml pipets graduated in 0.1-ml increments.
(g) Volumetric flasks of 10-ml capacity or convenient sizes for making standard solutions.
# Reagents

(a) "Spectroquality" carbon disulfide.
(b) 1,1,1-Trichloroethane, preferably "chromatoquality" grade.
EXTREME CAUTION MUST BE EXERCISED AT ALL TIMES WHEN USING CARBON DISULFIDE BECAUSE OF ITS HIGH TOXICITY AND FIRE AND EXPLOSION HAZARDS. IT CAN BE IGNITED BY HOT STEAM PIPES. ALL WORK WITH CARBON DISULFIDE MUST BE PERFORMED UNDER AN EXHAUST HOOD.

(c) Typical chromatographic operating conditions:
(1) 50 ml/min (70 psig) helium carrier gas flow.
(2) 65 ml/min (24 psig) hydrogen gas flow to detector.
(3) 500 ml/min (50 psig) airflow to detector.

"safety solvent," or "aliphatic hydrocarbon" when the specific name is known.
The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt" to avoid disclosure of trade secrets.
The Centers for Disease Control and Prevention published Guidelines for Preventing the Transmission of Tuberculosis in Health-Care Settings, with Special Focus on HIV-Related Issues1 in December 1990. Partially in response to recent reports of TB outbreaks and transmission of Mycobacterium tuberculosis in institutional settings,2-7 the Federal TB Task Force called for the update and revision of the CDC guidelines. On Oct. 28, 1994, the CDC published Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health Care Facilities, 1994.8 At an October 1992 public meeting about the 1990 TB infection control guidelines, the CDC received suggestions that infection control policies should be based, in part, on the level of risk in each local facility. As a consequence, the revised guidelines direct personnel in all health care facilities to conduct a TB risk assessment. This risk assessment will allow health care personnel to implement TB infection control programs appropriate to their facility's level of risk of M. tuberculosis transmission. It also will provide greater flexibility in adapting the recommended controls to a wide variety of health care facilities, such as dental settings, in which few or no patients with TB are examined or treated.
The guidelines emphasize basing infection control policies on a hierarchy of control measures, including administrative controls, engineering controls and personal respiratory protection (Table 1); developing, implementing and maintaining a written TB infection control plan based on a risk assessment; providing TB training, education, counseling and screening to health care workers (HCWs); and evaluating TB infection control programs.
This article should assist dental workers (DWs) in conducting a risk assessment and in identifying TB infection control interventions appropriate to the level of risk for M. tuberculosis transmission in the dental facility.
Although the CDC guidelines are directed primarily at in-patient health care facilities, specific considerations for ambulatory care settings-such as dental offices-are discussed in a separate section [Section II.M.2.e] (see sidebar, "CDC Guidelines for Dental Care Settings," page 600) with appropriate references to other parts of the guidelines. In this article, references to specific sections of the CDC guidelines are indicated in brackets. The term "HCWs" refers to all persons working in health care settings who could be exposed to TB. The terms dental facility, dental setting and dental office are used interchangeably and refer to any location, whether a private practice or part of a larger institution, in which dental treatment is provided.
# TRANSMISSION AND PATHOGENESIS [SECTION I.B]
TB is a clinical illness caused by the bacterium M. tuberculosis.
Effective prevention and control of M. tuberculosis is based on a clear understanding of how TB is transmitted, how infection becomes established and how infection progresses to clinical disease.
M. tuberculosis is spread through airborne particles, known as droplet nuclei, that can be generated when persons with pulmonary or laryngeal TB sneeze, cough, speak or sing. The particles are an estimated 1 to 5 microns in size, and normal air currents can keep them airborne for prolonged periods and spread them throughout a room or building.9 If a person inhales droplet nuclei containing TB bacilli, infection may begin if the bacilli reach the alveoli. Within two to 10 weeks, the body's immunologic response to TB bacilli usually prevents further multiplication and spread. This condition is referred to as latent TB infection.
Persons with latent TB infection usually have a positive skin test reaction to tuberculin purified protein derivative (PPD), do not have active TB and cannot infect others. In about 90 percent of Americans infected with TB, the infection remains latent for life, with no progression to active TB. In the United States, active TB disease will develop in the first year or two after infection with M. tuberculosis in about 5 percent of people who have been recently infected. In another 5 percent, disease will develop later in life. The risk of developing active TB varies with age and immunologic status.10 In general, people not receiving treatment who have or who are suspected of having pulmonary or laryngeal TB should be considered infectious if they are coughing, are undergoing cough-inducing or aerosol-generating procedures or have sputum smears that are positive for acid-fast bacilli. Persons with extrapulmonary TB are not considered infectious unless they have concomitant pulmonary disease, nonpulmonary disease located in the respiratory tract or oral cavity or extrapulmonary disease that includes an open abscess or lesion. Coinfection with human immunodeficiency virus does not appear to affect the infectiousness of patients with TB.
Hierarchy of control measures. A facility's TB infection control program should be based on a hierarchy of control measures (Table 1) and should be appropriate for the facility's level of risk of M. tuberculosis transmission (Table 2). The first level in the hierarchy is the use of administrative controls, which are intended primarily to reduce the risk of exposing uninfected people to people who have infectious TB. These measures include developing and implementing policies and protocols for early identification and isolation of patients suspected of having active TB or, in facilities that do not have TB isolation capabilities, for referral to a collaborating facility that does have such capabilities. Other administrative measures include ensuring the use of effective work practices, performing PPD skin testing on HCWs and educating HCWs about TB.
The second level of the hierarchy includes engineering controls to prevent the spread and reduce the concentration of infectious droplet nuclei. These controls may include the use of local exhaust devices (such as booths or hoods), the dilution and removal of contaminated air via general ventilation, controlling the direction of airflow and cleaning air by filtration or-as an adjunct to ventilation-by ultraviolet germicidal irradiation.

In all dental settings, an initial or baseline risk assessment should be conducted by a designated person who understands the basic principles of infection control and can adapt TB infection control recommendations according to the facility's level of risk for M. tuberculosis transmission. In institutional dental settings, this person may be the hospital epidemiologist, an infection control practitioner or an occupational health specialist. In non-institutional dental settings, such as private dental offices, the risk assessment usually is conducted by the dentist or another staff member who has been assigned responsibility for the office's infection control program. Sources for training specific to TB infection control risk assessment may include the local public health department, state or local dental society, academic institutions, a local infection control organization or an infection control practitioner in a local hospital. The guidelines recommend that the risk assessment should be conducted for each area of the facility. Since most dental settings constitute an entire "area," a single risk assessment should be sufficient.
Risk categories. Each dental facility should be classified into one of five defined categories of risk for M. tuberculosis transmission: minimal, very low, low, intermediate or high (Table 2). This classification should be based on three factors. The first factor is the incidence of active TB in the community or county served by the facility. This information should be available from the local or county public health department. The second factor is the number of patients with active TB who received dental treatment in the facility during the preceding 12 months (this does not include patients who were promptly screened and referred). The third factor is possible transmission of M. tuberculosis in the facility as evidenced by PPD skin test conversions among DWs or evidence of transmission between patients in the dental setting.
Minimal risk. Most dental settings probably will fall into the minimal- or very-low-risk categories. A dental setting at minimal risk is one that does not provide treatment to patients with active TB and is located in a community in which no TB cases have been reported during the previous 12 months.
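As an illustration only, the decision logic of this three-factor classification might be sketched as follows. Only the minimal-risk criteria are stated explicitly in the text above; the cut-points for the other categories are invented placeholders, and Table 2 of the CDC guidelines governs the actual classification.

```python
def dental_risk_category(community_cases_12mo, tb_patients_treated_12mo,
                         evidence_of_transmission):
    """Rough sketch of the three-factor classification; Table 2 governs."""
    if evidence_of_transmission:
        # e.g., PPD conversions above baseline or patient-to-patient spread
        return "high: investigate and upgrade the infection control program"
    if community_cases_12mo == 0 and tb_patients_treated_12mo == 0:
        return "minimal"
    # The cut-points below are placeholders, not the Table 2 criteria.
    if tb_patients_treated_12mo == 0:
        return "very low"
    if tb_patients_treated_12mo < 6:
        return "low"
    return "intermediate"

print(dental_risk_category(0, 0, False))  # minimal
```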
Each dental office in the minimal- or very-low-risk category should identify collaborating facilities that are capable of evaluating, managing and/or providing urgent dental treatment to patients with suspected or active TB. These facilities should be identified before DWs or patients are placed at risk of possible exposure to M. tuberculosis so that patients suspected of having TB may be referred promptly. The risk assessment should be used to determine the need for a DW skin-testing program and the appropriate frequency of such testing. Baseline PPD skin testing of all DWs, performed by trained personnel, is recommended at the beginning of employment for facilities in all risk categories except minimal risk. However, baseline PPD skin testing of DWs in the minimal-risk category may be advisable so that if an unprotected exposure
"id": "79ea54959a3882cf419ce5510e35454929018627",
"source": "cdc",
"title": "None",
"url": "None"
} |
Centers for Disease Control and Prevention. Updated guidelines for evaluating public health surveillance systems: recommendations from the guidelines working group. MMWR 2001;50(No. RR-13).

# INTRODUCTION
In 1988, CDC published Guidelines for Evaluating Surveillance Systems (1 ) to promote the best use of public health resources through the development of efficient and effective public health surveillance systems. CDC's Guidelines for Evaluating Surveillance Systems are being updated to address the need for a) the integration of surveillance and health information systems, b) the establishment of data standards, c) the electronic exchange of health data, and d) changes in the objectives of public health surveillance to facilitate the response of public health to emerging health threats (e.g., new diseases). For example, CDC, with the collaboration of state and local health departments, is implementing the National Electronic Disease Surveillance System (NEDSS) to better manage and enhance the large number of current surveillance systems and allow the public health community to respond more quickly to public health threats (e.g., outbreaks of emerging infectious diseases and bioterrorism) (2 ). When NEDSS is completed, it will electronically integrate and link together several types of surveillance systems with the use of standard data formats; a communications infrastructure built on principles of public health informatics; and agreements on data access, sharing, and confidentiality. In addition, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) mandates that the United States adopt national uniform standards for electronic transactions related to health insurance enrollment and eligibility, health-care encounters, and health insurance claims; for identifiers for health-care providers, payers and individuals, as well as code sets and classification systems used in these transactions; and for security of these transactions (3 ). The electronic exchange of health data inherently involves the protection of patient privacy.
Based on CDC's Framework for Program Evaluation in Public Health (4 ), research and discussion of concerns related to public health surveillance systems, and comments received from the public health community, this report provides updated guidelines for evaluating public health surveillance systems.
# BACKGROUND
Public health surveillance is the ongoing, systematic collection, analysis, interpretation, and dissemination of data regarding a health-related event for use in public health action to reduce morbidity and mortality and to improve health (5)(6)(7). Data disseminated by a public health surveillance system can be used for immediate public health action, program planning and evaluation, and formulating research hypotheses. For example, data from a public health surveillance system can be used to

- guide immediate action for cases of public health importance;
- measure the burden of a disease (or other health-related event), including changes in related factors, the identification of populations at high risk, and the identification of new or emerging health concerns;
- monitor trends in the burden of a disease (or other health-related event), including the detection of epidemics (outbreaks) and pandemics;
- guide the planning, implementation, and evaluation of programs to prevent and control disease, injury, or adverse exposure;
- evaluate public policy;
- detect changes in health practices and the effects of these changes;
- prioritize the allocation of health resources;
- describe the clinical course of disease; and
- provide a basis for epidemiologic research.
Public health surveillance activities are generally authorized by legislators and carried out by public health officials. Public health surveillance systems have been developed to address a range of public health needs. In addition, public health information systems have been defined to include a variety of data sources essential to public health action and are often used for surveillance (8 ). These systems vary from a simple system collecting data from a single source, to electronic systems that receive data from many sources in multiple formats, to complex surveys. The number and variety of systems will likely increase with advances in electronic data interchange and integration of data, which will also heighten the importance of patient privacy, data confidentiality, and system security. Appropriate institutions, agencies, and scientific officials should be consulted for any projects regarding public health surveillance.
Variety might also increase with the range of health-related events under surveillance. In these guidelines, the term "health-related event" refers to any subject related to a public health surveillance system. For example, a health-related event could include infectious, chronic, or zoonotic diseases; injuries; exposures to toxic substances; health promoting or damaging behaviors; and other surveilled events associated with public health action.
The purpose of evaluating public health surveillance systems is to ensure that problems of public health importance are being monitored efficiently and effectively. Public health surveillance systems should be evaluated periodically, and the evaluation should include recommendations for improving quality, efficiency, and usefulness. The goal of these guidelines is to organize the evaluation of a public health surveillance system. Broad topics are outlined into which program-specific qualities can be integrated. Evaluation of a public health surveillance system focuses on how well the system operates to meet its purpose and objectives.
The evaluation of public health surveillance systems should involve an assessment of system attributes, including simplicity, flexibility, data quality, acceptability, sensitivity, predictive value positive, representativeness, timeliness, and stability. With the continuing advancement of technology and the importance of information architecture and related concerns, inherent in these attributes are certain public health informatics concerns for public health surveillance systems. These concerns include comparable hardware and software, standard user interface, standard data format and coding, appropriate quality checks, and adherence to confidentiality and security standards (9 ). Because public health surveillance systems vary in methods, scope, purpose, and objectives, attributes that are important to one system might be less important to another. A public health surveillance system should emphasize those attributes that are most important for the objectives of the system. Efforts to improve certain attributes (e.g., the ability of a public health surveillance system to detect a health-related event) might detract from other attributes (e.g., simplicity or timeliness). An evaluation of the public health surveillance system must therefore consider those attributes that are of the highest priority for a given system and its objectives. Considering the attributes that are of the highest priority, the guidelines in this report describe many tasks and related activities that can be applied in the evaluation of public health surveillance systems, with the understanding that all activities under the tasks might not be appropriate for all systems.
# Organization of This Report
This report begins with descriptions of each of the tasks involved in evaluating a public health surveillance system. These tasks are adapted from the steps in program evaluation in the Framework for Program Evaluation in Public Health (4 ) as well as from the elements in the original guidelines for evaluating surveillance systems (1 ). The report concludes with a summary statement regarding evaluating surveillance systems. A checklist that can be detached or photocopied and used when the evaluation is implemented is also included (Appendix A).
To assess the quality of the evaluation activities, relevant standards are provided for each of the tasks for evaluating a public health surveillance system (Appendix B). These standards are adapted from the standards for effective evaluation (i.e., utility, feasibility, propriety, and accuracy) in the Framework for Program Evaluation in Public Health (4 ). Because all activities under the evaluation tasks might not be appropriate for all systems, only those standards that are appropriate to an evaluation should be used.
# Task A. Engage the Stakeholders in the Evaluation
Stakeholders can provide input to ensure that the evaluation of a public health surveillance system addresses appropriate questions and assesses pertinent attributes and that its findings will be acceptable and useful. In that context, we define stakeholders as those persons or organizations who use data for the promotion of healthy lifestyles and the prevention and control of disease, injury, or adverse exposure. Those stakeholders who might be interested in defining questions to be addressed by the surveillance system evaluation and subsequently using the findings from it are public health practitioners; health-care providers; data providers and users; representatives of affected communities; governments at the local, state, and federal levels; and professional and private nonprofit organizations.
# Task B. Describe the Surveillance System to be Evaluated

# Activities
- Describe the public health importance of the health-related event under surveillance.
- Describe the purpose and operation of the system.
- Describe the resources used to operate the system.
# Discussion
To construct a balanced and reliable description of the system, multiple sources of information might be needed. The description of the system can be improved by consulting with a variety of persons involved with the system and by checking reported descriptions of the system against direct observation.
# B.1. Describe the Public Health Importance of the Health-Related Event Under Surveillance
Definition. The public health importance of a health-related event and the need to have that event under surveillance can be described in several ways. Health-related events that affect many persons or that require large expenditures of resources are of public health importance. However, health-related events that affect few persons might also be important, especially if the events cluster in time and place (e.g., a limited outbreak of a severe disease). In other instances, public concerns might focus attention on a particular health-related event, creating or heightening the importance of an evaluation. Diseases that are now rare because of successful control measures might be perceived as unimportant, but their level of importance should be assessed as a possible sentinel health-related event or for their potential to reemerge. Finally, the public health importance of a health-related event is influenced by its level of preventability (10 ).
Measures. Parameters for measuring the importance of a health-related event-and therefore the public health surveillance system with which it is monitored-can include the following (7 ) (a small calculation sketch for the simplest indices follows this list):

- indices of frequency (e.g., the total number of cases and/or deaths; incidence rates, prevalence, and/or mortality rates); and summary measures of population health status (e.g., quality-adjusted life years [QALYs]);
- indices of severity (e.g., bed-disability days, case-fatality ratio, and hospitalization rates and/or disability rates);
- disparities or inequities associated with the health-related event;
- costs associated with the health-related event;
- preventability (10 );
- potential clinical course in the absence of an intervention (e.g., vaccinations) (11,12 ); and
- public interest.
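To make the frequency and severity indices concrete, here is a minimal sketch of two of the simplest calculations; the figures are invented for illustration.

```python
def incidence_rate(new_cases, population, per=100_000):
    """New cases per `per` persons for the period observed."""
    return new_cases / population * per

def case_fatality_ratio(deaths, cases):
    """Deaths among identified cases, as a percentage."""
    return deaths / cases * 100

# Invented figures for illustration:
print(round(incidence_rate(150, 2_500_000), 1), "per 100,000")  # 6.0
print(round(case_fatality_ratio(12, 150), 1), "%")              # 8.0
```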
Efforts have been made to provide summary measures of population health status that can be used to make comparative assessments of the health needs of populations (13 ). Perhaps the best known of these measures are QALYs, years of healthy life (YHLs), and disability-adjusted life years (DALYs). Based on attributes that represent health status and life expectancy, QALYs, YHLs, and DALYs provide one-dimensional measures of overall health. In addition, attempts have been made to quantify the public health importance of various diseases and other health-related events. In a study that describes such an approach, a score was used that takes into account age-specific morbidity and mortality rates as well as health-care costs (14 ). Another study used a model that ranks public health concerns according to size, urgency, severity of the problem, economic loss, effect on others, effectiveness, propriety, economics, acceptability, legality of solutions, and availability of resources (15 ).
Preventability can be defined at several levels, including primary prevention (preventing the occurrence of disease or other health-related event), secondary prevention (early detection and intervention with the aim of reversing, halting, or at least retarding the progress of a condition), and tertiary prevention (minimizing the effects of disease and disability among persons already ill). For infectious diseases, preventability can also be described as reducing the secondary attack rate or the number of cases transmitted to contacts of the primary case. From the perspective of surveillance, preventability reflects the potential for effective public health intervention at any of these levels.
# B.2. Describe the Purpose and Operation of the Surveillance System
Methods. Methods for describing the operation of the public health surveillance system include

- List the purpose and objectives of the system.
- Describe the planned uses of the data from the system.
- Describe the health-related event under surveillance, including the case definition for each specific condition.
- Cite any legal authority for the data collection.
- Describe where in the organization(s) the system resides, including the context (e.g., the political, administrative, geographic, or social climate) in which the system evaluation will be done.
- Describe the level of integration with other systems, if appropriate.
- Draw a flow chart of the system.
- Describe the components of the system. For example:
-What is the population under surveillance?
-What is the period of time of the data collection?
-What data are collected and how are they collected?
-What are the reporting sources of data for the system?
-How are the system's data managed (e.g., the transfer, entry, editing, storage, and backup of data)? Does the system comply with applicable standards for data formats and coding schemes? If not, why?
-How are the system's data analyzed and disseminated?
-What policies and procedures are in place to ensure patient privacy, data confidentiality, and system security? What is the policy and procedure for releasing data? Do these procedures comply with applicable federal and state statutes and regulations? If not, why?
-Does the system comply with an applicable records management program? For example, are the system's records properly archived and/or disposed of?
Discussion. The purpose of the system indicates why the system exists, whereas its objectives relate to how the data are used for public health action. The objectives of a public health surveillance system, for example, might address immediate public health action, program planning and evaluation, and formation of research hypotheses (see Background). The purpose and objectives of the system, including the planned uses of its data, establish a frame of reference for evaluating specific components.
A public health surveillance system is dependent on a clear case definition for the health-related event under surveillance (7 ). The case definition of a health-related event can include clinical manifestations (i.e., symptoms), laboratory results, epidemiologic information (e.g., person, place, and time), and/or specified behaviors, as well as levels of certainty (e.g., confirmed/definite, probable/presumptive, or possible/suspected). The use of a standard case definition increases the specificity of reporting and improves the comparability of the health-related event reported from different sources of data, including geographic areas. Case definitions might exist for a variety of health-related events under surveillance, including diseases, injuries, adverse exposures, and risk factor or protective behaviors. For example, in the United States, CDC and the Council of State and Territorial Epidemiologists (CSTE) have agreed on standard case definitions for selected infectious diseases (16 ). In addition, CSTE publishes Position Papers that discuss and define a variety of health-related events (17 ). When possible, a public health surveillance system should use an established case definition, and if it does not, an explanation should be provided.
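Where case reports are processed electronically, a case definition's levels of certainty can be encoded directly. The sketch below uses generic placeholder criteria, not any actual CDC/CSTE definition.

```python
def classify_case(lab_confirmed, clinically_compatible, epi_linked):
    """Assign a level of certainty under a generic case definition.

    The hierarchy (confirmed > probable > suspected) mirrors the levels
    of certainty described above; real definitions are disease-specific
    and published by CDC/CSTE.
    """
    if lab_confirmed and clinically_compatible:
        return "confirmed"
    if clinically_compatible and epi_linked:
        return "probable"
    if clinically_compatible:
        return "suspected"
    return "not a case"

print(classify_case(False, True, True))  # probable
```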
The evaluation should assess how well the public health surveillance system is integrated with other surveillance and health information systems (e.g., data exchange and sharing in multiple formats, and transformation of data). Streamlining related systems into an integrated public health surveillance network enables individual systems to meet specific data collection needs while avoiding the duplication of effort and lack of standardization that can arise from independent systems (18 ). An integrated system can address comorbidity concerns (e.g., persons infected with human immunodeficiency virus and Mycobacterium tuberculosis ); identify previously unrecognized risk factors; and provide the means for monitoring additional outcomes from a healthrelated event. When CDC's NEDSS is completed, it will electronically integrate and link together several types of surveillance activities and facilitate more accurate and timely reporting of disease information to CDC and state and local health departments (2 ).
CSTE has organized professional discussion among practicing public health epidemiologists at state and federal public health agencies. CSTE has also proposed a national public health surveillance system to serve as a basis for local and state public health agencies to a) prioritize surveillance and health information activities and b) advocate for necessary resources for public health agencies at all levels (19 ). This national public health system would be a conceptual framework and virtual surveillance system that incorporates both existing and new surveillance systems for healthrelated events and their determinants.
Listing the discrete steps that are taken in processing the health-event reports by the system and then depicting these steps in a flow chart is often useful. An example of a simplified flow chart for a generic public health surveillance system is included in this report (Figure 1). The mandates and business processes of the lead agency that operates the system and the participation of other agencies could be included in this chart. The architecture and data flow of the system can also be depicted in the chart (20,21 ). A chart of architecture and data flow should be sufficiently detailed to explain all of the functions of the system, including average times between steps and data transfers.
The description of the components of the public health surveillance system could include discussions related to public health informatics concerns, including comparable hardware and software, standard user interface, standard data format and coding, appropriate quality checks, and adherence to confidentiality and security standards (9 ). For example, comparable hardware and software, standard user interface, and standard data format and coding facilitate efficient data exchange, and a set of common data elements are important for effectively matching data within the system or to other systems.
To document the information needs of public health, CDC, in collaboration with state and local health departments, is developing the Public Health Conceptual Data Model to a) establish data standards for public health, including data definitions, component structures (e.g., for complex data types), code values, and data use; b) collaborate with national health informatics standard-setting bodies to define standards for the exchange of information among public health agencies and health-care providers; and c) construct computerized information systems that conform to established data and data interchange standards for use in the management of data relevant to public health (22 ). In addition, the description of the system's data management might address who is editing the data, how and at what levels the data are edited, and what checks are in place to ensure data quality.
In response to HIPAA mandates, various standard development organizations and terminology and coding groups are working collaboratively to harmonize their separate systems (23 ). For example, both the Accredited Standards Committee X12 (24 ), which has dealt principally with standards for health insurance transactions, and Health Level Seven (HL7) (25 ), which has dealt with standards for clinical messaging and exchange of clinical information with health-care organizations (e.g., hospitals), have collaborated on a standardized approach for providing supplementary information to support health-care claims (26 ). In the area of classification and coding of diseases and other medical terms, the National Library of Medicine has traditionally provided the Unified Medical Language System, a metathesaurus for clinical coding systems that allows terms in one coding system to be mapped to another (27 ). The passage of HIPAA and the anticipated adoption of standards for electronic medical records have increased efforts directed toward the integration of clinical terminologies ( 23) (e.g., the merge of the College of American Pathologists' Systematized Nomenclature of Medicine and the British Read Codes, the National Health Service thesaurus of health-care terms in Great Britain).
The data analysis description might indicate who analyzes the data, how they are analyzed, and how often. This description could also address how the system ensures that appropriate scientific methods are used to analyze the data.
The public health surveillance system should operate in a manner that allows effective dissemination of health data so that decision makers at all levels can readily understand the implications of the information (7 ). Options for disseminating data and/or information from the system include electronic data interchange; public-use data files; the Internet; press releases; newsletters; bulletins; annual and other types of reports; publication in scientific, peer-reviewed journals; and poster and oral presentations, including those at individual, community, and professional meetings. The audiences for health data and information can include public health practitioners, health-care providers, members of affected communities, professional and voluntary organizations, policymakers, the press, and the general public.
In conducting surveillance, public health agencies are authorized to collect personal health data about persons and thus have an obligation to protect against inappropriate use or release of that data. The protection of patient privacy (recognition of a person's right not to share information about him or herself), data confidentiality (assurance of authorized data sharing), and system security (assurance of authorized system access) is essential to maintaining the credibility of any surveillance system. This protection must ensure that data in a surveillance system regarding a person's health status are shared only with authorized persons. Physical, administrative, operational, and computer safeguards for securing the system and protecting its data must allow authorized access while denying access by unauthorized users.
A related concern in protecting health data is data release, including procedures for releasing record-level data; aggregate tabular data; and data in computer-based, interactive query systems. Even though personal identifiers are removed before data are released, the removal of these identifiers might not be a sufficient safeguard for sharing health data. For example, the inclusion of demographic information in a line-listed data file for a small number of cases could lead to indirect identification of a person even though personal identifiers were not provided. In the United States, CDC and CSTE have negotiated a policy for the release of data from the National Notifiable Disease Surveillance System (29 ) to facilitate its use for public health while preserving the confidentiality of the data (30 ). The policy is being evaluated for revision by CDC and CSTE.
Standards for the privacy of individually identifiable health data have been proposed in response to HIPAA (3 ). A model state law has been composed to address privacy, confidentiality, and security concerns arising from the acquisition, use, disclosure, and storage of health information by public health agencies at the state and local levels (31 ). In addition, the Federal Committee on Statistical Methodology's series of Statistical Policy Working Papers includes reviews of statistical methods used by federal agencies and their contractors that release statistical tables or microdata files that are collected from persons, businesses, or other units under a pledge of confidentiality. These working papers contain basic statistical methods to limit disclosure (e.g., rules for data suppression to protect privacy and to minimize mistaken inferences from small numbers) and provide recommendations for improving disclosure limitation practices (32 ).
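As one concrete illustration of a disclosure limitation method of the kind these working papers describe, small cells in a released table can be suppressed below a minimum count. The following Python sketch shows a simple threshold-based suppression; the threshold of 5, the table format, and the function name are illustrative assumptions, not a CDC or Federal Committee specification.

```python
def suppress_small_cells(table: dict[str, int], min_count: int = 5) -> dict[str, object]:
    """Withhold counts below the threshold so they are not released.
    Complementary suppression (hiding additional cells so a suppressed
    value cannot be recovered from marginal totals) is not shown here.
    """
    return {k: (v if v >= min_count else "suppressed") for k, v in table.items()}

counts = {"county A": 12, "county B": 3, "county C": 27}
print(suppress_small_cells(counts))
# {'county A': 12, 'county B': 'suppressed', 'county C': 27}
```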
A public health surveillance system might be legally required to participate in a records management program. Records can consist of a variety of materials (e.g., completed forms, electronic files, documents, and reports) that are connected with operating the surveillance system. The proper management of these records prevents a "loss of memory" or "cluttered memory" for the agency that operates the system, and enhances the system's ability to meet its objectives.
# B.3. Describe the Resources Used to Operate the Surveillance System
Definition. In this report, the methods for assessing resources cover only those resources directly required to operate a public health surveillance system. These resources are sometimes referred to as "direct costs" and include the personnel and financial resources expended in operating the system.
Methods. In describing these resources, consider the following:
- Funding source(s): Specify the source of funding for the surveillance system. In the United States, public health surveillance often results from a collaboration among federal, state, and local governments.
- Personnel requirements: Estimate the time it takes to operate the system, including the collection, editing, analysis, and dissemination of data (e.g., person-time expended per year of operation). These measures can be converted to dollar estimates by multiplying the person-time by appropriate salary and benefit costs.
- Other resources: Determine the cost of other resources, including travel, training, supplies, computer and other equipment, and related services (e.g., mail, telephone, computer support, Internet connections, laboratory support, and hardware and software maintenance).
When appropriate, the description of the system's resources should consider all levels of the public health system, from the local health-care provider to municipal, county, state, and federal health agencies. Resource estimation has been conducted for public health surveillance systems in Vermont (Table 1) and Kentucky (Table 2).
Resource Estimation in Vermont. Two methods of collecting public health surveillance data in Vermont were compared (33 ). The passive system was already in place and consisted of unsolicited reports of notifiable diseases to the district offices or state health department. The active system was implemented in a probability sample of physician practices. Each week, a health department employee called these practitioners to solicit reports of selected notifiable diseases.
In comparing the two systems, an attempt was made to estimate their costs. The estimates of direct expenses were computed for the public health surveillance systems (Table 1).
Resource Estimation in Kentucky. Another example of resource estimation was provided by an assessment of the costs of a public health surveillance system involving the active solicitation of case reports of hepatitis A in Kentucky (Table 2) (34). The resources invested in the direct operation of the system in 1983 were for personnel and telephone expenses, estimated at $3,764 and $535, respectively. Nine more cases were found through this system than would have been found through the passive surveillance system, and an estimated seven hepatitis cases were prevented through administering prophylaxis to the contacts of the nine case-patients.
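Because these figures reduce to simple arithmetic, the calculation can be made explicit in a short script. The following Python sketch is illustrative only: the function and variable names are ours, not part of the cited study, and the cost-per-outcome helper merely restates total direct cost divided by a count of outcomes.

```python
def cost_per_outcome(direct_costs: list[float], outcomes: int) -> float:
    """Total direct cost divided by the count of outcomes
    (e.g., additional cases found, or cases prevented)."""
    return sum(direct_costs) / outcomes

# Kentucky hepatitis A active surveillance, 1983 (figures from the text above).
costs = [3764.0, 535.0]                   # personnel and telephone expenses, USD
print(sum(costs))                          # 4299.0 total direct cost
print(round(cost_per_outcome(costs, 9)))   # ~478 per additional case found
print(round(cost_per_outcome(costs, 7)))   # ~614 per case prevented
```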
Discussion. This approach to assessing resources includes only those personnel and material resources required for the operation of surveillance and excludes a broader definition of costs that might be considered in a more comprehensive evaluation. For example, the assessment of resources could include the estimation of indirect costs (e.g., follow-up laboratory tests) and costs of secondary data sources (e.g., vital statistics or survey data).
The assessment of the system's operational resources should not be done in isolation from the program or initiative that relies on the public health surveillance system. A more formal economic evaluation of the system (i.e., judging costs relative to benefits) could be included with the resource description. Estimating the effect of the system on decision making, treatment, care, prevention, education, and/or research might be possible (35,36). For some surveillance systems, however, a more realistic approach would be to judge costs based on the objectives and usefulness of the system.
# Task C. Focus the Evaluation Design

# Definition
The direction and process of the evaluation must be focused to ensure that time and resources are used as efficiently as possible.
# Methods
Focusing the evaluation design for a public health surveillance system involves
- determining the specific purpose of the evaluation (e.g., a change in practice);
- identifying stakeholders (Task A) who will receive the findings and recommendations of the evaluation (i.e., the intended users);
- considering what will be done with the information generated from the evaluation (i.e., the intended uses);
- specifying the questions that will be answered by the evaluation; and
- determining standards for assessing the performance of the system.
# Discussion
Depending on the specific purpose of the evaluation, its design could be straightforward or complex. An effective evaluation design is contingent upon a) its specific purpose being understood by all of the stakeholders in the evaluation and b) persons who need to know the findings and recommendations of the design being committed to using the information generated from it. In addition, when multiple stakeholders are involved, agreements that clarify roles and responsibilities might need to be established among those who are implementing the evaluation.
Standards for assessing how the public health surveillance system performs establish what the system must accomplish to be considered successful in meeting its objectives. These standards specify, for example, what levels of usefulness and simplicity are relevant for the system, given its objectives. Approaches to setting useful standards for assessing the system's performance include a review of current scientific literature on the health-related event under surveillance and/or consultation with appropriate specialists, including users of the data.
# Task D. Gather Credible Evidence Regarding the Performance of the Surveillance System
# Activities
- Indicate the level of usefulness by describing the actions taken as a result of analysis and interpretation of the data from the public health surveillance system.
Characterize the entities that have used the data to make decisions and take actions. List other anticipated uses of the data.
- Describe each of the following system attributes:
-Simplicity
-Flexibility
-Data quality
-Acceptability
-Sensitivity
-Predictive value positive
-Representativeness
-Timeliness
-Stability
# Discussion
Public health informatics concerns for public health surveillance systems (see Task B.2, Discussion) can be addressed in the evidence gathered regarding the performance of the system. Evidence of the system's performance must be viewed as credible. For example, the gathered evidence must be reliable, valid, and informative for its intended use. Many potential sources of evidence regarding the system's performance exist, including consultations with physicians, epidemiologists, statisticians, behavioral scientists, public health practitioners, laboratory directors, program managers, data providers, and data users.
# D.1. Indicate the Level of Usefulness
Definition. A public health surveillance system is useful if it contributes to the prevention and control of adverse health-related events, including an improved understanding of the public health implications of such events. A public health surveillance system can also be useful if it helps to determine that an adverse health-related event previously thought to be unimportant is actually important. In addition, data from a surveillance system can be useful in contributing to performance measures (37 ), including health indicators (38 ) that are used in needs assessments and accountability systems.
Methods. An assessment of the usefulness of a public health surveillance system should begin with a review of the objectives of the system and should consider the system's effect on policy decisions and disease-control programs. Depending on the objectives of a particular surveillance system, the system might be considered useful if it satisfactorily addresses at least one of the following questions. Does the system
- detect diseases, injuries, or adverse or protective exposures of public importance in a timely way to permit accurate diagnosis or identification, prevention or treatment, and handling of contacts when appropriate?
- provide estimates of the magnitude of morbidity and mortality related to the health-related event under surveillance, including the identification of factors associated with the event?
- detect trends that signal changes in the occurrence of disease, injury, or adverse or protective exposure, including detection of epidemics (or outbreaks)?
- permit assessment of the effect of prevention and control programs?
- lead to improved clinical, behavioral, social, policy, or environmental practices? or
- stimulate research intended to lead to prevention or control?
A survey of persons who use data from the system might be helpful in gathering evidence regarding the usefulness of the system. The survey could be done either formally with standard methodology or informally.
Discussion. Usefulness might be affected by all the attributes of a public health surveillance system (see Task D.2, Describe Each System Attribute). For example, increased sensitivity might afford a greater opportunity for identifying outbreaks and understanding the natural course of an adverse health-related event in the population under surveillance. Improved timeliness allows control and prevention activities to be initiated earlier. Increased predictive value positive enables public health officials to more accurately focus resources for control and prevention measures. A representative surveillance system will better characterize the epidemiologic characteristics of a health-related event in a defined population. Public health surveillance systems that are simple, flexible, acceptable, and stable will likely be more complete and useful for public health action.
# D.2. Describe Each System Attribute
# D.2.a. Simplicity
Definition. The simplicity of a public health surveillance system refers to both its structure and ease of operation. Surveillance systems should be as simple as possible while still meeting their objectives.
Methods. A chart describing the flow of data and the lines of response in a surveillance system can help assess the simplicity or complexity of a surveillance system. A simplified flow chart for a generic surveillance system is included in this report (Figure 1).
The following measures (see Task B.2) might be considered in evaluating the simplicity of a system:
- amount and type of data necessary to establish that the health-related event has occurred (i.e., the case definition has been met);
- amount and type of other data on cases (e.g., demographic, behavioral, and exposure information for the health-related event);
- number of organizations involved in receiving case reports;
- level of integration with other systems;
- method of collecting the data, including number and types of reporting sources, and time spent on collecting data;
- amount of follow-up that is necessary to update data on the case;
- method of managing the data, including time spent on transferring, entering, editing, storing, and backing up data;
- methods for analyzing and disseminating the data, including time spent on preparing the data for dissemination;
- staff training requirements; and
- time spent on maintaining the system.
Discussion. Thinking of the simplicity of a public health surveillance system from the design perspective might be useful. An example of a system that is simple in design is one with a case definition that is easy to apply (i.e., the case is easily ascertained) and in which the person identifying the case will also be the one analyzing and using the information. A more complex system might involve some of the following:
- special or follow-up laboratory tests to confirm the case;
- investigation of the case, including telephone contact or a home visit by public health personnel to collect detailed information;
- multiple levels of reporting (e.g., with the National Notifiable Diseases Surveillance System, case reports might start with the health-care provider who makes the diagnosis and pass through county and state health departments before going to CDC); and
- integration of related systems whereby special training is required to collect and/or interpret data.
Simplicity is closely related to acceptability and timeliness. Simplicity also affects the amount of resources required to operate the system.
# D.2.b. Flexibility
Definition. A flexible public health surveillance system can adapt to changing information needs or operating conditions with little additional time, personnel, or allocated funds. Flexible systems can accommodate, for example, new health-related events, changes in case definitions or technology, and variations in funding or reporting sources. In addition, systems that use standard data formats (e.g., in electronic data interchange) can be easily integrated with other systems and thus might be considered flexible.
Methods. Flexibility is probably best evaluated retrospectively by observing how a system has responded to a new demand. An important characteristic of CDC's Behavioral Risk Factor Surveillance System (BRFSS) is its flexibility (39 ). Conducted in collaboration with state health departments, BRFSS is an ongoing sample survey that gathers and reports state-level prevalence data on health behaviors related to the leading preventable causes of death as well as data on preventive health practices. The system permits states to add questions of their own design to the BRFSS questionnaire but is uniform enough to allow state-to-state comparisons for certain questions. These state-specific questions can address emergent and locally important health concerns. In addition, states can stratify their BRFSS samples to estimate prevalence data for regions or counties within their respective states.
Discussion. Unless efforts have been made to adapt the public health surveillance system to another disease (or other health-related event), a revised case definition, additional data sources, new information technology, or changes in funding, assessing the flexibility of that system might be difficult. In the absence of practical experience, the design and workings of a system can be examined. Simpler systems might be more flexible (i.e., fewer components will need to be modified when adapting the system for a change in information needs or operating conditions).
# D.2.c. Data Quality
Definition. Data quality reflects the completeness and validity of the data recorded in the public health surveillance system.
Methods. Examining the percentage of "unknown" or "blank" responses to items on surveillance forms is a straightforward and easy measure of data quality. Data of high quality will have low percentages of such responses. However, a full assessment of the completeness and validity of the system's data might require a special study. Data values recorded in the surveillance system can be compared to "true" values through, for example, a review of sampled data (40 ), a special record linkage (41 ), or patient interview (42 ). In addition, the calculation of sensitivity (Task D.2.e) and predictive value positive (Task D.2.f) for the system's data fields might be useful in assessing data quality.
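For the straightforward "unknown or blank" measure, and for the sensitivity and predictive value positive of individual data fields, the logic is mechanical enough to sketch in code. The following Python fragment is a minimal illustration; the field names, the set of values treated as missing, and the record format are assumptions, not conventions of any particular surveillance system.

```python
# Values treated as "unknown" or "blank" responses -- an assumed convention.
MISSING = {None, "", "unknown"}

def percent_missing(records: list[dict], field: str) -> float:
    """Percentage of unknown/blank responses for one surveillance form item."""
    n_missing = sum(1 for r in records if r.get(field) in MISSING)
    return 100.0 * n_missing / len(records)

def sensitivity_and_ppv(reported: set, true_cases: set) -> tuple[float, float]:
    """Compare recorded cases against 'true' values from a validation study
    (e.g., a record linkage or patient interview)."""
    true_positives = len(reported & true_cases)
    return true_positives / len(true_cases), true_positives / len(reported)

records = [{"race": "unknown", "county": "X"}, {"race": "B", "county": ""}]
print(percent_missing(records, "race"))                     # 50.0
sens, ppv = sensitivity_and_ppv({"a", "b", "c"}, {"a", "b", "d"})
print(round(sens, 2), round(ppv, 2))                        # 0.67 0.67
```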
Quality of data is influenced by the performance of the screening and diagnostic tests (i.e., the case definition) for the health-related event, the clarity of hardcopy or electronic surveillance forms, the quality of training and supervision of persons who complete these surveillance forms, and the care exercised in data management. A review of these facets of a public health surveillance system provides an indirect measure of data quality.
Discussion. Most surveillance systems rely on more than simple case counts. Data commonly collected include the demographic characteristics of affected persons, details about the health-related event, and the presence or absence of potential risk factors. The quality of these data depends on their completeness and validity.
The acceptability (see Task D.2.d) and representativeness (Task D.2.g) of a public health surveillance system are related to data quality. With data of high quality, the system can be accepted by those who participate in it. In addition, the system can accurately represent the health-related event under surveillance.
# D.2.d. Acceptability
Definition. Acceptability reflects the willingness of persons and organizations to participate in the surveillance system.
Methods. Acceptability refers to the willingness of persons in the sponsoring agency that operates the system and persons outside the sponsoring agency (e.g., persons who are asked to report data) to use the system. To assess acceptability, the points of interaction between the system and its participants must be considered (Figure 1), including persons with the health-related event and those reporting cases.
Quantitative measures of acceptability can include
- subject or agency participation rate (if it is high, how quickly it was achieved);
- interview completion rates and question refusal rates (if the system involves interviews);
- completeness of report forms;
- physician, laboratory, or hospital/facility reporting rate; and
- timeliness of data reporting.
Some of these measures might be obtained from a review of surveillance report forms, whereas others would require special studies or surveys.
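Several of these quantitative measures reduce to simple proportions and time intervals. The Python sketch below shows one plausible way to compute three of them; the function names, field names, and data shapes are illustrative assumptions rather than prescribed methods.

```python
from datetime import date

def participation_rate(n_participating: int, n_eligible: int) -> float:
    """Subject or agency participation rate, as a percentage."""
    return 100.0 * n_participating / n_eligible

def form_completeness(form: dict, required_items: list[str]) -> float:
    """Share of required items actually completed on a report form."""
    filled = sum(1 for item in required_items if form.get(item) not in (None, ""))
    return 100.0 * filled / len(required_items)

def reporting_delay_days(event_date: date, received_date: date) -> int:
    """Timeliness of data reporting: days from event to receipt of the report."""
    return (received_date - event_date).days

print(participation_rate(42, 50))                                          # 84.0
print(form_completeness({"name": "x", "county": ""}, ["name", "county"]))  # 50.0
print(reporting_delay_days(date(2001, 7, 2), date(2001, 7, 9)))            # 7
```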
Discussion. Acceptability is a largely subjective attribute that encompasses the willingness of persons on whom the public health surveillance system depends to provide accurate, consistent, complete, and timely data. Some factors influencing the acceptability of a particular system are
- the public health importance of the health-related event;
"id": "65da92cfb1b9470b7f437e4bc27e4c88ee43c2ac",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Recommended Shipbuilding Construction Guidelines for Cruise Vessels Destined To Call on U.S. Ports
# BACKGROUND AND PURPOSE
The Centers for Disease Control and Prevention (CDC) established the Vessel Sanitation Program (VSP) in 1975 as a cooperative endeavor with the cruise vessel industry. VSP's goal is to assist the industry in fulfilling its responsibility for developing and implementing comprehensive sanitation programs to protect the health of passengers and crew members aboard cruise vessels.
Every cruise vessel that has a foreign itinerary, carries 13 or more passengers, and calls on a United States port is subject to biannual operational inspections and when necessary, to reinspection by the VSP. The vessel owner pays a fee, based on gross registered tonnage (GRT) of the vessel, for all operational inspections. The VSP Operations Manual, which is available on the VSP web site at www.cdc.gov/nceh/vsp, covers details of these inspections.
Additionally, on a voluntary basis, cruise vessel owners or shipyards that build or renovate cruise vessels may request plan reviews, on-site shipyard construction inspections and/or final construction inspections of new or remodeled vessels before their first or next operational inspection, as the case may be. The vessel owner or shipyard pays a fee, based on GRT of the vessel, for on-site and final construction inspections. VSP does not charge a fee for plan reviews or consultations. Section 3.0, Procedures for Making Requests for Plan Reviews and Construction-Related Inspections covers details pertaining to plan reviews, consultations, or construction inspections.
The primary purpose of Recommended Shipbuilding Construction Guidelines for Cruise Vessels Destined To Call on U.S. Ports (hereinafter referred to as the guidelines) is to provide a framework of consistency for the sanitary design and construction of cruise vessels in order to protect the health of passengers and crew aboard ship. CDC is committed to promoting high construction standards to protect the public's health and believes compliance with these recommended construction guidelines will help ensure a healthful environment on cruise vessels.
For general guidance in developing this document, CDC reviewed many references from a variety of sources. These references are in Section 28.2, Standards, Codes and Other References Reviewed for Guidance.
CDC provides construction guidelines for various components of the vessel's facilities related to public health, such as food storage, preparation, and service; water bunkering, storage, disinfection, and distribution. CDC's position is that vessel owners and operators may select the equipment that best meets their needs. However, the equipment selected must be maintained over time to meet VSP's routine operational inspection requirements.
CDC's intention or purpose is not to limit the introduction of new designs, materials or technology for shipbuilding. A shipbuilder, owner, manufacturer, or other interested party may request that VSP periodically review or revise these construction guidelines on the basis of new information or technology. VSP reviews such requests in accordance with the criteria described in Section 2.0, Revisions and Recommended Changes.
New cruise vessels will comply with all international code requirements (e.g., International Maritime Organization (IMO) Conventions, including the Safety of Life at Sea Convention (SOLAS), the International Convention for the Prevention of Pollution from Ships (MARPOL), the Tonnage and Load Line Convention, International Electric Code (IEC), International Plumbing Code (IPC), and International Standards Organization (ISO) standards). This document does not cross-reference related, and sometimes overlapping, standards that new cruise vessels must meet.
These guidelines will apply to all newbuildings for which the keel is laid after August 1, 2001. The guidelines will apply also to major renovations performed after August 1, 2001. A major renovation is any change in the structural elements of the vessel covered by these guidelines. The guidelines do not apply to minor renovations. Minor renovations are small changes, such as the installation or removal of single pieces of equipment (e.g., refrigerator units, bain marie units) or single pipe runs. These guidelines will apply to all areas of the vessel affected by a renovation. VSP will inspect the entire vessel in accordance with the Vessel Sanitation Program Operations Manual during routine vessel sanitation inspections and reinspections.
# 2.0 REVISIONS AND RECOMMENDED CHANGES
In cooperation with the industry, VSP will periodically review and revise the guidelines. VSP will give special consideration to shipyards and owners for ships that have had plan reviews conducted prior to the effective date of a revision of these guidelines. This will ensure that an unfair burden is not placed on the shipyards and owners to make excessive changes to newbuildings previously agreed upon.
A shipbuilder, owner, manufacturer, or other interested party may ask VSP to review a construction guideline based on new technologies, concepts, or methods.
Recommendations for changes or additions to these guidelines must be submitted in writing to the Chief, VSP. Section 29.2.1 includes the address. The recommendation should identify the section; describe the proposed change or addition and the reason for recommending it; and include research or test results and any other pertinent information that supports the change or addition. The VSP will coordinate a professional evaluation and consult with industry to determine whether to include the recommendation in the next revision.
VSP recognizes that the shipbuilding and cruise industries are constantly evolving and that these guidelines may require periodic revision. VSP will ask industry representatives and other knowledgeable parties to meet with VSP representatives periodically to review the guidelines and determine whether changes are necessary to keep up with the innovations in the industry.
# 3.0 PROCEDURES FOR MAKING REQUESTS FOR PLAN REVIEWS, CONSULTATIONS, AND CONSTRUCTION-RELATED INSPECTIONS
In order to coordinate or schedule a plan review or construction-related inspection, the shipyard, vessel owner, or other vessel representative may contact VSP and submit an official, written request as early as possible in the planning, construction, or renovation process. All official, written letters of request for plan reviews, consultations, and construction-related inspections shall be directed to the Chief, VSP. The availability of VSP staff determines VSP's ability to schedule and honor these requests. Section 29.2, VSP Contact Numbers, contains a complete listing of contact addresses and telephone numbers.
After the initial contact, VSP will assign primary and secondary officers to coordinate with the vessel owner and shipyard. These officers will be the points of contact for the vessel from the time the plan review and subsequent consultations take place through the final construction inspection. Vessel representatives will provide points of contact to represent the owners, the shipyard, and key subcontractors. All parties will utilize these points of contact during consultations between any of the parties and VSP to ensure awareness of all consultative activities after conducting the plan review.
# Plan Reviews and Consultations
VSP normally conducts plan reviews for newbuildings a minimum of 18-24 months before the vessel is scheduled for delivery. Because of the variable time lines associated with major renovations and to allow time for any necessary changes, VSP coordinates the plan reviews for such projects well before the work begins. Normally, VSP assigns two officers to conduct the review. Most plan reviews will take 2 working days and will be conducted in Atlanta or Fort Lauderdale. Representatives from the shipyard, the vessel owner, and the subcontractor(s) who will be doing most of the work will attend the review. These representatives shall bring all pertinent plans or drawings and equipment specifications for the areas covered in these guidelines, including but not limited to general arrangement plans; all food-related storage, preparation, and service area plans; potable and nonpotable water system plans with details on water inlets (i.e., sea chests), outlets, and backflow protection devices; ventilation system plans; and, if applicable, swimming pool and whirlpool spa plans.
VSP will prepare a Plan Review Report summarizing the recommendations made during the plan review and will submit the report to the shipyard and owner representatives.
Following the plan review, the shipyard will provide 1) a complete set of plans or drawings and specifications for the vessel, 2) any redrawn plans, and 3) a statement of corrective action outlining how each of the items identified in the Plan Review Report will be corrected. Additionally, the shipyard will send VSP copies of any major change orders in the areas covered by these guidelines that are made after the plan review. While the vessel is being built, shipyard representatives, the owner, or other vessel representatives may direct questions or requests for consultative services to the VSP project officers.
Questions or requests will be directed in writing to the officer(s) assigned to the project. The VSP officer(s) will coordinate the request with the owner and shipyard points of contact designated during the plan review. The person sending the request shall include the fax numbers of the contact person or project manager for the vessel owner, shipyard or subcontractor so that they may receive a copy of the VSP's response. A sample request form is included in Section 29.1.
# On-Site Construction Inspections
VSP conducts most on-site or shipyard construction inspections in shipyards outside the United States. So that VSP can process the required foreign travel orders for VSP officers, the shipyard must submit a formal, written letter of request to the Chief, VSP, a minimum of 45 days before the inspection date. Section 29.1 includes a Sample Letter of Request. VSP encourages shipyards to contact the Chief, VSP, and coordinate on-site construction inspections well before the 45-day minimum to better plan the actual inspection dates. If a shipyard requests an on-site construction inspection, VSP will advise the vessel owner of the inspection dates so that the owner's representatives are present. An on-site construction inspection normally requires the expertise of one to three officers, depending on the size of the vessel and whether it is the first of a hull design class or a subsequent hull in a series of the same class of vessels. The inspection, including travel, generally takes 5 working days. The on-site inspection should be conducted approximately 4 to 5 weeks before delivery of the vessel, when 90% of the areas of the vessel to be inspected are completed.
After the inspection, and before the ship's arrival in the United States, the shipyard will submit to VSP a statement of corrective action outlining how it will address and correct each item identified in the inspection report.
# Final Construction Inspections
At the request of a vessel owner or shipyard, VSP may conduct a final construction inspection. The vessel owner or shipyard will submit a formal, written request to the Chief, VSP, as soon as possible after the vessel is completed, or a minimum of 10 days before its arrival in the United States. At the request of a vessel owner or shipyard, and provided the vessel is not entering the United States market immediately, VSP may conduct final construction inspections outside the United States. If a final construction inspection is not requested, VSP generally will conduct an unannounced operational inspection within 4 weeks following the vessel's arrival in the United States. VSP conducts operational inspections in accordance with the VSP Operations Manual.
As soon as possible after the final construction inspection, the vessel owner or shipyard will submit a statement of corrective action to VSP. The statement will outline how they will address each item in the inspection report, including the projected date of completion. VSP generally schedules vessels that undergo final construction inspection in the United States for an unannounced operational inspection within 6 weeks of the vessel's construction inspection. VSP conducts operational inspections in accordance with the VSP Operations Manual.
# EQUIPMENT STANDARDS, TESTING, AND CERTIFICATION
Although these guidelines establish certain standards for equipment and materials installed on cruise vessels, the VSP does not test, certify, or otherwise endorse equipment or materials used by the cruise industry. Instead, the VSP accepts certification from independent testing laboratories such as NSF International, Underwriters Laboratories (UL), the American National Standards Institute (ANSI), or other accredited institutions. In most cases, independent testing laboratories test equipment and materials to certain minimum standards, which generally, but not always, meet the recommended standards established by these guidelines.
In these instances, questionable equipment will be referred to a committee with participants from VSP, various members of the cruise ship industry, and an independent testing organization. The committee will be responsible for determining whether the equipment meets the recommended standards established in these guidelines.
Copies of test or certification standards are available from the previously mentioned independent testing laboratories. Equipment manufacturers and suppliers shall not refer to VSP approval of their products.
# GENERAL DEFINITIONS
Accessible -Capable of being exposed for cleaning and inspection with the use of simple tools such as a screwdriver, pliers, or an open-end wrench.
Air break -A piping arrangement in which a drain from a fixture, appliance, or device discharges indirectly into another fixture, receptacle, or interceptor at a point below the flood-level rim. (Figure 1)
Air gap -The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, plumbing fixture, or other device and the flood-level rim of the receptacle or receiving fixture. The air gap must be at least twice the diameter of the supply pipe or faucet or at least 25 mm (1 inch). (Figure 2)
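The sizing rule in this definition is easy to express as a function. The following Python sketch simply restates the "twice the diameter, never less than 25 mm" requirement; the function name is ours, not VSP's.

```python
def minimum_air_gap_mm(supply_diameter_mm: float) -> float:
    """Air gap: at least twice the supply pipe or faucet diameter,
    and never less than 25 mm (1 inch)."""
    return max(2.0 * supply_diameter_mm, 25.0)

print(minimum_air_gap_mm(20.0))  # 40.0 -- the twice-the-diameter rule governs
print(minimum_air_gap_mm(10.0))  # 25.0 -- the 25 mm floor governs
```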
Backflow -The flow of water or other liquids, mixtures, or substances into the distribution pipes of a potable supply of water from any source or sources other than the potable water supply. Back siphonage is one form of backflow.
Backflow, check, or non-return valve -A mechanical device installed in a waste line to prevent the reversal of flow under conditions of back pressure. In the check-valve type, the flap should swing into a recess when the line is flowing full in order to preclude obstructing the flow.
Backflow preventer -An approved backflow prevention plumbing device that must be used on potable water distribution lines where there is a direct connection or a potential connection between the potable water distribution system and other liquids, mixtures, or substances from any source other than the potable water supply. Some devices are designed for use under continuous water pressure, whereas others are non-pressure types. To ensure proper protection of the water supply, a thorough review of the water system shall be made to confirm that the appropriate device is selected for each specific application. The following are general types of backflow preventers and their uses:

Atmospheric vacuum breaker -An approved backflow prevention plumbing device utilized on potable water lines where shut-off valves do not exist downstream from the device. The device is not approved for use when it is installed in a manner that will cause it to be under continuous water pressure. An atmospheric vacuum breaker must be installed at least 152 mm (6 inches) above the flood level rim of the fixture or container to which it is supplying water.
Continuous pressure backflow preventer -An approved backflow prevention plumbing device with two check valves and an intermediate atmospheric vent that is designed and approved for use under continuous water pressure (e.g., when shut-off valves are located downstream from the device).
Hose bib connection vacuum breaker -An approved backflow prevention plumbing device that attaches directly to a hose bib by way of a threaded head. This device uses a single check valve and vacuum breaker vent. It is not approved for use under continuous pressure (e.g., when a shut-off valve is located downstream from the device).
Reduced Pressure Principle Backflow Prevention Assembly (RP Assembly) -An assembly containing two independently acting approved check valves together with a hydraulically operating, mechanically independent pressure differential relief valve located between the check valves and at the same time below the first check valve. The unit shall include properly located resilient seated test cocks and tightly closing resilient seated shutoff valves at each end of the assembly.
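Read together, these definitions amount to a small decision table: whether the line is under continuous pressure (i.e., shut-off valves exist downstream) largely determines which device families are candidates. The Python sketch below is a rough triage aid under that reading, not a substitute for the thorough, application-specific review the definition calls for; the function and its return strings are ours.

```python
def candidate_backflow_preventers(continuous_pressure: bool,
                                  hose_bib: bool = False) -> list[str]:
    """Narrow the device families defined above for one application.
    A thorough review of the specific installation is still required."""
    if continuous_pressure:
        # Atmospheric and hose bib vacuum breakers are not approved for
        # use under continuous water pressure.
        return ["continuous pressure backflow preventer",
                "reduced pressure principle backflow prevention assembly"]
    candidates = ["atmospheric vacuum breaker"]  # install >= 152 mm (6 in) above the flood level rim
    if hose_bib:
        candidates.append("hose bib connection vacuum breaker")
    return candidates

print(candidate_backflow_preventers(continuous_pressure=True))
```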
Back-siphonage -The backward flow of used, contaminated, or polluted water from a plumbing fixture or vessel or other source into a water-supply pipe as a result of negative pressure in the pipe.
Black Water -Waste from toilets, urinals, medical sinks, and other similar facilities.
Blast Chiller -A unit specifically designed for rapid intermediate chilling of food products to 21°C (70°F) within 2 hours and 5°C (41°F) within an additional 4 hours.
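Whether a chilling run meets this two-stage definition can be checked mechanically from a time/temperature log. The Python sketch below assumes a log of (hours elapsed, product temperature in °C) pairs; it is an illustration of the definition, not a VSP-specified procedure.

```python
def meets_blast_chill_definition(log: list[tuple[float, float]]) -> bool:
    """log: (hours elapsed, product temperature in C) readings.
    The definition requires 21 C within 2 hours and 5 C within an
    additional 4 hours (6 hours total)."""
    def reached(limit_c: float, by_hour: float) -> bool:
        return any(temp <= limit_c for hour, temp in log if hour <= by_hour)
    return reached(21.0, 2.0) and reached(5.0, 6.0)

print(meets_blast_chill_definition([(1.5, 20.5), (4.0, 9.0), (5.5, 4.5)]))  # True
print(meets_blast_chill_definition([(2.5, 20.5), (5.5, 4.5)]))              # False
```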
Child Activity Facility -Facility for child-related activities where children do not require assistance using toilet facilities and may be old enough to come and go on their own.

Child Care Facility -Facility for child-related activities where children are not yet out of diapers or require supervision using the toilet facilities, and are cared for by vessel staff.
Child Size Toilet -Toilet of appropriate height and having a seat size appropriate for the age and average size of the children that will use the toilet.
Corrosion-resistant -Capable of maintaining original surface characteristics under prolonged influence of the use environment, including the expected food contact and the normal use of cleaning compounds and sanitizing solutions.
Coved -A concave surface or molding that eliminates the usual angles of ninety degrees or less.
Cross-connection -Any unprotected actual or potential connection or structural arrangement between a public or a consumer's potable water system and any other source or system through which it is possible to introduce into any part of the potable system any used water, industrial fluid, gas, or substance other than the intended potable water with which the system is supplied. Bypass arrangements, jumper connections, removable sections, swivel or change-over devices, and other temporary or permanent devices through which, or because of which, backflow can occur are considered to be cross-connections.
Easily cleanable -Fabricated with a material, finish, and design that allows for cleaning by normal methods.
Food contact surfaces -Surfaces of equipment and utensils with which food normally comes in contact and surfaces from which food may drain, drip, or splash back onto surfaces normally in contact with food.
Food display areas -Any area where food is displayed for consumption by passengers and/or crew.
Food handling areas -Any area where food is stored, processed, prepared, or served.
Food preparation areas -Any area where food is processed, cooked, or prepared for service.
Food service areas -Any area where food is presented to passengers or crew members (excluding individual cabin service).
Food storage areas -Any area where food or food products are stored.
Food transport areas -Any area through which unprepared or prepared food is transported during food preparation, storage, and service operations (excluding individual cabin service).
Grey water -All water including drainage from galleys, dishwashers, showers, laundries, and bath and washbasin drains. It does not include black water or bilge water from the machinery spaces.
Keel Laying -The date at which construction identifiable with a specific ship begins and when assembly of that ship comprises at least 50 tons or one per cent of the estimated mass of all structural material, whichever is less.
Non-food contact surfaces -All exposed surfaces, other than food contact or splash contact surfaces, of equipment located in food storage, preparation and service areas.
Non-potable fresh water -Fresh water that may not be halogenated but is intended for use in technical and other areas where potable water is not required (e.g., laundries, engine room, toilets, and waste-treatment areas and for washing decks in areas other than the vessel's hospital, food service, preparation, or storage areas).
Potable water (PW) -Fresh water that is intended for drinking, washing, bathing, or showering; for use in fresh water swimming pools and whirlpool spas; for use in the vessel's hospital; for handling, preparing, or cooking food; and for cleaning food storage and preparation areas, utensils, and equipment.
Potable water tanks -All tanks in which potable water is stored from bunkering and production for distribution and use as potable water.
Portable -A description of equipment that is readily removable or mounted on casters, gliders, or rollers; provided with a mechanical means so that it can be tilted safely for cleaning; or readily movable by one person.
Readily accessible -Exposed or capable of being exposed for cleaning or inspection without the use of tools.
Readily removable -Capable of being detached from the main unit without the use of tools.
Removable -Capable of being detached from the main unit with the use of simple tools such as a screwdriver, pliers, or an open end wrench.
Safe material -An article manufactured from or composed of materials that may not reasonably be expected to result, directly or indirectly, in their becoming a component of any food or otherwise affecting the characteristics of any food; an additive that is used as specified in Section 409 or 706 of the Federal Food, Drug, and Cosmetic Act; or other materials that are not additives but are used in conformity with applicable regulations of the Food and Drug Administration (FDA).
Scupper -A conduit or collection basin that channels water runoff to a drain.
Sealant -Material used to fill seams to prevent the entry or leakage of liquid or moisture.
Sealed -Having no openings present that will permit the entry of soil or seepage of liquids.
Sealed seam -A seam that has no openings that would permit the entry of soil or liquid seepage.
Seam -An open juncture between two similar or dissimilar materials. Continuously welded junctures, ground and polished smooth, are not considered seams.
Sewage -Any liquid waste that contains animal or vegetable matter in suspension or solution, including liquids that contain chemicals in solution.
Smooth -means: a) A food contact surface that is free of pits and inclusions with a cleanability equal to or exceeding that of a No. 3 finish (100 grit) on stainless steel; b) A non-food contact surface of equipment that is equal to commercial grade hot-rolled steel and is free of visible scale; and c) A deck, bulkhead, or deckhead that has an even or level surface with no roughness or projections that render it difficult to clean.
Splash contact surfaces -Surfaces that are subject to routine splash, spillage or other soiling during normal use.
Direct splash surfaces -Areas adjacent to food contact surfaces that are subject to splash, drainage, or drippage onto food contact surfaces.
Indirect splash surfaces -Areas adjacent to food contact surfaces that are subject to splash, drainage, drippage, condensation, or spillage from food preparation and storage.
Technical Water -Fresh water NOT intended for 1) drinking, washing, bathing, or showering; 2) use in the vessel's hospital; 3) handling, preparing, or cooking food; and 4) cleaning food storage and preparation areas, utensils, and equipment.
Temperature Measuring Devices (TMDs) -Ambient air and water temperature measuring devices that are scaled in Celsius or dually scaled in Celsius and Fahrenheit shall be designed to be easily readable and accurate to ±1.5°C or ±3°F.
Utility Sink -Any sink located in a food service area not used for handwashing and/or warewashing.
# GENERAL FACILITIES REQUIREMENTS
# Size and Flow
The size of the vessel, number of passengers and crew, types of foods or menus, number of meals or mealtimes, service or presentation of meals, and itinerary, as well as the vessel owner's experience, are some, but not all, of the factors that determine and influence the size of rooms or areas and the flow of food through a vessel. In general, food storage, preparation, and service areas; warewashing areas; and waste management areas shall be of adequate size to accommodate the vessel's passengers and crew. Bulk food storage areas or provision rooms (frozen stores, refrigerated stores, and dry stores) shall be adequate for the vessel's itinerary. Adequate refrigeration and hot holding facilities, including temporary storage facilities, shall be available for all food preparation and service areas and for foods being transported to remote areas.
The flow of food through a vessel shall be arranged in a logical sequence that minimizes cross-traffic or backtracking and that allows for adequate separation of clean and soiled operations. An orderly, functional flow of food shall be provided from the purveyor at dockside through the storage, preparation, and finishing areas to the service areas and, finally, to the waste management area. The goal is to conduct production and service smoothly and rapidly in accordance with strict temperature-control requirements and to minimize time and handling.
VSP will evaluate the adequacy of the size of a particular room or area, and the flow of food through the vessel to those rooms or areas, primarily during the plan review process.
# Equipment Requirements
The following is a list of equipment required, depending on the level of service, in galleys and recommended for other areas:
Blast chillers incorporated into the design of passenger and crew galleys. More than one unit may be necessary depending on the size of the vessel, the unit's intended application, and the distances between the chillers and the storage and service areas.
Food preparation sinks in as many areas as necessary (i.e., in all meat, fish, and vegetable preparation rooms; cold pantries or garde mangers; and in any other areas where personnel wash or soak food). An automatic vegetable washing machine may be used in addition to food preparation sinks in vegetable preparation rooms.

6.4.1 Seal all seams between adjoining deckhead or bulkhead panels that are more than 0.8 mm (1/32 inch) but less than 3 mm (1/8 inch) with an approved sealant. Cover all seams greater than 3 mm (1/8 inch) with appropriate profile strips. Properly seal all bulkhead and deckhead penetrations through which pipes or other conduits pass. Use stainless steel collars where gaps are greater than 3 mm (1/8 inch).
6.4.2 Reinforce all bulkheads sufficiently to prevent panels from buckling or becoming detached under normal operating conditions.
6.4.3 Door penetrations shall be completely welded indentations with no open voids. Locking pins shall be inserted into inverted nipples. This also applies to the penetrations around fire doors, in the thresholds and in bulkhead openings.
6.4.4 Install durable coving of at least a 10 mm (3/8 inch) radius as an integral part of the deck and bulkhead interface and at the juncture between decks and equipment foundations. Stainless steel or other coving, if installed, shall be of sufficient thickness so as to be durable and securely installed.
6.4.5 Decks shall be hard, durable, easily cleanable, non-skid and nonabsorbent. Completely seal all deck penetrations through which pipes or other conduits pass.
6.5 Deck Drains and Scuppers

6.5.1 Construct deck drains, scuppers, and deck sinks from stainless steel with smooth finished surfaces that are accessible for cleaning, designed to drain completely, and large enough to prevent overflow to adjacent deck surfaces.
6.5.2 Construct scupper and deck sink cover grates from stainless steel or other material that 1) meets the requirements for a smooth, easily cleanable surface; 2) is strong enough to maintain its original shape; and 3) exhibits no sharp edges. Ensure that scupper and deck sink cover grates are tight-fitting, readily removable for cleaning, and uniform in length where practical (e.g., 1 meter or 3 feet), so that they are interchangeable.
6.5.3 Place deck drains, scuppers, and deck sinks in low-traffic spaces such as in front of soup kettles, boilers, tilting pans, or braising pans. Size the deck drains, scuppers, and sinks in order to eliminate spillage and overflow to adjacent deck surfaces.
6.5.4 Provide sufficient deck drainage in all food service areas to prevent liquids from pooling on the decks.

6.5.5 Design deck and scupper drain lines to be a minimum of 65 mm (2 ½ inches) in diameter and to drain completely. Provide cross-drain connections in order to prevent ponding and spillage from the scupper when the vessel is listing.
6.5.6 Ramps over thresholds shall be easily removable or sealed in place, sloped for easy roll-in and roll-out of trolleys, and be strong enough to maintain their shape. Ramps over scupper covers may be constructed as an integral part of the scupper system, provided that they are cleanable and durable.
6.5.7 Deck sinks may not be used as substitutes for deck drains. Independent deck drains are required.

# GENERAL HYGIENE FACILITIES REQUIREMENTS

8.1 When a piece of equipment is installed adjacent to another piece of equipment or a bulkhead, it should be located to permit cleaning under, between, and behind the equipment. The width of the space to be provided depends on the distance from either end to the farthest point requiring cleaning.
8.1.1 When the distance to be cleaned is less than 0.6 m (2 feet) long, provide at least 150 mm (6 inches) of clear, unobstructed space between adjacent equipment and between the equipment and bulkheads.
8.1.2 When the distance to be cleaned is greater than 0.6 m (2 feet) long but less than 1.2 m (4 feet) long, provide at least 200 mm (8 inches) of clear, unobstructed space between adjacent equipment and between the equipment and bulkheads.
8.1.3 When the distance to be cleaned is greater than 1.20 m (4 feet) long but less than 1.8 m (6 feet) long, provide at least 300 mm (12 inches) of clear, unobstructed space between adjacent equipment and between the equipment and bulkheads.
8.1.4 When the distance to be cleaned is greater than 1.8 m (6 feet) long, provide at least 460 mm (18 inches) of clear, unobstructed space between adjacent equipment and between the equipment and bulkheads.
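Sections 8.1.1-8.1.4 define a step function from the distance to be cleaned to the required clear space, which can be encoded directly. The Python sketch below is illustrative; the text does not specify how a distance that falls exactly on a band boundary (e.g., exactly 0.6 m) is classified, so this sketch assigns boundary values to the larger clearance as an assumption.

```python
def required_clearance_mm(distance_to_clean_m: float) -> int:
    """Minimum clear, unobstructed space between adjacent equipment and
    between equipment and bulkheads (Sections 8.1.1-8.1.4)."""
    if distance_to_clean_m < 0.6:   # under 2 feet
        return 150                  # 6 inches
    if distance_to_clean_m < 1.2:   # 2 to 4 feet
        return 200                  # 8 inches
    if distance_to_clean_m < 1.8:   # 4 to 6 feet
        return 300                  # 12 inches
    return 460                      # 18 inches

print(required_clearance_mm(1.0))   # 200
```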
Figure 8.5 -Foundation Detail

8.2 Continuous weld all equipment that is not classified as portable to stainless steel pads or plates on the deck. The stainless steel welding shall have smooth edges, rounded corners, and no gaps. Attach equipment as an integral part of the deck surface with glue, epoxy, or other durable adhesive product. Ensure that the arrangement is smooth and easily cleanable. Construct equipment that locks in place so that it is free of gaps and crevices and easily cleanable.
8.3 Deck-mounted equipment that is not easily movable shall be sealed to the deck or elevated on legs that provide at least a 150 mm (6 inch) clearance between the deck and the equipment. If no part of the deck under the deck-mounted equipment is more than 150 mm (6 inches) from the point of cleaning access, the clearance space may be only 100 mm (4 inches). Exceptions may also be granted if there are no barriers to cleanability (e.g., for equipment such as pulpers and warewashing machines with pipelines, motors, and cables below, where a 150 mm (6 inch) clearance from the deck may not be practical).
8.4 Provide at least 150 mm (6 inches) between equipment and the deckheads. If proper clearance cannot be achieved, extend the equipment through the deckhead panels and seal appropriately.
8.5 When mounting equipment on a foundation or coaming, ensure that the foundation or coaming is at least 100 mm (4 inches) above the finished deck. Use cement or a continuous weld to seal equipment to the foundation or coaming. Provide a sealed-type foundation or coaming for equipment not mounted on legs. Ensure that the overhang of the equipment from the foundation or coaming does not exceed 100 mm (4 inches). Completely seal any overhang of equipment along the bottom (Figure 8.5).
8.6 Ensure that table-mounted equipment, unless portable, is either sealed to the tabletop or mounted on legs.
The length of the legs is dependent upon the horizontal distance of the table top under the equipment from either end to the farthest point requiring cleaning.
8.6.1 If the horizontal distance of the table top under the equipment is 500 mm (20 inches) or greater from the point of access for cleaning, mount the equipment on legs at least 100 mm (4 inches) above the tabletop.

8.6.2 If the horizontal distance of the table top under the equipment is less than 500 mm (20 inches) but greater than 75 mm (3 inches) from the point of access for cleaning, mount the equipment on legs at least 75 mm (3 inches) above the tabletop.

8.6.3 If the horizontal distance of the table top under the equipment is less than 75 mm (3 inches) from the point of access for cleaning, mount the equipment on legs at least 50 mm (2 inches) above the tabletop.
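Sections 8.6.1-8.6.3 form the same kind of lookup, keyed on the horizontal distance of the table top under the equipment. The sketch below encodes it in Python under the same conventions as the clearance example above; exact-boundary handling (e.g., exactly 75 mm) again follows the more demanding band as an assumption.

```python
def tabletop_leg_height_mm(horizontal_distance_mm: float) -> int:
    """Minimum leg height for table-mounted equipment that is neither
    portable nor sealed to the tabletop (Sections 8.6.1-8.6.3)."""
    if horizontal_distance_mm >= 500:  # 20 inches or more
        return 100                     # 4 inch legs
    if horizontal_distance_mm >= 75:   # between 3 and 20 inches
        return 75                      # 3 inch legs
    return 50                          # 2 inch legs

print(tabletop_leg_height_mm(600))  # 100
print(tabletop_leg_height_mm(60))   # 50
```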
# FASTENERS AND REQUIREMENTS FOR SECURING AND SEALING EQUIPMENT
9.1 Food Contact Surfaces

9.1.1 Attach all food contact surfaces or connections from food contact surfaces to adjacent splash zones to ensure a seamless, coved corner. Reinforce all bulkheads, deckheads, or decks receiving such attachments.
# LATCHES, HINGES, AND HANDLES
10.1 Ensure that built-in equipment latches, hinges and handles are durable, noncorroding, and easily cleanable. Do not use piano hinges in food contact or splash zones.
# GASKETS
11.1 Ensure that equipment gaskets for reach-in refrigerators, steamers, ice bins, ice cream freezers, and similar equipment are constructed of smooth, nonabsorbent, non-porous materials.
11.2 Close and seal gaskets at their ends and corners and seal hollow sections.
11.3 Ensure that refrigerator gaskets are designed to be removable.
11.4 Ensure that fasteners used to install gaskets conform with the requirements specified for Section 9.0.
# EQUIPMENT DRAIN LINES
12.1 Connect drain lines from all fixtures, sinks, appliances, compartments, refrigeration units, or devices that are used, designed for, or intended to be used in the preparation, processing, storage, or handling of food, ice, or drinks to appropriate waste systems by means of an air gap or air break.
12.1.1 Use stainless steel or other easily cleanable rigid or flexible material in the construction of drain lines, and size drain lines appropriately. Provide a minimum interior diameter of 25 mm (1 inch) for custom-built equipment.

12.1.2 Slope drain lines from the evaporators, and extend them through the bulkheads or decks. Direct drain lines through an accessible air gap or air break to a deck scupper or drain below the deck level or to a scupper outside.
12.1.3 Install drain lines to minimize the horizontal distance from the source of the drainage to the discharge.
12.1.4 Install horizontal drain lines at least 100 mm (4 inches) above the deck and slope them to drain.
12.1.5 Ensure that drain lines drain through an air break or air gap to a drain or scupper.
12.2 All drain lines (except condensate drain lines) from hood washing systems, cold top tables, bains-marie, dipper wells, food preparation sinks and warewashing sinks or machines shall conform to the following requirements:
12.2.1 Shall be less than 1 m (3 feet) long and free of sharp angles or corners, if designed to be cleaned in place by a brush.
12.2.2 Shall be readily removable for cleaning, if greater than 1.0 m (3 feet).
12.2.3 Shall drain through an air break or air gap to a drain or scupper.
12.2.4 Handwashing sinks, mop sinks and drinking fountains are not required to drain through an air break or air gap.
12.3 When possible, all installed equipment drain lines shall extend in a vertical line to a deck scupper drain. When this is not possible, the horizontal distance of the line shall be kept to a minimum.
# ELECTRICAL CONNECTIONS, PIPELINES, AND OTHER ATTACHED EQUIPMENT
13.1 Encase electrical wiring from permanently installed equipment in durable and easily cleanable material. Do not use braided or woven stainless steel electrical conduit outside of technical spaces or where it is subject to splash or soiling unless encased in easily cleanable plastic or similar easily cleanable material. Adjust the length of electrical cords to equipment that is not permanently mounted or fasten them in a manner that prevents the cords from lying on countertops.
13.2 Ensure that other bulkhead- or deckhead-mounted equipment such as phones, speakers, electrical control panels, or outlet boxes are sealed tight with the bulkhead or deckhead panels. Do not place such equipment in areas exposed to food splash.
13.3 Tightly seal any areas where electrical lines, steam, or water pipelines penetrate the panels or tiles of the deck, bulkhead or deckhead. In addition, seal any openings or void spaces around the electrical lines or the steam or water pipelines and the surrounding conduit or pipelines.
13.4 Enclose steam and water pipelines to kettles and boilers in stainless steel cabinets or position the pipelines behind bulkhead panels. Minimize the number of exposed pipelines. Cover any exposed, insulated pipelines with stainless steel or other durable, easily cleanable material.
# HOOD SYSTEMS
14.1 Install hood systems or direct duct exhaust over warewashing equipment (except undercounter warewashing machines) and over three-compartment sinks in pot wash areas where hot water is used for sanitizing.
14.1.1 For warewashing machines with direct duct exhaust, such exhaust shall be directly connected to the hood exhaust trunk where hot water is used for sanitization.
14.1.2 Design all exhaust hoods over warewashing equipment or three-compartment sinks with a minimum 150 mm (6 inches) overhang from the edge of equipment so as to capture excess steam and heat.
14.1.3 Warewashing machines with direct duct exhaust to the ventilation system shall have a clean-out port in each duct that is located between the top of the warewashing machine and the hood system or deckhead.
14.1.4 The flat condensate drip pans located in the ducts from the warewashing machines shall be removable for cleaning.
14.2 Install hood systems above cooking equipment to ensure that they adequately remove excess steam and grease-laden vapors. Install hood systems or dedicated local ventilation to control excess heat and steam from bains-marie or steam tables.
14.3 Select proper sized exhaust and supply vents. Position and balance them appropriately for expected operating conditions to ensure proper air conditioning, and capture and exhaust of heat and steam.
14.4 Where filters are used, ensure that they are readily removable.
14.5 Ensure that vents and duct work are accessible for cleaning. (Hood washing systems are recommended for removal of grease generated from cooking equipment).
14.6 Use stainless steel with coved corners that provide at least a 10 mm (3/8 inch) radius to construct hood systems. Use continuous welds or profile strips on adjoining pieces of stainless steel. A drainage system is not required for normal grease condensate or cleaning solutions applied manually to hood assemblies. Drainage systems are required for hood assemblies using automatic clean-in-place systems.
14.7 Install ventilation systems in accordance with the manufacturer's recommendations. Test the system by utilizing a method that determines if the system is properly balanced for normal operating conditions.

# PROVISION ROOMS AND WALK-IN REFRIGERATORS AND FREEZERS

15.3.7 Encase thermometer probes in a stainless steel conduit and place them in the warmest part of the room where food is normally stored.
# GALLEYS, FOOD PREPARATION ROOMS, AND PANTRIES
# Bulkheads and Deckheads
16.1.1 Construct bulkheads and deckheads, including doors, door frames, and columns, with high-quality, corrosion-resistant stainless steel.
Ensure that the gauge is thick enough so that the panels do not warp, flex, or separate under normal conditions. For seams greater than 1 mm (1/32 inch) but less than 3 mm (1/8 inch), use an approved sealant. For bulkhead and deckhead seams greater than 3 mm (1/8 inch), use only stainless steel profile strips.
16.1.2 Ensure that all bulkheads to which equipment is attached are of sufficient thickness or reinforcement to allow for the reception of fasteners or welding without compromising the quality and construction of the panels.
16.1.3 Install utility line connections through a stainless steel or other easily cleanable, food service approved conduit that is mounted away from bulkheads for ease in cleaning.
16.1.4 Seal back splash attachments to the bulkhead with a continuous- or tack-weld and polish. Use an approved sealant to make the back splash attachment watertight.
# BUFFET LINES, WAITER STATIONS, BARS, BAR PANTRIES AND OTHER FOOD SERVICE AREAS
# Bulkheads and Deckheads
Bulkheads and deckheads may be constructed of decorative tiles; pressed metal panels; or other hard, durable, non-corroding materials. Stainless steel is not required in these areas. However, the materials used shall be easily cleanable.
Bar and bar pantry construction shall follow the same guidelines referenced in Sections 6.0-14.7 and 17.0-21.5.4.
Figure 17.3 - Sneeze Guard Detail
Sneeze guards shall be positioned in such a way that the sneeze guard panels intercept the line between the consumer's mouth and the displayed foods in accordance with NSF Standard 2. Factors such as the height of the food display counter, the presence or absence of a tray rail, and the distance between the edge of the display counter and the actual placement of the food shall be taken into account (Figure 17.3).
Tray rail surfaces shall be sealed and easily cleanable in accordance with guidelines for food splash zones.
# Beverage Delivery System
17.4.1 Install a stainless steel, vented, double-check valve backflow prevention device in all bars that have carbonation systems, e.g., multiflow beverage dispensing systems. Install the device before the carbonator and downstream from any copper or copper-alloy (e.g., brass) in the potable water-supply line.
17.4.2 Encase supply lines to the dispensing guns in a single tube. If the tube penetrates through any bulkhead or countertop, seal the penetration with a grommet.
17.4.3 Bulk dispensers of beverage delivery systems shall incorporate in their design a clean-in-place system that provides a means of flushing the entire interior of the dispensing lines in accordance with manufacturers' instructions.
# WAREWASHING
18.1 Provide rinse hoses for pre-washing. In all food preparation areas, provide adequate table space for waste barrels, garbage grinder, or pulper system. Grinders are optional in pantries and bars. If a sink is to be used for prerinsing, provide a removable strainer.
18.2 For soiled landing tables with pulper systems, ensure that the pulper trough extends the full length of the table and that the trough slopes toward the pulper.
18.3 Seal the back edge of the soiled landing table to the bulkhead or provide a minimum of 460 mm (18 inch) clearance between the table and the bulkhead.
18.4 Design soiled landing tables to drain waste liquids and to prevent contamination of adjacent clean surfaces.
18.5 To prevent water from pooling, equip clean landing tables with across-the-counter gutters with drains at the exit from the machine and sloped to the scupper. Install a second gutter and drain line if the length of the table is such that the first gutter at the exit from the machine does not effectively remove pooled water. Minimize the length of drain lines and, when possible, place them in straight vertical lines with no angles.
18.6 Provide sufficient space for cleaning around and behind equipment (e.g., pulpers and warewashing machines). Section 8.0 covers spacing requirements. For pieces of equipment greater than 1.8 m (6 feet), provide a minimum of 460 mm (18 inches) of clearance.
18.7 Encase pulper wiring in a durable and easy to clean stainless steel or nonmetallic watertight conduit and raise it at least 150 mm (6 inches) above the deck. Elevate all warewashing machine components at least 150 mm (6 inches) above the deck, except as noted in Section 8.3.
18.9 Construct removable splash panels from stainless steel to protect the pulper and technical areas.
18.10 Construct grinder cones, pulper tables, and dish-landing tables from stainless steel with continuous welding. Construct platforms for supporting warewashing equipment from stainless steel. Avoid the use of painted steel.
18.11 Ensure that warewashing machines are designed and sized for their intended use and that they are installed according to the manufacturer's recommendations.
18.12 Ensure that warewashing machines have an easily accessible and readable data plate. The plate, affixed to the machine by the manufacturer, includes the machine's design and operating specifications and the following: a) temperatures required for washing, rinsing, and sanitizing; b) pressure required for the fresh water sanitizing rinse unless the machine is designed to use only a pumped sanitizing rinse; c) conveyor speed for conveyor machines or cycle time for stationary rack machines; and d) chemical concentration (if chemical sanitizers are used).
18.13 Ensure that three-compartment warewashing sinks are sized correctly for their intended use. Ensure that the sinks are large enough to submerge the largest piece of equipment used in the area that is served. Ensure that the sinks have coved, continuously welded, internal corners that are integral to the interior surfaces.
18.14 Install one of the following arrangements to prevent excessive contamination of rinse water with wash water splash:
a) an across-the-counter gutter with a drain dividing the wash compartment from the rinse compartment;
b) a splash shield at least 100 mm (4 inches) above the flood level rim of the sink between the wash and rinse compartments; or
c) an overflow drain in the wash compartment 100 mm (4 inches) below the flood level.
18.15 Equip hot water sanitizing sinks with accessible and easily readable thermometers, a long-handled stainless steel wire basket, and a jacketed or coiled steam supply with a temperature control valve to control water temperature. Three-compartment sinks that utilize halogen for the sanitization step do not require the aforementioned items necessary for hot water sanitizing sinks.
18.15.1 Provide, at a minimum, three-compartment warewashing sinks with a separate pre-wash station for the main galley, crew galley, lido galley and other full-service galleys with pot-washing areas.
18.15.2 For meat, fish, and vegetable preparation areas, provide at least one three-compartment sink or an automatic warewashing machine with a pre-wash station.

# POTABLE WATER TANKS

21.6.1.1 Ensure that the tanks are independent of the shell of the vessel and do not share a common wall with tanks containing non-potable water or other liquids. Provide a 460 mm (18 inch) cofferdam above and between tanks that are not for storage of potable water and also between the tanks and the hull. Skin or double-bottom tanks are not allowed for potable water storage.
21.6.1.2 Ensure that 1) the coating of the tanks is approved for use in potable water tanks, 2) all manufacturer's recommendations for application and drying or curing are followed, and 3) written documentation is provided for 1) and 2).
21.6.1.3 Coat all items that penetrate the tank (e.g., bolts, pipes, pipe flanges) with the same product used for the tank's interior.
21.6.1.4 Ensure that the system is designed to be superchlorinated one tank at a time through the filling line.
21.6.1.5 Ensure that lines for non-potable liquids do not pass through potable water tanks. Minimize the use of non-potable lines above potable water tanks. Lines above tanks shall not have any mechanical couplings. If coaming is present along the edges of the tank, provide slots along the top of the tank to allow leaking liquid to run off and be detected.

# BACKFLOW PREVENTION

22.2 Ensure that air gaps, the most reliable method of backflow protection, are at least double the diameter of the supply pipe measured vertically above the overflow rim of the vessel. The air gap must not be less than 25 mm (1 inch).
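As a worked example of the sizing rule in 22.2 (the 20 mm supply-pipe diameter is a hypothetical figure, not from the guideline):

$$\text{air gap} \geq \max(2 \times d_{\text{supply}},\ 25\ \text{mm}) = \max(2 \times 20\ \text{mm},\ 25\ \text{mm}) = 40\ \text{mm}$$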
22.3 In high-hazard situations where air gaps are impractical or cannot be installed, use a reduced pressure principle backflow prevention assembly.
22.4 If reduced pressure principle backflow prevention assemblies (RPs) are used, provide a test kit for testing the devices annually. All RPs shall be tested after installation.
22.5 Use air gaps or mechanical backflow prevention devices when water must be supplied under pressure.
22.6 Install atmospheric vacuum breakers 150 mm (6 inches) above the fixture flood level rim with no valves downstream from the device.
22.7 Pressure-type backflow preventers (e.g., carbonator backflow preventer) or double-check valves with intermediate atmospheric vents prevent both backsiphonage and backflow caused by back pressure and shall be used in continuous pressure-type applications.
22.8 Where potable water is directed to a black water tank for rinse down or other such use, it shall be connected only through an air gap. Reduced pressure principle backflow prevention assemblies are inadequate in this high-hazard condition.
# WHIRLPOOL SPAS
24.1 Supply potable water to whirlpool systems through an air gap or an approved backflow preventer.
24.2 Provide water filtration equipment that ensures a turn-over rate of at least once every 30 minutes and halogenation equipment that is capable of maintaining the appropriate levels of free-halogen throughout the use period.
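The 30-minute turnover requirement in 24.2 fixes a minimum recirculation flow rate. For a hypothetical whirlpool holding 4 m³ (about 1,060 gallons) of water:

$$Q_{\min} = \frac{V}{t_{\text{turnover}}} = \frac{4\ \text{m}^3}{0.5\ \text{h}} = 8\ \text{m}^3/\text{h} \approx 35\ \text{gallons per minute}$$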
24.3 Provide a temperature control mechanism to prevent the temperature from exceeding 40 °C (104 °F).
24.4 Design the overflow system so that water level is maintained.
24.5 Provide one skimmer for every 14 m² (150 square feet) or fraction thereof of water surface area.
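Because 24.5 allots one skimmer per 14 m² "or fraction thereof," the count rounds up. For a hypothetical 20 m² water surface:

$$N_{\text{skimmers}} = \left\lceil \frac{20\ \text{m}^2}{14\ \text{m}^2} \right\rceil = 2$$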
24.6 Provide an independent whirlpool drainage system. If the whirlpool drainage system is connected to another drainage system, provide a double-check valve between the two.
24.7 Provide drains and ensure the bottom of the whirlpool slopes toward the drains to effect complete drainage.
24.8 Provide anti-vortex type drain covers that are constructed of durable, easily visible, easily cleanable material and that meet the ASME/ANSI A112.19.8M voluntary standard for suction fittings (Figures 3a-3c), or other drains that prevent entrapment hazards as specified in U.S. Consumer Product Safety Publication 363-009801 (Figures 4a-4b).
24.9 Design the system to permit daily shock treatment or superhalogenation in accordance with the VSP Operations Manual.
24.10 Install systems in a manner that permits routine visual inspection of the granular media filters in accordance with the VSP Operations Manual.
24.11 Ensure that the fill level of the whirlpool is at the skim gutter level.
24.12 Ensure that whirlpool overflows are either directed by gravity to the make-up tank for recirculation through the filter system or disposed of as waste.
24.13 Use self-priming, centrifugal pumps to recirculate whirlpool water.
24.14 Ensure that whirlpool equipment (e.g., pumps and filters) has the capacity to turn over the spa water every 30 minutes.
24.29 Ensure that the whirlpool mechanical room and recirculation system are designed for easy and safe storage of chemicals and refilling of chemical feed tanks.
24.30 Ensure that drains are installed in the whirlpool mechanical room so as to allow for rapid draining of the entire pump and filter system and that a minimum 80 mm (3 inch) drain is installed at the lowest point of the system.

For inspections occurring outside of the United States, we will reimburse the Vessel Sanitation Program for all expenses in connection with the on-site shipyard inspection and will make all necessary arrangements for lodging and transportation, which includes airfare and ground transportation in (CITY, STATE, COUNTRY). We will provide in-kind for lodging and transportation expenses. An invoice for all remaining expenses, such as en-route per diem and meals and miscellaneous expenses, including ground transportation to and from the airport nearest the representative's work site or residence, shall be sent to the following address:
# MISCELLANEOUS
# U.S. Public Health Service Task Force Recommendations for Use of Antiretroviral Drugs in Pregnant HIV-1-Infected Women for Maternal Health and Interventions To Reduce Perinatal HIV-1 Transmission in the United States*

These recommendations update the February 4, 2002, guidelines developed by the Public Health Service for the use of zidovudine (ZDV) to reduce the risk for perinatal human immunodeficiency virus type 1 (HIV-1) transmission. This report provides health-care providers with information for discussion with HIV-1-infected pregnant women to enable such women to make an informed decision regarding the use of antiretroviral drugs during pregnancy and use of elective cesarean delivery to reduce perinatal HIV-1 transmission. Various circumstances that commonly occur in clinical practice are presented, and the factors influencing treatment considerations are highlighted in this report. The Perinatal HIV Guidelines Working Group recognizes that strategies to prevent perinatal transmission and concepts related to management of HIV disease in pregnant women are rapidly evolving; the Working Group will continually review new data and provide regular updates to the guidelines. The most recent information is available from the HIV/AIDS Treatment Information Service (available at ). In February 1994, the results of Pediatric AIDS Clinical Trials Group (PACTG) Protocol 076 documented that ZDV chemoprophylaxis could reduce perinatal HIV-1 transmission by nearly 70%. Epidemiologic data have since confirmed the efficacy of ZDV for reduction of perinatal transmission and have extended this efficacy to children of women with advanced disease, low CD4+ T-lymphocyte counts, and prior ZDV therapy. Additionally, substantial advances have been made in the understanding of the pathogenesis of HIV-1 infection and in the treatment and monitoring of persons with HIV-1 disease. These advances have resulted in changes in standard antiretroviral therapy for HIV-1-infected adults. More aggressive combination drug regimens that maximally suppress viral replication are now recommended. Although considerations associated with pregnancy may affect decisions regarding timing and choice of therapy, pregnancy is not a reason to defer standard therapy. Use of antiretroviral drugs in pregnancy requires unique considerations, including the possible need to alter dosage as a result of physiologic changes associated with pregnancy, the potential for adverse short- or long-term effects on the fetus and newborn, and the effectiveness of the drugs in reducing the risk for perinatal transmission. Data to address many of these considerations are not yet available. Therefore, offering antiretroviral therapy to HIV-1-infected women during pregnancy, whether primarily for HIV-1 infection, for reduction of perinatal transmission, or for both purposes, should be accompanied by a discussion of the known and unknown short- and long-term benefits and risks of such therapy to infected women and their infants. Standard antiretroviral therapy should be discussed with and offered to HIV-1-infected pregnant women. Additionally, to prevent perinatal transmission, ZDV chemoprophylaxis should be incorporated into the antiretroviral regimen.

*Information included in these guidelines may not represent approval by the Food and Drug Administration (FDA) or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" may not be synonymous with the FDA-defined legal standards for product approval.

# Introduction

In February 1994, the Pediatric AIDS Clinical Trials Group (PACTG) Protocol 076 demonstrated that a three-part regimen of zidovudine (ZDV) could reduce the risk for mother-to-child human immunodeficiency virus type 1 (HIV-1) transmission by nearly 70% (1). The regimen includes oral ZDV initiated at 14-34 weeks' gestation and continued throughout pregnancy, followed by intravenous ZDV during labor and oral administration of ZDV to the infant for 6 weeks after delivery (Table 1). In August 1994, a U.S. Public Health Service (USPHS) task force issued recommendations for the use of ZDV for reduction of perinatal HIV-1 transmission (2), and in July 1995, USPHS issued recommendations for universal prenatal HIV-1 counseling and HIV-1 testing with consent for all pregnant women in the United States (3). Since the publication of the results of PACTG 076, epidemiologic studies in the United States and France have demonstrated dramatic decreases in perinatal transmission with incorporation of the PACTG 076 ZDV regimen into general clinical practice (4)(5)(6)(7)(8)(9).

Since 1994, advances have been made in the understanding of the pathogenesis of HIV-1 infection and in the treatment and monitoring of HIV-1 disease. The rapidity and magnitude of viral turnover during all stages of HIV-1 infection are greater than previously recognized; plasma virions are estimated to have a mean half-life of only 6 hours (10). Thus, current therapeutic interventions focus on early initiation of aggressive combination antiretroviral regimens to maximally suppress viral replication, preserve immune function, and reduce the development of resistance (11). New, potent antiretroviral drugs that inhibit the protease enzyme of HIV-1 are now available. When a protease inhibitor is used in combination with nucleoside analog reverse transcriptase inhibitors, plasma HIV-1 RNA levels can be reduced for prolonged periods to levels that are undetectable by current assays. Improved clinical outcome and survival have been observed among adults receiving such regimens (12,13). Additionally, viral load can now be more directly quantified through assays that measure HIV-1 RNA copy number; these assays have provided powerful new tools to assess disease stage, risk for progression, and the effects of therapy. These advances have led to substantial changes in the standard of treatment and monitoring for HIV-1-infected adults in the United States (14).

# TABLE 1. Pediatric AIDS Clinical Trials Group (PACTG) 076 zidovudine (ZDV) regimen

| Time of ZDV administration | Regimen |
| --- | --- |
| Antepartum | Oral administration of 100 mg ZDV five times daily,* initiated at 14-34 weeks' gestation and continued throughout the pregnancy. |
| Intrapartum | During labor, intravenous administration of ZDV in a 1-hour initial dose of 2 mg/kg body weight, followed by a continuous infusion of 1 mg/kg body weight per hour until delivery. |
| Postpartum | Oral administration of ZDV to the newborn (ZDV syrup at 2 mg/kg body weight/dose every 6 hours) for the first 6 weeks of life, beginning at 8-12 hours after birth.† |

* Oral ZDV, administered as 200 mg three times daily or 300 mg twice daily, is used in general clinical practice and is an acceptable alternative regimen to 100 mg orally five times daily.
† Intravenous dosage for infants who cannot tolerate oral intake is 1.5 mg/kg body weight intravenously every 6 hours.
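As a worked example of the Table 1 arithmetic (the 70-kg maternal weight and 3-kg newborn weight are hypothetical, chosen only for illustration):

$$2\ \text{mg/kg} \times 70\ \text{kg} = 140\ \text{mg over 1 hour};\qquad 1\ \text{mg/kg/hour} \times 70\ \text{kg} = 70\ \text{mg/hour until delivery}$$

For the newborn, each oral dose is $2\ \text{mg/kg} \times 3\ \text{kg} = 6\ \text{mg}$ of ZDV syrup every 6 hours.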
Advances also have been made in the understanding of the pathogenesis of perinatal HIV-1 transmission. Most perinatal transmission likely occurs close to the time of or during childbirth (15). Additional data that demonstrate the short-term safety of the ZDV regimen are now available as a result of follow-up of infants and women enrolled in PACTG 076; however, data from studies of animals concerning the potential for transplacental carcinogenicity of ZDV affirm the need for long-term follow-up of children with antiretroviral exposure in utero (16).
These advances have implications for maternal and fetal health. Health-care providers considering the use of antiretroviral agents for HIV-1-infected women during pregnancy must take into account two separate but related issues: 1) antiretroviral treatment of maternal HIV-1 infection, and 2) antiretroviral chemoprophylaxis to reduce the risk for perinatal HIV-1 transmission. The benefits of antiretroviral therapy for a pregnant woman must be weighed against the risk of adverse events to the woman, fetus, and newborn. Although ZDV chemoprophylaxis alone has substantially reduced the risk for perinatal transmission, antiretroviral monotherapy is now considered suboptimal for treatment of HIV-1 infection, and combination drug regimens are considered the standard of care for therapy (14).
This report reviews the special considerations regarding use of antiretroviral drugs for pregnant women, updates the results of PACTG 076 and related clinical trials and epidemiologic studies, discusses use of HIV-1 RNA and antiretroviral drug resistance assays during pregnancy, provides updated recommendations on antiretroviral chemoprophylaxis for reducing perinatal transmission, and provides recommendations related to use of elective cesarean delivery as an intervention to reduce perinatal transmission.
These recommendations have been developed for use in the United States. Although perinatal HIV-1 transmission occurs worldwide, alternative strategies may be appropriate in other countries. Policies and practices in other countries regarding the use of antiretroviral drugs for reduction of perinatal HIV-1 transmission may differ from the recommendations in this report and will depend on local considerations, including availability and cost of ZDV, access by pregnant women to facilities for safe intravenous infusions during labor, and alternative interventions being evaluated in that area.
# Background Considerations Regarding Use of Antiretroviral Drugs by HIV-1-infected Pregnant Women and Their Infants
Treatment recommendations for pregnant women infected with HIV-1 have been based on the belief that therapies of known benefit to women should not be withheld during pregnancy unless there are known adverse effects on the mother, fetus, or infant and unless these adverse effects outweigh the benefit to the woman (17). Combination antiretroviral therapy, usually consisting of two nucleoside analog reverse transcriptase inhibitors and a protease inhibitor, is the recommended standard treatment for HIV-1-infected adults who are not pregnant (14). Pregnancy should not preclude the use of optimal therapeutic regimens. However, recommendations regarding the choice of antiretroviral drugs for treatment of infected pregnant women are subject to unique considerations. These include possible changes in dosing requirements resulting from physiologic changes associated with pregnancy, potential effects of antiretroviral drugs on the pregnant woman, and the potential short-and long-term effects of the antiretroviral drug on the fetus and newborn, which may not be known for certain antiretroviral drugs.
The decision to use any antiretroviral drug during pregnancy should be made by the woman after discussing with her healthcare provider the known and unknown benefits and risks to her and her fetus.
Physiologic changes that occur during pregnancy may affect the kinetics of drug absorption, distribution, biotransformation, and elimination, thereby also affecting requirements for drug dosing and potentially altering the susceptibility of the pregnant woman to drug toxicity. During pregnancy, gastrointestinal transit time becomes prolonged; body water and fat increase throughout gestation and are accompanied by increases in cardiac output, ventilation, and liver and renal blood flow; plasma protein concentrations decrease; renal sodium reabsorption increases; and changes occur in metabolic enzyme pathways in the liver. Placental transport of drugs, compartmentalization of drugs in the embryo/fetus and placenta, biotransformation of drugs by the fetus and placenta, and elimination of drugs by the fetus also can affect drug pharmacokinetics in the pregnant woman. Additional considerations regarding drug use in pregnancy are the effects of the drug on the fetus and newborn, including the potential for teratogenicity, mutagenicity, or carcinogenicity, and the pharmacokinetics and toxicity of transplacentally transferred drugs.
The potential harm to the fetus from maternal ingestion of a specific drug depends not only on the drug itself, but on the dose ingested, the gestational age of the fetus at exposure, the duration of exposure, the interaction with other agents to which the fetus is exposed, and, to an unknown extent, the genetic makeup of the mother and fetus.
Information regarding the safety of drugs in pregnancy is derived from animal toxicity data, anecdotal experience, registry data, and clinical trials. Data are limited for antiretroviral drugs, particularly when used in combination therapy. Drug choice should be individualized and must be based on discussion with the woman and available data from preclinical and clinical testing of the individual drugs.
Preclinical data include results of in vitro and animal in vivo screening tests for carcinogenicity, clastogenicity/ mutagenicity, and reproductive and teratogenic effects. However, the predictive value of such tests for adverse effects in humans is unknown. For example, of approximately 1,200 known animal teratogens, only about 30 are known to be teratogenic in humans (18). In addition to antiretroviral agents, certain drugs commonly used to treat HIV-1-related illnesses demonstrate positive findings on one or more of these screening tests. For example, acyclovir is positive in some in vitro carcinogenicity and clastogenicity assays and is associated with fetal abnormalities in rats; however, data collected on the basis of human experience from the Acyclovir in Pregnancy Registry have indicated no increased risk for birth defects in infants with in utero exposure to acyclovir (19). Limited data exist regarding placental passage and long-term animal carcinogenicity for the FDA-approved antiretroviral drugs (Table 2) (20).
# TABLE 2. FDA pregnancy categories and animal teratogenicity screening results for FDA-approved antiretroviral drugs (nucleoside analog reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors, and protease inhibitors)

[The drug-by-drug rows of this table are not recoverable here; the FDA pregnancy category definitions from the table footnote follow.]

FDA pregnancy categories:
A: Adequate and well-controlled studies of pregnant women fail to demonstrate a risk to the fetus during the first trimester of pregnancy (and there is no evidence of risk during later trimesters).
B: Animal reproduction studies fail to demonstrate a risk to the fetus, and adequate and well-controlled studies of pregnant women have not been conducted.
C: Safety in human pregnancy has not been determined, animal studies are either positive for fetal risk or have not been conducted, and the drug should not be used unless the potential benefit outweighs the potential risk to the fetus.
D: Positive evidence of human fetal risk based on adverse reaction data from investigational or marketing experiences, but the potential benefits from the use of the drug in pregnant women may be acceptable despite its potential risks.
X: Studies with animals or reports of adverse reactions have indicated that the risk associated with the use of the drug for pregnant women clearly outweighs any possible benefit.

# Combination Antiretroviral Therapy and Pregnancy Outcome

Data are conflicting as to whether receipt of combination antiretroviral therapy during pregnancy is associated with adverse pregnancy outcomes such as preterm delivery. A retrospective Swiss report evaluated the pregnancy outcome of 37 HIV-1-infected pregnant women treated with combination therapy; all received two reverse transcriptase inhibitors and 16 received one or two protease inhibitors (21). Almost 80% of women experienced one or more typical adverse effects of the drugs, such as anemia, nausea/vomiting, aminotransferase elevation, or hyperglycemia. A possible association of combination antiretroviral therapy with preterm births was noted; 10 of 30 babies were born prematurely. The preterm birth rate did not differ between women receiving combination therapy with or without protease inhibitors. The contribution of maternal HIV-1 disease stage and other covariates that might be associated with a risk for prematurity was not assessed. The European Collaborative Study and the Swiss Mother + Child HIV-1 Cohort Study investigated the effects of combination antiretroviral therapy in a population of 3,920 mother-child pairs. Adjusting for CD4+ T-lymphocyte count (CD4+ count) and intravenous drug use, they found a 2.6-fold (95% confidence interval = 1.4-4.8) increased odds of preterm delivery for infants exposed to combination therapy with or without protease inhibitors compared with no treatment; women receiving combination therapy that had been initiated before their pregnancy were twice as likely to deliver prematurely as those starting therapy during the third trimester (22). However, combination therapy was received by only 323 (8%) of the women studied. Exposure to monotherapy was not associated with prematurity.
In contrast, in an observational study of pregnant women with HIV-1 infection in the United States (PACTG 367) in which 1,150 (78%) of 1,472 women received combination therapy, no association was found between receipt of combination therapy and preterm birth (23). The highest rate of preterm delivery was among women who had not received any antiretroviral therapy, which is consistent with several other reports demonstrating elevated preterm birth rates among untreated women with HIV-1 infection (24)(25)(26). In a French open-label study of 445 HIV-1-infected women receiving ZDV who had lamivudine (3TC) added to their therapy at 32 weeks' gestation, the rate of preterm delivery was 6%, similar to the 9% rate in a historical control group of women receiving only ZDV (27). Additionally, in a large meta-analysis of seven clinical studies that included 2,123 HIV-infected pregnant women who delivered infants during 1990-1998 and had received antenatal antiretroviral therapy and 1,143 women who did not receive antenatal antiretroviral therapy, use of multiple antiretroviral drugs as compared with no treatment or treatment with one drug was not associated with increased rates of preterm labor, low birth weight, low Apgar scores, or stillbirth (28).
Until more information is known, HIV-1-infected pregnant women who are receiving combination therapy for their HIV-1 infection should continue their provider-recommended regimen. They should receive careful, regular monitoring for pregnancy complications and for potential toxicities.
# Protease Inhibitor Therapy and Hyperglycemia
Hyperglycemia, new-onset diabetes mellitus, exacerbation of existing diabetes mellitus, and diabetic ketoacidosis have been reported with receipt of protease inhibitor antiretroviral drugs by HIV-1-infected patients (29)(30)(31)(32). In addition, pregnancy is itself a risk factor for hyperglycemia; it is unknown if the use of protease inhibitors will increase the risk for pregnancy-associated hyperglycemia. Clinicians caring for HIV-1-infected pregnant women who are receiving protease inhibitor therapy should be aware of the risk of this complication and closely monitor glucose levels. Symptoms of hyperglycemia should be discussed with pregnant women who are receiving protease inhibitors.
# Mitochondrial Toxicity and Nucleoside Analog Drugs
Nucleoside analog drugs are known to induce mitochondrial dysfunction because the drugs have varying affinity for mitochondrial gamma DNA polymerase. This affinity can interfere with mitochondrial replication, resulting in mitochondrial DNA depletion and dysfunction (33). The relative potency of the nucleosides in inhibiting mitochondrial gamma DNA polymerase in vitro is highest for zalcitabine (ddC), followed by didanosine (ddI), stavudine (d4T), 3TC, ZDV and abacavir (ABC) (34). Toxicity related to mitochondrial dysfunction has been reported to occur in infected patients receiving long-term treatment with nucleoside analogs and generally has resolved with discontinuation of the drug or drugs; a possible genetic susceptibility to these toxicities has been suggested (33). These toxicities may be of particular concern for pregnant women and infants with in utero exposure to nucleoside analog drugs.
# During Pregnancy
Clinical disorders linked to mitochondrial toxicity include neuropathy, myopathy, cardiomyopathy, pancreatitis, hepatic steatosis, and lactic acidosis. Among these disorders, symptomatic lactic acidosis and hepatic steatosis may have a female preponderance (35). These syndromes have similarities to rare but life-threatening syndromes that occur during pregnancy, most often during the third trimester: acute fatty liver, and the combination of hemolysis, elevated liver enzymes, and low platelets (the HELLP syndrome). Several investigators have correlated these pregnancy-related disorders with a recessively inherited mitochondrial abnormality in the fetus/infant that results in an inability to oxidize fatty acids (36)(37)(38). Since the mother would be a heterozygotic carrier of the abnormal gene, the risk for liver toxicity might be increased during pregnancy because the mother would be unable to properly oxidize both maternal and accumulating fetal fatty acids (39). Additionally, animal studies have demonstrated that in late gestation, pregnant mice have significant reductions (25%-50%) in mitochondrial fatty acid oxidation and that exogenously administered estradiol and progesterone can reproduce these effects (40,41); whether this can be translated to humans is unknown. However, these data suggest that a disorder of mitochondrial fatty acid oxidation in the mother or her fetus during late pregnancy may play a role in the development of acute fatty liver of pregnancy and HELLP syndrome and possibly contribute to susceptibility to antiretroviral-associated mitochondrial toxicity.
Lactic acidosis with microvacuolar hepatic steatosis is a toxicity related to nucleoside analog drugs that is thought to be related to mitochondrial toxicity; it has been reported to occur in infected persons treated with nucleoside analog drugs for long periods (>6 months). Initially, most cases were associated with ZDV, but later other nucleoside analog drugs, particularly d4T, have been associated with the syndrome. In a report from the FDA Spontaneous Adverse Event Program of 106 patients with this syndrome (60 receiving combination and 46 receiving single nucleoside analog therapy), typical initial symptoms included 1 to 6 weeks of nausea, vomiting, abdominal pain, dyspnea, and weakness (35). Metabolic acidosis with elevated serum lactate and elevated hepatic enzymes was common. Patients described in that report were predominantly female and overweight. The incidence of this syndrome may be increasing, possibly as a result of increased use of combination nucleoside analog therapy or increased recognition of the syndrome. In a cohort of infected patients receiving nucleoside analog therapy followed at Johns Hopkins University during 1989-1994, the incidence of the hepatic steatosis syndrome was 0.13% per year (42). However, in a report from a cohort of 964 HIV-1-infected persons followed in France for 2 years during 1997-1999, the incidence of symptomatic hyperlactatemia was 0.8% per year for all patients and 1.2% for patients receiving a regimen including d4T (43).
The frequency of this syndrome in pregnant HIV-1-infected women receiving nucleoside analog treatment is unknown. In 1999, Italian researchers reported a case of severe lactic acidosis in an infected pregnant woman who was receiving d4T-3TC at the time of conception and throughout pregnancy and who experienced symptoms and fetal death at 38 weeks' gestation (44). Bristol-Myers Squibb has reported three maternal deaths due to lactic acidosis, two with and one without accompanying pancreatitis, among women who were either pregnant or postpartum and whose antepartum therapy during pregnancy included d4T and ddI in combination with other antiretroviral agents (either a protease inhibitor or nevirapine) (45). All women were receiving treatment with these agents at the time of conception and continued for the duration of pregnancy; all presented late in gestation with symptomatic disease that progressed to death in the immediate postpartum period. Two cases were also associated with fetal death.
It is unclear if pregnancy augments the incidence of the lactic acidosis/hepatic steatosis syndrome that has been reported for nonpregnant persons receiving nucleoside analog treatment. However, because pregnancy itself can mimic some of the early symptoms of the lactic acidosis/ hepatic steatosis syndrome or be associated with other disorders of liver metabolism, these cases emphasize the need for physicians caring for HIV-1-infected pregnant women receiving nucleoside analog drugs to be alert for early signs of this syndrome. Pregnant women receiving nucleoside analog drugs should have hepatic enzymes and electrolytes assessed more frequently during the last trimester of pregnancy, and any new symptoms should be evaluated thoroughly. Additionally, because of the reports of several cases of maternal mortality secondary to lactic acidosis with prolonged use of the combination of d4T and ddI by HIV-1-infected pregnant women, clinicians should prescribe this antiretroviral combination during pregnancy with caution and generally only when other nucleoside analog drug combinations have failed or have caused unacceptable toxicity or side effects.
# In Utero Exposure
A study conducted in France reported that in a cohort of 1,754 uninfected infants born to HIV-1-infected women who received antiretroviral drugs during pregnancy, eight infants with in utero or neonatal exposure to either ZDV-3TC (four infants) or ZDV alone (four infants) developed indications of mitochondrial dysfunction after the first few months of life (46). Two of these infants (both of whom had been exposed to ZDV-3TC) contracted severe neurologic disease and died, three had mild to moderate symptoms, and three had no symptoms but had transient laboratory abnormalities. An association between these findings and in utero exposure to antiretroviral drugs has not been definitively established.
In infants followed through age 18 months in PACTG 076, the occurrence of neurologic events was rare; seizures occurred in one child exposed to ZDV and two exposed to placebo, and one child in each group had reported spasticity. Mortality at 18 months was 1.4% among infants given ZDV compared with 3.5% among those given placebo (47). The Perinatal Safety Review Working Group performed a retrospective review of deaths occurring among children born to HIV-1-infected women and followed during 1986-1999 in five large prospective U.S. perinatal cohorts. No deaths similar to those reported from France or with clinical findings attributable to mitochondrial dysfunction were identified in a database of >16,000 uninfected children born to HIV-1-infected women with and without antiretroviral drug exposure (48). However, most of the infants with antiretroviral exposure had been exposed to ZDV alone and only a relatively small proportion (approximately 6%) had been exposed to ZDV-3TC. In an African perinatal trial (PETRA) that compared three regimens of ZDV-3TC (during pregnancy starting at 36 weeks' gestation, during labor, and through 1 week postpartum; during labor and postpartum; and during labor only) with placebo for prevention of transmission, data have been reviewed relating to neurologic adverse events among 1,798 children who participated. No increased risk of neurologic events was observed among children treated with ZDV-3TC compared with placebo, regardless of the intensity of treatment (49). Finally, in a study of 382 uninfected infants born to HIV-1-infected women, echocardiograms were prospectively performed every 4 to 6 months during the first 5 years of life; 9% of infants had been exposed to ZDV prenatally (50). No significant differences in ventricular function were observed between infants exposed and not exposed to ZDV.
Even if the association of mitochondrial dysfunction and in utero antiretroviral exposures is demonstrated, the development of severe or fatal mitochondrial disease in these infants appears to be extremely rare and should be compared against the clear benefit of ZDV in reducing transmission of a fatal infection by nearly 70% (51). These results emphasize the importance of the existing Public Health Service recommendation for long-term follow-up for any child with in utero exposure to antiretroviral drugs.
# Antiretroviral Pregnancy Registry
Health-care providers who are treating HIV-1-infected pregnant women and their newborns are strongly advised to report instances of prenatal exposure to antiretroviral drugs (either alone or in combination) to the Antiretroviral Pregnancy Registry. This registry is an epidemiologic project to collect observational, nonexperimental data regarding antiretroviral exposure during pregnancy for the purpose of assessing the potential teratogenicity of these drugs. Registry data will be used to supplement animal toxicology studies and assist clinicians in weighing the potential risks and benefits of treatment for individual patients. The Antiretroviral Pregnancy Registry is a collaborative project of pharmaceutical manufacturers with an advisory committee of obstetric and pediatric practitioners. The registry does not use patient names, and registry staff obtain birth outcome follow-up information from the reporting physician.

In 1996, final results were reported for all 419 infants enrolled in PACTG 076. The results concur with those initially reported in 1994; the Kaplan-Meier estimated HIV-1 transmission rate for infants who received placebo was 22.6%, compared with 7.6% for those who received ZDV, a 66% reduction in risk for transmission (52).
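The 66% figure follows directly from the two Kaplan-Meier estimates:

$$\frac{22.6\% - 7.6\%}{22.6\%} \approx 0.66$$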
The mechanism by which ZDV reduced transmission in PACTG 076 participants has not been fully defined. The effect of ZDV on maternal HIV-1 RNA does not fully account for the observed efficacy of ZDV in reducing transmission. Preexposure prophylaxis of the fetus or infant may offer substantial protection. If so, transplacental passage of antiretroviral drugs would be crucial for prevention of transmission. Additionally, in placental perfusion studies, ZDV has been metabolized into the active triphosphate within the placenta (53,54), which could provide additional protection against in utero transmission. This phenomenon may be unique to ZDV because metabolism to the active triphosphate form within the placenta has not been observed in the other nucleoside analogs that have been evaluated (i.e., ddI and ddC) (55,56).
In PACTG 076, similar rates of congenital abnormalities occurred among infants with and without in utero ZDV exposure. Data from the Antiretroviral Pregnancy Registry also have demonstrated no increased risk for congenital abnormalities among infants born to women who receive ZDV antenatally compared with the general population (57). Among uninfected infants from PACTG 076 followed from birth to a median age of 4.2 years (range 3.2-5.6 years), no differences were noted in growth, neurodevelopment, or immunologic status between infants born to mothers who received ZDV compared with those born to mothers who received placebo (58). No malignancies have been observed in short-term (i.e., up to age 6 years) follow-up of >727 infants from PACTG 076 or from a prospective cohort study involving infants with in utero ZDV exposure (59). However, follow-up is too limited to provide a definitive assessment of carcinogenic risk with human exposure. Long-term monitoring continues to be recommended for all infants who have received in utero ZDV exposure or in utero exposure to any of the antiretroviral drugs.
The efficacy of ZDV chemoprophylaxis for reducing HIV-1 transmission among populations of infected women with characteristics unlike those of the PACTG 076 population has been evaluated in another perinatal protocol (PACTG 185) and in prospective cohort studies. PACTG 185 enrolled pregnant women with advanced HIV-1 disease and low CD4+ counts who were receiving antiretroviral therapy; 24% had received ZDV before the current pregnancy (60). All women and infants received the three-part ZDV regimen combined with either infusions of hyperimmune HIV-1 immunoglobulin (HIVIG) containing high levels of antibodies to HIV-1 or standard intravenous immunoglobulin (IVIG) without HIV-1 antibodies. Because advanced maternal HIV-1 disease has been associated with increased risk for perinatal transmission, the transmission rate in the control group was hypothesized to be 11%-15% despite the administration of ZDV. At the first interim analysis, the transmission rate for the combined group was only 4.8% and did not substantially differ by whether the women received HIVIG or IVIG or by duration of ZDV use (60). The results of this trial confirm the efficacy of ZDV observed in PACTG 076 and extend this efficacy to women with advanced disease, low CD4+ count, and prior ZDV therapy. Rates of perinatal transmission have been documented to be as low as 3%-4% among women with HIV-1 infection who receive all three components of the ZDV regimen, including women with advanced HIV-1 disease (6,60).
At least two studies suggest that antenatal use of combination antiretroviral regimens might further reduce transmission. In an open-label, nonrandomized study of 445 women with HIV-1 infection in France, 3TC was added at 32 weeks' gestation to standard ZDV prophylaxis; 3TC was also given to the infant for 6 weeks in addition to ZDV (27). The transmission rate in the ZDV-3TC group was 1.6% (95% CI = 0.7%-3.3%); in comparison, the transmission rate in a historical control group of women receiving only ZDV was 6.8% (95% CI = 5.1%-8.7%). In a longitudinal epidemiologic study conducted in the United States since 1990, transmission was observed in 20% of women with HIV-1 infection who received no antiretroviral treatment during pregnancy, 10.4% who received ZDV alone, 3.8% who received combination therapy without protease inhibitors, and 1.2% who received combination therapy with protease inhibitors (61).
# International Antiretroviral Prophylaxis Clinical Trials
In a trial evaluating short-course antenatal/intrapartum ZDV prophylaxis and perinatal transmission among nonbreastfeeding women in Thailand, administration of ZDV 300 mg twice daily for 4 weeks antenatally and 300 mg every 3 hours orally during labor was shown to reduce perinatal transmission by approximately 50% compared with placebo (62). The transmission rate was 19% in the placebo group versus 9% in the ZDV group. A second, four-arm factorial design trial in Thailand compared administration of ZDV antenatally starting at 28 or 36 weeks' gestation, orally intrapartum, and to the neonate for 3 days or 6 weeks. At an interim analysis, the transmission rate in the arm receiving ZDV antenatally starting at 36 weeks and postnatally for 3 days to the infant was 10%, which was significantly higher than for the long-long arm (antenatal starting at 28 weeks and infant administration for 6 weeks) (63). The transmission rate in the short-short arm of this study was similar to the 9% observed with short antenatal/intrapartum ZDV in the first Thai study. The rate of in utero transmission was higher among women in the short antenatal arms compared with those receiving longer antenatal therapy, suggesting that longer treatment of the infant cannot substitute for longer treatment of the mother.
A third trial in Africa (PETRA trial) among breastfeeding HIV-1-infected women has shown that a combination regimen of ZDV and 3TC administered starting at 36 weeks' gestation, orally intrapartum, and for 1 week postpartum to the woman and infant reduced transmission at age 6 weeks by approximately 50% compared with placebo (64). The transmission rate at age 6 weeks was 15% in the placebo group versus 6% with the three-part ZDV-3TC regimen. This efficacy is similar to the efficacy observed in the Thailand study of antepartum/intrapartum short-course ZDV in nonbreastfeeding women (62).
Investigators have identified two possible intrapartum/ postpartum regimens (either ZDV-3TC or nevirapine) that could provide an effective intrapartum/postpartum intervention for women for whom the diagnosis of HIV-1 is not made until near to or during labor. The PETRA African ZDV-3TC trial among breastfeeding HIV-1-infected women also demonstrated that an intrapartum/postpartum regimen, started during labor and continued for 1 week postpartum in the woman and infant, reduced transmission at age 6 weeks from 15% in the placebo group to 9% in the group receiving the two-part ZDV-3TC regimen, a reduction of 40% (64). In this trial, oral ZDV-3TC administered solely during the intrapartum period was not effective in lowering transmission. Another study in Uganda (HIVNET 012), again in a breastfeeding population, demonstrated that a single 200-mg oral dose of nevirapine given to the mother at onset of labor combined with a single 2-mg/kg oral dose given to her infant at age 48-72 hours reduced transmission by nearly 50% compared with a very short regimen of ZDV given orally during labor and to the infant for 1 week (65). Transmission at age 6 weeks was 12% in the nevirapine group compared with 21% in the ZDV group. A subsequent trial in South Africa demonstrated similar transmission rates with a modified HIVNET 012 nevirapine regimen (nevirapine given to the woman as a single dose during labor with a second dose at 48 hours postpartum, and a single dose to the infant at age 48 hours) compared with the PETRA regimen of oral ZDV-3TC during labor and for 1 week after delivery to the mother and infant (66). Transmission rates at age 8 weeks were 13.3% in the nevirapine arm and 10.9% in the ZDV-3TC arm.
Two clinical trials have suggested that the addition of the HIVNET 012 single-dose nevirapine regimen to short-course ZDV may provide increased efficacy in reducing perinatal transmission. A study of nonbreastfeeding women in Thailand compared a short-course ZDV regimen (starting at 28 weeks' gestation, given orally intrapartum, and for 1 week to the infant) with two combination regimens: short-course ZDV plus single-dose intrapartum/neonatal nevirapine, and short-course ZDV plus intrapartum maternal nevirapine only. In the short-course ZDV-only arm, enrollment was discontinued by the Data and Safety Monitoring Board at the first interim analysis because transmission was significantly higher among those receiving ZDV alone compared with those receiving the intrapartum/neonatal nevirapine combination regimen (67). The study is continuing to enroll to allow comparison of the two combination arms. A second open-label study in Côte d'Ivoire reported a 7.1% transmission rate at age 4 weeks with administration of short-course ZDV (starting at 36 weeks, given orally intrapartum, and for 1 week to the infant) combined with single-dose intrapartum/neonatal nevirapine. This was lower than for a nonconcurrent historical control group receiving ZDV alone (68).
In contrast to these studies, which evaluated combining single-dose nevirapine with short-course ZDV, a study in the United States, Europe, Brazil, and the Bahamas (PACTG 316) evaluated whether the addition of the HIVNET 012 single-dose nevirapine regimen to standard antiretroviral therapy (at minimum the three-part full ZDV regimen) would provide additional benefits in lowering transmission. In this study, 1,506 pregnant women with HIV-1 infection who were receiving antiretroviral therapy (77% were receiving combination antiretroviral regimens) were randomized to receive a single dose of nevirapine or nevirapine placebo at onset of labor, and their infants received a single dose (according to the maternal randomization) at age 48 hours. Transmission was not significantly different between groups, occurring in 1.6% of women in the placebo group and 1.4% of women in the nevirapine group (69).
Certain data indicate that postexposure antiretroviral prophylaxis of infants whose mothers did not receive antepartum or intrapartum antiretroviral drugs might provide some protection against transmission. Although data from some epidemiologic studies do not support efficacy of postnatal ZDV alone, other data demonstrate efficacy if ZDV is started rapidly following birth (6,70,71). In a study from North Carolina, the rate of infection among HIV-1-exposed infants who received only postpartum ZDV chemoprophylaxis was similar to that observed among infants who received no ZDV chemoprophylaxis (6). However, another epidemiologic study from New York State determined that administration of ZDV to the neonate for 6 weeks was associated with a significant reduction in transmission if the drug was initiated within 24 hours of birth (the majority of infants started within 12 hours) (70,71). Consistent with a possible preventive effect of rapid postexposure prophylaxis, a retrospective case-control study of health-care workers from the United States, France, and the United Kingdom who had nosocomial exposure to HIV-1-infected blood determined that postexposure use of ZDV was associated with reduced odds of contracting HIV-1 (adjusted odds ratio = 0.2; 95% CI = 0.1-0.6) (72). Several ongoing clinical trials are attempting to determine the optimal postexposure antiretroviral prophylaxis regimen for infants.
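For readers less familiar with odds ratios: when the outcome is uncommon, the odds ratio approximates the relative risk, so an adjusted odds ratio of 0.2 corresponds to roughly an 80% reduction in the odds of infection, consistent with the 79% figure cited for the same case-control study later in this report:

$$
1 - \mathrm{OR} = 1 - 0.2 = 0.8 \approx 80\%\ \text{reduction in odds of seroconversion}
$$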
# Perinatal HIV-1 Transmission and Maternal HIV-1 RNA Copy Number
The correlation of HIV-1 RNA levels with risk for disease progression in nonpregnant infected adults suggests that HIV-1 RNA should be monitored during pregnancy at least as often as recommended for persons who are not pregnant (i.e., every 3 to 4 months or approximately once each trimester). In addition, HIV-1 RNA levels should be evaluated at 34-36 weeks of gestation to allow discussion of options for mode of delivery based on HIV-1 RNA results and clinical circumstances.
Although no data indicate that pregnancy accelerates HIV-1 disease progression, longitudinal measurements of HIV-1 RNA levels during and after pregnancy have been evaluated in only a limited number of prospective cohort studies. In one cohort of 198 HIV-1-infected women, plasma HIV-1 RNA levels were higher at 6 months postpartum than during pregnancy in many women; this increase occurred regardless of ZDV use during and after pregnancy (73).
Initial data regarding the correlation of viral load with risk for perinatal transmission were conflicting, with some studies suggesting a direct correlation between HIV-1 RNA copy number and risk of transmission (74). However, although higher HIV-1 RNA levels have been observed among women who transmitted HIV-1 to their infants, overlap in HIV-1 RNA copy number has been observed between women who transmitted and those who did not transmit the virus. Transmission has been observed across the entire range of HIV-1 RNA levels (including in women with HIV-1 RNA copy number below the limit of detection of the assay), and the predictive value of RNA copy number for transmission in an individual woman has been relatively poor (73,75,76). In PACTG 076, antenatal maternal HIV-1 RNA copy number was associated with HIV-1 transmission in women receiving placebo. In women receiving ZDV, the relationship was markedly attenuated and no longer statistically significant (52). An HIV-1 RNA threshold below which there was no risk for transmission was not identified; ZDV was effective in reducing transmission regardless of maternal HIV-1 RNA copy number (52,77).
More recent data from larger numbers of ZDV-treated infected pregnant women indicate that HIV-1 RNA levels correlate with risk of transmission even among women treated with antiretroviral agents (62,78-80). Although the risk for perinatal transmission in women with HIV-1 RNA below the level of assay quantitation appears to be extremely low, transmission from mother to infant has been reported among women with all levels of maternal HIV-1 RNA. Additionally, although HIV-1 RNA may be an important risk factor for transmission, other factors also appear to play a role (80-82).
Although there is a general correlation between viral load in plasma and in the genital tract, discordance has also been reported, particularly between HIV-1 proviral load in blood and genital secretions (83-86). If exposure to HIV-1 in the maternal genital tract during delivery is a risk factor for perinatal transmission, plasma HIV-1 RNA levels might not always be an accurate indicator of risk. Long-term changes in one compartment (such as can occur with antiretroviral treatment) may or may not be associated with comparable changes in other body compartments. Further studies are needed to determine the effect of antiretroviral drugs on genital tract viral load and the association of such effects with the risk of perinatal HIV-1 transmission. In the short-course ZDV trial in Thailand, plasma and cervicovaginal HIV-1 RNA levels were reduced by ZDV treatment, and each independently correlated with perinatal transmission (87). The full ZDV chemoprophylaxis regimen, alone or in combination with other antiretroviral agents, including intravenous ZDV during delivery and the administration of ZDV to the infant for the first 6 weeks of life, should be discussed with and offered to all infected pregnant women regardless of their HIV-1 RNA level.
Results of epidemiologic and clinical trials suggest that women receiving highly active antiretroviral regimens that effectively reduce HIV-1 RNA to <1,000 copies/mL or undetectable levels have very low rates of perinatal transmission (27,61,69,88). However, since transmission can occur even at low or undetectable HIV-1 RNA copy numbers, RNA levels should not be a determining factor when deciding whether to use ZDV for chemoprophylaxis. Additionally, the efficacy of ZDV is not solely related to lowering viral load. In one study of 44 HIV-1-infected pregnant women, ZDV was effective in reducing transmission despite minimal effect on HIV-1 RNA levels (89). These results are similar to those observed in PACTG 076 (52). Antiretroviral prophylaxis reduces transmission even among women with HIV-1 RNA levels <1,000 copies/mL (90). Therefore, at a minimum, ZDV prophylaxis should be given even to women who have a very low or undetectable plasma viral load.
# Preconception Counseling and Care for HIV-1-Infected Women of Childbearing Age
Many women infected with HIV-1 (nearly 60% in some centers) enter pregnancy with a known diagnosis, and nearly half of these women enter the first trimester of pregnancy receiving treatment with single or multiagent antiretroviral therapy (91). Additionally, as many as 40% of women who have begun antiretroviral therapy before their pregnancy might require adjustment of their therapeutic regimen during their pregnancy course.
The American College of Obstetricians and Gynecologists advocates extending to all women of childbearing age the opportunity to receive preconception counseling as a component of routine primary medical care. It is recognized that >40% of pregnancies may be unintended and that the diagnosis of pregnancy most frequently occurs late in the first trimester, when organogenesis is nearly completed. Preconception care can identify risk factors for adverse maternal or fetal outcome (e.g., age, diabetes, hypertension), provide education and counseling targeted to the patient's individual needs, and treat or stabilize medical conditions before conception to optimize maternal and fetal outcomes (92).
For women with HIV-1 infection, preconception care must also focus on maternal infection status, viral load, immune status, and therapeutic regimen as well as education regarding perinatal transmission risks and prevention strategies, expectations for the child's future, and, where desired, effective contraception until the optimal maternal health status for pregnancy is achieved.
The following components of preconception counseling are recommended for HIV-1-infected women:
- selection of effective and appropriate contraceptive methods to reduce the likelihood of unintended pregnancy;
- education and counseling about perinatal transmission risks, strategies to reduce those risks, and potential effects of HIV-1 or treatment on pregnancy course and outcomes;
- initiation or modification of antiretroviral therapy:
  - avoid agents with potential reproductive toxicity for the developing fetus (e.g., efavirenz, hydroxyurea) (20),
  - choose agents effective in reducing the risk of perinatal HIV-1 transmission,
  - attain a stable, maximally suppressed maternal viral load,
  - evaluate and control for therapy-associated side effects that may adversely affect maternal/fetal health outcomes (e.g., hyperglycemia, anemia, hepatic toxicity);
- evaluation and appropriate prophylaxis for opportunistic infections and administration of immunizations (e.g., influenza, pneumococcal, or hepatitis B vaccines) as indicated;
- optimization of maternal nutritional status;
- institution of the standard measures for preconception evaluation and management (e.g., assessment of reproductive and familial genetic history, screening for infectious diseases/sexually transmitted diseases, and initiation of folic acid supplementation);
- screening for maternal psychological and substance abuse disorders; and
- planning for perinatal consultation if desired or indicated.

HIV-1-infected women of childbearing potential receive primary health-care services in various clinical settings (e.g., family planning, family medicine, internal medicine, obstetrics/gynecology). It is imperative that primary health-care providers consider the fundamental principles of preconception counseling an integral component of comprehensive primary health care for improving maternal/child health outcomes.
# General Principles Regarding the Use of Antiretroviral Agents in Pregnancy
Medical care of the HIV-1-infected pregnant woman requires coordination and communication between the HIV specialist caring for the woman when she is not pregnant and her obstetrician. Decisions regarding use of antiretroviral drugs during pregnancy should be made by the woman after discussion with her health-care provider about the known and unknown benefits and risks of therapy. Initial evaluation of an infected pregnant woman should include an assessment of HIV-1 disease status and recommendations regarding antiretroviral treatment or alteration of her current antiretroviral regimen.
This assessment should include the following:
- evaluation of the degree of existing immunodeficiency, as determined by CD4+ count;
- risk for disease progression, as determined by the level of plasma HIV-1 RNA;
- history of prior or current antiretroviral therapy;
- gestational age; and
- supportive care needs.

For women who are not currently receiving antiretroviral therapy, decisions regarding initiation of therapy should be based on the same criteria used for women who are not pregnant, with the additional consideration of the potential impact of such therapy on the fetus and infant (14). Similarly, for women currently receiving antiretroviral therapy, decisions regarding alterations in therapy should involve the same considerations as those used for women who are not pregnant. The three-part ZDV chemoprophylaxis regimen, alone or in combination with other antiretroviral agents, should be discussed with and offered to all infected pregnant women to reduce the risk for perinatal HIV-1 transmission.
Decisions regarding the use and choice of antiretroviral drugs during pregnancy are complex; several competing factors influencing risk and benefit must be weighed. Discussion regarding the use of antiretroviral drugs during pregnancy should include the following:
- what is known and not known about the effects of such drugs on the fetus and newborn, including the lack of long-term outcome data on the use of any of the available antiretroviral drugs during pregnancy;
- what treatment is recommended for the health of the HIV-1-infected woman; and
- the efficacy of ZDV for reduction of perinatal HIV-1 transmission.
Results from preclinical and animal studies and available clinical information about use of the various antiretroviral agents during pregnancy also should be discussed (20). The hypothetical risks of these drugs during pregnancy should be placed in perspective with the proven benefit of antiretroviral therapy for the health of the infected woman and the benefit of ZDV chemoprophylaxis for reducing the risk for HIV-1 transmission to her infant.
Discussion of treatment options should be noncoercive, and the final decision regarding use of antiretroviral drugs is the responsibility of the woman. Decisions regarding use and choice of antiretroviral drugs for persons who are not pregnant are becoming increasingly complicated as the standard of care moves toward simultaneous use of multiple antiretroviral drugs to suppress viral replication below detectable limits. These decisions are further complicated in pregnancy because the long-term consequences for the infant who has been exposed to antiretroviral drugs in utero are unknown. A woman's decision to refuse treatment with ZDV or other drugs should not result in punitive action or denial of care. Further, use of ZDV alone should not be denied to a woman who wishes to minimize exposure of the fetus to other antiretroviral drugs and therefore, after counseling, chooses to receive only ZDV during pregnancy to reduce the risk for perinatal transmission.
A long-term treatment plan should be developed after discussion between the patient and the health-care provider and should emphasize the importance of adherence to any prescribed antiretroviral regimen. Depending on individual circumstances, provision of support services, mental health services, and drug abuse treatment may be required. Coordination of services among prenatal care providers, primary care and HIV-1 specialty care providers, mental health and drug abuse treatment services, and public assistance programs is essential to ensure adherence of the infected woman to antiretroviral treatment regimens.
General counseling should include what is known regarding risk factors for perinatal transmission. Cigarette smoking, illicit drug use, and unprotected sexual intercourse with multiple partners during pregnancy have been associated with risk for perinatal HIV-1 transmission (93-97), and discontinuing these practices might reduce this risk. In addition, CDC recommends that infected women in the United States refrain from breastfeeding to avoid postnatal transmission of HIV-1 to their infants through breast milk (3,98); these recommendations also should be followed by women receiving antiretroviral therapy. Passage into breast milk has been evaluated for only a few antiretroviral drugs. ZDV, 3TC, and nevirapine can be detected in the breast milk of women, and ddI, d4T, abacavir, delavirdine, indinavir, ritonavir, saquinavir, and amprenavir can be detected in the breast milk of lactating rats. Limited data are available regarding either the efficacy of antiretroviral therapy for the prevention of postnatal transmission of HIV-1 through breast milk or the toxicity of long-term antiretroviral exposure of the infant through breast milk.
Women who must temporarily discontinue therapy because of pregnancy-related hyperemesis should not resume therapy until sufficient time has elapsed to ensure that the drugs will be tolerated. To reduce the potential for emergence of resistance, if therapy requires temporary discontinuation for any reason during pregnancy, all drugs should be stopped and reintroduced simultaneously.
# Recommendations for Antiretroviral Chemoprophylaxis to Reduce Perinatal HIV-1 Transmission
The following recommendations for use of antiretroviral chemoprophylaxis to reduce the risk for perinatal transmission are based on situations that may be commonly encountered in clinical practice (Box 1), with relevant considerations highlighted in the subsequent discussion sections. These recommendations are only guidelines, and flexibility should be exercised according to the patient's individual circumstances. In the 1994 recommendations (2), six clinical situations were delineated on the basis of maternal CD4+ count, weeks of gestation, and prior antiretroviral use. Because current data indicate that the PACTG 076 ZDV regimen also is effective for women with advanced disease, low CD4+ count, and prior ZDV therapy, clinical situations based on CD4+ count and prior ZDV use are not presented. Additionally, because data indicate that most transmission occurs near the time of or during delivery, ZDV chemoprophylaxis is recommended regardless of weeks of gestation; thus, clinical situations based on weeks of gestation also are not presented.
The antenatal dosing regimen in PACTG 076 (100 mg administered orally five times daily) (Table 1) was selected on the basis of the standard ZDV dosage for adults at the time of the study. However, recent data have indicated that administration of ZDV three times daily will maintain intracellular ZDV triphosphate at levels comparable with those observed with more frequent dosing (99-101). Comparable clinical response also has been observed in some clinical trials among persons receiving ZDV twice daily (102-104). Thus, the current standard ZDV dosing regimen for adults is 200 mg three times daily, or 300 mg twice daily.
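Simple arithmetic from the doses just cited shows that the less frequent schedules deliver a slightly higher total daily dose than the PACTG 076 regimen:

$$
100\,\text{mg} \times 5 = 500\,\text{mg/day}; \qquad 200\,\text{mg} \times 3 = 600\,\text{mg/day}; \qquad 300\,\text{mg} \times 2 = 600\,\text{mg/day}
$$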
# BOX 1. Clinical situations and recommendations for use of antiretroviral drugs to reduce perinatal human immunodeficiency virus type 1 (HIV-1) transmission

1. HIV-1-infected pregnant women who have not received prior antiretroviral therapy
- Pregnant women with HIV-1 infection must receive standard clinical, immunologic, and virologic evaluation. Recommendations for initiation and choice of antiretroviral therapy should be based on the same parameters used for persons who are not pregnant, although the known and unknown risks and benefits of such therapy during pregnancy must be considered and discussed.
- The three-part zidovudine (ZDV) chemoprophylaxis regimen, initiated after the first trimester, should be recommended for all pregnant women with HIV-1 infection regardless of antenatal HIV-1 RNA copy number to reduce the risk for perinatal transmission.
- The combination of ZDV chemoprophylaxis with additional antiretroviral drugs for treatment of HIV-1 infection is recommended for infected women whose clinical, immunologic, or virologic status requires treatment or whose HIV-1 RNA is >1,000 copies/mL regardless of clinical or immunologic status.
- Women who are in the first trimester of pregnancy may consider delaying initiation of therapy until after 10-12 weeks' gestation.
2. HIV-1-infected women receiving antiretroviral therapy during the current pregnancy
- HIV-1-infected women receiving antiretroviral therapy whose pregnancy is identified after the first trimester should continue therapy. ZDV should be a component of the antenatal antiretroviral treatment regimen after the first trimester whenever possible, although this may not always be feasible.
- Women receiving antiretroviral therapy whose pregnancy is recognized during the first trimester should be counseled regarding the benefits and potential risks of antiretroviral administration during this period, and continuation of therapy should be considered. If therapy is discontinued during the first trimester, all drugs should be stopped and reintroduced simultaneously to avoid the development of drug resistance.
- Regardless of the antepartum antiretroviral regimen, ZDV administration is recommended during the intrapartum period and for the newborn.
3. HIV-1-infected women in labor who have had no prior therapy
- Several effective regimens are available for women who have had no prior therapy (Table 3):
  - a single dose of nevirapine at the onset of labor followed by a single dose of nevirapine for the newborn at age 48 hours,
  - oral ZDV and lamivudine (3TC) during labor, followed by 1 week of oral ZDV-3TC for the newborn,
  - intrapartum intravenous ZDV followed by 6 weeks of ZDV for the newborn, or
  - the 2-dose nevirapine regimen combined with intrapartum intravenous ZDV and 6 weeks of ZDV for the newborn.
- In the immediate postpartum period, the woman should have appropriate assessments (e.g., CD4+ T-lymphocyte count and HIV-1 RNA copy number) to determine whether antiretroviral therapy is recommended for her own health.
4. Infants born to mothers who have received no antiretroviral therapy during pregnancy or intrapartum
- The 6-week neonatal component of the ZDV chemoprophylactic regimen should be discussed with the mother and offered for the newborn.
- ZDV should be initiated as soon as possible after delivery, preferably within 6-12 hours of birth.
- Some clinicians might use ZDV in combination with other antiretroviral drugs, particularly if the mother is known or suspected to have ZDV-resistant virus. However, the efficacy of this approach for prevention of transmission is unknown, and appropriate dosing regimens for neonates are incompletely defined.
- In the immediate postpartum period, the woman should undergo appropriate assessments (e.g., CD4+ count and HIV-1 RNA copy number) to determine if antiretroviral therapy is required for her own health. The infant should undergo early diagnostic testing so that, if he or she is HIV-1 infected, treatment can be initiated as soon as possible.
Note: Discussion of treatment options and recommendations should be noncoercive, and the final decision regarding use of antiretroviral drugs is the responsibility of the woman. A decision to not accept treatment with ZDV or other drugs should not result in punitive action or denial of care. Use of ZDV should not be denied to a woman who wishes to minimize exposure of the fetus to other antiretroviral drugs and who therefore chooses to receive only ZDV during pregnancy to reduce the risk for perinatal transmission.
Because the mechanism by which ZDV reduces perinatal transmission is not known, these dosing regimens may not have equivalent efficacy to that observed in PACTG 076. However, a regimen of two or three times daily is expected to increase adherence. The recommended ZDV dosage for infants was derived from pharmacokinetic studies performed among full-term infants (105). ZDV is primarily cleared through hepatic glucuronidation to an inactive metabolite. The glucuronidation metabolic enzyme system is immature in neonates, leading to prolonged ZDV half-life and reduced clearance compared with older infants (ZDV half-life: 3.1 hours versus 1.9 hours; clearance: 10.9 versus 19.0 mL/minute/kg body weight, respectively). Because premature infants have even greater immaturity in hepatic metabolic function than full-term infants, further prolongation of clearance may be expected. In a study of 15 premature infants who were at 26-33 weeks' gestation and who received different ZDV dosing regimens, mean ZDV half-life was 7.2 hours and mean clearance was 2.5 mL/minute/kg body weight during the first 10 days of life (106). At a mean age of 18 days, a decrease in half-life (4.4 hours) and an increase in clearance (4.3 mL/minute/kg body weight) were found. The appropriate ZDV dosage for premature infants has not been defined but is being evaluated in a phase I clinical trial among premature infants <34 weeks' gestation. The dosing regimen being studied is 1.5 mg/kg body weight orally or intravenously every 12 hours for the first 2 weeks of life; for infants aged 2 to 6 weeks, the dose is increased to 2 mg/kg body weight every 8 hours.
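The age-dependent schedule under study for premature infants can be restated compactly in code. The following minimal sketch is purely illustrative (not a clinical dosing tool); the function name and structure are hypothetical, and the numbers simply restate the phase I regimen for infants <34 weeks' gestation described above:

```python
# Illustrative sketch only -- NOT a clinical dosing tool.

def zdv_dose(weight_kg, age_days):
    """Return (dose_mg, interval_hours) for a premature infant on the
    regimen under study: 1.5 mg/kg every 12 hours for the first 2 weeks
    of life, then 2 mg/kg every 8 hours from week 2 through week 6."""
    if age_days < 14:
        return 1.5 * weight_kg, 12   # first 2 weeks of life
    return 2.0 * weight_kg, 8        # weeks 2-6

# Example: a 1.8-kg infant at age 5 days versus age 21 days
for age in (5, 21):
    dose, interval = zdv_dose(1.8, age)
    print(f"age {age} d: {dose:.1f} mg every {interval} h")
```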
# Clinical Situations and Recommendations for Use of Antiretroviral Prophylaxis

# HIV-1-Infected Pregnant Women Who Have Not Received Prior Antiretroviral Therapy
Recommendation. Pregnant women with HIV-1 infection must receive standard clinical, immunologic, and virologic evaluation. Recommendations for initiation and choice of antiretroviral therapy should be based on the same parameters used for persons who are not pregnant, although the known and unknown risks and benefits of such therapy during pregnancy must be considered and discussed. The three-part ZDV chemoprophylaxis regimen, initiated after the first trimester, should be recommended for all pregnant women with HIV-1 infection regardless of antenatal HIV-1 RNA copy number to reduce the risk for perinatal transmission. The combination of ZDV chemoprophylaxis with additional antiretroviral drugs for treatment of HIV-1 infection is recommended for infected women whose clinical, immunologic, or virologic status requires treatment or whose HIV-1 RNA is >1,000 copies/mL regardless of their clinical or immunologic status. Women who are in the first trimester of pregnancy may consider delaying initiation of therapy until after 10-12 weeks' gestation.
Discussion. When ZDV is administered in the three-part PACTG 076 regimen, perinatal transmission is reduced by approximately 70%. Although the mechanism by which ZDV reduces transmission is not known, protection is likely multifactorial. Preexposure prophylaxis of the infant is provided by passage of ZDV across the placenta so that inhibitory levels of the drug are present in the fetus during the birth process. Although placental passage of ZDV is excellent, that of other antiretroviral drugs is variable (Table 2). Therefore, when combination antiretroviral therapy is initiated during pregnancy, ZDV should be included as a component of antenatal therapy whenever possible. Because the mechanism by which ZDV reduces transmission is not known, the intrapartum and newborn ZDV components of the chemoprophylactic regimen should be administered to reduce perinatal HIV-1 transmission. If a woman does not receive ZDV as a component of her antenatal antiretroviral regimen, intrapartum and newborn ZDV should still be recommended.
Because of the evolving and complex nature of the management of HIV-1 infection, a specialist with experience in the treatment of pregnant women with HIV-1 infection should be involved in their care. Women should be informed that potent combination antiretroviral regimens have substantial benefit for their own health and may provide enhanced protection against perinatal transmission. Several studies have indicated that for women with low or undetectable HIV-1 RNA levels (e.g., <1,000 copies/mL) rates of perinatal transmission are extremely low, particularly when they have received antiretroviral therapy (61,78,79). However, there is no threshold below which lack of transmission can be assured, and the long-term effects of in utero exposure to multiple antiretroviral drugs are unknown. Decisions regarding the use and choice of an antiretroviral regimen should be individualized based on discussion with the woman about the following factors:
- her risk for disease progression and the risks and benefits of delaying initiation of therapy;
- possible benefit of lowering viral load for reducing perinatal transmission;
- potential drug toxicities and interactions with other drugs;
- the need for strict adherence to the prescribed drug schedule to avoid the development of drug resistance;
- unknown long-term effects of in utero drug exposure on the infant; and
- preclinical, animal, and clinical data relevant to use of the currently available antiretroviral agents during pregnancy.
Because the period of organogenesis (when the fetus is most susceptible to potential teratogenic effects of drugs) is during the first 10 weeks of gestation and the risks of antiretroviral therapy during that period are unknown, women in the first trimester of pregnancy might wish to delay initiation of therapy until after 10-12 weeks' gestation. This decision should be carefully considered by the health-care provider and the patient; a discussion should include an assessment of the woman's health status, the benefits and risks of delaying initiation of therapy for several weeks, and the fact that most perinatal HIV-1 transmission likely occurs late in pregnancy or during delivery. Treatment with efavirenz should be avoided during the first trimester because significant teratogenic effects in rhesus macaques were seen at drug exposures similar to those representing human exposure (Table 2) (20). Hydroxyurea is a potent teratogen in a variety of animal species and should also be avoided during the first trimester.
When initiation of antiretroviral therapy is considered optional on the basis of current guidelines for treatment of nonpregnant persons (14), infected pregnant women should be counseled regarding the potential benefits of standard combination therapy and should be offered such therapy, including the three-part ZDV chemoprophylaxis regimen. Although such women are at low risk for clinical disease progression if combination therapy is delayed, antiretroviral therapy that successfully reduces HIV-1 RNA to levels <1,000 copies/mL may substantially lower the risk of perinatal HIV-1 transmission and lessen the need for consideration of elective cesarean delivery as an intervention to reduce transmission risk.
When combination therapy is administered, the regimen should be chosen from those recommended for nonpregnant adults (14). Dual nucleoside analog therapy without the addition of either a protease inhibitor or nonnucleoside reverse transcriptase inhibitor is not recommended for nonpregnant adults because of the potential for inadequate viral suppression and rapid development of resistance (107). For pregnant women not meeting the criteria for antiretroviral therapy for their own health, and receiving antiretroviral drugs only for prevention of perinatal transmission (e.g., those with HIV-1 RNA <1,000 copies/mL), dual nucleoside therapy may be considered in selected circumstances. If combination therapy is given principally to reduce perinatal transmission and would have been optional if the woman were not pregnant, consideration may be given to discontinuing therapy postnatally, with the option to reinitiate treatment according to standard criteria for nonpregnant women. If drugs are discontinued postnatally, all drugs should be stopped simultaneously. Discussion regarding the decision to continue or stop combination therapy postpartum should occur before beginning therapy during pregnancy.
Antiretroviral prophylaxis has been beneficial in preventing perinatal transmission even for infected pregnant women with HIV-1 RNA levels <1,000 copies/mL. In a meta-analysis of factors associated with perinatal transmission among women whose infants were infected despite the women's having HIV-1 RNA <1,000 copies/mL at or near delivery, transmission was only 1.0% among women receiving antenatal antiretroviral therapy (primarily ZDV alone) compared with 9.8% among those receiving no antenatal therapy (90). Therefore, use of antiretroviral prophylaxis is recommended for all pregnant women with HIV-1 infection regardless of antenatal HIV-1 RNA level.
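To put the meta-analysis figures in perspective, the same relative-reduction arithmetic used earlier implies roughly a 10-fold lower transmission rate with antenatal therapy even in this low-viral-load group (a crude calculation from the reported point estimates):

$$
\frac{9.8\% - 1.0\%}{9.8\%} \approx 0.90 \;(\approx 90\%\ \text{relative reduction})
$$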
The time-limited use of ZDV alone during pregnancy for chemoprophylaxis against perinatal transmission is controversial. Standard combination antiretroviral regimens for treatment of HIV-1 infection should be discussed and should be offered to all pregnant women with HIV-1 infection regardless of viral load; they are recommended for all pregnant women with HIV-1 RNA levels >1,000 copies/mL. Some women may wish to restrict exposure of their fetus to antiretroviral drugs during pregnancy and still reduce the risk of transmitting HIV-1 to their infant. Additionally, for women with HIV-1 RNA levels <1,000 copies/mL, time-limited use of ZDV during the second and third trimesters of pregnancy is less likely to induce the development of resistance because of the limited viral replication existing in the patient and the time-limited exposure to the antiretroviral drug. For example, the development of ZDV resistance was unusual among the healthy population of women who participated in PACTG 076 (108). The use of ZDV chemoprophylaxis alone (or, in selected circumstances, dual nucleosides) during pregnancy might be an appropriate option for these women.
# HIV-1-Infected Women Receiving Antiretroviral Therapy During the Current Pregnancy
Recommendation. HIV-1-infected women receiving antiretroviral therapy whose pregnancy is identified after the first trimester should continue therapy. ZDV should be a component of the antenatal antiretroviral treatment regimen after the first trimester whenever possible, although this may not always be feasible. Women receiving antiretroviral therapy whose pregnancy is recognized during the first trimester should be counseled regarding the benefits and potential risks of antiretroviral administration during this period, and continuation of therapy should be considered. If therapy is discontinued during the first trimester, all drugs should be stopped and reintroduced simultaneously to avoid the development of drug resistance. Regardless of the antepartum antiretroviral regimen, ZDV administration is recommended during the intrapartum period and for the newborn.
Discussion. Women who have been receiving antiretroviral treatment for their HIV-1 infection should continue treatment during pregnancy. Discontinuation of therapy could lead to an increase in viral load, which could result in decline in immune status and disease progression as well as adverse consequences for both the fetus and the woman.
Although ZDV should be a component of the antenatal antiretroviral treatment whenever possible, there may be circumstances, such as the occurrence of significant ZDV-related toxicity, when this is not feasible. Additionally, women receiving an antiretroviral regimen that does not contain ZDV but who have HIV-1 RNA levels that are consistently very low or undetectable (e.g., <1,000 copies/mL) have a very low risk of perinatal transmission (61), and there may be concerns that the addition of ZDV to the current regimen could compromise adherence to treatment.
The maternal antenatal antiretroviral treatment regimen should be continued on schedule as much as possible during labor to provide maximal virologic effect and to minimize the chance of development of drug resistance. If a woman has not received ZDV as a component of her antenatal therapeutic antiretroviral regimen, intravenous ZDV should still be administered during the intrapartum period whenever feasible. ZDV and d4T should not be administered together because of potential pharmacologic antagonism; options for women receiving oral d4T as part of their antenatal therapy include either continuation of oral d4T during labor without intravenous ZDV or withholding oral d4T during the period of intravenous ZDV administration during labor. Additionally, the infant should receive the standard 6-week course of ZDV.
For women with suboptimal suppression of HIV-1 RNA (i.e., >1,000 copies/mL) near the time of delivery despite having received prenatal ZDV prophylaxis with or without combination antiretroviral therapy, it is not known if administration of additional antiretroviral drugs during labor and delivery provides added protection against perinatal transmission. In the HIVNET 012 study among Ugandan women who had not received antenatal antiretroviral therapy, a 2-dose nevirapine regimen (single dose to the woman at the onset of labor and single dose to the infant at age 48 hours) significantly reduced perinatal transmission compared with a very short intrapartum/1 week postpartum ZDV regimen (65). For women in the United States, Europe, Brazil, and the Bahamas receiving antenatal antiretroviral therapy, addition of the 2-dose nevirapine regimen did not result in lower transmission rates (69). Given the lack of further reduction of transmission with nevirapine added to one of the standard antepartum regimens used in developed countries and the potential development of nevirapine resistance (See Antiretroviral Drug Resistance and Resistance Testing in Pregnancy), addition of nevirapine during labor for women already receiving antiretroviral therapy is not recommended in the United States.
Women receiving antiretroviral therapy may realize they are pregnant early in gestation and want to consider temporarily stopping antiretroviral treatment until after the first trimester because of concern for potential teratogenicity. Data are insufficient to support or refute the teratogenic risk of antiretroviral drugs when administered during the first 10 weeks of gestation; certain drugs are of more concern than others (Table 2) (20). The decision to continue therapy during the first trimester should be carefully considered by the clinician and the pregnant woman. Discussions should include considerations such as gestational age of the fetus; the woman's clinical, immunologic, and virologic status; and the known and unknown potential effects of the antiretroviral drugs on the fetus. If antiretroviral therapy is discontinued during the first trimester, all agents should be stopped and restarted simultaneously in the second trimester to avoid the development of drug resistance. No data are available to address whether temporary discontinuation of therapy is harmful for the woman or fetus.
Health-care providers might consider administering ZDV in combination with other antiretroviral drugs to newborns of women with a history of prior antiretroviral therapy, particularly in situations in which the woman is infected with HIV-1 with documented high-level ZDV resistance, has had disease progression while receiving ZDV, or has had extensive prior ZDV monotherapy. The efficacy of this approach is unknown but would be analogous to the use of multiple agents for postexposure prophylaxis for adults after inadvertent exposure. However, the appropriate dosage and the short- and long-term safety of many antiretroviral agents in the neonate have not been established. The half-lives of ZDV, 3TC, and nevirapine are prolonged during the neonatal period because of immature liver metabolism and renal function, requiring specific dosing adjustments when these agents are administered to neonates. Optimal dosages for protease inhibitors in the neonatal period are still under study. The infected woman should be counseled regarding the theoretical benefit of combination antiretroviral drugs for the neonate, the potential risks, and available data on appropriate dosing. She should also be informed that using antiretroviral drugs in addition to ZDV for prophylaxis of newborns is of unknown efficacy in reducing the risk of perinatal transmission.
# TABLE 3. Comparison of intrapartum/postpartum regimens for HIV-1-infected women in labor who have had no prior antiretroviral therapy

* ZDV, zidovudine; CI, confidence interval; 3TC, lamivudine.
† If the mother received nevirapine less than 1 hour before delivery, the infant should be given 2 mg/kg oral nevirapine as soon as possible after birth and again at 48-72 hours.
# HIV-1-Infected Women in Labor Who Have Had No Prior Therapy
Recommendation. Several effective regimens are available for intrapartum therapy for women who have had no prior therapy (Table 3):
- a single dose of nevirapine at onset of labor followed by a single dose of nevirapine for the newborn at age 48 hours;
- oral ZDV and 3TC during labor, followed by 1 week of oral ZDV-3TC for the newborn;
- intrapartum intravenous ZDV followed by 6 weeks of ZDV for the newborn; and
- the 2-dose nevirapine regimen combined with intrapartum intravenous ZDV and 6 weeks of ZDV for the newborn.

In the immediate postpartum period, the woman should have appropriate assessments (e.g., CD4+ count and HIV-1 RNA copy number) to determine whether antiretroviral therapy is recommended for her own health.

Discussion. Although intrapartum antiretroviral medications will not prevent perinatal transmission that occurs before labor, most transmission occurs near to or during labor and delivery. Preexposure prophylaxis for the fetus can be provided by giving the mother a drug that rapidly crosses the placenta to produce systemic antiretroviral drug levels in the fetus during the period of intensive exposure to HIV-1 in maternal genital secretions and blood at birth.
Several intrapartum/neonatal antiretroviral prophylaxis regimens are applicable for women in labor who have had no prior antiretroviral therapy (Table 3). Two regimens, one using 2 doses of nevirapine (one each for the mother and infant) and the other a combination of ZDV and 3TC, were shown to reduce perinatal transmission in randomized clinical trials among breastfeeding women, and available epidemiologic data suggest the efficacy of a third, ZDV-only regimen. The fourth regimen, combining ZDV with nevirapine, is based upon theoretical considerations.
In the HIVNET 012 trial, conducted in Uganda, a regimen consisting of a single dose of oral nevirapine given to the woman at onset of labor and a single dose to the infant at age 48 hours was compared with oral ZDV given to the woman every 3 hours during labor and postnatally to the infant for 7 days (Table 3). At age 6 weeks, the rates of transmission were 12% (95% CI = 8%-16%) in the nevirapine arm versus 21% (95% CI = 16%-26%) in the ZDV arm, a 47% reduction (95% CI = 20%-64%) in transmission (65). No serious short-term toxicity was observed in either group. Because no placebo group was included, no conclusions can be drawn regarding the efficacy of the intrapartum/1-week neonatal ZDV regimen versus no treatment.
In the PETRA trial, conducted in Uganda, South Africa, and Tanzania, ZDV and 3TC were administered orally intrapartum and to the woman and infant for 7 days postnatally. Oral ZDV and 3TC were administered at the onset of labor and continued until delivery (Table 3). Postnatally, the woman and infant received ZDV and 3TC every 12 hours for 7 days. At age 6 weeks, the rates of transmission were 9% in the ZDV-3TC arm versus 15% in the placebo arm, a 40% reduction in transmission (64). However, no differences in transmission were observed when oral ZDV and 3TC were administered only during the intrapartum period (transmission of 14% in the ZDV-3TC arm versus 15% in the placebo arm), indicating that some postexposure prophylaxis is needed, at least in breastfeeding settings.
These clinical trials were conducted in Africa, where the majority of women breastfeed their infants. Because HIV-1 can be transmitted by breast milk and the highest risk period for such transmission is the first few months of life (109), the absolute transmission rates observed in the African trials may not be comparable to what might be observed with these regimens in HIV-1-infected women in the United States, where breastfeeding is not recommended. However, comparison of the percentage of reduction in transmission at early timepoints (e.g., 4-6 weeks) may be applicable. In the effective arms of the PETRA trial, antiretroviral drugs were administered postnatally to both the mother and the infant to reduce the risk of early transmission through breast milk. In the United States, administration of ZDV-3TC to the mother postnatally in addition to the infant would not be required for prophylaxis against transmission because HIV-1-infected women are advised not to breastfeed their infants (although ZDV-3TC might be indicated as part of a combination postnatal treatment regimen for the woman).
Epidemiologic data from New York State indicate that intravenous maternal intrapartum ZDV followed by oral ZDV for 6 weeks to the infant may significantly reduce transmission compared with no treatment (Table 3). Transmission rates were 10% (95% CI = 3%-22%) with intrapartum and neonatal ZDV compared with 27% (95% CI = 21%-33%) without ZDV, a 62% reduction in risk (95% CI = 19%-82%) (70,71). Similarly, in an epidemiologic study in North Carolina, intravenous intrapartum and 6-week oral neonatal ZDV treatment was associated with a transmission rate of 11%, compared with 31% without therapy (6). However, intrapartum ZDV combined with very short-term ZDV administration to infants postnatally, e.g., the 1-week postnatal infant ZDV course in HIVNET 012 (65), has not proved effective to date. This underscores the necessity of recommending a full 6-week course of infant treatment when ZDV alone is used.
No data are available to address the relative efficacy of these three intrapartum/neonatal antiretroviral regimens for prevention of transmission. In the absence of data suggesting the superiority of one or more of the possible regimens, the choice should be based on the specific circumstances of each woman. The 2-dose nevirapine regimen offers the advantages of lower cost, the possibility of directly observed therapy, and increased adherence compared with the other two regimens. In a clinical trial (SAINT) in South Africa, which compared the 2-dose nevirapine and the intrapartum/postpartum ZDV-3TC regimens, no significant differences were observed between the two regimens in terms of efficacy in reducing transmission or in maternal and infant toxicity (66).
It has not been determined if combining intravenous intrapartum/6-week neonatal oral ZDV with the 2-dose nevirapine regimen will provide additional benefit over that observed with each regimen alone. Clinical trial data have established that combination therapy is superior to single-drug therapy for treatment of persons with established infection and that infants born to women in labor who have not received any antiretroviral therapy are at high risk for infection. The 2-dose nevirapine regimen had no serious short-term drug-associated toxicity in the 313 mother-infant pairs exposed to the regimen in the HIVNET 012 trial. Nevirapine and ZDV are synergistic in inhibiting HIV-1 replication in vitro (110), and both nevirapine and ZDV rapidly cross the placenta to achieve drug levels in the infant nearly equal to those in the mother. In contrast to ZDV, nevirapine can decrease plasma HIV-1 RNA concentration by at least 1.3 log10 by 7 days after a single dose (111) and is active immediately against intracellular and extracellular virus (112). However, nevirapine resistance can be induced by a single mutation at codon 181, whereas high-level resistance to ZDV requires several mutations. Nevirapine resistance mutations were detected at 6 weeks postpartum in 19% of antiretroviral-naive women and 15% of women receiving antiretroviral drugs during pregnancy who received single-dose nevirapine during labor (see Antiretroviral Drug Resistance and Resistance Testing in Pregnancy).
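As a point of reference for the viral load figures, a decrease of at least 1.3 log10 corresponds to roughly a 20-fold reduction in plasma HIV-1 RNA:

$$
10^{1.3} \approx 20
$$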
A theoretical benefit of combining the intrapartum/neonatal ZDV and nevirapine regimens would be the efficacy of this combination if the woman had acquired infection with HIV-1 that is resistant to either ZDV or nevirapine. Perinatal transmission of antiretroviral drug-resistant virus has been reported but appears to be unusual (6,113,114). Virus with low-level ZDV resistance may be less likely to establish infection than wild-type virus, and transmission may not occur even when maternal virus has high-level ZDV resistance (114-117). Because the prevalence of drug-resistant virus is an evolving phenomenon, surveillance is needed to determine this prevalence in pregnant women over time and the risk of transmission of resistant viral strains. The potential benefits of combination prophylaxis with intrapartum/neonatal nevirapine and ZDV must be weighed against the increased cost; possible problems with nonadherence; potential short- and long-term toxicity, including the risk of emergence of nevirapine-resistant virus; and the lack of definitive data to show that combining the two intrapartum/postpartum regimens offers any additional benefit for prevention of transmission over the use of either drug alone.
# Infants Born to Mothers Who Have Received No Antiretroviral Therapy During Pregnancy or Intrapartum
Recommendation. The 6-week neonatal component of the ZDV chemoprophylactic regimen should be discussed with the mother and offered for the newborn. ZDV should be initiated as soon as possible after delivery, preferably within 6-12 hours of birth. Some clinicians may use ZDV in combination with other antiretroviral drugs, particularly if the mother is known or suspected to have ZDV-resistant virus. However, the efficacy of this approach for prevention of transmission is unknown, and appropriate dosing regimens for neonates are incompletely defined. In the immediate postpartum period, the woman should undergo appropriate assessments (e.g., CD4+ count and HIV-1 RNA copy number) to determine if antiretroviral therapy is required for her own health. The infant should undergo early diagnostic testing so that, if he or she is HIV-1 infected, treatment can be initiated as soon as possible.
Discussion. Definitive data are not available to address whether ZDV administered only during the neonatal period would reduce the risk of perinatal transmission. Epidemiologic data from a New York State study indicate a decline in transmission when infants were given ZDV for the first 6 weeks of life compared with no prophylaxis (70,71). Transmission rates were 9% (95% CI = 4.1%-17.5%) with ZDV prophylaxis of newborns only (initiated within 48 hours after birth) versus 18% (95% CI = 7.7%-34.3%) with prophylaxis initiated after 48 hours, and 27% (95% CI = 21%-33%) with no ZDV prophylaxis (70). Epidemiologic data from North Carolina did not demonstrate a benefit of ZDV for newborns only compared with no prophylaxis (6). Transmission rates were 27% (95% CI = 8%-55%) with prophylaxis of newborns only and 31% (95% CI = 24%-39%) with no prophylaxis. The timing of initiation of infant prophylaxis was not defined in this study. Data from a case-control study of postexposure prophylaxis of health-care workers who had nosocomial percutaneous exposure to blood from HIV-1-infected persons indicate that ZDV administration was associated with a 79% reduction in the risk for HIV-1 seroconversion following exposure (72). Postexposure prophylaxis also has prevented retroviral infection in some studies involving animals (118-120).
The interval during which benefit can be gained from postexposure prophylaxis is undefined. When prophylaxis was delayed beyond 48 hours after birth in the New York State study, no efficacy could be demonstrated. For most infants in this study, prophylaxis was initiated within 24 hours (71). Data from studies of animals indicate that the longer the delay in institution of prophylaxis, the less likely that infection will be prevented. In most studies of animals, antiretroviral prophylaxis initiated 24-36 hours after exposure has usually not been effective for preventing infection, although later administration has been associated with decreased viremia (118-120). In cats, ZDV treatment initiated within the first 4 days after challenge with feline leukemia virus afforded protection, whereas treatment initiated 1 week postexposure did not (121). The relevance of these animal studies to prevention of perinatal HIV-1 transmission in humans is unknown. HIV-1 infection is established in most infected infants by age 1-2 weeks. In a study of 271 infected infants, HIV-1 DNA polymerase chain reaction (PCR) was positive in 38% of samples from infants tested within 48 hours of birth. No substantial change in diagnostic sensitivity was observed within the first week of life, but detection increased rapidly during the second week of life, reaching 93% by age 14 days (122). Initiation of postexposure prophylaxis after age 2 days is not likely to be efficacious in preventing transmission, and by age 14 days, infection would already be established in most infants.
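These detection figures are also consistent with the late timing of most transmission: if only 38% of ultimately infected infants have detectable HIV-1 DNA within 48 hours of birth, then roughly

$$
1 - 0.38 = 0.62
$$

of infections are not yet detectable at birth, in keeping with acquisition near to or during delivery (a crude inference from the reported detection rates, since assay sensitivity also depends on sample timing).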
When the mother has received neither the antenatal nor intrapartum parts of the three-part ZDV regimen, administration of antiretroviral drugs to the newborn provides chemoprophylaxis only after HIV-1 exposure has already occurred. Some clinicians view this situation as analogous to nosocomial postexposure prophylaxis and may wish to provide ZDV in combination with one or more other antiretroviral agents. Such a decision must be accompanied by a discussion with the woman of the potential benefits and risks of this approach and the lack of data to address its efficacy and safety.
# Antiretroviral Drug Resistance and Resistance Testing in Pregnancy
The development of antiretroviral drug resistance is one of the major factors leading to therapy failure in HIV-1-infected persons. Resistant viral variants emerge under selective pressure, especially with incompletely suppressive regimens, because of the inherent mutation-prone process of reverse transcription with viral replication. The administration of combination antiretroviral therapy with maximal suppression of viral replication to undetectable levels limits the development of antiretroviral resistance in both pregnant and nonpregnant persons. Some have raised concern that using non-highly active antiretroviral regimens, such as ZDV monotherapy, for prophylaxis against perinatal transmission could result in the development of resistance, which, in turn, could influence perinatal transmission and limit future maternal therapeutic options. Additionally, the general implications of antiretroviral resistance for maternal, fetal, and newborn health are of increasing interest as more HIV-1-infected women enter pregnancy with prior exposure to antiretroviral drugs.
The prevalence of antiretroviral drug resistance mutations in virus from newly infected, therapy-naive persons has varied by geographic area and the type of assay used (genotypic versus phenotypic) (114,123-126). In surveys from the United States and Europe, rates of primary resistance mutations in the reverse transcriptase gene were >10% in the majority of studies and ranged as high as 23%. Primary resistance mutations in the protease gene ranged from 1% to 16%, and secondary mutations and polymorphisms of the protease gene were very common. The presence of high-level phenotypic resistance (>10-fold increase in the 50% inhibitory concentration [IC50]) was uncommon but tended to occur among isolates with genotypic resistance. Lower-level resistance (2.5- to 10-fold decrease in susceptibility) was more common and tended to occur in the absence of genotypic mutations known to confer resistance.
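The fold-change figures used here to define phenotypic resistance are ratios of inhibitory concentrations; in the standard formulation (a general definition, not specific to these surveys):

$$
\text{fold change} = \frac{\mathrm{IC}_{50}(\text{patient isolate})}{\mathrm{IC}_{50}(\text{wild-type reference})}
$$

so high-level resistance corresponds to a ratio >10 and lower-level resistance to a ratio of 2.5 to 10.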
The prevalence of resistance mutations during pregnancy also varies depending on the characteristics of the population studied. No high-level resistance to ZDV was detected at baseline among a subset of women enrolled in PACTG 076, all of whom had CD4+ counts >200 cells/µL and had received no or only limited prior ZDV therapy (108). Conversely, among women receiving ZDV for maternal health indications before 1994 in the Women and Infants Transmission Study (WITS), any ZDV resistance mutation was detected in 35 (25%) of 142 isolates, and high-level ZDV resistance was detected in 14 (10%) isolates (127). Codon 215 mutations, associated with high-level ZDV resistance, were detected in isolates from 9.6% of 62 consecutive women in the Swiss HIV-1 in Pregnancy Study (115). Similarly, in New York, codon 215 mutations were detected in no isolates from 33 women who delivered before 1997 and in three (9.7%) of isolates from 31 women who delivered from 1997 to 1999; mutations were detected only among women with previous ZDV exposure (128). Among 220 pregnant women with prior ZDV exposure who were enrolled in the Perinatal AIDS Collaborative Transmission Study, virus with primary mutations conferring resistance to nucleoside analog drugs was observed in 17.7%, and primary or secondary resistance mutations in 22%; none of the women had virus containing primary nonnucleoside resistance mutations, 2.3% had secondary nonnucleoside resistance mutations, and 0.5% had virus with a primary mutation conferring resistance to protease inhibitors (117). In all these studies, women evaluated for resistance mutations were a subset of the larger studies, chosen because of detectable HIV-1 RNA levels with amplifiable virus and often because of clinical findings suggesting an increased risk of resistance. Thus, the rate of resistance mutations in the entire population is likely to be much lower than in the subsets.
The detection of ZDV or other resistance mutations was not associated with an increased risk of perinatal transmission in the PACTG 076, PACTG 185, Swiss cohort, or PACTS studies (108,115,117,129). In the WITS substudy, detection of ZDV resistance was not significantly associated with transmission on univariate analysis, but when adjusted for duration of ruptured membranes and total lymphocyte count, resistance mutations conferred an increased risk of transmission (127). Women in this cohort were receiving ZDV during pregnancy for their own health (mean CD4+ count at delivery, 315 cells/µL), usually without intravenous ZDV during labor or ZDV for the infants. Factors associated with resistance at delivery included ZDV use before pregnancy, higher log HIV-1 RNA, and lower CD4+ count. Women with characteristics similar to those in the WITS substudy should be advised to take highly active antiretroviral therapy for their own health and for prevention of perinatal transmission. Although perinatal transmission of resistant virus has been reported (113,130), it appears to be unusual, and it is not clear that the presence of mutations increases the risk of transmission. In the WITS substudy, when a transmitting mother had a mixed viral population of wild-type and low-level resistant virus, only the wild-type virus was found in the infant, suggesting that virus with low-level ZDV resistance may be less transmissible (116).
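The WITS finding above illustrates how covariate adjustment can change an apparent association. The sketch below uses entirely synthetic data (sample size, effect sizes, and variable names are all invented) to show, with statsmodels, how a crude odds ratio for resistance can differ from one adjusted for duration of ruptured membranes, the kind of adjustment described in the substudy.

```python
# A minimal sketch of the kind of adjusted analysis described above, using
# entirely synthetic data (all variable names, effect sizes, and sample
# sizes are invented for illustration, not taken from WITS).

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
resistance = rng.binomial(1, 0.2, n)                    # ZDV resistance detected
rupture_hrs = rng.gamma(2.0, 3.0, n) + 2 * resistance   # confounded covariate
logit = -3.0 + 0.8 * resistance + 0.1 * rupture_hrs
p = 1 / (1 + np.exp(-logit))
transmitted = rng.binomial(1, p)

# Crude model: resistance only
crude = sm.Logit(transmitted, sm.add_constant(resistance)).fit(disp=0)

# Adjusted model: resistance plus duration of ruptured membranes
X = sm.add_constant(np.column_stack([resistance, rupture_hrs]))
adjusted = sm.Logit(transmitted, X).fit(disp=0)

print("crude OR:   ", np.exp(crude.params[1]))
print("adjusted OR:", np.exp(adjusted.params[1]))
```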
Another concern is the potential for resistance developing in the mother during prophylaxis against perinatal transmission, which may then influence future therapy options. In some combination antiretroviral clinical trials, patients with previous ZDV therapy experienced less benefit from combination therapy than those who had never received prior antiretroviral therapy (12,131). However, in these studies the median duration of prior ZDV use was 12-20 months, and enrolled patients had more advanced disease and lower CD4+ counts than did the population of women enrolled in PACTG 076 or those for whom initiation of therapy would be considered optional. Follow-up data from women enrolled in PACTG 076, extending more than 4 years postpartum, indicate no substantial differences in CD4+ count, HIV-1 RNA copy number, development of ZDV resistance, or time to progression to AIDS or death among women who received ZDV compared with those who received placebo (132).
Rapid development of resistance to 3TC has been reported among persons receiving dual nucleoside therapy without other agents. In a small study, the M184V 3TC resistance mutation was detectable by delivery in four (80%) of five women treated with ZDV-3TC during pregnancy (133). In a French cohort in which 3TC was added at 32 weeks' gestation to the PACTG 076 ZDV regimen, the M184V mutation was detected in 52 (39%) of 132 samples tested at 6 weeks postpartum; before receipt of 3TC, the prevalence of this mutation was only 2% (27). ZDV resistance mutations included T215Y/F in nine (7%), M41L in nine (7%), and K70R in 14 (11%). In multivariate analyses, factors associated with detection of the M184V mutation after delivery included lower CD4+ count, higher HIV-1 RNA levels, and longer duration of 3TC therapy. These 3TC resistance mutations have also been noted in clinical trials of three-drug combinations including 3TC (134,135). Thus, dual nucleoside therapy is not recommended for treatment of nonpregnant persons with HIV-1 infection or of pregnant women who fulfill criteria for initiation of antiretroviral therapy for their own health. In selected circumstances, dual nucleoside therapy may be considered for pregnant women who are receiving antiretroviral agents for perinatal prophylaxis only; however, the potential benefits and risks of this approach have not been well studied, and concerns exist about the potential for inadequate viral suppression and rapid development of resistance.
Selection of nevirapine-resistant virus has also been detected at 6 weeks postpartum in women receiving a single dose of nevirapine during labor. In HIVNET 012, in which antiretroviral-naive Ugandan women received a single dose of nevirapine during labor to prevent perinatal HIV-1 transmission, genotypic mutations associated with nevirapine resistance were detected at 6 weeks postpartum in samples from 21 (19%) of 111 women with detectable viral replication who received nevirapine (136). The rate of resistance was similar among mothers whose children were or were not infected. Development of resistance was associated with significantly higher baseline viral loads and lower CD4+ counts. Samples taken 12-24 months after delivery from a subset of these women no longer had detectable nevirapine resistance, suggesting that this regimen might be effective for perinatal prophylaxis in subsequent pregnancies. The implications of this transient, detectable nevirapine genotypic resistance after single-dose nevirapine for future maternal therapeutic options are unclear.
Further data are needed to assess the frequency of development of resistance with single-dose intrapartum nevirapine used alone versus with other agents such as ZDV in women who have not received antenatal treatment. In PACTG 316, in which single-dose nevirapine administered during labor and to the newborn was added to the woman's existing antiretroviral regimen, newly detectable nevirapine resistance mutations were found at 6 weeks postpartum in 14 (15%) of 95 women who received single-dose intrapartum nevirapine and had detectable HIV-1 RNA at delivery (137). The risk for development of a new nevirapine resistance mutation did not correlate with CD4+ count at delivery, HIV-1 RNA copy number, or type of antenatal antiretroviral treatment (resistance occurred in women receiving highly active antiretroviral therapy as well as ZDV monotherapy). Given the lack of further reduction in transmission when nevirapine is added to an established regimen (69) and the potential for development of resistance, the addition of nevirapine during labor for women already receiving antiretroviral therapy is not recommended.
The International AIDS Society-USA Panel and the EuroGuidelines Group for HIV-1 Resistance recommend that all pregnant women with detectable HIV-1 RNA levels undergo resistance testing, even if they are antiretroviral naive, to try to maximize the response to antiretroviral drugs in pregnancy, although data demonstrating improved maternal outcome or reduced risk of perinatal transmission with routine resistance testing are not available (138,139). Until further data are available, resistance testing for HIV-1-infected pregnant women should be done for the same indications as for nonpregnant persons:
- those with acute infection;
- those who have virologic failure, with persistently detectable HIV-1 RNA levels while receiving antenatal therapy or suboptimal viral suppression after initiation of antiretroviral therapy; or
- those with a high likelihood of having resistant virus, based on community prevalence of resistant virus, known drug resistance in the woman's sex partner, or other source of infection.

The optimal prophylactic regimen for newborns of women with ZDV resistance is unknown. Therefore, antiretroviral prophylaxis of the infant born to a woman with known or suspected ZDV-resistant HIV-1 should be determined in consultation with pediatric infectious disease specialists.
Recommendations related to antiretroviral drug resistance and drug resistance testing for pregnant women with HIV-1 infection are listed here (Box 2).
# Perinatal HIV-1 Transmission and Mode of Delivery
Optimal medical management during pregnancy should include antiretroviral therapy to suppress plasma HIV-1 RNA to undetectable levels. Labor and delivery management of HIV-1-infected pregnant women should focus on minimizing the risk for both perinatal transmission of HIV-1 and the potential for maternal and neonatal complications.
Several studies done before viral load testing and combination antiretroviral therapy became a routine part of clinical practice consistently showed that cesarean delivery (elective or scheduled) performed before onset of labor and rupture of membranes was associated with a significant decrease in perinatal HIV-1 transmission compared with other types of delivery, with reductions ranging from 55% to 80%. Data regarding transmission rates according to receipt of ZDV have been summarized (Table 4) (140,141).
The observational data comprised individual patient information from 15 prospective cohort studies, including more than 7,800 mother-child pairs, analyzed in a meta-analysis (140). In this meta-analysis, the rate of perinatal HIV-1 transmission among women undergoing elective cesarean delivery was significantly lower than that among similar women having either nonelective cesarean or vaginal delivery, regardless of whether they received ZDV. In an international randomized trial of mode of delivery, transmission was 1.8% among women randomized to elective cesarean delivery, many of whom received ZDV (141). Although the reduction in transmission after elective cesarean section versus vaginal delivery among women receiving ZDV in the randomized trial was similar to that seen in untreated women, it was not statistically significant. Additionally, in both studies, nonelective cesarean delivery (performed after onset of labor or rupture of membranes) was not associated with a significant decrease in transmission compared with vaginal delivery. After reviewing these data, the American College of Obstetricians and Gynecologists' (ACOG) Committee on Obstetric Practice issued a Committee Opinion on route of delivery recommending that scheduled cesarean delivery be considered for HIV-1-infected pregnant women with HIV-1 RNA levels >1,000 copies/mL near the time of delivery (142).
# BOX 2. Recommendations related to antiretroviral drug resistance and drug resistance testing for pregnant women with human immunodeficiency virus type 1 (HIV-1) infection

- All pregnant women should be offered highly active antiretroviral therapy to maximally suppress viral replication, reduce the risk of perinatal transmission, and minimize the risk of development of resistant virus.
- For women for whom combination antiretroviral therapy would be considered optional (HIV-1 RNA <1,000 copies/mL) and who wish to restrict their exposure to antiretroviral drugs during pregnancy, monotherapy with the three-part zidovudine (ZDV) prophylaxis regimen (or in selected circumstances, dual nucleosides) should be offered. In these circumstances, the development of resistance should be minimized by limited viral replication (assuming HIV-1 RNA levels remain low) and the time-limited exposure to ZDV. Monotherapy with ZDV does not suppress HIV-1 replication to undetectable levels in most cases; theoretically, such therapy might select for ZDV-resistant viral variants, potentially limiting future treatment options. These considerations should be discussed with the pregnant woman.
- Recommendations for resistance testing for HIV-1-infected pregnant women are the same as for nonpregnant patients: acute HIV-1 infection, virologic failure, suboptimal viral suppression after initiation of antiretroviral therapy, or high likelihood of exposure to resistant virus based on community prevalence or source characteristics.
- Women who have a history of presumed or documented ZDV resistance and are receiving antiretroviral regimens that do not include ZDV for their own health should still receive intravenous ZDV intrapartum and oral ZDV for their infants according to the PACTG 076 protocol whenever possible. A key mechanism by which ZDV reduces perinatal transmission is likely through pre- and postexposure prophylaxis of the infant, which may be less dependent on drug sensitivity than is reduction of viral replication. However, these women are not good candidates for ZDV alone.
- Optimal antiretroviral prophylaxis of the infant born to a woman with HIV-1 known to be resistant to ZDV or other agents should be determined in consultation with pediatric infectious disease specialists, taking into account resistance patterns, available drug formulations, and infant pharmacokinetic data, when available.
- If women receiving combination therapy require temporary discontinuation for any reason during pregnancy, all drugs should be stopped and reintroduced simultaneously to reduce the potential for emergence of resistance.
- Optimal adherence to antiretroviral medications is a key part of the strategy to reduce the development of resistance.
- Because the prevalence of drug-resistant virus is an evolving phenomenon, surveillance is needed to monitor the prevalence of drug-resistant virus in pregnant women over time and the risk of transmission of resistant viral strains.
# Transmission, Viral Load, and Combination Antiretroviral Therapy
The studies described previously report data from women not receiving combination antiretroviral therapy or undergoing routine viral load testing, and they do not differentiate in utero from intrapartum transmission. Whether cesarean delivery offers any benefit to the infants of women receiving highly active combination antiretroviral regimens who have low or undetectable maternal HIV-1 RNA levels is unknown. Studies evaluating vertical transmission rates according to maternal HIV-1 RNA copy number have used a variety of assays with different lower limits of detection, and transmission has been reported even when maternal HIV-1 RNA levels were below assay quantification (52,75,143,144). There does not appear to be a threshold of HIV-1 RNA below which lack of transmission can be assured. Nevertheless, on the basis of the upper limits of the 95% confidence intervals reported for transmission from women with undetectable viral load in late pregnancy, the highest plausible rates of transmission among such women are similar to the observed rates of vertical transmission among women who receive ZDV and undergo elective cesarean delivery. Transmission occurred only once in four studies involving 29, 32, 107, and 198 women with undetectable viral load (<500 copies/mL) late in pregnancy; 95% of these women were receiving at least ZDV, and almost half were receiving two or more antiretroviral agents (78,79,145,146). Scheduled cesarean delivery is unlikely to further reduce this low transmission rate among treated women with undetectable viral loads, nor would it prevent in utero transmission. Given the variability in quantification of HIV-1 RNA at low copy numbers, the variety of lower limits of quantification among tests, and the similarly low levels of perinatal transmission at levels <1,000 copies/mL, ACOG has chosen 1,000 copies/mL as the threshold above which to recommend scheduled cesarean delivery as an adjunct for prevention of transmission (142).
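The reasoning above rests on the upper limit of a confidence interval rather than the point estimate. As a hedged back-of-the-envelope illustration, pooling the four cited cohorts (1 transmission among 29 + 32 + 107 + 198 = 366 women, a pooling done here purely for arithmetic, not as a formal meta-analysis) gives an exact binomial upper 95% limit of roughly 1.5%, comparable to the 1%-2% transmission observed with ZDV plus elective cesarean delivery:

```python
# Exact (Clopper-Pearson) upper confidence limit for a proportion with very
# few observed events. The pooled count is illustrative only; the cited
# studies were not designed to be combined this way.

from scipy.stats import beta

def clopper_pearson_upper(events: int, n: int, alpha: float = 0.05) -> float:
    """Exact binomial upper confidence limit for a proportion."""
    if events == n:
        return 1.0
    return beta.ppf(1 - alpha / 2, events + 1, n - events)

print(f"{clopper_pearson_upper(1, 366):.3%}")  # roughly 1.5%
```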
Similarly low vertical transmission rates have been observed among limited numbers of women receiving combination antiretroviral therapy during pregnancy. Three small studies reported transmission from one (6.7%) of 15 women in one cohort and from none of 30 and none of 24 women in two others receiving two or more antiretroviral drugs in combination during pregnancy (21,88,133). Additional studies in abstract form reported no transmission among 153 women receiving highly active combination antiretroviral therapy, whereas others reported transmission rates of 1% (2/187) and 5.8% (3/52) among women receiving triple therapy including a protease inhibitor (147-149). Whether the low transmission rates with combination therapy are due to reduction in HIV-1 RNA to very low or undetectable levels or to some other mechanism (e.g., transplacental drug passage providing preexposure prophylaxis to the infant) is unknown because HIV-1 RNA levels were not reported. Thus, current data are insufficient to assess whether the effect of combination antiretroviral therapy on vertical transmission is independent of its effect on viral load. Therefore, scheduled cesarean delivery is recommended for women with HIV-1 RNA >1,000 copies/mL near the time of delivery, regardless of the type of antiretroviral therapy the woman is receiving.
# Maternal Risks by Mode of Delivery
Among women not infected with HIV-1, maternal morbidity and mortality are greater after cesarean than after vaginal delivery. Complications, especially postpartum infections, are approximately five to seven times more common after cesarean section performed after labor or membrane rupture than after vaginal delivery (150,151). Complications after scheduled cesarean delivery are more common than with vaginal delivery but less common than with urgent cesarean delivery (152-156). Factors that increase the risk of postoperative complications include low socioeconomic status, genital infections, obesity or malnutrition, smoking, and prolonged labor or membrane rupture.
In the European randomized trial of mode of delivery among HIV-1-infected pregnant women, no major complications occurred in either the cesarean or the vaginal delivery group (141). However, postpartum fever occurred in two (1.1%) of 183 women who delivered vaginally and 15 (6.7%) of 225 who delivered by cesarean section (p = 0.002). Substantial postpartum bleeding and anemia occurred at similar rates in the two groups. Among the 497 women enrolled in PACTG 185, only endometritis, wound infection, and pneumonia were increased among women delivered by scheduled or urgent cesarean section compared with vaginal delivery (157). Complication rates were within the range previously reported for similar general obstetric populations. Finally, an analysis of nearly 1,200 women enrolled in WITS demonstrated an increased rate of postpartum fever without a documented source of infection among women undergoing elective cesarean delivery compared with spontaneous vaginal delivery, but hemorrhage, severe anemia, endometritis, and urinary tract infections were not increased (158). In the latter two studies, cesarean deliveries before onset of labor and rupture of membranes were done for obstetric indications such as previous cesarean section or severe preeclampsia and not for prevention of HIV-1 transmission, possibly resulting in higher complication rates than might be observed for scheduled cesarean section performed solely to reduce perinatal transmission.
In a more recent study of a cohort of HIV-1-infected women that included a larger proportion of women undergoing scheduled cesarean delivery specifically for prevention of HIV-1 transmission, fever was increased after cesarean compared with vaginal delivery (159). In a multivariate analysis adjusted for maternal CD4+ count and antepartum hemorrhage, the relative risk of any postpartum complication was 1.85 (95% CI = 1.00-3.39) after elective cesarean delivery and 4.17 (95% CI = 2.32-7.49) after emergency cesarean delivery, compared with vaginal delivery.
Febrile morbidity was increased among women with low CD4+ counts, which was consistent with findings in previous studies (160,161).
Several case-control studies and a cohort study have reported complication rates among HIV-1-infected versus uninfected women undergoing cesarean delivery, usually on an urgent rather than scheduled basis (160-166). All but one study detected an increase in postpartum fever or antibiotic use among the HIV-1-infected women, although increases in specific infections such as endometritis, wound infection, or pneumonia were found in some but not all studies. Complication rates were inversely related to CD4+ count or clinical stage of HIV-1 disease. In the one study in which it was evaluated, antiretroviral therapy with ZDV was associated with a decreased rate of infectious complications, although the decrease was not statistically significant (odds ratio = 0.3; 95% CI = 0.07-1.3) (165).
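To make explicit why an odds ratio of 0.3 with a 95% CI of 0.07-1.3 is read as a decreased but not statistically significant rate, the sketch below derives an odds ratio and a Woolf confidence interval from a 2x2 table. The counts are invented for illustration; an interval that crosses 1 is the hallmark of nonsignificance.

```python
# Odds ratio and Woolf (log-based) 95% CI from a 2x2 table.
# The counts below are hypothetical, chosen only to yield an estimate in
# the same neighborhood as the one cited above.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = events/non-events among exposed; c,d = events/non-events among unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(3, 47, 10, 50))  # OR < 1 with a CI spanning 1
```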
In summary, data indicate that cesarean delivery is associated with a slightly greater risk of complications among HIV-1-infected women than observed among uninfected women, with the difference most notable among women with more advanced disease. Scheduled cesarean delivery for prevention of HIV-1 transmission poses a risk greater than that of vaginal delivery and less than that of urgent or emergent cesarean section. Complication rates in most studies were within the range reported in populations of HIV-1-uninfected women with similar risk factors and were not of sufficient frequency or severity to outweigh the potential benefit of reduced transmission among women at heightened risk of transmission. HIV-1-infected women should be counseled regarding the increased risks associated with cesarean delivery as well as the potential benefits based on their HIV-1 RNA levels and current antiretroviral therapy.
# Timing of Scheduled Cesarean Delivery
If the decision is made to perform a scheduled cesarean delivery to prevent HIV-1 transmission, ACOG recommends that it be done at 38 weeks' gestation, with gestational age determined from clinical and first- or second-trimester ultrasonographic estimates and amniocentesis avoided (142). For HIV-1-uninfected women, ACOG guidelines for scheduled cesarean delivery without confirmation of fetal lung maturity advise waiting until 39 completed weeks or the onset of labor to reduce the chance of neonatal complications (167). Cesarean delivery at 38 versus 39 weeks entails a small absolute, but substantially increased relative, risk of infant respiratory distress requiring mechanical ventilation (168,169). This increased risk must be balanced against the potential for labor or membrane rupture occurring before the woman reaches 39 weeks of gestation. Women should be informed of the potential risks and benefits to themselves and their infants in choosing the timing and mode of delivery.
# Intrapartum Management
For a scheduled cesarean delivery, intravenous ZDV should begin 3 hours before surgery, according to standard dosing recommendations (2). Other antiretroviral medications taken during pregnancy should not be interrupted near the time of delivery, regardless of route of delivery. Because maternal infectious morbidity is potentially increased, clinicians may opt to give perioperative antimicrobial prophylaxis. No controlled studies have evaluated the efficacy of antimicrobial prophylaxis specifically for HIV-1-infected women undergoing scheduled operative delivery (170).
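For illustration only, the intrapartum intravenous ZDV component can be laid out as a simple schedule. The doses encoded below (2 mg/kg loading over 1 hour, then 1 mg/kg/hour until cord clamping) reflect the widely published PACTG 076 regimen, but this sketch is not a dosing reference and should be confirmed against the current protocol; the function and its field names are hypothetical.

```python
# A minimal scheduling sketch for the intrapartum intravenous ZDV component
# referenced above. Assumed doses: 2 mg/kg loading over the first hour, then
# 1 mg/kg/hour until delivery, beginning 3 hours before scheduled surgery.
# Illustration only; not a dosing reference.

from datetime import datetime, timedelta

def zdv_infusion_plan(weight_kg: float, surgery_time: datetime) -> dict:
    start = surgery_time - timedelta(hours=3)  # begin 3 hours before surgery
    return {
        "infusion_start": start,
        "loading_dose_mg": 2.0 * weight_kg,             # over the first hour
        "maintenance_rate_mg_per_hr": 1.0 * weight_kg,  # until cord clamping
    }

plan = zdv_infusion_plan(70.0, datetime(2002, 11, 22, 8, 0))
print(plan)
```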
Unanswered questions remain regarding the most appropriate management of labor when vaginal delivery is attempted. Increasing duration of membrane rupture has been consistently demonstrated to be a risk factor for perinatal transmission among women not receiving any antiretroviral therapy (93,143,171,172). Among women receiving ZDV, some studies have shown an increased risk of transmission with membranes ruptured for 4 or more hours before delivery (9,79), but others have not (78,145). The additive risk and the critical duration of ruptured membranes for perinatal HIV-1 transmission in women with low viral loads and/or receiving combination antiretroviral therapy are unknown. Obstetric procedures that increase the risk of fetal exposure to maternal blood, such as amniocentesis and invasive monitoring, have been implicated in increasing vertical transmission rates by some but not all investigators (78,173-175). If labor is progressing and membranes are intact, artificial rupture of membranes and invasive monitoring should be avoided. These procedures should be used only when obstetrically indicated and when the anticipated duration of ruptured membranes or monitoring is short. If spontaneous rupture of membranes occurs before or early during labor, interventions to decrease the interval to delivery, such as administration of oxytocin (Pitocin), might be considered.
# Summary
Considerations related to counseling of the HIV-1-infected pregnant woman regarding risks for vertical transmission of HIV-1 to the fetus/neonate and to the obstetric care of such women include the following:
- Efforts to maximize the health of the pregnant woman, including the provision of highly active combination antiretroviral therapy, can be expected to correlate with both reduction in viral load and low rates of vertical transmission. At a minimum for the reduction of perinatal HIV-1 transmission, ZDV prophylaxis according to the PACTG 076 regimen is recommended unless the woman is intolerant of ZDV.
- Plasma HIV-1 RNA levels should be monitored during pregnancy according to the guidelines for management of HIV-1-infected adults. The most recently determined viral load value should be used when counseling a woman regarding mode of delivery.
- Perinatal HIV-1 transmission is reduced by scheduled cesarean section among women with unknown HIV-1 RNA levels who are not receiving antiretroviral therapy or receiving ZDV for prophylaxis of perinatal transmission. Plasma HIV-1 RNA levels were not available in these studies to assess the potential benefit among women with low plasma HIV-1 RNA levels.
- Women with HIV-1 RNA levels >1,000 copies/mL should be counseled regarding the benefit of scheduled cesarean delivery in reducing the risk of vertical transmission.
- Data are insufficient to evaluate the potential benefit of cesarean section for neonates of antiretroviral-treated women with plasma HIV-1 RNA levels below 1,000 copies/mL. Given the low rate of transmission among this group, it is unlikely that scheduled cesarean section would confer additional benefit in reduction of transmission.
- Management of women originally scheduled for cesarean delivery who present with ruptured membranes must be individualized based on duration of rupture, progress of labor, plasma HIV-1 RNA level, current antiretroviral therapy, and other clinical factors.
- Women should be informed of the risks associated with cesarean delivery, and these risks to the woman should be balanced with potential benefits expected for the neonate.
- Women should be counseled regarding the limitations of the current data. The woman's autonomy to make an informed decision regarding route of delivery should be respected and honored.
# Clinical Situations
The following recommendations are based on various hypothetical situations that may be encountered in clinical practice, with relevant considerations highlighted in the accompanying discussion sections. These recommendations are only guidelines, and flexibility should be exercised according to the patient's individual circumstances.
# HIV-1-infected women presenting in late pregnancy (after approximately 36 weeks of gestation) who are not receiving antiretroviral therapy and whose results for HIV-1 RNA level and lymphocyte subsets are pending but unlikely to be available before delivery
Recommendation. Therapy options should be discussed in detail. Antiretroviral therapy, including at least the PACTG 076 ZDV regimen, should be initiated. In counseling, the woman should be informed that scheduled cesarean section is likely to reduce the risk of transmission to her infant. She should also be informed of the increased risks to her of cesarean delivery, including increased rates of postoperative infection, anesthesia risks, and other surgical risks. If cesarean delivery is chosen, the procedure should be scheduled at 38 weeks of gestation, based on the best available clinical information. When scheduled cesarean section is performed, the woman should receive continuous intravenous ZDV infusion beginning 3 hours before surgery, and her infant should receive 6 weeks of ZDV therapy after birth. Options for continuing or initiating combination antiretroviral therapy after delivery should be discussed with the woman as soon as her viral load and lymphocyte subset results are available.
Discussion. Women in these circumstances are similar to the women enrolled in the European randomized trial and those evaluated in the meta-analysis (140,141). In both studies, the population not receiving antiretroviral therapy had a significant reduction in transmission with cesarean section done before labor or membrane rupture. HIV-1 RNA levels were not available in these studies. Without current therapy, the HIV-1 RNA level is unlikely to be <1,000 copies/mL. Even if combination therapy were begun immediately, reduction in plasma HIV-1 RNA to undetectable levels usually takes several weeks, depending on the starting RNA level. ZDV monotherapy could be begun, with subsequent antiretroviral therapy decisions made after delivery based on the HIV-1 RNA level, CD4+ count, and the woman's preference regarding initiation of long-term combination therapy. Scheduled cesarean section and the three-part PACTG 076 ZDV regimen offer the best chance of preventing perinatal HIV-1 transmission in this setting.
# HIV-1-infected women who began prenatal care early in the third trimester, are receiving highly active combination antiretroviral therapy, and have an initial virologic response but have HIV-1 RNA levels that remain substantially over 1,000 copies/mL at 36 weeks of gestation

Recommendation. The current combination antiretroviral regimen should be continued because the HIV-1 RNA level is declining appropriately. The woman should be informed that although her HIV-1 RNA level is responding to the antiretroviral therapy, it is unlikely to decline to <1,000 copies/mL before delivery. Therefore, scheduled cesarean delivery may provide additional benefit in preventing intrapartum transmission of HIV-1. She should also be informed of the increased risks to her of cesarean delivery, including increased rates of postoperative infection, anesthesia risks, and surgical risks. If she chooses scheduled cesarean section, it should be performed at 38 weeks' gestation, and intravenous ZDV should be begun at least 3 hours before surgery. Other antiretroviral medications should be continued on schedule as much as possible before and after surgery. The infant should receive oral ZDV for 6 weeks after birth. The importance of adhering to therapy after delivery for her own health should be emphasized.
Discussion. In cohorts of women receiving ZDV therapy with low rates of scheduled cesarean delivery, current data indicate that the rate of vertical transmission of HIV-1 is 1%-12% (mean 5.7%) when HIV-1 RNA levels near delivery are 1,000-10,000 copies/mL and 9%-29% (mean 12.6%) when HIV-1 RNA levels are >10,000 copies/mL (52,62,74,78,79,145). Although current combination antiretroviral regimens may be expected to suppress HIV-1 RNA to undetectable levels with continued use, HIV-1 RNA is likely to remain detectable at the expected time of delivery. Scheduled cesarean delivery might further reduce the rate of intrapartum HIV-1 transmission and should be recommended to women with HIV-1 RNA levels >1,000 copies/mL. Although several studies have suggested low rates of vertical transmission among pregnant women receiving combination antiretroviral therapy, each has included limited numbers of women and has not adjusted for maternal HIV-1 RNA levels (61,88,133,148). Thus, it is not clear whether the effect on transmission is related to lowering of maternal plasma HIV-1 RNA levels, preexposure prophylaxis of the infant, other mechanisms, or some combination. Until further data are available, women with HIV-1 RNA levels >1,000 copies/mL should be offered scheduled cesarean delivery regardless of maternal therapy.
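The viral-load strata quoted above can be arranged as a simple lookup, shown below. The ranges and means are those reported in the cited ZDV-era cohorts; the cutoffs and function are only a sketch of how such data might inform counseling, not a validated clinical decision rule.

```python
# The transmission figures quoted above, arranged as a simple lookup for
# illustration. This function and its cutoffs are hypothetical; the ranges
# and means are those reported in the cited ZDV-era cohorts.

def reported_transmission_range(hiv_rna_copies_per_ml: float) -> str:
    if hiv_rna_copies_per_ml > 10_000:
        return "9%-29% (mean 12.6%) in the cited ZDV-treated cohorts"
    if hiv_rna_copies_per_ml >= 1_000:
        return "1%-12% (mean 5.7%) in the cited ZDV-treated cohorts"
    return "lower; scheduled cesarean benefit not established below 1,000 copies/mL"

print(reported_transmission_range(25_000))
```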
Regardless of mode of delivery, the woman should receive the PACTG 076 intravenous ZDV regimen intrapartum, and the infant should receive ZDV for 6 weeks after birth. Other maternal drugs should be continued on schedule as much as possible to provide maximal effect and minimize the chance of development of viral resistance. Oral medications may be continued preoperatively with sips of water. Medications requiring food ingestion for absorption could be taken with liquid dietary supplements, but consultation with the attending anesthesiologist should be obtained before administering in the preoperative period. If maternal antiretroviral therapy must be interrupted temporarily in the peripartum period, all drugs (except for intrapartum intravenous ZDV) should be stopped and reinstituted simultaneously to minimize the chance of resistance developing.
Women with CD4+ counts <350 cells/µL or HIV-1 RNA levels >55,000 copies/mL before initiation of combination therapy during pregnancy are most likely to benefit from continued antiretroviral therapy after delivery (14). Discussion regarding plans for antiretroviral therapy after delivery should be initiated during pregnancy. If the woman elects to continue therapy after delivery, the importance of continued adherence despite the increased responsibilities of newborn care should be emphasized, and any available support should be provided.
# HIV-1-infected women receiving highly active combination antiretroviral therapy who have an undetectable HIV-1 RNA level at 36 weeks of gestation.
Recommendation. The woman should be informed that her risk of perinatal transmission of HIV-1 with a persistently undetectable HIV-1 RNA level is low, probably 2% or less, even with vaginal delivery. No information is currently available on which to determine whether performing a scheduled cesarean delivery will lower her risk further. Cesarean delivery has an increased risk of complications for the woman compared with vaginal delivery, and these risks must be balanced against the uncertain benefit of cesarean delivery in this case.
Discussion. Scheduled cesarean delivery has been beneficial for women either receiving no antiretroviral therapy or receiving ZDV monotherapy, with rates of transmission of HIV-1 of approximately 1%-2% (140,141). Maternal HIV-1 RNA levels were not evaluated in these studies. Similar rates of transmission have been reported among women receiving antiretroviral therapy with HIV-1 RNA levels undetectable near delivery (78,79,146). No data are available evaluating transmission rates by mode of delivery among women with undetectable HIV-1 RNA levels. Although cesarean delivery may provide some benefit in reducing transmission, any benefit would be of small magnitude given the low risk of transmission with vaginal delivery among women with HIV-1 RNA levels <1,000 copies/mL who are receiving antiretroviral therapy. Any benefit must be weighed against the known increased risks to the woman of cesarean section compared with vaginal delivery, i.e., a severalfold increased risk of postpartum infections (including uterine infections and pneumonia), anesthesia risks, and surgical complications. However, because no data indicate a lack of benefit, if a woman chooses a scheduled cesarean delivery, her decision should be respected and the procedure scheduled.
If vaginal delivery is chosen, the duration of ruptured membranes should be minimized, because the transmission rate has been shown to increase with longer duration of membrane rupture among predominantly untreated women (143,171,172) and among ZDV-treated women in some (9,79) but not all studies (78,145). Fetal scalp electrodes and operative delivery with forceps or the vacuum extractor may increase the risk of transmission and should be avoided (173,174). Intravenous ZDV should be given during labor, and other maternal drugs should be continued on schedule as much as possible to provide maximal effect and minimize the chance of development of viral resistance. The infant should be treated with ZDV for 6 weeks after birth.
# HIV-1-infected women who have elected scheduled cesarean section but present in early labor or shortly after rupture of membranes.
Recommendation. Intravenous ZDV should be started immediately, because the woman is in labor or has ruptured membranes. If labor is progressing rapidly, the woman should be allowed to deliver vaginally. If cervical dilatation is minimal and a long period of labor is anticipated, the clinician may administer the loading dose of intravenous ZDV and proceed with cesarean section to minimize the duration of membrane rupture and avoid vaginal delivery. Alternatively, the clinician might begin oxytocin augmentation to enhance contractions and potentially expedite delivery. If the woman is allowed to labor, scalp electrodes, other invasive monitoring, and operative delivery should be avoided if possible. The infant should be treated with 6 weeks of ZDV therapy after birth.
Discussion. No data are available to address the question of whether performing cesarean section soon after membrane rupture to shorten labor and avoid vaginal delivery decreases the risk of vertical transmission of HIV-1. Most studies have shown the risk of transmission with cesarean section done after labor and membrane rupture for obstetric indications to be similar to that with vaginal delivery, although the duration of ruptured membranes in these women was often longer than 4 hours (141,176). When an effect was demonstrated, the risk of transmission was twice as high among women with ruptured membranes for >4 hours before delivery compared with those with shorter duration of membrane rupture, although the risk increased continuously with increasing duration of rupture (See Situation 3).
If elective cesarean delivery had been planned and the woman presents with a short duration of ruptured membranes or labor, she should be informed that the benefit of cesarean section under these circumstances is unclear and be allowed to reassess her decision. If the woman presents after 4 hours of membrane rupture, cesarean section is less likely to affect transmission of HIV-1. The woman should be informed that the benefit of cesarean section is unclear and that her risks of perioperative infection increase with increasing duration of ruptured membranes.
If cesarean delivery is chosen, the loading dose of ZDV should be administered while preparations are made for surgery, and the infusion should be continued until cord clamping. Prophylactic antibiotics given after cord clamping have been shown to reduce the rate of postpartum infection among women of unknown HIV-1 status undergoing cesarean section after labor or rupture of membranes and should be used routinely in this setting (170). If vaginal delivery is chosen, intravenous ZDV and the other antiretroviral agents the woman is currently taking should be administered, and invasive procedures such as internal monitoring should be avoided. Oxytocin should be used as needed to expedite delivery.
# Recommendations for Monitoring of Women and Their Infants

# Pregnant Woman and Fetus
HIV-1-infected pregnant women should be monitored according to the same standards as HIV-1-infected persons who are not pregnant. This monitoring should include measurement of CD4+ counts and HIV-1 RNA levels approximately every trimester (i.e., every 3 to 4 months) to determine the need for antiretroviral therapy for maternal HIV-1 disease, whether such therapy should be altered, and whether prophylaxis against Pneumocystis carinii pneumonia should be initiated.
Changes in absolute CD4+ count during pregnancy may reflect the physiologic effects of pregnancy on hemodynamic parameters and blood volume rather than a long-term influence of pregnancy on CD4+ count; the CD4+ percentage is likely more stable and might be a more accurate reflection of immune status during pregnancy (177,178). Long-range plans should be developed with the woman regarding continuity of medical care and antiretroviral therapy for her own health after the birth of her infant.
Monitoring for potential complications of administration of antiretroviral agents during pregnancy should be based on what is known about the side effects of the drugs the woman is receiving. For example, routine hematologic and liver enzyme monitoring is recommended for women receiving ZDV, and women receiving protease inhibitors should be monitored for the development of hyperglycemia. Because combination antiretroviral regimens have been used less extensively during pregnancy, more intensive monitoring may be warranted for women receiving drugs other than or in addition to ZDV.
Antepartum fetal monitoring for women who receive only ZDV chemoprophylaxis should be performed as clinically indicated because data do not indicate that ZDV use in pregnancy is associated with increased risk for fetal complications.
Less is known about the effects of combination antiretroviral therapy on the fetus during pregnancy. Thus, more intensive fetal monitoring should be considered for mothers receiving such therapy, including assessment of fetal anatomy with a level II ultrasound and continued assessment of fetal growth and well-being during the third trimester.
# Neonate
A complete blood count and differential should be performed on the newborn as a baseline evaluation before administration of ZDV. Anemia has been the primary complication of the 6-week ZDV regimen in the neonate; thus, repeat measurement of hemoglobin is required at a minimum after the completion of the 6-week ZDV regimen. If abnormal, repeat measurement should be performed at age 12 weeks, by which time any ZDV-related hematologic toxicity should be resolved. Infants who have anemia at birth or who are born prematurely warrant more intensive monitoring.
Data are limited concerning potential toxicities in infants whose mothers have received combination antiretroviral therapy. More intensive monitoring of hematologic and serum chemistry measurements during the first few weeks of life is advised for these infants. However, it should be noted that the clinical relevance of lactate levels in the neonatal period to assess potential for mitochondrial toxicity has not been adequately evaluated.
To prevent P. carinii pneumonia, all infants born to women with HIV-1 infection should begin prophylaxis at age 6 weeks, after completion of the ZDV prophylaxis regimen (179). Monitoring and diagnostic evaluation of HIV-1-exposed infants should follow current standards of care (180). Data do not indicate any delay in HIV-1 diagnosis in infants who have received the ZDV regimen (1,181). However, the effect of combination antiretroviral therapy in the mother or newborn on the sensitivity of infant virologic diagnostic testing is unknown. Infants with negative virologic test results during the first 6 weeks of life should have diagnostic evaluation repeated after completion of the neonatal antiretroviral prophylaxis regimen.
# Postpartum Follow-Up of Women
Comprehensive care and support services are important for women with HIV-1 infection and their families. Components of comprehensive care include the following medical and supportive care services:
- Primary, obstetric, pediatric, and HIV-1 specialty care,
- Family planning services,
- Mental health services,
- Substance-abuse treatment, and
- Coordination of care through case management for the woman, her children, and other family members.

Support services include case management, child care, respite care, assistance with basic life needs (e.g., housing, food, and transportation), and legal and advocacy services. This care should begin before pregnancy and should be continued throughout pregnancy and postpartum.
Maternal medical services during the postpartum period must be coordinated between obstetric care providers and HIV-1 specialists. Continuity of antiretroviral treatment, when such treatment is required for the woman's HIV-1 infection, is especially critical and must be ensured. Concerns have been raised about adherence to antiretroviral regimens during the postpartum period. Women should be counseled that the physical changes of the postpartum period, as well as the stresses and demands of caring for a new baby, can make adherence more difficult, and that additional support may be needed to maintain good adherence to their therapeutic antiretroviral regimen during this period (182,183). The health-care provider should be vigilant for signs of depression, which may require assessment and treatment and which may interfere with adherence. Poor adherence has been shown to be associated with virologic failure, development of resistance, and decreased long-term effectiveness of antiretroviral therapy (184-189). Efforts to maintain good adherence during the postpartum period might prolong the effectiveness of therapy (14).
All women should receive comprehensive health-care services that continue after pregnancy for their own medical care and for assistance with family planning and contraception. In addition, this is a good time to review immunization status and update vaccines, assess the need for prophylaxis against opportunistic infections, and reemphasize safer sex practices.
Data from PACTG 076 and 288 do not indicate adverse effects through 4 years postpartum among women who received ZDV during pregnancy (47,132). Women who have received only ZDV chemoprophylaxis during pregnancy should receive appropriate evaluation to determine the need for antiretroviral therapy during the postpartum period.
# Long-Term Follow-Up of Infants
Data remain insufficient to address the effect that exposure to ZDV or other antiretroviral agents in utero might have on long-term risk for neoplasia or organ-system toxicities in children. Data from follow-up of PACTG 076 infants through age 6 years do not indicate any differences in immunologic, neurologic, or growth parameters between infants who were exposed to the ZDV regimen and those who received placebo, and no malignancies have been seen (58,59). Continued evaluation of early and late effects of in utero antiretroviral exposure is ongoing through several mechanisms, including a long-term follow-up study in the Pediatric AIDS Clinical Trials Group (PACTG 219C), natural history studies, and HIV/AIDS surveillance conducted by state health departments and CDC. Because most of the available follow-up data relate to in utero exposure to antenatal ZDV alone and most pregnant women with HIV-1 infection currently receive combination therapy, it is critical that studies to evaluate potential adverse effects of in utero drug exposure continue to be supported.
Innovative methods are needed to provide follow-up of infants with in utero exposure to antiretroviral drugs. Information regarding such exposure should be part of the ongoing permanent medical record of the child, particularly for uninfected children. Children with in utero antiretroviral exposure who develop significant organ system abnormalities of unknown etiology, particularly of the nervous system or heart, should be evaluated for potential mitochondrial dysfunction (46). Follow-up of children with antiretroviral exposure should continue into adulthood because of the theoretical concerns regarding potential for carcinogenicity of the nucleoside analog antiretroviral drugs. Long-term follow-up should include yearly physical examinations of all children exposed to antiretroviral drugs and, for adolescent females, gynecologic evaluation with Pap smears.
HIV-1 surveillance databases from states that require HIV-1 reporting provide an opportunity to collect population-based information concerning in utero antiretroviral exposure. To the extent permitted by federal law and regulations, data from these confidential registries can be used to compare with information from birth defect and cancer registries to identify potential adverse outcomes.
# Clinical Research Needs
The following clinical research needs are relevant to the United States and other developed countries. Study findings continue to evolve rapidly, and research needs and clinical practice will require continued reassessment over time. The current guidelines do not attempt to address the complex research needs or antiretroviral prophylaxis recommendations for resource-limited international settings.
# Evaluation of Drug Safety and Pharmacokinetics
Many pregnant women with HIV-1 infection in the United States are receiving combination antiretroviral therapy for their own health care along with standard ZDV prophylaxis to reduce perinatal HIV-1 transmission. Additionally, recent data indicate that antenatal use of potent antiretroviral combinations capable of reducing plasma HIV-1 RNA copy number to very low or undetectable levels near the time of delivery may lower the risk of perinatal transmission to <2% (61,90). While the number of antiretroviral agents and combination regimens used for treatment of infected persons is increasing rapidly, the number of drugs evaluated in pregnant women remains limited.
Preclinical evaluations of antiretroviral drugs for potential pregnancy- and fetal-related toxicities need to be completed for all existing and new antiretroviral drugs. More data are needed regarding the safety and pharmacokinetics of antiretroviral drugs in pregnant women and their neonates, particularly when the drugs are used in combination regimens. Further research is also needed on whether the effects of intensive combination treatment on viral load differ among body compartments, such as plasma and genital tract secretions, and how this may relate to the risk of perinatal transmission.
Continued careful assessment of potential short- and long-term consequences of antiretroviral drug use during pregnancy, for both the woman and her child, is important. Consequences of particular concern include mitochondrial dysfunction; hepatic, hematologic, and other potential end-organ toxicities; development of antiretroviral drug resistance; and adverse effects on pregnancy outcome. Because the late consequences of in utero antiretroviral exposure for the child are unknown, innovative methods need to be developed to detect possible rare late toxicities of transient perinatal antiretroviral drug exposure that may not be observed until later in childhood or in adolescence or adulthood.
# Assessment of Drug Resistance
The risk of emerging drug resistance during pregnancy or the postpartum period requires further study. The administration of ZDV as a single drug for prophylaxis of transmission may increase the incidence of ZDV resistance mutations in women whose viral replication is not maximally suppressed. Administration of drugs such as nevirapine and 3TC, for which a single point mutation can confer genotypic resistance, to pregnant women with inadequate viral suppression may result in the development of virus with genotypic drug resistance in a substantial proportion of the women (27,136,137). The clinical consequences of emergence of genotypic resistance during pregnancy or in the postpartum period, with respect to the risk of transmission of resistant virus and future treatment options, require further assessment.
# Optimizing Adherence
The complexity of combination antiretroviral regimens as well as drugs for prophylaxis against opportunistic infections often leads to poor adherence among HIV-1-infected persons. Innovative approaches are needed to improve adherence for women with HIV-1 infection during and following pregnancy and to ensure that infants receive ZDV prophylaxis.
# Role of Cesarean Delivery Among Women with Nondetectable Viral Load or with Short Duration of Ruptured Membranes
Elective cesarean delivery has increased among women with HIV-1 infection since the demonstration that delivery before labor and membrane rupture can reduce intrapartum HIV-1 transmission (140,141,190). Further study is needed regarding whether elective cesarean delivery provides clinically significant benefit to infected women with low or undetectable viral load who are receiving combination antiretroviral therapy, and also regarding the maternal and infant morbidity and mortality associated with operative delivery. Additionally, data from a meta-analysis by the International Perinatal HIV-1 Group indicate that the risk of perinatal transmission increases by 2% for every 1-hour increase in duration of membrane rupture in infected women with <24 hours of membrane rupture (191). Therefore, further study is also needed to evaluate the role of nonelective cesarean delivery in reducing perinatal transmission in women with very short duration of ruptured membranes and/or labor.
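As a worked illustration of the meta-analysis figure above, the sketch below treats the 2% increase as a multiplicative change in the odds of transmission per hour of membrane rupture. Both the model form and the baseline risk are assumptions made for illustration; neither is a parameter reported by the cited study.

```python
# Illustrative model: +2% odds of transmission per hour of ruptured
# membranes (an assumed multiplicative form). The 8% baseline risk is
# invented for the example, not taken from the cited meta-analysis.

def risk_after_rupture(baseline_risk: float, hours: float) -> float:
    odds = baseline_risk / (1 - baseline_risk)
    odds *= 1.02 ** hours          # +2% odds per hour of rupture
    return odds / (1 + odds)

for h in (0, 4, 12, 24):
    print(f"{h:>2} h ruptured: {risk_after_rupture(0.08, h):.1%}")
```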
# Management of Women with Premature Rupture of Membranes
With evidence that increasing duration of membrane rupture is associated with an increasing transmission risk (191), more study is needed to determine the appropriate management of pregnant women with HIV-1 infection who present with ruptured membranes at different points in gestation.
# Offering Rapid Testing at Delivery to Late-Presenting Women
Women who have not received antenatal care and were not offered HIV-1 counseling and testing remain at high risk for transmitting HIV-1 to their infants. The feasibility of offering counseling and rapid HIV-1 testing to women of unknown HIV-1 status who present in labor requires further study. Additionally, the efficacy and acceptability of intrapartum/postpartum or postpartum infant interventions to reduce the risk of intrapartum transmission by women first identified as infected with HIV-1 during delivery need to be assessed.
# ACCREDITATION

# Continuing Medical Education (CME).
CDC is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 2.5 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.
# Continuing Education Unit (CEU).

CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training and awards 0.25 Continuing Education Units (CEUs).
# Continuing Nursing Education (CNE).
This activity for 3.0 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.

A. The recommended gestational age to perform an elective (scheduled) cesarean delivery for prevention of mother-to-child HIV-1 transmission is 38 weeks' gestation.
B. The risk of perinatal HIV-1 transmission has been shown to be reduced by scheduled cesarean section before labor for women with viral loads <1,000 copies/mL.
C. When scheduled cesarean delivery is to be performed, the intravenous infusion of zidovudine (ZDV) prophylaxis should be initiated 3 hours before surgery.
D. Women should make their own decisions regarding mode of delivery after discussing the known potential risks and benefits with their provider.
2. An initial clinical assessment of an HIV-1-infected pregnant woman should include. . .
A. an evaluation of the degree of existing immunodeficiency determined by CD4+ count.
B. history of prior or current antiretroviral therapy.
C. gestational age.
D. risk for disease progression as determined by the level of plasma HIV-1 RNA.
E. supportive care needs.
F. all of the above.
3. Which of the following statements represent known complications of antiretroviral therapy in HIV-1-infected women?
A. Combination antiretroviral therapy during pregnancy is associated with adverse pregnancy outcome, including preterm delivery and low birth weight.
B. Hyperglycemia has been observed with protease inhibitor treatment.
C. Severe lactic acidosis in late pregnancy has been observed in HIV-1-infected women who received d4T-ddI (stavudine-didanosine) as part of a combination antiretroviral regimen during pregnancy.
D. B and C only.
E. All of the above.
4. The mechanism(s) by which antiretroviral prophylaxis reduces mother-to-child transmission is. . .
A. reducing maternal antenatal plasma HIV-1 viral load.
B. transplacental passage of antiretroviral agents to the fetus before passage through the birth canal, providing preexposure prophylaxis of the fetus.
C. provision of antiretroviral drugs to the infant after birth, providing postexposure prophylaxis of the infant against virus or infected cells that may have entered the infant's circulation during delivery.
D. all of the above.
5. When prescribing antiretroviral therapy for a pregnant woman, a clinician should. . .
A. aim to decrease antenatal plasma HIV-1 viral load to undetectable or <1,000 copies/ml.
B. consider the short- and long-term effects of the antiretroviral drugs on the pregnant woman and on the fetus/infant.
C. consider potential changes in antiretroviral drug dosing requirements caused by pregnancy.
D. discuss with the woman her ability to adhere to the considered regimen.
E. all of the above.
# Goal and Objectives
This MMWR provides recommendations regarding the treatment of pregnant women in the United States with human immunodeficiency virus type 1 (HIV-1) infection for both maternal health and the prevention of perinatal HIV-1 transmission. These recommendations were prepared by the U.S. Public Health Service Perinatal HIV Guidelines Working Group, which consists of public health, obstetric, and pediatric specialists and women infected with HIV-1. The goal of this report is to provide evidence-based guidance to public- and private-sector policy makers and clinical providers on antiretroviral treatment and obstetric management during pregnancy. Upon completion of this continuing education activity, the reader should be able to 1) describe the recommended antiretroviral regimens for women presenting to the health-care provider with a variety of clinical histories, 2) identify potential toxicities, 3) describe obstetric interventions to help reduce perinatal HIV-1 transmission, and 4) describe the information that pregnant women should receive when making decisions about antiretroviral therapy during pregnancy.
To receive continuing education credit, please answer all of the following questions.
6. Antiretroviral drugs that should be avoided in antiretroviral drug regimens for HIV-1-infected pregnant women or for HIV-1-infected women intending to become pregnant are. . .
"id": "0717b3d55365abffb2e93c5faaa342a98a1cf93b",
"source": "cdc",
"title": "None",
"url": "None"
} |
Please note: An erratum has been published for this issue.

CDC, our planners, and our content experts wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use. CDC does not accept commercial support.
# Updated CDC Recommendations for the Management of Hepatitis B Virus-Infected Health-Care Providers and Students
Prepared by Scott D. Holmberg, MD Anil Suryaprasad, MD John W. Ward, MD Division of Viral Hepatitis, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
# Summary
This report updates the 1991 CDC recommendations for the management of hepatitis B virus (HBV)-infected health-care providers and students to reduce risk for transmitting HBV to patients during the conduct of exposure-prone invasive procedures (CDC. Recommendations for preventing transmission of human immunodeficiency virus and hepatitis B virus to patients during exposure-prone invasive procedures. MMWR 1991). This update reflects changes in the epidemiology of HBV infection in the United States and advances in the medical management of chronic HBV infection and policy directives issued by health authorities since 1991.
The primary goal of this report is to promote patient safety while providing risk management and practice guidance to HBV-infected health-care providers and students, particularly those performing exposure-prone procedures such as certain types of surgery. Because percutaneous injuries sustained by health-care personnel during certain surgical, obstetrical, and dental procedures provide a potential route of HBV transmission to patients as well as providers, this report emphasizes prevention of operator injuries and blood exposures during exposure-prone surgical, obstetrical, and dental procedures.
These updated recommendations reaffirm the 1991 CDC recommendation that HBV infection alone should not disqualify infected persons from the practice or study of surgery, dentistry, medicine, or allied health fields. The previous recommendations have been updated to include the following changes: no prenotification of patients of a health-care provider's or student's HBV status; use of HBV DNA serum levels rather than hepatitis B e-antigen status to monitor infectivity; and, for those health-care professionals requiring oversight, specific suggestions for composition of expert review panels and threshold value of serum HBV DNA considered "safe" for practice (<1,000 IU/ml). These recommendations also explicitly address the issue of medical and dental students who are discovered to have chronic HBV infection. For most chronically HBV-infected providers and students who conform to current standards for infection control, HBV infection status alone does not require any curtailing of their practices or supervised learning experiences. These updated recommendations outline the criteria for safe clinical practice of HBV-infected providers and students that can be used by the appropriate occupational or student health authorities to develop their own institutional policies. These recommendations also can be used by an institutional expert panel that monitors providers who perform exposure-prone procedures.
# Introduction

In 1991, CDC published recommendations to prevent transmission of bloodborne viruses from infected health-care providers to patients while conducting exposure-prone invasive procedures (1). These recommendations did not prohibit the continued practice of invasive surgical techniques by HBV-infected surgeons, dentists, and others, provided that the nature of their illnesses and their practices are reviewed and overseen by expert review panels. Essential elements of the 1991 CDC recommendations relevant to HBV included that 1) there be no restriction of activities for any health-care provider who does not perform invasive (exposure-prone) procedures; 2) exposure-prone procedures should be defined by the medical/surgical/dental organizations and institutions at which the procedures are performed; 3) providers who perform exposure-prone procedures and who do not have serologic evidence of immunity to HBV from vaccination should know their HBsAg status and, if that is positive, also should know their hepatitis B e-antigen (HBeAg) status; and 4) providers who are infected with HBV (and are HBeAg-positive) should seek counsel from and perform procedures under the guidance of an expert review panel (1).

The 1991 recommendations also recommended that an HBV-infected health-care provider who performed exposure-prone procedures, broadly defined, should notify patients in advance regarding the provider's seropositivity. However, scientific data and clinical experience accumulated since 1991 demonstrate that the risk for HBV and other bloodborne virus transmission from providers in health-care settings is extremely low. In addition, improvements in infection control practices put into effect since 1991 have enhanced both health-care provider and patient protection from exposure to blood and bloodborne viruses in health-care settings.
This report is intended to guide the practices of chronically HBV-infected providers and students and the institutions that employ, oversee, or train them; it does not address those with acute HBV infection. This report is limited to the provider-to-patient transmission of HBV; it does not address infection control measures to prevent bloodborne transmission of HBV to patients through receipt of human blood products, organs, or tissues because these measures have been described elsewhere (2). Nor does this report provide comprehensive guidance about prevention of patient-to-health-care provider bloodborne pathogen transmission because this guidance also has been published previously (3,4). On the basis of a thorough literature review, reports of providers who experienced curtailed scope of practice, and expert consultation, CDC considered the following issues when developing these recommendations: 1) very rare or, for most types of clinical practice, no detected transmission of HBV from providers to patients; 2) nationally decreasing trends in the incidence of acute HBV infection in both the general population and health-care providers; 3) successful implementation and efficacy of policies promoting hepatitis B vaccination; 4) evolving and improving therapies for HBV infection; 5) guidelines in the United States and other developed countries that propose expert-based approaches to the risk management of infected health-care providers; 6) the adoption of Standard Precautions (formerly known as universal precautions) as a primary prevention intervention for the protection of patients and providers from infectious agent transmission; 7) the implementation of improved work practice and engineering controls, including safety devices; 8) the testing and vaccination of providers; 9) increasing availability of HBV viral load testing; and 10) instances of restrictions or prohibitions for HBV-infected providers and students that are not consistent with CDC and other previous recommendations.
# Methods
To update recommendations for the risk management of HBV-infected health-care providers and students, CDC considered data that have become available since the 1991 recommendations were published. Information reviewed was obtained through literature searches both by standard search engines (PubMed) and of other literature reviews used in guidelines developed by other professional organizations since 1991. Search terms used included "hepatitis B," "hepatitis B virus," or "HBV" with "healthcare," "health-care," "healthcare workers" or "providers" or "personnel"; "nosocomial" or "healthcare transmission"; and "healthcare worker-to-patient." However, these searches did not identify additional cases beyond the few already known to CDC and the experts consulted. To gather data on HBV transmission, CDC reviewed all hepatitis B outbreak investigations conducted by CDC and state officials since 1991. CDC national hepatitis surveillance data were examined for reports of acute HBV infection in persons with information about recent health care, as well as reports received regarding dismissal of HBV-infected health-care providers (i.e., surgeons) or prohibition from matriculation of medical, dental, and osteopathic students identified as HBV-infected after acceptance (see Actions Taken Against HBV-Infected Health Care Providers and Students).
Medical, dental, infection control, public health, infectious disease, and hepatology experts, officials, and representatives from government, academia, the public, organizations representing medical, dental and osteopathic colleges, and professional medical organizations were consulted. Some were consulted at an initial meeting on June 4, 2011. All experts and organizations were provided draft copies of these recommendations as they were developed, and they provided insights, information, suggestions, and edits. In finalizing these recommendations, CDC considered all available information, including expert opinion, results of the literature review, findings of outbreak investigations, surveillance data, and reports of adverse actions taken against HBV-infected surgeons and students. These recommendations rest largely on expert opinion because 1) documented cases of confirmed transmission of HBV from health-care providers to patients are rare (up to eight cases from one surgeon in the United States since 1994), 2) it has not been possible to conduct case-control or cohort studies that estimate the rate of such rare events, and 3) data are insufficient to quantify the strength of evidence or enable the grading of a recommendation (5).
# Major Trends in HBV Transmission in Health-Care Settings
Nonetheless, CDC and state authorities have been able to detect instances of patient-to-patient transfer of HBV (and HCV) from unsafe injection and dialysis practices, sharing of blood-glucose monitoring equipment, and other unsanitary practices and techniques (6). One report from an oral surgery practice documented patient-to-patient HBV transmission, although a retrospective assessment did not identify inappropriate procedures (7). However, despite detecting patient-to-patient transmission, there is only one published report of health-care provider-to-patient transmission of HBV during exposure-prone procedures in the United States since 1994 (8). In that case, an orthopedic surgeon who was unaware of his HBV status and who had a very high level of HBV DNA (viral load >17 million IU/ml) (9) transmitted HBV to between two and eight patients during August 2008-May 2009 (10).
An international review of HBV health-care provider-to-patient transmissions in other countries in which the HBV DNA levels (viral load) of the providers were measured has determined that 4 × 10⁴ genome equivalents per ml (GE/ml) (roughly comparable to 8,000 international units [IU]/ml) was the lowest level of HBV DNA in any of several surgeons implicated in transmission of HBV to patients between 1992 and 2008 (9-15; Table 1). This lowest measurement was taken >3 months after the suspected transmission event, so the relevance of the HBV DNA viral load to transmissibility is unclear. In general, those surgeons who transmitted HBV to patients appear to have had HBV DNA viral loads well above 10⁵ GE/ml (or above 20,000 IU/ml) at the earliest time that viral load was tested after transmission (Table 1). However, the few studies conducted in nonhuman primates have reported different results regarding the correlation between HBV DNA levels in blood and infectivity. One study found a correlation (16), but another did not (17).
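Because the report quotes viral loads in both genome equivalents (GE/ml) and international units (IU/ml), a small conversion sketch may help. The factor of 5 GE per IU below is an assumption inferred from the approximate equivalences quoted in this report (e.g., 4 × 10⁴ GE/ml ≈ 8,000 IU/ml); real conversion factors are assay-dependent, as the report itself notes.

```python
# Sketch: converting HBV DNA levels between GE/ml and IU/ml.
# ASSUMPTION: ~5 GE per IU, inferred from the approximate equivalences
# quoted in this report; the true factor varies by assay, so this is
# illustrative only.

GE_PER_IU = 5.0  # assumed, assay-dependent

def ge_to_iu(ge_per_ml: float) -> float:
    """Convert genome equivalents/ml to international units/ml."""
    return ge_per_ml / GE_PER_IU

def iu_to_ge(iu_per_ml: float) -> float:
    """Convert international units/ml to genome equivalents/ml."""
    return iu_per_ml * GE_PER_IU

# Reproduces the report's approximate equivalences:
assert ge_to_iu(4e4) == 8_000   # lowest level in an implicated surgeon
assert ge_to_iu(1e5) == 20_000  # typical level in transmitting surgeons
```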
In addition to the rarity of surgery-related transmission of HBV since 1994 (one reported instance), the most recent case of HBV transmission from a U.S. dental health-care provider to patients was reported in 1987 (18,19). Since this event, certain infection control measures are thought to have contributed to the absence of detected transmissions; such measures include widespread vaccination of dental health-care professionals, universal glove use, and adherence to the tenets of the 1991 Occupational Safety and Health Administration (OSHA) Bloodborne Pathogens Standard (20). Since 1991, no transmission of HBV has been reported in the United States or other developed countries from primary care providers, clinicians, medical or dental students, residents, nurses, other health-care providers, or any others who would not normally perform exposure-prone procedures (21).
# National Trends in Acute Hepatitis B Incidence and Prevalence
Symptomatic acute HBV infections in the United States, as reported through health departments to CDC, have declined approximately 85% from the early 1990s to 2009 (22), following the adoption of universal infant vaccination and catch-up vaccinations for children and adolescents (23). If declining trends continue, an ever-increasing proportion of patients receiving health care and their providers will be protected by receipt of hepatitis B vaccination.
Patient-to-health-care provider transmission of HBV also has declined markedly. Reflecting this finding, the reported number of acute HBV infections among providers in the United States, not all of which reflect occupational exposure, decreased from approximately 10,000 in 1983 to approximately 400 in 2002 (24) and to approximately 100 by 2009 (22).
# Treatments for Chronic Hepatitis B Infection
Medications for hepatitis B have been improving continually and are usually effective at reducing viral loads markedly or even to undetectable levels. Currently, seven therapeutic agents are approved by the Food and Drug Administration for the treatment of chronic hepatitis B, including two formulations of interferon (interferon alpha and pegylated interferon) and five nucleoside or nucleotide analogs (lamivudine, telbivudine, adefovir, entecavir, and tenofovir). Among the approved analogs, both entecavir and tenofovir have potent antiviral activity as well as very low rates of drug resistance. Treatment with these agents reduces HBV DNA levels to undetectable or nearly undetectable levels in most treated persons (25-27). Virtually all treated patients, even those few still receiving older agents (e.g., lamivudine), can expect to achieve a reduction of HBV DNA viral loads to very low levels within weeks or months of initiating therapy (25). The newer medications are effective in suppressing viral replication, and it is expected that they will be used for a newly identified HBV-infected health-care provider who is performing exposure-prone procedures and who has HBV DNA levels above the threshold suggested in this report (1,000 IU/ml) or as adopted by his or her institution's expert review panel. However, clinicians caring for infected health-care providers or students who are not performing exposure-prone procedures and who are not subject to expert panel review should consider both the benefits and risks associated with life-long antiviral therapy for chronic HBV started at young ages (25).
# Consistency with Other Guidelines
Recommendations for the management of HBV-infected health-care providers and students have evolved in the United States and other developed countries (Table 2). In 2010, the Society for Healthcare Epidemiology of America (SHEA) issued updated guidelines that recommended a process for ensuring safe clinical practice by HBV-infected health-care providers and students (28). These separate guidelines classify many invasive procedures and list those associated with potentially increased risk for provider-to-patient blood exposures (Category III procedures in the SHEA guidelines). SHEA recommends restricting a provider's practice on the basis of the provider's HBV DNA blood levels and the conduct of certain invasive procedures considered exposure prone. The SHEA guidelines also address the current therapeutic interventions that reduce the viral loads and the infectiousness of HBV-infected personnel. For providers practicing certain exposure-prone (Category III) procedures, SHEA recommends maintaining HBV DNA blood levels <10⁴ GE/ml (i.e., depending on the assay used, approximately 2,000 IU/ml) or ceasing surgery until the viral load can be reestablished below that threshold.
Restrictions based on the provider's HBV DNA blood levels also exist in guidelines published by some European countries and Canada (Table 2) (21,29-36). No guidelines from any developed country recommend the systematic prohibition of invasive surgical or dental practices by qualified health-care providers whose chronic HBV infection is monitored.
The generally permissive principles delineated in the CDC 1991 recommendations also have been reiterated in recent Advisory Committee on Immunization Practices (ACIP) recommendations on immunization of health-care personnel in the United States for HBV infection (37). ACIP recommends that HBV-infected persons who perform highly exposure-prone procedures should be monitored by a panel of experts drawn from diverse disciplines and perspectives to ensure balanced recommendations. However, the ACIP recommendations do not require that HBV-infected persons who do not perform such procedures have their clinical duties restricted or managed by a special panel because of HBV infection alone.
# Prevention Strategies
# Standard Precautions
Strategies to promote patient safety and to prevent transmission of bloodborne viruses in health-care settings include hepatitis B vaccination of susceptible health-care personnel and the use of primary prevention (i.e., preventing exposures and therefore infection) by strict adherence to the tenets of standard (universal) infection control precautions, the use of safer devices (engineering controls), and the implementation of work practice controls (e.g., not recapping needles) to prevent injuries that confer risks for HBV transmission to patients and their providers. Public health officials in the United States base Standard Precautions on the premise that all blood and blood-containing body fluids are potentially infectious (3,4). Since 1996, CDC has specified the routine use of Standard Precautions (38,39) that include use of protective equipment in appropriate circumstances, implementation of both work practice controls and engineering controls, and adherence to meticulous standards for cleaning and reusing patient care equipment. For example, double-gloving now is practiced widely, and the evidence to demonstrate the feasibility and efficacy of this and other interventions is extensive (40-44).
# Work Practice and Engineering Controls
Parenteral exposures are mainly responsible for HBV transmission in health-care settings. Work practice modifications in the past 20 years have been important in mitigating such exposures. Examples of such modifications include the practice of not resheathing needles, the use of puncture-resistant needle and sharp object disposal containers, avoidance of unnecessary phlebotomies and other unnecessary needle and sharp object use, the use of ports and other needleless vascular access when practical or possible, and the avoidance of unnecessary intravenous catheters by using needleless or protected needle infusion systems.
# Testing and Vaccination of Health-Care Providers
Recommendations generated over the past 20 years, both in the United States and other developed countries, urge all health-care providers to know their HBV and other bloodborne virus infection status (21), especially if they are at risk for HBV infection (37,45). OSHA mandates that hepatitis B vaccine be made available to health-care providers who are susceptible to HBV infection and that they be urged to be vaccinated (Bloodborne Pathogens Standard). These guidelines stipulate that the employer make available the hepatitis B vaccine and vaccination series to all employees who have occupational exposure and that postexposure evaluation and follow-up be provided to all employees who have an exposure incident.
Approximately 25% or more of medical and dental students (46,47) and many physicians, surgeons, and dentists in the United States have been born to mothers in or from countries in Asia (including India), Africa, and the Middle East with high and intermediate endemicity for HBV. CDC recommends that all health-care providers at risk for HBV infection be tested and that all those found to be susceptible should receive vaccine (37). Such testing is likely to detect chronically infected health-care providers and students. Health-care providers and students identified as chronic carriers of HBV should receive reasonable and feasible oversight from the relevant school, hospital, or other health-care facility to ensure safe practice.
# Actions Taken Against HBV-Infected Health-Care Providers and Students
CDC is aware of several recent instances in which HBV-infected persons have been threatened with dismissal or actually dismissed from surgical practice on the basis of their HBV infection, and others have had their acceptances to medical or dental schools rescinded or deferred because of their infection (Joan M. Block, Hepatitis B Foundation, Anna S. F. Lok, University of Michigan Medical Center, personal communications, 2011). Some of these instances have involved requirements that the infected provider, applicant, or student demonstrate undetectable HBV viral load or hepatitis B e-antigen negativity and, in at least one case, that this be demonstrated continuously by weekly testing. These actions might not be based on clear written guidance and procedures at the institutions involved (48,49).
# Technical and Ethical Issues in Developing Recommendations
# Monitoring HBV DNA Level and Hepatitis B e Antigen (HBeAg)
Whereas the 1991 recommendations assessed the infectivity of surgeons and others performing invasive procedures based on the presence of HBeAg, documented transmissions of HBV to patients from several HBeAg-negative surgeons (12,15,50) led to examination of correlations between HBeAg and HBV viral load. Some of these HBeAg-negative persons, despite high rates of viral replication, might harbor pre-core mutants of the virus: that is, loss of HBeAg expression might result from a single nucleotide substitution that results in a stop codon preventing translation (51,52). Persons with such HBV strains who test HBeAg-negative might nonetheless be infectious (despite the mutation) and even have a high concentration of virions in their blood.
Recent guidelines from other bodies (Table 2) have recommended using HBV DNA serum levels in preference to HBeAg in determining infectivity. Several studies have documented numerous HBeAg-negative persons who have high circulating levels of HBV DNA, i.e., viral loads often 10⁵ IU/ml or more by various commercial assays: 78 HBeAg-negative Australian patients with median HBV DNA of 38,000 IU/ml (determined by the Siemens Versant HBV DNA 3.0 assay) (53); 48 HBeAg-negative Greek patients with a median HBV DNA of 76,000 IU/ml (by Roche Amplicor HBV-Monitor) (54); 165 HBeAg-negative Korean patients with a mean HBV DNA of 155,000 IU/ml (by Roche COBAS TaqMan) (55); and 47 HBeAg-negative Chinese patients with median HBV DNA blood levels of 960,000 copies/ml (about 200,000 IU/ml) (by PG Biotech PCR) (56). On the basis of these data, monitoring quantitative HBV DNA levels provides better information to serve as a predictive indicator of infectivity than is provided by monitoring HBeAg status alone.
# Specifying Exposure-Prone Procedures
In general, three conditions are necessary for health-care personnel to pose a risk for bloodborne virus transmission to patients. First, the health-care provider must be sufficiently viremic (i.e., have infectious virus circulating in the bloodstream). Second, the health-care provider must have an injury (e.g., a puncture wound) or a condition (e.g., nonintact skin) that allows exposure to his/her blood or other infectious body fluids. Third, the provider's blood or infectious body fluid must come in direct contact with a patient's wound, traumatized tissue, mucous membranes, or similar portal of entry during an exposure-prone procedure. The vast majority of HBV-infected health-care personnel pose no risk for patients because they do not perform activities in which both the second and third conditions are met.
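The three conditions are conjunctive: all must hold at once for a provider to pose a risk, which is why most infected personnel pose none. A minimal illustration follows; the class and field names are hypothetical, not part of any CDC specification.

```python
from dataclasses import dataclass

@dataclass
class ProviderContext:
    # Hypothetical field names, for illustration only.
    sufficiently_viremic: bool     # condition 1: infectious virus in blood
    blood_exposure_possible: bool  # condition 2: injury or nonintact skin
    patient_portal_contact: bool   # condition 3: blood contacts a patient's
                                   # wound, tissue, or mucous membrane

def poses_transmission_risk(ctx: ProviderContext) -> bool:
    """Transmission risk requires all three conditions simultaneously."""
    return (ctx.sufficiently_viremic
            and ctx.blood_exposure_possible
            and ctx.patient_portal_contact)
```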
Beyond meeting these three basic conditions, defining exposure-prone invasive procedures that pose a risk for HBV transmission between infected provider and patient has been problematic in the development of all recommendations and guidelines; this process is made especially difficult by varying surgical techniques used by health-care providers doing the same procedure. More recent guidelines and published articles indicate that exposure-prone procedures can be defined broadly, and lists of potentially exposure-prone procedures have been developed (28,31,60). Principles cited are that exposure-prone procedures include those in which access for surgery is difficult (28) or those in which needlestick injuries are likely to occur (60), typically in very closed and unvisualized operating spaces in which double gloving and the skin integrity of the operator might be compromised (Box).
Defining exposure-prone procedures in dentistry and oral surgery has been particularly difficult. Many intra-oral procedures (e.g., injection or scaling) occur in a confined cavity and might lead to injuries to the operator (61), so some institutions have considered these procedures to be exposure-prone. However, no transmission of HBV from a U.S. dentist to a patient has been reported since 1987, and no transmission has ever been reported from a dental or medical student. Thus, Category I Procedures (Box) include only major oral surgery, and do not include the procedures that medical and dental students or most dentists would be performing or assisting.
In addition to these lists of specific procedures, an institutional expert review panel convened to oversee an HBV-infected surgeon or other health-care provider performing exposure-prone procedures may consult the classification of such procedures (Box) for guidance. Given the variety of procedures, practices, and providers, each HBV-infected health-care provider performing potentially exposure-prone procedures will need individual consideration. However, this evaluation should not define exposure-prone procedures too broadly; the great majority of surgical and dental procedures have not been associated with the transmission of HBV.
# Assessing a Safe Level of HBV DNA
Review of information concerning six HBeAg-negative surgeons who had transmitted hepatitis B to patients and whose HBV DNA had been determined (using both Chiron Quantiplex Branched DNA assay and Roche Amplicor HBV DNA Monitor assay) showed the lowest value (at one laboratory) in one surgeon to be 40,000 copies/ml (approximately 8,000 IU/ml) (9). However, because this quantification was performed more than 3 months after the transmission had taken place, correlative relevance is uncertain.
In 2003, recommendations from the Netherlands set the level above which health-care providers should not be performing exposure-prone procedures at HBV DNA levels of 10⁵ GE/ml or above (approximately 20,000 IU/ml). A larger European consortium set this restriction at HBV DNA levels ≥10⁴ GE/ml (approximately 2,000 IU/ml) (33) for persons who are HBeAg-negative. In 2010, this latter threshold, without a requirement for e-antigen negativity, was adopted in the U.S. SHEA Guidelines (28). U.K. guidelines for HBV-infected providers who are HBeAg-negative require these providers to achieve or maintain HBV DNA levels of <10³ GE/ml (less than approximately 200 IU/ml) (31,57).
Although newer assays such as real-time polymerase chain reaction (PCR) tests are expected to reduce the level of detection for HBV DNA to 10-20 IU/ml, this level could be undetectable in some assays in use in the United States. The lower limits of detection for four assays currently in use are 200 IU/ml (qualitative assay); 30-350 IU/ml (branched DNA assay); 30 IU/ml (real-time PCR assay); and 10 IU/ml (real-time PCR assay). Thus, any requirement for demonstration of a viral load <200 IU/ml will need to specify the use of an assay (usually real-time PCR) that can detect loads well below that threshold.
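The practical upshot reduces to a simple check: an assay can verify a threshold only if its lower limit of detection (LoD) sits well below that threshold. A minimal sketch follows, with "well below" taken, as an assumption, to mean a 5× margin (the report does not quantify it); the assay limits are those quoted above.

```python
# Sketch: is an assay adequate for verifying a viral-load requirement?
# ASSUMPTION: "well below" is interpreted as a 5x margin; the report
# does not define the margin numerically.

MARGIN = 5.0

def assay_adequate(lod_iu_ml: float, threshold_iu_ml: float) -> bool:
    """True if the assay's lower limit of detection is well below the threshold."""
    return lod_iu_ml * MARGIN <= threshold_iu_ml

assay_lod_iu_ml = {
    "qualitative assay": 200,
    "branched DNA assay": 350,    # upper end of its 30-350 IU/ml range
    "real-time PCR assay A": 30,
    "real-time PCR assay B": 10,
}

# For a <200 IU/ml requirement, only the real-time PCR assays qualify:
for name, lod in assay_lod_iu_ml.items():
    print(f"{name}: {'adequate' if assay_adequate(lod, 200) else 'inadequate'}")
```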
# Fluctuating HBV DNA Levels
Persons who achieve and maintain HBV DNA blood concentrations below some designated threshold level or attain an undetectable level might have HBV DNA that is transiently elevated and detectable but not necessarily transmissible. Such instances might represent infrequent detections of virus at very low levels despite long-term suppression of virus on therapy (58) but also could represent, especially for persons taking older therapies, breakthrough of antiviral-drug resistant HBV (59). As assays become increasingly sensitive (newer ones can detect circulating HBV DNA down to 20-30 IU/ml), such transient elevations will be recognized increasingly and will trigger more frequent follow-up. If such an elevation in detectable HBV DNA represents not spontaneous fluctuation (sometimes referred to as a blip) but rather therapeutic drug failure (i.e., breakthrough), then appropriate change in therapy may be considered.
# Notification of Patients of HBV-Infected Health-Care Providers
There is no clear justification for or benefit from routine notification of the HBV infection status of a health-care provider to his or her patient with the exception of instances in which an infected provider transmits HBV to one or more patients or documented instances in which a provider exposes a patient to a bloodborne infection. Routine mandatory disclosure might actually be counterproductive to public health, as providers and students might perceive that a positive test would lead to loss of practice or educational opportunities. This misperception might lead to avoidance of HBV testing, of hepatitis B vaccination (if susceptible), of treatment and management (if infected), or of compliance with practice oversight from an expert panel (if infected and practicing exposure-prone procedures). In general, a requirement for disclosure is accepted to be an insurmountable barrier to practice and might limit patient and community access to quality medical care.
# Ethical Considerations
On July 18, 2011, the Consult Subcommittee of CDC's Public Health Ethics Committee reviewed these proposed recommendations. The reviewing team also included three external ethicists. The opinion of the Consult Subcommittee was that guidelines that allow providers with HBV to practice, while requiring those performing exposure-prone procedures to be monitored to maintain a low viral load, strike the right balance between protecting patients' interests and providers' rights. The Consult Subcommittee also noted that providers have an ethical and professional obligation to know their HBV status and to act on such knowledge accordingly (CDC Public Health Ethics Committee, personal communication, 2011). The Consult Subcommittee supported the new recommendation that mandatory disclosure of provider HBV status to patients was no longer warranted and that the 1991 recommendation for disclosure was discriminatory and unwarranted.
In addition, the Consult Subcommittee determined that there was no scientific or ethical basis for the restrictions that some medical and dental schools have placed on HBV-infected students and concluded that such restrictions were detrimental to the professions as well as to the individual students.
# BOX. CDC classification of exposure-prone patient care procedures

Category I. Procedures known or likely to pose an increased risk of percutaneous injury to a health-care provider that have resulted in provider-to-patient transmission of hepatitis B virus (HBV)

These procedures are limited to major abdominal, cardiothoracic, and orthopedic surgery, repair of major traumatic injuries, abdominal and vaginal hysterectomy, caesarean section, vaginal deliveries, and major oral or maxillofacial surgery (e.g., fracture reductions). Techniques that have been demonstrated to increase the risk for health-care provider percutaneous injury and provider-to-patient blood exposure include
- digital palpation of a needle tip in a body cavity and/or
- the simultaneous presence of a health-care provider's fingers and a needle or other sharp instrument or object (e.g., bone spicule) in a poorly visualized or highly confined anatomic site.

Category I procedures, especially those that have been implicated in HBV transmission, are not ordinarily performed by students fulfilling the essential functions of a medical or dental school education.

Category II. All other invasive and noninvasive procedures

These and similar procedures are not included in Category I as they pose low or no risk for percutaneous injury to a health-care provider or, if a percutaneous injury occurs, it usually happens outside a patient's body and generally does not pose a risk for provider-to-patient blood exposure. These include
- surgical and obstetrical/gynecologic procedures that do not involve the techniques listed for Category I;
- the use of needles or other sharp devices when the health-care provider's hands are outside a body cavity (e.g., phlebotomy, placing and maintaining peripheral and central intravascular lines, administering medication by injection, performing needle biopsies, or lumbar puncture);
- dental procedures other than major oral or maxillofacial surgery;
- insertion of tubes (e.g., nasogastric, endotracheal, rectal, or urinary catheters);
- endoscopic or bronchoscopic procedures;
- internal examination with a gloved hand that does not involve the use of sharp devices (e.g., vaginal, oral, and rectal examination); and
- procedures that involve external physical touch (e.g., general physical or eye examinations or blood pressure checks).
# Guidance for Expert Review Panels at Institutions
HBV infection in health-care providers and students who do not perform invasive exposure-prone procedures should be managed as a personal health issue and does not require special panel oversight. However, for providers who perform exposure-prone procedures, all recent guidelines advocate the constitution of an expert panel to provide oversight of the infected health-care provider's practice (Table 2).
For HBV-infected providers performing exposure-prone procedures, expert review panels should evaluate the infected provider's clinical and viral burden status; assess his or her practices, procedures and techniques, experience, and adherence to recommended surgical and dental technique; provide recommendations, counseling, and oversight of the provider's continued practice or study within the institution; and investigate and notify appropriate persons and authorities (e.g., risk management or, if need be, licensure boards) for suspected and documented breaches (62) in procedure or incidents resulting in patient exposure. The panel should reinforce the need for Standard Precautions (e.g., double gloving, regular glove changes, and use of blunt surgical needles). Panels may appropriately provide counseling about alternate procedures or specialty paths, especially for providers, students, residents, and others early in their careers, as long as this does not constitute actual or perceived coercion or limitation of the provider or student.
The members of the expert review panel may be selected from, but should not necessarily be limited to, the following: one or more persons with expertise in the provider's specialty; infectious disease and hospital epidemiology specialists; liver disease specialists (gastroenterologists); the infected providers' occupational health, student health, or primary care physicians; ethicists; human resource professionals; hospital or school administrators; and legal counsel. Certain members of the panel should be familiar with issues relating to bloodborne pathogens and their infectivity.
In instances when it is generally accepted (or thought) that a patient might have been exposed to the blood of an infected health-care provider, institutions should have in place a protocol for communicating to the patient that such an exposure might have occurred. The patient should receive appropriate follow-up including post-exposure vaccination or receipt of hepatitis B immune globulin and testing (i.e., similar to the reverse situation of prophylaxis for providers exposed to the blood of an HBV-infected patient).
The confidentiality of the infected provider or student should be respected. Certain expert review panels might elect to consider cases without knowledge of the name of the infected provider or student. However, awareness of the infected provider's or student's identity might be unavoidable. In such cases, respect for the confidentiality of the person under review should be accorded as it is for any other patient.
# Recommendations for Chronically HBV-Infected Health-Care Providers and Students
CDC recommends the following measures for the management of hepatitis B virus-infected health-care providers and students:
# Practice Scope
- Chronic HBV infection in itself should not preclude the practice or study of medicine, surgery, dentistry, or allied health professions. Standard Precautions should be adhered to rigorously in all health-care settings for the protection of both patient and provider.
- CDC discourages constraints that restrict chronically HBV-infected health-care providers and students from the practice or study of medicine, dentistry, or surgery, such as
  - repeated demonstration of persistently nondetectable viral loads on a greater than semiannual frequency;
  - prenotification of patients of the HBV-infection status of their care giver;
  - mandatory antiviral therapy with no other option such as maintenance of low viral load without therapy; and
  - forced change of practice, arbitrary exclusion from exposure-prone procedures, or any other restriction that essentially prohibits the health-care provider from practice or the student from study.
# Hepatitis B Vaccination and Screening
- All health-care providers and students should receive hepatitis B vaccine according to current CDC recommendations (37,45,63). Vaccination (3-dose series) should be followed by assessment of hepatitis B surface antibody to determine vaccination immunogenicity and, if necessary, revaccination. Health-care providers who do not have a protective concentration of anti-HBs (>10 mIU/ml) after revaccination (i.e., after receiving a total of 6 doses) should be tested for HBsAg and anti-HBc to determine their infection status (37); this pathway is sketched after this list.
- Prevaccination serologic testing is not indicated for most persons being vaccinated, except for those providers and students at increased risk for HBV infection (37), such as those born to mothers in or from endemic countries and sexually active men who have sex with men (64).
- Providers who are performing exposure-prone procedures also should receive prevaccination testing for chronic HBV infection. Exposure of a patient to the blood of an HBV-infected health-care provider, in the performance of any procedure, should be handled with postexposure prophylaxis and testing of the patient in a manner similar to the reverse situation (i.e., prophylaxis for providers exposed to the blood of an HBV-infected patient) (65).
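The vaccination and follow-up testing pathway above is effectively a small algorithm. The sketch below is illustrative only (the function name and returned messages are hypothetical); the dose counts and the >10 mIU/ml anti-HBs cutoff come from the recommendations above.

```python
# Sketch of the post-vaccination assessment pathway described above.
# Function name and returned messages are hypothetical illustrations.

PROTECTIVE_ANTI_HBS_MIU_ML = 10  # protective anti-HBs concentration

def next_step(doses_given: int, anti_hbs_miu_ml: float) -> str:
    if anti_hbs_miu_ml > PROTECTIVE_ANTI_HBS_MIU_ML:
        return "protected: no further hepatitis B doses needed"
    if doses_given < 3:
        return "complete the 3-dose primary series, then reassess anti-HBs"
    if doses_given < 6:
        return "revaccinate (up to 6 total doses), then reassess anti-HBs"
    # No protective response after a total of 6 doses:
    return "test HBsAg and anti-HBc to determine infection status"

print(next_step(doses_given=3, anti_hbs_miu_ml=4.0))  # revaccinate
print(next_step(doses_given=6, anti_hbs_miu_ml=4.0))  # test HBsAg/anti-HBc
```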
# Expert Panel Oversight Not Needed
- Providers, residents, and medical and dental students with active HBV infection (i.e., those who are HBsAg-positive) who do not perform exposure-prone procedures but who practice non- or minimally invasive procedures (Category II, Box) should not be subject to any restrictions of their activities or study. They do not need to achieve low or undetectable levels of circulating HBV DNA or hepatitis B e-antigen negativity, or have review and oversight by an expert review panel, as recommended for those performing exposure-prone procedures. However, they should receive medical care for their condition from clinicians, which might occur in the setting of student or occupational health.
# Expert Panel Oversight Recommended
- Surgeons, including oral surgeons, obstetrician/gynecologists, surgical residents, and others who perform exposure-prone procedures, i.e., those listed under Category I activities (Box), should fulfill the following criteria (a sketch of the monitoring logic follows this list):
  - Consonant with the 1991 recommendations and Advisory Committee on Immunization Practices (ACIP) recommendations (37), their procedures should be guided by review of a duly constituted expert review panel with a balanced perspective (i.e., providers' and students' personal, occupational, or student health physicians, infectious disease specialists, epidemiologists, ethicists, and others as indicated above) regarding the procedures that they can perform and prospective oversight of their practice (28). Confidentiality of the health-care provider's or student's HBV serologic status should be maintained.
  - HBV-infected providers can conduct exposure-prone procedures if a low or undetectable HBV viral load is documented by regular testing at least every 6 months, unless higher levels require more frequent testing (for example, as drug therapy is added or modified, or as testing is repeated to determine whether elevations above a threshold are transient).
  - CDC recommends that an HBV DNA level of 1,000 IU/ml (5,000 GE/ml) or its equivalent is an appropriate threshold for a review panel to adopt. Monitoring should be conducted with an assay that can detect as low as 10-30 IU/ml, especially if the individual institutional expert review panel wishes to adopt a lower threshold.
  - Spontaneous fluctuations (blips) of HBV DNA levels and treatment failures might both present as higher-than-threshold (1,000 IU/ml; 5,000 GE/ml) values. This will require the HBV-infected provider to abstain from performing exposure-prone procedures while retesting occurs and, if needed, while modifications or additions to the health-care provider's drug therapy and other reasonable steps are taken.
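As a minimal sketch of the monitoring logic in the list above (assuming the CDC-suggested 1,000 IU/ml threshold and 6-month routine interval, and treating a first above-threshold result as a possible blip per the earlier discussion; all names are illustrative):

```python
# Sketch: expert-review-panel monitoring of a provider performing
# exposure-prone (Category I) procedures. Threshold and interval are
# the values suggested in this report; the function is illustrative,
# not a prescribed implementation.

THRESHOLD_IU_ML = 1_000        # CDC-suggested threshold (= 5,000 GE/ml)
ROUTINE_INTERVAL_MONTHS = 6    # regular testing interval

def panel_decision(viral_load_iu_ml: float, prior_result_elevated: bool) -> str:
    if viral_load_iu_ml <= THRESHOLD_IU_ML:
        return (f"may continue Category I procedures; "
                f"retest in {ROUTINE_INTERVAL_MONTHS} months")
    if not prior_result_elevated:
        # First elevation: could be a transient blip.
        return "abstain from Category I procedures; retest promptly"
    # Persistent elevation suggests treatment failure (breakthrough).
    return "abstain; review or modify antiviral therapy, then retest"
```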
# Institutional Policies and Procedures
- Hospitals, medical and dental schools, and other institutions should have written policies and procedures for the identification and management of HBV-infected health-care providers, students, and school applicants. These policies should include the ability to identify and convene an expert review panel (see Guidance for Expert Review Panels) aware of these and other relevant guidelines and recommendations before considering the management of HBV-infected providers performing exposure-prone procedures.
"id": "a16a102524d27f6f14be3de68defab6d698a9a4b",
"source": "cdc",
"title": "None",
"url": "None"
} |
Despite sustained high coverage for childhood pertussis vaccination, pertussis remains poorly controlled in the United States. A total of 16,858 pertussis cases and 12 infant deaths were reported in 2009 (1; CDC, unpublished data, 2009). Although 2005 recommendations by the Advisory Committee on Immunization Practices (ACIP) called for vaccination with tetanus toxoid, reduced diphtheria toxoid and acellular pertussis (Tdap) for adolescents and adults to improve immunity against pertussis, Tdap coverage is 56% among adolescents and <6% among adults (2,3). In October 2010, ACIP recommended expanded use of Tdap. This report provides the updated recommendations, summarizes the safety and effectiveness data considered by ACIP, and provides guidance for implementing the recommendations. ACIP recommends a single Tdap dose for persons aged 11 through 18 years who have completed the recommended childhood diphtheria and tetanus toxoids and pertussis/diphtheria and tetanus toxoids and acellular pertussis (DTP/DTaP) vaccination series and for adults aged 19 through 64 years (4,5). Two Tdap vaccines are available in the United States. Boostrix (GlaxoSmithKline Biologicals, Rixensart, Belgium) is licensed for use in persons aged 10 through 64 years, and Adacel (Sanofi Pasteur, Toronto, Canada) is licensed for use in persons aged 11 through 64 years. Both Tdap products are licensed for use at an interval of at least 5 years between the tetanus and diphtheria toxoids (Td) and Tdap dose. On October 27, 2010, ACIP approved the following additional recommendations: 1) use of Tdap regardless of interval since the last tetanus- or diphtheria-toxoid-containing vaccine, 2) use of Tdap in certain adults aged 65 years and older, and 3) use of Tdap in undervaccinated children aged 7 through 10 years.
The Pertussis Vaccines Working Group of ACIP reviewed published and unpublished Tdap immunogenicity and safety data from clinical trials and observational studies on use of Tdap. The Working Group also considered the epidemiology of pertussis, provider and program feedback, and data on the barriers to receipt of Tdap. The Working Group then presented policy options for consideration to the full ACIP. These additional recommendations are intended to remove identified barriers and programmatic gaps that contribute to suboptimal vaccination coverage. An important barrier that limited vaccination of persons with Tdap was unknown history of Td booster. Programmatic gaps included lack of a licensed Tdap vaccine for children aged 7 through 10 years and adults aged 65 years and older. In light of the recent increase of pertussis in the United States, the additional recommendations are made to facilitate use of Tdap to reduce the burden of disease and risk for transmission to infants (Box).
# Timing of Tdap Following Td
Safety. When Tdap was licensed in 2005, the safety of administering a booster dose of Tdap at intervals <5 years after Td or pediatric DTP/DTaP had not been studied in adults. However, evaluations in children and adolescents suggested that the safety of intervals as short as 18 months was acceptable (6). Rates of local and systemic reactions after Tdap vaccination in adults were lower than or comparable to rates in adolescents during U.S. prelicensure trials; therefore, the safety of using intervals as short as 2 years between Td and Tdap in adults was inferred (4).
Additional data on the safety of administering Tdap <5 years after Td are now available. Two studies were conducted with 387 persons aged 18 through 76 years who received a Tdap or combined Tdap-inactivated polio vaccine (Tdap-IPV) vaccination either within 21 days or <2 years following a previous Td-containing vaccine (7,8). Tdap-IPV vaccine is not licensed in the United States. In both studies, immediate or short-term adverse events (e.g., 30 minutes to 2 weeks) after receipt of Tdap or Tdap-IPV were examined. The majority of these events were limited to local reactions, including pain (68%--83%), erythema (20%--25%), and swelling (19%--38%) (7,8). Serious adverse events related to the receipt of Tdap or Tdap-IPV shortly after Td or Td-IPV vaccinations did not occur. However, the number of subjects in these studies was small and does not exclude the potential for rare, but serious, adverse events.
Guidance for use. ACIP recommends that pertussis vaccination, when indicated, should not be delayed and that Tdap should be administered regardless of interval since the last tetanus or diphtheria toxoid-containing vaccine. ACIP concluded that while longer intervals between Td and Tdap vaccination could decrease the occurrence of local reactions, the benefits of protection against pertussis outweigh the potential risk for adverse events.
# Adults Aged 65 Years and Older
Unpublished data from trials for Adacel (N = 1,170) and Boostrix (N = 1,104) on the safety and immunogenicity of Tdap in adults aged 65 years and older who received vaccine were provided to ACIP by Sanofi Pasteur and GlaxoSmithKline.
# Safety.
For both Tdap vaccines, the frequency and severity of adverse events in persons aged 65 years and older were comparable to those in persons aged less than 65 years. No increase in local or generalized reactions in Tdap recipients was observed, compared with persons who received Td. No serious adverse events were considered related to vaccination. ACIP reviewed data on vaccine-related adverse events from the Vaccine Adverse Event Reporting System (VAERS). VAERS is a passive surveillance system jointly administered by CDC and the Food and Drug Administration that accepts reports from vaccine manufacturers, health-care providers, and vaccine recipients for vaccine safety. VAERS can be prone to overreporting or underreporting and inconsistency in the quality and completeness of reports. During September 2005--September 2010, a total of 243 VAERS reports were received regarding adults aged 65 years and older administered Tdap, out of 10,981 total VAERS reports on Tdap among recipients of all ages (CDC, unpublished data, 2010). Of the 243 reports regarding adults aged 65 years and older, 232 (96%) were nonserious. The most frequent adverse events after Tdap were local reactions, comprising 37% of all events. Eleven serious events were reported, including two deaths among persons with multiple underlying conditions. Although VAERS cannot assess causality, after review of data, it is unlikely the deaths were related to vaccine receipt. Postmarketing VAERS data also suggest that Tdap vaccine safety in adults aged 65 years and older is comparable to that of Td vaccine. Because Tdap is not licensed for use in this age group, comparisons between these reports and other reports need to be interpreted with caution.
Immunogenicity. Both Tdap vaccines showed that immune responses to diphtheria and tetanus toxoids were noninferior to responses produced by Td. For both Tdap vaccines, immune responses were observed to the pertussis antigens. For Boostrix, immune responses to pertussis antigens (pertussis toxin [PT], filamentous hemagglutinin [FHA], and pertactin [PRN]) were noninferior to those observed following a 3-dose primary pertussis vaccination series, as defined by the Vaccines and Related Biological Products Advisory Committee (VRBPAC) (9). For Adacel, immune responses to all pertussis antigens (PT, FHA, PRN, and fimbriae [FIM]) occurred (4.1- to 15.1-fold geometric mean concentration increases). ACIP concluded that both Tdap vaccines would provide pertussis protection in persons aged 65 years and older.
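For readers unfamiliar with the endpoint, a fold geometric mean concentration (GMC) rise is the ratio of post- to pre-vaccination geometric means of antibody concentrations. A minimal sketch with made-up concentrations (not trial data) follows:

```python
import math

def geometric_mean(values):
    """Geometric mean of positive antibody concentrations."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def gmc_fold_rise(pre, post):
    """Ratio of post- to pre-vaccination geometric mean concentrations."""
    return geometric_mean(post) / geometric_mean(pre)

# Illustrative (made-up) pre- and post-vaccination anti-PT concentrations:
pre = [4.0, 8.0, 5.0, 6.5]
post = [40.0, 95.0, 60.0, 70.0]
print(round(gmc_fold_rise(pre, post), 1))  # 11.1, i.e., an 11.1-fold rise
```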
Guidance for use. ACIP recommends that adults aged 65 years and older (e.g., grandparents, childcare providers, and health-care practitioners) who have or who anticipate having close contact with an infant less than 12 months of age and who previously have not received Tdap should receive a single dose of Tdap to protect against pertussis and reduce the likelihood of transmission. For other adults aged 65 years and older, a single dose of Tdap vaccine may be given instead of Td vaccine in persons who have not previously received Tdap. Tdap can be administered regardless of interval since the last tetanus- or diphtheria-toxoid-containing vaccine. After receipt of Tdap, persons should continue to receive Td for routine booster immunization against tetanus and diphtheria, according to previously published guidelines (4). Either Tdap vaccine product may be used. Further recommendations on the use of both Tdap vaccines in adults aged 65 years and older will be forthcoming should one or more Tdap products be licensed for use in this age group.
# Undervaccinated Children Aged 7 through 10 Years
No data have been published regarding the safety or immunogenicity of Tdap in children aged 7 through 10 years who have never received pertussis-containing vaccines. One published study assessed the use of Tdap-IPV vaccine as the fifth dose of acellular pertussis vaccine in children aged 4 through 8 years (10). A subanalysis of the study data comparing safety and immunogenicity results among children aged 4 through 6 years (n = 703) and 7 through 8 years (n = 118) was provided to ACIP by GlaxoSmithKline. Three additional published studies have assessed use of Tdap in lieu of the fifth DTaP dose in children aged 4 through 6 years who had received 4 previous doses of DTaP (11--13). These three studies enrolled 609 subjects who received either Tdap or Tdap-IPV in lieu of the fifth DTaP dose.
# Safety.
In each study, no increase in risk of severe local reactions or systemic adverse events was observed. The most commonly reported adverse events within 15 days after receipt of Tdap were pain (40%--56%), erythema (34%--53%), and swelling (24%--45%). Fewer local reactions were observed or reported among Tdap or Tdap-IPV recipients compared with those who received DTaP or DTaP-IPV, but the differences were not statistically significant. No differences were noted when children aged 4 through 6 and 7 through 8 years were compared with respect to solicited or unsolicited adverse reactions following vaccination with Tdap-IPV. ACIP concluded that the overall safety of Tdap and frequency of local reactions in undervaccinated children likely would be similar to those observed in children who received 4 doses of DTaP.
Immunogenicity. Immune response to Tdap-IPV was comparable between children aged 4 through 6 and those aged 7 through 8 years, according to the GlaxoSmithKline subanalysis. In both age groups, at least 99.9% of Tdap-IPV recipients had seroprotective levels of antibodies for diphtheria and tetanus, and responses to pertussis antigens were comparable to those observed following a 3-dose primary pertussis vaccination series as defined by VRBPAC.
In children aged 4 through 6 years, the immune response following receipt of Tdap (Boostrix or Adacel) was comparable to DTaP or DTaP-IPV (11,12). All subjects had seroprotective antibody levels for diphtheria and tetanus 4 to 6 weeks after vaccination. For pertussis antigens, one study observed no significant difference between Boostrix and DTaP recipients in response rates to any of three pertussis antigens in the vaccines, with similar effects on cell-mediated immune responses 3.5 years after vaccination (12). Another study demonstrated a fourfold increase in four pertussis antibodies in the majority of children receiving Adacel or DTaP-IPV (11).
Guidance for use. ACIP recommends that children aged 7 through 10 years who are not fully vaccinated* against pertussis and for whom no contraindication to pertussis vaccine exists should receive a single dose of Tdap to provide protection against pertussis. If additional doses of tetanus and diphtheria toxoid--containing vaccines are needed, then children aged 7 through 10 years should be vaccinated according to catch-up guidance, with Tdap preferred as the first dose (5). Tdap is recommended in this age group because of its reduced antigen content compared with DTaP, resulting in reduced reactogenicity. Currently, Tdap is recommended only for a single dose across all age groups. Further guidance will be forthcoming on timing of revaccination in persons who have received Tdap previously.
# General Recommendations
For routine use, adolescents aged 11 through 18 years who have completed the recommended childhood diphtheria and tetanus toxoids and pertussis/diphtheria and tetanus toxoids and acellular pertussis (DTP/DTaP) vaccination series and adults aged 19 through 64 years should receive a single dose of Tdap. Adolescents should preferably receive Tdap at the 11- to 12-year-old preventive health-care visit.
# Timing of Tdap
- Tdap can be administered regardless of interval since the last tetanus- or diphtheria-toxoid--containing vaccine.
# Adults Aged 65 Years and Older
- Those who have or anticipate having close contact with an infant aged less than 12 months should receive a single dose of Tdap.
- Other adults aged 65 years and older may be given a single dose of Tdap.
# Children Aged 7 Through 10 Years
- Those not fully vaccinated* against pertussis and for whom no contraindication to pertussis vaccine exists should receive a single dose of Tdap.
- Those never vaccinated against tetanus, diphtheria, or pertussis or who have unknown vaccination status should receive a series of three vaccinations containing tetanus and diphtheria toxoids. The first of these three doses should be Tdap.

* Fully vaccinated is defined as 5 doses of DTaP or 4 doses of DTaP if the fourth dose was administered on or after the fourth birthday.
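The footnote definition above lends itself to a simple decision rule. The following is a minimal sketch in Python; the function and parameter names are illustrative, not from ACIP:

```python
def is_fully_vaccinated(dtap_doses: int,
                        fourth_dose_on_or_after_4th_birthday: bool) -> bool:
    """ACIP footnote definition: 5 doses of DTaP, or 4 doses if the
    fourth dose was administered on or after the fourth birthday."""
    if dtap_doses >= 5:
        return True
    return dtap_doses == 4 and fourth_dose_on_or_after_4th_birthday

# A child with 4 doses, the fourth given after the fourth birthday,
# is fully vaccinated; a child with 3 doses is not.
assert is_fully_vaccinated(4, True)
assert not is_fully_vaccinated(3, False)
```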
Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.
"id": "ebe02c11d209c49468be21fec13e27b90f736daf",
"source": "cdc",
"title": "None",
"url": "None"
} |
Suggested Citation: National Center for Injury Prevention and Control. Interim planning guidance for preparedness and response to a mass casualty event resulting from terrorist use of explosives. Atlanta, GA: Centers for Disease Control and Prevention; 2010.

Disclaimer: The findings and conclusions in this report are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention.
# Executive Summary
Explosive devices are the most common weapons used by terrorists. The damage inflicted in recent events in India, Pakistan, Spain, Israel, and the United Kingdom demonstrates the impact of detonating explosives in densely populated civilian areas.

Explosions can produce instantaneous havoc, resulting in numerous patients with complex, technically challenging injuries not commonly seen after natural disasters. Because many patients self-evacuate after a terrorist attack, prehospital care may be difficult to coordinate, and hospitals near the scene can expect to receive a large influx, or surge, of patients after a terrorist strike.

The threat of terrorism exists at a time when hospitals in the United States are already struggling to care for patients who present during routine operations each day. Hospitals and emergency health care systems are stressed and face enormous challenges. With the occurrence of a mass casualty event (MCE), health systems would be expected to confront these issues in organization and leadership, personnel, infrastructure and capacity, communication, triage and transportation, logistics, and legal and ethical challenges.

The purpose of this interim guidance is to provide information and insight to assist public policy and health system leaders in preparing for and responding to an MCE caused by terrorist use of explosives (TUE). This document provides practical information to promote comprehensive mass casualty care in the event of a TUE event and focuses on two areas:

1. leadership in preparing for and responding to a TUE event, and
2. effective care of patients in the prehospital and hospital environments during a TUE event.

This guidance recognizes the critical role that strategic leadership can have on the success or failure of preparing for and responding to a terrorist bombing. It outlines important leadership strategies for successfully preparing for and managing a TUE mass casualty event, including the concept of meta-leadership. Effective meta-leaders employ influence over authority and activate change above and beyond established lines of their decision-making and control. They are driven by a purpose broader than that prescribed by their formal roles. Therefore, they are motivated and act in ways that transcend usual organizational confines, enabling them to successfully confront challenges and barriers in communication, organization and response, standards of care, and surge capacity.

The successful medical response to an MCE depends on effectively coordinating three critical areas of patient care: 1) prehospital care, 2) casualty distribution, and 3) hospital care.

Critical steps must be taken throughout the response to ensure rapid and efficient patient triage, effective and appropriate distribution of patients to available hospitals and health care facilities, and proper management of the surge of patients at receiving hospitals.
# Introduction

# Purpose
The purpose of this interim planning guidance is to provide valuable information and insight to help public policy and health system leaders at all levels prepare for and respond to a mass casualty event resulting from terrorist use of explosives.
# Primary Objectives
The ultimate aims of this guidance document are to:

1. improve decision making during TUE-MCE events, strengthen system and clinical responses, and reduce morbidity and mortality;
2. identify leadership strategies that improve preparedness for and response to TUE-MCE events;
3. promote connectivity, coordination, integration, and consistency between the medical response community and emergency management;
4. encourage health system resilience and maximize the ability to provide adequate medical services during an MCE;
5. enhance the quality of existing MCE preparedness and response programs used by medical response entities; and
6. provide a resource tool that could be applied during exercises and lower intensity emergency events.
# Background and Structure
Terrorists worldwide have repeatedly shown their willingness and ability to use explosives to inflict significant death, destruction, and fear. A sudden and unpredictable bombing-related MCE requires an immediate response; disrupts communication systems; interrupts transportation of casualties, medical personnel, and supplies; and may overwhelm the capacity of responding agencies.

Even though explosives are the primary weapons used by terrorists, the U.S. health care system has minimal experience in treating patients with explosion-related injuries. Detonating devices in crowded public places results in complex, technically challenging injuries not commonly seen after natural disasters. Deficiencies in response capability could result in increased morbidity and mortality as well as stress and fear in the community.

Because of the injuries sustained by large numbers of people, explosions produce unique management challenges for health providers, beginning with an immediate surge of patients into surrounding health care facilities. The potential for large numbers of patients arriving within a few hours may stress and limit the ability of emergency medical services (EMS) systems, hospitals, and other health care facilities to care for critically injured victims.4-6

The ongoing and increasing threat of terrorist activities, combined with documented evidence of decreasing emergency care capacity within the U.S. health care system,7-14 requires proactively preparing for these situations. Health care and public health systems, individual hospitals, and health care personnel must collaborate to ensure that strategies are in place to address these key challenges:

- receive, evaluate, and treat large numbers of injured patients,
- rapidly identify and stabilize the most critically injured,
- evaluate response efforts, and
- conduct exercises and strategic planning for future events.
# Nature of Explosions
An explosion is caused by the sudden chemical conversion of a solid or liquid into a gas with resultant energy release. Explosive devices are categorized as either high-order explosives (HE, such as C4 and TNT) or low-order explosives (LE, such as pipe bombs, gunpowder, and Molotov cocktails).

HE detonation involves supersonic, instantaneous transformation of the solid or liquid into a gas occupying the same physical space under extremely high pressure. These high-pressure gases rapidly expand outward in all directions from their point of formation as an overpressure blast wave. The extent and pattern of injuries produced by an explosion are determined by several factors:

- amount and composition of the explosive material,
- delivery method,
- distance between the victim and the blast,
- setting (open vs. closed space, structural collapse, intervening barriers), and
- other accompanying environmental hazards.

During an MCE, health care systems will be confronted with increased demands and decreased availability of resources. Regional health care systems best understand their own needs and resources and must, therefore, develop specific disaster medical surge capacity and capability plans.
# Nature of Injuries
# Expected Health Systems Challenges
Emergency departments (EDs) routinely operate above capacity, with prehospital personnel occasionally forced to wait for extended periods before transferring patient care to hospital staff.
# Leadership

Effective preparedness and response demand an established, functional leadership structure with clear organizational responsibilities. In many instances, particularly at a local operational level, such preparation has not occurred. Confusion over roles and responsibilities may occur and increases the potential for redundant efforts or gaps in decision-making and response.

Responding to terrorist bombings requires meta-leadership. Meta-leaders are vital in preparing for and responding to bombings, and their roles extend far beyond hospitals and emergency services. Detailed information about meta-leadership and planning needs in this area is provided in Chapter 2.

# Key Health System Challenges
# Prehospital care
# Hospital care
In responding to a terrorist bombing, hospitals must prepare to address large numbers of patients in a short period of time. Such preparedness will affect not only emergency and trauma services but also other medical, paramedical, administrative, logistical, and security functions. Decisions and policies developed in advance of a bombing should reflect state and local regulations and guidance.

A full exploration of the many aspects of hospital care relevant in a bombing aftermath is contained in Chapter 5.
# Community and media relations

The community targeted by a bombing suffers the most extensive physical and psychological effects and should be part of preparedness planning. Involving community organizations, religious institutions, and local businesses in planning and response efforts can help to calm fears and prepare people should a bombing occur. Another critical partner in this education effort is the local media.

Guidance for communication and information sharing is included throughout this document.
# CHAPTER TWO
# Principles for Health Systems' Preparedness in Emergencies
To prepare for a terrorist use of explosives-mass casualty event (TUE-MCE), health systems leaders must focus on 12 principles.
# Provide Meta-Leadership
Managing a bombing crisis requires more than good leadership; it requires meta-leadership. The prefix meta has many meanings, including a more comprehensive form of a process (e.g., meta-analysis). Meta-leaders possess unique mindsets and skills, often going beyond the scope of their experiences. They are also able to build strong alliances with a diverse array of leaders before an event occurs.

The five dimensions of a meta-leader, which must be used with flexibility and adaptability, are:
- The Person of the Meta-Leader: Meta-leaders lead themselves and others out of the "basement" to higher levels of thinking and functioning.
- Situational Awareness: A problem, change, or crisis compels the meta-leader to respond.
- Leading the Silo: The meta-leader triggers and models confidence, inspiring others to excellence.
- Leading Up: The meta-leader leads up the chain of command and guides political, business, and community leaders.
- Leading Cross-System Connectivity: Meta-leaders strategically and intentionally devise cross-silo linkages that leverage expertise, resources, and information.
Effective meta-leaders initiate change outside of their previously established lines of decision-making and control. They are driven by a purpose broader than that prescribed by their formal roles. Meta-leaders build and maintain relationships and establish clear channels of communication.
# Be Proactive and Expect the Unexpected
Preparedness must be undertaken ahead of time. Crisis situations are bad times for planning. No matter how carefully developed a response plan, unexpected events are likely to occur. Recognizing the likelihood of unexpected events will allow for appropriate preparation during the response effort. Crisis leaders should expect that planning will be imperfect and learn to expect the unexpected.
# Learn From Others
# Develop Connected Emergency Plans
Preparedness and response plans should build upon each other and be based on existing federal and state plans using standard protocols, processes, tools, and terminology. Phone numbers should be checked and updated regularly.
# Communicate During a Mass Casualty Event
# Be Prepared for Legal and Ethical Issues

Preparedness should include consideration of all potential legal and ethical problems that could be related to mass casualty response. Ethical considerations should be explicit during preparedness so that critical decisions made during crises can be based on the spirit of the ethical judgments that guided the planning process.
# Alter Standards of Care

The system should be refocused during crisis response to accomplish the greatest good for the community (i.e., save the most victims). The rationale for modifying standards of care in an emergency is that more patients will survive a terrorist attack if key lifesaving interventions are provided to the greatest number of casualties likely to benefit from care. Hospitals and emergency medical services systems above surge capacity will require autonomy to alter regular standards of care and shift to emergency critical care practices. However, no universally accepted methodology for this adjustment exists, and the process is associated with potential ethical, societal, medical, and legal issues.
# Maximize availability of emergency medical services personnel and resources

- Modify and extend shifts, bring personnel from home, and recruit medical and nonmedical volunteers as appropriate.
- Prepare for excessive strain on EMS answering points and dispatch.
- Concentrate on preserving the communication system among EMS, other emergency responders, and hospitals, and design contingencies for alternative communication.
- Institute call-screening strategies to determine the level of urgency required to address calls, including preset recommendations for various call scenarios from the anxious public, survivors, families of missing persons, and potential volunteers.
- Maximize the efficiency of available vehicles, coordinate all ambulance services, bring ambulances to full capacity, deploy alternative vehicles, and consider air transportation for primary distribution (from the field to the hospital) and for secondary distribution (relocation from one hospital to another).
# Assess the situation and care required
# Protect on-scene personnel
- Recognize that first responders may represent a number of disciplines in addition to EMS, including bomb squads, firefighters, search and rescue, hazardous materials responders, media, volunteers, and law enforcement providing scene security, investigation, and traffic control.
- Before searching for casualties, receive permission from the incident commander to ensure that the area is safe for first responders to enter and that the threat of secondary device detonation has been evaluated.
- Protect EMS personnel and other first responders from exposure to environmental and infectious pathogens.
# Stage and triage patients
- Remove victims from direct hazard impact areas and stage them into the EMS system for triage and distribution to definitive care.
- Establish patient holding areas to prepare for formal triage and treatment protocols. Depending on the situation, patients may move through the defined holding areas or go directly into rapid triage and distribution to hospitals.
- Shift health care priorities to those critically injured patients who are most likely to survive.
- Focus treatment of casualties in the field on basic medical care primarily directed toward stabilizing life-threatening medical conditions.
# Provide appropriate transportation and distribution of patients

- Provide adequate transportation and be prepared to balance distribution to appropriate medical facilities.
- Do not assume that casualties will be distributed to appropriate facilities. Chapter 4 discusses factors to consider in planning for most effectively distributing casualties after a TUE.
# Manage fatalities
- Prior to a bombing event, address such issues as cataloging of bodies; availability of body bags and refrigerator trucks; and return of bodies, human remains, and personal belongings to authorized persons.
- Following a TUE-MCE, avoid transporting bodies and remains from the scene to hospital treatment areas.
- Pay attention to and be respectful of varying religious beliefs when handling bodies and remains.
- Consider designating alternate sites outside of hospitals for managing and storing human remains.
- If possible, document the identity of the dead, human remains, and associated personal belongings. As soon as possible after the crisis, begin to identify human remains using scientific means (e.g., dental records, pathology, anthropology, fingerprints, DNA samples).
# Levels of Patient Distribution
The two levels of patient distribution are primary and secondary.

Primary distribution refers to moving patients from the scene to the hospital. Three methods are currently in use for primary distribution, ranked from most to least desirable: centrally controlled distribution of casualties (see Effective and Controlled Distribution below); distribution of equal numbers of casualties to each regional hospital on a rotating basis; and spontaneous distribution.

- Spontaneous Primary Distribution: Although the least desirable, this distribution method is the most common. Ambulances and other vehicles transport victims to the closest hospital, with no connectivity, control, or coordination.

Secondary distribution refers to moving patients from the first receiving hospital to a second medical facility to receive either a higher or more specialized level of care or less specialized care. By practicing this secondary distribution, casualties can be redistributed from overloaded hospitals and care sites to less affected ones. All hospitals must develop formal and practical relationships with designated trauma and specialty centers to ensure that, when necessary, casualties will have access to appropriate levels of care.
# Effective and Controlled Distribution
Controlled distribution of casualties during the response to a terrorist bombing is critical for matching needs to resources and minimizing hospital overload.

- In matching patients to hospitals, take patient needs and hospital capabilities into account. The vast majority of survivors of a bombing event will have minor injuries and will likely be discharged after evaluation and treatment in an emergency department.
- Centralize coordination of patient transport and distribution to minimize hospital overloading and maximize use of all available medical facilities, including hospitals and clinics (a simplified matching sketch follows this list).
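As an illustration of centralized, capacity-aware distribution, the sketch below assigns each triaged casualty to the nearest facility that still has capacity for the required level of care. This is a minimal Python sketch; the class, names, and scoring rule are hypothetical, and real systems must reflect local protocols:

```python
from dataclasses import dataclass

@dataclass
class Hospital:
    name: str
    is_trauma_center: bool
    open_beds: int
    travel_minutes: float  # estimated transport time from the scene

def assign(needs_trauma_care: bool, hospitals: list[Hospital]) -> Hospital | None:
    """Greedy controlled distribution: send the casualty to the closest
    facility that has an open bed and matches the required level of care."""
    eligible = [h for h in hospitals
                if h.open_beds > 0 and (h.is_trauma_center or not needs_trauma_care)]
    if not eligible:
        return None  # no capacity: escalate to regional secondary distribution
    choice = min(eligible, key=lambda h: h.travel_minutes)
    choice.open_beds -= 1  # the command center decrements tracked capacity
    return choice

# A critical casualty bypasses the closer community hospital in favor of
# the nearest trauma center that still has capacity.
hospitals = [Hospital("Community A", False, 12, 5.0),
             Hospital("Trauma Center B", True, 3, 11.0)]
print(assign(True, hospitals).name)  # Trauma Center B
```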
# Surge Capacities and Capabilities for Hospitals

# Introduction
The major challenges that hospitals will face in a mass casualty event (MCE) include surge capacity and capability issues in emergency and trauma services, as well as medical, paramedical, administrative, logistical, and security challenges. Difficult decisions will have to be made regarding the allocation of available resources. These decisions should reflect state and local regulations and be developed before an MCE.
# Common Challenges for Hospitals in Terrorist Bombing Aftermath

Terrorist use of explosives (TUE) often creates four distinct types of mass events: 1) mass casualty events, 2) mass fatality events, 3) mass anxiety events, and 4) mass onlooker events (e.g., families, media, curiosity seekers, volunteers, politicians, public officials). Hospital emergency leaders should consider these events and be prepared for their simultaneous occurrence.

# Predicting patient inflow

Hospitals should formulate contingency plans to deal with the initial surge of walking wounded patients. Less severely injured patients, including the walking wounded and worried well, often self-transport from the scene to the nearest hospital immediately after the event. These patients may

- not have been triaged by emergency medical services (EMS),
- arrive at the hospital before the more severely injured and may continue to arrive for several hours, and
- overwhelm the receiving hospital and delay treatment of more critically injured patients.
# Delays in declaring a mass casualty event
The three common delays in declaring an MCE that may complicate hospital surge capacity are:

- Late Incident Recognition: Incident recognition is the point in time at which hospital leadership becomes aware that a significant event is evolving. Limited or ineffective situational awareness is the main factor preventing adequate response.
- Delayed Notification and Activation: Delays in delivering lifesaving interventions and definitive care are caused by taking a reactive approach (partial, gradual, and linear activation of emergency systems). A proactive approach, which involves full and simultaneous activation of all emergency systems followed by gradual withdrawal based on gathered information, helps avoid delay.
- Linear Mobilization of Resources: Linear transition (a form of reactive approach) from normal operations to the appropriate response level causes delays. The transition should be proactive, simultaneous, and nonlinear in scale and scope. Extensive discussion and planning support linear activation and should be reserved until after the response.
# Time constraints
The response to a TUE-MCE requires rapid intervention and should be based primarily at the local level. Local emergency operation plans that are routinely exercised and integrated into regular operations will function effectively.
# Limited health care workforce

Health care workers may not report during an emergency, either because they cannot reach the facility or because they are concerned for their safety or that of family members. To minimize staffing shortages, planning must include provisions for the security of health care workers and their families. Not adequately addressing their concerns may lower the motivation for personnel to report to work.
# Poor triage
Commonly, the triage process will not function as expected because of stress that contributes to inaccurate triage decisions.

Protocols should be simple, short, realistic, workable, and practical. They should cover interaction with other key agencies; be evaluated continuously (threats, lessons learned, experiences); enable functioning as an integrated and unified system during an emergency; be easily compiled into binders, color-coded by type of incident; be located in an easily accessible place; and be revised as soon as new information compels a change in the plan and on predetermined revision dates.
# Surge capacity and capability map
Hospitals should develop a planning framework (surge capacity and capability map) that presents all available and relevant internal and external resources. This framework should be transparent, updated, and shared with key disaster response participants, both during preparedness and response.
# Exercises and drills

Prior to a TUE-MCE, mandatory regular exercises with executive officers, meta-leaders, community representatives, and all relevant agencies should be conducted. These drills should include annual, unannounced limited-scale exercises and the use of smart casualties (people posing as casualties).

Four levels of drills are valuable:

1. focal (vertical) exercise for tasks specific to mass casualty events,

The following principles should be incorporated into protocols:
- Develop flexible triage protocols based on uniform criteria for mass casualty triage18 (see Figure 1) and strive for systems that are:
  - simple, easy to remember, and amenable to quick memory aids;
  - applicable to all ages and patient populations; and
  - easily modified for changes in resource availability and patient conditions.
- Develop a color-coded patient lifesaving intervention (LSI) prioritization protocol: red (immediate), yellow (delayed), green (minimal), grey (expectant), and black (deceased); a minimal sketch of such a protocol appears after this list.
- Consider lifesaving interventions for each patient when:
  - equipment is readily available,
  - intervention is within the provider's scope of practice,
  - procedure can be quickly performed, and
  - continuing post-procedure care does not require the provider's presence at bedside.
- Reserve the use of imaging and lab testing for clinical conditions based on flexible triage protocols.
- Reduce provider documentation and other administrative responsibilities.
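To make the color-coded prioritization concrete, here is a minimal Python sketch of such a protocol. The category names come from the list above; the assessment inputs and decision order are illustrative placeholders, not the Model Uniform Core Criteria themselves:

```python
from enum import Enum

class TriageCategory(Enum):
    RED = "immediate"    # needs lifesaving intervention and likely to benefit
    YELLOW = "delayed"   # significant injuries but can tolerate a short wait
    GREEN = "minimal"    # walking wounded
    GREY = "expectant"   # unlikely to survive given available resources
    BLACK = "deceased"

def triage(breathing: bool, ambulatory: bool, obeys_commands: bool,
           survivable_with_resources: bool) -> TriageCategory:
    """Map a simplified field assessment onto the color codes above."""
    if not breathing:
        return TriageCategory.BLACK
    if not survivable_with_resources:
        return TriageCategory.GREY
    if ambulatory:
        return TriageCategory.GREEN
    if obeys_commands:
        return TriageCategory.YELLOW
    return TriageCategory.RED

print(triage(breathing=True, ambulatory=False, obeys_commands=False,
             survivable_with_resources=True))  # TriageCategory.RED
```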
# Figure 1: Model Uniform Core Criteria

[Figure 1 is not reproduced here; it outlines the Model Uniform Core Criteria for mass casualty triage, including a "Step 2 - Assess: Individual Assessment" stage.]
# Hospital Incident Command System
As an incident unfolds and details begin to emerge, the hospital management team should quickly transition from reactive to proactive management.

The Hospital Incident Command System (HICS) requires the following main components:

- Incident Commander: the individual who assumes overall authority and responsibility for the hospital preparedness and response to a mass casualty event. This individual is also responsible for activating the emergency plan.
- Command Center: the location from which the hospital's response to the emergency will be coordinated. This facility should be equipped with multiple alternative means of internal and external communication, a stable power source, and security.
- Operations Officer: the individual who gathers and builds an overall picture of the event and records any actions taken by the HICS.
- Liaison Officer: the individual responsible for communicating with external relevant agencies, including EMS, other health facilities, law enforcement, fire, search and rescue, public health, the local office of emergency preparedness, and the military.
- Public Information: the organizational hub for contact with the media and coordinated communication between the hospital and the public. Prepared, consistent messaging should be available through the HICS.
# Mass casualty event sites
Primary emergency response areas must be identified to handle the crisis. Specifically, these areas include

- hospital command center,
- triage sites: main and alternative,
- immediate treatment,
- delayed treatment,
- minor injuries,
- decontamination, and
- fatality management site.
Designated areas should also be named for the worried well, media, family, and visitors. A relative center can be used as a resource for individuals searching for information about their missing relatives. These centers can include water, food, mental health professionals, clergy, and simple pharmacy needs.
# Security
Hospitals should maintain control and security within their boundaries, as law enforcement resources may be severely taxed. Hospitals may be targets for bombing attacks, and security officials should maintain increased vigilance. Strategies may include

- increasing uniformed security presence,
- ensuring security for hospital personnel, supplies, and assets,
- enhancing monitoring of sensitive entry points and hospital surroundings,
- addressing crowd control and handling the influx o f individuals looking for missing relatives, and
- preventing terrorists from targeting hospital facilities.
# Recovery: Ending the emergency status
Just as preparing for rapidly transitioning from normal to emergency status is critical, preparing to quickly return to routine activities is also important. Debriefing of personnel can reduce event-related mental health effects and obtain input on lessons learned for use in future crises.
# Staff capacity
The capacity to mobilize adequate numbers of qualified personnel (particularly trauma teams) to care for victims is essential. The kinds of expertise required include emergency medicine, trauma surgery, intensive care, hospitalists, anesthesiology, otolaryngology, mental health, pharmacy, blood bank, radiology, pediatrics, nursing, administration, and support. Consider the following issues:

- compensate for extra hours and injuries,
- use licensed professionals outside their normal scopes of practice or license geography,
- call in off-duty hospital staff and arrange transportation,
- prepare to manage health professional volunteers and other untrained individuals who appear spontaneously,
- change staff scheduling as needed, and
- make mental health professionals available for first responders and staff counseling.
# Medical supplies
To increase access to supplies, pre-arranged contracts with commercial vendors should be activated.
Pre-equipped mobile carts reserved for disaster contingencies and critical equipment should be transported to specific MCE sites.
# Blood bank
# Space capacity
To care for a patient surge, hospitals must augment care space dramatically. Admission capacity will be determined by the number of available beds and hospital personnel. Specific considerations may include

- Facilitating Inflow: During an emergency event, the hospital experiences a massive inflow of patients.
- Space Availability: Hospital plans should be designed specifically to receive a massive inflow and distribute patients along predetermined routes to specified sites. Plans should consider the following areas:
  - Diversion: Divert additional patients to less-crowded facilities.
  - Emergency department: Discharge ED patients who can continue their care at home or another medical facility (reverse triage).
  - Intensive care unit (ICU): Move ICU patients who can safely be managed to other care units.
  - Hospital beds: Identify and discharge inpatients who may continue their care at home or another medical facility.
  - Operating rooms (ORs): Determine availability of ORs, cancel elective surgeries and procedures, and prepare multiple ORs for emergent procedures.
- Alternative Care Sites: Use pre-designated alternative care sites to increase space capacity, and add beds and supplementary equipment.
- Mass Mortuary Site: Establish a space to store bodies of patients who die; this space should be in or near the hospital but away from the ED.
# Victim tracking
In an MCE, hospitals are overwhelmed with a sudden influx of casualties and fatalities. Using a casualty tracking data system that is coordinated across all medical facilities is essential. The system should be capable of registering, documenting, and tracking victims to help make families' search for missing relatives as efficient as possible.
# Hospital decompression
Large numbers of casualties from a terrorist incident commonly self-refer or self-transport to hospitals in the immediate vicinity of the event. Three main approaches enable hospital facilities to prevent system collapse through decompression:
1. Outside Diversion: Additional casualties and other patients should be directed to other facilities that have sufficient capacity.
# Patient identification
In an MCE, hospitals are overwhelmed with a sudden influx of patients and fatalities. Data systems, coordinated across all medical facilities, can help hospitals register, document, and track victims.
Through this system, citizens can call any hospital throughout the region to locate family members.
The system could include the following features:

- digital photographs of each incoming victim with altered mental status upon arrival to the hospital;
- input of digital pictures and any descriptions of victims and their personal belongings into a computerized database;
- placement of each patient's belongings in a prepared sack that accompanies the patient at all times; and
- personnel or trained volunteers to staff telephones and assist with victim identification and family liaison.
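A minimal Python sketch of the kind of record such a coordinated database might hold appears below. All field names are assumptions for illustration; an operational system would also need privacy controls and inter-facility synchronization, which are out of scope here:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VictimRecord:
    record_id: str
    hospital: str
    arrival_time: datetime
    triage_category: str                   # e.g., "immediate", "delayed"
    photo_path: str | None = None          # digital photo if mental status is altered
    description: str = ""                  # clothing, distinguishing features
    belongings_sack_id: str | None = None  # sack that accompanies the patient
    identified_name: str | None = None     # filled in once identity is confirmed

# A region-wide registry keyed by record ID lets staff at any hospital
# search on behalf of families calling to locate relatives.
registry: dict[str, VictimRecord] = {}

def search_by_description(term: str) -> list[VictimRecord]:
    """Case-insensitive search over the free-text descriptions."""
    return [r for r in registry.values() if term.lower() in r.description.lower()]
```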
# Public Information
Leaders have much influence over the expectations, understanding, and responses of both individuals and communities to an MCE. The management of the acute situation sets the tone for the ways society will respond. Accurately describing ongoing efforts and successfully forecasting predictable events will enhance the credibility of authorities and diminish negative outcomes. Informing the public in a timely manner can decrease the flow of worried well patients and lessen demands on the health care system. This communications campaign should be a joint effort by EMS, hospitals, and other health care facilities.
# Conclusion
The purpose of this document is to prepare policy planners to respond to terrorist bombings and mass casualty events. The majority of the information focuses on first responders, hospital administrators, and hospital staff, as they are most likely to be affected.

The document discusses the concept of meta-leadership, which draws on natural leaders who work in various settings and use their skills to help direct both their own organizations' responses and the inter-organizational responses that will be critical to a successful response. This document offers these leaders interim guidance for developing plans to meet the needs of specific facilities and locations.

Effective preparation will help maintain critical systems and can improve both the clinical and psychological outcomes of the people affected by terrorist bombings and mass casualty events.
"id": "e06948c0693c0f0d8078d928e64ee1044c7a777a",
"source": "cdc",
"title": "None",
"url": "None"
} |
This report updates the 2000 recommendations by the Advisory Committee on Immunization Practices (ACIP) on the use of influenza vaccine and antiviral agents (MMWR 2000;49:1-38). The 2001 recommendations include new or updated information regarding a) the cost-effectiveness of influenza vaccination; b) the influenza vaccine supply; c) neuraminidase-inhibitor antiviral drugs; d) the 2001-2002 trivalent vaccine virus strains, which are A/Moscow/10/99 (H3N2)-like, A/New Caledonia/20/99 (H1N1)-like, and B/Sichuan/379/99-like strains; and e) extension of the optimal time period for vaccination through November. A link to this report and other information regarding influenza can be accessed at the website for the Influenza Branch, Division of Viral and Rickettsial Diseases, National Center for Infectious Diseases, CDC at .

# INTRODUCTION
Epidemics of influenza typically occur during the winter months and are responsible for an average of approximately 20,000 deaths per year in the United States (1,2). Influenza viruses also can cause pandemics, during which rates of illness and death from influenza-related complications can increase dramatically worldwide. Influenza viruses cause disease among all age groups (3-5). Rates of infection are highest among children, but rates of serious illness and death are highest among persons aged >65 years and persons of any age who have medical conditions that place them at increased risk for complications from influenza (3,6-8).
Influenza vaccination is the primary method for preventing influenza and its severe complications. In this report from the Advisory Committee on Immunization Practices (ACIP), the primary target groups recommended for annual vaccination are a) groups that are at increased risk for influenza-related complications (e.g., persons aged >65 years and persons of any age with certain chronic medical conditions); b) the group aged 50-64 years because this group has an elevated prevalence of certain chronic medical conditions; and c) persons who live with or care for persons at high risk (e.g., health-care workers and household members who have frequent contact with persons at high risk and can transmit influenza infections to these persons at high risk). Vaccination is associated with reductions in influenza-related respiratory illness and physician visits among all age groups, hospitalization and death among persons at high risk, otitis media among children, and work absenteeism among adults (9-18). Although influenza vaccination levels have increased substantially, further improvements in vaccine coverage levels are needed, particularly among persons at high risk aged <65 years. The ACIP recommends the use of strategies to improve vaccination levels, including the use of reminder/recall systems and standing orders programs (19,20). Although influenza vaccination remains the cornerstone for the control and treatment of influenza, updated information is also presented on antiviral medications because these agents are an adjunct to vaccine.
# Primary Changes in the Recommendations
These recommendations include five principal changes:
- Information regarding the cost-effectiveness of influenza vaccination has been added.
- Information regarding the influenza vaccine supply has been added.
- Information regarding neuraminidase-inhibitor antiviral drugs has been updated.
- The 2001-2002 trivalent vaccine virus strains are A/Moscow/10/99 (H3N2)-like, A/New Caledonia/20/99 (H1N1)-like, and B/Sichuan/379/99-like strains.
- The recommended optimal time period for vaccinating individuals is October-November.
# Influenza and Its Burden
# Biology of Influenza
Influenza A and B are the two types of influenza viruses that cause epidemic human disease (21). Influenza A viruses are further categorized into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Influenza B viruses are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have been in global circulation. Both influenza A and B viruses are further separated into groups on the basis of antigenic characteristics. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) resulting from point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses.

A person's immunity to the surface antigens, especially hemagglutinin, reduces the likelihood of infection and severity of disease if infection occurs (22). Antibody against one influenza virus type or subtype confers limited or no protection against another influenza virus type or subtype. Furthermore, antibody to one antigenic variant of influenza virus might not protect against a new antigenic variant of the same type or subtype (23). Frequent development of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and the reason for the incorporation of one or more new strains in each year's influenza vaccine.
# Clinical Signs and Symptoms of Influenza
Influenza viruses are spread from person-to-person primarily through the coughing and sneezing of infected persons (21). The incubation period for influenza is 1-4 days, with an average of 2 days (24). Persons can be infectious starting the day before symptoms begin through approximately 5 days after illness onset; children can be infectious for a longer period.

Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, severe malaise, nonproductive cough, sore throat, and rhinitis) (25). Respiratory illness caused by influenza is difficult to distinguish from illness caused by other respiratory pathogens on the basis of symptoms alone (see Role of Laboratory Diagnosis section). Reported sensitivity and specificity of clinical definitions for influenza-like illness that include fever and cough have ranged from 63% to 78% and 55% to 71%, respectively, compared with viral culture (26,27). Sensitivity and predictive value of clinical definitions can vary, depending on the degree of co-circulation of other respiratory pathogens and the level of influenza activity (28).

Influenza illness typically resolves after several days for most persons, although cough and malaise can persist for >2 weeks. In some persons, influenza can exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease), lead to secondary bacterial pneumonia or primary influenza viral pneumonia, or occur as part of a coinfection with other viral or bacterial pathogens (29). Influenza infection has also been associated with encephalopathy, transverse myelitis, Reye syndrome, myositis, myocarditis, and pericarditis (29).
# Hospitalizations and Deaths from Influenza
The risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, very young children, and persons of any age with certain underlying health conditions than among healthy older children and younger adults (1,30-33). Estimated rates of influenza-associated hospitalizations have varied substantially by age group in studies conducted during different influenza epidemics (Table 1).

Among children aged 0-4 years, hospitalization rates have ranged from approximately 500/100,000 population for those with high-risk conditions to 100/100,000 population for those without high-risk conditions (34,35). Within the 0-4 age group, hospitalization rates are highest among children aged 0-1 years and are comparable to rates found among persons aged >65 years (36,37) (Table 1).
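As a worked illustration of how such population rates translate into expected counts (the community size below is hypothetical, not from the cited studies):

```python
def expected_hospitalizations(rate_per_100k: float, population: int) -> float:
    """Convert a rate per 100,000 population into an expected count."""
    return rate_per_100k * population / 100_000

# Hypothetical community with 40,000 high-risk children aged 0-4 years:
print(expected_hospitalizations(500, 40_000))  # 200.0
```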
During influenza epidemics from 1969-1970 through 1994-1995, the estimated overall number of influenza-associated hospitalizations in the United States has ranged from approximately 16,000 to 220,000/epidemic. An average of approximately 114,000 influenza-related excess hospitalizations occurred per year, with 57% of all hospitalizations occurring among persons aged <65 years. Since the 1968 influenza A (H3N2) virus pandemic, the greatest numbers of influenza-associated hospitalizations have occurred during epidemics caused by type A(H3N2) viruses, with an estimated average of 142,000 influenza-associated hospitalizations per year (38).
During influenza epidemics, influenza-related deaths can result from pneumonia as well as from exacerbations of cardiopulmonary conditions and other chronic diseases. In studies of influenza epidemics occurring from 1972-1973 through 1994-1995, excess deaths (i.e., the number of influenza-related deaths above a projected baseline of expected deaths) occurred during 19 of 23 influenza epidemics (39) (Influenza Branch, Division of Viral and Rickettsial Diseases [DVRD], National Center for Infectious Diseases [NCID], CDC, unpublished data, 1998). During those 19 influenza seasons, estimated rates of influenza-associated deaths ranged from approximately 30 to >150 deaths/100,000 persons aged >65 years (Influenza Branch, DVRD, NCID, CDC, unpublished data, 1998). Older adults currently account for >90% of deaths attributed to pneumonia and influenza (40). From 1972-1973 through 1994-1995, >20,000 influenza-associated deaths were estimated to occur during each of 11 different U.S. epidemics, and >40,000 influenza-associated deaths were estimated for each of 6 of these 11 epidemics (39) (Influenza Branch, DVRD, NCID, CDC, unpublished data, 1998). In the United States, pneumonia and influenza deaths might be increasing in part because the number of elderly persons is increasing (41).
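Written as a formula, the parenthetical definition of excess deaths above amounts to the following (the notation is supplied here for orientation, not reproduced from the cited studies):

```latex
D_{\text{excess}} = \max\bigl(0,\; D_{\text{observed}} - D_{\text{baseline}}\bigr),
```

where \(D_{\text{baseline}}\) is the projected number of deaths expected in the absence of influenza activity for the same period.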
# Options for Controlling Influenza
In the United States, the main option for reducing the impact of influenza is immunoprophylaxis with inactivated (i.e., killed virus) vaccine (see Recommendations for the Use of Influenza Vaccine). Vaccinating persons at high risk for complications before the influenza season each year is the most effective means of reducing the impact of influenza. Vaccination coverage can be increased by administering vaccine to persons during hospitalizations or routine health-care visits before the influenza season, making special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains are well-matched, achieving increased vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) and among staff can reduce the risk for outbreaks by inducing herd immunity (14). Vaccination of health-care workers and other persons in close contact with persons in groups at high risk can also reduce transmission of influenza and subsequent influenza-related complications. The use of influenza-specific antiviral drugs for chemoprophylaxis or treatment of influenza is an important adjunct to vaccine (see Recommendations for the Use of Antiviral Agents for Influenza). However, antiviral medications are not a substitute for vaccination.
# Influenza Vaccine Composition
Influenza vaccine contains three strains (i.e., two type A and one type B), representing the influenza viruses likely to circulate in the United States in the upcoming winter. The vaccine is made from highly purified, egg-grown viruses that have been made noninfectious (i.e., inactivated) (42). Subvirion and purified surface-antigen preparations are available. Because the vaccine viruses are initially grown in embryonated hens' eggs, the vaccine might contain small amounts of residual egg protein. Influenza vaccine distributed in the United States might also contain thimerosal, a mercury-containing compound, as the preservative (43). Manufacturing processes differ by manufacturer. Certain manufacturers might use additional compounds to inactivate the influenza viruses, and they might use an antibiotic to prevent bacterial contamination. Package inserts should be consulted for additional information.

The trivalent influenza vaccine prepared for the 2001-2002 season will include A/Moscow/10/99 (H3N2)-like, A/New Caledonia/20/99 (H1N1)-like, and B/Sichuan/379/99-like antigens. For the A/Moscow/10/99 (H3N2)-like antigen, manufacturers will use the antigenically equivalent A/Panama/2007/99 (H3N2) virus; and for the B/Sichuan/379/99-like antigen, they will use one of the antigenically equivalent viruses B/Johannesburg/5/99, B/Victoria/504/2000, or B/Guangdong/120/2000. These viruses will be used because of their growth properties and because they are representative of currently circulating A (H3N2) and B viruses.
# Effectiveness of Inactivated Influenza Vaccine
The effectiveness of influenza vaccine depends primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the viruses in the vaccine and those in circulation. Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers (44,45). These antibody titers are protective against illness caused by strains similar to those in the vaccine (45-47). When the vaccine and circulating viruses are antigenically similar, influenza vaccine prevents influenza illness in approximately 70%-90% of healthy persons aged <65 years (48). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including the use of antibiotics, when the vaccine and circulating viruses are well-matched (10-13,49,50). Other studies suggest that the use of trivalent inactivated influenza vaccine decreases the incidence of influenza-associated otitis media and the use of antibiotics among children (17,18). Elderly persons and persons with certain chronic diseases might develop lower postvaccination antibody titers than healthy young adults and thus can remain susceptible to influenza-related upper respiratory tract infection (51-53). However, among such persons, the vaccine can be effective in preventing secondary complications and reducing the risk for influenza-related hospitalization and death (14-16). Among elderly persons living outside of nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (16,54). Among elderly persons residing in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and deaths. Among this population, the vaccine can be 50%-60% effective in preventing hospitalization or pneumonia and 80% effective in preventing death, even though the effectiveness in preventing influenza illness often ranges from 30% to 40% (55,56).
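Effectiveness percentages such as the 70%-90% figure are conventionally computed from attack rates in vaccinated and unvaccinated groups; the standard epidemiologic definition, provided here for context rather than quoted from this report, is:

```latex
VE = \frac{AR_{U} - AR_{V}}{AR_{U}} \times 100\%,
```

where \(AR_U\) and \(AR_V\) are the influenza attack rates among unvaccinated and vaccinated persons, respectively. For example, hypothetical attack rates of 10% among unvaccinated and 2% among vaccinated persons yield VE = 80%.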
# Cost-Effectiveness of Influenza Vaccine
Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. Economic studies of influenza vaccination of persons aged >65 years conducted in the United States have found overall societal cost-savings and substantial reductions in hospitalization and death (16,54,57). Among persons aged >65 years, vaccination resulted in a net savings per quality-adjusted-life-year (QALY) gained; among younger age groups, it resulted in costs of $23-$256/QALY. Additional studies of the relative cost-effectiveness and cost-utility of influenza vaccination among children and among adults aged <65 years are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, and vaccine efficacy when evaluating the long-term costs and benefits of annual vaccination.
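Cost-per-QALY figures such as $23-$256/QALY are conventionally reported as incremental cost-effectiveness ratios. The standard definition, given here for orientation rather than quoted from the cited studies, is:

```latex
\text{ICER} = \frac{C_{\text{vaccination}} - C_{\text{no vaccination}}}
                   {E_{\text{vaccination}} - E_{\text{no vaccination}}},
```

where \(C\) denotes total costs and \(E\) denotes effectiveness in QALYs; a negative numerator with a positive denominator corresponds to the "net savings" described above.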
# Vaccination Coverage Levels
Among persons aged >65 years, influenza vaccination levels increased from 33% in 1989 (61) to 63% in 1997 and 1998 (62), surpassing the Healthy People 2000 goal of 60% (63). Although influenza vaccination coverage increased through 1997 among black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (62,64). In 1998, influenza vaccination rates among persons aged >65 years were 66% among non-Hispanic whites, 46% among non-Hispanic blacks, and 50% among Hispanics (62).

Possible reasons for the increase in influenza vaccination levels among persons aged >65 years through 1997 include greater acceptance of preventive medical services by practitioners; increased delivery and administration of vaccine by health-care providers and sources other than physicians; new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety; and the initiation of Medicare reimbursement for influenza vaccination in 1993 (9,15,16,55,56,65,66). Continued monitoring is needed to determine if vaccination coverage among persons aged >65 years has reached a peak or plateau. The Healthy People 2010 objective is to achieve vaccination coverage for 90% of persons aged >65 years (67).

In 1997 and 1998, vaccination rate estimates among nursing home residents were 64%-82% and 83%, respectively (68,69). The Healthy People 2010 goal is to achieve influenza vaccination of 90% of nursing home residents, an increase from the Healthy People 2000 goal of 80% (63,67).

In 1998, the overall vaccination rate for adults aged 18-64 years with high-risk conditions was 31%, far short of the Healthy People 2000 goal of 60% (62,63). Among persons aged 50-64 years, 43% of those with chronic medical conditions and 29% of those without chronic medical conditions received influenza vaccine. Only 23% of adults younger than 50 years with high-risk conditions were vaccinated (National Immunization Program [NIP], CDC, unpublished data, 2000).

Reported vaccination rates of children at high risk are low. One study conducted among patients in health maintenance organizations found influenza vaccination rates ranging from 9% to 10% among asthmatic children (70), and a rate of 25% was found among children with severe-to-moderate asthma who attended an allergy and immunology clinic (71). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use.

Annual vaccination is recommended for health-care workers. Nonetheless, the National Health Interview Survey found vaccination rates of only 34% and 37% among health-care workers in the 1997 and 1998 surveys, respectively (72; NIP, CDC, unpublished data, 2001). Vaccination of health-care workers has been associated with reduced work absenteeism (10) and fewer deaths among nursing home patients (73,74).

Limited information is available regarding the use of influenza vaccine among pregnant women. Among women aged 18-44 years without diabetes responding to the 1999 Behavioral Risk Factor Surveillance Survey, those reporting they were pregnant were less likely to report influenza vaccination in the past 12 months (9.6%) than those not pregnant (15.7%). Vaccination coverage among pregnant women did not significantly change during 1997-1999, whereas coverage among nonpregnant women increased from 14.4% in 1997. Though not directly measuring influenza vaccination among women who were past the second trimester of pregnancy during influenza season, these data indicate low compliance with the ACIP recommendations for pregnant women (75). In a study of influenza vaccine acceptance by pregnant women, 71% of those offered the vaccine chose to be vaccinated (76). However, a 1999 survey of obstetricians and gynecologists determined that only 39% gave influenza vaccine to obstetric patients, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases in the last two trimesters (77).
# RECOMMENDATIONS FOR THE USE OF INFLUENZA VACCINE
Influenza vaccine is strongly recommended for any person aged >6 months who, because of age or underlying medical condition, is at increased risk for complications of influenza. In addition, health-care workers and other individuals (including household members) in close contact with persons at high risk should be vaccinated to decrease the risk for transmitting influenza to persons at high risk. Influenza vaccine also can be administered to any person aged >6 months to reduce the chance of becoming infected with influenza.
# Target Groups for Vaccination
# Persons at Increased Risk for Complications
Vaccination is recommended for the following groups of persons who are at increased risk for complications from influenza:
- persons aged >65 years;
- residents of nursing homes and other chronic-care facilities that house persons of any age who have chronic medical conditions;
- adults and children who have chronic disorders of the pulmonary or cardiovascular systems, including asthma;
- adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications or by human immunodeficiency virus);
- children and teenagers (aged 6 months-18 years) who are receiving long-term aspirin therapy and, therefore, might be at risk for developing Reye syndrome after influenza infection; and
- women who will be in the second or third trimester of pregnancy during the influenza season.

# Persons Aged 50-64 Years

Vaccination also is recommended for persons aged 50-64 years because a substantial proportion of this group has one or more high-risk medical conditions (CDC, unpublished data, 2000). Influenza vaccine has been recommended for this entire age group to raise the low vaccination rates among persons in this age group with high-risk conditions. Age-based strategies have been more successful in increasing vaccine coverage than patient-selection strategies based on medical conditions. Persons aged 50-64 years without high-risk conditions also receive benefit from vaccination in the form of decreased rates of influenza illness, decreased work absenteeism, and decreased need for medical visits and medication, including antibiotics (10-13 ). Further, 50 years is an age when other preventive services begin and when routine assessment of vaccination and other preventive services has been recommended (78,79 ).
# Persons Who Can Transmit Influenza to Those at High Risk
Persons who are clinically or subclinically infected can transmit influenza virus to persons at high risk for complications from influenza. Decreasing transmission of influenza from caregivers to persons at high risk might reduce influenza-related deaths among persons at high risk. Evidence from two studies indicates that vaccination of health-care workers is associated with decreased deaths among nursing home patients (73,74 ). Vaccination of health-care workers and others in close contact with persons at high risk, including household members, is recommended. The following groups should be vaccinated:
- physicians, nurses, and other personnel in both hospital and outpatient-care settings, including emergency response workers;
- employees of nursing homes and chronic-care facilities who have contact with patients or residents;
- employees of assisted living and other residences for persons in groups at high risk;
- persons who provide home care to persons in groups at high risk; and
- household members (including children) of persons in groups at high risk.
# Influenza Vaccine Supply
In 2000, difficulties with growing and processing the influenza A (H3N2) vaccine strain and other manufacturing problems resulted in substantial delays in the distribution of the 2000-2001 influenza vaccine (80 ). In October 2000, ACIP recommended that persons at highest risk of influenza-related complications (i.e., persons aged >65 years and those aged <65 years with high-risk medical conditions) and health-care workers receive vaccine first. ACIP also recommended that special efforts be made to vaccinate all persons aged 50-64 years, beginning in December, and to continue efforts to vaccinate groups at high risk through December and later (81 ). The possibility of future influenza vaccine delivery delays or vaccine shortages remains. Steps to address such situations include identification and implementation of ways to strengthen the influenza vaccine supply, to improve targeted delivery of vaccine to groups at high risk, and to further encourage the administration of vaccine throughout the influenza season.
# Additional Information Regarding Vaccination of Specific Populations

# Pregnant Women
Influenza-associated excess deaths among pregnant women were documented during the pandemics of 1918-1919 and 1957-1958 (82-85 ). Case reports and limited studies also suggest that pregnancy can increase the risk for serious medical complications of influenza as a result of increases in heart rate, stroke volume, and oxygen consumption; decreases in lung capacity; and changes in immunologic function (86-89 ). A study of the impact of influenza during 17 interpandemic influenza seasons demonstrated that the relative risk for hospitalization for selected cardiorespiratory conditions among pregnant women enrolled in Medicaid increased from 1.4 during weeks 14-20 of gestation to 4.7 during weeks 37-42 in comparison with women who were 1-6 months postpartum (90 ). Women in their third trimester of pregnancy were hospitalized at a rate (i.e., 250/100,000 pregnant women) comparable with that of nonpregnant women who had high-risk medical conditions. Using data from this study, researchers estimated that an average of 1-2 hospitalizations could be prevented for every 1,000 pregnant women vaccinated. Women who will be beyond the first trimester of pregnancy (>14 weeks' gestation) during the influenza season should be vaccinated. Pregnant women who have medical conditions that increase their risk for complications from influenza should be vaccinated before the influenza season, regardless of the stage of pregnancy.
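The arithmetic behind the estimate of 1-2 hospitalizations prevented per 1,000 vaccinated women can be illustrated with a short calculation. The sketch below is not from the cited study: the vaccine-effectiveness value is an assumed, hypothetical figure chosen only to show how the reported hospitalization rate translates into prevented hospitalizations.

```python
# Illustrative arithmetic only. The third-trimester hospitalization rate is
# quoted from the text above; the 60% vaccine effectiveness is a hypothetical
# assumption, not a figure from the cited study.
hospitalization_rate = 250 / 100_000      # 250 per 100,000 pregnant women
assumed_effectiveness = 0.60              # hypothetical effectiveness

prevented_per_1000 = hospitalization_rate * assumed_effectiveness * 1_000
print(f"Prevented per 1,000 vaccinated: {prevented_per_1000:.1f}")
# ~1.5, within the 1-2 per 1,000 range estimated by the researchers
```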
Because currently available influenza vaccine is an inactivated vaccine, experts consider influenza vaccination safe during any stage of pregnancy. A study of influenza vaccination of >2,000 pregnant women demonstrated no adverse fetal effects associated with influenza vaccine (91 ). However, additional data are needed to confirm the safety of vaccination during pregnancy. Some experts prefer to administer influenza vaccine during the second trimester to avoid a coincidental association with spontaneous abortion, which is common in the first trimester, and because exposures to vaccines traditionally have been avoided during the first trimester.
Influenza vaccine distributed in the United States contains thimerosal, a mercury-containing compound, as a preservative. This preservative has been used in U.S. vaccines since the 1930s. No data or evidence exists of any harm caused by the level of mercury exposure that might occur from influenza vaccination. Because pregnant women are at increased risk for influenza-related complications and because a substantial safety margin has been incorporated into the health guidance values for organic mercury exposure, the benefit of influenza vaccine outweighs the potential risks for thimerosal (92,93 ).
# Persons Infected with HIV
Limited information is available regarding the frequency and severity of influenza illness or the benefits of influenza vaccination among persons with HIV infection (94,95 ). However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program found that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than during the peri-influenza periods. The risk for hospitalization was higher for HIV-infected women than for women with other well-recognized high-risk conditions, including chronic heart and lung diseases (96 ). Another study estimated that the risk for influenza-related death was 9.4-14.6/10,000 persons with AIDS compared with rates of 0.09-0.10/10,000 among all persons aged 25-54 years and 6.4-7.0/10,000 among persons aged >65 years (97 ). Other reports demonstrate that influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIV-infected persons (98,99 ). Influenza vaccination has been shown to produce substantial antibody titers against influenza in vaccinated HIV-infected persons who have minimal acquired immunodeficiency syndrome-related symptoms and high CD4+ T-lymphocyte cell counts (100-103 ). A small, randomized, placebo-controlled trial found that influenza vaccine was highly effective in preventing symptomatic, laboratory-confirmed influenza infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3; a limited number of persons with CD4+ T-lymphocyte cell counts of <200 were included in that study (95 ). Among patients who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, influenza vaccine might not induce protective antibody titers (102,103 ); a second dose of vaccine does not improve the immune response in these persons (103,104 ).
One study found that HIV RNA levels increased transiently in one HIV-infected patient after influenza infection (105 ). Studies have demonstrated a transient (i.e., 2-4-week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (102,106 ). Other studies using similar laboratory techniques have not documented a substantial increase in the replication of HIV (107-109 ). Deterioration of CD4+ T-lymphocyte cell counts or progression of HIV disease has not been demonstrated among HIV-infected persons after influenza vaccination compared with unvaccinated persons (103,110 ). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza infection or influenza vaccination (94,111 ). Because influenza can result in serious illness and because influenza vaccination can result in the production of protective antibody titers, vaccination will benefit HIV-infected patients, including HIV-infected pregnant women.
# Breastfeeding Mothers
Influenza vaccine does not affect the safety of mothers who are breastfeeding or their infants. Breastfeeding does not adversely affect the immune response and is not a contraindication for vaccination.
# Travelers
The risk for exposure to influenza during travel depends on the time of year and destination. In the tropics, influenza can occur throughout the year. In the temperate regions of the Southern Hemisphere, the majority of influenza activity occurs during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large organized tourist groups that include persons from areas of the world where influenza viruses are circulating. Persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to
- travel to the tropics;
- travel with large organized tourist groups at any time of year; or
- travel to the Southern Hemisphere during April-September.
No information is available regarding the benefits of revaccinating persons before summer travel who were already vaccinated in the preceding fall. Persons at high risk who received the previous season's vaccine before travel should be revaccinated with the current vaccine in the following fall or winter. Persons aged >50 years and others at high risk might wish to consult with their physicians before embarking on travel during the summer to discuss the symptoms and risks for influenza and the advisability of carrying antiviral medications for either prophylaxis or treatment of influenza.
# General Population
In addition to the groups for which annual influenza vaccination is recommended, physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza (the vaccine can be administered to children as young as age 6 months), depending on vaccine availability (see Vaccine Supply). Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics.
# Persons Who Should Not Be Vaccinated
Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Prophylactic use of antiviral agents is an option for preventing influenza among such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications of influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Information regarding vaccine components can be found in package inserts from each manufacturer.
Persons with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate the use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.
# Timing of Annual Vaccination
The optimal time to vaccinate persons in groups at high risk is usually during October-November. However, to avoid missed opportunities for vaccination, influenza vaccine should be offered to persons at high risk when they are seen by health-care providers for routine care or are hospitalized in September, provided that vaccine is available. In addition, health-care providers should also continue to offer vaccine to unvaccinated persons after November and throughout the influenza season even after influenza activity has been documented in the community. In the United States, seasonal influenza activity can begin to increase as early as November or December but has not reached peak levels in the majority of recent seasons until late December through early March (Table 2) (81,112 ). Therefore, although the timing of influenza activity can vary by region, vaccine administered after November is likely to be beneficial in most influenza seasons. Adults develop peak antibody protection against influenza infection 2 weeks after vaccination (113,114 ).
Persons planning substantial organized vaccination campaigns might consider scheduling these events after mid-October. Although influenza vaccine generally becomes available by September, the availability of vaccine in any location cannot be ensured consistently in the early fall. Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable. In facilities housing elderly persons (e.g., nursing homes), vaccination before October generally should be avoided because antibody levels in such individuals can begin to decline within a few months after vaccination (115,116 ). (For information regarding vaccination of travelers, see Travelers.)
# Dosage
Dosage recommendations vary according to age group (Table 3). Among previously unvaccinated children aged <9 years, two doses administered at least 1 month apart are recommended for satisfactory antibody responses. If possible, the second dose should be administered before December. Among adults, studies have indicated little or no improvement in antibody response when a second dose is administered during the same season (117-120 ). Even when the current influenza vaccine contains one or more of the antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines during the year following vaccination (115,116 ).
Vaccine prepared for a previous influenza season should not be administered to provide protection for the current season.
# Use of Inactivated Influenza Vaccine Among Children
Of the three influenza vaccines currently licensed in the United States, two (Flushield™, from Wyeth Laboratories, Inc., and Fluzone® split, from Aventis Pasteur, Inc.) are approved for use among persons aged >6 months. The third, Fluvirin® (Evans Vaccines Ltd.), is labeled in the United States for use only among persons aged >4 years because its efficacy among younger persons has not been demonstrated. Providers should use influenza vaccine that has been approved for vaccinating children aged 6 months-3 years.
# Route
The intramuscular route is recommended for influenza vaccine. Adults and older children should be vaccinated in the deltoid muscle. A needle length >1 inch can be considered for these age groups because needles <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (121 ). Infants and young children should be vaccinated in the anterolateral aspect of the thigh (122 ).
# Side Effects and Adverse Reactions
When educating patients regarding potential side effects, clinicians should emphasize that a) inactivated influenza vaccine contains noninfectious killed viruses and cannot cause influenza; and b) coincidental respiratory disease unrelated to influenza vaccination can occur after vaccination.
# Local Reactions
In placebo-controlled blinded studies, the most frequent side effect of vaccination is soreness at the vaccination site (affecting 10%-64% of patients) that lasts <2 days (123-125 ). These local reactions generally are mild and rarely interfere with the person's ability to conduct usual daily activities.
# Systemic Reactions
Fever, malaise, myalgia, and other systemic symptoms can occur following vaccination and most often affect persons who have had no prior exposure to the influenza virus antigens in the vaccine (e.g., young children) (126,127 ). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days.
Recent placebo-controlled trials demonstrate that among elderly persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (123,125 ).
Immediate (presumably allergic) reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination (128 ). These reactions probably result from hypersensitivity to some vaccine component; most reactions likely are caused by residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses to egg protein, might also be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician should be considered. Protocols have been published for safely administering influenza vaccine to persons with egg allergies (129,130 ).
Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, most patients do not develop reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (131,132 ). When reported, hypersensitivity to thimerosal usually has consisted of local, delayed-type hypersensitivity reactions (131 ).
# Guillain-Barré Syndrome
The 1976 swine influenza vaccine was associated with an increased frequency of Guillain-Barré syndrome (GBS) (133,134 ). Among persons who received the swine influenza vaccine in 1976, the rate of GBS that exceeded the background rate was <10 cases/1,000,000 persons vaccinated. Evidence for a causal relationship of GBS with subsequent vaccines prepared from other influenza viruses is unclear. Obtaining strong epidemiologic evidence for a possible small increase in risk is difficult for such a rare condition as GBS, which has an annual incidence of 10-20 cases/1,000,000 adults (135 ), and stretches the limits of epidemiologic investigation. More definitive data probably will require the use of other methodologies (e.g., laboratory studies of the pathophysiology of GBS).
During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were slightly elevated but were not statistically significant in any of these studies (136-138 ). However, in a study of the 1992-1993 and 1993-1994 seasons, the overall relative risk for GBS was 1.7 (95% confidence interval = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately 1 additional case of GBS/1,000,000 persons vaccinated. The combined number of GBS cases peaked 2 weeks after vaccination (139 ). Thus, investigations to date indicate no substantial increase in GBS associated with influenza vaccines (other than the swine influenza vaccine in 1976) and that, if influenza vaccine does pose a risk, it is probably slightly more than one additional case per million persons vaccinated. Cases of GBS after influenza infection have been reported, but no epidemiologic studies have documented such an association (140,141 ). Substantial evidence exists that several infectious illnesses, most notably Campylobacter jejuni, as well as upper respiratory tract infections in general, are associated with GBS (135,142-144 ).
Even if GBS were a true side effect of vaccination in the years after 1976, the estimated risk for GBS of approximately 1 additional case/1,000,000 persons vaccinated is substantially less than the risk for severe influenza, which could be prevented by vaccination among all age groups, especially persons aged >65 years and those who have medical indications for influenza vaccination (Table 1) (see Hospitalizations and Deaths from Influenza). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death greatly outweigh the possible risks for developing vaccine-associated GBS. The average case-fatality ratio for GBS is 6% and increases with age (135,145 ). No evidence indicates that the case-fatality ratio for GBS differs among vaccinated persons and those not vaccinated. The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently developing GBS than persons without such a history (136,146 ). Thus, the likelihood of coincidentally developing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is not known; therefore, avoiding vaccinating persons who are not at high risk for severe influenza complications and who are known to have developed GBS within 6 weeks after a previous influenza vaccination is prudent. As an alternative, physicians might consider the use of influenza antiviral chemoprophylaxis for these persons. Although data are limited, for most persons who have a history of GBS and who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination.
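To make the relative magnitudes concrete, the following sketch compares the expected background GBS cases with the estimated vaccine-associated excess in a hypothetical cohort; the cohort size is arbitrary, and the rates are those quoted above.

```python
# Expected GBS cases in a hypothetical cohort of 10 million vaccinated adults,
# using the background incidence and excess risk quoted above.
cohort_size = 10_000_000
background_low, background_high = 10 / 1_000_000, 20 / 1_000_000  # annual incidence
excess_risk = 1 / 1_000_000                                       # per vaccinee

print(f"Background cases/year: {background_low * cohort_size:.0f}-"
      f"{background_high * cohort_size:.0f}")
print(f"Estimated vaccine-associated excess cases: {excess_risk * cohort_size:.0f}")
# 100-200 background cases vs. ~10 excess cases among 10 million vaccinees
```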
# Simultaneous Administration of Other Vaccines, Including Childhood Vaccines
The target groups for influenza and pneumococcal vaccination overlap considerably (147 ). For persons at high risk who have not previously been vaccinated with pneumococcal vaccine, health-care providers should strongly consider administering pneumococcal and influenza vaccines concurrently. Both vaccines can be administered at the same time at different sites without increasing side effects (148,149 ). However, influenza vaccine is administered each year, whereas pneumococcal vaccine is not. A patient's verbal history is acceptable for determining prior pneumococcal vaccination status. When indicated, pneumococcal vaccine should be administered to patients who are uncertain regarding their vaccination history (147 ). Children at high risk for influenza-related complications can receive influenza vaccine at the same time they receive other routine vaccinations.
# Strategies for Implementing These Recommendations in Health-Care Settings
Successful vaccination programs combine publicity and education for health-care workers and other potential vaccine recipients, a plan for identifying persons at high risk, use of reminder/recall systems, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine (19 ). Use of standing orders programs is recommended for long-term care facilities (e.g., nursing homes and skilled nursing facilities) under the supervision of a medical director to ensure the administration of recommended vaccinations for adults. Other settings (e.g., inpatient and outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, adult workplaces, and home health-care agencies) are encouraged to introduce standing orders programs as well (20 ). Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.
# Outpatient Facilities Providing Ongoing Care
Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccination.
# Outpatient Facilities Providing Episodic or Acute Care
Acute health-care facilities (e.g., emergency rooms and walk-in clinics) should offer vaccinations to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.
# Nursing Homes and Other Residential Long-Term Care Facilities
Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility or anytime afterwards. All residents should be vaccinated at one time, preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated at the time of admission.
# Acute-Care Hospitals
Persons of all ages (including children) with high-risk conditions and persons aged >50 years who are hospitalized at any time during September-March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. In one study, 39%-46% of patients hospitalized during the winter with influenza-related diagnoses had been hospitalized during the preceding autumn (150 ). Thus, the hospital serves as a setting in which persons at increased risk for subsequent hospitalization can be identified and vaccinated. Use of standing orders in this setting has been successful in increasing vaccination of hospitalized persons (151 ).
# Visiting Nurses and Others Providing Home Care to Persons at High Risk
Nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.
# Other Facilities Providing Services to Persons Aged >50 Years
Such facilities as assisted-living facilities, retirement communities, and recreation centers should offer unvaccinated residents and attendees vaccine on site before the influenza season. Staff education should emphasize the need for influenza vaccine.
# Health-Care Workers
Before the influenza season, health-care facilities should offer influenza vaccinations to all personnel, including night and weekend staff. Particular emphasis should be placed on providing vaccinations for persons who care for members of groups at high risk. Efforts should be made to educate health-care workers regarding the benefits of vaccination and the potential health consequences of influenza illness for themselves and their patients. Measures should be taken to provide all health-care workers convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs.
# Evolving Developments Related to Influenza Vaccine

# Potential New Vaccines
Intranasally administered, cold-adapted, live, attenuated influenza virus vaccines (LAIVs) are being used in Russia and have been under development in the United States since the 1960s (152-156 ). The viruses in these vaccines replicate in the upper respiratory tract and elicit a specific protective immune response. LAIVs have been studied as monovalent, bivalent, and trivalent formulations (155,156 ). LAIVs consist of live viruses that induce minimal symptoms (i.e., attenuated) and that replicate poorly at temperatures found in the lower respiratory tract (i.e., temperature-sensitive). Possible advantages of LAIVs are their potential to induce a broad mucosal and systemic immune response, ease of administration, and the acceptability of an intranasal route of administration compared with injectable vaccines. In a 5-year study that compared trivalent inactivated vaccine and bivalent LAIVs (administered by nose drops) and that used related but different vaccine strains, the two vaccines were found to be approximately equivalent in terms of effectiveness (157 ). In a recent study of children aged 15-71 months, an intranasally administered trivalent LAIV was 93% effective in preventing culture-positive influenza A (H3N2) and B infections, reduced otitis media among vaccinated children by 30%, and reduced otitis media with concomitant antibiotic use by 35% compared with unvaccinated children (158 ). In a follow-up study during the 1997-1998 season, the trivalent LAIV was 86% effective in preventing culture-positive influenza among children, despite a poor match between the vaccine's influenza A (H3N2) component and the predominant circulating influenza A (H3N2) virus (159 ). A study conducted among healthy adults during the same season found a 9%-24% reduction in febrile respiratory illnesses and a 13%-28% reduction in lost work days (160 ). No study has directly compared the efficacy or effectiveness of trivalent inactivated vaccine and trivalent LAIV.
# Potential Addition of Young Children to Groups Recommended for Vaccination
During 1998, the ACIP formed a working group to explore issues related to the potential expansion of recommendations for the use of influenza vaccine. The ACIP influenza working group is considering the impact of influenza among young children as well as the potential safety issues and logistic and economic consequences of recommending routine vaccination of young healthy children.
Studies indicate that rates of hospitalization are higher among young children than older children when influenza viruses are in circulation (34,36,37,161,162 ). The increased rates of hospitalization are comparable with rates for other groups at high risk.
However, the interpretation of these findings has been confounded by cocirculation of respiratory syncytial viruses, which are a cause of serious respiratory viral illness among children and which frequently circulate during the same time as influenza viruses (163-165 ). Recent studies have attempted to separate the effects of respiratory syncytial viruses and influenza viruses on rates of hospitalization among children aged <5 years who do not have high-risk conditions (36,37 ). Both studies indicate that otherwise healthy children aged <2 years, and possibly children aged 2-4 years, are at increased risk for influenza-related hospitalization compared with older healthy children (Table 1).
Because very young healthy children are at increased risk for influenza-related hospitalization, the ACIP is studying the benefits, risks, economic consequences, and logistic issues associated with routine immunization of this age group. Meanwhile, ACIP continues to support vaccination of healthy children aged >6 months whose parents wish to decrease their child's risk for influenza infection, in addition to vaccinating children with high-risk medical conditions.
# RECOMMENDATIONS FOR THE USE OF ANTIVIRAL AGENTS FOR INFLUENZA
Antiviral drugs for influenza are an adjunct to influenza vaccine for the control and prevention of influenza. However, these agents are not a substitute for vaccination. Four currently licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir.
Amantadine and rimantadine are chemically related antiviral drugs with activity against influenza A viruses but not influenza B viruses. Amantadine was approved in 1966 for prophylaxis of influenza A (H2N2) infection and was later approved in 1976 for the treatment and prophylaxis of influenza type A virus infections among adults and children aged >1 year. Rimantadine was approved in 1993 for treatment and prophylaxis of infection among adults and prophylaxis among children. Although rimantadine is approved only for prophylaxis of infection among children, certain experts in the management of influenza consider it appropriate for treatment among children (see American Academy of Pediatrics, 2000 Red Book, in Additional Information Regarding Influenza Infection Control Among Specific Populations).
Zanamivir and oseltamivir are neuraminidase inhibitors with activity against both influenza A and B viruses. Both zanamivir and oseltamivir were approved in 1999 for the treatment of uncomplicated influenza infections. Zanamivir is approved for treatment of persons aged >7 years, and oseltamivir is approved for treatment of persons aged >1 year. In 2000, oseltamivir was approved for prophylaxis of persons aged >13 years.
The four drugs differ in terms of their pharmacokinetics, side effects, and costs. An overview of the indications, use, administration, and known primary side effects of these medications is presented in the following sections. Information contained in this report might not represent Food and Drug Administration approval or approved labeling for the antiviral agents described. Package inserts should be consulted for additional information.
# Role of Laboratory Diagnosis
Appropriate treatment of patients with respiratory illness depends on accurate and timely diagnosis. The early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to influenza, bacterial infections should be considered and appropriately treated if suspected. In addition, bacterial infections can occur as a complication of influenza. Influenza surveillance information as well as diagnostic testing can aid clinical judgment and help guide treatment decisions. Influenza surveillance by state and local health departments and CDC can provide information regarding the presence of influenza viruses in the community. Surveillance can also identify the predominant circulating types, subtypes, and strains of influenza.
Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, and immunofluorescence (24 ). Sensitivity and specificity of any test for influenza might vary by the laboratory that performs the test and by the type of test used. As with any diagnostic test, results should be evaluated in the context of other clinical information available to the physician.
Several commercial rapid diagnostic tests are available that can be used by laboratories in outpatient settings to detect influenza viruses within 30 minutes (24,166 ). These rapid tests differ in the types of influenza virus they can detect and whether or not they can distinguish between influenza types. Different tests can detect a) only influenza A viruses; b) both influenza A and B viruses without distinguishing between the two types; or c) both influenza A and B viruses and distinguish between the two. Sensitivity and specificity of rapid tests are lower than for viral culture and vary by test. In addition, the types of specimens acceptable for use (i.e., throat swab, nasal wash, or nasal swab) also vary. Package inserts and the laboratory performing the test should be consulted for more details.
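One reason test results must be weighed against surveillance data is that the predictive value of a positive rapid test depends strongly on how much influenza is circulating. The sketch below applies Bayes' rule with assumed, hypothetical test characteristics and prevalence values (not figures for any specific product) to illustrate the point.

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Bayes' rule: probability of influenza given a positive test result."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical test characteristics, chosen for illustration only.
for prevalence in (0.02, 0.30):  # low off-season vs. peak influenza activity
    ppv = positive_predictive_value(sensitivity=0.70, specificity=0.90,
                                    prevalence=prevalence)
    print(f"Prevalence {prevalence:.0%}: PPV = {ppv:.0%}")
# A positive result is far more informative when influenza is known to be circulating.
```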
Despite the availability of rapid diagnostic tests, the collection of clinical specimens for viral culture is critical, because only culture isolates can provide specific information regarding circulating influenza subtypes and strains. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and prophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor the emergence of antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat.
# Indications for Use

# Treatment
When administered within 2 days of illness onset to otherwise healthy adults, amantadine and rimantadine can reduce the duration of uncomplicated influenza A illness, and zanamivir and oseltamivir can reduce the duration of uncomplicated influenza A and B illness, by approximately 1 day (49,167-180 ). More clinical data are available concerning the effectiveness of zanamivir and oseltamivir for treatment of influenza A infection than for treatment of influenza B infection (169,174-179,181-184 ). However, in vitro data (185-190 ), studies of treatment among mice and ferrets (186,187,191,192 ), and clinical studies have documented that zanamivir and oseltamivir have activity against influenza B viruses (173,177-179,183,184 ).
None of the four antiviral agents has been demonstrated to be effective in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases). Evidence for the effectiveness of these four antiviral drugs is based principally on studies of patients with uncomplicated influenza (193 ). Data are limited and inconclusive concerning the effectiveness of amantadine, rimantadine, zanamivir, and oseltamivir for treatment of influenza among persons at high risk for serious complications of influenza (167,169,170,172,173,180,194-197 ). Fewer studies of the efficacy of influenza antivirals have been conducted among pediatric populations compared with adults (167,170,176,177,196,198,199 ). One study of oseltamivir treatment documented a decreased incidence of otitis media among children (177 ).
To reduce the emergence of antiviral drug-resistant viruses, amantadine or rimantadine therapy for persons with influenza-like illness should be discontinued as soon as clinically warranted, generally after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days.
# Prophylaxis
Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in the prevention and control of influenza. Both amantadine and rimantadine are indicated for the prophylaxis of influenza A infection, but are not effective against influenza B. Both drugs are approximately 70%-90% effective in preventing illness from influenza A infection (49,167,196 ). When used as prophylaxis, these antiviral agents can prevent illness while permitting subclinical infection and the development of protective antibody against circulating influenza viruses. Therefore, certain persons who take these drugs will develop protective immune responses to circulating influenza viruses. Amantadine and rimantadine do not interfere with the antibody response to the vaccine (167 ). Both drugs have been studied extensively among nursing home populations as a component of influenza outbreak control programs, which can limit the spread of influenza within chronic care institutions (167,195,200-202 ).
Among the neuraminidase inhibitor antivirals, zanamivir and oseltamivir, only oseltamivir has been approved for prophylaxis, but community studies of healthy adults indicate that both drugs are similarly effective in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (203,204 ). Both antiviral agents have also been reported to prevent influenza illness among persons given chemoprophylaxis after a household member was diagnosed with influenza (183,205 ). Experience with prophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited (179,206-211 ). One 6-week study of oseltamivir prophylaxis among nursing home residents found a 92% reduction in influenza illness (179,212 ). Use of zanamivir has not been reported to impair the immunologic response to influenza vaccine (178,213 ). Data are not available on the efficacy of any of the four antiviral agents in preventing influenza among severely immunocompromised persons.
When determining the timing and duration for administering influenza antiviral medications for prophylaxis, factors related to cost, compliance, and potential side effects should be considered. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, one study of amantadine or rimantadine prophylaxis reported that, to be most cost-effective, the drugs should be taken only during the period of peak influenza activity in a community (214 ).
Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun. Persons at high risk for complications of influenza still can be vaccinated after an outbreak of influenza has begun in a community. However, the development of antibodies in adults after vaccination can take as long as 2 weeks (118,119 ). When influenza vaccine is given while influenza viruses are circulating, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed. Children who receive influenza vaccine for the first time can require as long as 6 weeks of prophylaxis (i.e., prophylaxis for 4 weeks after the first dose of vaccine and an additional 2 weeks of prophylaxis after the second dose).
Persons Who Provide Care to Those at High Risk. To reduce the spread of virus to persons at high risk during community or institutional outbreaks, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact include employees of hospitals, clinics, and chronic-care facilities, household members, visiting nurses, and volunteer workers. If an outbreak is caused by a variant strain of influenza that might not be controlled by the vaccine, chemoprophylaxis should be considered for all such persons, regardless of their vaccination status.
Persons Who Have Immune Deficiency. Chemoprophylaxis can be considered for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, especially those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered.
Other Persons. Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Chemoprophylaxis can also be offered to persons who wish to avoid influenza illness. Health-care providers and patients should make this decision on an individual basis.
# Control of Influenza Outbreaks in Institutions
The use of antiviral drugs for treatment and prophylaxis of influenza is an important component of institutional outbreak control. In addition to the use of antiviral medications, other outbreak control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, re-offering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (215-217 ). (For additional information regarding outbreak control in specific settings, refer to additional references in Additional Information Regarding Influenza Infection Control Among Specific Populations.)
Most published reports on the use of antiviral agents to control institutional influenza outbreaks are based on studies of influenza A outbreaks among nursing home populations where amantadine or rimantadine was used (167,195,200-202 ). Less information is available concerning the use of oseltamivir in influenza A or B institutional outbreaks (210,212 ). When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice is extremely useful.
When institutional outbreaks occur, chemoprophylaxis should be administered to all residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for >2 weeks or until approximately 1 week after the end of the outbreak. The dosage for each resident should be determined individually. Chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza that is not well matched by the vaccine.
In addition to nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories or other settings where persons live in close proximity). For example, chemoprophylaxis with rimantadine has been used successfully to control an influenza A outbreak aboard a large cruise ship (218 ).
To limit the potential transmission of drug-resistant virus during institutional outbreaks, whether in chronic or acute-care settings or other closed settings, measures should be taken to reduce contact as much as possible between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis (see Antiviral Drug-Resistant Strains of Influenza).
# Dosage
Dosage recommendations vary by age group and medical conditions (Table 4).
# Children
Amantadine. The use of amantadine among children aged <1 year has not been adequately evaluated. The approved dosage for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is advisable (219 ).
Rimantadine. Rimantadine is approved for prophylaxis among children aged >1 year and for treatment and prophylaxis among adults. Although rimantadine is approved only for prophylaxis of infection among children, certain experts in the management of influenza consider it appropriate for treatment among children (see American Academy of Pediatrics, 2000 Red Book, in Additional Information Regarding Influenza Infection Control Among Specific Populations). The approved dosage for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is recommended (220 ).
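The weight-based rule shared by amantadine and rimantadine can be expressed compactly; the sketch below is a minimal illustration of the rule as stated above (the function name is invented for this example, and combinations the text does not address are deliberately left unhandled). Dosing decisions should follow the package inserts.

```python
def adamantane_pediatric_dose_mg_per_day(age_years: float, weight_kg: float) -> float:
    """Illustrative daily dose rule for amantadine/rimantadine in children,
    restated from the text above (not a substitute for the package insert)."""
    if weight_kg < 40:
        return 5 * weight_kg          # 5 mg/kg/day regardless of age
    if age_years >= 10:
        return 200                    # 200 mg/day (100 mg twice a day)
    raise ValueError("combination not addressed in the text; consult the package insert")

print(adamantane_pediatric_dose_mg_per_day(age_years=8, weight_kg=30))    # 150 mg/day
print(adamantane_pediatric_dose_mg_per_day(age_years=12, weight_kg=45))   # 200 mg/day
```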
Zanamivir. Zanamivir is not approved for use among children aged <7 years. The recommended dosage of zanamivir for treatment of influenza among persons aged >7 years is two inhalations (one 5-mg blister per inhalation for a total dose of 10 mg) twice daily (approximately 12 hours apart) (178 ). Zanamivir is administered via inhalation by using a plastic device included in the package with the medication, and patients will benefit from instruction and demonstration of correct use of the device. Zanamivir is not approved for prophylaxis.
Oseltamivir. Oseltamivir is not approved for use among children aged <1 year. Recommended treatment dosages for children vary by weight: for children weighing <15 kg, the dose is 30 mg twice a day; for children weighing >15-23 kg, the dose is 45 mg twice a day; for those weighing >23-40 kg, the dose is 60 mg twice a day; and for children weighing >40 kg, the dose is 75 mg twice a day. The treatment dosage for persons aged >13 years is 75 mg twice daily. For persons aged >13 years, the recommended dose for prophylaxis is 75 mg once a day (179 ).
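The weight tiers above translate directly into a lookup; this is a minimal sketch restating those tiers (the function name is invented for illustration, and the package insert remains the authoritative source).

```python
def oseltamivir_treatment_dose_mg(weight_kg: float) -> int:
    """Weight-tiered pediatric treatment dose in mg, given twice a day,
    restated from the text above (illustrative only)."""
    if weight_kg <= 15:
        return 30
    if weight_kg <= 23:
        return 45
    if weight_kg <= 40:
        return 60
    return 75

for weight in (12, 20, 35, 50):
    print(f"{weight} kg -> {oseltamivir_treatment_dose_mg(weight)} mg twice a day")
```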
# Persons Aged >65 Years
Amantadine. The daily dose of amantadine for persons aged >65 years should not exceed 100 mg for prophylaxis or treatment, because renal function declines with increasing age. For certain elderly persons, the dose should be further reduced.
Rimantadine. Among elderly persons, the incidence and severity of central nervous system (CNS) side effects are substantially lower among those taking rimantadine at a dosage of 100 mg/day than among those taking amantadine at dosages adjusted for estimated renal clearance (221 ). However, chronically ill elderly persons have had a higher incidence of CNS and gastrointestinal symptoms and serum concentrations two to four times higher than among healthy, younger persons when rimantadine has been administered at a dosage of 200 mg/day (167 ).
For elderly nursing home residents, the dosage of rimantadine should be reduced to 100 mg/day for prophylaxis or treatment. For other elderly persons, further studies are needed to determine the optimal dosage. However, a reduction in dosage to 100 mg/day should be considered for all persons aged >65 years who experience side effects when taking a dosage of 200 mg/day.
Zanamivir and Oseltamivir. No reduction in dosage is recommended on the basis of age alone.
# Persons with Impaired Renal Function
Amantadine. A reduction in dosage is recommended for patients with creatinine clearance <50 mL/min/1.73 m2. Guidelines for amantadine dosage on the basis of creatinine clearance are found in the package insert. Because recommended dosages on the basis of creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully for adverse reactions. If necessary, further reduction in the dose or discontinuation of the drug might be indicated because of side effects. Hemodialysis contributes minimally to amantadine clearance (222 ).
Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance <10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including elderly persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary. Hemodialysis contributes minimally to drug clearance (223 ).
Zanamivir. Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were observed (178,224 ). However, a small number of healthy volunteers who were administered high doses of intravenous zanamivir tolerated systemic levels of zanamivir that were much higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (225,226 ). On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild-to-moderate or severe impairment in renal function (178 ).
Oseltamivir. Serum concentrations of oseltamivir carboxylate (GS4071), the active metabolite of oseltamivir, increase with declining renal function (179,182 ). For patients with creatinine clearance of 10-30 mL/min (179 ), a reduction of the treatment dose of oseltamivir to 75 mg once daily and of the prophylaxis dose to 75 mg every other day is recommended. No treatment or prophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment.
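The renal-adjustment rules scattered across this section can be summarized in one place; the sketch below merely restates the thresholds given above for each drug (the function and its return strings are invented for illustration, and the package inserts remain authoritative).

```python
def renal_adjustment(drug: str, crcl_ml_min: float) -> str:
    """Summarizes the creatinine clearance thresholds restated from this section."""
    if drug == "amantadine" and crcl_ml_min < 50:
        return "reduce dose per package insert; observe for adverse reactions"
    if drug == "rimantadine" and crcl_ml_min < 10:
        return "reduce to 100 mg/day; monitor for adverse effects"
    if drug == "zanamivir":
        return "no dose adjustment for a 5-day inhaled treatment course"
    if drug == "oseltamivir" and 10 <= crcl_ml_min <= 30:
        return "treatment: 75 mg once daily; prophylaxis: 75 mg every other day"
    return "no adjustment described in this section"

print(renal_adjustment("rimantadine", 8))    # reduce to 100 mg/day ...
print(renal_adjustment("oseltamivir", 20))   # treatment: 75 mg once daily ...
```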
# Persons with Liver Disease
Amantadine. No increase in adverse reactions to amantadine has been observed among persons with liver disease. Rare instances of reversible elevation of liver enzymes among patients receiving amantadine have been reported, although a specific relationship between the drug and such changes has not been established (227 ).
Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with severe hepatic dysfunction.
Zanamivir and Oseltamivir. Neither of these medications has been studied among persons with hepatic dysfunction.
# Persons with Seizure Disorders
Amantadine. An increased incidence of seizures has been reported among patients with a history of seizure disorders who have received amantadine (228 ). Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine.
Rimantadine. Seizures (or seizure-like activity) have been reported among persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine (229 ). The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated.
Zanamivir and Oseltamivir. Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use.
# Route
Amantadine, rimantadine, and oseltamivir are administered orally. Amantadine and rimantadine are available in tablet or syrup form, and oseltamivir is available in capsule or oral suspension form (178,179 ). Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients will benefit from instruction and demonstration of correct use of this device (178 ).
# Pharmacokinetics

# Amantadine
Approximately 90% of amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion (200,230-233 ). Thus, renal clearance of amantadine is reduced substantially among persons with renal insufficiency, and dosages might need to be decreased (see Dosage) (Table 4).
# Rimantadine
Approximately 75% of rimantadine is metabolized by the liver (196). The safety and pharmacokinetics of rimantadine among persons with liver disease have been evaluated only after single-dose administration (196,234). In a study of persons with chronic liver disease (most with stabilized cirrhosis), no alterations in liver function were observed after a single dose (175,217). However, for persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease (220).
Rimantadine and its metabolites are excreted by the kidneys. The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration (196,223). Further studies are needed to determine multiple-dose pharmacokinetics and the most appropriate dosages for patients with renal insufficiency. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater, than that among healthy persons of the same age (223). Hemodialysis did not contribute to drug clearance. In studies of persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher than those among control patients without renal disease who were the same weight, age, and sex (220,235).
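These single-dose findings are internally consistent with first-order elimination kinetics, in which half-life varies inversely with clearance for a fixed volume of distribution. As a rough consistency check (assuming an unchanged volume of distribution, which the study does not report):

$$t_{1/2} = \frac{\ln 2 \cdot V_d}{CL} \qquad\Rightarrow\qquad \frac{t_{1/2}^{\mathrm{anuric}}}{t_{1/2}^{\mathrm{healthy}}} \approx \frac{CL_{\mathrm{healthy}}}{0.6 \cdot CL_{\mathrm{healthy}}} \approx 1.7$$

which is close to the approximately 1.6-fold prolongation observed.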
# Zanamivir
In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (236,237). Approximately 4%-17% of the total amount of orally inhaled zanamivir is systemically absorbed. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (178,226).
# Oseltamivir
Approximately 80% of orally administered oseltamivir is absorbed systemically (182). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (179,238). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (238).
# Side Effects and Adverse Reactions
When considering the use of influenza antiviral medications (i.e., choice of antiviral drug, dose, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 4); presence of other medical conditions; indications for use (i.e., prophylaxis or therapy); and the potential for interaction with other medications.
# Amantadine and Rimantadine
Both amantadine and rimantadine can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day. However, incidence of CNS side effects (e.g., nervousness, anxiety, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine than among those taking rimantadine (239). In a 6-week study of prophylaxis among healthy adults, approximately 6% of participants taking rimantadine at a dosage of 200 mg/day experienced >1 CNS symptom, compared with approximately 13% of those taking the same dosage of amantadine and 4% of those taking placebo (239). A study of elderly persons also demonstrated fewer CNS side effects associated with rimantadine compared with amantadine (221). Gastrointestinal side effects (e.g., nausea and anorexia) occur in approximately 1%-3% of persons taking either drug, compared with 1% of persons receiving the placebo (239).
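Subtracting the placebo rate gives a rough sense of the drug-attributable CNS symptom burden in that prophylaxis study (simple excess-risk arithmetic on the figures above, not an analysis reported in the study itself):

$$\mathrm{Excess\ risk}_{\mathrm{amantadine}} \approx 13\% - 4\% = 9\% \qquad \mathrm{Excess\ risk}_{\mathrm{rimantadine}} \approx 6\% - 4\% = 2\%$$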
Side effects associated with amantadine and rimantadine are usually mild and cease soon after discontinuing the drug. Side effects can diminish or disappear after the first week, despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures) (228). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among elderly persons who have been taking amantadine as prophylaxis at a dosage of 200 mg/day (200). Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects (Table 4). In acute overdosage of amantadine, CNS, renal, respiratory, and cardiac toxicity, including arrhythmias, has been reported (219). Because rimantadine has been marketed for a shorter period than amantadine, its safety among certain patient populations (e.g., chronically ill and elderly persons) has been evaluated less frequently.
# Zanamivir
In a study of zanamivir treatment of influenza-like illness among persons with asthma or chronic obstructive pulmonary disease, in which study medication was administered after the use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (178,180). However, in a phase I study of persons with mild or moderate asthma who did not have influenza-like illness, 1 of 13 patients experienced bronchospasm following administration of zanamivir (178). In addition, during postmarketing surveillance, cases of respiratory function deterioration following inhalation of zanamivir have been reported. Certain patients had underlying airways disease (e.g., asthma or chronic obstructive pulmonary disease). Because of the risk for serious adverse events and because efficacy has not been demonstrated in this population, zanamivir is generally not recommended for treatment of patients with underlying airway disease (178). If physicians decide to prescribe zanamivir to patients with underlying chronic respiratory disease after carefully considering potential risks and benefits, the drug should be used with caution under conditions of proper monitoring and supportive care, including the availability of short-acting bronchodilators (193). Patients with asthma or chronic obstructive pulmonary disease who use zanamivir are advised to a) have a fast-acting inhaled bronchodilator available when inhaling zanamivir and b) stop using zanamivir and contact their physician if they develop difficulty breathing (178). No clear evidence is available regarding the safety or efficacy of zanamivir for persons with underlying respiratory or cardiac disease or for persons with complications of acute influenza (193).
In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and those receiving placebo (i.e., inhaled lactose vehicle alone) (168-173,178,236). The most common adverse events reported by both groups were diarrhea; nausea; sinusitis; nasal signs and symptoms; bronchitis; cough; headache; dizziness; and ear, nose, and throat infections (150,151,153,154,191). Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (178).
# Oseltamivir
Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (174,175,179,240). Among children treated with oseltamivir, 14.3% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% discontinued the drug secondary to this side effect (177), whereas a limited number of adults enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (179). Similar types and rates of adverse events were found in studies of oseltamivir prophylaxis (179). Nausea and vomiting might be less severe if oseltamivir is taken with food (179,240).
# Use During Pregnancy
No clinical studies have been conducted regarding the safety or efficacy of amantadine, rimantadine, zanamivir, or oseltamivir for pregnant women; only two cases of amantadine use for severe influenza illness during the third trimester have been reported (89,241). However, both amantadine and rimantadine have been demonstrated in animal studies to be teratogenic and embryotoxic when administered at very high doses (219,220). Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these four drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus (see package inserts).
# Drug Interactions
Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, especially CNS stimulants. Concomitant administration of antihistamines or anticholinergic drugs can increase the incidence of adverse CNS reactions (167). No clinically significant interactions between rimantadine and other drugs have been identified.
Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically important drug interactions have been predicted on the basis of in vitro data and data from studies of rats (178,242).
Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway. For example, coadministration of oseltamivir and probenecid resulted in reduced clearance of oseltamivir carboxylate by approximately 50% and a corresponding approximate twofold increase in the plasma levels of oseltamivir carboxylate (179,238).
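This interaction illustrates the same inverse relationship between clearance and exposure noted above for renal impairment. With the dose rate and bioavailability unchanged (a simple first-order sketch, not a calculation reported in the cited studies):

$$CL' = 0.5\,CL \qquad\Rightarrow\qquad \mathrm{AUC}' = \frac{F \cdot \mathrm{Dose}}{0.5\,CL} = 2\,\mathrm{AUC}$$

consistent with the approximately twofold rise in oseltamivir carboxylate levels observed with probenecid.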
No published data are available concerning the safety or efficacy of using combinations of any of these four influenza antiviral drugs. For more detailed information concerning potential drug interactions for any of these influenza antiviral drugs, package inserts should be consulted.
# Antiviral Drug-Resistant Strains of Influenza
Amantadine-resistant viruses are cross-resistant to rimantadine and vice versa (243). Drug-resistant viruses can appear in approximately one third of patients when either amantadine or rimantadine is used for therapy (199,244). During the course of amantadine or rimantadine therapy, resistant influenza strains can replace sensitive strains within 2-3 days of starting therapy (244,245). Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy (246,247); however, the frequency with which resistant viruses are transmitted and their impact on efforts to control influenza are unknown. Amantadine- and rimantadine-resistant viruses are not more virulent or transmissible than sensitive viruses (248). The screening of epidemic strains of influenza A has rarely detected amantadine- and rimantadine-resistant viruses (244,249,250).
Persons who have influenza A infection and who are treated with either amantadine or rimantadine can shed sensitive viruses early in the course of treatment and later shed drug-resistant viruses, especially after 5-7 days of therapy (199). Such persons can benefit from therapy even when resistant viruses emerge.
Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (251-258), but induction of resistance requires several passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (259,260). Development of viral resistance to zanamivir and oseltamivir during treatment has been identified but does not appear to be frequent (179,261-264). In clinical treatment studies using oseltamivir, 1.3% of posttreatment isolates from patients aged >13 years and 8.6% among patients aged 1-12 years had decreased susceptibility to oseltamivir (179). No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (265), and the risk for emergence of zanamivir-resistant isolates cannot be quantified (178). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (262). Currently available diagnostic tests are not optimal for detecting clinical resistance, and better tests as well as more testing are needed before firm conclusions can be reached (265). Postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted.
# SOURCES OF INFORMATION REGARDING INFLUENZA AND ITS SURVEILLANCE
Information regarding influenza surveillance is available through the CDC Voice Information System (influenza update) at (888) 232-3228; the CDC Fax Information Service at (888) 232-3299; or the website of the Influenza Branch, DVRD, NCID, CDC. During October-May, the information is updated at least every other week. In addition, periodic updates regarding influenza are published in the weekly MMWR. State and local health departments should be consulted regarding availability of influenza vaccine, access to vaccination programs, information regarding state or local influenza activity, and for reporting influenza outbreaks and receiving advice regarding outbreak control.
# ADDITIONAL INFORMATION REGARDING INFLUENZA INFECTION CONTROL AMONG SPECIFIC POPULATIONS
Each year, the ACIP provides general, annually updated information regarding the control and prevention of influenza. Other documents on the control and prevention of influenza among specific populations (e.g., immunocompromised persons, health-care workers, hospitals, and travelers) are also available in the following publications:
# Recommendations and Reports
# Continuing Education Activity
Sponsored by CDC
# Prevention and Control of Influenza: Recommendations of the Advisory Committee on Immunization Practices (ACIP)
EXPIRATION: April 20, 2002
You must complete and return the response form electronically or by mail by April 20, 2002, to receive continuing education credit. If you answer all of the questions, you will receive an award letter for 1.25 hours of Continuing Medical Education (CME) credit, 0.1 Continuing Education Units (CEUs), or 1.3 contact hours of Continuing Nursing Education (CNE) credit.
# ACCREDITATION
# Continuing Medical Education (CME).
CDC is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 1.25 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.
# Continuing Education Unit (CEU).
CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training and awards 0.1 Continuing Education Units (CEUs).
# Continuing Nursing Education (CNE).
This activity for 1.3 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.
# GOAL AND OBJECTIVES
This MMWR provides recommendations regarding the prevention and control of influenza. These recommendations were developed by CDC staff and the Influenza Working Group of the Advisory Committee on Immunization Practices (ACIP). The goal of this report is to provide guidance for the use of influenza vaccine and influenza antiviral agents in the United States. Upon completion of this educational activity, the reader should be able to a) describe the disease burden of influenza in the United States; b) describe the characteristics of the currently licensed influenza vaccine; c) list the primary target groups for annual influenza vaccination; and d) recognize the most common adverse reactions following administration of influenza vaccine.
To receive continuing education credit, please answer all of the following questions.
"id": "f1a4a2ea235620f166f9e333bb46d628f80f7e30",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Prevention and Control of Influenza
# Recommendations of the Advisory Committee on Immunization Practices (ACIP)
# Summary
This report updates the 2001 recommendations by the Advisory Committee on Immunization Practices (ACIP) regarding the use of influenza vaccine and antiviral agents (MMWR 2001;50). The 2002 recommendations include new or updated information regarding 1) the timing of influenza vaccination by risk group; 2) influenza vaccine for children aged 6-23 months; 3) the 2002-2003 trivalent vaccine virus strains: A/Moscow/10/99 (H3N2)-like, A/New Caledonia/20/99 (H1N1)-like, and B/Hong Kong/330/2001-like strains; and 4) availability of certain influenza vaccine doses with reduced thimerosal content. A link to this report and other information related to influenza can be accessed at the website of the Influenza Branch, Division of Viral and Rickettsial Diseases, National Center for Infectious Diseases, CDC.
# Introduction
Epidemics of influenza typically occur during the winter months and are responsible for an average of approximately 20,000 deaths/year in the United States (1,2). Influenza viruses also can cause pandemics, during which rates of illness and death from influenza-related complications can increase dramatically worldwide. Influenza viruses cause disease among all age groups (3-5). Rates of infection are highest among children, but rates of serious illness and death are highest among persons aged >65 years and persons of any age who have medical conditions that place them at increased risk for complications from influenza (3,6-8).
Influenza vaccination is the primary method for preventing influenza and its severe complications. In this report from the Advisory Committee on Immunization Practices (ACIP), the primary target groups recommended for annual vaccination are 1) groups who are at increased risk for influenza-related complications (e.g., persons aged >65 years and persons of
any age with certain chronic medical conditions); 2) persons aged 50-64 years, because this group has an elevated prevalence of certain chronic medical conditions; and 3) persons who live with or care for persons at high risk (e.g., health-care workers and household members who have frequent contact with persons at high risk and can transmit influenza to persons at high risk). Vaccination is associated with reductions in influenza-related respiratory illness and physician visits among all age groups, hospitalization and death among persons at high risk, otitis media among children, and work absenteeism among adults (9-18). Although influenza vaccination levels increased substantially during the 1990s, further improvements in vaccine coverage levels are needed, chiefly among persons aged <65 years at high risk. The ACIP recommends using strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (19,20). Although influenza vaccination remains the cornerstone for the control and treatment of influenza, information is also presented regarding antiviral medications, because these agents are an adjunct to vaccine.
# Primary Changes and Updates in the Recommendations
The 2002 recommendations include five principal changes or updates, as follows:
1. The optimal time to receive influenza vaccine is during October and November. However, because of vaccine distribution delays during the past 2 years, ACIP recommends that vaccination efforts in October focus on persons at greatest risk for influenza-related complications and health-care workers and that vaccination of other groups begin in November.
2. Vaccination efforts for all groups should continue into December and later, for as long as vaccine is available.
# Influenza and Its Burden
# Biology of Influenza
Influenza A and B are the two types of influenza viruses that cause epidemic human disease (21). Influenza A viruses are further categorized into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Influenza B viruses are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have been in global circulation. Influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses have been detected recently in many countries. Both influenza A and B viruses are further separated into groups on the basis of antigenic characteristics. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) resulting from point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses.
A person's immunity to the surface antigens, especially hemagglutinin, reduces the likelihood of infection and severity of disease if infection occurs (22). Antibody against one influenza virus type or subtype confers limited or no protection against another. Furthermore, antibody to one antigenic variant of influenza virus might not protect against a new antigenic variant of the same type or subtype (23). Frequent development of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and the reason for the incorporation of >1 new strain in each year's influenza vaccine.
# Clinical Signs and Symptoms of Influenza
Influenza viruses are spread from person-to-person primarily through the coughing and sneezing of infected persons (21). The incubation period for influenza is 1-4 days, with an average of 2 days (24). Adults and children typically are infectious from the day before symptoms begin until approximately 5 days after illness onset. Children can be infectious for a longer period, and very young children can shed virus for <6 days before their illness onset. Severely immunocompromised persons can shed virus for weeks (25-27).
Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, severe malaise, nonproductive cough, sore throat, and rhinitis) (28). Respiratory illness caused by influenza is difficult to distinguish from illness caused by other respiratory pathogens on the basis of symptoms alone (see Role of Laboratory Diagnosis). Reported sensitivities and specificities of clinical definitions for influenza-like illness that include fever and cough have ranged from 63% to 78% and 55% to 71%, respectively, compared with viral culture (29,30). Sensitivity and predictive value of clinical definitions can vary, depending on the degree of co-circulation of other respiratory pathogens and the level of influenza activity (31).
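The dependence of predictive value on the level of influenza activity follows directly from Bayes' theorem. As an illustrative calculation (the sensitivity and specificity are taken from the middle of the ranges above; the two prevalence values are hypothetical, chosen only to contrast low and peak activity):

$$\mathrm{PPV} = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)}$$

With Se = 0.70 and Sp = 0.60, the positive predictive value of an influenza-like illness definition is approximately 37% when 25% of patients with respiratory illness actually have influenza (peak activity) but only approximately 8% when 5% do (low activity).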
Influenza illness typically resolves after a limited number of days for the majority of persons, although cough and malaise can persist for >2 weeks. Among certain persons, influenza can exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease), lead to secondary bacterial pneumonia or primary influenza viral pneumonia, or occur as part of a coinfection with other viral or bacterial pathogens (32). Influenza infection has also been associated with encephalopathy, transverse myelitis, Reye syndrome, myositis, myocarditis, and pericarditis (32).
# Hospitalizations and Deaths from Influenza
The risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, very young children, and persons of any age with certain underlying health conditions than among healthy older children and younger adults (1,2,7,9,33-35). Estimated rates of influenza-associated hospitalizations have varied substantially by age group in studies conducted during different influenza epidemics (Table 1).
Among children aged 0-4 years, hospitalization rates have ranged from approximately 500/100,000 population for those
with high-risk conditions to 100/100,000 population for those without high-risk conditions (36-39). Within the 0-4 age group, hospitalization rates are highest among children aged 0-1 years and are comparable to rates found among persons aged >65 years (38,39) (Table 1).
During influenza epidemics from 1969-1970 through 1994-1995, the estimated overall number of influenza-associated hospitalizations in the United States ranged from approximately 16,000 to 220,000/epidemic. An average of approximately 114,000 influenza-related excess hospitalizations occurred per year, with 57% of all hospitalizations occurring among persons aged <65 years. Since the 1968 influenza A (H3N2) virus pandemic, the greatest numbers of influenza-associated hospitalizations have occurred during epidemics caused by type A (H3N2) viruses, with an estimated average of 142,000 influenza-associated hospitalizations per year (40).
Influenza-related deaths can result from pneumonia as well as from exacerbations of cardiopulmonary conditions and other chronic diseases. In studies of influenza epidemics occurring from 1972-1973 through 1994-1995, excess deaths (i.e., the number of influenza-related deaths above a projected baseline of expected deaths) occurred during 19 of 23 influenza epidemics (41) (unpublished data, Influenza Branch, Division of Viral and Rickettsial Diseases [DVRD], National Center for Infectious Diseases [NCID], CDC, 1998). During those 19 influenza seasons, estimated rates of influenza-associated deaths ranged from approximately 30 to >150 deaths/100,000 persons aged >65 years (unpublished data, Influenza Branch, DVRD, NCID, CDC, 1998). Older adults account for >90% of deaths attributed to pneumonia and influenza (42). From 1972-1973 through 1994-1995, >20,000 influenza-associated deaths were estimated to occur during each of 11 different U.S. epidemics, and >40,000 influenza-associated deaths were estimated for each of 6 of these 11 epidemics (41) (unpublished data, Influenza Branch, DVRD, NCID, CDC, 1998). In the United States, pneumonia and influenza deaths might be increasing in part because the number of older persons is increasing (43).
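In these analyses, "excess deaths" has the standard epidemiologic meaning, with the expected baseline projected from mortality observed in non-epidemic periods:

$$\mathrm{Excess\ deaths} = \mathrm{Observed\ deaths} - \mathrm{Expected\ baseline\ deaths}$$

A season registers excess mortality only when observed deaths rise above the projected baseline, which is why excess deaths were identified in 19 rather than all 23 of the epidemics studied.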
# Options for Controlling Influenza
In the United States, the main option for reducing the impact of influenza is immunoprophylaxis with inactivated (i.e., killed virus) vaccine (see Recommendations for Using Influenza Vaccine). Vaccinating persons at high risk for complications each year before seasonal increases in influenza virus circulation is the most effective means of reducing the impact of influenza. Vaccination coverage can be increased by administering vaccine to persons during hospitalizations or routine health-care visits before the influenza season, rendering special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains are well-matched, achieving increased vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) and among staff can reduce the risk for outbreaks by inducing herd immunity (14). Vaccination of health-care workers and other persons in close contact with persons in groups at high risk can also reduce transmission of influenza and subsequent influenza-related complications. Using influenza-specific antiviral drugs for chemoprophylaxis or treatment of influenza is a key adjunct to vaccine (see Recommendations for Using Antiviral Agents for Influenza). However, antiviral medications are not a substitute for vaccination.
# Influenza Vaccine Composition
Influenza vaccines are standardized to contain the hemagglutinins of three strains (i.e., typically two type A and one type B), representing the influenza viruses likely to circulate in the United States in the upcoming winter. The vaccine is made from highly purified, egg-grown viruses that have been made noninfectious (i.e., inactivated) (44). Subvirion and purified surface-antigen preparations are available. Because the vaccine viruses are initially grown in embryonated hens' eggs, the vaccine might contain limited amounts of residual egg protein.
Manufacturing processes differ by manufacturer. Manufacturers might use different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information.
Influenza vaccine distributed in the United States might also contain thimerosal, a mercury-containing compound, as the preservative (45). Thimerosal has been used as a preservative in vaccines since the 1930s. Although no evidence of harm caused by low levels of thimerosal in vaccines has been reported, in 1999, the U.S. Public Health Service and other organizations recommended that efforts be made to reduce the thimerosal content in vaccines to decrease total mercury exposure, chiefly among infants and pregnant women (45,46). Since mid-2001, routinely administered, noninfluenza childhood vaccines for the U.S. market have been manufactured either without or with only trace amounts of thimerosal to provide a substantial reduction in the total mercury exposure from vaccines for children (47).
For the 2002-2003 influenza season, a limited number of individually packaged doses (i.e., single-dose syringes) of reduced thimerosal-content influenza vaccine will be available for persons aged >4 years (see Vaccine Use for Young Children, By Manufacturer). Multidose vials and single-dose syringes of influenza vaccine containing approximately 25 mcg thimerosal/0.5-mL dose are also available, as they have been in past years. Because of the known risks for severe illness from influenza infection and the benefits of vaccination, and because a substantial safety margin has been incorporated into the health guidance values for organic mercury exposure, the benefit of influenza vaccine with reduced or standard thimerosal content outweighs the theoretical risk, if any, from thimerosal (45,48). The removal of thimerosal from other vaccines further reduces the theoretical risk from thimerosal in influenza vaccines.
The trivalent influenza vaccine recommended for the 2002-2003 season includes A/Moscow/10/99 (H3N2)-like, A/New Caledonia/20/99 (H1N1)-like, and B/Hong Kong/330/2001-like antigens. For the A/Moscow/10/99 (H3N2)-like antigen, manufacturers will use the antigenically equivalent A/Panama/2007/99 (H3N2) virus. For the B/Hong Kong/330/2001-like antigen, the actual B strains that will be included in the vaccine will be announced later. These viruses will be used because of their growth properties and because they are representative of influenza viruses likely to circulate in the United States during the 2002-2003 influenza season. Because circulating influenza A (H1N2) viruses are a reassortant of influenza A (H1N1) and (H3N2) viruses, antibody directed against influenza A (H1N1) and influenza (H3N2) vaccine strains will provide protection against circulating influenza A (H1N2) viruses.
# Effectiveness of Inactivated Influenza Vaccine
The effectiveness of influenza vaccine depends primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the viruses in the vaccine and those in circulation. The majority of vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers (49-51). These antibody titers are protective against illness caused by strains similar to those in the vaccine (50-53). When the vaccine and circulating viruses are antigenically similar, influenza vaccine prevents influenza illness among approximately 70%-90% of healthy adults aged <65 years (10,13,54,55). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (10-13,55,56).
Children as young as age 6 months can develop protective levels of antibody after influenza vaccination (49,50,57-60), although the antibody response among children at high risk might be lower than among healthy children (61,62). In a randomized study among children aged 1-15 years, inactivated influenza vaccine was 77%-91% effective against influenza respiratory illness and was 44%-49%, 74%-76%, and 70%-81% effective against influenza seroconversion among children aged 1-5, 6-10, and 11-15 years, respectively (51). One study (63) reported a vaccine efficacy of 56% against influenza illness among healthy children aged 3-9 years, and another study (64) found vaccine efficacy of 22%-54% and 60%-78% among children with asthma aged 2-6 years and 7-14 years, respectively. A 2-year randomized study of children aged 6-24 months determined that >89% of children seroconverted to all three vaccine strains during both years; vaccine efficacy was 66% (95% confidence interval [CI] = 34%-82%) against culture-confirmed influenza during year 1 among 411 children and was -7% (95% CI = -247% to 67%) during year 2 among 375 children. However, no overall reduction in otitis media was reported (65). Other studies report that using trivalent inactivated influenza vaccine decreases the incidence of influenza-associated otitis media among young children by approximately 30% (17,18).
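These efficacy figures, including the negative year-2 point estimate, follow the conventional definition of vaccine efficacy as the proportionate reduction in attack rate among vaccinees relative to placebo recipients:

$$VE = 1 - \frac{AR_{\mathrm{vaccinated}}}{AR_{\mathrm{unvaccinated}}} = 1 - RR$$

A VE of -7% therefore corresponds to a relative risk of approximately 1.07 (i.e., the attack rate among vaccinees slightly exceeded that among placebo recipients in that year's sample); the wide confidence interval indicates that this estimate is compatible with substantial protection as well as with none.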
Older persons and persons with certain chronic diseases might develop lower postvaccination antibody titers than healthy young adults and thus can remain susceptible to influenza-related upper respiratory tract infection (66-68). A randomized trial among noninstitutionalized persons aged >60 years reported a vaccine efficacy of 58% against influenza respiratory illness, but indicated that efficacy might be lower among those aged >70 years (69). The vaccine can also be effective in preventing secondary complications and reducing the risk for influenza-related hospitalization and death (14-16,70). Among elderly persons living outside nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (16,71). Among elderly persons residing in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and deaths. Among this population, the vaccine can be 50%-60% effective in preventing hospitalization or pneumonia and 80% effective in preventing death, although the effectiveness in preventing influenza illness often ranges from 30% to 40% (72,73).
# Cost-Effectiveness of Influenza Vaccine
Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. Economic studies of influenza vaccination of persons aged >65 years conducted in the United States have reported overall societal cost-savings and substantial reductions in hospitalization and death (16,71,74). In studies of adults, vaccination of persons aged >65 years resulted in a net savings per quality-adjusted-life-year (QALY) gained, whereas vaccination of younger age groups resulted in costs of $23-$256/QALY. Additional studies of the relative cost-effectiveness and cost-utility of influenza vaccination among children and among adults aged <65 years are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, and vaccine efficacy when evaluating the long-term costs and benefits of annual vaccination.
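The cost-utility figures cited here follow the standard incremental form, in which "net savings" corresponds to a negative numerator (averted medical and productivity costs exceeding the costs of the vaccination program):

$$\mathrm{Cost\ per\ QALY} = \frac{\mathrm{Cost}_{\mathrm{vaccination}} - \mathrm{Cost}_{\mathrm{averted\ illness}}}{\mathrm{QALYs\ gained}}$$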
# Vaccination Coverage Levels
Among persons aged >65 years, influenza vaccination levels increased from 33% in 1989 (79) to 66% in 1999 (80), surpassing the Healthy People 2000 goal of 60% (81). Although 1999 influenza vaccination coverage reached the highest levels recorded among black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (80,82). In 1999, the influenza vaccination rates among persons aged >65 years were 68% among non-Hispanic whites, 50% among non-Hispanic blacks, and 55% among Hispanics (80). Possible reasons for the increase in influenza vaccination levels among persons aged >65 years through 1999 include greater acceptance of preventive medical services by practitioners, increased delivery and administration of vaccine by health-care providers and sources other than physicians, new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety, and the initiation of Medicare reimbursement for influenza vaccination in 1993 (9,15,16,72,73,83,84).
Influenza vaccination levels among persons interviewed during 2000 were not substantially different from 1999 levels among persons aged >65 years (64% in 2000 versus 66% in 1999) and persons aged 50-64 years (35% in 2000 versus 34% in 1999) (80). The percentage of adults interviewed during the first quarter of 2001 who reported influenza vaccination during the past 12 months was lower than the percentage reported by adults interviewed during the first quarter of 2000 (63% versus 68% among those aged >65 years; 32% versus 37% among those aged 50-64 years). Delays in influenza vaccine supply during fall 2000 probably contributed to these declines in vaccination levels (see Vaccine Supply). Continued annual monitoring is needed to determine the effects of vaccine supply delays and other factors on vaccination coverage among persons aged >50 years. The Healthy People 2010 objective is to achieve vaccination coverage for 90% of persons aged >65 years (85). Additional strategies are needed to achieve this Healthy People 2010 objective in all segments of the population and to reduce racial and ethnic disparities in vaccine coverage.
In 1997 and 1998, vaccination rate estimates among nursing home residents were 64%-82% and 83%, respectively (86,87). The Healthy People 2010 goal is to achieve influenza vaccination of 90% of nursing home residents, an increase from the Healthy People 2000 goal of 80% (81,85).
In 2000, the overall vaccination rate for adults aged 18-64 years with high-risk conditions was 32%, far short of the Healthy People 2000 goal of 60% (unpublished data, National Immunization Program [NIP], CDC, 2000) (81). Among persons aged 50-64 years, 44% of those with chronic medical conditions and 31% of those without chronic medical conditions received influenza vaccine. Only 25% of adults aged <50 years with high-risk conditions were vaccinated.
Reported vaccination rates of children at high risk are low. One study conducted among patients in health maintenance organizations reported influenza vaccination rates ranging from 9% to 10% among children with asthma (88), and a rate of 25% was reported among children with severe-to-moderate asthma who attended an allergy and immunology clinic (89). However, a study conducted in a pediatric clinic demonstrated an increase in the vaccination rate of children with asthma or reactive airways disease from 5% to 32% after implementing a reminder/recall system (90). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use.
Annual vaccination is recommended for health-care workers. Nonetheless, the National Health Interview Survey (NHIS) indicated vaccination rates of only 34% and 38% among health-care workers in the 1997 and 2000 surveys, respectively (91) (unpublished NHIS data, NIP, CDC, 2002). Vaccination of health-care workers has been associated with reduced work absenteeism (10) and fewer deaths among nursing home patients (92,93).
Limited information is available regarding the use of influenza vaccine among pregnant women. Among women aged 18-44 years without diabetes responding to the 1999 Behavioral Risk Factor Surveillance Survey, those reporting they were pregnant were less likely to report influenza vaccination during the past 12 months (9.6%) than those not pregnant (15.7%). Vaccination coverage among pregnant women did not substantially change during 1997-1999, whereas coverage among nonpregnant women increased from 14.4% in 1997. Similar results were determined by using the 1997-2000 NHIS data, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (unpublished NHIS data, NIP, CDC, 2002). Although not directly measuring influenza vaccination among women who were past the second trimester of pregnancy during influenza season, these data indicate low compliance with the ACIP recommendations for pregnant women (94). In a study of influenza vaccine acceptance by pregnant women, 71% who were offered the vaccine chose to be vaccinated (95). However, a 1999 survey of obstetricians and gynecologists determined that only 39% gave influenza vaccine to obstetric patients, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (96).
# Recommendations for Using Influenza Vaccine
Influenza vaccine is strongly recommended for any person aged >6 months who is at increased risk for complications from influenza. In addition, health-care workers and other persons (including household members) in close contact with persons at high risk should be vaccinated to decrease the risk for transmitting influenza to persons at high risk. Influenza vaccine also can be administered to any person aged >6 months to reduce the probability of becoming infected with influenza.
# Target Groups for Vaccination
# Persons at Increased Risk for Complications
Vaccination is recommended for the following groups of persons who are at increased risk for complications from influenza:
# Persons Aged 50-64 Years
Vaccination is recommended for persons aged 50-64 years because this group has an increased prevalence of persons with high-risk conditions. Approximately 43 million persons in the United States are aged 50-64 years, and 10-14 million (24%-32%) have >1 high-risk medical condition (unpublished data, NIP, CDC, 2002). Influenza vaccine has been recommended for this entire age group to increase the low vaccination rates among persons in this age group with high-risk conditions. Age-based strategies are more successful in increasing vaccine coverage than patient-selection strategies based on medical conditions. Persons aged 50-64 years without high-risk conditions also receive benefit from vaccination in the form of decreased rates of influenza illness, decreased work absenteeism, and decreased need for medical visits and medication, including antibiotics (10-13). Further, 50 years is an age when other preventive services begin and when routine assessment of vaccination and other preventive services has been recommended (97,98).
# Persons Who Can Transmit Influenza to Those at High Risk
Persons who are clinically or subclinically infected can transmit influenza virus to persons at high risk for complications from influenza. Decreasing transmission of influenza from caregivers to persons at high risk might reduce influenzarelated deaths among persons at high risk. Evidence from two studies indicates that vaccination of health-care workers is associated with decreased deaths among nursing home patients (92,93). Vaccination of health-care workers and others in close contact with persons at high risk, including household members, is recommended. The following groups should be vaccinated:
- physicians, nurses, and other personnel in both hospital and outpatient-care settings, including medical emergency response workers (e.g., paramedics and emergency medical technicians);
- employees of nursing homes and chronic-care facilities who have contact with patients or residents;
- employees of assisted living and other residences for persons in groups at high risk;
- persons who provide home care to persons in groups at high risk; and
- household members (including children) of persons in groups at high risk.

In addition, because children aged 0-23 months are at increased risk for influenza-related hospitalization (37-39), vaccination is encouraged for their household contacts and out-of-home caretakers, particularly for contacts of children aged 0-5 months, because influenza vaccines have not been approved by the U.S. Food and Drug Administration (FDA) for use among children aged <6 months (see Healthy Young Children).
# Additional Information Regarding Vaccination of Specific Populations
# Pregnant Women
Influenza-associated excess deaths among pregnant women were documented during the pandemics of 1918-1919 and 1957-1958 (99-102). Case reports and limited studies also indicate that pregnancy can increase the risk for serious medical complications of influenza as a result of increases in heart rate, stroke volume, and oxygen consumption; decreases in lung capacity; and changes in immunologic function (103-106). A study of the impact of influenza during 17 interpandemic influenza seasons demonstrated that the relative risk for hospitalization for selected cardiorespiratory conditions among pregnant women enrolled in Medicaid increased from 1.4 during weeks 14-20 of gestation to 4.7 during weeks 37-42, in comparison with women who were 1-6 months postpartum (107). Women in their third trimester of pregnancy were hospitalized at a rate (i.e., 250/100,000 pregnant women) comparable with that of nonpregnant women who had high-risk medical conditions. By using data from this study, researchers estimated that an average of 1-2 hospitalizations could be prevented for every 1,000 pregnant women vaccinated.
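That estimate is consistent with simple arithmetic on the reported rate: 250 hospitalizations per 100,000 third-trimester women is 2.5 per 1,000, so preventing 1-2 per 1,000 vaccinees implies a vaccine effectiveness of roughly 40%-80% against these hospitalizations (an illustrative range inferred here, not an estimate reported by the study):

$$\mathrm{Prevented\ per\ 1{,}000} = 2.5 \times VE, \qquad VE = 0.4 \text{ to } 0.8 \;\Rightarrow\; 1.0 \text{ to } 2.0$$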
Because of the increased risk for influenza-related complications, women who will be beyond the first trimester of pregnancy (>14 weeks of gestation) during the influenza season should be vaccinated. Certain providers prefer to administer influenza vaccine during the second trimester to avoid a coincidental association with spontaneous abortion, which is common in the first trimester, and because exposures to vaccines traditionally have been avoided during the first trimester (108). Pregnant women who have medical conditions that increase their risk for complications from influenza should be vaccinated before the influenza season, regardless of the stage of pregnancy. A study of influenza vaccination of >2,000 pregnant women demonstrated no adverse fetal effects associated with influenza vaccine (109). However, additional data are needed to confirm the safety of vaccination during pregnancy.
The majority of influenza vaccine distributed in the United States contains thimerosal, a mercury-containing compound, as a preservative, but influenza vaccine with reduced thimerosal content might be available in limited quantities (see Influenza Vaccine Composition). Thimerosal has been used in U.S. vaccines since the 1930s. No data or evidence exists of any harm caused by the level of mercury exposure that might occur from influenza vaccination. Because pregnant women are at increased risk for influenza-related complications and because a substantial safety margin has been incorporated into the health guidance values for organic mercury exposure, the benefit of influenza vaccine with reduced or standard thimerosal content outweighs the potential risk, if any, for thimerosal (45,48).
# Persons Infected with HIV
Limited information is available regarding the frequency and severity of influenza illness or the benefits of influenza vaccination among persons with HIV infection (110,111). However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program found that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than during the peri-influenza periods. The risk for hospitalization was higher for HIV-infected women than for women with other well-recognized high-risk conditions, including chronic heart and lung diseases (112). Another study estimated that the risk for influenza-related death was 9.4-14.6/10,000 persons with AIDS, compared with rates of 0.09-0.10/10,000 among all persons aged 25-54 years and 6.4-7.0/10,000 among persons aged >65 years (113). Other reports demonstrate that influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIV-infected persons (114-116).
Influenza vaccination has been demonstrated to produce substantial antibody titers against influenza among vaccinated HIV-infected persons who have minimal acquired immunodeficiency syndrome-related symptoms and high CD4+ T-lymphocyte cell counts (117-120). A limited, randomized, placebo-controlled trial determined that influenza vaccine was highly effective in preventing symptomatic, laboratory-confirmed influenza infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm³, although only a limited number of persons with low CD4+ T-lymphocyte cell counts were included; vaccination was effective among persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type 1/mL (116). Among patients who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, influenza vaccine might not induce protective antibody titers (119,120); a second dose of vaccine does not improve the immune response in these persons (120,121).
One study reported that HIV RNA levels increased transiently in one HIV-infected patient after influenza infection (122). Studies have demonstrated a transient (i.e., 2-4-week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (119,123). Other studies using similar laboratory techniques have not documented a substantial increase in the replication of HIV (124-126). Deterioration of CD4+ T-lymphocyte cell counts or progression of HIV disease have not been demonstrated among HIV-infected persons after influenza vaccination compared with unvaccinated persons (120,127). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza infection or influenza vaccination (110,128). Because influenza can result in serious illness and because influenza vaccination can result in the production of protective antibody titers, vaccination will benefit HIV-infected patients, including HIV-infected pregnant women.
# Breast-Feeding Mothers
Influenza vaccine does not affect the safety of mothers who are breast-feeding or their infants. Breast-feeding does not adversely affect the immune response and is not a contraindication for vaccination.
# Travelers
The risk for exposure to influenza during travel depends on the time of year and destination. In the tropics, influenza can occur throughout the year. In the temperate regions of the Southern Hemisphere, the majority of influenza activity occurs during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of organized tourist groups that include persons from areas of the world where influenza viruses are circulating. Persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to
- travel to the tropics;
- travel with organized tourist groups at any time of year; or
- travel to the Southern Hemisphere during April-September.

No information is available regarding the benefits of revaccinating persons before summer travel who were already vaccinated in the preceding fall. Persons at high risk who received the previous season's vaccine before travel should be revaccinated with the current vaccine in the following fall or winter. Persons aged >50 years and others at high risk might wish to consult with their physicians before embarking on travel during the summer to discuss the symptoms and risks for influenza and the advisability of carrying antiviral medications for either prophylaxis or treatment of influenza.
# General Population
In addition to the groups for which annual influenza vaccination is recommended, physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza (the vaccine can be administered to children aged >6 months), depending on vaccine availability (see Vaccine Supply). Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics.
# Healthy Young Children
Studies indicate that rates of hospitalization are higher among young children than older children when influenza viruses are in circulation (36-38,129,130). The increased rates of hospitalization are comparable with rates for other groups considered at high risk for influenza-related complications. However, the interpretation of these findings has been confounded by co-circulation of respiratory syncytial viruses, which are a cause of serious respiratory viral illness among children and which frequently circulate during the same time as influenza viruses (131-133). Two recent studies have attempted to separate the effects of respiratory syncytial viruses and influenza viruses on rates of hospitalization among children who do not have high-risk conditions (37,38). Both studies reported that otherwise healthy children aged <2 years, and possibly children aged 2-4 years, are at increased risk for influenza-related hospitalization compared with older healthy children (Table 1). Among the Tennessee Medicaid population during 1973-1993, healthy children aged 6 months-<3 years had rates of influenza-associated hospitalization comparable with or higher than rates among children aged 3-14 years with high-risk conditions (Table 1) (37,39).
Because children aged 6-23 months are at substantially increased risk for influenza-related hospitalizations, influenza vaccination of all children in this age group is encouraged when feasible. However, before a full recommendation to annually vaccinate all children aged 6-23 months can be made, ACIP, the American Academy of Pediatrics, and the American Academy of Family Physicians recognize that certain key concerns must be addressed. These concerns include increasing efforts to educate parents and providers regarding the impact of influenza and the potential benefits and risks of vaccination among young children; clarification of practical strategies for annual vaccination of children, certain of whom will require two doses within the same season; and reimbursement for vaccination. ACIP will provide updated information as these concerns are addressed. A full recommendation could be made by 2003-2005. In the interim, ACIP continues to strongly recommend influenza vaccination of adults and children aged >6 months who have high-risk medical conditions.
The current inactivated influenza vaccine is not approved by FDA for use among children aged <6 months, the pediatric group at greatest risk for influenza-related complications (37). Vaccinating their household contacts and out-of-home caretakers might decrease the probability of influenza among these children.
# Persons Who Should Not Be Vaccinated
Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Prophylactic use of antiviral agents is an option for preventing influenza among such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications from influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Information regarding vaccine components can be found in package inserts from each manufacturer. Persons with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate the use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.
# Timing of Annual Vaccination
# Vaccination in October and November
The optimal time to vaccinate is usually during October-November. However, because of substantial vaccine distribution delays during the 2000-2001 and 2001-2002 influenza seasons and the possibility of similar situations in future years, ACIP recommends that vaccine providers focus their vaccination efforts in October and earlier on persons at high risk and health-care workers. Vaccination of children aged <9 years who are receiving vaccine for the first time should also begin in October because they need a booster dose 1 month after the initial dose. Vaccination of all other groups should begin in November, including household members of persons at high risk, healthy persons aged 50-64 years, and other persons who wish to decrease their risk for influenza infection. Materials to assist providers in prioritizing early vaccination are available (for information regarding vaccination of travelers, see the Travelers section in this report).
# Vaccination in December and Later
After November, certain persons who should or want to receive influenza vaccine remain unvaccinated. In addition, substantial amounts of vaccine have remained unused during the past two influenza seasons. To improve vaccine coverage and use, chiefly among persons at high risk and health-care workers, influenza vaccine should continue to be offered in December and throughout the influenza season as long as vaccine supplies are available, even after influenza activity has been documented in the community. In the United States, seasonal influenza activity can begin to increase as early as November or December, but influenza activity has not reached peak levels in the majority of recent seasons until late December through early March (Table 2). Therefore, although the timing of influenza activity can vary by region, vaccine administered after November is likely to be beneficial in the majority of influenza seasons. Adults develop peak antibody protection against influenza infection 2 weeks after vaccination (134,135).
# Vaccination Before October
To avoid missed opportunities for vaccination of persons at high risk for serious complications, such persons should be offered vaccine beginning in September during routine healthcare visits or during hospitalizations, if vaccine is available. In facilities housing older persons (e.g., nursing homes), vaccination before October typically should be avoided because antibody levels in such persons can begin to decline within a limited time after vaccination (136).
# Timing of Organized Vaccination Campaigns
Persons planning substantial organized vaccination campaigns should consider scheduling these events after mid-October because the availability of vaccine in any location cannot be ensured consistently in the early fall. Scheduling campaigns after mid-October will minimize cancellations caused by vaccine unavailability. Campaigns conducted before November should focus efforts on vaccination of persons at high risk, health-care workers, and household contacts of persons at high risk to the extent feasible.
# Dosage
Dosage recommendations vary according to age group (Table 3). Among previously unvaccinated children aged <9 years, two doses administered >1 month apart are recommended for satisfactory antibody responses. If possible, the second dose should be administered before December. Among adults, studies have indicated limited or no improvement in antibody response when a second dose is administered during the same season (137-139). Even when the current influenza vaccine contains one or more antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines during the year after vaccination (140,141).
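Expressed as a simple rule, the dose count depends only on age and vaccination history. The following sketch is illustrative only (the function name and signature are hypothetical; clinical practice is governed by the recommendations above and the package insert):

```python
def recommended_dose_count(age_years: float, previously_vaccinated: bool) -> int:
    """Number of inactivated influenza vaccine doses recommended for the
    season: previously unvaccinated children aged <9 years need two doses
    administered >=1 month apart; everyone else needs one."""
    if age_years < 9 and not previously_vaccinated:
        return 2  # second dose ideally administered before December
    return 1
```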
Vaccine prepared for a previous influenza season should not be administered to provide protection for the current season.
# Vaccine Use Among Young Children, By Manufacturer
Providers should use influenza vaccine that has been approved by FDA for vaccinating children aged 6 months-3 years. Influenza vaccines from Wyeth Laboratories, Inc., and other manufacturers differ in the pediatric age groups for which they are approved; package inserts should be consulted for age-specific indications.
# Route
The intramuscular route is recommended for influenza vaccine. Adults and older children should be vaccinated in the deltoid muscle. A needle length of >1 inch can be considered for these age groups because needles <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (142). Infants and young children should be vaccinated in the anterolateral aspect of the thigh (47).
# Side Effects and Adverse Reactions
When educating patients regarding potential side effects, clinicians should emphasize that 1) inactivated influenza vaccine contains noninfectious killed viruses and cannot cause influenza; and 2) coincidental respiratory disease unrelated to influenza vaccination can occur after vaccination.
# Local Reactions
In placebo-controlled studies among adults, the most frequent side effect of vaccination is soreness at the vaccination site (affecting 10%-64% of patients) that lasts <2 days (13,143-145). These local reactions typically are mild and rarely interfere with the person's ability to conduct usual daily activities. One study (62) reported 20%-28% of asthmatic children aged 9 months-18 years had local pain and swelling, and another study (60) reported that 23% of children aged 6 months-4 years with chronic heart or lung disease had local reactions. A different study (59) reported no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years in a placebo-controlled trial of inactivated influenza vaccine. In a study of 12 children aged 5-32 months, no substantial local or systemic reactions were noted (146).
# Systemic Reactions
Fever, malaise, myalgia, and other systemic symptoms can occur after vaccination and most often affect persons who have had no prior exposure to the influenza virus antigens in the vaccine (e.g., young children) (147,148). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Recent placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (13,143-145).
Less information from published studies is available for children compared with adults. In a study of 791 healthy children (51), postvaccination fever was noted among 11.5% of children aged 1-5 years, 4.6% of children aged 6-10 years, and 5.1% of children aged 11-15 years. Among children at high risk, one study of 52 children aged 6 months-4 years reported fever among 27% and irritability and insomnia among 25% (60); a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (149). No placebo comparison was made in these studies. However, in pediatric trials of A/New Jersey/76 swine influenza vaccine, no difference occurred between placebo and split-virus vaccine groups in febrile reactions after injection, although the vaccine was associated with mild local tenderness or erythema (59). Limited data regarding potential adverse events after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). During January 1, 1991-July 16, 2001, VAERS received 789 reports of adverse events among children aged <18 years, including 89 reporting adverse events among children aged 6-23 months. The number of influenza vaccine doses received by children during this time period is unknown. The most frequently reported events were fever, injection-site reactions, and rash (unpublished data, CDC, 2001). Because of the limitations of spontaneous reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, is usually not possible by using VAERS data alone.
Immediate, presumably allergic, reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination (150). These reactions probably result from hypersensitivity to certain vaccine components; the majority of reactions probably are caused by residual egg protein. Although influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have experienced hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses to egg protein, might also be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician should be considered. Protocols have been published for safely administering influenza vaccine to persons with egg allergies (151,152).
Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, the majority of patients do not experience reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (153,154). When reported, hypersensitivity to thimerosal usually has consisted of local, delayed-type hypersensitivity reactions (153).
# Guillain-Barré Syndrome
The 1976 swine influenza vaccine was associated with an increased frequency of Guillain-Barré syndrome (GBS) (155,156). Among persons who received the swine influenza vaccine in 1976, the rate of GBS exceeded the background rate by slightly less than 10 cases/1,000,000 persons vaccinated, and the increased risk was greater among persons aged >25 years than among persons aged <25 years (155). Evidence for a causal relationship of GBS with subsequent vaccines prepared from other influenza viruses is unclear. Obtaining strong epidemiologic evidence for a possible limited increase in risk is difficult for such a rare condition as GBS, which has an annual incidence of 10-20 cases/1,000,000 adults (157), and stretches the limits of epidemiologic investigation. More definitive data probably will require the use of other methodologies (e.g., laboratory studies of the pathophysiology of GBS).
During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were slightly elevated but were not statistically significant in any of these studies (158-160). However, in a study of the 1992-1993 and 1993-1994 seasons, the overall relative risk for GBS was 1.7 (95% confidence interval = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately 1 additional case of GBS/1,000,000 persons vaccinated. The combined number of GBS cases peaked 2 weeks after vaccination (161). Thus, investigations to date indicate no substantial increase in GBS associated with influenza vaccines (other than the swine influenza vaccine in 1976) and that, if influenza vaccine does pose a risk, it is probably slightly more than 1 additional case/1,000,000 persons vaccinated. Cases of GBS after influenza infection have been reported, but no epidemiologic studies have documented such an association (162,163). Substantial evidence exists that several infectious illnesses, most notably Campylobacter jejuni infection, as well as upper respiratory tract infections, are associated with GBS (157,164-166).
Even if GBS were a true side effect of vaccination in the years after 1976, the estimated risk for GBS of approximately 1 additional case/1,000,000 persons vaccinated is substantially less than the risk for severe influenza, which could be prevented by vaccination among all age groups, and chiefly persons aged >65 years and those who have medical indications for influenza vaccination (Table 1) (see Hospitalizations and Deaths from Influenza). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death greatly outweigh the possible risks for developing vaccine-associated GBS. The average case-fatality ratio for GBS is 6% and increases with age (157,167). No evidence indicates that the case-fatality ratio for GBS differs among vaccinated persons and those not vaccinated.
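The scale of these risks can be illustrated with simple arithmetic; the cohort size below is hypothetical, and the rates are the ones cited above:

```python
# Illustrative arithmetic only; not part of the original recommendations.
cohort = 10_000_000                     # hypothetical number of persons vaccinated
excess_gbs_risk = 1 / 1_000_000         # ~1 additional case/1,000,000 vaccinated
background_incidence = 15 / 1_000_000   # midpoint of 10-20 cases/1,000,000 adults/year

print(f"Expected vaccine-associated excess GBS cases: {cohort * excess_gbs_risk:.0f}")       # ~10
print(f"Expected background GBS cases over one year:  {cohort * background_incidence:.0f}")  # ~150
```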
The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently developing GBS than persons without such a history (158,168). Thus, the likelihood of coincidentally developing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown; therefore, it is prudent to avoid vaccinating persons who are not at high risk for severe influenza complications and who are known to have developed GBS within 6 weeks after a previous influenza vaccination. As an alternative, physicians might consider the use of influenza antiviral chemoprophylaxis for these persons. Although data are limited, for the majority of persons who have a history of GBS and who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination.
# Simultaneous Administration of Other Vaccines, Including Childhood Vaccines
Adult target groups for influenza and pneumococcal polysaccharide vaccination overlap considerably (169). For persons at high risk who have not previously been vaccinated with pneumococcal vaccine, health-care providers should strongly consider administering pneumococcal polysaccharide and influenza vaccines concurrently. Both vaccines can be administered at the same time at different sites without increasing side effects (170,171). However, influenza vaccine is administered each year, whereas pneumococcal vaccine is not. A patient's verbal history is acceptable for determining prior pneumococcal vaccination status. When indicated, pneumococcal vaccine should be administered to patients who are uncertain regarding their vaccination history (169). No studies regarding the simultaneous administration of inactivated influenza vaccine and other childhood vaccines have been conducted. However, typically, inactivated vaccines do not interfere with the immune response to other inactivated or live vaccines (47), and children at high risk for influenza-related complications, including those aged 6-23 months, can receive influenza vaccine at the same time they receive other routine vaccinations.
# Strategies for Implementing These Recommendations in Health-Care Settings
Successful vaccination programs combine publicity and education for health-care workers and other potential vaccine recipients, a plan for identifying persons at high risk, use of reminder/recall systems, and efforts to remove administrative and financial barriers that prevent persons from receiving vaccine (19). Using standing orders programs is recommended for long-term care facilities (e.g., nursing homes and skilled nursing facilities) under the supervision of a medical director to ensure the administration of recommended vaccinations for adults. Other settings (e.g., inpatient and outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, adult workplaces, and home health-care agencies) are encouraged to introduce standing orders programs as well (20). Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.
# Outpatient Facilities Providing Ongoing Care
Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.
# Outpatient Facilities Providing Episodic or Acute Care
Beginning each September, acute health-care facilities (e.g., emergency rooms and walk-in clinics) should offer vaccinations to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.
# Nursing Homes and Other Residential Long-Term Care Facilities
During October and November each year, vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility or anytime afterwards. All residents should be vaccinated at one time, preceding the influenza season. Residents admitted through March after completion of the facility's vaccination program should be vaccinated at the time of admission.
# Acute-Care Hospitals
Persons of all ages (including children) with high-risk conditions and persons aged >50 years who are hospitalized at any time during September-March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. In one study, 39%-46% of patients hospitalized during the winter with influenza-related diagnoses had been hospitalized during the preceding autumn (172). Thus, the hospital is a setting in which persons at increased risk for subsequent hospitalization can be identified and vaccinated. Using standing orders in hospitals increases vaccination rates among hospitalized persons (173).
# Visiting Nurses and Others Providing Home Care to Persons at High Risk
Beginning in September, nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.
# Other Facilities Providing Services to Persons Aged >50 Years
Beginning in October, such facilities as assisted-living facilities, retirement communities, and recreation centers should offer unvaccinated residents and attendees vaccination on site before the influenza season. Staff education should emphasize the need for influenza vaccination.
# Health-Care Workers
Beginning in October each year, health-care facilities should offer influenza vaccinations to all personnel, including night and weekend staff. Particular emphasis should be placed on providing vaccinations for persons who care for members of groups at high risk. Efforts should be made to educate health-care workers regarding the benefits of vaccination and the potential health consequences of influenza illness for themselves and their patients. Measures should be taken to provide all health-care workers convenient access to influenza vaccination at the work site, free of charge, as part of employee health programs.
# Influenza Vaccine Supply
In 2000, difficulties with growing and processing the influenza A (H3N2) vaccine strain and other manufacturing problems resulted in substantial delays in the distribution of 2000-2001 influenza vaccine and fewer vaccine doses than were distributed in 1999 (174). In 2001, a less severe delay occurred. By December 2001, approximately 87.7 million doses of vaccine had been produced, more than in any year except the 1976-1977 swine influenza vaccine campaign (175). In July 2001, ACIP issued supplemental recommendations in anticipation of the delay in 2001-2002 vaccine distribution (176).
The possibility of future influenza vaccine delivery delays or vaccine shortages remains. Steps to address such situations include identification and implementation of ways to strengthen the influenza vaccine supply, to improve targeted delivery of vaccine to groups at high risk when delays or shortages are expected, and to encourage the administration of vaccine throughout the influenza season every year.
# Potential New Vaccine
Intranasally administered, cold-adapted, live, attenuated influenza virus vaccines (LAIVs) are being used in Russia and have been under development in the United States since the 1960s (177-181). LAIVs have been studied as monovalent, bivalent, and trivalent formulations (180,181). LAIVs consist of live viruses that replicate in the upper respiratory tract, that induce minimal symptoms (i.e., are attenuated) and that replicate poorly at temperatures found in the lower respiratory tract (i.e., are temperature-sensitive). Possible advantages of LAIVs are their potential to induce a broad mucosal and systemic immune response, ease of administration, and the acceptability of an intranasal rather than intramuscular route of administration. In a 5-year study that compared trivalent inactivated vaccine and bivalent LAIVs (administered by nose drops) and that used related but different vaccine strains, the two vaccines were found to be approximately equivalent in terms of effectiveness (51,182). In a 1996-1997 study of children aged 15-71 months, an intranasally administered trivalent LAIV was 93% effective in preventing culture-positive influenza A (H3N2) and B infections, reduced febrile otitis media among vaccinated children by 30%, and reduced otitis media with concomitant antibiotic use by 35% compared with unvaccinated children (183). In a follow-up study during the 1997-1998 season, the trivalent LAIV was 86% effective in preventing culture-positive influenza among children, despite a suboptimal match between the vaccine's influenza A (H3N2) component and the predominant circulating influenza A (H3N2) virus (184). A study conducted among healthy adults during the same season found a 9%-24% reduction in febrile respiratory illnesses and a 13%-28% reduction in lost work days (185). No study has directly compared the efficacy or effectiveness of trivalent inactivated vaccine and trivalent LAIV. An application for licensure of a LAIV is under review by FDA.
# Recommendations for Using Antiviral Agents for Influenza
Antiviral drugs for influenza are an adjunct to influenza vaccine for controlling and preventing influenza. However, these agents are not a substitute for vaccination. Four licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir.
Amantadine and rimantadine are chemically related antiviral drugs known as adamantanes with activity against influenza A viruses but not influenza B viruses. Amantadine was approved in 1966 for chemoprophylaxis of influenza A (H2N2) infection and was later approved in 1976 for treatment and chemoprophylaxis of influenza type A virus infections among adults and children aged >1 years. Rimantadine was approved in 1993 for treatment and chemoprophylaxis of infection among adults and prophylaxis among children. Although rimantadine is approved only for chemoprophylaxis of infection among children, certain experts in the management of influenza consider it appropriate for treatment among children (186).
Zanamivir and oseltamivir are chemically related antiviral drugs known as neuraminidase inhibitors, with activity against both influenza A and B viruses. Both zanamivir and oseltamivir were approved in 1999 for treating uncomplicated influenza infections. Zanamivir is approved for treating persons aged >7 years, and oseltamivir is approved for treating persons aged >1 years. In 2000, oseltamivir was approved for chemoprophylaxis of influenza among persons aged >13 years.
The four drugs differ in terms of their pharmacokinetics, side effects, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary side effects of these medications is presented in the following sections. Information contained in this report might not represent FDA approval or approved labeling for the antiviral agents described. Package inserts should be consulted for additional information.
# Role of Laboratory Diagnosis
Appropriate treatment of patients with respiratory illness depends on accurate and timely diagnosis. The early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to influenza, bacterial infections should be considered and appropriately treated if suspected. In addition, bacterial infections can occur as a complication of influenza.
Influenza surveillance information as well as diagnostic testing can aid clinical judgment and guide treatment decisions. The accuracy of clinical diagnosis of influenza based on symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza (28-30). Influenza surveillance by state and local health departments and CDC can provide information regarding the presence of influenza viruses in the community. Surveillance can also identify the predominant circulating types, subtypes, and strains of influenza.
Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, polymerase chain reaction (PCR), and immunofluorescence (24). Sensitivity and specificity of any test for influenza might vary by the laboratory that performs the test, the type of test used, and the type of specimen tested. Among respiratory specimens for viral isolation or rapid detection, nasopharyngeal specimens are typically more effective than throat swab specimens (187). As with any diagnostic test, results should be evaluated in the context of other clinical information available to the physician.
Commercial rapid diagnostic tests are available that can be used by laboratories in outpatient settings to detect influenza viruses within 30 minutes (24,188). These rapid tests differ in the types of influenza viruses they can detect and whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses but not distinguish between the two types; or 3) both influenza A and B and distinguish between the two. The types of specimens acceptable for use (i.e., throat swab, nasal wash, or nasal swab) also vary by test. The specificity and, in particular, the sensitivity of rapid tests are lower than for viral culture and vary by test. Because of the lower sensitivity of the rapid tests, physicians should consider confirming negative tests with viral culture or other means. Package inserts and the laboratory performing the test should be consulted for more details. Additional information regarding diagnostic testing is available at /diseases/flu/flu_dx_table.htm.
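The practical effect of lower sensitivity can be made concrete with standard predictive-value arithmetic; the sensitivity, specificity, and prevalence figures below are hypothetical values chosen for illustration, not measured characteristics of any licensed test:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a test applied to a population with the given
    disease prevalence; standard Bayes arithmetic."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Hypothetical rapid test during a period when 30% of tested patients have influenza:
ppv, npv = predictive_values(sensitivity=0.70, specificity=0.95, prevalence=0.30)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV ~0.86, NPV ~0.88
```

Under these assumptions, roughly 1 in 8 negative results would be falsely negative, which is why confirming negative rapid tests by viral culture is suggested while influenza is circulating.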
Despite the availability of rapid diagnostic tests, the collection of clinical specimens for viral culture is critical, because only culture isolates can provide specific information regarding circulating influenza subtypes and strains. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor the emergence of antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat.
# Indications for Use
# Treatment
When administered within 2 days of illness onset to otherwise healthy adults, amantadine and rimantadine can reduce the duration of uncomplicated influenza A illness, and zanamivir and oseltamivir can reduce the duration of uncomplicated influenza A and B illness by approximately 1 day compared with placebo (55,189-202). More clinical data are available concerning the efficacy of zanamivir and oseltamivir for treatment of influenza A infection than for treatment of influenza B infection (191-206). However, in vitro data and studies of treatment among mice and ferrets (207-214), in addition to clinical studies, have documented that zanamivir and oseltamivir have activity against influenza B viruses (195,199-201,205,206).
None of the four antiviral agents has been demonstrated to be effective in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases). Evidence for the effectiveness of these four antiviral drugs is based principally on studies of patients with uncomplicated influenza (215). Data are limited and inconclusive concerning the effectiveness of amantadine, rimantadine, zanamivir, and oseltamivir for treatment of influenza among persons at high risk for serious complications of influenza (189,191,192,194,195,202,216-220). Fewer studies of the efficacy of influenza antivirals have been conducted among pediatric populations compared with adults (189,192,198,199,218,221,222). One study of oseltamivir treatment documented a decreased incidence of otitis media among children (199).
To reduce the emergence of antiviral drug-resistant viruses, amantadine or rimantadine therapy for persons with influenza A illness should be discontinued as soon as clinically warranted, typically after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days.
# Chemoprophylaxis
Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in the prevention and control of influenza. Both amantadine and rimantadine are indicated for the chemoprophylaxis of influenza A infection, but not influenza B. Both drugs are approximately 70%-90% effective in preventing illness from influenza A infection (55,189,218). When used as prophylaxis, these antiviral agents can prevent illness while permitting subclinical infection and the development of protective antibody against circulating influenza viruses. Therefore, certain persons who take these drugs will develop protective immune responses to circulating influenza viruses. Amantadine and rimantadine do not interfere with the antibody response to the vaccine (189). Both drugs have been studied extensively among nursing home populations as a component of influenza outbreak control programs, which can limit the spread of influenza within chronic-care institutions (189,217,223-225).
Among the neuraminidase inhibitor antivirals, zanamivir and oseltamivir, only oseltamivir has been approved for prophylaxis, but community studies of healthy adults indicate that both drugs are similarly effective in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (226,227). Both antiviral agents have also been reported to prevent influenza illness among persons given chemoprophylaxis after a household member was diagnosed with influenza (205,228). Experience with prophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited in comparison with the adamantanes (201,220,229-232). One 6-week study of oseltamivir prophylaxis among nursing home residents reported a 92% reduction in influenza illness (201,233). Use of zanamivir has not been reported to impair the immunologic response to influenza vaccine (200,234). Data are not available on the efficacy of any of the four antiviral agents in preventing influenza among severely immunocompromised persons.
When determining the timing and duration for administering influenza antiviral medications for prophylaxis, factors related to cost, compliance, and potential side effects should be considered. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, one study of amantadine and rimantadine prophylaxis reported that, to be most cost-effective, the drugs should be taken only during the period of peak influenza activity in a community (235).
Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun. Persons at high risk for complications of influenza still can be vaccinated after an outbreak of influenza has begun in a community. However, the development of antibodies in adults after vaccination can take approximately 2 weeks (134,135). When influenza vaccine is administered while influenza viruses are circulating, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed. Children aged <9 years who receive influenza vaccine for the first time can require 6 weeks of prophylaxis (i.e., prophylaxis for 4 weeks after the first dose of vaccine and an additional 2 weeks of prophylaxis after the second dose).
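The bridging interval reduces to simple arithmetic. The sketch below is illustrative only; the 2-week figure is documented above for adults, and applying it to all other recipients is an assumption:

```python
def chemoprophylaxis_weeks(age_years: float, first_ever_influenza_vaccination: bool) -> int:
    """Weeks of antiviral chemoprophylaxis to bridge the interval between
    vaccination and the development of immunity, per the text above."""
    if age_years < 9 and first_ever_influenza_vaccination:
        return 6  # 4 weeks after the first dose + 2 weeks after the second dose
    return 2      # ~2 weeks to develop antibodies after vaccination (adult data)
```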
Persons Who Provide Care to Those at High Risk. To reduce the spread of virus to persons at high risk during community or institutional outbreaks, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact include employees of hospitals, clinics, and chronic-care facilities, household members, visiting nurses, and volunteer workers. If an outbreak is caused by a variant strain of influenza that might not be controlled by the vaccine, chemoprophylaxis should be considered for all such persons, regardless of their vaccination status.
Persons Who Have Immune Deficiency. Chemoprophylaxis can be considered for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, chiefly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered.
Other Persons. Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Chemoprophylaxis can also be offered to persons who wish to avoid influenza illness. Health-care providers and patients should make this decision on an individual basis.
# Control of Influenza Outbreaks in Institutions
Using antiviral drugs for treatment and prophylaxis of influenza is a key component of institutional outbreak control. In addition to using antiviral medications, other outbreak control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, re-offering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (236-238) (for additional information regarding outbreak control in specific settings, see Additional Information Regarding Influenza Infection Control Among Specific Populations).
The majority of published reports concerning the use of antiviral agents to control institutional influenza outbreaks are based on studies of influenza A outbreaks among nursing home populations where amantadine or rimantadine were used (189,217,223-225,235). Less information is available concerning the use of neuraminidase inhibitors in influenza A or B institutional outbreaks (220,231,233). When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice is useful.
When institutional outbreaks occur, chemoprophylaxis should be administered to all residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for >2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 1 week after the end of the outbreak. The dosage for each resident should be determined individually. Chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza that is not well-matched by the vaccine.
In addition to nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories or other settings where persons live in close proximity). For example, chemoprophylaxis with rimantadine has been used successfully to control an influenza A outbreak aboard a cruise ship (239).
To limit the potential transmission of drug-resistant virus during institutional outbreaks, whether in chronic-care or acute-care settings or other closed settings, measures should be taken to reduce contact as much as possible between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis (see Antiviral Drug-Resistant Strains of Influenza).
# Dosage
Dosage recommendations vary by age group and medical conditions (Table 4).
# Children

Amantadine. Use of amantadine among children aged <1 year has not been adequately evaluated. The FDA-approved dosage for children aged 1-9 years is 4.4-8.8 mg/kg/day, not to exceed 150 mg/day. The approved dosage for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is advisable (219,240).

Rimantadine. Rimantadine is approved for prophylaxis among children aged >1 years and for treatment and prophylaxis among adults. Although rimantadine is approved only for prophylaxis of infection among children, certain specialists in the management of influenza consider rimantadine appropriate for treatment among children (186). Use of rimantadine among children aged <1 year has not been adequately evaluated. The FDA-approved dosage for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is recommended (241).
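The weight-based rimantadine rule reduces to a small helper (a hypothetical function for illustration; the text above does not address children aged <10 years who weigh >40 kg, and the package insert governs prescribing):

```python
def rimantadine_daily_dose_mg(weight_kg: float) -> float:
    """Daily rimantadine dose per the rule above: 5 mg/kg/day regardless of
    age for children weighing <40 kg; otherwise 200 mg/day (100 mg twice a
    day) for persons aged >=10 years."""
    if weight_kg < 40:
        return 5 * weight_kg  # e.g., a 20-kg child receives 100 mg/day
    return 200
```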
Zanamivir. Zanamivir is approved for treatment among children aged >7 years. The recommended dosage of zanamivir for treatment of influenza is two inhalations (one 5-mg blister per inhalation for a total dose of 10 mg) twice daily (approximately 12 hours apart) (200).
Oseltamivir. Oseltamivir is approved for treatment among persons aged >1 years and for chemoprophylaxis among persons aged >13 years. Recommended treatment dosages for children vary by the weight of the child: for children who weigh <15 kg, the dosage is 30 mg twice a day; for children who weigh >15-23 kg, the dosage is 45 mg twice a day; for those weighing >23-40 kg, the dosage is 60 mg twice a day; and for children weighing >40 kg, the dosage is 75 mg twice a day. The treatment dosage for persons aged >13 years is 75 mg twice daily, and the recommended prophylaxis dosage for persons aged >13 years is 75 mg once a day (201).
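The weight bands translate directly into a lookup (an illustrative sketch; the function name is hypothetical and the package insert governs dosing):

```python
def oseltamivir_treatment_dose_mg(weight_kg: float) -> int:
    """Twice-daily oseltamivir treatment dose (mg) for a child aged >=1 year,
    using the weight bands described above (15/23/40 kg cut points)."""
    if weight_kg <= 15:
        return 30
    if weight_kg <= 23:
        return 45
    if weight_kg <= 40:
        return 60
    return 75  # same as the adult and adolescent treatment dose
```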
# Persons Aged >65 Years
Amantadine. The daily dosage of amantadine for persons aged >65 years should not exceed 100 mg for prophylaxis or treatment, because renal function declines with increasing age. For certain older persons, the dosage should be further reduced.
Rimantadine. Among older persons, the incidence and severity of central nervous system (CNS) side effects are substantially lower among those taking rimantadine at a dosage of 100 mg/day than among those taking amantadine at dosages adjusted for estimated renal clearance (242). However, chronically ill older persons have had a higher incidence of CNS and gastrointestinal symptoms and serum concentrations two to four times higher than among healthy, younger persons when rimantadine has been administered at a dosage of 200 mg/day (189).
For prophylaxis among persons aged >65 years, the recommended dosage is 100 mg/day. For treatment of older persons in the community, a reduction in dosage to 100 mg/day should be considered if they experience side effects when taking a dosage of 200 mg/day. For treatment of older nursing home residents, the dosage of rimantadine should be reduced to 100 mg/day (241).
Zanamivir and Oseltamivir. No reduction in dosage is recommended on the basis of age alone.
# Persons with Impaired Renal Function
Amantadine. A reduction in dosage is recommended for patients with creatinine clearance <50 mL/min/1.73 m2. Guidelines for amantadine dosage on the basis of creatinine clearance are found in the package insert. Because recommended dosages on the basis of creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully for adverse reactions. If necessary, further reduction in the dose or discontinuation of the drug might be indicated because of side effects. Hemodialysis contributes minimally to amantadine clearance (240,243).
Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance <10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including older persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary. Hemodialysis contributes minimally to drug clearance (244).
Zanamivir. Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were observed (200,245). However, a limited number of healthy volunteers who were administered high doses of intravenous zanamivir tolerated systemic levels of zanamivir that were substantially higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (246,247). On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild-to-moderate or severe impairment in renal function (200).
Oseltamivir. Serum concentrations of oseltamivir carboxylate (GS4071), the active metabolite of oseltamivir, increase with declining renal function (201,204). For patients with creatinine clearance of 10-30 mL/min (201), a reduction of the treatment dosage of oseltamivir to 75 mg once daily and of the prophylaxis dosage to 75 mg every other day is recommended. No treatment or prophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment.
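The oseltamivir adjustments amount to a two-branch lookup (a sketch only; the branch for creatinine clearance <10 mL/min reflects the absence of a recommendation noted above and is an assumption rather than guidance):

```python
def oseltamivir_adult_dosing(creatinine_clearance_ml_min: float, indication: str) -> str:
    """Adult oseltamivir dosing per the renal adjustments described above;
    indication is 'treatment' or 'prophylaxis'. Illustrative only."""
    if creatinine_clearance_ml_min > 30:
        return "75 mg twice daily" if indication == "treatment" else "75 mg once daily"
    if creatinine_clearance_ml_min >= 10:
        return "75 mg once daily" if indication == "treatment" else "75 mg every other day"
    return "no dosing recommendation available (e.g., patients on routine renal dialysis)"
```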
# Persons with Liver Disease
Amantadine. No increase in adverse reactions to amantadine has been observed among persons with liver disease. Rare instances of reversible elevation of liver enzymes among patients receiving amantadine have been reported, although a specific relationship between the drug and such changes has not been established (248).
Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with severe hepatic dysfunction.
Zanamivir and Oseltamivir. Neither of these medications has been studied among persons with hepatic dysfunction.
# Persons with Seizure Disorders
Amantadine. An increased incidence of seizures has been reported among patients with a history of seizure disorders who have received amantadine (249). Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine.
Rimantadine. Seizures (or seizure-like activity) have been reported among persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine (250). The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated.
Zanamivir and Oseltamivir. Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use.
# Route
Amantadine, rimantadine, and oseltamivir are administered orally. Amantadine and rimantadine are available in tablet or syrup form, and oseltamivir is available in capsule or oral suspension form (178,179). Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients will benefit from instruction and demonstration of correct use of this device (200).
# Pharmacokinetics
# Amantadine
Approximately 90% of amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion (223,251-254). Thus, renal clearance of amantadine is reduced substantially among persons with renal insufficiency, and dosages might need to be decreased (see Dosage) (Table 4).
# Rimantadine
Approximately 75% of rimantadine is metabolized by the liver (218). The safety and pharmacokinetics of rimantadine among persons with liver disease have been evaluated only after single-dose administration (218,255). In a study of persons with chronic liver disease (the majority with stabilized cirrhosis), no alterations in liver function were observed after a single dose. However, for persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease (241).
Rimantadine and its metabolites are excreted by the kidneys. The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration (218,244). Further studies are needed to determine multiple-dose pharmacokinetics and the most appropriate dosages for patients with renal insufficiency. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater than that among healthy persons of the same age (244). Hemodialysis did not contribute to drug clearance. In studies of persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher than those among control patients without renal disease who were the same weight, age, and sex (241,256).
# Zanamivir
In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (200,257). Approximately 4%-17% of the total amount of orally inhaled zanamivir is systemically absorbed. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (200,247).
# Oseltamivir
Approximately 80% of orally administered oseltamivir is absorbed systemically (204). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (201,258). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (258).
# Side Effects and Adverse Reactions
When considering the use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 4); presence of other medical conditions; indications for use (i.e., prophylaxis or therapy); and the potential for interaction with other medications.
# Amantadine and Rimantadine
Both amantadine and rimantadine can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day. However, incidence of CNS side effects (e.g., nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine than among those taking rimantadine (259). In a 6-week study of prophylaxis among healthy adults, approximately 6% of participants taking rimantadine at a dosage of 200 mg/day experienced >1 CNS symptoms, compared with approximately 13% of those taking the same dosage of amantadine and 4% of those taking placebo (259). A study of older persons also demonstrated fewer CNS side effects associated with rimantadine compared with amantadine (242). Gastrointestinal side effects (e.g., nausea and anorexia) occur in approximately 1%-3% of persons taking either drug, compared with 1% of persons receiving the placebo (259).
Side effects associated with amantadine and rimantadine are usually mild and cease soon after discontinuing the drug. Side effects can diminish or disappear after the first week, despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures) (240,249). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among older persons who have been taking amantadine as prophylaxis at a dosage of 200 mg/day (223). Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects (Table 4). In acute overdosage of amantadine, CNS, renal, respiratory, and cardiac toxicity, including arrhythmias, have been reported (240). Because rimantadine has been marketed for a shorter period than amantadine, its safety among certain patient populations (e.g., chronically ill and elderly persons) has been evaluated less frequently. Because amantadine has anticholinergic effects and might cause mydriasis, it should not be used for patients with untreated angle-closure glaucoma (240).
# Zanamivir
In a study of zanamivir treatment of influenza-like illness among persons with asthma or chronic obstructive pulmonary disease where study medication was administered after using a beta2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (200,202). However, in a study of persons with mild or moderate asthma who did not have influenza-like illness, 1 of 13 patients experienced bronchospasm after administration of zanamivir (200). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Certain patients had underlying airways disease (e.g., asthma or chronic obstructive pulmonary disease). Because of the risk for serious adverse events and because the efficacy has not been demonstrated among this population, zanamivir is generally not recommended for treatment for patients with underlying airway disease (200). If physicians decide to prescribe zanamivir to patients with underlying chronic respiratory disease after carefully considering potential risks and benefits, the drug should be used with caution under conditions of proper monitoring and supportive care, including the availability of short-acting bronchodilators (215). Patients with asthma or chronic obstructive pulmonary disease who use zanamivir are advised to 1) have a fast-acting inhaled bronchodilator available when inhaling zanamivir and 2) stop using zanamivir and contact their physician if they develop difficulty breathing (200). No clear evidence is available regarding the safety or efficacy of zanamivir for persons with underlying respiratory or cardiac disease or for persons with complications of acute influenza (215). Allergic reactions, including oropharyngeal or facial edema, have also been reported during postmarketing surveillance (200,220).
In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and those receiving placebo (i.e., inhaled lactose vehicle alone) (190-195,220). The most common adverse events reported by both groups were diarrhea; nausea; sinusitis; nasal signs and symptoms; bronchitis; cough; headache; dizziness; and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (200).
# Oseltamivir
Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (196,197,201,260). Among children treated with oseltamivir, 14.3% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% discontinued the drug secondary to this side effect (199), whereas a limited number of adults enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (201). Similar types and rates of adverse events were found in studies of oseltamivir prophylaxis (201). Nausea and vomiting might be less severe if oseltamivir is taken with food (201,260).
# Use During Pregnancy
No clinical studies have been conducted regarding the safety or efficacy of amantadine, rimantadine, zanamivir, or oseltamivir for pregnant women; only two cases of amantadine use for severe influenza illness during the third trimester have been reported (105,106). However, both amantadine and rimantadine have been demonstrated in animal studies to be teratogenic and embryotoxic when administered at very high doses (240,241). Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these four drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus (see package inserts for additional information).
# Drug Interactions
Careful observation is advised when amantadine is administered concurrently with drugs that affect CNS, especially CNS stimulants. Concomitant administration of antihistamines or anticholinergic drugs can increase the incidence of adverse CNS reactions (189). No clinically significant interactions between rimantadine and other drugs have been identified.
Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically important drug interactions have been predicted on the basis of in vitro data and data from studies involving rats (200,261).
Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway. For example, coadministration of oseltamivir and probenecid resulted in reduced clearance of oseltamivir carboxylate by approximately 50% and a corresponding approximate twofold increase in the plasma levels of oseltamivir carboxylate (201,258).
No published data are available concerning the safety or efficacy of using combinations of any of these four influenza antiviral drugs. For more detailed information concerning potential drug interactions for any of these influenza antiviral drugs, package inserts should be consulted.
# Antiviral Drug-Resistant Strains of Influenza
Amantadine-resistant viruses are cross-resistant to rimantadine and vice versa (262). Drug-resistant viruses can appear in approximately one third of patients when either amantadine or rimantadine is used for therapy (222,263,264). During the course of amantadine or rimantadine therapy, resistant influenza strains can replace sensitive strains within 2-3 days of starting therapy (263,265). Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy (266,267); however, the frequency with which resistant viruses are transmitted and their impact on efforts to control influenza are unknown. Amantadine- and rimantadine-resistant viruses are not more virulent or transmissible than sensitive viruses (268). The screening of epidemic strains of influenza A has rarely detected amantadine- and rimantadine-resistant viruses (263,269,270).
Persons who have influenza A infection and who are treated with either amantadine or rimantadine can shed sensitive viruses early in the course of treatment and later shed drug-resistant viruses, especially after 5-7 days of therapy (222). Such persons can benefit from therapy even when resistant viruses emerge.
Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (271-278), but induction of resistance requires multiple passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (279,280). Development of viral resistance to zanamivir and oseltamivir during treatment has been identified but does not appear to be frequent (201,281-284). In clinical treatment studies using oseltamivir, 1.3% of posttreatment isolates from patients aged >13 years and 8.6% of those from patients aged 1-12 years had decreased susceptibility to oseltamivir (201). No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (285), and the risk for emergence of zanamivir-resistant isolates cannot be quantified (200). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (282). Available diagnostic tests are not optimal for detecting clinical resistance to the neuraminidase inhibitor antiviral drugs, and additional tests are being developed (285,286). Postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted (287).
# Sources of Information Regarding Influenza and Its Surveillance
Information regarding influenza surveillance, prevention, detection, and control is available on CDC/NCID's website. Surveillance information is available through the CDC Voice Information System (influenza update) at 888-232-3228 or the CDC Fax Information Service at 888-232-3299. During October-May, surveillance information is updated at least every other week. In addition, periodic updates regarding influenza are published in the weekly MMWR. Additional information regarding influenza vaccine can be obtained from CDC/NIP's website or by calling the NIP hotline at 800-232-2522 (English) or 800-232-0233 (Spanish). State and local health departments should be consulted concerning availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks, and advice concerning outbreak control.
# Additional Information Regarding Influenza Infection Control Among Specific Populations
Each year, ACIP provides general, annually updated information regarding the control and prevention of influenza. Other reports on the control and prevention of influenza among specific populations (e.g., immunocompromised persons, health-care workers, hospitals, and travelers) are also available in the following publications:
- Garner JS, for the Hospital Infection Control Practices Advisory Committee.
# MMWR
The Morbidity and Mortality Weekly Report (MMWR) series is prepared by the Centers for Disease Control and Prevention (CDC) and is available free of charge in electronic format and on a paid subscription basis for paper copy. To receive an electronic copy on Friday of each week, send an e-mail message to [email protected]; the body of the message should read "SUBscribe mmwr-toc". Electronic copy also is available from CDC's Internet server or from CDC's file transfer protocol server at ftp/Publications/mmwr. To subscribe for paper copy, contact Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402; telephone 202-512-1800.
Data in the weekly MMWR are provisional, based on weekly reports to CDC by state health departments. The reporting week concludes at close of business on Friday; compiled data on a national basis are officially released to the public on the following Friday. Address inquiries about the MMWR series, including material to be considered for publication, to Editor, MMWR Series, Mailstop C-08, CDC, 1600 Clifton Rd., N.E., Atlanta, GA 30333; telephone 888-232-3228.
All material in the MMWR series is in the public domain and may be used and reprinted without permission; however, citation of the source is appreciated.
"id": "23a66917a8d14bec2441e0d0288e405cdcd76e07",
"source": "cdc",
"title": "None",
"url": "None"
} |
# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR ASBESTOS
NIOSH recommends that worker exposure to asbestos dust in the workplace be controlled by requiring compliance with the following sections. Control of worker exposure to the limits stated will prevent asbestosis and more adequately guard against asbestos-induced neoplasms. The standard is amenable to techniques that are valid, reproducible, and available to industry and governmental agencies. It will be subject to review and will be revised as necessary. For a period following publication, the present emergency standard for exposure to asbestos dust (29 CFR 1910.93a) shall be in effect. This period is believed necessary to permit installation of necessary engineering controls.
Medical surveillance is required, except where a variance from the medical requirements of this proposed standard has been granted, for all workers who are exposed to asbestos as part of their work environment.
For purposes of this requirement the term "exposed to asbestos" will be interpreted as referring to time-weighted average exposures above 1 fiber/cc or peak exposures above 5 fibers/cc. The major objective of such surveillance will be to ensure proper medical management of individuals who show evidence of reaction to past dust exposures, whether due to excessive exposures or unusual susceptibility. Medical management may range from recommendations as to job placement, improved work practices, and cessation of smoking, to specific therapy for asbestos-related disease or its complications. Medical surveillance cannot be a guide to the adequacy of current controls when environmental data and medical examinations cover only recent work experience, because of the prolonged latent period required for the development of asbestosis and neoplasms.
Required components of a medical surveillance program include periodic measurements of pulmonary function (forced vital capacity (FVC) and forced expiratory volume in one second (FEV1)) and periodic chest roentgenograms (postero-anterior, 14 x 17 inches). Additional medical requirement components include a history to describe smoking habits and details on past exposures to asbestos and other dusts and to determine presence or absence of pulmonary, cardiovascular, and gastrointestinal symptoms, and a physical examination, with special attention to pulmonary rales, clubbing of fingers, and other signs related to cardiopulmonary systems.

(ii) The employer shall provide for maintenance and laundering of the soiled protective clothing, which shall be stored, transported and disposed of in sealed non-reusable containers marked "Asbestos-Contaminated Clothing" in easy-to-read letters.

(iii) Protective clothing shall be vacuumed before removal.
Clothes shall not be cleaned by blowing dust from the clothing or shaking.
(iv) If laundering is to be done by a private contractor, the employer shall inform the contractor of the potentially harmful effects of exposure to asbestos dust and of safe practices required in the laundering of the asbestos-soiled work clothes.
(v) Resin-impregnated paper or similar protective clothing can be substituted for the fabric type of clothing.
(vi) It is recommended that in highly contaminated operations (such as insulation and textiles) provisions be made for separate change rooms.
Each employee exposed to asbestos shall be apprised of all hazards, relevant symptoms, and proper conditions and precautions concerning use or exposure. Each exposed worker shall be informed of the information which is applicable to a specific product or material containing 5% or more asbestos (see Appendix III for details of information required).
The information shall be kept on file and readily accessible to the worker at all places of employment where asbestos materials are manufactured or used in unit processes and operations. It is recommended, but not required, that this information be provided for asbestos processes and operations where the asbestos content is less than 5%.
Information as specified in Appendix III shall be recorded on U.S. Department of Labor Form OSHA-20, "Material Safety Data Sheet" (see pages X-3 and X-4), or a similar form approved by the Occupational Safety and Health Administration, U.S. Department of Labor.
Employers will be required to maintain records of environmental exposure to asbestos based upon the following environmental sampling and recordkeeping schedule. Personal exposure samples will be collected at least annually from a number of employees in specific maximum-risk work operations. The first sampling period will be completed within 180 days of the date of this standard. These selected samples will be collected and evaluated as both time-weighted and peak concentration values. The personal sampling regime shall be on a quarterly basis for maximum-risk work areas under the following conditions:
(a) The environmental levels are in excess of the standard.
(b) There are other conditions existing that necessitate the requesting of a variance from the Department of Labor.
Records of the type of respiratory protection in use during the quarterly sampling schedule must also be maintained.

The recommended standard is designed primarily to prevent asbestosis.
For other diseases associated with asbestos, there is insufficient information to establish a standard that would prevent such diseases, including asbestos-induced neoplasms, by any all-inclusive limit other than one of zero. Nevertheless, a safety factor has been included in arriving at the concentration level; this will reduce the total body burden and should more adequately guard against neoplasms.
Asbestos has been mined, milled, processed, and used for many years, and as a result, a number of workers have experienced significant accumulative exposure to asbestos dust over a working lifetime.
It has been recognized that biological monitoring (by periodic chest roentgenograms) and removal from further exposure after initiation of fibrosis, calcification or neoplasia will not absolutely prevent the progression of asbestos-related disease.

The construction industry has, in recent years, applied asbestos insulation materials by spraying, a method of application that generates more airborne asbestos fibers than older conventional methods. This technique at present utilizes only a small percentage of the total asbestos produced and its use is decreasing.
There are approximately 40,000 field insulation workers in the United States who are exposed to asbestos dust. The activities of these workers cause secondary exposures to an estimated three to five million other building construction and shipyard workers (2).
Since the dust exposure to the individual worker is extremely variable and the number of asbestos workers at any one location is small, the primary and secondary asbestos dust exposures to all workers have never been satisfactorily estimated.
An estimated 50,000 workers are involved in the manufacture of asbestos-containing products. This figure does not include secondary manufacture of products which contain asbestos, such as electrical or thermal insulation, or products which include previously manufactured components containing asbestos.
# Early Historical Reports
The widespread use of asbestos fibers did not begin until the last quarter of the nineteenth century (2).
With the increasing use of asbestos materials and increasing reports of asbestos related disease there developed concern over the role of these minerals as factors in human disease. Differentiation of the type of asbestos fiber was not made in most studies related to occupational exposure.
In the United States the exposures of greatest concern usually involve more than one type of fiber, although chrysotile predominates. To refine our knowledge of the biological actions of asbestos, it is imperative that the character of the exposure as to concentration, size, and type of fiber be known. At present, data of this complexity are scanty or often non-existent with respect to human exposure. Although a large percentage of the lungs of adult urban dwellers may be found to contain ferruginous bodies (depending on the method of examination), the significance of this is as yet unknown.
♦"Ferruginous bodies" is a more descriptive term. This and other aspects of the biologic effects of asbestos are well documented in the Annals of the New York Academy of Science.
The core fibers have not been systematically identified to indicate how many are asbestos bodies, and there are little data bearing on possible health effects associated with the low concentrations of fibers found in ambient air.
An abnormality occurring with unusually great frequency in populations exposed to inhalation of asbestos fiber is that of localized thickening, or plaques, of the pleura, with or without calcification of the plaques. The role of the asbestos fiber in this manifestation is not clear.
The medical aspects of exposure to asbestos and the development of the occupational disease, asbestosis, are characterized by:
(1) A pattern of roentgenographic changes consistent with diffuse interstitial fibrosis of variable degree and, at times, pleural changes of fibrosis and calcification.
(2) Clinical changes including fine rales and finger clubbing.
These may be present or absent in any individual case.
(3) Physiological changes consistent with a lung disorder.
(4) A known history of occupational exposure to airborne asbestos dust. In general, there is a considerable time lapse between inhalation of the dust and appearance of changes as determined by X-ray. Neoplasms, such as mesothelioma, may occur without radiological evidence of asbestosis at exposure levels lower than those required for prevention of radiologically evident asbestosis. This may be of particular importance when consideration is given to short-term, high levels of exposure, and may result in the development of mesothelioma before or after completion of a normal span of work either in or out of the asbestos industry. This is illustrated by several case studies, including two cases of malignant mesothelioma, one a "family" and the other a "neighborhood" case (26).
In another "family" case, a woman washed the overalls of her three daughters at home; all three daughters worked for an asbestos company with possible heavy exposures to asbestos.
The time lapse between onset of exposure and mesothelioma in 344 deaths among asbestos insulation workers was studied. Mesothelioma developed after a longer lapse of time from onset of exposure to asbestos than was the case in the development of asbestosis (Table XXVIII). One observer noted that in most cases of mesothelioma he had seen, asbestosis was not found; he postulated that the difference lay in the long periods of exposure required to produce asbestosis, whereas mesothelioma could occur long after a short intensive exposure. The 27 cases of mesothelioma in children under 19 years of age indicate that the latent period for development of mesothelioma may be shorter than first estimated (29).

Fifteen cases (30) of pleural mesothelioma associated with occupational exposure were reported in Australia. The relationship between mesothelioma development and asbestos was based upon occupational histories and the finding of asbestos bodies in the tissue. In some of these cases, the relationship to occupational exposure could not be established with any degree of certainty, but the series included patients whose exposure was as short as six months. No patient was regarded clinically or radiologically as suffering from asbestosis; one person had pleural plaques that were radiologically visible.
Stumphius, between 1962 and 1968, found 25 cases of mesothelioma on Walcheren Island. Of these cases, 22 had been employed in the shipyard trades. Stumphius noted that the shipyard employed about 3000 men. This would result in a rate of mesothelioma of approximately 100 per 100,000 males per year. He also noted that the rate for Dutch provinces with heavy industry is 1.0 per 100,000 per year. In the same study, examination of sputum from 277 shipyard workers showed that 60% had asbestos bodies. The frequency varied from 39% of those with no obvious exposure to 100% among those with slight but definite asbestos exposure.

Animal experiments did produce in rats malignant pulmonary tumors of several types from exposure at very high doses (ca. 22,000 fibers/cc; 86 mg/m³) of chrysotile asbestos that had been hammer-milled, a process that increased the cobalt content by 145%, nickel by 82%, and chromium by 34%.
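For the Stumphius figures quoted above, the rate arithmetic can be reconstructed roughly as follows; the assumptions (all 22 shipyard cases counted, a stable workforce of about 3,000 men, and a 7-year observation period, 1962-1968) are ours, not stated in the source:

```latex
\text{rate} \approx \frac{22~\text{cases}}{3000~\text{men} \times 7~\text{yr}}
            \approx 1.05 \times 10^{-3}~\text{per man-year}
            \approx 105~\text{per } 100{,}000~\text{per year}
```

This is consistent with the "approximately 100 per 100,000 males per year" cited in the text.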
Differences in animal responses to "harsh" and "soft" chrysotile asbestos were seen by Smith et al.: granulomatous and fibrous pleural adhesions were thicker, and pleural mesotheliomas appeared more rapidly, in response to harsh chrysotile. (Harsh chrysotile was characterized as appearing in thicker bundles and was hydrophobic, whereas the soft chrysotile was hydrophilic.)
There are no experimental animal dose-response data that can be used in estimating a workplace air standard for asbestos.
# Contributions to Occupational Exposure Standards from Animal Studies
Of possible value in estimating occupational exposure limits are data regarding the relative disease-producing potency of the various forms and types of asbestos.

(1) Fiber length and bundle size. The relation between the length of fibers, the ratio of fibers to motes (nonfibrous particles), and asbestos-induced disease has been one of continuing experimental inquiry.
Gardner and Cummings and Gardner found that longer fibers appeared to have a greater fibrogenic effect, although fibrosis developed in animals exposed to dusts which were composed of but one to 1.5 percent fibers. The high exposure concentration of 100 mppcf (ca. 3,600 fibers/cc) makes any decision on the relative potency of fibers vs. motes virtually impossible; however, when animals were exposed to short-fiber asbestos dust, although the type and rate of tissue reaction were essentially the same, the extent of involvement was very much less than that of longer fibers. Inasmuch as exposure concentrations in these comparable studies were about the same, the conclusion can reasonably be made that longer fibers are more fibrogenic, but that the motes are not without fibrogenic potential.
In experiments with rabbits, King, Clegg, and Rae, using Rhodesian chrysotile fibers averaging 2.5 µm and 15 µm in length, concluded that the shorter fibers produced generalized interstitial fibrosis, whereas the longer fibers produced nodular lesions. This finding was not confirmed by one of the investigators (King) in another animal species. Later repetition of the investigations, with "fine" chrysotile and amosite (85% and 82.6% respectively, less than 1 µm in length), by Wagner yielded definite fibrosis with both dusts, thus confirming the original work of Gardner that short fibers or motes have fibrogenic potential.
This experimental work has significance for industrial air standards in indicating the need to support additional research on the "greater than 5 µm in length" specific requirement and the more general relation of fiber length to cancer induction, which has never been determined experimentally.
(2) Cytotoxicity. Both chrysotile and crocidolite were found to be markedly toxic to guinea pig macrophages in vitro. The fibrous fraction showed a high, and the particulate, a moderate toxicity, thus providing evidence in conformity with the relative biologic potencies of fibrous and nonfibrous forms found in in vivo studies.
(3) Hemolytic Activity. In a similar effort to discover the initial stages of biologic activity of asbestos, and in particular to account for the iron-staining character of asbestos bodies, the hemolytic action of four asbestos types was determined. Whereas chrysotile proved to be potently hemolytic, crocidolite, amosite and anthophyllite were either completely inactive or only weakly active. No attempt was made, however, to correlate the greater hemolytic activity of chrysotile with the iron-staining intensity of its asbestos bodies relative to those from other asbestos forms.
(4) Asbestos Hydrocarbons. As chrysotile proved to be most adsorptive of iron, so was it most adsorptive of benzpyrene; compared with 100% adsorption for chrysotile, crocidolite and amosite adsorbed from solution 40% and 10% respectively. On this basis, chrysotile should prove the most potent cocarcinogen of the three forms if its action is mediated through exogenous benzpyrene. This has not been demonstrated as yet in humans. A 10% desorption from chrysotile by serum in three days was demonstrated, a condition considered an essential first step in hydrocarbon carcinogenesis. In respect to asbestos bodies, it should be noted that "ferruginous bodies" produced in guinea pigs in response to other fibrous materials, fine fibrous glass and ceramic aluminum silicate, were identical in fine structure to those of asbestos bodies, thus rendering firm diagnostic decisions difficult in cases of multiexposures to different fibrogenic fibers in the electron and light microscopic range.

McDonald assumes that the fiber content of the dust is about 10%, and he states that this is equivalent to about 12 fibers/cc. Wright pointed out that others have noted the striking differences in the health experiences of workers in mines and mills as compared to other workers, specifically in comparison to insulation operations, but that he felt the question was still unresolved. In contrast to populations exposed to mixed environments, those engaged in the mining and milling of asbestos fibers showed no augmented frequency of bronchogenic cancer.

Selikoff, however, indicated that McDonald's "heavily exposed" group had 5 times as much lung cancer as the "lightly exposed" workers. Furthermore, lung cancer among insulation workers was found to be about 7 times greater than expected compared to the general non-exposed population. A non-exposed group was not reported by McDonald. Although it has been suggested that the risks associated with asbestos exposure may be less in mining than in industrial operations, additional study will be necessary to confirm whether such is true, based upon the comparison made by Selikoff.
Consideration must be given to McDonald's analysis of levels of exposure of 12 fibers/cc. At this level, he assumes that some degree of asbestosis may occur. The mathematical assumption made to arrive at this environmental level leaves a great deal to question, even without attempting to relate this information to the asbestos industry in general. Two primary considerations lack the evidence necessary to make general comparisons of these data with other reported work: (1) the assumption as stated by McDonald that the fiber content of the dust is 10%, and (2) the method used to convert from mppcf to fibers/cc, which is not explained in the paper.

The authors conclude that: "If asbestosis is to be prevented, airborne asbestos dust must be stringently controlled in the working environment. From these data a TLV of 3 mppcf would provide inadequate protection and the proposed 2 mppcf may not be substantiated."
Thus, considerable evidence exists indicating that the prevention or reduction of the occurrence of asbestosis among workers requires that the concentration of asbestos fibers to which they are exposed be reduced.
There is at this time, however, only scant correlation of epidemiological data with environmental exposure data upon which a definitive standard can be established.

Limited environmental data exist on the exposure that may be related to these two cases. Even if, in actual practice, levels were found to be 10 times those found by the investigators, it would substantiate the low levels of exposure recommended in this standard. The time interval for sanding as compared to tile installation must be small, and, if this is true, then, in fact, any level found would be very low if based on a time-weighted average exposure. This increases the weight of consideration that must be given to this possibly exposed occupational group and the relationship of these low exposures to asbestos to the development of disease.

Consideration must also be given to the effect that individual high (peak) samples may have. An index of exposure must be selected which, as nearly as possible, relates to the predominant biologic activity and dose-response of the size spectrum of fibers most commonly encountered. It is assumed for the present that the factor of safety associated with the standard will allow for differences in the size spectrum of respirable fibers that may be encountered.
The British, in evaluating respirable chrysotile fiber exposures in relation to the ongoing epidemiologic studies in the textile industry, and as the basis of a standard for chrysotile, established as an index of exposure fibers greater than 5 micrometers in length (62).
A substantial amount of information on the biologic effects of asbestos has, and is, being obtained using this parameter of exposure measurement. A review of the research in Britain, with concurrence on the rationale involved, made it prudent that we use the same definition of index-of-exposure on which to base criteria for standards. These criteria should be re-evaluated when,
(1) more definitive information on the biologic response to asbestos, including the agent(s) and dose-response data for different lengths of fiber, is available; (2) the spectrum of fiber lengths encountered in industry, by types of asbestos and operations, is ascertained; and (3) more precise epidemiologic data are developed.
To prevent fibrosis and excessive rates of neoplasia, such as mesothelioma, respiratory cancer, and gastrointestinal cancer, a standard for asbestos dust should be based on a concept of dose-response that includes not only the factor of fiber count times years of exposure but also that for total asbestos dust fibers retained over a number of years.
Thus, the effect after several decades of a one-time acute dose of limited duration which overwhelms the clearing mechanism, and is retained in the lungs, may be as harmful as the cumulative effect of lower daily doses of exposure over many years of work.
One influential standard for controlling exposure to asbestos dust is that of the British Hygiene Standards Committee, whose recommendations state, in part:

"1. As long as there is any airborne chrysotile dust in the work environment there may be some small risk to health. Nevertheless, it should be realized that exposure up to certain limits can be tolerated for a lifetime without incurring undue risks.

"2. The committee believes that a proper and reasonable objective would be to reduce the risk of contracting asbestosis to 1 percent of those who have a lifetime's exposure to the dust. By 'asbestosis' this committee means the earliest demonstrable effects on the lungs due to asbestos.
"It is probable that the risk of being affected to the extent of having such early clinical signs will be less than 1 percent for 3 3 an accumulated exposure of 100 fiber years per cm or 2 fibers/cm 3 3 for 50 years, 4 fibers per cm for 25 years or 10 fibers per cm for 10 years. 3 per cm greater than 5 ^um in length as determined with the standard membrane filter method. Any other method can be used provided it is accompanied by appropriate evidence relating its results to those which would have been obtained with the standard membrane filter method. "5. When it is necessary to work intermittently in a 'high dust' area an approved mask should be worn, provided that the concentration 3 is no more than 50 fibers per cm a higher standard of respiratory protection should be provided such as a.pressure-fed breathing apparatus.
"Additional Recommendations "1. It is recommended that where practicable an up-to-date employ ment record card be kept of every person which indicates, every calendar quarter, the category or categories in which he or she has been employed and in which he or she is recommended to work. "If asbestosis is to be prevented, airborne asbestos dust must be stringently controlled in the working environment. From these data a TLV of 3 mppcf would provide inadequate protection and the proposed 2 mppcf may not be substantiated."
Gee and Bouhuys, in December 1971, pointed out that, on the basis of "reasonable probability," decisions must be made to control exposure to asbestos rather than from a precise definition of dose-response relationships, and that "the present threshold limit value for asbestos should be lowered far below some recent proposal." The 1971 ACGIH tentative threshold limit value is 5 fibers/ml > 5 µm in length. Both are higher than the British standard of 2 fibers/cc by at least a factor of 1.5.
The number of studies that have collected both environmental and medical data on a significant number of exposed workers is not sufficient to establish a meaningful standard based upon firm scientific data. The requirement to protect the worker exposed to asbestos is defined in a number of studies outlined in this document. The general recognition of the increasing number of cases of asbestosis, bronchogenic cancer, and mesothelioma indicates the urgent need to develop a standard at the present time.
NIOSH recognizes that these data are fragmentary and, as a result, a safety factor must be included in any standard considered. On this basis, the research that did include both environmental and medical data, or where a standard or limit had been proposed, was given a careful and detailed study to determine its particular contribution to the development of a national standard.

The British standard was developed with the objective of reducing the risk of contracting asbestosis to less than 1% of those who have a lifetime exposure to the dust. For such workers, who may possibly work for 50 years, the long-term average concentration to which they are exposed would need to be less than 2 fibers/cm³. For others, who will be exposed to asbestos dust in air for shorter periods, the long-term average concentration need not be so low, as long as their exposure will amount to less than 100 fiber-years/cm³. It is recognized that the British standard is based upon data not as precise as desired, but it does offer a mechanism for comparison with the ACGIH TLV, and after three years of use no change has been recommended. The British standard was primarily based upon a study of 290 men employed for 10 years or longer between 1933-1966 in an asbestos textile mill. The environmental dust concentrations to which different workers had been exposed were estimated to have varied from 1 to 27 fibers/cm³. The risk-exposure relationships were developed based upon basal rales and X-ray changes. In this study, basal rales were considered the key symptom since all workers exhibiting X-ray changes also exhibited basal rales.
In reviewing the values on the basis of the 100 fiber-years/cm³ proposed by the British Hygiene Standards Committee, the following comparisons can be made between the British Standard and the Emergency U.S. Standard. Each standard is normalized to 100 fiber-years to account for differences in the working lifetime of the average asbestos worker.
The Emergency U.S. Standard is based upon the ACGIH TLV which, in turn, is based upon an exposure time of 30 years to 5 fibers/ml > 5 µm in length (68), and the British, 50 years of exposure at 2 fibers/cm³ > 5 µm in length.
In summary:

                    British        U.S. Emergency (ACGIH)
Concentration       2 fibers/cc    5 fibers/ml
Fiber-yrs/cc        100            150
The validity of this type of comparison has already been questioned in this document, i.e., the "K" factor used to convert ACGIH impinger data to fiber counts (61,64). However, on this basis, the data suggest that the ACGIH value is higher than the British value.
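The normalization underlying this comparison is simply concentration multiplied by exposure duration. In our notation (not the source's):

```latex
\begin{align*}
E &= C \times T \\
E_{\text{British}} &= 2~\text{fibers/cc} \times 50~\text{yr} = 100~\text{fiber-yr/cc} \\
E_{\text{ACGIH}}   &= 5~\text{fibers/ml} \times 30~\text{yr} = 150~\text{fiber-yr/cc}
\end{align*}
```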
In addition to consideration of the British data, the comparison of British and ACGIH data suggests that the 30-year exposure value for a U.S. Standard should be about 3 fibers/cc > 5 µm in length in order to assure that less than 1% of the workers exposed are at risk of developing the earliest clinical signs of asbestosis.
However, additional consideration must be given to the concepts of carcinogenesis as they relate to the determination of a standard for asbestos exposure. Any carcinogen (initiator) must be assumed, until otherwise proven, to have discrete, dose-dependent, irreversible and additive effects to cells that are transmissible to the cell progeny.
Thus, initiation of malignancy following single small exposures to asbestos is possible, but of a low probability. With frequent or chronic exposure and a low dose-rate, the probability of initiation of malignancy is increased. Yet, even under optimal conditions of cell proliferation (in the presence of promotors), these malignant transformations do not lead to instantaneous cancer, but remain insidious for a number of years (latent).
In protracted exposure, part of the total accumulated dose is received long before any resulting disease becomes manifest.

(c) The environmental samples were expressly collected in many cases for control purposes rather than for research and, as a result, meaningful evaluations cannot be made.

(d) There is a lack of data to define with any degree of precision the threshold of development of neoplasms resulting from exposure to asbestos and the relationship of the latent period between exposure and development of neoplasms.

# Summary of the Basis for the Recommended Standard
The standard recommended in this document is similar to the standard adopted by Her Majesty's Factory Inspectorate in 1969 (still in effect as of December 29, 1971), and more stringent than the recent U.S. Emergency Standard. It is felt to be feasible technologically for the control of the exposure to the worker and effective biologically for protection of the worker against asbestos-induced diseases.
Considerations of carcinogenesis indicated the need for a measure of prudence. As a result of this rationale, a factor was added to reduce the time-weighted average exposure to 2.0 fibers/cc > 5 µm. A ceiling value of 10.0 fibers/cc > 5 µm that was not to be exceeded was included to reduce the possibility of the short-term heavy exposures to asbestos that have been reported to cause mesothelioma. In addition, this should reduce the likelihood of diseases (malignant and non-malignant) resulting from exposures in excess of 30 years or with very long latent periods.

"IV. Part V, controlling manufacturing sources, is changed to require an emission standard of 2 fibers per cubic centimeter and no visible emissions. While some testimony indicated the difficulty in measuring compliance with a numerical emission standard, overall the evidence establishes both the need (protection against the great proportion of invisible fiber) and the ease of measurement of such a criterion. A "no visible emission" standard has been added to the numerical standard to simplify enforcement against exceptionally dirty emission sources. A grace period, until June 30, 1972, has been added to permit acquisition of the necessary control equipment to attain the emission standard."
The proposed national emission standard for asbestos was published in the Federal Register. This air quality standard is, as it should be, more restrictive than an occupational standard due to differences in exposure time.
This proposed occupational standard would seem to be compatible with the proposed emission standard and each should complement the other in the control of asbestos exposure.
In the study of asbestosis conducted by Dreessen et al., midget impinger count data were used as an estimate of dust exposure. All of the dust particles seen, both grains and fibers, were counted, since too few fibers were seen to give an accurate measurement. The resulting count concentration was a measure of overall dust levels rather than a specific measurement of the asbestos concentration. This method was satisfactory at that time, since exposures were massive and the control measures installed to reduce overall dust levels also reduced the asbestos dust levels.
As dust levels were reduced, it became necessary to measure the biologically appropriate attribute of the dust cloud. At equal levels of overall dustiness, the concentration of asbestos could vary considerably, from textile manufacture (75-85%) to insulation (5-15%). Furthermore, if the limit were lowered below the 5 mppcf used previously and dust counts taken by the impinger technique, it would be necessary to consider the effect of background dust, which could be as high as 1 mppcf.
A number of methods for measurement of asbestos dust concentrations have been used in the NIOSH epidemiological study of the asbestos products industry. Fibers longer than 5 µm in length are counted in preference to counting all fibers seen in order to minimize observer/microscope resolving-power variability. Furthermore, the British define a "fibre" as a particle "of length between 5 µm and 100 µm and having a length-to-breadth ratio of at least 3:1, observed by transmitted light by means of a microscope at a magnification of approximately 500X" (62).
Although the British have refrained from standardizing on a single method of measurement, recent measurements have been performed by a method essentially identical to the fiber-count method described in detail below, and the British hygiene standards for use with their asbestos regulations are stated in these terms (62).
# Principles of Sampling
A dust sampling procedure must be designed so that samples of actual dust concentrations are collected accurately and consistently.
The results of the analysis of these samples will reflect, realistically, the concentrations of dust at the place and time of sampling.
In order to collect a sample representative of airborne dust which is likely to enter the subject's respiratory system, it is necessary to position a collection apparatus near the nose and mouth of the subject, in his "breathing zone." The concentration of dust in the air to which a worker is exposed will vary, depending upon the nature of the operation, upon the type of work performed by the operator, and upon the position of the operator relative to the source of the dust. The amount of dust inhaled by a worker can vary daily, seasonally, and with the weather. In order to obtain representative samples of workers' exposures, it is necessary to collect samples under varying conditions of weather, on different days, and at different times during a shift.
The percentage of working time spent on different tasks will affect the concentration of dust the worker inhales since the different tasks usually result in exposure to different concentrations. The percentage can be determined from work schedules and by observation of work routines.
The daily average weighted exposure can be determined by using the following formula:
TWA = [(hours x conc. task A) + (hours x conc. task B) + ...] / 8 hours (or actual hours worked)
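As an illustration only, the following is a minimal Python sketch of this calculation; the function name and the task hours and concentrations are hypothetical, not values from this document:

```python
def time_weighted_average(tasks, shift_hours=8.0):
    """Daily time-weighted average (TWA) exposure.

    tasks: list of (hours, concentration) pairs, one per task,
           with concentration in fibers/cc.
    shift_hours: the divisor -- 8 hours, or the actual hours worked.
    """
    return sum(hours * conc for hours, conc in tasks) / shift_hours

# Hypothetical shift: 3 h at 1.5 fibers/cc, 4 h at 0.5 fibers/cc,
# and 1 h at 4.0 fibers/cc, averaged over an 8-hour day.
tasks = [(3.0, 1.5), (4.0, 0.5), (1.0, 4.0)]
print(round(time_weighted_average(tasks), 2))  # 1.31 fibers/cc
```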
The concentration of any air contaminant resulting from an industrial operation also varies with time. Therefore, a longer sampling time will better approximate the actual average.
With the following recommended sampling procedure, it is possible to collect samples at the workers' breathing zones for periods from 4 to 8 hours, thus permitting the evaluation of average exposures for a half or full 8-hour shift, a desirable and recommended procedure. Furthermore, dust exposures of a more normal work pattern result from the use of personal samplers. In evaluating daily exposures, samples should be collected as near as possible to workers' breathing zones.
# Collecting Sample
The method recommended in this report for taking samples and counting fibers is based on a modification of the membrane filter method described by Edwards and Lynch.
The sample should be collected on a 37-millimeter Millipore type AA filter mounted in an open-face filter holder. The holder should be fastened to the worker's lapel and air drawn through the filter by means of a battery-powered personal sampler pump similar to those approved by NIOSH under the provisions of 30 CFR 74. The filters are contained in plastic filter holders and are supported on pads which also aid in controlling the distribution of air through the filter.
To yield a more uniform sample deposit, the filter-holder face-caps should be removed. Sampling flow rates from 1.0 liter per minute (lpm) up to the maximum flow rate of the personal sampler pump (usually not over 2.5 lpm) and sampling times from 15 minutes to eight hours are acceptable provided the following restraints are considered:

The following conclusions may be drawn from this analysis:
(1) The short-term limit should be for a period of at least 15 minutes and preferably 30 minutes.
(2) The 2.0 fiber/cc limit may be evaluated over periods of from 90 to 480 minutes.
As many fields as required to yield at least 100 fibers should be counted. In general, the minimum number of fields should be 20 and the maximum 100.
# Mounting Sample
The mounting medium used in this method is prepared by dissolving 0.05 g of membrane filter per ml of a 1:1 solution of dimethyl phthalate and diethyl oxalate. The index of refraction of the medium thus prepared is nD = 1.47.
To prepare a sample for microscopic examination, a drop of the mounting medium is placed on a freshly cleaned, standard (25 mm x 75 mm) microscope slide. A wedge-shaped piece with an arc length of about 1 cm is excised from the filter with a scalpel and forceps and placed dust-side-up on the drop of mounting solution. A No. 1-1/2 coverslip, carefully cleaned with lens tissue, is placed over the filter wedge. Slight pressure on the coverslip achieves contact between it and the mounting medium. The sample may be examined as soon as the mount is transparent. The optical homogeneity of the resulting mount is nearly perfect, with only a slight background granularity under phase contrast, which disappears within one day. The sample should be counted within two days after mounting.
# Evaluation
The filter samples mounted in the manner previously described are evaluated in terms of the concentration of asbestos fibers greater than 5 µm in length. A microscope equipped with phase-contrast optics and a 4-mm "high-dry" achromatic objective is suitable for this determination. 10X eyepieces, one of which contains a Porton or other suitable reticle at the level of the field-limiting diaphragm, should be used. The left half of the Porton reticle field serves to define the counting area of the field. Twenty fields located at random on the sample are counted, and total asbestos fibers longer than 5 µm are recorded. Any particle having an aspect ratio of three or greater is considered a fiber.
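As a rough illustration, the counting rule can be expressed as a small predicate in Python; the function name is ours, and the 100 µm upper bound is taken from the British "fibre" definition quoted earlier rather than from this paragraph:

```python
def is_countable_fiber(length_um, width_um):
    """Counting rule sketch: length greater than 5 um (up to the
    100 um upper bound of the British definition) and a
    length-to-breadth (aspect) ratio of at least 3:1."""
    if width_um <= 0:
        return False
    return 5.0 < length_um <= 100.0 and length_um / width_um >= 3.0

print(is_countable_fiber(12.0, 1.0))  # True
print(is_countable_fiber(4.0, 0.5))   # False: too short to count
print(is_countable_fiber(9.0, 4.0))   # False: aspect ratio only 2.25
```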
The following formulae are used to determine the number of fibers/ml:

(1) K = filter area (mm²) / field area (mm²)

(2) fibers/ml = (average net count per field x K) / air volume sampled (ml)

For example, assume the following: the area of the filter used was 855 mm²; the counting area of one field under the Porton reticle was 0.005 mm²; the average net count per field over 20 fields was 10 fibers; and the sample was collected at 2 liters per minute for 90 minutes. Then:

K = 855 mm² / 0.005 mm² = 171,000

fibers/ml = (10 fibers x 171,000) / (2,000 ml/min x 90 min) = 9.5 fibers/ml
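The worked example can be checked with a short Python sketch; the function and parameter names are ours, and the numbers are those given above:

```python
def fibers_per_ml(filter_area_mm2, field_area_mm2, avg_net_count,
                  flow_ml_per_min, minutes):
    """Fiber concentration from a membrane filter count.

    K (formula 1) scales one microscope field up to the whole
    filter; dividing the scaled count by the sampled air volume
    (formula 2) gives fibers/ml.
    """
    k = filter_area_mm2 / field_area_mm2
    air_volume_ml = flow_ml_per_min * minutes
    return avg_net_count * k / air_volume_ml

# 855 mm2 filter, 0.005 mm2 field, 10 fibers/field average,
# sampled at 2 lpm (2,000 ml/min) for 90 minutes.
print(fibers_per_ml(855, 0.005, 10, 2000, 90))  # 9.5
```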
# Calibration of Personal Sampler
The accuracy of an analysis can be no greater than the accuracy of the volume of air which is measured. Therefore, the accurate calibration of a sampling device is essential to the correct interpretation of an instru ment's indication. The frequency of calibration is somewhat dependent on the use, care, and handling to which the pump is subjected. Pumps should be calibrated if they have been subjected to misuse or if they have just been repaired or received from a manufacturer. If hard usage is given the instrument, more frequent calibration may be necessary.
Ordinarily, pumps should be calibrated in the laboratory both before they are used in the field and after they have been used to collect a large number of field samples. The accuracy of calibration is dependent on the type of instrument used as a reference. The choice of calibration instrument will depend largely upon where the calibration is to be performed. For laboratory testing, a 1-liter burette or wet-test meter should be used. In the field, a rotameter is the most convenient instrument. The actual set-up will be the same for all of these instruments. The calibration instrument will be connected in sequence to the filter unit, which will be followed by the personal sampler pump.
In this way, the calibration instrument will be at atmospheric pressure.
Connections between units can be made using the same type of tubing used in the personal sampling unit. Each pump must be calibrated separately for each type of filter used if, for example, it has been decided to use a filter with a different pore size. The burette should be set up so that the flow is toward the narrow end of the unit.
Care must be exercised in the assembly procedure to insure adequate seals at the joints and that the length of connecting tubing be kept at a minimum. Calibration should be done under the same conditions of pressure, temperature and density as will be encountered. The rotameter should be used in the field only as a check if the diaphragm or piston pumps are not equipped with pulsation dampeners. The pulsating flow resulting from these types of pumps causes the rotameter to give results which are not as accurate as those obtained with a burette or wet-test meter. Calibration can be accomplished with any of the other standard calibrating instruments, such as a spirometer, Mariotte's bottle, or dry-gas meter. The burette and wet-test meter were selected because of their accuracy, availability, and ease of operation.
# IX. APPENDIX II: NUMERICAL HAZARD RATING SYSTEM
The numerical hazard ratings given to products for each category of hazard shall be in accordance with the following criteria. Figure 2 graphically illustrates the hazard identification system.
Health hazards shall be rated as follows:
The health hazard rating of a material shall be determined by evaluating the potential for exposure and the relative toxicity of the most toxic ingredient of a compound or mixture. For this evaluation, the following relative toxicity criteria for absorbed or exposure dose will be used. One rating category requires protective equipment, such as self-contained breathing apparatus or a hose mask with blower, and impervious clothing; this rating includes:

Corrosive Material. Acids, alkali, or other material that will cause severe damage to living tissue or to other material it contacts.

Water Reactivity Hazard (Use No Water). Any material that may be a hazard because of its specific reactivity with water.

Dimensions of the symbol and warning combination shall be optional but of such size and location as to be readily visible and legible.
The symbol and warning shall be applied by stenciling, painting, printing, or lithographing, with fade-resistant materials.
# X. APPENDIX III: MATERIAL SAFETY DATA SHEET
The following items of information which are applicable to a specific product or material containing 5% or more of asbestos shall be provided in the appropriate section of the Material Safety Data Sheet or approved form. If a specific item of information is inapplicable (e.g., flash point), the initials "n.a." (not applicable) should be inserted.

1. Personal communication, Dr. Irving Selikoff, January, 1971.
# TABLE XXVIII
Lapsed period from onset of exposure in 344 deaths among employees of an asbestos insulation factory, employed at some time in 1941-1945 and followed to 1970.
contaminated absorbants, etc.
(ix) Section VIII. Special Protection Information.
(A) Requirements for personal protective equipment, such as respirators, eye protection and protective clothing, and ventilation such as local exhaust (at site of product use or application), general, or other special types.
(x) Section IX. Special Precautions.
(A) Any other general precautionary information such as personal protective equipment for exposure to the thermal decomposition products listed in Section VI, and to particulates formed by abrading a dry coating, such as by a power sanding disc.
(xi) The signature of the responsible person filling out the data sheet, his address, and the date on which it is filled out.
(xii) The NFPA 704M numerical hazard ratings as defined in section (c) (5) following. The entry shall be made immediately to the right of the heading "Material Safety Data Sheet" at the top of the page and within a diamond symbol preprinted on the forms.
- Not applicable
"id": "0811cd6444c1bb5b9c4c4863b16c4cae8daae464",
"source": "cdc",
"title": "None",
"url": "None"
} |
This report summarizes previously published recommendations of CDC's Advisory Committee on Immunization Practices (ACIP) regarding prevention and control of Haemophilus influenzae type b (Hib) disease in the United States. As a comprehensive summary of previously published recommendations, this report does not contain any new recommendations; it is intended for use by clinicians, public health officials, vaccination providers, and immunization program personnel as a resource. ACIP recommends routine vaccination with a licensed conjugate Hib vaccine for infants aged 2 through 6 months (2 or 3 doses, depending on vaccine product) with a booster dose at age 12 through 15 months. ACIP also recommends vaccination for certain persons at increased risk for Hib disease (i.e., persons who have early component complement deficiencies, immunoglobulin deficiency, anatomic or functional asplenia, or HIV infection; recipients of hematopoietic stem cell transplant; and recipients of chemotherapy or radiation therapy for malignant neoplasms). This report summarizes current information on Hib epidemiology in the United States and describes Hib vaccines licensed for use in the United States. Guidelines for antimicrobial chemoprophylaxis of contacts of persons with Hib disease also are provided.

# Introduction
Before 1985, Haemophilus influenzae type b (Hib) was the leading cause of bacterial meningitis and a common cause of other invasive diseases (e.g., epiglottitis, pneumonia, septic arthritis, cellulitis, purulent pericarditis, and bacteremia) among U.S. children aged <5 years (1). Meningitis occurred in approximately two thirds of children with invasive Hib disease; 15%-30% of survivors had hearing impairment or severe permanent neurologic sequelae. Approximately 4% of all cases were fatal (2). The first polysaccharide Hib vaccine was introduced in the United States in 1985, followed by conjugate Hib vaccines in 1987 and 1989. During 1989-2000, the annual incidence of invasive Hib disease in children aged <5 years decreased by 99%, to less than one case per 100,000 children (3-7). During 2000-2012, the average annual incidence rate of invasive Hib disease in children aged <5 years in the United States remained below the Healthy People 2020 goal of 0.27/100,000 (8) (data available at /abcs/reports-findings/surv-reports.html) (Figures 1 and 2). Studies have demonstrated that vaccination with Hib conjugate vaccine leads to decreases in oropharyngeal colonization among both vaccinated and unvaccinated children (9-11); the prevalence of Hib carriage has decreased among preschool-aged children from 2%-7% in the prevaccine era to <1% in the vaccine era (9,12).
Several Hib-containing vaccines have been licensed since the initial Advisory Committee on Immunization Practices (ACIP) recommendations on prevention and control of Hib disease published in 1993 (13); subsequent publications have provided additional data and updated recommendations for these vaccines (14-17). This report summarizes previously published ACIP recommendations on prevention and control of Hib disease in immunocompetent and high-risk populations (14-18); it does not contain new recommendations and is intended as a resource for clinicians, public health officials, vaccination providers, and immunization program personnel. In addition, this report summarizes current information on Hib epidemiology in the United States and describes Hib vaccines licensed for use in the United States. Guidelines for antimicrobial chemoprophylaxis of contacts of persons with Hib disease also are provided.
# Methods
ACIP's Meningococcal and Haemophilus influenzae type b Work Group* comprises a diverse group of health-care providers and public health officials. The Work Group includes professionals from academic medicine (pediatrics, family medicine, internal medicine, and infectious disease specialists), federal and state public health professionals, and representatives of professional medical organizations.
Published Hib vaccine recommendations were the primary sources of data used by the Work Group in summarizing recommendations for the prevention and control of Hib disease, including the evidence-based 2013 Infectious Diseases Society of America clinical practice guideline for vaccination of the immunocompromised host (17-23). Surveillance data came from the Active Bacterial Core surveillance (ABCs) system and the National Notifiable Diseases Surveillance System (NNDSS) (24).
Data on the immunogenicity and safety of current licensed and available Hib vaccines were summarized on the basis of findings from a literature search of PubMed and Web of Science databases that was completed on April 2, 2012. A nonsystematic review was conducted for studies on safety, effectiveness, and immunogenicity of the current Hib vaccines published from the time of vaccine licensure through March 2012. Because MenHibRix was licensed in June 2012, studies published before licensure also were reviewed. The literature search included clinical trials, randomized controlled trials, controlled clinical trials, evaluation studies, and comparative studies conducted worldwide and published in English. The Vaccine Adverse Events Reporting System (VAERS) (available at http://www.vaers.hhs.gov) also was searched for postlicensure safety data.

* A list of the members of the Work Group appears on page 14.
# Background
H. influenzae is a species of bacteria that has encapsulated (typeable) or unencapsulated (nontypeable) strains. Encapsulated strains express one of six antigenically distinct capsular polysaccharides (types a, b, c, d, e, or f). Encapsulated H. influenzae nontype b strains, particularly type a, can cause invasive disease similar to Hib disease (25,26). Nontypeable strains also can cause invasive disease but more commonly cause mucosal infections such as otitis media, conjunctivitis, and sinusitis. Hib vaccines only protect against H. influenzae type b strains; no vaccines against nontype b or nontypeable strains currently are available. H. influenzae colonizes the upper respiratory tract of humans and is transmitted person-to-person by inhalation of respiratory droplets or by direct contact with respiratory tract secretions.
The majority of Hib disease in the United States occurs among unimmunized and underimmunized infants and children (those who have an incomplete primary series or are lacking a booster dose) and among infants too young to have completed the primary immunization series (27) (Figure 3). Although rare, Hib disease does occur after full vaccination with the primary series and booster dose, including among case-patients aged <5 years. Additional information about H. influenzae disease is available at http://www.cdc.gov/hi-disease.
Persons with certain immunocompromising conditions are considered at increased risk for invasive Hib disease; these conditions might include:

- functional or anatomic asplenia,
- HIV infection,
- immunoglobulin deficiency, including immunoglobulin G2 subclass deficiency,
- early component complement deficiency,
- receipt of a hematopoietic stem cell transplant, or
- receipt of chemotherapy or radiation therapy for malignant neoplasms.

Children who develop Hib disease despite appropriate vaccination should be evaluated for an immunological deficiency that predisposes them to Hib disease (28).
Historically, American Indian/Alaska Native (AI/AN) populations have had higher rates of Hib disease and colonization than the general U.S. population, with a peak in disease at a younger age (4-6 months) than among other U.S. infant populations (6-7 months) (29)(30)(31). Before introduction of vaccine in 1985, rates among AN children were five times higher than rates among non-AN children in Alaska (4). Although rates of Hib disease among AI/AN children have decreased in the postvaccine era, they remain higher than among non-AI/AN children. During 1998-2009, the average annual incidence of Hib disease in children aged <5 years in the United States was 8-10 times higher among AI/AN children (1.3/100,000) than it was among white (0.16/100,000) and black (0.12/100,000) children, respectively (27).
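For readers who want to verify the quoted rate ratios, the short Python check below reproduces the comparison directly from the incidence figures cited above; the group labels are ad hoc:

```python
# Average annual invasive Hib incidence, 1998-2009, per 100,000 children
# aged <5 years, as cited in the text (reference 27).
rates = {"AI/AN": 1.3, "white": 0.16, "black": 0.12}

for group in ("white", "black"):
    print(f"AI/AN rate is {rates['AI/AN'] / rates[group]:.1f}x the {group} rate")
# AI/AN rate is 8.1x the white rate
# AI/AN rate is 10.8x the black rate -> the "8-10 times higher" in the text
```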
# Development of Hib Vaccines
The first Hib vaccine licensed for use in the United States in 1985 was a monovalent vaccine consisting of purified polyribosylribitol phosphate (PRP) capsular material from type b strains. Although the vaccine was highly effective in trials in Finland among children aged ≥18 months, postmarketing effectiveness studies in the United States demonstrated variable effectiveness (−69% to 88%) (32). PRP vaccines were ineffective in children aged <18 months because of the T lymphocyte-independent nature of the immune response to PRP polysaccharide (13). Conjugation of the PRP polysaccharide with protein carriers that contain T-lymphocyte epitopes confers T-lymphocyte-dependent characteristics to the vaccine. This conjugation enhances the immunologic response to the PRP antigen, particularly in young infants, and results in immunologic memory (e.g., anamnestic response) (33). Studies have suggested that long-term protection from invasive Hib disease is correlated with the presence of anti-PRP levels ≥0.15 µg/ml in unvaccinated children and anti-PRP levels ≥1.0 µg/ml in vaccinated children (34,35).
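The serologic correlates above amount to a simple threshold rule; the Python sketch below encodes it for illustration only (the function name and boolean framing are ours, and the thresholds are correlates of protection, not a clinical decision rule):

```python
def meets_long_term_correlate(anti_prp_ug_per_ml: float, vaccinated: bool) -> bool:
    """Anti-PRP correlate of long-term protection (references 34, 35):
    >=0.15 ug/mL in unvaccinated children, >=1.0 ug/mL in vaccinated children."""
    threshold = 1.0 if vaccinated else 0.15
    return anti_prp_ug_per_ml >= threshold

assert meets_long_term_correlate(0.2, vaccinated=False)      # meets 0.15 ug/mL
assert not meets_long_term_correlate(0.2, vaccinated=True)   # below 1.0 ug/mL
```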
By 1989, three monovalent Hib conjugate vaccines were licensed for use among children aged ≥15 months (29). In late 1990, two of these conjugate vaccines were licensed for use among infants (36,37). Since 1990, additional Hib vaccines from numerous manufacturers have been licensed and are currently used in the United States, including monovalent Hib conjugate vaccines and combination vaccines that contain a Hib conjugate vaccine. No polysaccharide Hib vaccines are used currently in the United States.
# Current Licensed and Available Hib Monovalent Conjugate Vaccines
As of January 1, 2014, three monovalent PRP polysaccharide-protein conjugate vaccines had been licensed by the Food and Drug Administration (FDA) and were available in the United States: PRP-OMP (PedvaxHIB, Merck and Co., Inc., Whitehouse Station, New Jersey), PRP-T (ActHIB, Sanofi Pasteur, Inc., Swiftwater, Pennsylvania), and PRP-T (Hiberix, GlaxoSmithKline, Research Triangle Park, North Carolina) (38) (Table 1).
In December 1990, PRP-OMP (PedvaxHIB) was licensed by FDA as a 2-dose primary series for infants at ages 2 and 4 months, with a booster dose (dose 3) at age 12 months (39). PRP-OMP contains purified PRP conjugated with an outer membrane protein complex (OMPC) of the B11 strain of Neisseria meningitidis serogroup B. Further information is available in the package insert at http://www.fda.gov/downloads/BiologicsBloodVaccines/Vaccines/ApprovedProducts/UCM253652.pdf.
In March 1993, PRP-T (ActHIB) was licensed by FDA as a 3-dose primary series for infants at ages 2, 4, and 6 months, with a booster dose (dose 4) at age 15 months (40). This vaccine contains purified PRP conjugated with tetanus toxoid. Further information is available in the package insert at http://www.fda.gov/downloads/BiologicsBloodVaccines/Vaccines/ApprovedProducts/UCM109841.pdf.
In August 2009, PRP-T (Hiberix) was licensed by FDA for use as the booster dose (which will be dose 3 or 4, depending on vaccine type used for primary series) of the Hib vaccine series for children aged 15 months through 4 years who have received a Hib primary series (16). This vaccine contains purified PRP conjugated with tetanus toxoid. Further information is available in the package insert at http://www.fda.gov/downloads/BiologicsBloodVaccines/Vaccines/ApprovedProducts/UCM179530.pdf.
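The three product descriptions above can be condensed into a small lookup table; the Python encoding below is a convenience sketch only (the field names are ours; ages are in months, per the schedules summarized in the text):

```python
# Monovalent conjugate Hib vaccines licensed and available as of January 1, 2014.
MONOVALENT_HIB_VACCINES = {
    "PedvaxHIB (PRP-OMP)": {"primary_series_months": (2, 4), "booster_months": 12},
    "ActHIB (PRP-T)": {"primary_series_months": (2, 4, 6), "booster_months": 15},
    # Hiberix (PRP-T) is licensed only for the booster dose, at age
    # 15 months through 4 years in children who received a primary series.
    "Hiberix (PRP-T)": {"primary_series_months": None, "booster_months": 15},
}
```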
# Current Licensed and Available Hib Combination Conjugate Vaccines
As of January 1, 2014, three combination vaccines that contain an H. influenzae type b conjugate vaccine had been licensed by FDA and were available in the United States (Table 1).

[Figure 3 notes. Sources: Active Bacterial Core surveillance system and National Notifiable Diseases Surveillance System. N = 265; an additional 57 children aged <5 years with Hib had unknown vaccine status and were excluded. Among those with age-appropriate vaccine status, 41% were too young to complete the primary series, 16% completed the primary series, and 43% completed the full series.]
In October 1996, PRP-OMP/HepB (Comvax) was licensed by FDA for vaccination against invasive Hib disease and hepatitis B infection in infants at ages 2, 4, and 12 through 15 months (14). This vaccine includes the antigenic components used in PedvaxHIB (PRP-OMP) and Recombivax HB (hepatitis B surface antigen). Further information is available in the package insert at http://www.fda.gov/downloads/BiologicsBloodVaccines/Vaccines/ApprovedProducts/UCM109869.pdf.
In June 2008, DTaP/IPV/PRP-T (Pentacel) was licensed by FDA for vaccination against invasive Hib disease, diphtheria, tetanus, pertussis, and poliomyelitis in infants at ages 2, 4, 6, and 15 through 18 months (15). It is not indicated for the DTaP/IPV booster dose at age 4 through 6 years. The vaccine includes the antigenic components used in ActHIB (PRP-T) and Poliovax. Further information is available in the package insert at http://www.fda.gov/downloads/BiologicsBloodVaccines/Vaccines/ApprovedProducts/UCM109810.pdf.
In June 2012, MenCY/PRP-T (MenHibRix) was licensed by FDA for vaccination against invasive Hib disease and N. meningitidis serogroups C and Y disease in infants at ages 2, 4, 6, and 12 through 15 months (17). Infants at increased risk for meningococcal disease† should be vaccinated with a 4-dose series of MenCY/PRP-T. Routine meningococcal vaccination is recommended only for infants who are at increased risk for meningococcal disease. However, MenCY/PRP-T may be used in any infant for routine vaccination against Hib. Further recommendations for use of the MenCY component of MenCY/PRP-T have been published previously (17). Further vaccine information is available in the package insert at http://www.fda.gov/downloads/BiologicsBloodVaccines/Vaccines/ApprovedProducts/UCM308577.pdf.
# Immunogenicity of Current Licensed and Available Hib Vaccines
Protective antibody levels are detected for both PedvaxHIB and ActHIB after a primary series (41)(42)(43). However, the vaccines differ in the timing of antibody response. PedvaxHIB produces a substantial antibody response after the first dose, with an additional boost in geometric mean antibody concentration after the second or third dose (41,42,(44)(45)(46). Therefore, PedvaxHIB is licensed as a 2-dose primary series. PedvaxHIB effectiveness was 93%-100% in Navajo infants vaccinated with a 2-dose series (13,41,47).
Geometric mean antibody concentrations remain at or below 1.0 µg/ml after the first and second dose of ActHIB, but a protective antibody response is seen after the third dose (41,42,45,48,49). Effectiveness studies for ActHIB were terminated early in the United States with licensure of the first Hib conjugate vaccine; no cases of invasive Hib disease were reported among vaccinees at the time of study termination (13,47). A prospective controlled trial of PRP-T among 56,000 subjects in the United Kingdom found an effectiveness of 95% (95% confidence interval = 74%-100%) (47). Antibody levels decline after completion of the primary series with PRP-T and PRP-OMP vaccines, and a booster dose at age 12-15 months is necessary to maintain protective antibody levels. Booster doses of PedvaxHIB, ActHIB, and Hiberix at age 12-15 months provide levels of antibody that are protective against invasive Hib disease (16,44,46,48,50,51).

† Infants with persistent complement component deficiencies, those with functional or anatomic asplenia (including sickle cell disease), healthy infants in communities with a meningococcal disease outbreak for whom vaccination is recommended, and infants traveling to or residing in areas with hyperendemic or epidemic meningococcal disease.
Protective antibody responses comparable to those detected after receipt of separately administered PedvaxHIB and Recombivax HB vaccines are seen after the second primary dose and booster dose of Comvax vaccine (14,52). Pentacel and MenHibRix induce protective antibody responses that are noninferior to separately administered PRP-T vaccines after the third primary dose and booster dose (53)(54)(55)(56)(57)(58)(59).
No clinically significant immune interference has been observed with any of the available monovalent or combination Hib vaccines and concomitant administration of other routine childhood vaccines (51,52,60-67; ACIP, unpublished data, 2009).
# Safety of Current Licensed and Available Hib Vaccines
In prelicensure trials, adverse reactions to PedvaxHIB, ActHIB, and Hiberix were uncommon, usually mild, and generally resolved within 12-24 hours (16,41,43,46,49,50). Rates of adverse reactions to Comvax, Pentacel, and MenHibRix were similar to those seen with separately administered vaccines (14,(52)(53)(54)68).
Postmarketing surveillance for adverse events following receipt of Hib vaccines has been conducted primarily by two systems in the United States: VAERS and the Vaccine Safety Datalink (VSD). VAERS is a national passive surveillance system operated jointly by CDC and FDA that receives reports of adverse events following vaccination from health-care personnel, manufacturers, vaccine recipients, and others (69). VAERS can generate, but not test, vaccine safety hypotheses and is subject to several limitations, including reporting biases and inconsistent data quality (69). VSD is a collaboration between CDC and nine integrated health-care organizations that conducts population-based vaccination safety studies to assess hypotheses that arise from review of medical literature, reports to VAERS, changes in immunization schedules, or introduction of new vaccines (70).
# Safety Data Reported to VAERS
During January 1, 1990-May 31, 2013, VAERS received 29,047 reports involving receipt of Hib vaccines (PedvaxHIB, ActHIB, Hiberix, Comvax, and Pentacel) in the United States; 26,375 (91%) reports involved children aged <2 years.
Hib vaccines were administered concurrently with one or more other vaccines in 95% of case reports. The median time from vaccination to onset of an adverse event was 1 day. The most frequently reported adverse events were fever (31%), crying (11%), injection site erythema (11%), irritability (10%), and rash (9%). Among all Hib vaccine reports, approximately 17% were coded as serious as defined in the Code of Federal Regulations (71) (i.e., the report contained information that the event led to death, life-threatening illness, hospitalization, prolongation of hospitalization, or permanent disability). Among the 5,062 reports coded as serious, the most frequent adverse events were fever (37%), vomiting (21%), convulsion (20%), irritability (17%), and intussusception (11%). In 97% of the intussusception reports, rotavirus vaccine was administered concomitantly and might have prompted reporting of this adverse event.
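The proportions quoted above and in the preceding paragraph follow directly from the report counts; the arithmetic check below uses only figures stated in the text:

```python
total_reports = 29_047   # VAERS Hib reports, January 1, 1990-May 31, 2013
reports_under_2y = 26_375
serious_reports = 5_062

print(f"{reports_under_2y / total_reports:.0%} involved children aged <2 years")  # 91%
print(f"{serious_reports / total_reports:.0%} were coded as serious")             # 17%
```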
VAERS received reports of 878 deaths following Hib-containing vaccines that occurred during January 1, 1990-May 31, 2013. Autopsy reports or other medical records were available for 620 (71%) of these deaths, among which the most frequent cause of death was sudden infant death syndrome (52%). Other causes of death included respiratory (9%), cardiovascular (5%), infectious (5%), neurologic (3%), and gastrointestinal (2%) conditions. In 14% of reports, the cause was undetermined, and in 11% of reports, various other causes were reported (e.g., asphyxia and blunt force trauma).
The reporting frequencies for Hib-containing vaccines are similar to those observed with other recommended childhood vaccines. No unusual or unexpected safety patterns were observed in VAERS data for any Hib vaccine.
# Population-Based Safety Findings
No postlicensure safety studies of monovalent Hib vaccines were identified by the literature review. However, the VSD conducted an observational study of the combination Hib vaccine, DTaP-IPV-Hib (Pentacel), for the period September 2008-January 2011 (55). Compared with children who received DTaP-containing control vaccine (i.e., without Hib), children aged 1-2 years who received DTaP-IPV-Hib vaccine had an elevated risk for fever (RR = 1.83; 95% CI = 1.34-2.50). DTaP-IPV-Hib vaccine was not associated with any other medically attended adverse health event.
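For context, a relative risk and Wald 95% confidence interval such as the fever estimate above are typically computed from 2x2 counts on the log scale. The sketch below illustrates the computation; the counts are hypothetical placeholders chosen only to reproduce the published point estimate and are not the study's actual data:

```python
import math

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """RR of exposed (a/n1) vs. control (b/n2) with a Wald CI on log(RR)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Hypothetical counts (NOT the VSD data): 110/10,000 febrile after DTaP-IPV-Hib
# vs. 60/10,000 after the control vaccine.
print(relative_risk_ci(110, 10_000, 60, 10_000))  # ~ (1.83, 1.34, 2.50)
```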
An independent postmarketing safety evaluation of Hib-HepB (Comvax) was conducted by a managed care organization in Seattle, Washington, for the period July 1997-December 2000 (72). Using ICD-9 codes, the retrospective cohort study evaluated adverse events reported 1-30 days following administration of Hib-HepB, compared with rates of adverse events among two control groups (a historical control group and a self-comparison group). A total of 27,802 vaccine doses were administered during the study period, with 111,129 diagnoses recorded within 0-30 days following administration of Comvax in any health-care setting. There were 127 separate adverse event codes with significantly elevated relative risks and 66 codes with significantly decreased relative risks (p<0.05). On medical record review, there was no consistent pattern to respiratory or gastrointestinal illnesses; fever findings appeared to be explained by changes in data collection or by concomitant vaccination with measles, mumps, and rubella virus vaccine. Two deaths occurred within the study period, both of which were considered unrelated to vaccination. No consistent association was identified between serious adverse events and vaccination with Hib-HepB, and the vaccine had a favorable safety profile.

# Recommendations for Hib Vaccine Use

# Recommendations for Routine Vaccination

All infants should receive a primary series of a licensed conjugate Hib vaccine: 2 doses of a PRP-OMP vaccine at ages 2 and 4 months, or 3 doses of a PRP-T-containing vaccine at ages 2, 4, and 6 months (Table 1). The first dose can be administered as early as age 6 weeks. A booster dose (which will be dose 3 or 4 depending on vaccine type used in primary series) of any licensed conjugate Hib vaccine (monovalent vaccine or Hib vaccine in combination with HepB or DTaP/IPV or MenCY) is recommended at age 12 through 15 months and at least 8 weeks after the most recent Hib vaccination (Table 1).

Hib vaccine has been found to be immunogenic in patients with immunocompromising conditions, although immunogenicity varies with the degree of immunocompetence (13,(73)(74)(75)(76)(77)(78)(79)(80)(81)(82)(83)(84). Patients at increased risk for invasive Hib disease who are vaccinated (have received a Hib primary series and a booster dose at age ≥12 months) do not need further routine immunization, except in certain situations (Table 2).
# Guidance for Hib Vaccine Use
# Guidance for Routine Vaccination
Doses for either primary series (2-dose or 3-dose) should be administered 8 weeks apart; however, if necessary, an interval of 4 weeks between doses is acceptable. If a PRP-OMP vaccine (PedvaxHIB or Comvax) is administered for both doses in the primary series, a third primary dose is not indicated. If a PRP-OMP vaccine (PedvaxHIB or Comvax) is not administered for both doses in the primary series or there is uncertainty about which products were administered previously, a third primary series dose of a Hib conjugate vaccine is needed to complete the primary series. Any monovalent or combination Hib conjugate vaccine is acceptable for the booster dose (dose 3 or 4 depending on vaccine type used in primary series), regardless of the product used for the primary series. Hiberix should be used only for the booster dose (dose 3 or 4, depending on the vaccine type used for primary series) in children aged 12 months through 4 years who have received at least 1 dose of Hib vaccine previously.
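The third-dose rule above is effectively a small algorithm. A minimal Python sketch follows (the function and constant names are ours, and unknown products are treated as "uncertainty," per the text):

```python
PRP_OMP_PRODUCTS = {"PedvaxHIB", "Comvax"}

def third_primary_dose_needed(dose1_product, dose2_product) -> bool:
    # A third primary dose is not indicated only when a PRP-OMP vaccine was
    # verifiably administered for both doses; an unknown product (None)
    # counts as uncertainty and therefore indicates a third dose.
    return not (dose1_product in PRP_OMP_PRODUCTS
                and dose2_product in PRP_OMP_PRODUCTS)

assert not third_primary_dose_needed("PedvaxHIB", "Comvax")  # both PRP-OMP
assert third_primary_dose_needed("ActHIB", "PedvaxHIB")      # mixed series
assert third_primary_dose_needed("PedvaxHIB", None)          # uncertain history
```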
# Guidance for Catch-up Schedules
If the first vaccination is delayed by >1 month, the recommended catch-up schedule (available at http://www.cdc.gov/vaccines/schedules/hcp/child-adolescent.html) should be followed. Previously unvaccinated children aged ≥60 months who are not considered high-risk generally are immune to Hib disease and do not require catch-up vaccination. The recommended catch-up schedule should be followed for children aged <12 months who are at increased risk for Hib disease and have delayed or no Hib vaccination. Catch-up guidance for children aged 12 through 59 months who are at increased risk for Hib disease and who have delayed or no Hib vaccination is described below (see "High-Risk Groups"; Table 2).

# Guidance for Vaccinating Special Populations

# American Indians/Alaska Natives

Hib meningitis incidence peaks at a younger age (4-6 months) among AI/AN infants than among other U.S. infant populations (6-7 months) (29)(30)(31). Vaccination with a 2-dose primary series of a Hib vaccine that contains PRP-OMP (PedvaxHIB or Comvax) is preferred for AI/AN infants to provide early protection because these vaccines produce a protective antibody response after the first dose (41)(42)(43)(51,52,85). If the first vaccination dose is delayed by >1 month, the recommended catch-up schedule (available at http://www.cdc.gov/vaccines/schedules/hcp/child-adolescent.html) should be followed. A booster dose (dose 3) of Hib vaccine is recommended at age 12 through 15 months; for the booster dose, there is no preferred vaccine formulation (i.e., any licensed Hib conjugate vaccine is acceptable). The importance of this early protection was demonstrated in Alaska (30,86). During July 1991-January 1996, a PRP-OMP vaccine was used statewide in Alaska, and a >90% decrease in Hib disease rates occurred among AN and nonnative children (30,86). During 1996-1997, after the statewide vaccine was changed to a combination vaccine that included a non-OMP Hib component, Hib incidence increased significantly (from 19.8 to 91.1 cases/100,000 children aged <5 years, p<0.001) among AN children while remaining unchanged among nonnative children (30,86). Disease reappearance seemed to be attributable to the use of a Hib vaccine that did not achieve early protective antibody concentrations in children who had ongoing exposure to Hib via oropharyngeal colonization among close contacts. After returning to the use of PRP-OMP-containing vaccines in Alaska, the incidence of Hib disease in AN children decreased to fewer than six cases per 100,000 children aged <5 years (30,86).

# Children Aged <24 Months with Invasive Hib Disease

Children aged <24 months who develop invasive Hib disease can remain at risk for developing a second episode because natural infection in this age group does not reliably result in development of protective antibody levels. These children should be considered unvaccinated regardless of previous Hib vaccination and should receive Hib vaccine doses according to the age-appropriate schedule for unimmunized children (28)(87)(88)(89). Children aged <24 months who develop invasive Hib disease should receive primary vaccination or re-vaccination with a second primary series beginning 4 weeks after onset of disease.
# Preterm Infants
Medically stable preterm infants§ should be vaccinated beginning at age 2 months according to the schedule recommended for other infants, on the basis of chronological age.
# High-Risk Groups
Persons considered at increased risk for invasive Hib disease include those with functional or anatomic asplenia, HIV infection, immunoglobulin deficiency (including immunoglobulin G2 subclass deficiency), or early component complement deficiency, recipients of a hematopoietic stem cell transplant, and those receiving chemotherapy or radiation therapy for malignant neoplasms. A single dose of any licensed Hib conjugate vaccine should be administered to unimmunized older children, adolescents, and adults who are asplenic or who are scheduled for an elective splenectomy. Some experts suggest administering a dose prior to elective splenectomy regardless of prior vaccination history (22). On the basis of limited data on the timing of Hib vaccination before splenectomy, experts suggest vaccination at least 14 days before the procedure (18,19,23) (Table 2).
Unimmunized children aged ≥60 months who have HIV infection should receive 1 dose of Hib vaccine. Whether HIV-infected children who have received a full 3- or 4-dose vaccine series (depending on the vaccine type used for the primary series) will benefit from additional Hib doses is unknown. Because the incidence of Hib infections among HIV-infected adults is low, Hib vaccine is not recommended for adults with HIV infection (21,23) (Table 2).
Children aged 12-59 months who are at increased risk for Hib disease (persons with asplenia, HIV infection, immunoglobulin deficiency, early component complement deficiency, or chemotherapy or radiation therapy recipients) and who received no doses or only 1 dose of Hib conjugate vaccine before age 12 months should receive 2 additional doses of vaccine 8 weeks apart; children who received 2 or more doses of Hib conjugate vaccine before age 12 months should receive 1 additional dose, at least 8 weeks after the last dose (Table 2).
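Stated as pseudocode, the catch-up rule for these high-risk children depends only on the number of doses received before age 12 months; the Python sketch below is illustrative, not a clinical tool (the function name is ours):

```python
def high_risk_catch_up(doses_before_12_months: int) -> str:
    """Catch-up doses for children aged 12-59 months at increased risk
    for Hib disease (Table 2)."""
    if doses_before_12_months <= 1:
        return "2 additional doses, 8 weeks apart"
    return "1 additional dose, at least 8 weeks after the last dose"

assert high_risk_catch_up(0).startswith("2")
assert high_risk_catch_up(2).startswith("1")
```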
Hib vaccination during chemotherapy or radiation therapy should be avoided because of possible suboptimal antibody response. Patients vaccinated within 14 days of starting immunosuppressive therapy or while receiving immunosuppressive therapy should be considered unimmunized, and doses should be repeated beginning at least 3 months following completion of chemotherapy. Patients who were vaccinated more than 14 days before chemotherapy do not require revaccination, with the exception of recipients of a hematopoietic stem cell transplant, who should be revaccinated with a 3-dose regimen 6-12 months after successful transplant, regardless of vaccination history (80); at least 4 weeks should separate doses (Table 2).
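The timing rules above can be expressed as a simple date comparison; in the sketch below, the function name and date handling are ours, and the logic only illustrates the 14-day window described in the text:

```python
from datetime import date, timedelta

def doses_must_be_repeated(vaccination_date: date, therapy_start: date,
                           hsct_recipient: bool = False) -> bool:
    # HSCT recipients are revaccinated regardless of vaccination history.
    if hsct_recipient:
        return True
    # Doses given within 14 days of starting immunosuppressive therapy, or
    # during therapy, are considered invalid and are repeated beginning at
    # least 3 months after completion of chemotherapy.
    return vaccination_date >= therapy_start - timedelta(days=14)

assert doses_must_be_repeated(date(2014, 3, 1), date(2014, 3, 10))      # within 14 days
assert not doses_must_be_repeated(date(2014, 1, 1), date(2014, 3, 10))  # >14 days before
```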
# Guidance for Vaccine Administration
Hib vaccines are administered intramuscularly in individual doses of 0.5 mL. Adverse events occurring after administration of any vaccine should be reported to VAERS. Reports can be submitted to VAERS online, by facsimile, or by mail. More information about VAERS is available by calling 1-800-822-7967 (toll-free) or online at http://www.vaers.hhs.gov.
# Interchangeability of Vaccine Product
Studies have demonstrated that any combination of licensed monovalent Hib conjugate vaccines used for the primary and booster doses provides antibody levels comparable to or higher than those achieved with the same monovalent product throughout (33,(44)(45)(46)51,(91)(92)(93). Therefore, licensed monovalent Hib conjugate vaccines are considered interchangeable for the primary as well as the booster doses (dose 3 or 4, depending on vaccine type used for primary series) (18,94). Data on the interchangeability of combination vaccines with other combination vaccines or with monovalent vaccines are limited (52,63). Whenever feasible, the same combination vaccine should be used for subsequent doses; however, if a different brand is administered, the dose should be considered valid and need not be repeated.
# Precautions and Contraindications
Adverse reactions to Hib-containing monovalent vaccines are uncommon, usually mild, and generally resolve within 12-24 hours (41)(42)(43)49). Rates of adverse reactions to Hib combination vaccines are similar to those observed with separately administered vaccines (14,33,(52)(53)(54). More complete information about adverse reactions to a specific vaccine is available in the package insert for each vaccine and from CDC at http://www.cdc.gov/vaccines/vac-gen/side-effects.htm.
Vaccination with a Hib-containing vaccine is contraindicated in infants aged <6 weeks. Vaccination with a Hib-containing vaccine is contraindicated among persons known to have a severe allergic reaction to any component of the vaccine. The tip caps of the Hiberix prefilled syringes might contain natural rubber latex, and the vial stoppers for Comvax, ActHIB, and PedvaxHIB contain natural rubber latex, which might cause allergic reactions in persons who are latex-sensitive. Therefore, vaccination with these vaccines is contraindicated for persons known to have a severe allergic reaction to dry natural rubber latex (48,(50)(51)(52). The vial stoppers for Pentacel and MenHibRix do not contain latex (63,95). Vaccination with Comvax is contraindicated in patients with a hypersensitivity to yeast (52).

§ Infants who do not require ongoing management for serious infection, metabolic disease, or acute renal, cardiovascular, neurologic, or respiratory tract illness and who demonstrate a clinical course of sustained recovery and pattern of steady growth (90).
As with all pertussis-containing vaccines, benefits and risks should be considered before administering Pentacel to persons with a history of fever ≥40.5°C, hypotonic-hyporesponsive episode, or persistent inconsolable crying lasting ≥3 hours within 48 hours after receipt of a pertussis-containing vaccine, or seizures within 3 days after receiving a pertussis-containing vaccine (63).
Hib monovalent and combination conjugate vaccines are inactivated vaccines and may be administered to persons with immunocompromising conditions. However, immunologic response to the vaccine might be suboptimal (18).
# Guidance for Chemoprophylaxis
Secondary cases of Hib disease (illness occurring within 60 days of contact with a patient) occur but are rare. Secondary attack rates are higher among household contacts aged <4 years. Rifampin chemoprophylaxis eradicates Hib carriage in approximately 95% of carriers (96)(97)(98)(99). There are no guidelines for control measures around cases of invasive nontype b H. influenzae disease. Chemoprophylaxis is not recommended for contacts of persons with invasive disease caused by nontype b H. influenzae because cases of secondary transmission of disease have not been documented (100,101).
# Index Patients with Invasive Hib Disease
Index patients who are treated with an antibiotic other than cefotaxime or ceftriaxone and are aged <2 years should receive rifampin prior to hospital discharge (22). Because cefotaxime and ceftriaxone eradicate Hib colonization, prophylaxis is not needed for patients treated with either of these antimicrobials.
# Household Contacts
Rifampin chemoprophylaxis is recommended for index patients (unless treated with cefotaxime or ceftriaxone) and all household contacts in households with members aged <4 years who are not fully vaccinated or members aged <18 years who are immunocompromised, regardless of their vaccination status (22).
# Child Care Contacts
Rifampin chemoprophylaxis is recommended in child care settings when two or more cases of invasive Hib disease have occurred within 60 days and unimmunized or underimmunized children attend the facility (22). When prophylaxis is indicated, it should be prescribed for all attendees, regardless of age or vaccine status, and for child care providers.
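Taken together, the household and child care rules in this section reduce to a pair of predicates. The consolidated Python sketch below is illustrative (the predicate names are ours) and is not a substitute for the cited guidance (22):

```python
def household_rifampin_indicated(member_under_4y_not_fully_vaccinated: bool,
                                 immunocompromised_member_under_18y: bool) -> bool:
    # Applies to the index patient (unless treated with cefotaxime or
    # ceftriaxone) and to all household contacts.
    return member_under_4y_not_fully_vaccinated or immunocompromised_member_under_18y

def child_care_rifampin_indicated(hib_cases_within_60_days: int,
                                  underimmunized_children_attend: bool) -> bool:
    return hib_cases_within_60_days >= 2 and underimmunized_children_attend

assert household_rifampin_indicated(True, False)
assert not child_care_rifampin_indicated(1, True)  # a single case does not trigger prophylaxis
```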
# Conclusion
Hib disease was once a leading cause of bacterial meningitis among U.S. children aged <5 years. As a result of the introduction of Hib vaccines in the United States and sustained high vaccine coverage, Hib disease is now rare, with rates below the Healthy People 2020 objective. However, the risk for invasive Hib disease continues among unimmunized and underimmunized children, highlighting the importance of full vaccination with the primary series and booster doses. Although Hib disease is uncommon, continued H. influenzae surveillance with complete serotyping data is necessary so that all Hib cases are identified and appropriate chemoprophylaxis measures can be taken.
How can we encourage ongoing development, refinement, and evaluation of practices to identify and build an evidence base for best practices? On the basis of a review of the literature and expert input, we worked iteratively to create a framework with 2 interrelated components. The first - public health impact - consists of 5 elements: effectiveness, reach, feasibility, sustainability, and transferability. The second - quality of evidence - consists of 4 levels, ranging from weak to rigorous. At the intersection of public health impact and quality of evidence, a continuum of evidence-based practice emerges, representing the ongoing development of knowledge across 4 stages: emerging, promising, leading, and best. This conceptual framework brings together important aspects of impact and quality to provide a common lexicon and criteria for assessing and strengthening public health practice. We hope this work will invite and advance dialogue among public health practitioners and decision makers to build and strengthen a diverse evidence base for public health programs and strategies.

# Introduction
In an ideal world, decision makers and practitioners would have access to evidence-based programs and strategies to improve health and reduce health problems with the highest preventable burden. Decision makers would also have tools to guide their selecting and adapting appropriate approaches for specific contexts. In reality, much work needs to be done to achieve this goal, and many pressing public health problems exist where evidence is not yet fully established. Moreover, public health agencies at all levels face substantial fiscal constraints and challenges to improving health across varying contexts. Therefore, the need grows for evidence-based practices that afford maximum efficiency and effectiveness. Many agencies and organizations in the United States have acknowledged such challenges, including the White House (1), the Government Accountability Office (2), the US Department of Health and Human Services (3), the Congressional Budget Office (4), Healthy People 2020 (5), the Community Preventive Services Task Force (6), the National Academy of Sciences (7), and the Trust for America's Health (8). International attention on evidence-based practice has mirrored that in the United States, including acknowledgment by the United Nations (9) and the World Health Organization (10).
Systematic reviews involve a critical examination of studies addressing a particular issue, using an organized method for locating and evaluating evidence. Many systematic reviews favor what Chen (11) refers to as the "Campbellian validity typology" - the use of research designs and methods that maximize internal validity to assess program efficacy. Identifying evidence-based practices by using this paradigm alone will often ignore factors critical to the successful integration of social and public health programs in real world settings. One exception is The Guide to Community Preventive Services (The Community Guide) - a repository of 1) recommendations and findings made by the independent, nonfederal Community Preventive Services Task Force about the effectiveness of community-based programs, services, and policies; and 2) the systematic reviews on which these recommendations are based (www.thecommunityguide.org) (6). The Community Guide reviews include research-tested and practice-based studies that use a range of study designs and assess both internal and external validity (6,12).
There is a need to assess the evidence for, categorize, and encourage additional study of strategies without enough evidence to undergo review by The Community Guide but that hold potential for public health impact. Practitioners and decision makers need tools to guide them toward selecting and evaluating the best available practices when published best practices are unavailable. Although systematic reviews identify what many public health professionals consider best practices, the public health field lacks a consensus definition and commonly accepted criteria for the designation best practice.
Consequently, the Office for State, Tribal, Local and Territorial Support at the Centers for Disease Control and Prevention (CDC) convened the CDC Best Practices Workgroup to develop a working definition of best practices, along with criteria and processes for classifying diverse practices in relationship to best practice. Through these efforts, the workgroup also aimed to encourage further development of practices that show promise for improving public health outcomes. We describe the development, structure, and use of the workgroup's conceptual framework for creating a set of best practices. Our purposes in presenting the framework are to promote dialogue among scientists and practitioners about a consistent taxonomy for classifying the evidence for public health practices and to help researchers, practitioners, and evaluators show how their work contributes to building the evidence base for particular practices.
# Background and Approach
The CDC Best Practices Workgroup (see list of members in "Acknowledgments") consisted of 25 CDC staff members with varying backgrounds (eg, epidemiology, behavioral science, program design and management, evaluation, policy development) and topic expertise (eg, chronic disease, infectious disease, injury prevention, environmental health). The workgroup reviewed the literature to find models and frameworks for classifying evidence, including best practices. The following question guided the review: What is known about the scope and definitions of best practices? The review included a keyword search of peer-reviewed and gray literature; workgroup members also identified relevant literature. The scope of the review was intentionally broad and included concepts related to evidence-based programs and policies as well as practice-based evidence.
The workgroup defined practice as "field-based or research-tested actions intended to effect a positive change." Practices could include interventions, programs, strategies, policies, procedures, processes, or activities encompassing the 10 essential public health services and related activities (www.cdc.gov/nphpsp/essentialservices.html). This broad definition of practice includes activities designed to improve specific outcomes (eg, morbidity, mortality, disability, quality of life) or to focus on increasing the effectiveness and efficiency of program operations and services.
The review identified articles describing CDC initiatives that develop and disseminate recommendations for public health practice, such as Diffusion of Effective Behavioral Interventions (DEBI); documents from The Community Guide (6,12); and non-CDC articles about identifying and using evidence to improve practice (1-5,7-10,13). Terms and descriptions found during the literature search appear in Table 1 (14)(15)(16)(17)(18)(19). The workgroup abstracted key concepts, including gaps in defining and evaluating best practices and classifying other practices with varying amounts of evidence. A consensus definition of "best practice" was not found, but common elements were. In particular, we found that "best practice" and related terms do not refer to a static assessment or activity; rather, they refer to where, on a continuum, a particular practice falls at a given time. The review identified multiple ways to characterize this continuum or hierarchy, along with considerable variability in the number of stages or levels and in the rigor of methods used for identifying best practices.
The workgroup created a conceptual framework for planning and improving evidence-based practices by adapting and extending several streams of existing work related to developing a continuum of evidence (6,12,13,(20)(21)(22). To broaden the framework's usability, the workgroup developed criteria, definitions, and examples for key terms and formulated a series of questions to apply in assessing and classifying practices. The work was iterative and included frequent comparisons of the framework, definitions, criteria, and assessment questions with how these aspects of public health were discussed in the literature and with the extensive experience of workgroup members. We also continued to refine the products after the workgroup completed its tenure.
# Conceptual Framework
As a result of the review, the workgroup defined the term "best practice" as "a practice supported by a rigorous process of peer review and evaluation indicating effectiveness in improving health outcomes, generally demonstrated through systematic reviews." The workgroup produced a conceptual framework (Figure) consisting of 2 interrelated components: public health impact and quality of evidence. The public health impact component consists of 5 elements - effectiveness, reach, feasibility, sustainability, and transferability - and draws on earlier work (20), the integrative validity model (11), and the systematic screening and assessment method (21). Some elements of the public health impact component, such as sustainability and transferability, are more challenging to assess than others. To address this issue, the workgroup developed questions that users can ask to determine the extent to which the practices they are developing or evaluating address each element (Box). The questions are not comprehensive, and some may not be relevant for all practices; however, having answers to these questions can help facilitate consistent interpretation of impact.
The quality-of-evidence component refers to where a practice lies on an evidence-based practice continuum (Figure). These elements represent 4 levels of evidentiary quality -weak, moderate, strong, and rigorous -that are represented on the horizontal axis. For example, field-based summaries or evaluations of progress with plausible but unproven impact are labeled as weak, whereas assessments of multiple studies or evaluations conducted by using systematic review methods are classified as rigorous (a complete set of definitions and examples is in Table 2) (22).
# Box. Definitions for Elements of Public Health Impact and Examples of Questions to Consider Related to the Elements
Effectiveness: Extent to which the practice achieves the desired outcomes
- What are the practice's desired outcomes?
- How consistent is the evidence?
- What is the magnitude of the effect, including efficiency or effectiveness or both, as appropriate?
- What is the significance to public health, systems, or organizational outcomes?
- What are the benefits or risks for adverse outcomes?
- In considering benefits or risks for adverse outcomes, does the practice promote health equity?
- To what extent does the practice achieve the desired outcomes?
At the intersection of impact and quality of evidence, a continuum of evidence-based practice emerges, depicted by the arrow at the center of the Figure. This continuum represents the ongoing application of knowledge about what is working to strengthen impact in a given context. Building on the work by Brennan and colleagues (23), a lexicon was created for the continuum consisting of 4 stages -emerging, promising, leading, and best. In this conceptual framework, emerging practices include practices assessed through field-based summaries or evaluations in progress that show some evidence of effectiveness and at least plausible evidence of reach, feasibility, sustainability, and transferability. Emerging practices are generally newer, with a plausible theoretical basis and preliminary evidence of impact. These practices require more implementation and further evaluation to determine whether their potential impact can be replicated over time and in other settings and populations. Promising practices include practices assessed through unpublished intervention evaluations that have not been peer reviewed and that demonstrate some evidence of effectiveness, reach, feasibility, sustainability, and transferability. Promising practices have been evaluated more thoroughly than emerging practices and may include practices with higher quality of evidence and lower impact or with lower quality of evidence and higher impact, where decisions related to application will likely depend on context. Leading practices include practices assessed through peerreviewed studies or through nonsystematic review of published intervention evaluations that show growing evidence of effectiveness and some combination of evidence of reach, feasibility, sustainability, and transferability. Best practices adhere to the most rigorous assessments in the continuum, including systematic reviews of research and evaluation studies, which demonstrate evidence of effectiveness and growing evidence of reach, feasibility, sustainability, and transferability (eg, The Community Guide and the HIV/AIDS Prevention Research Synthesis Project).
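One way to make the lexicon concrete is to encode the four evidence levels and the stage each one anchors; the Python sketch below mirrors the labels in the text but deliberately simplifies the continuum, since stage assignment also weighs the five impact elements:

```python
from enum import Enum

class EvidenceQuality(Enum):
    WEAK = "field-based summaries or evaluations in progress"
    MODERATE = "unpublished, non-peer-reviewed intervention evaluations"
    STRONG = "peer-reviewed studies or nonsystematic reviews"
    RIGOROUS = "systematic reviews of research and evaluation studies"

STAGE_ANCHORED_BY = {
    EvidenceQuality.WEAK: "emerging",
    EvidenceQuality.MODERATE: "promising",
    EvidenceQuality.STRONG: "leading",
    EvidenceQuality.RIGOROUS: "best",
}

print(STAGE_ANCHORED_BY[EvidenceQuality.STRONG])  # leading
```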
These classifications are hypothesized to be dynamic and can change over time for a given practice. For example, a practice may be assessed as promising at one point and be promoted to leading or best as stronger evidence is developed. Conversely, practices once considered "best" may become outdated as the field and environmental conditions evolve. The dynamic quality of these classifications highlights the importance of regularly re-evaluating practices at all points on the continuum, including updating systematic reviews of best practices. Such updates, incorporating all emerging evidence, are a critical component of the work of the Community Preventive Services Task Force and other review groups (6). By encouraging evaluation at each stage and supporting continued evaluation to move worthy practices up the evidence continuum, the framework aids quality and performance improvement (24).
Reach: Extent that the practice affects the intended and critical target population(s)
- What is the practice's intended and critical target population (individuals, customers, staff, agency, and other target populations)?
- What beneficiaries are affected?
- What is the proportion of the eligible population affected by the practice?
- How much of the population could ultimately be affected (potential reach)?
- How representative are the groups that are currently affected compared with groups ultimately affected by the problem?
- In considering representativeness, does the practice promote health equity?
- To what extent does the practice affect the intended and critical target population(s)?
Feasibility: Extent to which the practice can be implemented
- What are the barriers to implementing this practice?
- What are the facilitators to implementing this practice?
- What resources are necessary to fully implement the practice?
- Does the practice streamline or add complexity to existing procedures or processes?
- What is the cost-effectiveness, and what are the available resources to implement the practice?

Sustainability: Extent to which the practice can be maintained and achieve desired outcomes over time

- How is the practice designed to integrate with existing programs or processes or both?
- How is it designed to integrate with existing networks and partnerships?
- What level of resources is required to sustain the practice over time?
- What long-term effects or maintenance or improvement of effects over time can be achieved?
- How has the practice been maintained to achieve its desired outcomes over time?
Transferability: Extent to which the practice can be applied to or adapted for various contexts

- How has the practice been replicated in similar contexts, and did it achieve its intended outcomes?
- Was adaptation required in different contexts?
- How has the practice been adapted?
# Challenges and Next Steps

The framework as it now appears has several limitations: 1) it is conceptual and has not been fully evaluated, and 2) the two axes will be challenging to measure and balance against each other. Classifying a particular practice reliably using the framework depends on the background, perspective, and skill of the individuals assessing the practice.
Because substantial variation exists in practices needed to address various public health challenges, the questions and guidance used to make the designation of emerging, promising, leading, or best require ongoing refinement. Additional tools are needed to guide users in consistently examining the evidence related to a given practice in diverse circumstances. Helping users to develop a clear definition that addresses scope and boundaries for each practice being assessed is important because the development and implementation of public health practices often vary from one setting or population to another. The framework would benefit from additional input by diverse users and stakeholders to inform an iterative process of improvement and refinement over time.
To address these limitations, tools were developed to apply the framework, to determine quality and impact, and to begin to address the complexity of combining 5 elements. The framework and tools are undergoing pilot testing in numerous CDC programs. Pilot testing will assess the validity, intrarater and interrater reliability, and utility of the framework. Results of this pilot testing and further application of the framework will be published.
# Discussion
CDC is congressionally mandated to support the Community Preventive Services Task Force in developing the systematic reviews and related evidence-based recommendations found in The Community Guide (6). CDC also supports development of other systematic review activities (eg, the DEBI project) to identify best practices through rigorous assessment, advance public health science, and promote translation of interventions supported by the highest levels of evidence. The conceptual framework presented here builds on and complements these efforts by offering a practical approach designed for use by public health practitioners, evaluators, and researchers. The framework offers no justification for implementing less than a best practice in cases where a best practice is known and appropriate to address a public health problem. However, in cases where no practices have achieved best practice status, the framework supports the use of the best available practice for a given health problem and context while 1) making sure users recognize where that practice sits on the evidence continuum and 2) encouraging evaluation of impact, including publication of findings to contribute to the overall evidence base for that practice. A key goal for this framework is to improve public health programs by building practice-based evidence (20).
The value of the framework lies in offering a common lexicon and processes to strengthen the evidence for public health practice. The Federal Interagency Workgroup for Healthy People 2020 has adapted the evidence continuum components of the framework as developed by the CDC workgroup. Other stakeholders may use the framework, definitions, and criteria to communicate more clearly about the continuum of evidence for a wide spectrum of programs, including those that do not currently qualify as best practices but may benefit from additional study. Initial feedback from scientists, evaluators, and practitioners, both internal and external to CDC, supports both the face validity of and the need for this framework.
Applying the elements identified in the conceptual framework might assist with program planning, evaluation, and dissemination. The framework could potentially increase transparency and accountability for public health programming by clarifying the components and definitions of evidence. It could help practitioners and evaluators identify gaps in the evidence for a given practice and determine how their work contributes to building the evidence base. Practitioners might attend to the framework's effectiveness criteria as part of logic model development to help identify potential gaps between activities and intended outputs and outcomes. Several of these criteria already have been adopted in protocols to identify practice-based strategies for more rigorous evaluation (21). The framework's inclusion of feasibility and transferability could help facilitate adoption by increasing the likelihood that a practice will translate to other settings.
The conceptual framework is offered to promote dialogue among researchers, evaluators, practitioners, funders, and other decision makers. With this framework, we seek to foster a shared understanding for assessing standards of evidence and motivate organizations to implement and improve strategies that move practices along the continuum from emerging to best.
"id": "26f2c348d47adb7c2560642fd93d561c19931634",
"source": "cdc",
"title": "None",
"url": "None"
} |
Recommendations of the Advisory Committee on Immunization Practices

Department of Health and Human Services, Centers for Disease Control and Prevention

The MMWR series of publications is published by Surveillance, Epidemiology, and Laboratory Services, Centers for Disease Control and Prevention (CDC), U.S. Department of Health and Human Services, Atlanta, GA 30333. Suggested Citation: Centers for Disease Control and Prevention.

# Summary
This report summarizes new recommendations and updates previous recommendations of the Advisory Committee on Immunization Practices (ACIP) for postexposure prophylaxis (PEP) to prevent human rabies (CDC. Human rabies prevention - United States, 2008: recommendations of the Advisory Committee on Immunization Practices. MMWR 2008;57). Previously, ACIP recommended a 5-dose rabies vaccination regimen with human diploid cell vaccine (HDCV) or purified chick embryo cell vaccine (PCECV). These new recommendations reduce the number of vaccine doses to four. The reduction in doses recommended for PEP was based in part on evidence from rabies virus pathogenesis data, experimental animal work, clinical studies, and epidemiologic surveillance. These studies indicated that 4 vaccine doses in combination with rabies immune globulin (RIG) elicited adequate immune responses and that a fifth dose of vaccine did not contribute to more favorable outcomes. For persons previously unvaccinated with rabies vaccine, the reduced regimen of 4 1-mL doses of HDCV or PCECV should be administered intramuscularly. The first dose of the 4-dose course should be administered as soon as possible after exposure (day 0). Additional doses then should be administered on days 3, 7, and 14 after the first vaccination. ACIP recommendations for the use of RIG remain unchanged. For persons who previously received a complete vaccination series (pre- or postexposure prophylaxis) with a cell-culture vaccine or who previously had a documented adequate rabies virus-neutralizing antibody titer following vaccination with noncell-culture vaccine, the recommendation for a 2-dose PEP vaccination series has not changed. Similarly, the number of doses recommended for persons with altered immunocompetence has not changed; for such persons, PEP should continue to comprise a 5-dose vaccination regimen with 1 dose of RIG. Recommendations for pre-exposure prophylaxis also remain unchanged, with 3 doses of vaccine administered on days 0, 7, and 21 or 28. Prompt rabies PEP combining wound care, infiltration of RIG into and around the wound, and multiple doses of rabies cell-culture vaccine continues to be highly effective in preventing human rabies.
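Because the PEP calendar is fixed relative to day 0, dose dates can be computed mechanically; the sketch below is illustrative only (the function name is ours), and scheduling decisions remain clinical:

```python
from datetime import date, timedelta

def pep_vaccine_dates(day0: date, immunocompromised: bool = False):
    """Doses on days 0, 3, 7, and 14; a fifth dose on day 28 is retained
    for persons with altered immunocompetence."""
    offsets = [0, 3, 7, 14] + ([28] if immunocompromised else [])
    return [day0 + timedelta(days=d) for d in offsets]

for d in pep_vaccine_dates(date(2010, 3, 19)):
    print(d)  # 2010-03-19, 2010-03-22, 2010-03-26, 2010-04-02
```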
# Introduction
Rabies is a zoonotic disease caused by RNA viruses in the family Rhabdoviridae, genus Lyssavirus (1). Virus is transmitted in the saliva of rabid mammals via a bite. After entry to the central nervous system, these viruses cause an acute, progressive encephalomyelitis. The incubation period usually ranges from 1 to 3 months after exposure, but can range from days to years. Rabies can be prevented by avoidance of viral exposure and initiation of prompt medical intervention when exposure does occur. In the United States, animal rabies is common. In a recent study, approximately 23,000 persons per year were estimated to have been exposed to potentially rabid animals and received rabies postexposure prophylaxis (PEP) (2). With the elimination of canine rabies virus variants and enzootic transmission among dogs, human rabies is now rare in the United States, with an average of one or two cases occurring annually since 1960 (3).
Prompt wound care and the administration of rabies immune globulin (RIG) and vaccine are highly effective in preventing human rabies following exposure. A variety of empirical schedules and vaccine doses have been recommended over time, based in part on immunogenicity and clinical experience in areas of the world with enzootic canine or wildlife rabies (4). As more potent vaccines were developed, the number of vaccine doses recommended for PEP has decreased, and studies aimed at further revision and reduction of PEP schedules and doses in humans have been encouraged. By the latter half of the 20th century, a 4- to 6-dose, intramuscular regimen using human diploid cell vaccine (HDCV) or purified chick embryo cell vaccine (PCECV) was being recommended (5-8). In the United States, a 5-dose PEP vaccine regimen was adopted during the 1980s (9-12). In 2007, when human rabies vaccine was in limited supply, an ad hoc National Rabies Working Group was formed to reassess the recommendations for rabies prevention and control in humans and other animals. In 2008, a smaller Advisory Committee on Immunization Practices (ACIP) Rabies Workgroup was formed to review rabies vaccine regimen options. This report provides updated ACIP recommendations regarding the use of a 4-dose vaccination regimen, replacing the previously recommended 5-dose regimen, for rabies PEP in previously unvaccinated persons.
# Methods
The ACIP Rabies Workgroup* was formed in October 2008 to review 1) previous recommendations; 2) published and unpublished data from both national and global sources regarding rabies PEP; and 3) the immunogenicity, effectiveness, and safety of a 4-dose PEP rabies vaccination regimen. The ACIP Rabies Workgroup used an evidence-based process for consideration of a reduced vaccination regimen in human rabies PEP. This approach consisted of a review of information available from basic and applied studies of rabies prevention. Because rabies is almost always fatal among immunologically naive persons once clinical symptoms of rabies occur, randomized, placebo-controlled efficacy studies of vaccine in humans cannot be conducted. The ACIP Rabies Workgroup reviewed six areas: 1) rabies virus pathogenesis, 2) experimental animal models, 3) human immunogenicity studies, 4) prophylaxis effectiveness in humans, 5) documented failures of prophylaxis in humans, and 6) vaccine safety. Studies for review were identified by searching the PubMed database and other relevant references and by consulting subject-matter experts. When definitive research evidence was lacking, the recommendations incorporated the expert opinion of the ACIP Rabies Workgroup members. The ACIP Rabies Workgroup also sought advice and comment from representatives of the vaccine industry, the National Association of State Public Health Veterinarians, the Council of State and Territorial Epidemiologists, state and local public health officials, additional national stakeholder groups, and other national and international experts. The proposed revised recommendations and a draft statement from the ACIP Rabies Workgroup were presented to the full ACIP during February 2009. After review and comment by ACIP, a revised draft, recommending a reduced regimen of 4 1-mL doses of rabies vaccine for PEP in previously unvaccinated persons, was prepared for consideration. These recommendations were discussed and accepted by ACIP at the June 2009 meeting (13).

* A list of the membership appears on page 9 of this report.
# Rationale for Reduced Doses of Human Rabies Vaccine
A detailed review of the evidence in support of a reduced, 4-dose schedule for human PEP has been published (14). The totality of the evidence, obtained from the available peer-reviewed literature, unpublished data sources, epidemiologic reviews, and expert opinion, strongly supports a reduced vaccination schedule (Table 1). Since the 19th century, prophylactic interventions against rabies have recognized the highly neurotropic characteristics of lyssaviruses and have aimed at neutralizing the virus at the site of infection before it can enter the human central nervous system (Figure 1) (4,15,16). To accomplish this, immunologic interventions must be prompt and must be directed toward local virus neutralization, such as local infiltration with RIG and vaccination. Modern recommended rabies PEP regimens emphasize early wound care and passive immunization (i.e., infiltration of RIG in and around the wound) combined with active immunization (i.e., serial doses of rabies vaccine). Accumulated scientific evidence indicates that, following rabies virus exposure, successful neutralization and clearance of rabies virus mediated via appropriate PEP generally ensures patient survival (8). The induction of a rabies virus-specific antibody response is one important immunologic component of the response to vaccination (4). Development of detectable rabies virus-specific neutralizing antibodies is a surrogate for an adequate immune response to vaccination. Clinical trials of human rabies vaccination indicate that all healthy persons develop a detectable rabies virus-neutralizing antibody titer rapidly after initiation of PEP. For example, in a literature review conducted by the ACIP Rabies Workgroup of at least 12 published rabies vaccination studies during 1976-2008, representing approximately 1,000 human subjects, all subjects developed rabies virus-neutralizing antibodies by day 14 (14).
Observational studies indicate that PEP is universally effective in preventing human rabies when administered promptly and appropriately. Of the >55,000 persons who die annually of rabies worldwide, the majority either did not receive any PEP, received some form of PEP (usually without RIG) after substantial delays, or were administered PEP according to schedules that deviated substantially from current ACIP or World Health Organization recommendations (17). For example, a review of a series of 21 fatal human cases in which patients received some form of PEP indicated that 20 patients developed signs of illness, and most died before day 28 (Figure 2). In such cases, in which widespread infection of the central nervous system occurs before the due date (i.e., day 28) of the fifth vaccine dose, the utility of that dose must be nil. In the United States, of the 27 human rabies cases reported during 2000-2008, none of the patients had a history of receiving any PEP before illness, and this is the most common situation for human rabies deaths in both developed and developing countries (3,8). In India, an analysis from two animal bite centers during 2001-2002 demonstrated that in 192 human rabies cases, all deaths could be attributed to failure to seek timely and appropriate PEP, and none could be attributed to a failure to receive the fifth (day 28) vaccine dose (18). Even when PEP is administered imperfectly or not according to established dose schedule recommendations, it might still be generally effective. Several studies have reported cases involving persons who were exposed to potentially rabid animals and who received fewer than 5, 4, or even 3 doses of rabies vaccine but who nevertheless did not acquire rabies (Table 2). For example, in one series from New York, 147 (13%) of 1,132 patients had no report of receiving the complete 5-dose vaccine regimen. Of these patients, 26 (18%) received only 4 doses of vaccine, and two of these patients were exposed to animals with laboratory-confirmed rabies. However, no documented cases of human rabies occurred (CDC, unpublished data, 2003). The ACIP Rabies Workgroup estimates that >1,000 persons in the United States annually receive rabies prophylaxis of only 3 or 4 doses, with no resulting documented cases of human rabies, even though >30% of these persons likely have exposure to confirmed rabid animals (14). In addition, no case of human rabies in the United States has been reported in which failure of PEP was attributable to receiving less than the 5-dose vaccine course. Worldwide, although human PEP failures have been reported very rarely, even in cases in which intervention appeared both prompt and appropriate (8), no cases have been attributed to the lack of receipt of the fifth human rabies vaccine dose on day 28 (4,17).
In vivo laboratory animal studies using multiple animal models, from rodents to nonhuman primates, have underscored the importance of timely PEP using RIG and vaccine, regardless of the absolute number of vaccine doses used or the schedule (14,19). For example, in a study in which 1, 2, 3, 4, or 5 doses of rabies vaccine were used in a Syrian hamster model in combination with human rabies immune globulin (HRIG), no statistically significant differences in elicited protection and consequent survivorship were observed among groups receiving different doses (20). In the same study, using a murine model, no differences were detected in immunogenicity and efficacy of PEP with 2, 3, or 4 vaccine doses. In another study using a nonhuman primate model, 1 dose of cell-culture vaccine, in combination with RIG administered 6 hours postexposure, provided substantial protection (21). In another study, a 3-dose regimen was evaluated in a canine model and determined to be effective in preventing rabies (22).
Compared with older, nerve tissue-based products, adverse reactions associated with modern human rabies vaccination are uncommon (4). A review by the Workgroup of published and unpublished human rabies vaccine clinical trials and Vaccine Adverse Event Reporting System data identified no adverse events that were correlated with a failure to receive the fifth vaccine dose. Because some adverse reactions might be independent clinical events with each vaccine administration, the omission of the vaccine dose on day 28 might have some positive health benefits. Otherwise, the overall safety of human rabies PEP is expected to be unchanged from the evidence provided in the 2008 ACIP report (12).
Preliminary economic assessments support the cost savings associated with a reduced schedule of vaccination (23,24). The ACIP Rabies Workgroup has estimated that, assuming 100% compliance with a recommended vaccine regimen, a change in recommendation from a 5-dose schedule to a 4-dose schedule would save approximately $16.6 million in costs to the U.S. health-care system. Persons who receive rabies vaccination might see some savings related to deletion of the fifth recommended dose of vaccine, measured in both the cost of the vaccine and the costs associated with the additional medical visit.
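To put the Workgroup's estimate in concrete terms, a rough back-of-envelope calculation (an illustration only, combining the $16.6 million figure above with the estimate, cited earlier, of approximately 23,000 persons receiving PEP annually, and assuming the Workgroup's 100% compliance premise) implies the avoided cost per PEP course:

```latex
\frac{\$16{,}600{,}000 \text{ per year}}{23{,}000 \text{ PEP courses per year}}
\approx \$722 \text{ per course (one vaccine dose plus one medical visit)}
```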
# Revised Rabies Postexposure Prophylaxis Recommendations
This report presents revised recommendations for human rabies PEP (Table 3). Rabies PEP includes wound care and administration of both RIG and vaccine.
# Postexposure Prophylaxis for Unvaccinated Persons
For unvaccinated persons, the combination of RIG and vaccine is recommended for both bite and nonbite exposures, regardless of the time interval between exposure and initiation of PEP. If PEP has been initiated and appropriate laboratory diagnostic testing (i.e., the direct fluorescent antibody test) indicates that the animal that caused the exposure was not rabid, PEP may be discontinued.
# Vaccine Use
A regimen of 4 1-mL vaccine doses of HDCV or PCECV should be administered intramuscularly to previously unvaccinated persons (Table 3). The first dose of the 4-dose regimen should be administered as soon as possible after exposure. The date of the first dose is considered to be day 0 of the PEP series. Additional doses then should be administered on days 3, 7, and 14 after the first vaccination. Recommendations for the site of the intramuscular vaccination remain unchanged (e.g., for adults, the deltoid area; for children, the anterolateral aspect of the thigh also is acceptable). The gluteal area should not be used because administration of vaccine in this area might result in a diminished immunologic response. Children should receive the same vaccine dose (i.e., vaccine volume) as recommended for adults.
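Because the series is anchored to the date of the first dose, the remaining due dates are simple date arithmetic. The sketch below is a minimal illustration of the day-0/3/7/14 schedule described above for previously unvaccinated, immunocompetent persons; the function and variable names are ours, not part of any official tool.

```python
from datetime import date, timedelta

# Day offsets for the revised 4-dose PEP series (day 0 = first dose).
PEP_SCHEDULE_DAYS = (0, 3, 7, 14)

def pep_dose_dates(first_dose: date) -> list[date]:
    """Return the calendar date on which each of the 4 doses is due."""
    return [first_dose + timedelta(days=d) for d in PEP_SCHEDULE_DAYS]

for n, due in enumerate(pep_dose_dates(date(2009, 6, 1)), start=1):
    print(f"Dose {n}: {due.isoformat()}")
```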
# HRIG Use
The recommendations for use of immune globulin in rabies prophylaxis remain unchanged by the revised recommendation of a reduced rabies vaccine schedule. HRIG is administered once to previously unvaccinated persons to provide rabies virus-neutralizing antibody coverage until the patient responds to vaccination by actively producing virus-neutralizing antibodies. HRIG is administered once on day 0, at the time PEP is initiated, in conjunction with human rabies vaccines available for use in the United States. If HRIG was not administered when vaccination was begun on day 0, it can be administered up to and including day 7 of the PEP series (12,25). If anatomically feasible, the full dose of HRIG is infiltrated around and into any wounds. Any remaining volume is injected intramuscularly at a site distant from vaccine administration. HRIG is not administered in the same syringe or at the same anatomic site as the first vaccine dose. However, subsequent doses (i.e., on days 3, 7, and 14) of vaccine in the 4-dose vaccine series can be administered in the same anatomic location in which HRIG was administered.
# Postexposure Prophylaxis for Previously Vaccinated Persons
Recommendations for PEP have not changed for persons who were vaccinated previously. Previously vaccinated persons are those who have received one of the ACIP-recommended pre- or postexposure prophylaxis regimens (with cell-culture vaccines) or those who received another vaccine regimen (or vaccines other than cell-culture vaccine) and had a documented adequate rabies virus-neutralizing antibody response. Previously vaccinated persons, as defined above, should receive 2 vaccine doses (1.0 mL each in the deltoid), the first dose immediately and the second dose 3 days later. Administration of HRIG is unnecessary, and HRIG should not be administered to previously vaccinated persons to avoid possible inhibition of the relative strength or rapidity of an expected anamnestic response (26). Local wound care remains an important part of rabies PEP for any previously vaccinated person.
# Vaccination and Serologic Testing

# Postvaccination Serologic Testing
All healthy persons tested in accordance with ACIP guidelines after completion of at least a 4-dose regimen of rabies PEP should demonstrate an adequate antibody response against rabies virus (14). Therefore, no routine testing of healthy patients completing PEP is necessary to document seroconversion (12). When titers are obtained, serum specimens collected 1-2 weeks after prophylaxis (after the last dose of vaccine) should completely neutralize challenge virus at least at a 1:5 serum dilution by the rapid fluorescent focus inhibition test (RFFIT). Rabies virus-neutralizing antibody titers will decline gradually after the last vaccination. Minimal differences (i.e., within one dilution of sera) in the reported values of rabies virus-neutralizing antibody results might occur between laboratories that provide antibody determination using the recommended RFFIT. Commercial rabies virus antibody titer determination kits that are not approved by the Food and Drug Administration are not appropriate for use as a substitute for the RFFIT. Discrepant results might occur after the use of such tests, and actual virus-neutralizing activity in clinical specimens cannot be measured.
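Endpoint titers from serial-dilution assays are compared on a logarithmic scale, so "within one dilution" means that two reported reciprocal titers differ by at most one dilution step. A minimal sketch of that comparison follows; the 5-fold dilution factor is an assumption chosen for illustration, since actual RFFIT dilution schemes vary by laboratory protocol.

```python
import math

def within_one_dilution(titer_a: float, titer_b: float, fold: float = 5.0) -> bool:
    """True if two reciprocal endpoint titers differ by at most one
    dilution step of the given fold factor (assumed 5-fold here)."""
    steps = abs(math.log(titer_a / titer_b, fold))
    return steps <= 1.0 + 1e-9  # small tolerance for floating-point error

print(within_one_dilution(25, 125))  # True: one 5-fold step apart
print(within_one_dilution(5, 625))   # False: three steps apart
```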
# Management of Adverse Reactions, Precautions, and Contraindications

# Management of Adverse Reactions
Recommendations for management and reporting of vaccine adverse events have not changed. These recommendations have been described in detail previously (12).
# Immunosuppression
Recommendations for rabies pre- and postexposure prophylaxis for persons with immunosuppression have not changed. General recommendations for active and passive immunization in persons with altered immunocompetence have been summarized previously (27,28). This updated report discusses specific recommendations for patients with altered immunocompetence who require rabies pre- and postexposure prophylaxis. All rabies vaccines licensed in the United States are inactivated cell-culture vaccines that can be administered safely to persons with altered immunocompetence. Because corticosteroids, other immunosuppressive agents, antimalarials, and immunosuppressive illnesses might reduce immune responses to rabies vaccines substantially, for persons with immunosuppression, rabies PEP should be administered using a 5-dose vaccine regimen (i.e., 1 dose of vaccine on days 0, 3, 7, 14, and 28), with the understanding that the immune response still might be inadequate. Immunosuppressive agents should not be administered during rabies PEP unless essential for the treatment of other conditions. If possible, immunosuppressed patients should postpone rabies preexposure prophylaxis until the immunocompromising condition is resolved. When postponement is not possible, immunosuppressed persons who are at risk for rabies should have their virus-neutralizing antibody responses checked after completing the preexposure series. Postvaccination rabies virus-neutralizing antibody values might be less than adequate among immunosuppressed persons with HIV or other infections (29,30). When rabies pre- or postexposure prophylaxis is administered to an immunosuppressed person, one or more serum samples should be tested for rabies virus-neutralizing antibody by the RFFIT to ensure that an acceptable antibody response has developed after completing the series. If no acceptable antibody response is detected after the final dose in the pre- or postexposure prophylaxis series, the patient should be managed in consultation with their physician and appropriate public health officials.
# Variation from Human Rabies Vaccine Package Inserts
These new ACIP recommendations differ from current rabies vaccine label instructions, which still list the 5-dose series for PEP. Historically, ACIP review and subsequent public health recommendations for the use of various biologics have occurred after vaccine licensure and generally are in agreement with product labels. However, differences between ACIP recommendations and product labels are not unprecedented. For example, during the early 1980s, ACIP review and recommendations concerning the intradermal use of rabies vaccines occurred well in advance of actual label claims and licensing (9). On the basis of discussions with industry representatives, alterations of current product labels for HDCV and PCECV are not anticipated by the producers of human rabies vaccines licensed for use in the United States.

# References

20. Franka R, Wu X, Jackson RF, et al. Rabies virus pathogenesis in relationship to intervention with inactivated and attenuated rabies vaccines. Vaccine 2009;27:7149-55.
21. Sikes RK, Cleary WF, Koprowski H, Wiktor TJ, …
"id": "5534691268f0e12e41595745e1af55bfad1be2fe",
"source": "cdc",
"title": "None",
"url": "None"
} |
Unger is involved in a material transfer agreement with Merck Research Laboratories to implement the competitive Luminex assay for HPV 6, 11, 16, and 18 serology. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use.

# Introduction
Genital human papillomavirus (HPV) is the most common sexually transmitted infection in the United States; an estimated 6.2 million persons are newly infected every year (1). Although the majority of infections cause no symptoms and are self-limited, persistent genital HPV infection can cause cervical cancer in women and other types of anogenital cancers and genital warts in both men and women.

Approximately 100 HPV types have been identified, over 40 of which infect the genital area (2). Genital HPV types are categorized according to their epidemiologic association with cervical cancer. Infections with low-risk types (e.g., types 6 and 11) can cause benign or low-grade cervical cell changes and genital warts, whereas infections with high-risk types can cause low-grade cervical cell abnormalities, high-grade cervical cell abnormalities that are precursors to cancer, and anogenital cancers (5). High-risk HPV types are detected in 99% of cervical cancers (6); approximately 70% of cervical cancers worldwide are caused by types 16 and 18 (7). Although infection with high-risk types is considered necessary for the development of cervical cancer, it is not sufficient because the majority of women with high-risk HPV infection do not develop cancer (3,4).
In addition to cervical cancer, HPV infection also is associated with anogenital cancers such as cancer of the vulva, vagina, penis, and anus (Table 1) (8,9). Each of these is less common than cervical cancer (10-14). The association of genital types of HPV with nongenital cancer is less well established, but studies support a role in a subset of oral cavity and pharyngeal cancers (15).
In June 2006, the quadrivalent HPV (types 6, 11, 16, 18) vaccine (GARDASIL™, manufactured by Merck and Co., Inc., Whitehouse Station, New Jersey) was licensed for use among females aged 9-26 years for prevention of vaccine HPV-type-related cervical cancer, cervical cancer precursors, vaginal and vulvar cancer precursors, and anogenital warts. Efficacy studies are ongoing in men.
In the United States, cervical cancer prevention and control programs have reduced the number of cervical cancer cases and deaths through cervical cytology screening, which can detect precancerous lesions. The quadrivalent HPV vaccine will not eliminate the need for cervical cancer screening in the United States because not all HPV types that cause cervical cancer are included in the vaccine.
# Methods
The Advisory Committee on Immunization Practices (ACIP) HPV vaccine workgroup first met in February 2004 to begin reviewing data related to the quadrivalent HPV vaccine. The workgroup held monthly teleconferences and meetings three times a year to review published and unpublished data from the HPV vaccine clinical trials, including data on safety, immunogenicity, and efficacy. Data on epidemiology and natural history of HPV, vaccine acceptability, and sexual behavior in the United States also were reviewed. Several economic and cost-effectiveness analyses were considered. Presentations on these topics were made to ACIP during meetings in June 2005, October 2005, and February 2006. Recommendation options were developed and discussed by the ACIP HPV vaccine workgroup. When evidence was lacking, the recommendations incorporated expert opinion of the workgroup members. Options being considered by the workgroup were presented to ACIP in February 2006. The final recommendations were presented to ACIP at the June 2006 ACIP meeting. After discussions, minor modifications were made, and the recommendations were approved at the June 2006 meeting. Modifications were made to the ACIP statement during the subsequent review process at CDC to update and clarify wording in the document.
The quadrivalent HPV vaccine is a new vaccine; additional data will be available in the near future from clinical trials. These data and any new information on the epidemiology of HPV will be reviewed by ACIP as they become available, and recommendations will be updated as needed.
# Background

# Biology of HPV
HPVs are nonenveloped, double-stranded DNA viruses in the family Papillomaviridae. Isolates of HPV are classified as "types," and numbers are assigned in order of their discovery (16). Types are designated on the basis of the nucleotide sequence of specific regions of the genome. All HPVs have an 8-kb circular genome enclosed in a capsid shell composed of the major and minor capsid proteins L1 and L2, respectively. Purified L1 protein will self-assemble to form empty shells that resemble a virus, called virus-like particles (VLPs). In addition to the structural genes (L1 and L2), the genome encodes several early genes (E1, E2, E4, E5, E6, and E7) that enable viral transcription and replication and interact with the host genome. Immortalization and transformation functions are associated with the E6 and E7 genes of high-risk HPV. E6 and E7 proteins from high-risk types are the primary oncoproteins; they manipulate cell cycle regulators, induce chromosomal abnormalities, and block apoptosis (17).
Papillomaviruses initiate infection in the basal layer of the epithelium, and viral genome amplification occurs in differentiating cells using the cellular replication machinery. After infection, differentiating epithelial cells that are normally nondividing remain in an active cell cycle. This can result in a thickened, sometimes exophytic, epithelial lesion. The virus is released as cells exfoliate from the epithelium. With neoplastic progression, the virus might integrate into the host chromosomes, and little virion production will occur.
# Immunology of HPV
HPV infections are largely shielded from the host immune response because they are restricted to the epithelium (18). Humoral and cellular immune responses have been documented, but correlates of immunity have not been established. Serum antibodies against many different viral products have been demonstrated. The best characterized and most type-specific antibodies are those directed against conformational epitopes of the L1 capsid protein assembled as VLPs. Not all infected persons have antibodies; in one study, 54%-69% of women with incident HPV 16, 6, or 18 infections had antibodies (19). Among newly infected women, the median time to seroconversion is approximately 8 months (20,21).
# Laboratory Testing for HPV
HPV cannot be cultured. Detecting HPV requires identification of HPV genetic information (DNA in the majority of assay formats). Assays differ considerably in their sensitivity and type specificity. The anatomic region sampled and the method of specimen collection will impact detection.
Epidemiology and basic research studies of HPV typically use nucleic acid amplification methods that generate type-specific and, in certain formats, quantitative results. Polymerase chain reaction (PCR) assays used most commonly in epidemiologic studies target genetically conserved regions of the L1 gene. These consensus assays are designed to amplify HPV, and types are then determined by type-specific hybridization, restriction enzyme digestion, or sequencing. In the trials of quadrivalent HPV vaccine, multiplex assays were used that specifically detect the L1, E6, and E7 genes for each HPV type.
The most frequently used HPV serologic assays are VLP-based enzyme immunoassays, designed to detect antibodies to the L1 viral protein. The type-specificity of the assay depends on preparation of conformationally intact VLPs in recombinant baculovirus or other eukaryotic expression systems (22). Serologic assays are available only in research settings. Key laboratory reagents are not standardized, and no gold standards exist for setting a threshold for a positive result (23). In trials of the quadrivalent HPV vaccine, a competitive radioimmunoassay or a quadriplex competitive immunoassay was used, both of which measure neutralizing antibodies in serum (24,25).
# Epidemiology of HPV Infection

# Transmission and Risk Factors
Genital HPV infection is primarily transmitted by genital contact, usually through sexual intercourse (2,26). In virtually all studies of HPV prevalence and incidence, the most consistent predictors of infection have been measures of sexual activity, most importantly the number of sex partners (lifetime and recent) (27-34). For example, one study indicated that HPV infection was present in 14.3% of women aged 18-25 years with one lifetime sex partner, 22.3% of those with two lifetime sex partners, and 31.5% of those with more than three lifetime partners (33). Transmission of HPV through other types of genital contact in the absence of penetrative intercourse (i.e., oral-genital, manual-genital, and genital-genital contact) has been described, but is less common than through sexual intercourse (26,35,36). Additional risk factors identified primarily for females include partner sexual behavior (26) and immune status (37,38). Genital HPV infection also can be transmitted by nonsexual routes, but this is uncommon. Nonsexual routes of genital HPV transmission include transmission from a mother to a newborn baby (39,40).
Because HPV is transmitted by sexual activity, understanding the epidemiology of HPV requires data on sexual behavior. The 2002 National Survey of Family Growth (http://www.cdc.gov/nchs/nsfg) indicated that 24% of females in the United States were sexually active by age 15 years (41). This percentage increased to 40% by age 16 years and to 70% by age 18 years. Among sexually active females aged 15-19 years and 20-24 years, the median number of lifetime male sex partners was 1.4 and 2.8, respectively (42). The 2005 Youth Risk Behavior Survey (/mmwrhtml/SS5505a1.htm) indicated that 3.7% of female students had been sexually active before age 13 years (43). Of those sexually active, 5.7% of 9th-grade females and 20.2% of 12th-grade females had had four or more sex partners.
# Natural History of HPV Infection
The majority of HPV infections are transient and asymptomatic and cause no clinical problems; 70% of new HPV infections clear within 1 year, and approximately 90% clear within 2 years (27,44-46). The median duration of new infections is 8 months (27,45). Persistent infection with high-risk types of HPV is the most important risk factor for cervical cancer precursors and invasive cervical cancer (45,47-50). The risk for persistence and progression to precancerous lesions varies by HPV type, with HPV 16 being more oncogenic than other high-risk HPV types (51,52). Factors associated with cervical cancer in epidemiologic studies include cigarette smoking, increased parity, increased age, other sexually transmitted infections, immune suppression, long-term oral contraceptive use, and other host factors (53-55). The time between initial HPV infection and development of cervical cancer is usually decades. Many aspects of the natural history of HPV are poorly understood, including the role and duration of naturally acquired immunity after HPV infection.
# HPV Prevalence and Incidence in the United States
Overall in the United States, an estimated 6.2 million new HPV infections occur every year among persons aged 14-44 years (1). Of these, 74% occur among those aged 15-24 years. Modeling estimates suggest that >80% of sexually active women will have acquired genital HPV by age 50 years (56).
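The >80% figure comes from a published model (56). As a purely illustrative plausibility check (our simplification, not the model actually used), a constant-hazard sketch gives a similar order of magnitude; the 5% annual acquisition probability and the 32-year exposure window are assumptions chosen only to show the shape of the calculation:

```latex
P(\text{acquisition by age 50}) = 1 - (1 - p)^{n}
\approx 1 - (1 - 0.05)^{32} \approx 0.81
```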
Routine reporting of HPV does not exist in the United States. Information on prevalence and incidence has been obtained primarily from clinic-based populations, such as family planning, sexually transmitted disease, or university health clinic patients. These evaluations have documented prevalence of HPV DNA ranging from 14% to 90% (57). Prevalence was highest among sexually active females aged <25 years and decreased with increasing age (31,32,58,59). Data from a multisite, clinic-based study of sexually active women in the United States indicated that prevalence was highest among those aged 14-19 years (60).
Two studies have reported prevalence in representative, population-based samples. In a study of sexually active women aged 18-25 years, prevalence of any HPV was 26.9% (33). Prevalence of types 6 or 11 was 2.2%, and prevalence of types 16 or 18 was 7.8%. In a study of females aged 14-59 years during 2003-2004, the prevalence of any HPV was 26.8% (61). Prevalence was highest among women aged 20-24 years (44.8%). Overall, prevalence of types 6, 11, 16, and 18 was 1.3%, 0.1%, 1.5%, and 0.8%, respectively.
Few data exist on cumulative risk for HPV infection. Detection of HPV DNA indicates infection and does not provide information on women who were infected but cleared the HPV. Seroprevalence data can provide a better estimate of cumulative risk but will also be an underestimate, because not all persons with natural HPV infection have detectable antibodies. In a representative sample of women aged 20-29 years in the United States, HPV 16 seroprevalence was 25% (62). Because as few as 60% of those infected with HPV have detectable antibodies, the seroprevalence is an underestimate, and true exposure to HPV 16 could be as high as 41% among women in that age group. Data also are available from the quadrivalent HPV vaccine phase III trials, in which both HPV PCR assays on cervical specimens and serologic tests were performed at enrollment. Participation was restricted to sexually active women who had no more than four lifetime partners or were planning sexual debut. Among 5,996 North American females aged 16-24 years, 92% were sexually active, and the median number of lifetime sex partners was two; 24% had evidence of previous or current infection with HPV 6, 11, 16, or 18 on the basis of serology and/or PCR at the time of enrollment; four (0.1%) had evidence of infection with all four vaccine types (Merck and Co., unpublished data, 2006).
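The "as high as 41%" figure cited above follows directly from dividing the observed seroprevalence by the assumed sensitivity of serology (the 60% antibody-detection figure in the same paragraph):

```latex
\text{estimated cumulative exposure} \approx
\frac{\text{observed seroprevalence}}{\text{proportion of infected persons who seroconvert}}
= \frac{0.25}{0.60} \approx 0.417
```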
Studies of incident HPV infection that have evaluated HPV DNA detection over time demonstrate that acquisition occurs soon after sexual debut. In a prospective study of college women in the United States, the cumulative probability of incident infection was 38.9% by 24 months after first sexual intercourse. Of all HPV types, HPV 16 acquisition was highest (10.4%); 5.6% had acquired HPV 18 (26).
HPV infection also is common among men (63-67). Among heterosexual men in clinic-based studies, prevalence of genital HPV infection often is >20% and is highly dependent on the anatomic sites sampled and method of specimen collection (64,66,67).
# Clinical Sequelae of HPV Infection
Clinical sequelae of HPV infection include cervical cancer and cervical cancer precursors, other anogenital cancers and their precursor lesions, anogenital warts, and recurrent respiratory papillomatosis.
# Cervical Cancer and Precursor Lesions
HPV is a necessary but not sufficient cause of all cervical cancers. Approximately three fourths of all cervical cancers in the United States are squamous cell; the remaining are adenocarcinomas. HPV 16 and 18 account for approximately 68% of squamous cell cancers and 83% of adenocarcinomas (7).
Although HPV infection usually is asymptomatic, cervical infection can result in histologic changes that are classified as cervical intraepithelial neoplasia (CIN) grades 1, 2, or 3 on the basis of increasing degree of abnormality in the cervical epithelium, or as adenocarcinoma in situ (AIS). Spontaneous clearance or progression to cancer in the absence of treatment varies for CIN 1, CIN 2, and CIN 3. CIN 1 usually clears spontaneously (60% of cases) and rarely progresses to cancer (1%); a lower percentage of CIN 2 and 3 spontaneously clears (30%-40%), and a higher percentage progresses to cancer if not treated (>12%) (68). Cervical cancer screening with the Pap test can detect cytologic changes that reflect the underlying tissue changes. However, cytologic abnormalities detected by the Pap test can be ambiguous or equivocal. Abnormalities include atypical squamous cells of undetermined significance (ASC-US), atypical glandular cells, low- and high-grade squamous intraepithelial lesions (LSIL and HSIL), and AIS. HPV types 16 and 18 are more commonly found in association with higher-grade lesions. In one study, the prevalence of HPV 16 was 13.3% among ASC-US, 23.6% among LSIL, and 60.7% among HSIL Pap tests (69).
No routine reporting or registry exists for abnormal Pap tests or cervical cancer precursor lesions in the United States; however, data are available from managed-care organizations and administrative data sets (70,71). Each year, approximately 50 million women undergo Pap testing; approximately 3.5-5.0 million of these Pap tests will require some follow-up, including 2-3 million ASC-US, 1.25 million LSIL, and 300,000 HSIL Pap tests (72-74).
In the United States, cases of cervical cancer are routinely reported to cancer registries such as the National Cancer Institute's Surveillance, Epidemiology, and End Results program and the CDC-administered National Program of Cancer Registries, which together covered approximately 96% of the U.S. population in 2003. Cervical cancer incidence rates have decreased approximately 75% and death rates approximately 70% since the 1950s, largely because of the introduction of Pap testing (74,75). However, the decrease in incidence is observed primarily in squamous cell carcinomas; the incidence of adenocarcinomas has not changed appreciably (76). Adenocarcinomas are more difficult to detect because they are found in the endocervix; they account for approximately 20% of cervical cancer cases in the United States (77,78). In 2003, cervical cancer incidence in the United States was 8.1 per 100,000 women, with approximately 11,820 new cases reported (79). The median age at diagnosis for cervical cancer was 47 years.
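As a consistency check on the figures above (illustrative only, because the reported rate is age-adjusted rather than crude), the case count and rate imply a female population denominator of roughly 146 million:

```latex
\text{implied population} \approx \frac{\text{cases}}{\text{rate}}
= \frac{11{,}820}{8.1 / 100{,}000} \approx 1.46 \times 10^{8} \text{ women}
```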
Substantial differences exist in cervical cancer incidence and mortality by racial/ethnic group in the United States (78). The incidence for black women was approximately 1.5 times higher than that for white women (Figure 1). Incidence for Hispanic women also was higher than that for white women (78). Death rates for black women were twice those for white women. Although incidence for Asian women overall is similar to that for white women (78), certain Asian subgroups, especially Vietnamese and Korean women, have higher rates of cervical cancer (80).
Geographic differences exist in incidence and mortality, with notably higher incidence and mortality in Southern states (Figures 2 and 3) and Appalachia (78,81). Mortality rates are higher among specific groups, including Hispanic women living on the Texas-Mexico border; white women in Appalachia, rural New York State, and the northern part of the northeast United States; and American Indians in the Northern Plains and Alaska Native women (82).
# Vaginal and Vulvar Cancer and Precursor Lesions
HPV is associated with vaginal and vulvar cancer and vaginal and vulvar intraepithelial neoplasias; however, unlike cervical cancer, not all vaginal and vulvar cancers are associated with HPV. The natural history of vaginal and vulvar neoplasia is incompletely understood (83,84). No routine screening exists for vaginal or vulvar cancer in the United States.
The majority of vaginal cancers and vaginal intraepithelial neoplasias III (VaIN III) are positive for HPV (85); HPV 16 is the most common type (86,87). Approximately one third of women with VaIN or vaginal cancer had been treated previously for an anogenital cancer, usually cervical cancer (86). Vaginal cancer is rare, and incidence has decreased by 20% during the preceding two decades. In the United States in 2003, a total of 1,070 cases of invasive vaginal cancer (age-adjusted incidence rate: 0.7 per 100,000 females) and 391 deaths (death rate: 0.2 per 100,000 females) occurred (79). The median age for diagnosis of vaginal cancer was 69 years.
HPV is associated with approximately half of vulvar squamous cell cancers, the most common type of vulvar cancer.
HPV-associated vulvar cancer tends to occur in younger women and might be preceded by vulvar intraepithelial neoplasia (VIN). In a recent study, HPV types 16 or 18 were detected in 76% of VIN 2/3 and 42% of vulvar carcinoma samples (87). In 2003, a total of 3,507 cases of vulvar cancer (age-adjusted incidence rate: 2.2 per 100,000) and 775 deaths (death rate: 0.4 per 100,000 females) occurred in the United States (79). During 1973-2000, the incidence of in situ vulvar cancer increased by 400%, and the rate of invasive vulvar cancer increased by 20%. Changes in detection or reporting of in situ cancers might be responsible for the increased rate of in situ cancers (88).
# Anal Cancer
HPV is associated with approximately 90% of anal squamous cell cancers. Anal intraepithelial neoplasia (AIN) is recognized as a precursor of anal cancer, although the natural history of these lesions (i.e., rate of progression and regression) is less clear than for CIN (50). Anal cancer is more common in women (2,516 new cases in 2003) than in men (1,671 new cases) (79). During the preceding three decades, the incidence of anal cancer in the United States has increased, especially among men (89). Women at high risk for anal cancer include those with high-grade cervical lesions and cervical and vulvar cancers. Men who have sex with men and persons who have human immunodeficiency virus (HIV) infection also are at high risk for anal cancer (90). No national recommendations exist for cytologic screening to prevent anal cancers.
# Genital Warts
All anogenital warts (condyloma) are caused by HPV, and approximately 90% are associated with HPV types 6 and 11 (91). The average time to development of new anogenital warts after infection with HPV types 6 or 11 is approximately 2-3 months (92). However, not all persons infected with HPV types 6 or 11 acquire genital warts. Anogenital warts can be treated, although many warts (20%-30%) regress spontaneously. Recurrence of anogenital warts is common (approximately 30%), whether clearance occurs spontaneously or following treatment (93). Anogenital warts are not routinely reported in the United States. The prevalence of genital warts has been examined using health-care claims data (94). An estimated 1% of sexually active adolescents and adults in the United States have clinically apparent genital warts (29).
# Recurrent Respiratory Papillomatosis
Infection with low-risk HPV types, primarily types 6 or 11, rarely results in recurrent respiratory papillomatosis (RRP), a disease that is characterized by recurrent warts or papillomas in the upper respiratory tract, particularly the larynx. On the basis of age of onset, RRP is divided into juvenile-onset (JORRP) and adult-onset forms. JORRP, generally defined as onset before age 18 years, is better characterized than the adult form. JORRP is believed to result from vertical transmission of HPV from mother to baby during delivery, although the median age of diagnosis is 4 years. A multicenter registry of JORRP in the United States that collected data during 1999-2003 (95) demonstrated that although the clinical course of JORRP was variable, it is associated with extensive morbidity, requiring a median of 13 lifetime surgeries to remove warts and maintain an open airway. Estimates of the incidence of JORRP are relatively imprecise but range from 0.12 to 2.1 cases per 100,000 children aged <18 years in two cities in the United States (96). The prevalence, incidence, and disease course of the adult form of RRP are less clear.
# Treatment of HPV Infection
HPV infections are not treated; treatment is directed at the HPV-associated lesions. Treatment options for genital warts and cervical, vaginal, and vulvar cancer precursors include various local approaches that remove the lesion (e.g., cryotherapy, electrocautery, laser therapy, and surgical excision). Genital warts also are treated with topical pharmacologic agents (97). On the basis of limited existing data, available therapies for HPV-related lesions might reduce but probably do not eliminate infectiousness.
# TABLE 2. Cervical cancer screening guidelines - United States
# Prevention

# HPV Infection
Condom use might reduce the risk for HPV and HPV-associated diseases (e.g., genital warts and cervical cancer). A limited number of prospective studies have demonstrated a protective effect of condoms on acquisition of genital HPV. A study among newly sexually active college women demonstrated a 70% reduction in HPV infection when their partners used condoms consistently and correctly (98). Abstaining from sexual activity (i.e., refraining from any genital contact with another person) is the surest way to prevent genital HPV infection. For those who choose to be sexually active, a monogamous relationship with an uninfected partner is the strategy most likely to prevent future genital HPV infections.
Neither routine surveillance for HPV infection nor partner notification is useful for HPV prevention (97). Genital HPV infection is so prevalent that the majority of partners of persons found to have HPV infection are infected already; no prevention or treatment strategies have been recommended for partners.
# Cervical Cancer Screening
The majority of cervical cancer cases and deaths can be prevented through detection of precancerous changes in the cervix by cytology using the Pap test. Pap test screening includes a conventional Pap or a liquid-based cytology (99). CDC does not issue recommendations for cervical cancer screening, but certain professional groups have published recommendations (Table 2). The U.S. Preventive Services Task Force (USPSTF) concluded that evidence is insufficient to recommend for or against routine use of HPV tests (100).
An estimated 82% of women in the United States have had a Pap test during the preceding 3 years (103). Pap test rates for all age and ethnic populations have increased during the preceding two decades. However, certain groups continue to have lower screening rates. These include women with less than a high school education (77%); foreign-born women, especially women who have been in the United States for <10 years (61%); women without health insurance (62%); and certain racial/ethnic populations such as Hispanics (77%) and Asians (71%). Approximately half of women who had cervical cancer diagnosed in the United States had not had a Pap test in the 3 years before diagnosis (104).
# Quadrivalent Human Papillomavirus Vaccine

# Composition
The licensed vaccine is a quadrivalent HPV vaccine (GARDASIL™, produced by Merck and Co., Inc.). The L1 major capsid protein of HPV is the antigen used for HPV vaccination (105). The vaccine should be stored at 2°C-8°C (36°F-46°F) and not frozen.
# Dose and Administration
Quadrivalent HPV vaccine is administered intramuscularly as three separate 0.5-mL doses. The second dose should be administered 2 months after the first dose and the third dose 6 months after the first dose. The vaccine is available as a sterile suspension for injection in a single-dose vial or a prefilled syringe.
# Efficacy
One clinical study evaluated efficacy of a monovalent HPV 16 vaccine, and three studies evaluated efficacy of quadrivalent HPV vaccine: a phase II study of a monovalent HPV 16 vaccine (protocol 005) (106,107), a phase II study of quadrivalent HPV vaccine (protocol 007) (108-110), both among females aged 16-23 years, and two phase III studies of quadrivalent HPV vaccine (protocols 013 and 015) among females aged 16-23 and 16-26 years, respectively. All were randomized, double-blind, placebo-controlled studies.
The studies used prespecified endpoints to evaluate the impact of the quadrivalent vaccine in preventing HPV-related infection and disease. Phase II studies were primarily proof-of-concept studies that evaluated the efficacy of vaccine using a persistent infection endpoint. Phase III studies evaluated the efficacy of vaccine on clinical lesions. Predefined combinations of phase II and III studies were used to improve the precision of the efficacy findings. Various endpoints were assessed in the different studies, including vaccine type-related persistent HPV infection, CIN, VIN and VaIN, and genital warts. The primary endpoint and the basis for licensure was the combined incidence of HPV 16- and 18-related CIN 2/3 or AIS. These endpoints served as surrogate markers for cervical cancer. Studies using an invasive cervical cancer endpoint were not feasible because the standard of care is to screen for and treat CIN 2/3 and AIS lesions to prevent invasive cervical cancer. Furthermore, the time from acquisition of infection to the development of cancer can exceed 20 years. The two phase III efficacy studies of quadrivalent HPV vaccine (protocols 013 and 015) were international studies, which included persons from North America, South America, Europe, Australia, and Asia. Data on efficacy against CIN endpoints also are available from the phase II study (protocol 007) (108,110) and of the monovalent HPV-16 vaccine (protocol 005) (107).

The quadrivalent HPV vaccine has a high efficacy for prevention of vaccine HPV type (HPV 6-, 11-, 16-, and 18-) related persistent infection, vaccine type-related CIN, CIN 2/3, and external genital lesions (genital warts, VIN and VaIN) when analyses were restricted to participants who received all 3 doses of vaccine, had no protocol violations, and had no evidence of infection with the relevant vaccine HPV type (seronegative and HPV PCR-negative through 1 month after dose 3) (Tables 3 and 4) (111). No evidence exists of protection against disease caused by vaccine types for which participants were PCR positive at baseline. Participants infected with one or more vaccine HPV types before vaccination were protected against disease caused by the other vaccine HPV types.
No evidence exists that the vaccine protects against disease caused by nonvaccine HPV types.
# Persistent HPV Infection
Two phase II studies evaluated persistent infection, defined as a vaccine HPV type detected by PCR at two or more consecutive visits 4 months apart or at a single visit if it was the last visit of record. In the phase II quadrivalent vaccine study (protocol 007), 276 women received the 20/40/40/20 µg dose formulation of vaccine, and 275 received a placebo. The efficacy for prevention of persistent HPV 6, 11, 16, or 18 infection or disease at the end of study (approximately 2.5 years after dose 3) was 89.5% (95% confidence interval [CI] = 70.7%-97.3%). Of the vaccinated persons with persistent infection endpoints, three had HPV 16 detected at the last visit (without observed persistence), and one had persistent infection with HPV 18 (detected at both 12 and 18 months) but not at months 24, 30, or 36 (108). In the phase II study of a monovalent HPV 16 vaccine (protocol 005), the efficacy against persistent HPV 16 infection was 100% at a midpoint analysis (106) and 94.3% (CI = 87.8%-97.7%) at the end of the study (107). All seven cases in the vaccine group had HPV 16 DNA detected on the person's last study visit (without observed persistence).
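For readers who want the arithmetic behind efficacy figures such as these, the sketch below computes vaccine efficacy as one minus the ratio of incidence rates. It is an illustration only: the case counts and person-time are hypothetical (chosen to land near the 89.5% figure above), and the trials' actual analyses used more elaborate exact statistical methods.

```python
def vaccine_efficacy(cases_vax: int, person_years_vax: float,
                     cases_placebo: int, person_years_placebo: float) -> float:
    """VE = 1 - (incidence rate in vaccinees / incidence rate in placebo)."""
    rate_vax = cases_vax / person_years_vax
    rate_placebo = cases_placebo / person_years_placebo
    return 1.0 - rate_vax / rate_placebo

# Hypothetical inputs: 4 endpoint cases over 600 person-years in the
# vaccine arm versus 36 cases over 570 person-years in the placebo arm.
print(f"VE = {vaccine_efficacy(4, 600, 36, 570):.1%}")  # VE = 89.4%
```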
# Cervical Disease
Two phase III trials evaluated efficacy against cervical disease. Protocol 015 included 12,157 women aged 16-26 years. Participants had a Pap test, cervicovaginal sampling for HPV DNA testing, and detailed genital inspection at day 1 and months 7, 12, 24, 36, and 48, and were referred to colposcopy using a protocol-specified algorithm based on Pap test results. The primary study endpoint was incidence of HPV 16- or 18-related CIN 2, CIN 3, AIS, or cervical cancer. In a per-protocol analysis, the vaccine efficacy was 100% (CI = 80.9%-100%) for prevention of HPV 16- or 18-related CIN 2/3 or AIS (Table 3).
Protocol 013 included 5,442 females aged 16-23 years. Participants had a Pap test at day 1 and at months 7, 12, 18, 24, 30, 36, and 48 and were referred to colposcopy according to protocol. In addition, participants had detailed genital inspection, with biopsy of abnormalities, and cervicovaginal sampling for HPV DNA testing. The study had two primary efficacy endpoints: 1) external genital lesions related to HPV 6, 11, 16, or 18, including genital warts, VIN, VaIN, vulvar cancer, or vaginal cancer; and 2) cervical endpoints related to HPV 6, 11, 16, or 18, including CIN, AIS, or cervical cancer. In a per-protocol analysis, the vaccine efficacy was 100% (CI = 89.5%-100%) for prevention of any grade of CIN related to vaccine types (Table 3).
In a planned combined efficacy analysis including data from four clinical studies (protocols 005, 007, 013, and 015), protection against HPV 16- or 18-related CIN 2/3 or AIS was 100% (CI = 92.9%-100%) (111). In a planned combined analysis including data from three studies (protocols 007, 013, and 015) of protection against any CIN attributed to HPV 6, 11, 16, or 18, the efficacy was 95.2% (CI = 87.2%-98.7%). Four cases of CIN occurred in the vaccine group; all were CIN 1.
# External Genital Lesions
Three studies (protocols 007, 013, and 015) provide data on efficacy against external genital lesions. In a combined analysis, the efficacy of quadrivalent HPV vaccine against HPV 6-, 11-, 16-, or 18-related external genital warts was 98.9% (CI = 93.7%-100%) in a per-protocol analysis (Table 3). Efficacy against HPV 16- or 18-related VIN 2/3 or VaIN 2/3 was 100% (CI = 55.5%-100%) (Table 4).
# Efficacy in Females with Current or Previous Vaccine HPV-Type Infection
Because participants were enrolled into the clinical trials even if they were HPV DNA or antibody positive, evaluating efficacy in females infected with a vaccine HPV type at the time of vaccination was possible. Overall, 27% of the study population had evidence of previous exposure to or infection with a vaccine HPV type. Among these participants, 74% were positive to only one vaccine HPV type and did not have evidence of infection with the other three types. Among participants positive to one or more vaccine HPV types, the vaccine had high efficacy for prevention of disease caused by the remaining vaccine HPV types (112).
The vaccine's impact on the course of infections present at the time of vaccination was evaluated using data from four clinical studies (protocols 005, 007, 013, and 015). Three different groups were analyzed on the basis of antibody and HPV DNA detection at the time of vaccination (Table 5). Among persons seropositive to the relevant HPV type but HPV DNA negative, efficacy against CIN 2/3 or AIS caused by that type was 100% (CI = -63.6%-100%). Among women who were HPV DNA positive but seronegative, efficacy was 31.2% (CI = -4.5%-54.9%). Among women who were both seropositive and HPV DNA positive, efficacy against CIN 2/3 caused by that type was -25.8% (CI = -76.4%-10.1%). Because of the small numbers and wide confidence intervals around efficacy estimates, limited conclusions can be drawn from these estimates.
# Efficacy in the Intent-to-Treat Population
Analyses among all women who received at least 1 dose of vaccine and had any follow-up 1 month after the first dose, regardless of initial PCR or serology status, provide information on efficacy in the intent-to-treat population. The lower efficacy in these analyses compared with the per-protocol population indicates that certain women were infected with vaccine types before vaccination. A 12.2% (CI = -3.2%-25.3%) reduction occurred in any CIN 2/3 in the vaccinated group compared with the placebo group at a median follow-up time of 1.9 years (111).
# Duration of Protection
A subset of participants (n = 241) in the phase II quadrivalent HPV vaccine study (protocol 007) is being followed for 60 months after dose one. In a combined analysis of all participants through year 3 and a subset through 60 months, the efficacy against vaccine HPV type persistent infection or disease was 95.8% (CI = 83.8%-99.5%), and efficacy against vaccine type-related CIN or external genital lesions was 100% (CI = 12.4%-100%) (110).
Follow-up studies are planned by Merck and Co., Inc. to determine duration of protection among women enrolled in the phase III studies through 3 years after dose 3. Additional data on duration of protection will be available from follow-up of approximately 5,500 women enrolled in one of the phase III quadrivalent HPV vaccine studies in the Nordic countries. These women will be followed for at least 14 years; serologic testing will be conducted 5 and 10 years after vaccination; and Pap testing results will be linked to data from vaccine registries to monitor outcomes.
# Immunogenicity

# Immunogenicity in Persons Aged 9-26 Years
The immunogenicity of the quadrivalent HPV vaccine has been measured by detection of IgG antibody to the HPV L1 protein by a type-specific competitive Luminex-based immunoassay (cLIA) in the majority of the studies (24,25). This assay measures antibodies against neutralizing epitopes for each HPV type. The units (milliMerck units) are internally consistent but cannot be directly compared across HPV types or with results from other HPV antibody assays. The height of the antibody titers (geometric mean titers [GMTs]) for the different types cannot be directly compared.
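Because titers are summarized as geometric rather than arithmetic means, a GMT is the exponential of the mean log titer. A minimal sketch follows; the titer values are hypothetical and serve only to show the calculation.

```python
import math

def geometric_mean_titer(titers: list[float]) -> float:
    """Geometric mean: exponentiate the arithmetic mean of the log titers."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

# Hypothetical cLIA results in milliMerck units per mL:
print(round(geometric_mean_titer([200.0, 800.0, 3200.0]), 1))
# 800.0 (the arithmetic mean of the same values would be 1400)
```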
Data on immunogenicity are available from phase II (109) and phase III double-blind, randomized, placebo-controlled trials conducted among females aged 16-26 years and from immunogenicity studies conducted among males and females aged 9-15 years (113). In all studies conducted to date, >99% of study participants had an antibody response to all four HPV types in the vaccine 1 month after completing the 3-dose series (109,113). High seropositivity rates were observed after vaccination regardless of sex, ethnicity, country of origin, smoking status, or body mass index.
Vaccination produced antibody titers higher than those after natural infection. Among females aged 16-23 years, anti-HPV 6, 11, 16, and 18 GMTs 1 month after the third dose of vaccine were higher than those observed in participants who were HPV seropositive and PCR negative at enrollment in the placebo group (109).
Vaccination of females who were seropositive to a specific vaccine HPV type at enrollment resulted in higher antibody titers to that type, particularly after the first dose, compared with those seronegative at enrollment (109), suggesting a boosting of naturally acquired antibody by vaccination. In studies among females aged 16-26 years, the interval between the first and second dose of vaccine ranged from 6 to 12 weeks. Variation in the interval did not diminish the GMTs postvaccination. Likewise, little impact was observed of intervals between the second and third dose ranging from 12 to 23 weeks.
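A small sketch of checking whether an administered series falls within the dose intervals actually studied in these trials (6-12 weeks between doses 1 and 2; 12-23 weeks between doses 2 and 3). The function is ours for illustration, and the bounds are the study-observed ranges quoted above, not label requirements.

```python
from datetime import date

def intervals_within_studied_range(dose1: date, dose2: date, dose3: date) -> bool:
    """True if dose spacing matches the ranges observed in the trials:
    6-12 weeks between doses 1 and 2, 12-23 weeks between doses 2 and 3."""
    weeks_1_to_2 = (dose2 - dose1).days / 7
    weeks_2_to_3 = (dose3 - dose2).days / 7
    return 6 <= weeks_1_to_2 <= 12 and 12 <= weeks_2_to_3 <= 23

# 9 weeks between doses 1 and 2; 17 weeks between doses 2 and 3 -> True
print(intervals_within_studied_range(date(2006, 1, 2),
                                     date(2006, 3, 6),
                                     date(2006, 7, 3)))
```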
A serologic correlate of immunity has not been identified, and no minimal titer has been determined to be protective. The high efficacy found in the clinical trials to date has precluded identification of a minimum protective antibody titer. Further follow-up of vaccinated cohorts might allow determination of serologic correlates of immunity.
# Immunogenicity Bridge to Efficacy Among Females
Immunogenicity studies provide data allowing comparison of seropositivity and GMTs among females aged 9-15 years with those among females aged 16-26 years who were in the efficacy studies (Table 6) (111). Seropositivity rates in all age groups were approximately 99% for HPV 6, 11, 16, and 18. Anti-HPV responses 1 month post dose 3 among females aged 9-15 years were noninferior to those among females aged 16-26 years. At month 18, anti-HPV GMTs in females aged 9-15 years remained two- to threefold higher than those observed at the same time point in females aged 16-26 years in the vaccine efficacy trials.
# Duration of Antibody
The longest follow-up to date is 60 months, in the phase II trial of quadrivalent HPV vaccine (110). Antibody titers decline over time after the third dose but plateau by 24 months. At 36 months, anti-HPV 16 GMTs among vaccinees remained higher than those in participants in the placebo group who were seropositive at baseline, and anti-HPV 6, 11, and 18 titers were similar to those seropositive in the placebo group (109). At 36 months, seropositivity rates were 94%, 96%, 100%, and 76% to HPV 6, 11, 16, and 18, respectively. No evidence exists of waning efficacy among participants who become seronegative during follow-up (110). Data from a revaccination study in which vaccinated women were given a challenge dose 5 years after enrollment into the study demonstrated an augmented rise in antibody titer consistent with immune memory (114).
# Concomitant Administration of HPV Vaccine with Other Vaccines
GMTs after concomitant administration of quadrivalent HPV vaccine and hepatitis B vaccine at all 3 doses were noninferior to GMTs after administration at separate visits. Studies are planned to evaluate concomitant administration with meningococcal conjugate vaccine and with the adolescent/adult formulation of tetanus, diphtheria and acellular pertussis (Tdap) vaccine.
# Safety and Adverse Events
The quadrivalent HPV vaccine was evaluated for injection-site and systemic adverse events, new medical conditions reported during the follow-up period, and safety during pregnancy and lactation. Safety data on quadrivalent HPV vaccine are available from seven clinical trials and include 11,778 persons aged 9-26 years who received quadrivalent vaccine and 9,686 who received placebo. Detailed data were collected using vaccination report cards for 14 days following each injection of study vaccine on a subset of participants aged 9-23 years. The population with detailed safety data included 5,088 females who received quadrivalent HPV vaccine and 3,790 who received placebo (Tables 7-9) (111).
# Local Adverse Events
In the study population with detailed safety data, a larger proportion of persons reported injection-site adverse events in the quadrivalent HPV vaccine group than in the placebo group (Table 7). A larger proportion of vaccine recipients also reported a temperature of >100°F (>38°C) after dose one, two, or three (Table 9).
# TABLE 7. Injection-site adverse events among female participants aged 9-23 years in the detailed safety data, days 1-5 after any vaccination with quadrivalent human papillomavirus (HPV) vaccine
# Serious Adverse Events in All Safety Studies
Vaccine-related serious adverse events occurred in <0.1% of persons. The proportions of persons reporting a serious adverse event were similar in the vaccine and placebo groups, as were the types of serious adverse events reported. Seven persons had events that were determined to be possibly, probably, or definitely related to the vaccine or placebo: five among quadrivalent HPV vaccine recipients and two among placebo recipients. The five events in the quadrivalent HPV vaccine group were bronchospasm, gastroenteritis, headache/hypertension, vaginal hemorrhage, and injection-site pain/movement impairment.
In the overall safety evaluation, 10 persons in the group that received quadrivalent HPV vaccine and seven persons in the placebo group died during the course of the trials. None of the deaths was considered to be vaccine related. Two deaths in the vaccine group and one death in the placebo group occurred within 15 days following vaccination. Seven deaths were attributed to motor-vehicle accidents (four in the vaccine group and three in the placebo group), three to intentional overdose (nonstudy medications) or suicide (one in the vaccine group and two in the placebo group), two to pulmonary embolus or deep venous thrombosis (one in each group), two to sepsis, one each to cancer and arrhythmia (both in the vaccine group), and one to asphyxia (placebo group).
# New Medical History
Information was collected on new medical conditions that occurred during up to 4 years of follow-up. Overall, nine (0.08%) participants in the vaccine group and three (0.03%) participants in the placebo group had conditions potentially indicative of autoimmune disorders, including various arthritis diagnoses (nine in the vaccine group and two in the placebo group) and systemic lupus erythematosus (none in the vaccine group and one in the placebo group) (111). No statistically significant differences were found between vaccine and placebo recipients in the incidence of these conditions.
# Vaccination During Pregnancy
The quadrivalent clinical trial protocols excluded women who were pregnant. Beta human chorionic gonadotropin (pregnancy) testing was conducted before administration of each vaccine dose, and if women were found to be pregnant, vaccination was delayed until completion of pregnancy. Nevertheless, among clinical trial participants, 1,244 pregnancies occurred in the vaccine group and 1,272 in the placebo group (Table 10) (111). Among pregnancies with known outcomes (996 and 1,018, respectively), the percentage with spontaneous loss was similar in both groups (25%). A total of 15 and 16 congenital abnormalities occurred in the vaccine and placebo groups, respectively, including five in the vaccine group and none in the placebo group among infants born to women who received vaccine or placebo within 30 days of the estimated onset of pregnancy. The five congenital abnormalities were determined by an expert panel to be unrelated to vaccination (one pyloric stenosis with ankyloglossia, one congenital hydronephrosis, one congenital megacolon, one club foot, and one hip dysplasia). Rates of congenital abnormalities in the study were consistent with those in surveillance registries. Quadrivalent HPV vaccine has been classified as Pregnancy Category B on the basis of animal studies in rats showing no evidence of impaired fertility or harm to the fetus.
# Vaccination During Lactation
In the clinical trials, 995 women in the evaluated population (500 and 495 in the quadrivalent HPV vaccine and placebo groups, respectively) were breastfeeding during the vaccination period. A total of 17 (3.4%) and nine (1.8%) infants of breastfeeding women who received quadrivalent HPV vaccine or placebo, respectively, experienced a serious adverse event. Of the 23 events among the 17 infants of women who received vaccine, 12 were respiratory infections, five were gastroenteritis or diarrhea, and the remainder were various other single events. None was considered vaccine related.
# Impact of Vaccination and Cost Effectiveness

# Economic Burden of HPV
The prevention and treatment of anogenital warts and cervical HPV-related disease imposes an estimated burden of $4 billion or more (2004 dollars) in direct costs in the United States each year (70,71,115). Of this, approximately $200 million is attributable to the management of genital warts; approximately $300-$400 million to invasive cervical cancer; and the remainder to routine cervical cancer screening, the follow-up of abnormal Pap tests, and pre-invasive cervical cancer (71,115). The estimated economic burden associated with HPV would be more substantial if the costs of other HPV-related diseases (e.g., vaginal and anal cancer and RRP) were included.
# Expected Impact of Vaccination
Various models have been developed to evaluate the impact of HPV vaccine (116). Markov models have suggested that vaccination of an entire cohort of females aged 12 years could reduce the lifetime risk for cervical cancer in that cohort by 20%-66% (117,118), depending on the efficacy of the vaccine and the duration of vaccine protection. Models also project decreases in Pap test abnormalities and cervical cancer precursor lesions as a result of vaccination. For example, the incidence of low-grade Pap test abnormalities would decrease by 21% over the life of a vaccinated cohort of females aged 12 years (117). Models that incorporate HPV transmission dynamics suggest an even greater potential impact of HPV vaccination on cervical cancer and cervical cancer precursors (119-121). Decreases in cervical cancer incidence and precursor lesions would occur more quickly with catch-up vaccination, according to models that evaluated catch-up for females aged 12-24 years (121,122).
# Cost Effectiveness of HPV Vaccine
Since 2003, four studies have estimated the potential cost effectiveness of HPV vaccination in the context of cervical cancer screening practices in the United States (117-119,121). Two of these studies applied Markov models to estimate the cost per quality-adjusted life year (QALY), focusing on the costs and impact of HPV vaccination for a given cohort, without considering the effect of vaccination on HPV transmission in the population (herd immunity). The other studies applied dynamic transmission models to incorporate the benefits of herd immunity in estimating the cost effectiveness of HPV vaccination.
The two studies based on Markov models of the natural history of HPV infection examined the cost effectiveness of vaccinating females aged 12 years. One study assumed 100% vaccine coverage, 90% vaccine efficacy against HPV 16/18, lifetime duration of protection, and a cost of $377 per vaccine series (118). Under these assumptions, an estimated 58% reduction was achieved in the lifetime risk for cervical cancer for the vaccinated cohort at a cost of $24,300 (2002 dollars) per QALY compared with no vaccination. A second study assumed 70% vaccine coverage, 75% efficacy against all high-risk HPV types, 10 years' duration of protection plus 10 additional years of protection with a booster, and a cost of $300 per vaccine series plus $100 per booster (117). Under these assumptions, an estimated 20% reduction in cervical cancer incidence was achieved in the vaccinated cohort at a cost of $22,800 per QALY (2001 dollars) compared with no vaccination.
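To make the mechanics of such cohort models concrete, the following Python sketch runs a deliberately tiny Markov model (five states, yearly cycles) and compares lifetime cervical cancer risk with and without vaccination. Every transition probability here is an invented placeholder; the published models are far more detailed (type-specific infection, screening, and multiple precursor grades), so this illustrates the method only.

```python
import numpy as np

# States: 0=well, 1=HPV infected, 2=precancer (CIN), 3=cervical cancer, 4=dead
def lifetime_cancer_risk(vaccine_effect=0.0, years=73):
    """Yearly-cycle Markov cohort from age 12; returns cumulative
    lifetime probability of developing cervical cancer."""
    # Hypothetical annual transition probabilities, for illustration only.
    p_infect = 0.05 * (1.0 - vaccine_effect)  # well -> infected
    p_clear = 0.70        # infected -> well
    p_cin = 0.03          # infected -> precancer
    p_regress = 0.30      # precancer -> well
    p_cancer = 0.02       # precancer -> cancer
    p_die_other = 0.005   # background mortality from any alive state
    p_die_cancer = 0.10   # cancer-specific mortality

    state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # whole cohort starts well
    cum_cancer = 0.0
    for _ in range(years):
        well, inf, cin, ca, dead = state
        new_cancer = cin * p_cancer
        cum_cancer += new_cancer
        state = np.array([
            well * (1 - p_infect - p_die_other) + inf * p_clear + cin * p_regress,
            well * p_infect + inf * (1 - p_clear - p_cin - p_die_other),
            inf * p_cin + cin * (1 - p_regress - p_cancer - p_die_other),
            new_cancer + ca * (1 - p_die_cancer - p_die_other),
            dead + (well + inf + cin) * p_die_other + ca * (p_die_cancer + p_die_other),
        ])
    return cum_cancer

base = lifetime_cancer_risk()
vacc = lifetime_cancer_risk(vaccine_effect=0.9 * 0.7)  # 90% efficacy, 70% coverage
print(f"Relative reduction in lifetime risk: {1 - vacc / base:.0%}")
```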
The two cost effectiveness analyses based on dynamic transmission models examined the cost effectiveness of vaccinating females. One study assumed vaccination at age 12 years with 70% vaccine coverage. The vaccine cost $300 per series plus $100 per booster and targeted HPV 16/18 with 90% efficacy and 10-year duration of protection plus 10 additional years with a booster (119). Under these assumptions, the lifetime risk for cervical cancer among vaccinated females would be reduced by 62% at a cost per QALY of $14,600 (2001 dollars) compared with no vaccination. A second study assumed vaccination at or before age 12 years with 70% vaccine coverage (121). The vaccine cost $360 per series and targeted HPV types 6, 11, 16, and 18, with 90% efficacy against infection and 100% efficacy against HPV-related diseases attributable to these types, with lifelong duration of protection. Under these assumptions, over the long term, a reduction of approximately 75% was achieved in the cervical cancer incidence rate attributable to HPV 16 and 18, at a cost of $3,000 per QALY (2005 dollars) compared with no vaccination. This model also suggested that a catch-up program for females aged 12-24 years would cost $4,700 per QALY compared with vaccination of females aged 12 years only.
The cost per QALY gained by routine vaccination of females at age 12 years in the published studies ranged from $3,000 to $24,300. The results summarized are calculated using base-case scenarios, which vary across studies. In sensitivity analyses, when base-case assumptions were modified, the estimated cost effectiveness ratios changed substantially. For example, factors such as duration of vaccine-induced protection, duration of natural immunity, frequency of cervical cancer screening, vaccine coverage, and vaccine cost affected the estimated cost effectiveness of HPV vaccination (116-119,121,123).
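The ratios above are incremental cost-effectiveness ratios: the extra cost of vaccination divided by the QALYs it gains, relative to no vaccination. The sketch below shows the arithmetic and why vaccine price moves the ratio so strongly; the inputs are hypothetical and do not reproduce any of the published models.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra dollars per QALY gained."""
    return delta_cost / delta_qaly

# Hypothetical per-person values, chosen only to show how vaccine price
# drives the ratio; they do not reproduce any published model.
averted_treatment_cost = 50.0   # discounted lifetime savings per vaccinee
qaly_gain = 0.02                # discounted lifetime QALYs gained per vaccinee
for series_price in (100.0, 300.0, 500.0):
    ratio = icer(series_price - averted_treatment_cost, qaly_gain)
    print(f"series price ${series_price:.0f}: ${ratio:,.0f} per QALY")
```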
# Summary of Rationale for Quadrivalent HPV Vaccine Recommendations
The availability of a quadrivalent HPV vaccine offers an opportunity to decrease the burden of HPV infection and its sequelae, including cervical cancer precursors, cervical cancer, other anogenital cancers, and genital warts in the United States. Quadrivalent HPV vaccine is licensed for use among females aged 9-26 years. In this age group, clinical trials indicate that the vaccine is safe and immunogenic. Trials among females aged 16-26 years indicated the vaccine to be effective against HPV 6-, 11-, 16-, and 18-related cervical, vaginal, and vulvar cancer precursor and dysplastic lesions and genital warts. HPV 16 and 18 cause approximately 70% of cervical cancers; HPV 6 and 11 cause approximately 90% of genital warts. Because HPV is sexually transmitted and often acquired soon after onset of sexual activity, vaccination should ideally occur before sexual debut. The recommended age for vaccination is 11-12 years; vaccine can be administered to females as young as age 9 years. At the beginning of a vaccination program, there will be females aged >12 years who did not have the opportunity to receive vaccine at age 11-12 years. Catch-up vaccination is recommended for females aged 13-26 years who have not yet been vaccinated.
The recommendation for routine vaccination of females aged 11-12 years is based on several considerations, including studies suggesting that quadrivalent HPV vaccine among adolescents will be safe and effective; high antibody titers achieved after vaccination at age 11-12 years; data on HPV epidemiology and age of sexual debut in the United States; and the high probability of HPV acquisition within several years of sexual debut. Ideally, HPV vaccine should be administered before sexual debut, and duration of protection should extend for many years, providing protection when exposure through sexual activity might occur. The vaccine has been demonstrated to provide protection for at least 5 years without evidence of waning protection, and long-term follow-up studies are under way to determine duration of protection. The recommendation also took into account cost effectiveness evaluations and the young adolescent health-care visit at age 11-12 years that several professional organizations have established, at which other vaccines are also recommended.
Although routine vaccination is recommended at age 11-12 years, the majority of females aged 13-26 years also can benefit from vaccination.
Females not yet sexually active can be expected to receive the full benefit of vaccination. Although sexually active females in this age group might have been infected with one or more vaccine HPV types, type-specific prevalence studies in the United States suggest that only a small percentage of sexually active females have been infected with all four HPV vaccine types. These data, available from North American females aged 16-24 years who participated in the quadrivalent vaccine trials, are from women who were more likely to have ever had sex than similarly aged females in the general U.S. population. Among sexually active females, the median number of lifetime sex partners (two) was similar in trial participants and females in the general U.S. population. The vaccine does not appear to protect against persistent infection, cervical cancer precursor lesions, or genital warts caused by an HPV type with which females are infected at the time of vaccination. However, females already infected with one or more vaccine HPV types before vaccination would be protected against disease caused by the other vaccine HPV types. Therefore, although overall vaccine effectiveness would be lower when administered to a population of sexually active females, and would decrease with older age and with the greater likelihood of HPV exposure that accompanies an increasing number of sex partners, the majority of females in this age group will derive at least partial benefit from vaccination.
# Recommendations for Use of HPV Vaccine
# Recommendations for Routine Use and Catch-Up
# Routine Vaccination of Females Aged 11-12 Years
ACIP recommends routine vaccination of females aged 11-12 years with 3 doses of quadrivalent HPV vaccine. The vaccination series can be started as young as age 9 years.
# Catch-Up Vaccination of Females Aged 13-26 Years
Vaccination also is recommended for females aged 13-26 years who have not been previously vaccinated or who have not completed the full series. Ideally, vaccine should be administered before potential exposure to HPV through sexual contact; however, females who might have already been exposed to HPV should be vaccinated. Sexually active females who have not been infected with any of the HPV vaccine types would receive the full benefit from vaccination. Vaccination would provide less benefit to females who have already been infected with one or more of the four vaccine HPV types. However, it is not possible for a clinician to assess the extent to which sexually active persons would benefit from vaccination, and the risk for HPV infection might continue as long as persons are sexually active. Pap testing and screening for HPV DNA or HPV antibody are not needed before vaccination at any age.
# Dosage and Administration
The vaccine should be shaken well before administration. The dose of quadrivalent HPV vaccine is 0.5 mL, administered intramuscularly (IM), preferably in the deltoid muscle.
# Recommended Schedule
Quadrivalent HPV vaccine is administered in a 3-dose schedule. The second and third doses should be administered 2 and 6 months after the first dose.
# Minimum Dosing Intervals and Management of Persons Who Were Incorrectly Vaccinated
The minimum interval between the first and second doses of vaccine is 4 weeks. The minimum recommended interval between the second and third doses of vaccine is 12 weeks. Inadequate doses of quadrivalent HPV vaccine or vaccine doses received after a shorter-than-recommended dosing interval should be readministered.
# Interrupted Vaccine Schedules
If the quadrivalent HPV vaccine schedule is interrupted, the vaccine series does not need to be restarted. If the series is interrupted after the first dose, the second dose should be administered as soon as possible, and the second and third doses should be separated by an interval of at least 12 weeks. If only the third dose is delayed, it should be administered as soon as possible.
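As a programmatic illustration, the following Python sketch encodes the two minimum intervals and the rule that an interrupted series is never restarted while a too-early dose does not count. It is a simplified reading of these recommendations for illustration, not clinical decision-support software.

```python
from datetime import date, timedelta

MIN_D1_TO_D2 = timedelta(weeks=4)
MIN_D2_TO_D3 = timedelta(weeks=12)

def valid_doses(dose_dates):
    """Return the dose dates that count toward the 3-dose series.

    Doses given before the minimum interval do not count and should be
    readministered; a long gap never restarts the series."""
    counted = []
    for d in sorted(dose_dates):
        if not counted:
            counted.append(d)
        elif len(counted) == 1 and d - counted[0] >= MIN_D1_TO_D2:
            counted.append(d)
        elif len(counted) == 2 and d - counted[1] >= MIN_D2_TO_D3:
            counted.append(d)
        # otherwise the dose was too early: ignore it (readminister later)
        if len(counted) == 3:
            break
    return counted

# Example: a second dose given 2 weeks after the first does not count.
doses = [date(2007, 1, 1), date(2007, 1, 15), date(2007, 3, 1), date(2007, 6, 1)]
print(len(valid_doses(doses)))  # -> 3 (Jan 1, Mar 1, Jun 1)
```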
# Simultaneous Administration with Other Vaccines
Although no data exist on administration of quadrivalent HPV vaccine with vaccines other than hepatitis B vaccine, quadrivalent HPV vaccine is not a live vaccine and has no components that adversely affect the safety or efficacy of other vaccinations. Quadrivalent HPV vaccine can be administered at the same visit as other age-appropriate vaccines, such as the Tdap and quadrivalent meningococcal conjugate (MCV4) vaccines. Administering all indicated vaccines together at a single visit increases the likelihood that adolescents and young adults will receive each of the vaccines on schedule. Each vaccine should be administered using a separate syringe at a different anatomic site.
# Cervical Cancer Screening Among Vaccinated Females
Cervical cancer screening recommendations have not changed for females who receive HPV vaccine (Table 2). HPV types in the vaccine are responsible for approximately 70% of cervical cancers; females who are vaccinated could subsequently be infected with a carcinogenic HPV type against which the quadrivalent vaccine does not provide protection. Furthermore, those who were sexually active before vaccination could have been infected with a vaccine-type HPV before vaccination. Health-care providers administering quadrivalent HPV vaccine should educate women about the importance of cervical cancer screening.
# Groups for Which Vaccine is Not Licensed

# Vaccination of Females Aged >26 Years
Quadrivalent HPV vaccine is not licensed for use among females aged >26 years. Studies are ongoing among females aged >26 years. No studies are under way among children aged <9 years.
# Vaccination of Males
Quadrivalent HPV vaccine is not licensed for use among males. Although data on immunogenicity and safety are available for males aged 9-15 years, no data exist on efficacy in males at any age. Efficacy studies in males are under way.
# Special Situations Among Females Aged 9-26 Years
# Equivocal or Abnormal Pap Test or Known HPV Infection
Females who have an equivocal or abnormal Pap test could be infected with any of approximately 40 high-risk or low-risk genital HPV types. Such females are unlikely to be infected with all four HPV vaccine types, and they might not be infected with any HPV vaccine type. Vaccination would provide protection against infection with HPV vaccine types not already acquired. With increasing severity of Pap test findings, the likelihood of infection with HPV 16 or 18 increases, and the benefit of vaccination would decrease. Women should be advised that results from clinical trials do not indicate that the vaccine will have any therapeutic effect on existing HPV infection or cervical lesions.
Females who have a positive HC2 High-Risk test conducted in conjunction with a Pap test could have infection with any of 13 high-risk types. This assay does not identify specific HPV types, and testing for specific HPV types is not conducted routinely in clinical practice. Women with a positive HC2 High-Risk test might not have been infected with any of the four HPV vaccine types. Vaccination would provide protection against infection with HPV vaccine types not already acquired. However, women should be advised that results from clinical trials do not indicate that the vaccine will have any therapeutic effect on existing HPV infection or cervical lesions.
# Genital Warts
A history of genital warts or clinically evident genital warts indicates infection with HPV, most often type 6 or 11. However, these females might not have infection with both HPV 6 and 11 or with HPV 16 or 18. Vaccination would provide protection against infection with HPV vaccine types not already acquired. However, females should be advised that results from clinical trials do not indicate that the vaccine will have any therapeutic effect on existing HPV infection or genital warts.
# Lactating Women
Lactating women can receive HPV vaccine.
# Immunocompromised Persons
Because quadrivalent HPV vaccine is a noninfectious vaccine, it can be administered to females who are immunosuppressed as a result of disease or medications. However, the immune response and vaccine efficacy might be less than in persons who are immunocompetent.
# Vaccination During Pregnancy
Quadrivalent HPV vaccine is not recommended for use in pregnancy. The vaccine has not been causally associated with adverse outcomes of pregnancy or adverse events in the developing fetus. However, data on vaccination during pregnancy are limited. Until additional information is available, initiation of the vaccine series should be delayed until after completion of the pregnancy. If a woman is found to be pregnant after initiating the vaccination series, the remainder of the 3-dose regimen should be delayed until after completion of the pregnancy. If a vaccine dose has been administered during pregnancy, no intervention is needed. A vaccine in pregnancy registry has been established; patients and health-care providers should report any exposure to quadrivalent HPV vaccine during pregnancy (telephone: 800-986-8999).
# Precautions and Contraindications
# Acute Illnesses
Quadrivalent HPV vaccine can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infections, with or without fever). Vaccination of persons with moderate or severe acute illnesses should be deferred until after the patient improves (124).
# Hypersensitivity or Allergy to Vaccine Components
Quadrivalent HPV vaccine is contraindicated for persons with a history of immediate hypersensitivity to yeast or to any vaccine component. Data from passive surveillance in the Vaccine Adverse Event Reporting System (VAERS) indicate that recombinant yeast-derived vaccines pose a minimal risk for anaphylactic reactions in persons with a history of allergic reactions to Saccharomyces cerevisiae (baker's yeast) (125).
# Preventing Syncope After Vaccination
Syncope (i.e., vasovagal or vasodepressor reaction) can occur after vaccination, most commonly among adolescents and young adults (124). Among reports to VAERS of syncope after any vaccine during 1990-2004, a total of 35% of episodes occurred among persons aged 10-18 years. Through January 2007, the second most common report to VAERS following receipt of HPV vaccine was syncope (CDC, unpublished data, 2007). Vaccine providers should consider observing patients for 15 minutes after they receive HPV vaccine.
# Reporting of Adverse Events After Vaccination
As with any newly licensed vaccine, surveillance for rare adverse events associated with administration of quadrivalent HPV vaccine is important for assessing its safety in widespread use. All clinically significant adverse events should be reported to VAERS, even if a causal relation to vaccination is not certain. VAERS reporting forms and information are available electronically at http://www.vaers.hhs.gov or by telephone (800-822-7967). Web-based reporting is available, and providers are encouraged to report electronically at /VaersDataEntryintro.htm to promote better timeliness and quality of safety data.
Safety surveillance for quadrivalent HPV vaccine, Tdap, MCV4, and other adolescent vaccines is being conducted on an ongoing basis in cooperation with FDA. A vaccine in pregnancy registry has been established by Merck and Co., Inc.; patients and health-care providers should report any exposure to quadrivalent HPV vaccine during pregnancy (telephone: 800-986-8999).
# Areas for Research and Program Activity Related to HPV Vaccine
With licensure and introduction of quadrivalent HPV vaccine for females, monitoring the impact of vaccination and vaccine safety will be needed. Research in several areas is ongoing, and research in other areas is needed.
Duration of Protection from the Quadrivalent Vaccine: Long-term data on duration of antibody response and clinical protection will be obtained through studies conducted in the Nordic countries through the Nordic cancer registries and through other studies in the United States (111). Follow-up of vaccine trial participants aged 9-15 years will continue for up to 10 years after dose 3. This will include evaluation of antibody titers and, in participants who reach their 16th birthday, evaluation of vaccine effectiveness.
Surveillance for HPV-Related Outcomes: Although it will take years to realize the impact of vaccination on cervical cancer, decreases in cervical cancer precursors and genital warts should be realized sooner. Studies are planned to monitor these lesions and other HPV-related outcomes in the United States.
Virologic Surveillance: Prevalence and incidence of HPV types in the vaccine are expected to decrease as a result of vaccination. Studies are planned to monitor HPV types in various populations and specimens.
Safety of Vaccination: Postlicensure studies to evaluate general safety and pregnancy outcomes will be conducted by the manufacturer and independently by CDC. Monitoring will be accomplished through VAERS and CDC's Vaccine Safety Datalink, which will include surveillance of cohorts of recently vaccinated females and evaluation of outcomes of pregnancy among those pregnant at the time of vaccination. The manufacturer will be monitoring long-term safety as part of the Nordic Cancer Registry Program (111).
Simultaneous Vaccination: Safety and immunogenicity studies of simultaneous administration of quadrivalent HPV vaccine with Tdap and MCV4 are ongoing.
Efficacy of HPV Vaccine in Men: Studies are needed to define the efficacy of HPV vaccination in preventing genital warts and anogenital intraepithelial neoplasia in men. Studies of the effectiveness of HPV vaccination of men in preventing transmission to both female and male sex partners are also needed.
Cervical Cancer Screening: Cervical cancer screening recommendations have not changed. Evaluation of the impact of HPV vaccination on cervical cancer screening provider practices and on women's screening behavior is needed, as are further economic analyses.
Vaccine Delivery and Implementation: Administration of 3 doses of vaccine to adolescents will be challenging. Programmatic research is needed to determine optimal strategies to reach this age group.
# Vaccines for Children Program
The Vaccines for Children (VFC) program supplies vaccines to all states, territories, and the District of Columbia for use by participating providers. These vaccines are to be administered to eligible children without cost to the patients or the provider. All routine childhood vaccines recommended by ACIP are available through this program. The program saves patients and providers out-of-pocket expenses for vaccine purchases and provides cost savings to states through CDC vaccine contracts. The program results in lower vaccine prices and ensures that all states pay the same contract prices. Detailed information about the VFC program is available at .
"id": "bbbf9f4cb3b0d279858cfed14e3e7479df02a1ba",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Guidelines for School Health Programs to Prevent Tobacco Use and Addiction

Tobacco use is the leading cause of preventable death in the United States. The majority of daily smokers (82%) began smoking before 18 years of age, and more than 3,000 young persons begin smoking each day. School programs designed to prevent tobacco use could become one of the most effective strategies available to reduce tobacco use in the United States. The following guidelines summarize school-based strategies most likely to be effective in preventing tobacco use among youth. They were developed by CDC in collaboration with experts from 29 national, federal, and voluntary agencies and with other leading authorities in the field of tobacco-use prevention to help school personnel implement effective tobacco-use prevention programs. These guidelines are based on an in-depth review of research, theory, and current practice in the area of school-based tobacco-use prevention. The guidelines recommend that all schools a) develop and enforce a school policy on tobacco use; b) provide instruction about the short- and long-term negative physiologic and social consequences of tobacco use, social influences on tobacco use, peer norms regarding tobacco use, and refusal skills; c) provide tobacco-use prevention education in kindergarten through 12th grade; d) provide program-specific training for teachers; e) involve parents or families in support of school-based programs to prevent tobacco use; f) support cessation efforts among students and all school staff who use tobacco; and g) assess the tobacco-use prevention program at regular intervals.

# INTRODUCTION
Tobacco use is the single most preventable cause of death in the United States (1). Illnesses caused by tobacco use increase demands on the U.S. health-care system; lost productivity amounts to billions of dollars annually (2,3).
Because four out of every five persons who use tobacco begin before they reach adulthood (1), tobacco-prevention activities should focus on school-age children and adolescents. Evidence suggests that school health programs can be an effective means of preventing tobacco use among youth (4-7). The guidelines in this report have been developed to help school personnel plan, implement, and assess educational programs and school policies to prevent tobacco use and the unnecessary addiction, disease, and death tobacco use causes. Although these guidelines address school programs for kindergarten through 12th grade, persons working with youth in other settings also may find the guidelines useful.
The guidelines are based on a synthesis of results of research, theory, and current practice in tobacco-use prevention. To develop these guidelines, CDC staff convened meetings of experts from the fields of tobacco-use prevention and education, reviewed published research, and considered the conclusions of the National Cancer Institute Expert Advisory Panel on School-Based Smoking Prevention Programs (4) and the findings of the 1994 Surgeon General's Report, Preventing Tobacco Use Among Young People (8).
CDC developed these guidelines in consultation with experts representing the following organizations:

- American Academy of Pediatrics
- American Association of School Administrators
- American Cancer Society
- American Federation of Teachers
- American Heart Association
- American Lung Association
- American Medical Association
# BACKGROUND
School-based programs to prevent tobacco use can make a substantial contribution to the health of the next generation. In this report, the term "tobacco use" refers to the use of any nicotine-containing tobacco product, such as cigarettes, cigars, and smokeless tobacco. These products often contain additional substances (e.g., benzo(a)pyrene, vinyl chloride, polonium 210) that cause cancer in animals and humans (1). Recent estimates suggest that cigarette smoking annually causes more than 400,000 premature deaths and 5 million years of potential life lost (2). The estimated direct and indirect costs associated with smoking in the United States in 1990 totaled $68 billion (3).
In 1964, the Surgeon General's first report on smoking and health documented that cigarette smoking causes chronic bronchitis and lung and laryngeal cancer in men (9). Subsequent reports from the Surgeon General's office have documented that smoking causes coronary heart disease (10), atherosclerotic peripheral vascular disease (1), cerebrovascular disease (1), chronic obstructive pulmonary disease (including emphysema) (11), intrauterine growth retardation (1), lung and laryngeal cancers in women (12), oral cancer (13), esophageal cancer (13), and cancer of the urinary bladder (14). Cigarette smoking also contributes to cancers of the pancreas, kidney, and cervix (1,14). Further, low birth weight and approximately 10% of infant mortality have been attributed to tobacco use by pregnant mothers (1). The 1994 Surgeon General's report on smoking and health describes numerous adverse health conditions caused by tobacco use among adolescents, including reductions in the rate of lung growth and in the level of maximum lung function, increases in the number and severity of respiratory illnesses, and unfavorable effects on blood lipid levels (which may accelerate development of cardiovascular diseases) (8).
Breathing environmental tobacco smoke (including sidestream and exhaled smoke from cigarettes, cigars, and pipes) also causes serious health problems (15,16). For example, exposure to environmental tobacco smoke increases the risk for lung cancer and respiratory infections among nonsmokers and may inhibit the development of optimal lung function among children of smokers (16). Exposure to environmental tobacco smoke also may increase the risk for heart disease among nonsmokers (17,18). The Environmental Protection Agency recently classified environmental tobacco smoke as a Group A carcinogen, a category that includes asbestos, benzene, and arsenic (19).
Use of smokeless tobacco, including chewing tobacco and snuff, also can be harmful to health. A report of the Advisory Committee to the Surgeon General indicated that using smokeless tobacco causes oral cancer and leukoplakia (20). Early signs of these diseases, particularly periodontal degeneration and soft tissue lesions, are found among young people who use smokeless tobacco (8).
Tobacco use is addictive and is responsible for more than one of every five deaths in the United States. However, many children and adolescents do not understand the nature of tobacco addiction and are unaware of, or underestimate, the important health consequences of tobacco use (1). On average, more than 3,000 young persons, most of them children and teenagers, begin smoking each day in the United States (21). Approximately 82% of adults ages 30-39 years who ever smoked daily tried their first cigarette before 18 years of age (8). National surveys indicate that 70% of high school students have tried cigarette smoking and that more than one-fourth (28%) reported having smoked cigarettes during the past 30 days (22).
# THE NEED FOR SCHOOL HEALTH PROGRAMS TO PREVENT TOBACCO USE AND ADDICTION
The challenge to provide effective tobacco-use prevention programs to all young persons is an ethical imperative. Schools are ideal settings in which to provide such programs to all children and adolescents. School-based tobacco prevention education programs that focus on skills-training approaches have proven effective in reducing the onset of smoking, according to numerous independent studies. A summary of findings from these studies demonstrates positive outcomes across programs that vary in format, scope, and delivery method (8).
To be most effective, school-based programs must target young persons before they initiate tobacco use or drop out of school. In 1992, 18% of surveyed U.S. high school seniors reported smoking their first cigarette in elementary school, and 30% started in grades seven to nine (23). Among persons aged 17-18 years surveyed in 1989, substantially more high school dropouts (43%) than high school attendees or graduates (17%) had smoked cigarettes during the week preceding the survey (24).
Because considerable numbers of students begin using tobacco at or after age 15, tobacco-prevention education must be continued throughout high school. Among high school seniors surveyed in 1991 who had ever smoked a whole cigarette, 37% initiated smoking at age 15 or older (grades 10-12).
School-based programs offer an opportunity to prevent the initiation of tobacco use and therefore help persons avoid the difficulties of trying to stop after they are addicted to nicotine. The majority of current smokers (83%) wish they had never started smoking, and nearly one-third of all smokers quit for at least a day each year (25). Most smokers (93%) who try to quit resume regular smoking within 1 year (21,26). Of those persons who successfully quit smoking for 1 year or longer, one-third eventually relapse (14).
By experimenting with tobacco, young persons place themselves at risk for nicotine addiction. Persons who start smoking early have more difficulty quitting, are more likely to become heavy smokers, and are more likely to develop a smoking-related disease (1,27). Between 1975 and 1985, approximately 75% of persons who had smoked daily during high school were daily smokers 7-9 years later; however, only 5% of those persons had predicted as high school students that they would "definitely" smoke 5 years later (23). Smoking is addictive; three out of four teenagers who smoke have made at least one serious, yet unsuccessful, effort to quit (28). The 1994 Surgeon General's report on smoking and health concludes that the probability of becoming addicted to nicotine after any exposure is higher than that for other addictive substances (e.g., heroin, cocaine, or alcohol). Further, nicotine addiction in young people follows fundamentally the same process as in adults, resulting in withdrawal symptoms and failed attempts to quit (8). Thus, cessation programs are needed to help the young persons who already use tobacco (4).
School-based programs to prevent tobacco use should be provided for students of all ethnic/racial groups. In high school, more white (31%) and Hispanic (25%) students than black students (13%) are current smokers (29). Although ages and rates of initiation vary by race and ethnicity, tobacco use is a problem for all ethnic/racial groups. Given the diversity of cultures represented in many schools, it is important to tailor prevention programs for particular ethnic/racial subgroups of students. However, programs should be sensitive to, and representative of, a student population that is multicultural, multiethnic, and socioeconomically diverse.
Effective school-based programs to prevent tobacco use are equally important for both male and female students. From 1975 to 1987, daily smoking rates among 12th-grade females were as high as or higher than those among males. Since 1988, smoking rates for males and females have been nearly identical (23). However, rates of smokeless tobacco use differ by sex: in 1991, 19% of male high school students and only 1% of females reported use during the past 30 days (22). Given the growing popularity of smokeless tobacco use, particularly among males (30), and given the prevalent misconception that smokeless tobacco is safe (23), school-based programs to prevent tobacco use must pointedly discourage the use of smokeless tobacco.
Despite gains made in the 1970s, progress in reducing smoking prevalence among adolescents slowed dramatically in the 1980s. For example, the percentage of seniors who report that they smoked on one or more days during the past month has remained unchanged since 1980, at approximately 29% (23). Further, despite negative publicity and restrictive legislation regarding tobacco use, the proportion of high school seniors who perceive that cigarette users are at great risk for physical or other harm from smoking a pack a day or more has increased only minimally, from 64% in 1980 to 69% in 1992 (23). Thus, efforts to prevent the initiation of tobacco use among children and adolescents must be intensified.
School-based programs to prevent tobacco use also can contribute to preventing the use of illicit drugs, such as marijuana and cocaine, especially if such programs are also designed to prevent the use of these substances (31). Tobacco is one of the most commonly available and widely used drugs, and its use results in the most widespread drug dependency. Use of other drugs, such as marijuana and cocaine, is often preceded by the use of tobacco or alcohol. Although most young persons who use tobacco do not use illicit drugs, when further drug involvement does occur, it is typically sequential: from use of tobacco or alcohol to use of marijuana, and from marijuana to other illicit drugs or prescription psychoactive drugs (32). This sequence may reflect, in part, the widespread availability, acceptability, and use of tobacco and alcohol, as well as common underlying causes of drug use, such as risk-seeking patterns of behavior and deficits in communication and refusal skills. Recent reports on preventing drug abuse suggest that approaches effective in preventing tobacco use can also help prevent the use of alcohol and other drugs (33-35).
# PURPOSES OF SCHOOL HEALTH PROGRAMS TO PREVENT TOBACCO USE AND ADDICTION
School-based health programs should enable and encourage children and adolescents who have not experimented with tobacco to continue to abstain from any use. For young persons who have experimented with tobacco use, or who are regular tobacco users, school health programs should enable and encourage them to immediately stop all use. For those young persons who are unable to stop using tobacco, school programs should help them seek additional assistance to successfully quit the use of tobacco.
# NATIONAL HEALTH OBJECTIVES, NATIONAL EDUCATION GOALS, AND THE YOUTH RISK BEHAVIOR SURVEILLANCE SYSTEM
CDC's Guidelines for School Health Programs to Prevent Tobacco Use and Addiction were designed in part to help attain published national health objectives and education goals. In September 1990, 300 national health objectives were released by the Secretary of the Department of Health and Human Services as part of Healthy People 2000: National Health Promotion and Disease Prevention Objectives (36). The objectives were designed to guide health promotion and disease prevention policy and programs at the federal, state, and local levels throughout the 1990s. School-based programs to prevent tobacco use can help accomplish the following objectives from Healthy People 2000 (37):
3.4 Reduce cigarette smoking to a prevalence of no more than 15% among people aged 20 and older. (Baseline: 29% in 1987)

3.5 Reduce the initiation of cigarette smoking by children and youth so that no more than 15% have become regular cigarette smokers by age 20. (Baseline: 30% in 1987)

3.7 Increase smoking cessation during pregnancy so that at least 60% of women who are cigarette smokers at the time they become pregnant quit smoking early in pregnancy and maintain abstinence for the remainder of their pregnancy. (Baseline: 39% in 1985)

3.8 Reduce to no more than 20% the proportion of children aged 6 and younger who are regularly exposed to tobacco smoke at home. (Baseline: 39% in 1986)

3.9 Reduce smokeless tobacco use by males aged 12 through 24 to a prevalence of no more than 4%. (Baseline: 6.6% for ages 12-17 in 1988)

3.10 Establish tobacco-free environments and include tobacco-use prevention in the curricula of all elementary, middle, and secondary schools, preferably as part of quality school health education. (Baseline: 17% of school districts were smoke-free, and 75%-81% of school districts offered antismoking education in 1988)

3.11 Increase to at least 75% the proportion of worksites with a formal smoking policy that prohibits or severely restricts smoking at the workplace. (Baseline: 54% of medium and large companies in 1987)

3.12 Enact in 50 states comprehensive laws on clean indoor air that prohibit or strictly limit smoking in the workplace and enclosed public places. (Baseline: 13 states in 1988)
School-based programs to prevent tobacco use can also help accomplish one of the six National Education Goals (38): By the year 2000, every school in America will be free of drugs and violence and will offer a disciplined environment conducive to learning (Goal 6).
In 1990, CDC established the Youth Risk Behavior Surveillance System to help monitor progress toward attaining national health and education objectives by periodically measuring the prevalence of six categories of health risk behaviors usually established during youth that contribute to the leading causes of death and disease (39); tobacco use is one of the six categories. CDC conducts a biennial Youth Risk Behavior Survey (YRBS) of a national probability sample of high school students and also enables interested state and local education agencies to conduct the YRBS with comparable probability samples of high school students in those states and cities (22). The specific tobacco-use behaviors monitored by the YRBS include (40):
- ever tried cigarette smoking
- age when first smoked a whole cigarette
- ever smoked cigarettes regularly (one cigarette every day for 30 days)
- age when first smoked regularly
- number of days during past month that cigarettes were smoked
- number of cigarettes smoked per day during past month
- number of days during past month that cigarettes were smoked on school property
- ever tried to quit smoking cigarettes during past six months
- any use of chewing tobacco or snuff during past month
- any use of chewing tobacco or snuff during past month on school property.
States and large cities are encouraged to use the YRBS periodically to monitor the comparative prevalence of tobacco use among school students in their jurisdictions, and school officials are encouraged to implement programs specifically designed to reduce these behaviors. These national, state, and local data are being used to monitor progress in reducing tobacco use among youth and to monitor relevant national health objectives and education goals.
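As an illustration of how such indicators can be computed, the Python sketch below derives two YRBS-style prevalence estimates from hypothetical survey records (the field names are invented). Real YRBS estimates are weighted to reflect the probability sample design; the unweighted proportions here are a simplification.

```python
# Hypothetical YRBS-style records; field names are invented for illustration.
students = [
    {"days_smoked_past_30": 0, "smokeless_past_30": False},
    {"days_smoked_past_30": 5, "smokeless_past_30": False},
    {"days_smoked_past_30": 30, "smokeless_past_30": True},
    {"days_smoked_past_30": 0, "smokeless_past_30": True},
]

def prevalence(records, predicate):
    """Share of respondents for whom the indicator predicate holds."""
    return sum(predicate(r) for r in records) / len(records)

# "Current cigarette use": smoked on at least 1 of the past 30 days.
current_smoking = prevalence(students, lambda r: r["days_smoked_past_30"] >= 1)
smokeless_use = prevalence(students, lambda r: r["smokeless_past_30"])
print(f"current smoking: {current_smoking:.0%}, smokeless use: {smokeless_use:.0%}")
```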
# RECOMMENDATIONS FOR SCHOOL HEALTH PROGRAMS TO PREVENT TOBACCO USE AND ADDICTION
The seven recommendations below summarize strategies that are effective in preventing tobacco use among youth. To ensure the greatest impact, schools should implement all seven recommendations.
1. Develop and enforce a school policy on tobacco use.
2. Provide instruction about the short- and long-term negative physiologic and social consequences of tobacco use, social influences on tobacco use, peer norms regarding tobacco use, and refusal skills.
3. Provide tobacco-use prevention education in kindergarten through 12th grade; this instruction should be especially intensive in junior high or middle school and should be reinforced in high school.
4. Provide program-specific training for teachers.
5. Involve parents or families in support of school-based programs to prevent tobacco use.
6. Support cessation efforts among students and all school staff who use tobacco.
7. Assess the tobacco-use prevention program at regular intervals.
# Discussion of Recommendations

# Recommendation 1: Develop and enforce a school policy on tobacco use.
A school policy on tobacco use must be consistent with state and local laws and should include the following elements (41):

- An explanation of the rationale for preventing tobacco use (i.e., tobacco is the leading cause of death, disease, and disability)
- Prohibitions against tobacco use by students, all school staff, parents, and visitors on school property, in school vehicles, and at school-sponsored functions away from school property
- Prohibitions against tobacco advertising in school buildings, at school functions, and in school publications
- A requirement that all students receive instruction on avoiding tobacco use
- Provisions for students and all school staff to have access to programs to help them quit using tobacco
- Procedures for communicating the policy to students, all school staff, parents or families, visitors, and the community
- Provisions for enforcing the policy

To ensure broad support for school policies on tobacco use, representatives of relevant groups, such as students, parents, school staff and their unions, and school board members, should participate in developing and implementing the policy. Examples of policies have been published (41), and additional samples can be obtained from state and local boards of education.
Clearly articulated school policies, applied fairly and consistently, can help students decide not to use tobacco (42). Policies that prohibit tobacco use on school property, require prevention education, and provide access to cessation programs rather than solely instituting punitive measures are most effective in reducing tobacco use among students (43).
A tobacco-free school environment can provide health, social, and economic benefits for students, staff, the school, and the district (41). These benefits include fewer fires and discipline problems related to student smoking, improved compliance with local and state smoking ordinances, and easier upkeep and maintenance of school facilities and grounds.
# Recommendation 2: Provide instruction about the short- and long-term negative physiologic and social consequences of tobacco use, social influences on tobacco use, peer norms regarding tobacco use, and refusal skills.
Some tobacco-use prevention programs have been limited to providing only factual information about the harmful effects of tobacco use. Other programs have attempted to induce fear in young persons about the consequences of use (44). However, these strategies alone do not prevent tobacco use, may stimulate curiosity about tobacco use, and may prompt some students to believe that the health hazards of tobacco use are exaggerated (45-47).
Successful programs to prevent tobacco use address multiple psychosocial factors related to tobacco use among children and adolescents (48-51). These factors include:
- Immediate and long-term undesirable physiologic, cosmetic, and social consequences of tobacco use. Programs should help students understand that tobacco use can result in decreased stamina, stained teeth, foul-smelling breath and clothes, exacerbation of asthma, and ostracism by nonsmoking peers.
- Social norms regarding tobacco use. Programs should use a variety of educational techniques to decrease the social acceptability of tobacco use, highlight existing antitobacco norms, and help students understand that most adolescents do not smoke.
- Reasons that adolescents say they smoke. Programs should help students understand that some adolescents smoke because they believe it will help them be accepted by peers, appear mature, or cope with stress. Programs should help students develop other more positive means to attain such goals.
- Social influences that promote tobacco use. Programs should help students develop skills in recognizing and refuting tobacco-promotion messages from the media, adults, and peers.
- Behavioral skills for resisting social influences that promote tobacco use. Programs should help students develop refusal skills through direct instruction, modeling, rehearsal, and reinforcement, and should coach them to help others develop these skills.
- General personal and social skills. Programs should help students develop necessary assertiveness, communication, goal-setting, and problem-solving skills that may enable them to avoid both tobacco use and other health risk behaviors.
School-based programs should systematically address these psychosocial factors at developmentally appropriate ages. Particular instructional concepts should be provided for students in early elementary school, later elementary school, junior high or middle school, and senior high school (Table 1). Local school districts and schools should review these concepts in accordance with student needs and educational policies to determine in which grades students should receive particular instruction.
# Recommendation 3: Provide tobacco-use prevention education in kindergarten through 12th grade. This instruction should be especially intensive in junior high or middle school and should be reinforced in high school.
Education to prevent tobacco use should be provided to students in each grade, from kindergarten through 12th grade (4). Because tobacco use often begins in grades six through eight, more intensive instructional programs should be provided for these grade levels (4,5). Particularly important is the year of entry into junior high or middle school, when new students are exposed to older students who use tobacco at higher rates. Thereafter, annual prevention education should be provided. Without continued reinforcement throughout high school, successes in preventing tobacco use dissipate over time (52,53). Studies indicate that increases in the intensity and duration of education to prevent tobacco use result in concomitant increases in effectiveness (54-56).
Most evidence demonstrating the effectiveness of school-based prevention of tobacco use is derived from studies of schools in which classroom curricula focused exclusively on tobacco use. Other evidence suggests that tobacco-use prevention also can be effective when appropriately embedded within broader curricula for preventing drug and alcohol use (57) or within comprehensive curricula for school health education (31). The effectiveness of school-based efforts to prevent tobacco use appears to be enhanced by the addition of targeted communitywide programs that address the role of families, community organizations, tobacco-related policies, antitobacco advertising, and other elements of adolescents' social environment (8).
Because tobacco use is one of several interrelated health risk behaviors addressed by schools, CDC recommends that tobacco-use prevention programs be integrated as part of comprehensive school health education within the broader school health program (58).
# Recommendation 4: Provide program-specific training for teachers.
Adequate curriculum implementation and overall program effectiveness are enhanced when teachers are trained to deliver the program as planned (59,60). Teachers should be trained to recognize the importance of carefully and completely implementing the selected program. Teachers also should become familiar with the underlying theory and conceptual framework of the program as well as with the content of these guidelines. The training should include a review of the program content and a modeling of program activities by skilled trainers. Teachers should be given the opportunity to practice implementing program activities. Studies indicate that in-person training and review of curriculum-specific activities contribute to greater compliance with prescribed program components (4,5,61,62). Some programs may elect to include peer leaders as part of the instructional strategy. By modeling social skills (63) and leading role rehearsals (64), peer leaders can help counteract social pressures on youth to use tobacco. These students must receive training to ensure accurate presentation of skills and information. Although peer-leader programs can offer an important adjunct to teacher-led instruction, such programs require additional time and effort to initiate and maintain.
# Recommendation 5: Involve parents or families in support of school-based programs to prevent tobacco use.
Parents or families can play an important role in providing social and environmental support for nonsmoking. Schools can capitalize on this influence by involving parents or families in program planning, in soliciting community support for programs, and in reinforcing educational messages at home. Homework assignments involving parents or families increase the likelihood that smoking is discussed at home and motivate adult smokers to consider cessation (65).
# Recommendation 6: Support cessation efforts among students and all school staff who use tobacco.
Potential practices to help children and adolescents quit using tobacco include self-help, peer support, and community cessation programs. In practice, however, these alternatives are rarely available within a school system or community. Although the options are often limited, schools must support student efforts to quit using tobacco, especially when tobacco use is disallowed by school policy.
Effective cessation programs for adolescents focus on immediate consequences of tobacco use, have specific attainable goals, and use contracts that include rewards. These programs provide social support and teach avoidance, stress management, and refusal skills (66-69). Further, students need opportunities to practice skills and strategies that will help them remain nonusers (66,67,70).
Cessation programs with these characteristics may already be available in the community through the local health department or voluntary health agency (e.g., American Cancer Society, American Heart Association, American Lung Association). Schools should identify available resources in the community and provide referral and follow-up services to students. If cessation programs for youth are not available, such programs might be jointly sponsored by the school and the local health department, voluntary health agency, other community health providers, or interested organizations (e.g., churches).

More is known about successful cessation strategies for adults, and school staff members are more likely than students to find existing cessation options in the community. Most adults who quit tobacco use do so without formal assistance. Nevertheless, cessation programs that include a combination of behavioral approaches (e.g., group support, individual counseling, skills training, and family interventions), which can be supplemented with pharmacologic treatments, have demonstrated effectiveness (71). For all school staff, health promotion activities and employee assistance programs that include cessation programs might help reduce burnout, lower staff absenteeism, decrease health insurance premiums, and increase commitment to overall school health goals (41).

# Recommendation 7: Assess the tobacco-use prevention program at regular intervals.

Local school boards and administrators can use the following evaluation questions to assess whether their programs are consistent with CDC's Guidelines for School Health Programs to Prevent Tobacco Use and Addiction. Personnel in federal, state, and local education and health agencies also can use these questions to a) assess whether schools in their jurisdiction are providing effective education to prevent tobacco use and b) identify schools that would benefit from additional training, resources, or technical assistance. The following questions can serve as a guide for assessing program effectiveness:

1. Do schools have a comprehensive policy on tobacco use, and is it implemented and enforced as written?
2. Does the tobacco education program foster the necessary knowledge, attitudes, and skills to prevent tobacco use?
3. Is education to prevent tobacco use provided, as planned, in kindergarten through 12th grade, with special emphasis during junior high or middle school?
4. Is in-service training provided, as planned, for educators responsible for implementing tobacco-use prevention?
5. Are parents or families, teachers, students, school health personnel, school administrators, and appropriate community representatives involved in planning, implementing, and assessing programs and policies to prevent tobacco use?
6. Does the tobacco-use prevention program encourage and support cessation efforts by students and all school staff who use tobacco?
# CONCLUSION
In 1964, the first Surgeon General's report on smoking and health warned that tobacco use causes serious health problems. Thirty years later, in 1994, the Surgeon General reports that tobacco use still presents a key threat to the well-being of children. School health programs to prevent tobacco use could become one of the most effective national strategies to reduce the burden of physical, emotional, and monetary expense incurred by tobacco use.
To achieve maximum effectiveness, school health programs to prevent tobacco use must be carefully planned and systematically implemented. Research and experience acquired since the first Surgeon General's report on smoking and health have helped in understanding how to produce school policies on tobacco use and how to plan school-based programs to prevent tobacco use so that they are most effective. Carefully planned school programs can be effective in reducing tobacco use among students if school and community leaders make the commitment to implement and sustain such programs.
"id": "9916ef89f9bcf76881f3a01cf02e90003d69df4e",
"source": "cdc",
"title": "None",
"url": "None"
} |
NIOSH greatly appreciates the public comments on the December 2010 draft document that were submitted to the NIOSH docket. The comments and responses to them can be seen at: www.cdc.gov/niosh/docket/archive/docket161A.html

# Foreword
The Occupational Safety and Health Act of 1970 (Public Law 91-596) was passed to assure safe and healthful working conditions for every working person and to preserve our human resources. This Act charges the National Institute for Occupational Safety and Health (NIOSH) with recommending occupational safety and health standards and describing exposures that are safe for various periods of employment, including (but not limited to) the exposures at which no worker will suffer diminished health, functional capacity, or life expectancy because of his or her work experience.
NIOSH issues Current Intelligence Bulletins (CIBs) to disseminate new scientific information about occupational hazards. A CIB may draw attention to a formerly unrecognized hazard, report new data on a known hazard, or disseminate information about hazard control. CIBs are distributed to representatives of academia, industry, organized labor, public health agencies, and public interest groups, as well as to federal agencies responsible for ensuring the safety and health of workers.
NIOSH is the leading federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanotechnology. As nanotechnology continues to expand into every industrial sector, workers will be at an increased risk of exposure to new nanomaterials. Today, nanomaterials are found in hundreds of products, ranging from cosmetics, to clothing, to industrial and biomedical applications. These nanoscale-based products are typically called "first generation" products of nanotechnology. Many of these nanoscale-based products are composed of engineered nanoparticles, such as metal oxides, nanotubes, nanowires, quantum dots, and carbon fullerenes (buckyballs), among others. Early scientific studies have indicated that some of these nanoscale particles may pose a greater health risk than the larger bulk form of these materials.
Results from recent animal studies indicate that carbon nanotubes (CNT) and carbon nanofibers (CNF) may pose a respiratory hazard. CNTs and CNFs are tiny, cylindrical, large-aspect-ratio, manufactured forms of carbon. There is no single type of carbon nanotube or nanofiber; one type can differ from another in shape, size, chemical composition (from residual metal catalysts or functionalization of the CNT and CNF), and other physical and chemical characteristics. Such variations in composition and size have added to the complexity of understanding their hazard potential. Occupational exposure to CNTs and CNFs can occur not only in the process of manufacturing them, but also at the point of incorporating these materials into other products and applications. A number of research studies with rodents have shown adverse lung effects at relatively low-mass doses of CNT and CNF, including pulmonary inflammation and rapidly developing, persistent fibrosis. Although it is not known whether similar adverse health effects occur in humans after exposure to CNT and CNF, the results from animal research studies indicate the need to minimize worker exposure. This NIOSH CIB, (1) reviews the animal and other toxicological data relevant to assessing the potential non-malignant adverse respiratory effects of CNT and CNF, (2) provides a quantitative risk assessment based on animal dose-response data, (3) proposes a recommended exposure limit (REL) of 1 µg/m3 elemental carbon as a respirable mass 8-hour time-weighted average (TWA) concentration, and (4) describes strategies for controlling workplace exposures and implementing a medical surveillance program. The NIOSH REL is expected to reduce the risk for pulmonary inflammation and fibrosis. However, because of some residual risk at the REL and uncertainty concerning chronic health effects, including whether some types of CNTs may be carcinogenic, continued efforts should be made to reduce exposures as much as possible.
Just prior to the release of this CIB, NIOSH reported preliminary findings at the annual meeting of the Society of Toxicology from a new laboratory study in which mice were exposed by inhalation to multi-walled carbon nanotubes (MWCNT). The study was designed to investigate whether MWCNT have the potential to initiate or promote cancer. Mice receiving both an initiator chemical plus inhalation exposure to MWCNT were significantly more likely to develop tumors (90% incidence) and to have more tumors than mice receiving the initiator chemical alone. These results indicate that MWCNT can increase the risk of cancer in mice exposed to a known carcinogen. The study did not indicate that MWCNT alone cause cancer in mice. This research is an important step in our understanding of the hazards associated with MWCNT, but before we can determine whether MWCNT pose an occupational cancer risk, we need more information about workplace exposures, the types and nature of MWCNT being used in the workplace, and how that compares to the material used in this study. Research is underway at NIOSH to learn more about worker exposures and the potential occupational health risks associated with exposure to MWCNT and other types of CNTs and CNFs. As results from ongoing research become available, NIOSH will reassess its recommendations for CNT and CNF and make appropriate revisions as needed.
NIOSH urges employers to share this information with workers and customers. NIOSH also requests that professional and trade associations and labor organizations inform their members about the potential hazards of CNT and CNF. Carbon nanotubes (CNTs) and nanofibers (CNFs) are some of the most promising materials to result from nanotechnology. The introduction of these materials and products using them into commerce has increased greatly in the last decade. The development of CNT-based applications in a wide range of products is expected to provide great societal benefit, and it is important that they be developed responsibly to achieve that benefit. Worker safety and health is a cornerstone of responsible development of an emergent technology because workers are the first people in society to be exposed to the products of the technology and the workplace is the first opportunity to develop and implement responsible practices.
In this Current Intelligence Bulletin, NIOSH continues its long-standing history of using the best available scientific information to assess potential hazards and risks and to provide guidance for protecting workers. Since it is early in the development of these materials and their applications, there is limited information on which to make protective recommendations. To date, NIOSH is not aware of any reports of adverse health effects in workers using or producing CNT or CNF. However, there are studies of animals exposed to CNT and CNF that are informative in predicting potential human health effects, consistent with ways in which scientists traditionally have used such data in recommending risk management strategies. NIOSH systematically reviewed 54 laboratory animal studies, many of which indicated that CNT/CNF could cause adverse pulmonary effects including inflammation (44/54), granulomas (27/54), and pulmonary fibrosis (25/54) (Tables 3-1 through 3-8). NIOSH considers these animal study findings to be relevant to human health risks because similar lung effects have been observed in workers exposed to respirable particulates of other materials in dusty jobs. There are well-established correlations between results of animal studies and adverse effects in workers exposed to particulates and other air contaminants, and the effects, including fibrosis, developed soon after exposure and persisted. These are significant findings that warrant protective action. NIOSH conducted a quantitative assessment of risk using the animal studies with sufficient dose-response data, which included two subchronic (90-day) inhalation studies and five additional studies conducted by other routes or durations. The estimated risk of developing early-stage (slight or mild) lung effects over a working lifetime if exposed to CNT at the analytical limit of quantification (NIOSH Method 5040) of 1 µg/m3 (8-hr time-weighted average as respirable elemental carbon) is approximately 0.5% to 16% (upper confidence limit estimates) (Table A-8). In addition, the working lifetime equivalent estimates of the animal no observed adverse effect level (NOAEL) of CNT or CNF were also near 1 µg/m3 (8-hr TWA) (Sections A.6.3.3 and A.7.6). Therefore, NIOSH recommends that exposures to CNT and CNF be kept below the recommended exposure limit (REL) of 1 µg/m3 of respirable elemental carbon as an 8-hr TWA. Because there may be other sources of elemental carbon in the workplace that could interfere in the determination of CNT and CNF exposures, other analytical techniques such as transmission electron microscopy are described that could assist in characterizing exposures. Studies have shown that airborne background (environmental and in non-process areas in the workplace) concentrations of elemental carbon are typically less than 1 µg/m3 and that an elevated exposure to elemental carbon in the workplace is a reasonable indicator of CNT or CNF exposure. Studies have also shown in some manufacturing operations that exposures can be controlled below the REL when engineering controls are used. However, NIOSH has not assessed the extent to which exposures can be controlled during the life cycle of CNT/CNF product use, but since airborne CNT/CNF behave as classical aerosols, the control of worker exposures appears feasible with standard exposure control techniques (e.g., source enclosure, local-exhaust ventilation).
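To make the working-lifetime arithmetic concrete, the following minimal Python sketch converts an 8-hr TWA concentration into a cumulative deposited alveolar dose (and back), using the 40-hour week, 50 weeks per year, 45-year pattern described in this document. The per-shift ventilation volume and alveolar deposition fraction are illustrative assumptions, not the values used in NIOSH's Appendix A analysis.

```python
# Illustrative sketch only: working-lifetime deposited alveolar dose from an
# 8-hr TWA concentration. Parameter values are hypothetical placeholders.
days_per_year = 5 * 50          # 40-hr week, 50 weeks/yr
years = 45
shift_ventilation_m3 = 9.6      # air inhaled per 8-hr shift (assumed reference worker)
alveolar_deposition = 0.08      # assumed deposited fraction for the aerosol size

def deposited_lung_dose_ug(conc_ug_m3: float) -> float:
    """Deposited alveolar dose (ug) over a working lifetime, assuming no clearance."""
    return conc_ug_m3 * shift_ventilation_m3 * alveolar_deposition * days_per_year * years

def twa_for_target_dose(target_dose_ug: float) -> float:
    """8-hr TWA concentration (ug/m3) that yields the target lifetime dose."""
    return target_dose_ug / (shift_ventilation_m3 * alveolar_deposition * days_per_year * years)

# Example: lifetime deposited dose at the REL of 1 ug/m3 (with these assumptions)
print(f"dose at 1 ug/m3: {deposited_lung_dose_ug(1.0):.0f} ug")
# Example: TWA equivalent to a hypothetical human-equivalent dose of 5000 ug
print(f"TWA for 5000 ug: {twa_for_target_dose(5000.0):.2f} ug/m3")
```

The actual assessment additionally models particle clearance (the "retained" dose case), which this sketch omits.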
Previously, in a 2010 draft of this CIB for public comment, NIOSH indicated that the risks could occur with exposures less than 1 µg/m3 but that the analytic limit of quantification was 7 µg/m3. Based on subsequent improvements in sampling and analytic methods, NIOSH is now recommending an exposure limit at the current analytical limit of quantification of 1 µg/m3. More research is needed to fully characterize the health risks of CNT/CNF. Long-term animal studies and epidemiologic studies in workers would be especially informative. However, the toxicity seen in the short-term animal studies indicates that protective action is warranted. The recommended exposure limit is in units of mass/unit volume of air, which is how the exposures in the animal studies were quantified and is the exposure metric that generally is used in the practice of industrial hygiene. In the future, as more data are obtained, a recommended exposure limit might be based on a different exposure metric better correlated with toxicological effects, such as CNT/CNF number concentration.
There are many uncertainties in assessing risks to workers exposed to CNT/CNF. These uncertainties, as described and evaluated in this document, do not lessen the concern or diminish the recommendations. Other investigators and organizations have been concerned about the same effects and have recommended occupational exposure limits (OELs) for CNT within the range of 1-50 µg/m3. The relative consistency in these proposed OELs demonstrates the need to manage CNT/CNF as a new and more active form of carbon. To put this in perspective, since there is no Occupational Safety and Health Administration (OSHA) permissible exposure limit (PEL) for CNT/CNF, the PEL for graphite (5,000 µg/m3) or carbon black (3,500 µg/m3) might inappropriately be applied as a guide to control worker exposures to CNT/CNF. Based on the information presented in this document, the PELs for graphite or carbon black would not protect workers exposed to CNT/CNF. The analysis conducted by NIOSH was focused on the types of CNT and CNF included in published research studies. Pulmonary responses were qualitatively similar across the various types of CNT and CNF, purified or unpurified with various metal content, and different dimensions. The fibrotic lung effects in the animal studies developed early (within a few weeks) after exposure to CNT or CNF, at relatively low-mass lung doses, and persisted or progressed during the post-exposure follow-up (~1-6 months). However, the studied CNT and CNF represent only a fraction of the types of CNT and CNF that are, or will be, in commerce, and it is anticipated that materials with different physical and chemical parameters could have different toxicities. At this time, however, given the findings in the published literature, NIOSH recommends that exposures to all CNT and CNF be controlled to less than 1 µg/m3 of respirable elemental carbon as an 8-hr TWA, and that the risk management guidance described in this document be followed. Until results from research can fully explain the physical-chemical properties of CNT and CNF that define their inhalation toxicity, all types of CNT and CNF should be considered a respiratory hazard and exposure should be controlled below the REL.
In addition to controlling exposures below the REL, it is prudent for employers to institute medical surveillance and screening programs for workers who are exposed to CNT and CNF for the purpose of possibly detecting early signs of adverse pulmonary effects, including fibrosis. Such an assessment can provide a secondary level of prevention should there be inadequacies in controlling workplace exposures. In 2009, NIOSH concluded that there was insufficient evidence to recommend specific medical tests for workers exposed to the broad category of engineered nanoparticles, but that when relevant toxicological information became available, specific medical screening recommendations would be forthcoming. As described in this document, the toxicologic evidence on CNT/CNF has advanced enough to make specific recommendations for the medical surveillance and screening of exposed workers. That is, the strong evidence for pulmonary fibrosis from animal studies and the fact that this effect can be detected by medical tests is the basis for NIOSH's specific medical screening recommendations. NIOSH also recommends other risk management practices in addition to controlling exposure and medical surveillance. These include education and training of workers and the use of personal protective equipment (e.g., respirators, clothing, and gloves).
In summary, the findings and recommendations in this Current Intelligence Bulletin are intended to minimize the potential health risks associated with occupational exposure to CNT and CNF by recommending a working lifetime exposure limit (1 µg/m3, 8-hr TWA, 45 years), a sampling and analytical method to detect CNT and CNF, medical surveillance and screening, and other guidelines. The expanding use of CNT/CNF products in commerce and research warrants these protective actions.
# Background
The goal of this occupational safety and health guidance for carbon nanotubes (CNT) and carbon nanofibers (CNF) is to prevent the development of adverse respiratory health effects in workers. To date, NIOSH is not aware of any reports of adverse health effects in workers producing or using CNT or CNF. The concern about worker exposure to CNT or CNF arises from the results of recent laboratory animal studies with CNT and CNF. Short-term and subchronic studies in rats and mice have shown qualitatively consistent noncancerous adverse lung effects, including pulmonary inflammation, granulomas, and fibrosis, with inhalation, intratracheal instillation, or pharyngeal aspiration of several types of CNT (single or multiwall; purified or unpurified). These early-stage, noncancerous adverse lung effects in animals include: (1) the early onset and persistence of pulmonary fibrosis in CNT-exposed mice, (2) an equal or greater potency of CNT compared with other inhaled particles known to be hazardous (e.g., crystalline silica, asbestos) in causing pulmonary inflammation and fibrosis, and (3) reduced lung clearance in mice or rats exposed to relatively low-mass concentrations of CNT. Findings of acute pulmonary inflammation and interstitial fibrosis have also been observed in mice exposed to CNF. The extent to which these animal data may predict clinically significant lung effects in workers is not known. However, NIOSH considers these animal study findings of pulmonary inflammation, granulomas, and fibrosis associated with exposure to CNT and CNF to be relevant to human health risk assessment because similar lung effects have been observed in workers in dusty jobs.
Some studies also indicate that CNT containing certain metals (nickel, 26%) or higher metal content (17.7% vs. 0.2% iron) are more cytotoxic in vitro and in vivo. Although a number of different types of CNT and CNF have been evaluated, uncertainty exists about the generalizability of the current animal findings to new CNT and CNF.
In addition to the early-stage non-cancer lung effects in animals, some studies in cells or animals have shown genotoxic or carcinogenic effects. In vitro studies with human lung cells have shown that single-walled carbon nanotubes (SWCNT) can cause genotoxicity and abnormal chromosome number by interfering with mitosis (cell division). Other in vitro studies did not show evidence of genotoxicity of some MWCNT.
Studies in mice exposed to multi-walled carbon nanotubes (MWCNT) have shown the migration of MWCNT from the pulmonary alveoli to the intrapleural space [Porter et al. 2010; Mercer et al. 2010]. The intrapleural space is the same site in which malignant mesothelioma can develop due to asbestos exposure. Intraperitoneal injection of CNT in mice has resulted in inflammation from long MWCNT (> 5 µm in length), but not short MWCNT (< 1 µm in length) or tangled CNT. In rats administered CNT by peritoneal injection, the pleural inflammation and mesothelioma were related to the thin diameter and rigid structure of MWCNT. In a study of rats administered MWCNT or crocidolite by intrapulmonary spraying, exposure to either material produced inflammation in the lungs and pleural cavity in addition to mesothelial proliferative lesions.
Pulmonary exposure to CNT has also produced systemic responses, including an increase in inflammatory mediators in the blood, as well as oxidant stress in aortic tissue and increased plaque formation in an atherosclerotic mouse model [Erdely et al. 2009]. Pulmonary exposure to MWCNT also depresses the ability of coronary arterioles to respond to dilators. These cardiovascular effects may be due to neurogenic signals from sensory irritant receptors in the lung. Mechanisms, such as inflammatory signals or neurogenic pathways, causing these systemic responses are under investigation.
Additional research is needed to fully explain the mechanisms of biological responses to CNT and CNF, and the influence of physical-chemical properties. The findings of adverse respiratory effects and systemic effects reported in several animal studies indicate the need for protective measures to limit worker exposure to CNT and CNF.
CNT and CNF are currently used in many industrial and biomedical applications, including electronics, lithium-ion batteries, solar cells, supercapacitors, thermoplastics, polymer composites, coatings, adhesives, biosensors, enhanced electron-scanning microscopy imaging techniques, inks, and pharmaceutical/biomedical devices. CNT and CNF can be encountered in facilities ranging from research laboratories and production plants to operations where CNT and CNF are processed, used, disposed, or recycled. The data on worker personal exposures to CNT and CNF are extremely limited, but reported workplace airborne concentrations for CNT and CNF indicate the potential for worker exposures in many tasks or processes and the reduction or elimination of exposures when measures to control exposure are used. NIOSH has determined that the best data to use for a quantitative risk assessment, and as the basis for a recommended exposure limit (REL), are the nonmalignant pulmonary data from the CNT animal studies. At present, data on cancer and cardiovascular effects are not adequate for a quantitative risk assessment of inhalation exposure. NIOSH considers the pulmonary responses of inflammation and fibrosis observed in short-term and subchronic studies in animals to be relevant to humans, as inflammatory and fibrotic effects are also observed in occupational lung diseases associated with workplace exposures to other inhaled particles and fibers. Uncertainties include the extent to which these lung effects in animals are associated with functional deficits and whether similar effects would be clinically significant among workers. However, these fibrotic lung effects observed in some of the animal studies developed early (e.g., 28 days after exposure) in response to relatively low-mass lung doses, and also persisted or progressed after the end of exposure. Given the relevance of these types of lung effects to humans, the REL was derived using the published subchronic and short-term animal studies with dose-response data of early-stage fibrotic and inflammatory lung responses to CNT exposure (Section 5 and Appendix A).
Critical effect levels for the noncancerous lung effects estimated from the animal dose-response data (e.g., BMD, benchmark dose, and BMDL, the 95% lower confidence limit estimate of the BMD) have been extrapolated to humans by accounting for the factors influencing the lung dose in each animal species. The no observed adverse effect level (NOAEL) and lowest observed adverse effect level (LOAEL) estimates reported in the subchronic inhalation studies were also evaluated as the critical effect levels. Working-lifetime exposure concentrations were calculated based on estimates of either the deposited or retained alveolar lung dose of CNT, assuming an 8-hour time-weighted average (TWA) exposure during a 40-hour workweek, 50 weeks per year, for 45 years. Based on BMD modeling of the subchronic animal inhalation studies with MWCNT, a working lifetime exposure of 0.2-2 µg/m3 (8-hour TWA concentration) was estimated to be associated with a 10% excess risk of early-stage adverse lung effects (95% lower confidence limit estimates) (Tables 5-1 and A-5). Risk estimates derived from short-term animal studies (Tables A-3 and A-4) were consistent with these estimates.
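The BMD approach described above fits a dose-response model to the animal data and estimates the dose producing a specified excess risk (e.g., 10%, the BMD10). The sketch below illustrates the idea with a one-parameter quantal-linear model and hypothetical dose-response numbers; the actual NIOSH assessment used formal BMD modeling and the 95% lower confidence limit (BMDL), which this toy fit does not compute.

```python
# Illustrative sketch only: a minimal benchmark-dose (BMD) calculation.
# The dose-response numbers below are hypothetical placeholders, not data
# from the NIOSH risk assessment.
import math

# Hypothetical quantal data from a subchronic animal study:
# lung dose (ug/lung) vs. fraction of animals with early-stage lung effects.
doses = [0.0, 20.0, 40.0, 80.0]
response = [0.0, 0.15, 0.35, 0.60]

# Fit a one-parameter quantal-linear model, P(d) = 1 - exp(-b * d),
# by simple least squares over a grid of candidate slopes.
def sse(b):
    return sum((1.0 - math.exp(-b * d) - p) ** 2 for d, p in zip(doses, response))

b = min((k * 1e-4 for k in range(1, 2000)), key=sse)

# BMD10: the dose giving 10% extra risk over background.
bmd10 = -math.log(1.0 - 0.10) / b
print(f"slope b = {b:.5f} per ug, BMD10 = {bmd10:.1f} ug/lung")
```

In the full assessment, the resulting BMDL (in lung-dose units) is then converted to a human-equivalent working-lifetime air concentration via the dosimetric factors described above.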
In addition to the BMD-based risk estimates, NOAEL or LOAEL values were used as the critical effect level in animals. As with the BMD(L) estimates, the human-equivalent working lifetime concentrations were estimated, although using dosimetric adjustment and uncertainty factors (Section A.6.3). The estimated human-equivalent working lifetime concentrations based on this approach were approximately 4-18 µg/m3 (8-hr TWA), depending on the subchronic study and the interspecies dose retention and normalization factors used. Dividing these estimates by data-suitable uncertainty factors (e.g., UFs of 20-60), and assuming a threshold model, the estimated zero-risk levels were <1 µg/m3 as working lifetime 8-hr TWA concentrations. A recent subchronic inhalation (13-wk exposure plus 3 months follow-up) study of CNF in rats showed a qualitatively similar lung response as a shorter-term (28-day) study of CNF administered by pharyngeal aspiration in mice (Sections 3.5 and A.7). Using the NOAEL-based approach, the human-equivalent working lifetime concentration estimates were 1-4 µg/m3 (8-hr TWA), depending on the data and assumptions used to estimate the human-equivalent dose (Section A.7).
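As a quick check of the NOAEL-based arithmetic above, dividing the 4-18 µg/m3 human-equivalent concentrations by uncertainty factors of 20-60 yields values at or below about 0.9 µg/m3, consistent with the stated <1 µg/m3 result:

```python
# Illustrative arithmetic for the NOAEL-based approach described above:
# human-equivalent working-lifetime concentration (HEC) divided by a
# composite uncertainty factor (UF). Ranges mirror those in the text.
for hec in (4.0, 18.0):          # ug/m3, 8-hr TWA, from the subchronic studies
    for uf in (20, 60):          # composite uncertainty factor
        print(f"HEC {hec:>4} ug/m3 / UF {uf:>2} = {hec / uf:.2f} ug/m3")
# All four combinations fall at or below 0.90 ug/m3, i.e., <1 ug/m3.
```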
In the 2010 draft Current Intelligence Bulletin (CIB) Occupational Exposure to Carbon Nanotubes and Nanofibers, NIOSH proposed a REL of 7 µg/m3 elemental carbon (EC) as an 8-hr TWA, which was set at the upper limit of quantitation (LOQ) for NIOSH Method 5040. In the draft CIB, NIOSH acknowledged that workers may still have an excess risk of developing early-stage pulmonary effects, including fibrosis, if exposed over a full working lifetime at the proposed REL. In view of these health risks, and ongoing improvements in sampling and analytical methodologies, NIOSH is recommending a REL of 1 µg/m3 EC as an 8-hr TWA respirable mass concentration using NIOSH Method 5040 (Section 6.1, Appendix C). The 45-yr working lifetime excess risk estimates of minimal-level (grade 1 or greater) lung effects in rats observed by histopathology at 1 µg/m3 (8-hr TWA concentration) range from 2.4% to 33% (maximum likelihood estimates, MLE) and 5.3% to 54% (95% upper confidence limit, UCL) (Table A-7). The 45-yr working lifetime excess risk estimates of slight/mild (grade 2) lung effects at 1 µg/m3 (8-hr TWA) range from 0.23% to 10% (MLE) and 0.53% to 16% (95% UCL) (Tables 5-2 and A-8). These estimates are based on a risk assessment using dose-response data from the rat subchronic inhalation studies of two types of MWCNT. The range in these risk estimates reflects differences across studies and/or types of MWCNT and the uncertainty in the estimation of working lifetime CNT lung burden. The lung burden estimates are based on either the retained lung dose (normal clearance) or deposited lung dose (no clearance). Although data from animal studies with CNF are more limited, physical-chemical similarities between CNT and CNF and findings of acute pulmonary inflammation and interstitial fibrosis in animals exposed to CNF indicate the need to also control occupational exposure to CNF at the REL of 1 µg/m3 EC. Because of uncertainties in the risk estimates, some residual risk for adverse lung effects may exist at the REL; therefore, efforts should be made to reduce airborne concentrations of CNT and CNF as low as possible. Until the results from animal research studies can fully explain the mechanisms (e.g., shape, size, chemistry, functionalization) that potentially increase or decrease their toxicity, all types of CNT and CNF should be considered a respiratory hazard and occupational exposures controlled at the REL of 1 µg/m3.

# Exposure Measurement and Controls

Occupational exposure to all types of CNT and CNF can be quantified using NIOSH Method 5040. A multi-tiered exposure measurement strategy is recommended for determining worker exposure to CNT and CNF. When exposures to other types of EC (e.g., diesel soot, carbon black) are absent or negligible, environmental background EC concentrations are typically < 1 µg/m3, including in facilities where CNT and CNF are produced and used. Thus, an elevated airborne EC concentration relative to background (environmental and in non-process areas in the workplace) is a reasonable indicator of CNT or CNF exposure. When exposure to other types of EC is possible, additional analytical techniques may be required to better characterize exposures. For example, analysis of airborne samples by transmission electron microscopy (TEM) equipped with energy-dispersive X-ray spectroscopy (EDS) can help to verify the presence of CNT and CNF (Section 6.1.2).
Published reports of worker exposure to CNT and CNF using NIOSH Method 5040 (EC determination) are limited, but in the study by Dahm et al., worker personal breathing zone (PBZ) samples collected at CNT manufacturers frequently found low to nondetectable mass concentrations of EC when engineering controls were present. In a study by Birch et al., the outdoor air concentrations over four survey days, two months apart, were nearly identical, averaging about 0.5 µg/m3. Respirable EC area concentrations inside the facility were about 6-68 times higher than outdoors, while personal breathing zone samples were up to 170 times higher. In studies where airborne particle concentrations were used as a surrogate for measuring the potential release of CNT and CNF, the use of engineering controls (e.g., local exhaust ventilation, wet cutting of composites, fume hood/enclosures) appeared to be effective in reducing worker exposure (Section 2.1). However, the direct-reading instruments used in these studies are non-selective tools and often subject to interferences due to other particle sources, especially at low concentrations. Control strategies and technologies developed by several industrial trade associations have proven successful in managing micrometer-sized fine powder processes and should have direct application to controlling worker exposures from CNT and CNF processes. Examples include guidance issued for containing dry powder during manufacturing of detergents by the Association Internationale de la Savonnerie, de la Détergence et des Produits d'Entretien (AISE). Following these guidelines makes it possible, at a minimum, to control enzyme-containing dust exposures below 60 ng/m3. Additional guidance on a broader process and facility approach is available from the International Society for Pharmaceutical Engineering (ISPE). This organization offers guidance on the design, containment, and testing of various processes that handle finely divided dry powder formulations. One guide in particular, Baseline Guide Volume 1, 2nd Edition: Active Pharmaceutical Ingredients Revision to Bulk Pharmaceutical Chemicals, has broad applicability to CNT and CNF processes and is available from ISPE. Finally, the Institute for Polyacrylate Absorbents (IPA) has developed guidelines for its member companies to assist them in controlling worker exposures to fine polyacrylate polymer dust in the micrometer-size range through a combination of engineering controls and work practices. The extent to which worker exposure to CNT and CNF can be controlled below 1 µg/m3 respirable mass concentration as an 8-hr TWA is unknown, but should be achievable in most manufacturing and end-use job tasks if engineering controls are used and workers are instructed in the safe handling of CNT/CNF materials.
Until results from research studies can fully explain the physical-chemical properties of CNT and CNF that define their inhalation toxicity, all types of CNT and CNF should be considered a respiratory hazard, and exposures should be controlled as low as possible below the REL. The REL is based on the respirable airborne mass concentration of CNT and CNF because the adverse lung effects in animals were observed in the alveolar (gas exchange) region. "Respirable" is defined as the aerodynamic size of particles that, when inhaled, are capable of depositing in the alveolar region of the lungs. Sampling methods have been developed to estimate the airborne mass concentration of respirable particles. Reliance on a respirable EC mass-based REL will provide a means to identify job tasks with potential exposures to CNT and CNF so that appropriate measures can be taken to limit worker exposure.
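For readers unfamiliar with how "respirable" is operationalized for sampling, the sketch below evaluates one widely used form of the respirable convention (the ISO/ACGIH respirable fraction curve). It is written from memory as an approximation; the exact parameters should be checked against ISO 7708 or the ACGIH TLV documentation before use.

```python
# Illustrative sketch: an approximate ISO/ACGIH respirable convention,
# giving the fraction of airborne mass at a given aerodynamic diameter
# that a respirable sampler is meant to collect. Parameters assumed.
import math

def inhalable_fraction(d_um: float) -> float:
    """Inhalable particulate mass fraction vs. aerodynamic diameter (um)."""
    return 0.5 * (1.0 + math.exp(-0.06 * d_um))

def respirable_fraction(d_um: float) -> float:
    """Respirable fraction: inhalable fraction times a lognormal
    penetration curve (median ~4.25 um, GSD ~1.5, assumed values)."""
    z = math.log(d_um / 4.25) / math.log(1.5)
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return inhalable_fraction(d_um) * (1.0 - phi)

for d in (1.0, 2.0, 4.0, 7.0, 10.0):
    print(f"{d:>4.1f} um: respirable fraction = {respirable_fraction(d):.2f}")
```

With these parameters the curve passes through roughly 50% at 4 µm and about 1% at 10 µm, which is the qualitative behavior the convention is designed to have.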
# Recommendations
In light of current scientific evidence from experimental animal studies concerning the hazard potential of CNT and CNF, steps should be taken to implement an occupational health surveillance program that includes elements of hazard and medical surveillance. NIOSH recommends that employers and workers take the following steps to minimize potential health risks associated with exposure to CNT and CNF.
# Recommendations for Employers
- Use available information to continually assess current hazard potential related to CNT and CNF exposures in the workplace and make appropriate changes (e.g., sampling and analysis, exposure control) to protect worker health. At a minimum, follow requirements of the OSHA Hazard Communication Standard and the Hazardous Waste Operations and Emergency Response Standard.
- Identify and characterize processes and job tasks where workers encounter bulk ("free-form") CNT or CNF and materials that contain CNT/CNF (e.g., composites).
- Substitute, when possible, a nonhazardous or less hazardous material for CNT and CNF. When substitution is not possible, use engineering controls as the primary method for minimizing worker exposure to CNT and CNF.
- Establish criteria and procedures for selecting, installing, and evaluating the performance of engineering controls to ensure proper operating conditions. Make sure workers are trained in how to check and use exposure controls (e.g., exhaust ventilation systems).
- Routinely evaluate airborne exposures to ensure that control measures are working properly and that worker exposures are being maintained below the NIOSH REL of 1 µg/m3 using NIOSH Method 5040 (Section 6 and Appendix C); a worked example of the TWA calculation follows this list.
- Follow exposure and hazard assessment procedures for determining the need for and selection of proper personal protective equipment, such as clothing, gloves, and respirators (Section 6).
- Educate workers on the sources and job tasks that may expose them to CNT and CNF, and train them about how to use appropriate controls, work practices, and personal protective equipment to minimize exposure (Section 6.3).
- Provide facilities for hand washing and encourage workers to make use of these facilities before eating, smoking, or leaving the worksite.
- Provide facilities for showering and changing clothes, with separate facilities for storage of nonwork clothing, to prevent the inadvertent cross-contamination of nonwork areas (including take-home contamination).
- Use light-colored gloves, lab coats, and workbench surfaces to make contamination by dark CNT and CNF easier to see.
- Develop and implement procedures to deal with cleanup of CNT and CNF spills and decontamination of surfaces.
- When respirators are provided for worker protection, the OSHA respiratory protection standard requires that a respiratory protection program be established that includes the following elements:
-A medical evaluation of the worker's ability to perform the work while wearing a respirator.
-Regular training of personnel.
-Periodic workplace exposure monitoring.
-Procedures for selecting respirators.
-Respirator fit testing.
-Respirator maintenance, inspection, cleaning, and storage.
- The voluntary use of respirators is permitted, but must comply with the provisions set forth in 29 CFR 1910.134(c)(2)(i) and 29 CFR 1910.134(c)(2)(ii).
- Information on the potential health risks and recommended risk management practices contained in this CIB should, at a minimum, be used when developing labels and Safety Data Sheets (SDS), as required.
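As referenced in the exposure-monitoring bullet above, the following sketch shows the routine industrial-hygiene arithmetic for turning Method 5040 filter results into an 8-hr TWA for comparison with the REL. The filter deposit area, flow rate, and loadings are hypothetical example values, not prescribed by the method.

```python
# Illustrative sketch: respirable EC concentration from a filter sample and
# an 8-hr TWA from consecutive samples, compared with the REL. All sampling
# parameters below are hypothetical examples for a typical sampling train.
FILTER_DEPOSIT_AREA_CM2 = 8.5    # assumed effective deposit area of the filter
REL_UG_M3 = 1.0                  # NIOSH REL, respirable EC, 8-hr TWA

def ec_conc_ug_m3(ec_ug_per_cm2: float, flow_lpm: float, minutes: float) -> float:
    """EC air concentration (ug/m3) from an EC filter loading (ug/cm2)."""
    air_volume_m3 = flow_lpm * minutes / 1000.0
    return ec_ug_per_cm2 * FILTER_DEPOSIT_AREA_CM2 / air_volume_m3

def twa_8hr(samples: list[tuple[float, float]]) -> float:
    """8-hr TWA from (concentration ug/m3, duration min) pairs; unsampled
    time within the 480-min shift is treated as zero exposure."""
    return sum(c * t for c, t in samples) / 480.0

# Two consecutive samples over a shift (hypothetical loadings):
s1 = ec_conc_ug_m3(ec_ug_per_cm2=0.05, flow_lpm=4.2, minutes=240)
s2 = ec_conc_ug_m3(ec_ug_per_cm2=0.02, flow_lpm=4.2, minutes=240)
twa = twa_8hr([(s1, 240), (s2, 240)])
print(f"TWA = {twa:.2f} ug/m3; exceeds REL: {twa > REL_UG_M3}")
```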
# Medical Screening and Surveillance
The evidence summarized in this document leads to the conclusion that workers occupationally exposed to CNT and CNF may be at risk of adverse respiratory effects. These workers may benefit from inclusion in a medical screening program to help protect their health (Section 6.7).
# Worker Participation
Workers who could receive the greatest benefit from medical screening include the following:
- Workers exposed to concentrations of CNT or CNF in excess of the REL (i.e., all workers exposed to airborne CNT or CNF at concentrations above 1 µg/m3 EC as an 8-hr TWA).
- Workers in areas or jobs that have been qualitatively determined (by the person charged with program oversight) to have the potential for intermittent elevated airborne concentrations of CNT or CNF (i.e., workers are at risk of being exposed when they are involved in the transfer, weighing, blending, or mixing of bulk CNT or CNF, or the cutting or grinding of composite materials containing CNT or CNF, or workers in areas where such activities are carried out by others).
# Program Oversight
Oversight of the medical surveillance program should be assigned to a qualified health-care professional who is informed and knowledgeable about potential workplace exposures, routes of exposure, and potential health effects related to CNT and CNF.
# Screening Elements
# Initial Evaluation
- An initial (baseline) evaluation should be conducted by a qualified health-care professional and should consist of the following:
-An occupational and medical history, with respiratory symptoms assessed by use of a standardized questionnaire, such as the American Thoracic Society Respiratory Questionnaire or its most recent equivalent.
-A physical examination with an emphasis on the respiratory system.
-A spirometry test (Anyone administering spirometry testing as part of the medical screening program should have completed a NIOSH-approved training course in spirometry or other equivalent training; additionally, the health professional overseeing the screening and surveillance program should be expert in interpreting spirometry testing results, enabling follow-up evaluation as needed.).
-A baseline chest X-ray (digital or film-screen radiograph). All baseline chest images should be clinically interpreted by a board eligible/certified radiologist or other physician with appropriate expertise, such as a board eligible/certified pulmonologist. Periodic follow-up chest X-rays may be considered, but there is currently insufficient evidence to evaluate their effectiveness. However, if periodic follow-up is obtained, clinical interpretation and classification of the images by a NIOSH-certified B reader using the standard International Classification of Radiographs of Pneumoconioses (ILO 2011 or the most recent equivalent) are recommended.
-Other examinations or medical tests deemed appropriate by the responsible health-care professional (The need for specific medical tests may be based on factors such as abnormal findings on initial examination; for example, the finding of an unexplained abnormality on a chest X-ray should prompt further evaluation that might include the use of a high-resolution computed tomography scan of the thorax.).
- Evaluations should be conducted at regular intervals and at other times (e.g., post-incident) as deemed appropriate by the responsible health-care professional based on data gathered in the initial evaluation, ongoing work history, changes in symptoms such as new, worsening, or persistent respiratory symptoms, and when process changes occur in the workplace (e.g., a change in how CNT or CNF are manufactured or used, or an unintentional "spill"). Evaluations should include the following:
-An occupational and medical history update, including a respiratory symptom update, and a focused physical examination, performed annually.
-Spirometry (testing less frequently than every 3 years is not recommended).
-Consideration of specific medical tests (e.g., chest X-ray).
# Written Reports of Medical Findings
- The health-care professional should give each worker a written report containing the following:
-The individual worker's medical examination results.
-Medical opinions and/or recommendations concerning any relationships between the individual worker's medical conditions and occupational exposures, any special instructions on the individual's exposures and/or use of personal protective equipment, and any further evaluation or treatment.
- For each examined employee, the health-care professional should give the employer a written report specifying the following:
-Any work or exposure restrictions based on the results of medical evaluations.
-Any recommendations concerning use of personal protective equipment.
-A medical opinion about whether any of the worker's medical conditions is likely to have been caused or aggravated by occupational exposures.
- Findings from the medical evaluations having no bearing on the worker's ability to work with CNT or CNF should not be included in any reports to employers. Confidentiality of the worker's medical records should be enforced in accordance with all applicable regulations and guidelines.
# Worker Education
Workers should be provided information sufficient to allow them to understand the nature of potential workplace exposures, potential health risks, routes of exposure, and instructions for reporting health symptoms. Workers should also be provided with information about the purposes of medical screening, the health benefits of the program, and the procedures involved.
# Periodic Evaluation of Data and Screening Program
- Standardized medical screening data should be periodically aggregated and evaluated to identify worker health patterns that may be linked to work activities and practices and that require additional primary prevention efforts. This analysis should be performed by a qualified health professional or other knowledgeable person. Confidentiality of workers' medical records should be enforced in accordance with all applicable regulations and guidelines.
- Employers should periodically evaluate the elements of the medical screening program to ensure that the program is consistent with current knowledge related to exposures and health effects associated with occupational exposure to CNT and CNF.
Other important components related to occupational health surveillance programs, including medical surveillance and screening, are discussed in Appendix B.
# Recommendations for Workers
- Ask your supervisor for training in how to protect yourself from the potential hazards associated with your job, including exposure to CNT and CNF.
- Know and use the exposure control devices and work practices that keep CNT and CNF out of the air and off your skin.
- Understand when and how to wear a respirator and other personal protective equipment (such as gloves, clothing, eyewear) that your employer might provide.
- Avoid handling CNT and CNF in a 'free particle' state (e.g., powder form).
- Store CNT and CNF, whether suspended in liquids or in a powder form, in closed (tightly sealed) containers whenever possible.
- Clean work areas at the end of each work shift (at a minimum) using a HEPA-filtered vacuum cleaner or wet wiping methods. Dry sweeping or air hoses should not be used to clean work areas.
- Do not store or consume food or beverages in workplaces where bulk CNT or CNF, or where CNT-or CNF-containing materials, are handled.
- Prevent the inadvertent contamination of nonwork areas (including take-home contamination) by showering and changing into clean clothes at the end of each workday.

CNF have lengths ranging from tens of micrometers to several centimeters, average aspect ratios (length-to-diameter ratio) of > 100, and various morphologies, including cupped or stacked graphene structures. The primary characteristic that distinguishes CNF from CNT is graphene plane alignment: if the graphene plane and fiber axis do not align, the structure is defined as a CNF, but when they are parallel, the structure is considered a CNT.
The synthesis of CNT and CNF requires a carbon source and an energy source. CNT and CNF are synthesized by several distinct methods, including chemical vapor deposition (CVD), arc discharge, laser ablation, and high-pressure CO conversion (HiPco). Depending on the material and method of synthesis, a metal catalyst may be used to increase yield and sample homogeneity and to reduce the synthesis temperature. The diameter of the fibers depends on the dimensions of the metal nanoparticle used as a catalyst; the shape, symmetry, dimensions, growth rate, and crystallinity of the materials are influenced by the selection of the catalyst, carbon source, temperature, and time of the reaction. Different amounts of residual catalyst often exist following synthesis; consequently, post-synthesis treatments are used to increase the purity of the product. The most common purification technique involves selective oxidation of the amorphous carbon and/or carbon shells at a controlled temperature, followed by washing or sonicating the material in an acid (HCl, HNO3, H2SO4) or base (NaOH) to remove the catalyst. Because there are many types of purification processes, purified CNT and CNF will exhibit differences in the content of trace elements and residual materials.
A growing body of literature indicates a potential health hazard to workers from exposure to various types of carbon nanotubes and nanofibers. A number of research studies with rodents have shown adverse lung effects at relatively low-mass doses of CNT (Tables 3-2 and 3-7), including pulmonary inflammation and rapidly developing, persistent fibrosis. Similar effects have recently been observed with exposure to CNF (Table 3-6). It is not known how universal these adverse effects are, that is, whether they occur in animals exposed to all types of CNT and CNF, and whether they occur in additional animal models. Most importantly, it is not yet known whether similar adverse health effects occur in humans following exposure to CNT or CNF, or how airborne CNT in the workplace may compare in size and structure to the CNT aerosols generated in the animal studies.
Because of their small size, structure, and low surface charge, CNT and CNF can be difficult to separate in the bulk form and tend to be agglomerated or to agglomerate quickly when released in the air, which can affect their potential to be inhaled and deposited in the lungs. The extent to which workers are exposed to CNT and CNF in the form of agglomerates or as single tubes or structures is unclear because of limited exposure measurement data, but airborne samples analyzed by electron microscopy have shown both individual and agglomerated structures.
This Current Intelligence Bulletin (CIB) summarizes the adverse respiratory health effects that have been observed in laboratory animal studies with SWCNT, MWCNT, and CNF. A recommended exposure limit (REL) for CNT and CNF is given to help minimize the risk of occupational respiratory disease in workers, as well as guidance for the measurement and control of exposures to CNT and CNF.
# Potential for Exposure
The novel application of CNT and CNF has been extensively researched because of their unique physical and chemical properties. CNT and CNF are mechanically strong, flexible, lightweight, and heat resistant, and they have high electrical conductivity. One published survey sought to enumerate the companies directly manufacturing (or using in other manufacturing processes) engineered carbonaceous nanomaterials in the United States and to estimate the workforce size and the characteristics of the nanomaterials produced.
The number of workers engaged in the manufacturing of CNT was estimated at 375, with a projected growth rate in employment of 15% to 17% annually. The quantity of CNT (SWCNT and MWCNT) produced annually by each company was estimated to range from 0.2 to 2500 kg. The size of the workforce involved in the fabrication or handling of CNT/CNF-enabled materials and composites is unknown, but it is expected to increase as the market expands from research and development to industrial high-volume production. Maynard et al. also assessed the propensity for SWCNT to be released during the agitation of unprocessed SWCNT material in a laboratory-based study and during the handling (e.g., furnace removal, powder transfer, cleaning) of unrefined material at four small-scale SWCNT manufacturing facilities in which laser ablation and high-pressure carbon monoxide techniques were used to produce SWCNT. Particle measurements taken during the agitation of unprocessed material in the laboratory indicated the initial airborne release of material (some visually apparent), with the particle concentration of the aerosol (particles < 0.5 µm in diameter) observed to decrease rapidly over time.
With no agitation, particles around 0.1 µm in diameter appeared to be released from the SWCNT material, probably because of the airflow across the powder. At the four manufacturing facilities, short-term SWCNT mass concentrations were estimated (using a catalyst metal as the surrogate measurement) to range from 0.7 to 53 µg/m3 (area samples) in the absence of exposure controls. When samples were evaluated by scanning electron microscopy (SEM), most of the aerosolized SWCNT were agglomerated, with agglomerate sizes typically larger than 1 µm.
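The catalyst-metal surrogate estimate used in that study amounts to simple scaling by the metal mass fraction of the bulk material. A hedged one-function sketch, with a hypothetical metal fraction:

```python
# Illustrative sketch of the surrogate approach mentioned above: estimating
# an airborne SWCNT mass concentration from a measured catalyst-metal
# concentration, assuming the bulk material's metal mass fraction is known.
# The metal fraction below is a hypothetical placeholder.
def cnt_from_metal(metal_ug_m3: float, metal_mass_fraction: float) -> float:
    """SWCNT mass concentration inferred from its residual metal catalyst."""
    return metal_ug_m3 / metal_mass_fraction

# e.g., 5 ug/m3 of catalyst metal and an assumed 30% metal content by mass:
print(f"{cnt_from_metal(5.0, 0.30):.1f} ug/m3 SWCNT (estimated)")
```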
Airborne particle and MWCNT concentrations were determined by Bello et al. during chemical vapor deposition (CVD) growth and handling of vertically aligned CNT films. Continuous airborne particle measurements were made using a real-time fast mobility particle sizer (FMPS) and a condensation particle counter (CPC) throughout the furnace operation. No increase in total airborne particle concentration (compared with background) was observed during the removal of MWCNT from the reactor furnace or during the detachment of MWCNT from the growth substrate (a process whereby MWCNT are removed from the substrate with a razor blade). Electron microscopic analysis of a PBZ sample collected on the furnace operator found no detectable quantity of MWCNT, either as individual tubes or as agglomerates. No mention was made of the use of engineering controls (e.g., local exhaust ventilation, fume hood) to prevent exposure to MWCNT.
The potential for airborne particle and SWCNT and MWCNT release was determined in a laboratory setting in which both types of CNT were produced using CVD. A qualitative assessment of the exposure (i.e., particle morphology, aerosol size) was made during the synthesis of SWCNT and MWCNT, in which modifications of the manufacturing methods were made to ascertain how changes in the production of CNT influenced airborne particle size and concentration (e.g., SWCNT synthesis with and without a catalyst, and growth of MWCNT on a substrate and with no substrate). An FMPS and an aerodynamic particle sizer (APS) were used to monitor particle size and concentrations. Background particle concentrations were determined to assist in quantifying the release of SWCNT and MWCNT during their synthesis and handling. Samples were also collected for analysis by TEM to determine particle morphology and elemental composition. Particle measurements made inside a fume hood during the synthesis of SWCNT were found to be as high as 10^7 particles/cm3 with an average particle diameter of 50 nm; PBZ samples collected on workers near the fume hood were considerably lower (< 2,000 particles/cm3). The difference between particle concentrations obtained during SWCNT growth using a catalyst and the control data (no catalyst) was small and was postulated to be a result of particles being released from the reactor walls of the furnace even when no SWCNT were being manufactured. Particle measurements made during the synthesis of MWCNT were found to peak at 4 × 10^6 particles/cm3 when measured inside the fume hood. Particle size ranged from 25 to 100 nm when a substrate was used for MWCNT growth and from 20 to 200 nm when no substrate was present. Airborne particle concentrations and particle size were found to vary because of the temperature of the reactor, with higher particle concentrations and smaller particle sizes observed at higher temperatures. PBZ samples collected on workers near the fume hood during MWCNT synthesis had particle concentrations similar to background particle concentrations. TEM analysis of MWCNT samples indicated the presence of individual particles as small as 20 nm, with particle agglomerates as large as 300 nm. Some individual MWCNT were observed but were often accompanied by clusters of carbon and iron particles. The diameters of the tubes were reported to be about 50 nm. The use of a fume hood that was extra wide and high and operated at a constant face velocity of 0.7 m/s appeared to be effective in minimizing the generation of turbulent airflow at the hood face, which contributed to the good performance of the fume hood in capturing the airborne release of SWCNT and MWCNT during their synthesis. Lee et al. investigated the potential airborne release of MWCNT at seven facilities (e.g., research laboratories, small-scale manufacturing) where MWCNT was either being synthesized by CVD or handled (e.g., ultrasonic dispersion, spraying). Real-time aerosol monitoring was conducted using a scanning mobility particle sizer (SMPS) and a CPC to determine particle size and concentration. PBZ and area samples were collected for determining airborne mass concentrations (total suspended particulate matter) and for TEM (NIOSH Method 7402) and SEM analysis for particle identification and characterization.
Background measurements of airborne nanoparticle exposures were determined at two of the seven worksites before starting work to assist in establishing a baseline for airborne nanoparticle concentrations. Most of the handling of MWCNT during synthesis and application was performed inside a laboratory fume hood, where most of the measurements were made. Exposure concentrations of total suspended particulate matter ranged from 0.0078 to 0.3208 mg/m3 for PBZ samples and from 0.0126 to 0.1873 mg/m3 for area samples. TEM and SEM analysis of filter samples found no detectable amounts of MWCNT, but only aggregates of metal particles (e.g., iron and aluminum) that were used as catalysts in the synthesis of MWCNT. The highest airborne particle releases were observed in area samples collected during catalyst preparation (18,600-75,000 particles/cm3 for 20-30 nm diameter particles) and during the opening of the CVD reactor (6,857 particles/cm3 for 20-50 nm diameter particles).
Other handling processes, such as CNT preparation, ultrasonic dispersion, and opening the CNT spray cover, also released nanoparticles.
The ultrasonic dispersion of CNT generated particles in the range of 120 to 300 nm, which were larger than those released from other processes.
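Quantifying a task-specific release from readings like these comes down to subtracting background from process measurements in each instrument size bin. The sketch below illustrates that arithmetic with hypothetical SMPS values; the bin counts are invented for illustration and are not data from Lee et al.

```python
import numpy as np

# Hypothetical SMPS readings (particles/cm^3) in four size bins (nm).
# Illustrative values only -- not data from Lee et al.
bins_nm = np.array([20, 30, 50, 100])
process = np.array([18600.0, 75000.0, 9400.0, 1200.0])
background = np.array([900.0, 1100.0, 800.0, 600.0])

# Background-corrected release attributable to the task; clip at zero so
# sampling noise cannot produce a negative "release".
release = np.clip(process - background, 0.0, None)

for d, r in zip(bins_nm, release):
    print(f"{d:>4} nm bin: net release {r:,.0f} particles/cm^3")
```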
The release of airborne carbon-based nanomaterials (CNMs) was investigated during the transfer and ultrasonic dispersion of MWCNT (10-20 nm diameters), fullerenes, and carbon black (15 nm diameter) inside a laboratory fume hood with the airflow turned off and the sash halfway open. Airborne exposure measurements were made during the weighing and transferring of dry CNMs to beakers filled with reconstituted freshwater (with and without natural organic matter), which were then sonicated. The study was designed to determine the relative magnitude of airborne nanomaterial emissions associated with tasks and materials used to evaluate environmentally relevant matrices (e.g., rivers, ponds, reservoirs). Direct-reading real-time instruments (i.e., CPC, OPC) were used to determine airborne particle number concentrations, with the results compared with particle number concentrations determined from general air samples collected in the laboratory before and after the laboratory process. Samples were also collected for TEM analysis to verify the presence of CNMs. Airborne particle number concentrations for all tasks exceeded background particle concentrations and were inversely related to particle size, with the size distribution skewed toward those CNMs with an aerodynamic diameter < 1 µm. Airborne particle number concentrations for MWCNT and carbon black during the sonication of water samples were significantly greater than those found during the weighing and transferring of dry CNMs. TEM analysis of airborne area samples revealed agglomerates of all CNMs, with MWCNT agglomerates observed to be 500 to 1,000 nm in diameter.
The National Institute for Occupational Safety and Health (NIOSH) conducted emission and exposure assessment studies at 12 sites where engineered nanomaterials were produced or used. Studies were conducted in research and development laboratories, pilot plants, and small-scale manufacturing facilities handling SWCNT, MWCNT, CNF, fullerenes, carbon nanopearls, metal oxides, electrospun nylon, and quantum dots. Airborne exposures were characterized using a variety of measurement techniques (e.g., CPC, OPC, TEM). The purpose of the studies was to determine whether airborne exposures to these engineered nanomaterials occur and to assess the capabilities of various measurement techniques in quantifying exposures. In a research and development laboratory handling CNF, airborne particle number concentrations (determined by CPC) were reported as 4,000 particles/cm3 during weighing/mixing and 5,000 particles/cm3 during wet sawing. These concentrations were substantially less than the reported background particle concentration of 19,500 particles/cm3. Samples collected for TEM particle characterization indicated the aerosol release of some CNF. All handling of CNF was performed in a laboratory hood (with HEPA-filtered vacuum) for the weighing/mixing and wet-saw cutting of CNF composite materials.
In a facility making CNF in a chemical vapor phase reactor, OPC particle count concentrations ranged from 5,400 particles/cm3 (300-500 nm particle size) to a high of 139,500 particles/cm3 (500-1,000 nm particle size). Higher airborne particle concentrations were found during the manual scooping of CNF in the absence of exposure control measures. Samples collected for TEM particle characterization indicated the aerosol release of some CNF. In another research and development laboratory, the potential for airborne exposure to MWCNT was evaluated during weighing, mixing, and sonication. All handling of MWCNT was performed in a laboratory hood (without HEPA-filtered vacuum). Particle concentrations were determined by CPC (particle size 10-1,000 nm) and OPC (particle sizes 300-500 nm and 500-1,000 nm). CPC particle concentrations ranged from 1,480-1,580 particles/cm3 (weighing MWCNT in hood) to 2,200-2,800 particles/cm3 (sonication of MWCNT). The background particle concentration determined by CPC was 700 particles/cm3. Airborne particle concentrations determined by OPC ranged from 3,900-123,400 particles/cm3 (weighing) to 6,500-42,800 particles/cm3 (sonication). Background particle concentrations determined by OPC ranged from 700 particles/cm3 (1-10 µm particle size) to 13,700 particles/cm3 (300-500 nm particle size). The higher particle concentrations determined with the OPC indicated the presence of larger, possibly agglomerated particles. Samples collected for TEM particle characterization indicated the aerosol release of agglomerated MWCNT.
Subsequent studies conducted by NIOSH at six primary and secondary pilot or small-scale manufacturing facilities (SWCNT, MWCNT, CNF) employed a combination of filter-based samples to evaluate PBZ and area respirable and inhalable mass concentrations of EC, as well as concentrations of CNT and CNF structures determined by TEM analysis. A total of 83 filter-based samples (30 samples at primary and 22 at secondary manufacturers) were collected for EC determination (NIOSH Method 5040) and 31 samples for TEM analysis (NIOSH Method 7402). Similar processes and tasks were reported in the three primary and three secondary manufacturers of CNT. These processes and tasks consisted of:
(1) similar production and harvesting methods for CNT, and common cleaning/housekeeping procedures, in primary manufacturers, and (2)

In a study designed to investigate the release of CNT during the dry and wet cutting of composite materials containing CNT, airborne samples were collected to determine particle number, respirable mass, and nanotube concentrations. Two different composites containing MWCNT (10-20 nm diameters) were cut using a band saw or rotary cutting wheel. The laboratory study was designed to simulate the industrial cutting of CNT-based composites. PBZ and area samples (close to the emission source) were collected during dry cutting (without emission controls) and during wet cutting (equipped with a protective guard surrounding the rotary cutting wheel). The cutting of composite materials lasted from 1 to 3 minutes. The dry cutting of composite materials generated statistically significant (P < 0.05) quantities of airborne nanoscale and fine particles when compared with background airborne particle concentrations.
Although the particle number concentration was dominated by the nanoscale and fine fractions, 71% to 89% of the total particle surface area was contributed by the respirable (1-10 µm) aerosol fraction.
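The shift from number dominance to surface-area dominance follows directly from geometry: surface area scales with the square of diameter, so a small count of micrometer-sized particles can outweigh a large count of nanoscale ones. A minimal sketch, assuming spherical particles and hypothetical bin counts (not the study's data):

```python
import numpy as np

# Illustrative number concentrations (particles/cm^3) by midpoint diameter (um).
# Hypothetical values chosen so nanoscale bins dominate the particle count.
d_um = np.array([0.05, 0.1, 0.5, 2.0, 5.0])
n_cc = np.array([50000.0, 20000.0, 3000.0, 400.0, 50.0])

# Surface area per particle scales with d^2 (spheres assumed), so the
# area-weighted distribution shifts toward the coarse bins.
area = n_cc * np.pi * d_um**2   # um^2 of surface per cm^3 of air

num_frac = n_cc / n_cc.sum()
area_frac = area / area.sum()
for d, nf, af in zip(d_um, num_frac, area_frac):
    print(f"d = {d:4.2f} um: {nf:6.1%} of number, {af:6.1%} of surface area")
```

With these invented counts the two coarse bins hold under 1% of the particles but roughly three-quarters of the surface area, mirroring the pattern reported for the cutting study.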
During the dry cutting of composites, reported mean PM10 mass concentrations were 2.11 and 8.38 mg/m3 for area samples and 0.8 and 2.4 mg/m3 for PBZ samples. Submicron and respirable fibers were generated from the dry cutting of all composites. TEM analysis of area samples found fiber concentrations that ranged from 1.6 fibers/cm3 (during the cutting of CNT-alumina) to 3.8 fibers/cm3 (during the cutting of carbon-based composite materials). A PBZ fiber concentration of 0.2 fibers/cm3 was observed during the dry cutting of base-alumina composite materials. No fiber measurement data were reported for the wet cutting of composite materials. No increase in mean PM10 mass concentrations was observed in 2 of 3 area samples collected during the wet cutting of composites. In the third sample, the observed high particle concentration was attributed to extensive damage of the protective guard around the rotary cutting wheel.

Bello et al. also investigated the airborne release of CNT and other nanosized fibers during solid core drilling of two types of advanced CNT-hybrid composites: (1) reinforced plastic hybrid laminates (alumina fibers and CNT), and
(2) graphite-epoxy composites (carbon fibers and CNT). Worker PBZ and area samples were collected to determine exposures during the drilling of composite materials with local exhaust ventilation turned off. Four potential exposure-modifying factors were assessed: (1) composite type, (2) drilling rpm (low and high), (3) thickness of the composite, and (4) dry versus wet drilling. Replicate test measurements (10-30 measurements) were taken and analyzed for fibers > 10 nm in diameter. High aspect-ratio fiber concentrations were determined using the sizing and counting criteria in NIOSH Method 7400 (> 5 µm long, aspect ratio > 3). Airborne exposures to both alumina fiber and CNT structures were found, ranging in concentration from 1.0 fibers/cm3 (alumina composite) to 1.9 fibers/cm3 (carbon and CNT composite) for PBZ samples; similar concentrations were observed in area samples. Because sampling volume and fiber surface density on the samples were below the optimal specification range of Method 7400, fiber concentration values were considered first-order approximations. The authors concluded that higher input energies (e.g., higher drilling rpm, larger drill bits) and longer drill times associated with thicker composites generally produced higher exposures, and that the drilling of CNT-based composites generated a higher frequency of nanofibers than had been previously observed during the cutting of CNT-based composites.

Cena and Peters evaluated the airborne release of CNT during the weighing of bulk CNT and the sanding of epoxy nanocomposite sticks measuring 12.5 x 1.3 x 0.5 cm. Epoxy-reinforced test samples were produced using MWCNT (Baytubes®) with 10-50 nm outer diameters and 1-20 µm lengths. The purpose of the study was to (1) characterize airborne particles during the handling of bulk CNT and the mechanical processing of CNT composites, and (2) evaluate the effectiveness of local exhaust ventilation (LEV) hoods in capturing airborne particles generated by sanding CNT composites. Airborne particle number and respirable mass concentrations were measured using a CPC (particle diameters 0.01 to 1 µm) and OPC (particle diameters 0.3 to 20 µm). Respirable mass concentrations were estimated using the OPC data.
Samples for TEM analysis were also collected for particle and CNT characterization. PBZ and source airborne concentrations were determined during two processes: weighing bulk CNT and sanding epoxy nanocomposite test sticks. Exposure measurements were taken under three LEV conditions (no LEV, a custom fume hood, and a biological safety cabinet). CPC and OPC particle concentrations were measured inside a glove box in which bulk CNT (600 mg) was transferred between two 50-ml beakers; background particle concentrations were measured inside the glove box before the process began. To study the sanding process, a worker manually sanded test sticks that contained 2% by weight CNT. Aerosol concentrations were measured for 15-20 min in the worker's breathing zone and at a site adjacent to the sanding process. The sanding process with no LEV was conducted on a 1.2 m by 2.2 m worktable. The sanding was also conducted inside a custom fume hood that consisted of a simple vented enclosure that allowed airflow along all sides of the back panel but had no front sash or rear baffles. The average face velocity of the fume hood was 76 ft/min. Exposures from the sanding process were also assessed while using a biological safety cabinet (class II, type A2).
Particle number concentrations determined during the weighing process contributed little to those observed in background samples (process-to-background ratio (P/B) = 1.06); however, weighing did influence the mass concentration (P/B = 1.79). The GM respirable mass concentration inside the glove box was reported as 0.03 µg/m3 (background GM was 0.02 µg/m3). During the sanding process (with no LEV, in a fume hood, and in a biological safety cabinet), the PBZ nanoparticle number concentrations were negligible compared with background concentrations (average P/B ratio = 1.04). Particles generated during sanding were reported to be predominantly micron-sized with protruding CNT, and very different from bulk CNT, which tended to remain in large (> 1 µm) tangled agglomerates. Respirable mass concentrations in the worker's breathing zone were elevated. However, the concentrations were lower when sanding was performed in the biological safety cabinet (GM = 0.2 µg/m3) than with no LEV (GM = 2.68 µg/m3) or when sanding was performed inside the fume hood (GM = 21.4 µg/m3; P < 0.0001). The poor performance of the fume hood was attributed to the lack of a front sash and rear baffles and its low face velocity.

Some research has been conducted to date on workplace exposure to carbon nanofibers (CNF). In a NIOSH health hazard evaluation conducted at a university-based research laboratory, the potential release of airborne CNF was observed at various processes using real-time aerosol instruments (e.g., CPC, ELPI, aerosol photometer). General area exposure measurements indicated slight increases in airborne particle number and mass concentrations, relative to background measurements (outdoors and offices), during the transfer of CNF prior to weighing and mixing and during the chopping and wet-saw cutting of a polymer composite material. Airborne total carbon mass concentrations (per NIOSH Method 5040, with correction for adsorbed vapor) within the laboratory processing area were 2 to 64 times higher than those of a nearby office area, with the highest peak exposure concentration (1,094 µg/m3) found during the wet-saw cutting of the CNF composite material. No indoor particle concentrations exceeded the outdoor background concentrations. Particles having a diameter of about 400 nm or greater were found in greater number during wet-saw cutting, while the number of particles having a diameter of about 500 nm or greater was elevated during the weighing and mixing of CNF. Airborne samples collected directly on TEM grids were analyzed for the presence of CNF. Some fibers observed by TEM had diameters larger than the 100 nm criterion used to define a nanofiber, which was consistent with results reported by Ku et al., in which the mobility diameter of aerosolized CNF was observed to be larger than 60 nm, with a modal aerodynamic diameter of about 700 nm. The majority of CNF observed by TEM were loosely agglomerated rather than single fibers, which was in general agreement with the particle size measurements made by real-time instruments.
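Several of the measurement studies above summarize direct-reading data as geometric means (GM) and process-to-background (P/B) ratios. A minimal sketch of both statistics, using invented CPC readings rather than any study's data:

```python
import numpy as np

def geometric_mean(x):
    """GM of positive concentration readings."""
    x = np.asarray(x, dtype=float)
    return float(np.exp(np.log(x).mean()))

# Hypothetical 1-s CPC readings (particles/cm^3); illustrative values only.
process_counts = [2300.0, 2500.0, 2450.0, 2600.0]
background_counts = [2250.0, 2400.0, 2350.0, 2380.0]

gm_process = geometric_mean(process_counts)
gm_background = geometric_mean(background_counts)
pb_ratio = gm_process / gm_background

print(f"GM process    = {gm_process:,.0f} particles/cm^3")
print(f"GM background = {gm_background:,.0f} particles/cm^3")
print(f"P/B ratio     = {pb_ratio:.2f}  (~1 suggests little task-related release)")
```

The GM is preferred over the arithmetic mean here because aerosol concentrations tend to be lognormally distributed, and a P/B ratio near 1 indicates the task added little to the ambient particle load.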
Detailed investigations of exposures at different job tasks were conducted at a facility manufacturing and processing CNF. The tamping of a bag to settle its contents, and the subsequent closing of the bag, dispersed CNF through the bag opening into the workplace. High particle number and active surface area concentrations were found during the opening of the dryer and during the manual redistribution of the CNF product. This was attributed to the presence of ultrafine particles emitted from the dryer and to by-products formed through the high-temperature thermal processing of CNF. No elevations in respirable mass concentrations were observed during these operations, suggesting that significant quantities of CNF were not released into the workplace. However, the transfer or dumping of dried CNF from a dryer to a drum, and the subsequent bag change-out of final product, contributed the largest transient increases in respirable mass concentrations, with concentrations exceeding 1.1 mg/m3 for transfer or dumping and 0.5 mg/m3 for bag change-out. The authors concluded that integrated particle number and active surface area concentrations (i.e., using CPC and diffusion charger) were not useful in assessing the contribution of emissions from CNF in the workplace, because measurements were dominated by ultrafine particle emissions.
Respirable particle mass concentrations estimated by the photometer appeared to be the most useful and practical metric for measuring CNF when using direct-reading instruments. Results obtained for filter samples support the direct-reading instrument findings. The TEM analyses of size-selective area samples indicated that large fiber bundles were present. In addition, size-classified samples (collected with impactors) analyzed for EC (by NIOSH Method 5040) indicated CNF particles in the micrometer range.
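The filter-based EC results rest on simple industrial-hygiene arithmetic: blank-correct the areal carbon loading reported by the thermal-optical analyzer, scale to the deposit area, and divide by the sampled air volume. A sketch under assumed sample parameters (the flow rate, deposit area, and loadings below are hypothetical, and this is the bookkeeping step only, not the full NIOSH Method 5040 protocol):

```python
def ec_mass_concentration(ec_ug_per_cm2, blank_ug_per_cm2,
                          deposit_area_cm2, flow_lpm, minutes):
    """Airborne EC concentration (ug/m^3) from a thermal-optical filter result:
    blank-correct the areal EC loading, scale to the whole deposit, and
    divide by the sampled air volume."""
    net_ug = (ec_ug_per_cm2 - blank_ug_per_cm2) * deposit_area_cm2
    volume_m3 = flow_lpm * minutes / 1000.0   # L -> m^3
    return net_ug / volume_m3

# Hypothetical 8-h area sample: 4 L/min through a 25-mm cassette
# (8.55 cm^2 effective deposit area).
c = ec_mass_concentration(ec_ug_per_cm2=1.2, blank_ug_per_cm2=0.2,
                          deposit_area_cm2=8.55, flow_lpm=4.0, minutes=480)
print(f"EC concentration ~ {c:.1f} ug/m^3")
```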
Figure 2-1. Workplaces and job tasks with potential for occupational exposure to carbon nanotubes and nanofibers. Adapted from Schulte et al. 2008.

Various types of laboratory animal studies have been conducted with CNT and CNF using different routes of exposure to evaluate potential toxicity (Tables 3-1 through 3-8). These studies have shown a consistent toxicological response (e.g., pulmonary inflammation, fibrosis) independent of the study design (i.e., intratracheal, aspiration, and inhalation). Exposures to SWCNT, MWCNT, and CNF are of special concern because of their small size and fibrous structure. The nanometer diameters and micrometer lengths of some materials closely resemble the dimensions of some mineral fibers (e.g., asbestos). Results from laboratory animal studies with SWCNT, MWCNT, and CNF also show pulmonary responses similar to those reported for some respirable particles and durable fibers. In some studies, CNT-induced lung fibrosis developed more rapidly and at a lower mass burden than with either ultrafine carbon black or quartz. Pulmonary exposure to CNT has also produced systemic responses, including an increase in inflammatory mediators in the blood, as well as oxidant stress in aortic tissue and increased plaque formation in an atherosclerotic mouse model [Erdely et al. 2009; Stapleton et al. 2011].
CNT and CNF are widely considered durable in biological systems because of the process they undergo during synthesis, in which contaminating catalytic metals are frequently removed by high-temperature vaporization or acid/base treatment. Researchers have measured the durability (in vitro) of four types of CNT, one type of glass wool fiber, and two asbestos fiber types in simulated biological fluid (Gamble's solution), followed by an assessment of their ability to induce an inflammatory response when injected into the abdominal cavities of mice. Three of the four types of CNT tested for durability showed no or minimal loss of mass and no change in fiber length or morphology. When the four CNTs were injected into the peritoneal cavity of mice, an inflammatory and fibrotic response was induced by the three CNTs that had retained their long discrete structures, whereas the other CNT, which was less durable and had shorter structures and/or formed tight bundles, caused minimal inflammation. The findings were consistent with those for asbestos and glass wool fibers after intraperitoneal injection into mice, in which an inflammatory and fibrotic response was elicited by the two asbestos samples, which were both durable and contained a high percentage of long fibers, whereas the glass wool fiber, which was not very durable, caused only minimal inflammation.
The physical-chemical properties (e.g., dimension, composition, surface characteristics) of CNT and CNF can often be modified to accommodate their intended commercial use. CNT and CNF can also be coated or functionalized, thus changing their surface chemistry. Toxicological effects of such changes remain largely unexplored, except for some limited evidence indicating that structural defects, surface and oxidative modification, nitrogen doping, surface functionalization, and polymer (acid- and polystyrene-based) surface coating of CNT can influence their toxicity potential. Recent studies indicate that functionalization of MWCNT with -COOH groups significantly decreases the inflammatory and fibrotic response after aspiration in a mouse model, and that when SWCNT were functionalized by carboxylation and subjected to phagolysosomal fluid, longitudinal splitting and oxidative degradation of the tubes occurred. Kagan et al. reported that myeloperoxidase, which is found in high concentrations in polymorphonuclear neutrophils (PMN), degraded SWCNT in vitro. However, it is uncertain whether PMN-derived myeloperoxidase would degrade SWCNT in vivo (e.g., in the lung) because of the following: (1) PMN recruitment after SWCNT exposure is a transient rather than persistent response, (2) there is no strong evidence for SWCNT phagocytosis by PMN, and (3) SWCNT and MWCNT are found in the lungs of mice months after pharyngeal aspiration.
Several animal studies have shown that the size (e.g., length) of MWCNT and SWCNT may have an effect on their biological activity. Intraperitoneal injection of mice with long MWCNT (20 µm length), but not short MWCNT, produced inflammatory and fibrotic lesions; the response appeared to depend on structures > 5 µm in length. However, when rats were exposed to short MWCNT (< 1 µm length) by intraperitoneal injection, only acute inflammation was observed, with no evidence of mesothelioma over the 2-year post-exposure period. Nagai et al. provided evidence that the carcinogenic potential of MWCNT may be related to their fiber-like properties and dimensions. Fischer 344/Brown Norway rats (male and female, 6 wk old) were injected with doses of 1 or 10 mg of one of five types of MWCNT with different dimensions and rigidity. The thin-diameter MWCNT (~50 nm) with high crystallinity caused inflammation and mesothelioma, whereas thick (~150 nm) or tangled (~2-20 nm) structures were less cytotoxic, inflammogenic, or carcinogenic. A specific mutation of tumor suppressor genes (Cdkn2a/2b) was observed in the mesotheliomas, similar to that observed in asbestos-induced mesotheliomas. In vitro studies with mesothelial cells showed that the thin MWCNT pierced cell membranes and caused cytotoxicity.
Numerous studies have investigated the genotoxic properties of CNT, with results from in vitro assays indicating that exposure to SWCNT and MWCNT can induce DNA damage, micronuclei formation, disruption of the mitotic spindle, and induction of polyploidy. Other in vitro studies of some MWCNT did not show evidence of genotoxicity. The presence of residual metal catalysts was also found to promote the generation of reactive oxygen species (ROS), thereby enhancing the potential for DNA damage. The results from in vitro studies with CNF have also shown that exposure can cause genotoxicity, including aneugenic as well as clastogenic events. In addition, low-dose, long-term exposure of bronchial epithelial cells to SWCNT or MWCNT has been reported to transform these cells so that they exhibit unregulated proliferation, loss of contact inhibition of division, enhanced migration and invasion, and growth in soft agar. When SWCNT-transformed epithelial cells were subcutaneously injected into the hind flanks of immunodeficient nude mice, small tumors were observed at one week post-injection. Histological evaluation of the tumors showed classic cancer cell morphology, including the presence of multinucleated cells, an indicator of mitotic dysfunction.
When CNT and CNF are suspended in test media, agglomerates of various sizes frequently occur. This is particularly evident in test media used in recent studies in which animals have been exposed to CNT suspensions by intratracheal instillation, intraperitoneal injection, or pharyngeal aspiration (a technique in which particle deposition closely resembles inhalation). The agglomerate size for CNT and CNF is normally smaller in a dry aerosol than when suspended in physiological media. Evidence from toxicity studies in laboratory animals indicates that decreasing agglomerate size increases the pulmonary response to exposure. The extent to which agglomerates of CNT and CNF de-agglomerate in biological systems (e.g., in the lung) is unknown. However, a diluted alveolar lining fluid has been shown to substantially improve dispersion of CNT in physiological saline. Mice or rats exposed to SWCNT by IT or pharyngeal aspiration have developed granulomatous lesions at sites in the lung where agglomerates of SWCNT deposited. In addition, interstitial fibrosis has also been reported. This fibrotic response was associated with the migration of smaller SWCNT structures into the interstitium of alveolar septa.
# IT Studies

Lam et al. investigated the toxicity of SWCNT obtained from three different sources, each with different amounts of residual catalytic metals present. Mice were exposed by IT to three different types of SWCNT (containing either 27% Fe, 2% Fe, or 26% Ni and 5% Y) at doses of 0.1 or 0.5 mg, or to carbon black (0.5 mg) or quartz (0.5 mg). The mice were toxicologically assessed 7 or 90 days post-exposure. All types of SWCNT studied produced persistent epithelioid granulomas (which were associated with particle agglomerates) and interstitial inflammation that were dose-related. No granulomas were observed in mice exposed to carbon black, and only mild to moderate inflammation of the lungs was observed in the quartz exposure group. High mortality (5/9 mice) occurred within 4 to 7 days in mice instilled with the 0.5 mg dose of SWCNT containing nickel and yttrium.

Warheit et al. exposed rats via IT to 1 or 5 mg/kg SWCNT, quartz, carbonyl iron, or graphite particles, and evaluated effects at 24 hours, 1 week, 1 month, and 3 months post-exposure. The SWCNT were reported to have nominal diameters of 1.4 nm and lengths > 1 µm, and tended to agglomerate into micrometer-size structures. In this study, ~15% of the SWCNT-instilled rats died within 24 hours of SWCNT exposure, apparently because of SWCNT blockage of the upper airways. In the remaining rats, a transient inflammatory response of the lung (observed up to 1 month post-exposure) and non-dose-dependent multifocal granulomas that were non-uniform in distribution were observed. Only rats exposed to quartz developed a dose-dependent lung inflammatory response that persisted through 3 months. Exposures to carbonyl iron or graphite particles produced no significant adverse effects.
# Pharyngeal Aspiration Studies
Progressive interstitial fibrosis of alveolar walls has also been reported in mice exposed via pharyngeal aspiration to purified SWCNT at doses of 10, 20, or 40 µg/mouse. As in the studies by Lam et al. and Warheit et al., epithelioid granulomas were associated with the deposition of SWCNT agglomerates in the terminal bronchioles and proximal alveoli. This granuloma formation was rapid (within 7 days), dose-dependent, and persisted over the 60-day post-exposure period. A rapid, dose-dependent, and progressive development of interstitial fibrosis in pulmonary regions distant from deposition sites of SWCNT agglomerates was observed, and it appeared to be associated with the deposition of more-dispersed SWCNT structures. At equivalent mass lung burdens, nano-sized carbon black failed to cause any significant pulmonary responses. These findings were consistent with those reported by Mangum et al., in which rats exposed to 2 mg/kg via pharyngeal aspiration developed granulomas at sites of SWCNT agglomerates and diffuse interstitial fibrosis at 21 days post-exposure. Also noted was the formation of CNT structures bridging alveolar macrophages, which may affect normal cell division and/or function. When a more dispersed delivery of SWCNT was given by aspiration to mice (10 µg), an accelerated increase in collagen production in the alveolar interstitium occurred that progressed in the absence of persistent inflammation, with the development of few granulomatous lesions. A significant submicrometer fraction of the dispersed SWCNT was observed to rapidly migrate into alveolar interstitial spaces, with relatively little of the material being a target for macrophage engulfment and phagocytosis.
# Inhalation Studies

Shvedova et al. compared the responses resulting from exposure via pharyngeal aspiration with exposure via inhalation of more-dispersed SWCNT. One set of mice was exposed by inhalation to 5 mg/m3, 5 hr/day for 4 days, while mice exposed by aspiration were given a single dose of 10 or 20 µg. The SWCNT for both studies had dimensions of 0.8-1.2 nm diameters and 100-1,000 nm lengths, with a measured surface area (Brunauer-Emmett-Teller (BET) method) of 508 m2/g. Both studies reported acute lung inflammation followed by the development of granulomatous pneumonia and persistent interstitial fibrosis; these effects were observed for both purified (0.2% Fe) and unpurified (17.7% Fe) SWCNT. The finding that the acute lung inflammation resolved after the end of exposure while the pulmonary fibrotic response persisted or progressed is unusual compared with lung responses observed for other inhaled particles. The findings indicate that the mechanism may involve the direct stimulation of fibroblasts by dispersed SWCNT that translocate to the lung interstitium. Quantitatively, mice exposed by inhalation (dispersed SWCNT) were 4-fold more prone to developing an inflammatory response, interstitial collagen deposition, and fibrosis when compared (at an estimated equivalent lung dose) with mice exposed by aspiration to a less-dispersed suspension of SWCNT. The exposure of mice by inhalation to 5 mg/m3 SWCNT is relevant, because the Occupational Safety and Health Administration (OSHA) permissible exposure limit (PEL) for respirable synthetic graphite of 5 mg/m3 is sometimes used for controlling workplace exposures to CNT.
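The "estimated equivalent lung dose" comparison can be approximated with a standard deposited-dose calculation: concentration × minute ventilation × exposure time × deposition fraction. The sketch below uses round assumed values for mouse ventilation and alveolar deposition fraction; they are illustrative assumptions, not parameters reported by Shvedova et al.

```python
def deposited_dose_ug(conc_mg_m3, hours, minute_vent_ml, dep_fraction):
    """Estimated deposited lung dose (ug) for a rodent inhalation exposure:
    dose = concentration x ventilation x time x deposition fraction.
    Ventilation and deposition fraction are assumed round numbers."""
    air_m3 = minute_vent_ml * 60.0 * hours / 1e6   # mL/min over exposure -> m^3
    return conc_mg_m3 * 1000.0 * air_m3 * dep_fraction

# 5 mg/m^3 for 5 hr/day x 4 days; assume ~30 mL/min mouse ventilation, DF ~ 0.1.
inhaled = deposited_dose_ug(5.0, hours=5 * 4, minute_vent_ml=30.0, dep_fraction=0.1)
print(f"Estimated deposited dose ~ {inhaled:.0f} ug (vs. 10-20 ug aspiration bolus)")
```

Under these assumptions the 4-day inhalation deposits on the order of 20 µg, which is why the two exposure routes can be compared at roughly equivalent lung doses.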
# Pharyngeal Aspiration Studies
Exposure of mice to well-dispersed MWCNT via pharyngeal aspiration has resulted in dose- and time-dependent pulmonary inflammation, as well as central nervous system effects, at doses ranging from 10 to 80 µg/mouse. Exposure of mice to a dispersed suspension of purified MWCNT at doses of 10, 20, 40, or 80 µg resulted in pulmonary inflammation and damage, granulomas, and a rapid and persistent fibrotic response. Morphometric analyses indicated that the interstitial fibrotic response was dose-dependent and progressed through 56 days post-exposure. There was also evidence that MWCNT can reach the pleura and that alveolar macrophages containing MWCNT can migrate to the lymphatics and cause lymphatic inflammation. Some of the MWCNT (mean diameter of 49 nm and mean length of 4.2 µm) were observed penetrating the outer lung wall and entering the intrapleural space [Mercer et al. 2010]. Morphometric analyses indicated that 12,000 MWCNT entered the intrapleural space at 56 days post-exposure to 80 µg of MWCNT.
# IT Studies
Lung inflammation and fibrosis have also been observed in rats exposed by IT to long (5.9 µm) or short (0.7 µm) MWCNT at doses of 0.5, 2, or 5 mg of either ground or unground MWCNT and examined up to 60 days post-exposure. Rats that received ground MWCNT (0.7 µm) showed greater dispersion in the lungs, and fibrotic lesions were observed in the deep lungs (alveolar region). In rats treated with unground MWCNT (5.9 µm), fibrosis appeared mainly in the airways rather than in the lung parenchyma.
The biopersistence of the unground MWCNT was greater than that of the ground MWCNT (81% vs. 36%). At an equal mass dose, ground MWCNT produced an inflammatory and fibrogenic response similar to that of chrysotile asbestos and greater than that of ultrafine carbon black.
Similar findings were reported by Aiso et al., in which rats exposed to IT doses of 0.04 and 0.16 mg of dispersed MWCNT (mean length 5 µm, mean diameter 88 nm) developed transient inflammation and persistent granulomas and alveolar wall fibrosis. Similar acute effects have also been reported in guinea pigs at IT doses of 12.5 mg and 15 mg; in mice at doses of 0.05 mg (average diameter of 50 nm, average length of 10 µm) and at 5, 20, and 50 mg/kg; and in rats dosed at 1, 3, 5, or 7 mg (diameters of 40 to 60 nm, lengths of 0.5 to 5 µm). In contrast, Elgrabli et al. reported cell death but no histopathological lesions or fibrosis in rats exposed at doses of 1, 10, or 100 µg MWCNT (diameters of 20 to 50 nm, lengths of 0.5 to 2 µm). Likewise, Kobayashi et al. observed only transient lung inflammation and a granulomatous response in rats exposed to a dispersed suspension of MWCNT (0.04-1 mg/kg). No fibrosis was reported, but the authors did not use a collagen stain for histopathology, which limited the sensitivity and specificity of their lung tissue analysis.
In a study of rats administered MWCNT or crocidolite asbestos by intrapulmonary spraying (IPS), exposure to either material produced inflammation in the lungs and pleural cavity, in addition to mesothelial proliferative lesions. Four groups of six rats each were given 0.5 ml of 500 µg suspensions, once every other day, five times over a 9-day period, and then evaluated. MWCNT and crocidolite were found to translocate from the lung to the pleural cavity after administration. MWCNT and crocidolite were also observed in the mediastinal lymph nodes, suggesting that a probable route of translocation of the fibers is lymphatic flow. Analysis of tissue sections found MWCNT and crocidolite in focal granulomatous lesions in the alveoli and in alveolar macrophages.
# Inhalation Studies
Several short-term inhalation studies using mice or rats have been conducted to assess the pulmonary and systemic immune effects of exposure to MWCNT. Mitchell et al. reported the results of a whole-body short-term inhalation study with mice exposed to MWCNT (diameters of 10 to 20 nm, lengths of 5 to 15 µm) at concentrations of 0.3, 1, or 5 mg/m3 for 7 or 14 days (6 hr/day), although there was some question regarding whether these structures were actually MWCNT. Histopathology of the lungs of exposed animals showed alveolar macrophages containing black particles; however, there was no observed inflammation or tissue damage. Systemic immunosuppression was observed after 14 days, although without a clear concentration-response relationship. Mitchell et al. reported that the immunosuppression mechanism of MWCNT appears to involve a signal originating in the lungs that activates cyclooxygenase enzymes in the spleen. Porter et al. reported significant pulmonary inflammation and damage in mice 1 day after inhalation of well-dispersed MWCNT (10 mg/m3, 5 hr/day, 2-12 days; mass aerodynamic diameter of 1.3 µm, count aerodynamic diameter of 0.4 µm). In addition, granulomas were observed encapsulating MWCNT in the terminal bronchial/proximal alveolar region of the lung. In an inhalation (nose-only) study with mice exposed to 30 mg/m3 MWCNT (lengths of 0.5 to 50 µm) for 6 hours, a high incidence (9 of 10 mice) of fibrotic lesions occurred. MWCNT were found in the subpleural region of the lung 1 day post-exposure, with subpleural fibrosis occurring at 2 weeks post-exposure and progressing through 6 weeks of follow-up. No fibrosis was observed in mice exposed to 1 mg/m3 of MWCNT or in mice exposed to 30 mg/m3 of nanoscale carbon black.
Subchronic inhalation studies with MWCNT have also been conducted with rats to assess the potential dose-response relationship and time course for developing pulmonary effects. Ma-Hock et al. reported on the results of a 90-day inhalation (head-nose) study with rats exposed at concentrations of 0.1, 0.5, or 2.5 mg/m3 MWCNT (BASF Nanocyl NC 7000) for 6 hr/day, 5 days/week for 13 weeks, with a resultant lung burden of 47-1,170 µg/rat. No systemic toxicity was observed, but the exposure caused hyperplastic responses in the nasal cavity and upper airways (larynx and trachea), and granulomatous inflammation in the lung and in lung-associated lymph nodes, at all exposure concentrations. The incidence and severity of the effects were concentration-related. No lung fibrosis was observed, but pronounced alveolar lipoproteinosis did occur.
Ellinger-Ziegelbauer and Pauluhn conducted a short-term inhalation bioassay (before the Pauluhn 2010a subchronic study) to investigate the dependence of pulmonary inflammation resulting from exposure to one type of MWCNT (Bayer Baytubes®), which was highly agglomerated and contained a small amount of cobalt (residual catalyst). Groups of rats were exposed to 11 mg/m3 MWCNT containing either 0.53% or 0.12% cobalt to assess differences in pulmonary toxicity due to metal contamination. Another group of rats was exposed to 241 mg/m3 MWCNT (0.53% cobalt) for the purpose of hazard identification. All animals received a single nose-only inhalation exposure of 6 hr followed by a post-exposure period of 3 months. The time course of MWCNT-related pulmonary toxicity was compared with that of rats exposed to quartz at post-exposure weeks 1, 4, and 13 to distinguish early, possibly surface area/activity-related effects from retention-related, poorly soluble particle effects. Exposure to either quartz or MWCNT resulted in somewhat similar patterns of concentration-dependent pulmonary inflammation during the early phase of the study. The pulmonary inflammation induced by quartz increased during the 3-month post-exposure period, whereas that induced by MWCNT regressed in a concentration-dependent manner. The time course of pulmonary inflammation associated with retained MWCNT was independent of the concentration of residual cobalt.

Pauluhn, using the same MWCNT (0.53% cobalt) used in the study by Ellinger-Ziegelbauer and Pauluhn, exposed rats (nose-only) at concentrations of 0.1, 0.4, 1.5, or 6 mg/m3 for 6 hr/day, 5 days/week for 13 weeks. The aerosolized MWCNT were described as highly agglomerated (mean diameter of 3 µm). Lung clearance of MWCNT at the low doses was slow, with a marked inhibition of clearance at 1.5 and 6 mg/m3. Histopathology at 6 months post-exposure revealed exposure-related lesions in the upper respiratory tract (e.g., goblet cell hyperplasia and/or metaplasia) and lower respiratory tract (e.g., inflammation in the bronchiolo-alveolar region) in animals exposed at concentrations of 0.4, 1.5, and 6 mg/m3, as well as inflammatory changes in the distal nasal cavities similar to those found by Ma-Hock et al. In rats exposed at 6 mg/m3, a time-dependent increase in bronchiolo-alveolar hyperplasia was observed, as well as changes in granulomas and an increase in collagen deposition that persisted through the 39-week post-exposure observation period. No treatment-related effects were reported for rats exposed at 0.1 mg/m3.
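The observation that clearance was slow at low doses and markedly inhibited at higher ones can be framed with the standard one-compartment retention model: lung burden grows with a constant deposition rate and declines by first-order clearance. A sketch with assumed round numbers; the deposition rate and clearance half-life below are illustrative, not Pauluhn's measurements.

```python
import math

def retained_burden_ug(dep_rate_ug_per_day, days, half_life_days):
    """One-compartment retention: L(t) = (R/k)(1 - exp(-k t)), k = ln2/t_half.
    Treats deposition as continuous over the exposure days -- a deliberate
    simplification; inputs are assumptions, not Pauluhn's measurements."""
    k = math.log(2.0) / half_life_days
    return (dep_rate_ug_per_day / k) * (1.0 - math.exp(-k * days))

# Assume ~1 ug/day deposited on exposure days and a 100-day clearance
# half-life; 13 weeks x 5 exposure days/week ~ 65 exposure days.
print(f"Burden after 13 wk ~ {retained_burden_ug(1.0, 65, 100.0):.0f} ug")
```

Overload-type inhibition of clearance corresponds to lengthening the half-life as the burden grows, which pushes L(t) toward linear accumulation instead of a plateau.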
In a report submitted by Arkema to EPA, rats exposed (nose-only) to agglomerates of MWCNT (Arkema) at concentrations of 0.1, 0.5, or 2.5 mg/m3 for 6 hr/day for 5 days exhibited histopathological effects consistent with those reported by Ma-Hock et al., Ellinger-Ziegelbauer and Pauluhn, and Pauluhn. An increase in various cytokines and chemokines in the lung, along with the development of granulomas, was found in the 0.5 and 2.5 mg/m3 exposure groups, while no treatment-related effects were reported at 0.1 mg/m3.
# SWCNT and MWCNT
# Intraperitoneal Studies

Intraperitoneal injection studies in rodents have been frequently used as screening assays for potential mesotheliogenic activity in humans. To date, exposures to only a few fiber types are known to produce mesotheliomas in humans; these include the asbestos minerals and erionite fibers. Several animal studies have been conducted to investigate the hazard potential of MWCNT and SWCNT of various sizes and doses to cause a carcinogenic response. Takagi et al. reported on the intraperitoneal injection of 3 mg of MWCNT in p53 +/- mice (a tumor-sensitive, genetically engineered mouse model), in which approximately 28% of the structures were > 5 µm in length, with an average diameter of 100 nm. After 25 weeks, 88% of mice treated with MWCNT revealed moderate to severe fibrotic peritoneal adhesions, fibrotic peritoneal thickening, and a high incidence of macroscopic peritoneal tumors. Histological examination found mesothelial lesions near fibrosis and granulomas. Similar findings were also seen in the crocidolite asbestos-treated positive control mice. Minimal mesothelial reactions and no mesotheliomas were produced by the same dose of (nonfibrous) C60 fullerene. Poland et al. reported that the peritoneal (abdominal) injection of long MWCNT, but not short MWCNT, induced inflammation and granulomatous lesions on the abdominal side of the diaphragm at 1 week post-exposure. This study, in contrast to the Takagi et al. study, used wild-type mice exposed to a much lower dose (50 µg) of MWCNT. Although this study documented acute inflammation, it did not evaluate whether this inflammation would persist and progress to mesothelioma. Murphy et al. reported similar findings in C57Bl/6 mice injected with different types of MWCNT composed of different tube dimensions and characteristics (e.g., tangled) or injected with mixed-length amosite asbestos. Mice were injected with a 5 µg dose directly into the pleural space and evaluated after 24 hours and 1, 4, 12, and 24 weeks. Mice injected with long (> 15 µm) MWCNT or asbestos showed significantly increased granulocytes in the pleural lavage compared with the vehicle control at 24 hours post-exposure. Long MWCNT caused rapid and persistent inflammation, fibrotic lesions, and mesothelial cell proliferation at the parietal pleural surface at 24 weeks post-exposure. Short (< 4 µm) and tangled MWCNT did not cause a persistent inflammatory response and were mostly cleared from the intrapleural space within 24 hours.
A lack of a carcinogenic response was reported by Muller et al. and Varga and Szendi in rats, and by Liang et al. in mice, following intraperitoneal injection or implantation of MWCNT or SWCNT. No mesotheliomas were noted 2 years after intraperitoneal injection of MWCNT in rats at a single dose of 2 or 20 mg, or after MWCNT (phosphorylcholine-grafted) were given to mice at daily doses of 10, 50, or 250 mg/kg and evaluated at day 28. However, the MWCNT samples used in the Muller et al. and Liang et al. studies were very short (avg. < 1 µm in length observed by Muller et al. and < 2 µm in length observed by Liang et al.), and the findings were consistent with the low biological activity observed in the Poland et al. study when mice were exposed to short MWCNT. Varga and Szendi reported on the implantation of either MWCNT or SWCNT in F-344 rats (six per group) at a dose of 10 mg (25 mg/kg bw). Gelatin capsules containing either SWCNT (< 2 nm diameters × 4-15 µm lengths), MWCNT (10-30 nm diameters × 1-2 µm lengths), or crystalline zinc oxide (negative control) were implanted into the peritoneal cavity. Histological examination at 12 months revealed only a granulomatous reaction of foreign-body type, with epithelioid and multinucleated giant cells, in CNT-exposed animals. No information was reported on what effect the delivery of SWCNT and MWCNT in gelatin capsules had on their dispersion in the peritoneal cavity, given the tendency of CNT to agglomerate. If SWCNT and MWCNT remained agglomerated following delivery, this may have accounted for the lack of a mesothelioma-inducing effect. The low biological activity observed for the short MWCNT sample (< 2 µm) used in the study was consistent with the findings of Poland et al.,
Muller et al., and Liang et al., in which short MWCNT were also used.

Li et al. reported that multiple aspirations of SWCNT (20 µg/mouse, every 2 weeks, for 2 months) in ApoE -/- mice caused a 71% increase in aortic plaques. Inhalation of MWCNT by rats (26 mg/m3 for 5 hr; lung burden of 22 µg) resulted in a 92% depression of the responsiveness of coronary arterioles to dilators 24 hr post-exposure, while aspiration of MWCNT has been shown to increase baroreflex activity in rats. Furthermore, pharyngeal aspiration of MWCNT (80 µg/mouse) induced mRNA for certain inflammatory mediators and markers of blood-brain barrier damage in the olfactory bulb, frontal cortex, midbrain, and hippocampus 24 hr post-exposure. Several mechanisms have been suggested to explain these systemic responses:
# Translocation of CNT to Systemic Sites
Translocation of intraperitoneally instilled MWCNT from the abdominal cavity to the lung has been reported; however, there is no evidence that the reported systemic effects are associated with translocation of CNT from the lung to the affected tissue. Aspirated gold-labeled SWCNT were not found in any organ 2 weeks post-exposure. Pulmonary exposure to particles causes localized inflammation at the sites of particle deposition in the alveoli. Erdely et al. reported that aspiration of SWCNT or MWCNT (40 µg/mouse) induced a small but significant increase in blood neutrophils, mRNA expression, and protein levels for certain inflammatory markers in the blood at 4 hr post-exposure, but not at later times. Pulmonary CNT exposure also significantly elevated gene expression for mediators, such as Hif-3a and S100a, in the heart and aorta at 4 hr post-exposure. Evidence also exists that pulmonary exposure to particles alters systemic microvascular function by potentiating PMN as they flow through pulmonary capillaries in close proximity to affected alveoli. These potentiated blood PMN adhere to microvessel walls and release reactive species that scavenge NO produced by endothelial cells. Therefore, less dilator-induced NO diffuses to vascular smooth muscle, resulting in less dilation.
# Carbon Nanofibers (CNF)
Recent observations indicate that exposure to CNF can cause respiratory effects similar to those observed in animals exposed to CNT. In this study, female mice were exposed by pharyngeal aspiration to SWCNT (40 µg), CNF (120 µg), or crocidolite (120 µg) and evaluated at 1, 7, and 28 days post-exposure. The delivered structure number and particle surface area at the highest doses were 1.89 × 10⁶ and 0.042 m2 for SWCNT, 4.14 × 10⁶ and 0.05 m2 for CNF, and 660 × 10⁶ and 0.001 m2 for asbestos. The SWCNT and CNF were purified and contained 0.23% and 1.4% iron, respectively, compared with the 18% iron of the asbestos sample. SWCNT had diameters of 1 to 4 nm and lengths ranging from 1 to 3 µm, whereas the diameters of CNF ranged from 60 to 150 nm and lengths from approximately 5 to 30 µm. The fiber lengths of asbestos ranged from 2 to 30 µm. On a mass dose basis, inflammation and lung damage at 1 day post-exposure followed the potency sequence SWCNT > CNF > asbestos.
# Systemic Inflammation
The same potency sequence was observed for TNF and IL-6 production at 1 day post-exposure. SWCNT agglomerates were associated with the rapid (7 days) development of granulomas, while neither CNF nor asbestos (being more dispersed) caused granulomatous lesions. Interstitial fibrosis (noted as TGF production, lung collagen, and Sirius red staining of the alveolar septa) was observed at 28 days post-exposure, with a mass-based potency sequence of SWCNT > CNF = asbestos. The potency sequence for fibrosis was not found to be related to the structure number or particle surface area (determined by the BET gas absorption method) delivered to the lung. However, it is likely that gas absorption overestimates the surface area of agglomerated SWCNT structures delivered to the lung. Estimates of effective surface area, based on geometric analysis of structures including agglomeration, provided an improved dose metric that correlated with the toxicological responses to CNT and CNF.
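The geometric dose metric can be illustrated with the surface area of an idealized long cylinder: per unit mass, SSA = 4 / (ρ·d), so surface area per gram rises steeply as fiber diameter shrinks. A minimal sketch with assumed densities and diameters (illustrative values, not the study's measurements):

```python
def cylinder_ssa_m2_per_g(diameter_nm, density_g_cm3):
    """Geometric specific surface area of a long cylindrical fiber.
    For length >> diameter the end caps are negligible and SSA = 4/(rho*d).
    Geometric estimate for discrete fibers only: agglomeration lowers the
    effective value, and BET on bulk powder can differ substantially."""
    d_cm = diameter_nm * 1e-7            # nm -> cm
    ssa_cm2_per_g = 4.0 / (density_g_cm3 * d_cm)
    return ssa_cm2_per_g / 1e4           # cm^2/g -> m^2/g

# Assumed densities and diameters for illustration, not the study's data.
print(f"SWCNT (d ~ 2 nm):   {cylinder_ssa_m2_per_g(2, 1.8):,.0f} m^2/g")
print(f"CNF   (d ~ 100 nm): {cylinder_ssa_m2_per_g(100, 1.9):,.0f} m^2/g")
```

The roughly 50-fold gap between these two estimates shows why effective (geometric) surface area can rank materials very differently from a bulk BET measurement of agglomerated powder.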
Respiratory effects after a subchronic inhalation exposure of rats to CNF (purity > 99.7%) were recently reported by DeLorme et al. Both male and female Sprague-Dawley rats were exposed by nose-only inhalation to CNF (VGCF-H, Showa Denko) for 6 hr/day, 5 days/week, at concentrations of 0, 0.54, 2.5, or 25 mg/m3 over a 90-day period and evaluated 1 day post-exposure. Assessments included bronchoalveolar lavage fluid (BALF) analysis, histopathology, and cell proliferation studies of the terminal bronchiole, alveolar duct, and subpleural regions of the respiratory tract. The rats exposed at 25 mg/m3 and the non-exposed control group were also evaluated after a 3-month recovery period. The aerosol exposure was characterized using SEM and TEM to determine the size distribution and fiber concentrations using NIOSH Method 7400. At an aerosol concentration of 0.54 mg/m3, the fiber concentration was 4.9 fibers/cc with an MMAD of 1.9 µm (GSD 3.1); at 2.5 mg/m3, the concentration was 56 fibers/cc with an MMAD of 3.2 µm (GSD 2.1); and at 25 mg/m3, the concentration was 252 fibers/cc with an MMAD of 3.3 µm (GSD 2.0). The mean lengths and diameters of the fibers were 5.8 µm and 158 nm, respectively, with a surface area (by BET) of 13.8 m2/g. At 1 day post-exposure, wet lung weights were significantly elevated compared with controls in male rats at 25 mg/m3 and in female rats at 2.5 and 25 mg/m3. Small increases in inflammation of the terminal bronchiole and alveolar duct regions were also observed in rats exposed to 2.5 mg/m3, while histopathological assessments of rats exposed at 25 mg/m3 found subacute to chronic inflammation of the terminal bronchiole and alveolar duct regions of the lungs, along with thickening of the interstitial walls and hypertrophy/hyperplasia of type II pneumocytes. No adverse histopathological findings were reported for the 0.54 mg/m3 exposure group. After the 3-month recovery period, lung weights remained elevated in both sexes in the 25 mg/m3 exposure group. Inflammation and the numbers (> 70%) of fiber-laden alveolar macrophages still persisted in the lungs of rats exposed to 25 mg/m3, with the inflammatory response reported to be relatively minor but significantly increased compared with the non-exposed control group. Fibers were also observed to persist in the nasal turbinates at 3 months post-exposure in all rats exposed at 25 mg/m3, causing a nonspecific inflammatory response. In contrast to Murray et al., no fibrosis was noted in this inhalation study. The most likely reason for this discrepancy is a difference in alveolar lung burden between the Murray et al. and DeLorme et al. studies.
In the former, the lung burden was 120 µg/mouse. In contrast, the lung burden was not reported or estimated in the DeLorme et al. rat study. However, with an MMAD as large as 3.3 µm, nasal filtering would be expected to be high and alveolar deposition relatively low.
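For a lognormal aerosol, the reported MMAD and GSD also fix the count median diameter through the Hatch-Choate relation, CMAD = MMAD × exp(−3 ln²GSD). A short check using the values reported for the 25 mg/m3 group:

```python
import math

def count_median_from_mmad(mmad_um, gsd):
    """Hatch-Choate conversion for a lognormal aerosol:
    CMAD = MMAD * exp(-3 * ln(GSD)^2)."""
    return mmad_um * math.exp(-3.0 * math.log(gsd) ** 2)

# Reported for the 25 mg/m^3 group: MMAD 3.3 um, GSD 2.0.
print(f"Count median aerodynamic diameter ~ {count_median_from_mmad(3.3, 2.0):.2f} um")
```

So although the mass median sits at 3.3 µm, where nasal filtering is efficient, the typical fiber by count is submicron; most of the aerosol mass, not most of the fibers, is removed in the upper airways.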
# Conclusions: Hazard and Exposure Assessment
Results of laboratory animal studies with both SWCNT and MWCNT report qualitatively similar pulmonary responses, including acute lung inflammation, epithelioid granulomas (microscopic nodules), and rapidly developing fibrotic responses at relatively low mass doses (Section 3). Animal studies with CNT and CNF have shown the following:
1. Early onset and persistent pulmonary fibrosis in SWCNT-, MWCNT-, and CNF-exposed animals in short-term and subchronic studies.
2. Similar pulmonary responses in animals (e.g., acute lung inflammation, interstitial fibrosis) when exposed to purified and unpurified SWCNT [Lam et al. 2004; Shvedova et al. 2005, 2008].
3. Equal or greater potency of SWCNT, MWCNT, and CNF compared with other inhaled particles (ultrafine carbon black, crystalline silica, and asbestos) in causing adverse lung effects, including pulmonary inflammation and fibrosis.
4. CNT agglomeration affects the site of lung deposition and response; large agglomerates tend to deposit at the terminal bronchioles and proximal alveoli and induce a granulomatous response, while more dispersed structures deposit in the distal alveoli and cause interstitial fibrosis. Agglomerated SWCNT tend to induce granulomas, while more dispersed CNF and asbestos did not.
5. Intraperitoneal injection of long (> 5 µm) MWCNT in mice causes fibrotic lesions and mesothelial cell proliferation.
Although pulmonary responses to SWCNT and MWCNT are qualitatively similar, quantitative differences in pulmonary responses have been reported. In mice exposed to CNT by pharyngeal aspiration (10 µg/mouse), SWCNT caused a greater inflammatory response than MWCNT at 1 day post-exposure. Morphometric analyses indicate that well-dispersed purified SWCNT (< 0.23% iron) are not well recognized by alveolar macrophages (only 10% of the alveolar burden being within alveolar macrophages), and that 90% of the dispersed SWCNT structures have been observed to cross alveolar epithelial cells and enter the interstitium. In contrast, approximately 70% of MWCNT in the respiratory airways are taken up by alveolar macrophages, 8% migrate into the alveolar septa, and 22% are found in granulomatous lesions. This difference is possibly due to the greater tube count per mass of SWCNT. In addition, although both SWCNT and MWCNT have been reported in the subpleural tissue of the lung, penetration of the visceral pleura and translocation to the intrapleural space has been reported only for MWCNT. Despite these differences, CNTs of various types, purified or unpurified and dispersed or agglomerated, all cause adverse lung effects in rats or mice at relatively low mass doses that are relevant to potential worker exposures.
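The "greater tube count per mass" point is easy to make concrete: modeling each tube as a solid cylinder, the mass per tube scales with d²·L, so thin SWCNT vastly outnumber thick MWCNT at equal mass. A sketch with assumed dimensions and density (illustrative values only; note that treating a single-walled tube as a solid cylinder overstates its mass, so the real disparity is even larger):

```python
import math

def tubes_per_ug(diameter_nm, length_um, density_g_cm3):
    """Approximate tube count in 1 ug, modeling each tube as a solid
    cylinder (mass ~ rho * pi * r^2 * L). Dimensions and density are
    assumptions for illustration; a hollow single-walled tube weighs
    less than this model says, so SWCNT counts are underestimated."""
    r_cm = diameter_nm * 1e-7 / 2.0
    l_cm = length_um * 1e-4
    mass_g = density_g_cm3 * math.pi * r_cm**2 * l_cm
    return 1e-6 / mass_g

print(f"SWCNT (1 nm x 1 um):  {tubes_per_ug(1, 1, 1.8):.1e} tubes/ug")
print(f"MWCNT (50 nm x 5 um): {tubes_per_ug(50, 5, 1.8):.1e} tubes/ug")
```

Even with these rough assumptions the count differs by about four orders of magnitude, which is consistent with equal mass doses presenting far more discrete structures to macrophages when the material is SWCNT.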
Animal studies have also shown asbestos-type pathology associated with the longer, straighter CNT structures. Mesothelial tumors have been reported in mice receiving intraperitoneal injection of long MWCNT (5-20 µm in length), whereas chronic bioassays of short MWCNT (avg. < 1 µm and < 2 µm in length, respectively) did not produce mesothelioma. These findings are consistent with those reported by Yamashita et al. and Nagai et al., who found that MWCNT injected into the peritoneal cavity of mice or rats generated inflammation/genetic damage and mesothelioma that were related to the dimensions of the CNT. Results from these peritoneal assay studies indicate that CNT of specific dimensions and durability can cause inflammation, fibrosis, and mesothelial tumors in mice and rats; however, additional experimental animal research is needed to (1) provide quantitative data on the biopersistence of different types of CNT in the lung, and (2) address the key question of the precise dimensions (and possibly other physical-chemical characteristics) of CNT that pose a potential pathogenic risk for cancer, including mesothelioma.
As synthesized, raw (unpurified) CNT contain as much as 30% catalytic metals. Catalytic metals, such as the iron in iron-rich SWCNT, can generate hydroxyl radicals in the presence of hydrogen peroxide and organic (lipid) peroxides, and when human epidermal keratinocytes are exposed to unpurified SWCNT (in vitro cellular studies), oxidant injury occurs. These catalytic metals can be removed from raw CNT by acid treatment or by high temperature to yield purified CNT with low metal content. Removal of catalytic metals abolishes the ability of SWCNT or MWCNT to generate hydroxyl radicals. However, in laboratory animal studies the pulmonary bioactivity of SWCNT does not appear to be affected by the presence or absence of catalytic metals. Lam et al. compared the pulmonary response of mice to intratracheal instillation of raw (containing 25% metal catalyst) versus purified (~2% iron) SWCNT and found that the granulomatous reaction was not dependent on metal contamination. Likewise, the acute inflammatory reaction of mice after aspiration of raw (30% iron) versus purified (< 1% iron) SWCNT was not affected by metal content [Shvedova et al. 2005, 2008].
Pulmonary exposures to CNT have produced systemic responses, including an increase in inflammatory mediators in the blood, as well as oxidant stress in aortic tissue and increased plaque formation in an atherosclerotic mouse model [Erdely et al. 2009]. Pulmonary exposure to MWCNT also depresses the ability of coronary arterioles to respond to dilators. These cardiovascular effects may be due to neurogenic signals from sensory irritant receptors in the lung. Mechanisms, such as inflammatory signals or neurogenic pathways, causing these systemic responses are under investigation.
Results from in vitro cellular studies have shown that SWCNT can cause genotoxicity and abnormal chromosome number because of interference with mitosis (cell division): SWCNT can disrupt the mitotic spindles of dividing cells and induce the formation of anaphase bridges among the nuclei. In vitro studies also indicate that exposure to CNF can cause genotoxicity (micronuclei) as a result of reactive oxygen species (ROS) production, which in turn reacts with DNA, and by interfering physically with the DNA/chromosomes and/or mitotic apparatus. Low-dose, long-term exposure of bronchial epithelial cells to MWCNT has been shown to induce cell transformation, and these transformed cells induce tumors after injection into nude mice.
Currently, there are no studies reported in the literature on adverse health effects in workers producing or using CNT or CNF. However, because humans can also develop lung inflammation and fibrosis in response to inhaled particles and fibers, it is reasonable to assume that at equivalent exposures (e.g., lung burden/alveolar epithelial cell surface) to CNT and CNF, workers may also be at risk of developing these adverse lung effects.
Although data on workplace exposures to CNT and CNF are limited, aerosolization of CNT and CNF has been shown to occur at a number of operations during research, production, and use of CNT and CNF, including such work tasks as transferring, weighing, blending, and mixing. Worker exposure to airborne CNT and CNF has frequently been observed to be task-specific and short-term in duration, with exposure concentrations (frequently reported as particle number or mass concentrations) found to exceed background exposure measurements when appropriate engineering controls are not used to reduce exposures. Results from studies also suggest that the airborne concentration and the physical-chemical characteristics of particles (e.g., discrete versus agglomerated CNT) released while handling CNT may vary significantly with production batch and work process. Comprehensive workplace exposure evaluations are needed to characterize and quantify worker exposure to CNT and CNF at various job tasks and operations, and to determine which control measures are the most effective in reducing worker exposures.
The findings of adverse respiratory effects (i.e., pulmonary fibrosis, granulomatous inflammation) and systemic responses in animals indicate the need for protective measures to reduce the health risk to workers exposed to CNT and CNF. Available evidence also indicates that the migration of MWCNT into the intrapleural space could potentially initiate mesothelial injury and inflammation that over time cause pleural pathology, including mesothelioma. Long-term inhalation studies are needed to determine whether CNT and CNF of specific dimension and chemistry can cause cancer in laboratory animals at doses equivalent to potential workplace exposures. In addition, the potential for migration of CNT through the lungs and for accumulation in the intrapleural space with time after inhalation requires further investigation. Until results from animal research studies can fully explain the mechanisms by which inhalation exposure to CNT and CNF causes adverse lung effects and possible systemic effects, all types of CNT and CNF should be considered an occupational respiratory hazard, and the following actions should be taken to minimize health concerns:
1. Minimize workplace exposures.
2. Establish an occupational health surveillance program for workers exposed to CNT and CNF (Section 6, Appendix B).

NIOSH bases its recommended exposure limits (RELs) on quantitative risk assessments when possible. Quantitative risk assessment provides estimates of the severity and likelihood of an adverse response associated with exposure to a hazardous substance. The hazard and quantitative risk assessments (Section 4 and Appendix A) provide the health basis for developing a recommended exposure limit (REL) for CNT and CNF. Establishing health-based exposure limits is the first consideration by NIOSH in setting a REL. The analytical feasibility of measuring worker exposures to airborne CNT and CNF is also taken into account in establishing the REL (Section 6.1).
In general, quantitative risk assessment involves the following steps. First, a data set is selected that best depicts a dose-response relationship; in this case, the relationship between exposure to CNT and pulmonary effects in animals. Then, a critical dose in the animal lungs is calculated. A frequently used indicator of critical dose is the benchmark dose (BMD), which is defined as the dose corresponding to a small increase in response (e.g., 10%) over the background level of response. Next, the dose in humans that is equivalent to the critical dose in the animals is estimated. This requires adjusting for species differences between animals and humans. In the absence of specific data, it is assumed that an equivalent dose in animals and humans will result in the same risk of disease, based on the assumption that the same mechanism of action operates in both species. After the critical average dose in human lungs is estimated from the animal data, an equivalent workplace concentration over a full working lifetime is derived. This is accomplished by using mathematical and physiological models to estimate the fraction of the dose that reaches various parts of the respiratory tract and is deposited and cleared. Variability in human dose and response, including sensitive subpopulations, and uncertainty in extrapolating the animal data to humans are typically addressed with uncertainty factors in the absence of specific data.
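To make these steps concrete, the following minimal sketch strings them together for a hypothetical rat benchmark dose: the animal lung dose is normalized to humans by the ratio of alveolar surface areas, and a working lifetime 8-hr TWA concentration is back-calculated under a deposited-dose (no clearance) assumption. All numerical values are illustrative placeholders, not the values used in this risk assessment.

```python
# Illustrative sketch of the extrapolation steps described above.
# All parameter values are hypothetical placeholders.

RAT_ALV_SURFACE_M2 = 0.4      # assumed rat alveolar surface area (m2)
HUMAN_ALV_SURFACE_M2 = 102.0  # assumed human alveolar surface area (m2)

def human_equivalent_lung_dose(rat_bmd_ug):
    """Scale a rat benchmark lung dose (ug) to humans by the ratio of
    alveolar surface areas (i.e., equal dose per unit surface area)."""
    return rat_bmd_ug * (HUMAN_ALV_SURFACE_M2 / RAT_ALV_SURFACE_M2)

def working_lifetime_twa(human_dose_ug,
                         years=45, days_per_year=250,
                         air_m3_per_workday=9.6,          # assumed 8-hr ventilation
                         alv_deposition_fraction=0.05):   # assumed for the aerosol size
    """Back-calculate the constant 8-hr TWA concentration (ug/m3) whose
    cumulative deposited alveolar dose equals the target human dose
    (deposited dose; clearance neglected)."""
    total_air_m3 = years * days_per_year * air_m3_per_workday
    return human_dose_ug / (total_air_m3 * alv_deposition_fraction)

rat_bmd_ug = 20.0  # hypothetical rat BMD (ug/lung) at a 10% benchmark response
hed = human_equivalent_lung_dose(rat_bmd_ug)
print(f"human-equivalent lung dose: {hed:.0f} ug")
print(f"working lifetime 8-hr TWA: {working_lifetime_twa(hed):.2f} ug/m3")
```

With these placeholder values the result is on the order of 1 µg/m3, which illustrates how low rat benchmark lung burdens translate into low workplace mass concentrations.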
NIOSH determined that the best data to use for a quantitative risk assessment, and as the basis for a REL, were the nonmalignant pulmonary data from short-term and subchronic animal studies. In these studies, lung exposures to CNT (i.e., various types of MWCNT and SWCNT, purified and unpurified, dispersed or agglomerated, and with different metal content) were observed to cause early-stage adverse lung effects including pulmonary inflammation, granuloma, alveolar septal thickening, and pulmonary fibrosis (Section 3 and Appendix A). NIOSH considers these animal lung effects to be relevant to workers because similar lung effects have also been observed in workers with occupational lung disease associated with exposure to various types of inhaled particles and fibers. Human-equivalent risk estimates were derived from animal dose-response data (in rats and mice). Human-equivalent exposures over a 45-year working lifetime were estimated to be associated with either a specified risk level (e.g., 10%) of early-stage lung effects or with a no-observed-adverse-effect level based on the animal studies. In the absence of validated lung dosimetry models for CNT, lung doses were estimated using spherical particle-based models and CNT airborne size data, assuming either deposited or retained lung dose in animals or humans.
The findings from this analysis indicate that workers are potentially at risk of developing adverse lung effects if exposed to airborne CNT during a 45-year working lifetime. The working lifetime exposure concentration estimates (8-hr TWA) associated with a 10% excess risk* of early-stage lung effects† are:

| Lung response | Maximum likelihood estimate (MLE) | 95% lower confidence limit (LCL) |
|---|---|---|
| Minimal lung effects (grade 1 or higher) | 0.5 to 4 µg/m3 | 0.2 to 2 µg/m3 |
| Slight or mild lung effects (grade 2 or higher) | 1 to 44 µg/m3 | 0.7 to 19 µg/m3 |

Abbreviation: TWA = time-weighted average.
*Excess (exposure-attributable) risk during a 45-year working lifetime.
†Histopathology findings of granulomatous inflammation or alveolar septal thickening in rat subchronic inhalation studies of multiwall carbon nanotubes. Estimates vary by rat study and lung burden estimation method (Appendix A, Tables A-7 and A-8).

The corresponding excess risk† estimates of early-stage lung effects* for a 45-year working lifetime of exposure at 1 µg/m3 (8-hr TWA) are:

| Lung response | Maximum likelihood estimate (MLE) | 95% upper confidence limit (UCL) |
|---|---|---|
| Minimal lung effects (grade 1 or higher) | 2.4% to 33% | 5.3% to 54% |
| Slight or mild lung effects (grade 2 or higher) | 0.23% to 10% | 0.53% to 16% |

Abbreviation: TWA = time-weighted average.
*Histopathology findings of granulomatous inflammation or alveolar septal thickening in rat subchronic inhalation studies of multiwall carbon nanotubes.
†Exposure-attributable risk (added risk above background). Estimates vary by rat study and lung burden estimation method (Appendix A, Tables A-7 and A-8).
Risk estimates derived from other animal studies (e.g., single dose with up to 90-day follow-up) using SWCNT and other types of MWCNT (Tables A-3 and A-4) are consistent with these estimates, i.e., 0.08-12 µg/m3 (8-hour TWA) (95% LCL estimates). These working lifetime exposure concentration estimates vary by approximately two orders of magnitude (across the different types of CNT, study designs, animal species/strain and gender, routes of exposure, and response endpoints); yet all of these estimates are relatively low airborne mass concentrations, most within ~1-10 µg/m3 (8-hour TWA). NIOSH does not consider a 10% estimated excess risk over a working lifetime to be acceptable for these early-stage lung effects, and the REL is therefore set at the optimal limit of quantification (LOQ) of the analytical method for elemental carbon (NIOSH Method 5040) (Appendix C).

Among the uncertainties in this risk assessment using animal data, there is uncertainty in extrapolating the respiratory effects observed in short-term or subchronic animal studies to estimate the probability of chronic respiratory effects in humans. In the absence of chronic data, these animal studies provide the best available information for deriving initial estimates of health risk for use in REL development. Subchronic (13-wk) exposure studies are a standard toxicity assay used in human health risk assessment, although the studies with shorter exposure and post-exposure durations also provide useful information about the relationship between CNT lung dose and response. To the extent that the precursor effects of chronic disease are observed in these shorter-term studies, the hazard and risk estimates would be expected to provide useful information for chronic disease prediction and prevention. Although there is uncertainty in the benchmark dose estimates from the subchronic studies because of the dose spacing and high response proportions, these estimates are similar to the NOAEL and LOAEL values reported in these studies (Table A-12).
One of the measures of pulmonary fibrosis used in the shorter-term studies, alveolar epithelial cell thickness (due to collagen deposition), was previously used in the U.S. EPA risk assessment for the ozone standard. This biological response was selected by EPA as the adverse lung response for cross-species dose-response extrapolation because it indicates "fundamental structural remodeling".
Some of these studies provide data comparing the potency of CNT with that of other particles or fibers for which animal and human data are available on the long-term adverse health effects. These studies show that, on a mass basis, CNT had equal or greater potency (pulmonary inflammation or fibrosis response at a given mass dose) than ultrafine carbon black, crystalline silica, or chrysotile asbestos. These comparative toxicity findings between CNT and other well-studied particles or fibers help to reduce the uncertainty about whether the lung effects in these short-term studies are relevant to evaluating the chronic respiratory hazard of CNT. Based on currently available data, it is difficult to assess the relative toxicity of the various types of CNT and CNF because there has been limited systematic study of various CNT and CNF using the same study design. The available studies differ in factors that include the rodent species and strain, the techniques and assays for measuring lung effects, and the exposure and post-exposure durations. Despite differences in the type and composition of the SWCNT and MWCNT used in the animal studies, the risk estimates across the different types of CNT and studies are associated with relatively low mass exposure concentrations. Although data from laboratory animal studies with CNF are limited, the similarities in physical-chemical properties and adverse lung effects between CNF and CNT support the need to control exposures to CNF at the REL derived for CNT.
NIOSH is recommending an occupational exposure limit for CNT and CNF to minimize the risk of developing adverse lung effects over a working lifetime. A mass-based airborne exposure limit is being recommended because this exposure metric is the same as that used in determining the dose-response relationship in animal studies and deriving the risk estimates, and because it is the most common exposure metric currently used in monitoring workplace exposures to CNT and CNF. The REL is based on the respirable particle-size fraction because the adverse lung effects in the animal studies were observed in the alveolar (gas-exchange) region. "Respirable" is defined as the aerodynamic size of particles that, when inhaled, are capable of depositing in the alveolar region of the lungs. Sampling methods have been developed to estimate the airborne mass concentration of respirable particles.

One of the earliest OELs for CNT was proposed by the British Standards Institute: a benchmark exposure limit (BEL) of 0.01 fiber/cm3, or one-tenth of the asbestos exposure limit (Table 5-5). Nanocyl derived an estimated OEL of 2.5 µg/m3 for an 8-hr TWA exposure by applying an overall assessment (a.k.a. uncertainty) factor of 40 to the LOAEL of 0.1 mg/m3 in the Ma-Hock et al. subchronic rat inhalation study of MWCNT. Aschberger et al. proposed OELs of 1 µg/m3 for the MWCNT studied by Ma-Hock et al. and 2 µg/m3 for the MWCNT studied by Pauluhn, by adjusting 0.1 mg/m3 (the LOAEL in Ma-Hock et al. and the NOAEL in Pauluhn) for rat-to-human differences in daily exposure duration and respiratory volume, and applying overall assessment factors of 50 and 25, respectively.
Pauluhn derived an OEL using subchronic data on rats inhaling MWCNT (Baytubes®). This approach was based on the biological mechanism of volumetric overloading of alveolar macrophage-mediated clearance of particles from the lungs of rats. Increased particle retention half-time (an indication of lung clearance overload) was reported in rats exposed by subchronic inhalation to MWCNT (Baytubes®) at 0.1, 0.4, 2.5, or 6 mg/m3. The overloading of rat lung clearance was observed at lower mass doses of MWCNT (Baytubes®) than for other poorly soluble particles, and the particle volume dose was better correlated with retention half-time among poorly soluble particles including CNT. Pauluhn reported benchmark concentration (BMC) estimates of 0.16 to 0.78 mg/m3 for the rat lung responses of pulmonary inflammation and increased collagen, but selected the lower NOAEL of 0.1 mg/m3 to derive a human-equivalent concentration. The NOAEL was adjusted for human and rat differences in factors affecting the estimated particle lung dose (i.e., ventilation rate, alveolar deposition fraction, retention kinetics, and total alveolar macrophage cell volume in each species). The product of these ratios resulted in a final factor of 2, by which the rat NOAEL was divided to arrive at a human-equivalent concentration of 0.05 mg/m3 (8-hr TWA) as the OEL for MWCNT (Baytubes®). No uncertainty factors were used in deriving that estimate.
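The arithmetic common to these OEL derivations is a point of departure (NOAEL or LOAEL) divided by a dosimetric adjustment and an overall assessment (uncertainty) factor. The sketch below reproduces the two calculations just described; the function and parameter names are ours, for illustration only.

```python
# Sketch of the point-of-departure arithmetic described in the text.

def oel_ug_m3(pod_mg_m3, dosimetric_adjustment=1.0, assessment_factor=1.0):
    """OEL (ug/m3) from a rat point of departure (mg/m3), divided by a
    rat-to-human dosimetric adjustment and an overall assessment factor."""
    return pod_mg_m3 * 1000.0 / (dosimetric_adjustment * assessment_factor)

# Nanocyl: LOAEL 0.1 mg/m3 with an overall assessment factor of 40
print(oel_ug_m3(0.1, assessment_factor=40))      # -> 2.5 ug/m3
# Pauluhn: NOAEL 0.1 mg/m3 with a net dosimetric factor of 2 and no
# additional uncertainty factors
print(oel_ug_m3(0.1, dosimetric_adjustment=2))   # -> 50.0 ug/m3 (0.05 mg/m3)
```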
The Japanese National Institute of Advanced Industrial Science and Technology (AIST) derived an OEL for CNT of 30 µg/m3, based on studies supported by the New Energy and Industrial Technology Development Organization (NEDO) of Japan. Rat NOAELs for pulmonary inflammation were identified in 4-week inhalation studies of SWCNT and MWCNT. Human-equivalent NOAELs were estimated by accounting for rat and human differences in exposure duration, ventilation rate, particle deposition fraction, and body weight. The rat NOAELs of 0.13 and 0.37 mg/m3 for SWCNT and MWCNT, respectively, were estimated to be equivalent to 0.03 and 0.08 mg/m3 in humans, including adjustment by an uncertainty factor of 6. This total uncertainty factor included a factor of 2 for uncertainty in subchronic-to-chronic extrapolation and a factor of 3 for uncertainty in rat-to-human toxicokinetic differences (factors of 1 were assumed for toxicodynamic differences between rats and humans and for worker inter-individual variability). A relationship was reported between the BET specific surface area of various types of CNT and pulmonary inflammation (percent neutrophils in bronchoalveolar lavage fluid) (Figure V.2 in Nakanishi). Thus, the OEL of 0.03 mg/m3 was proposed for all types of CNT, based on the data for the SWCNT with the relatively high specific surface area of ~1,000 m2/g (which was noted to be more protective for other CNTs with lower specific surface area). A period-limited (15-yr) OEL was proposed because of uncertainty about chronic effects and on the premise that the results will be reviewed again within that timeframe as further data become available.
In summary, these currently proposed OELs for CNT range from 1 to 50 µg/m3 (8-hr TWA concentration), including the NIOSH REL of 1 µg/m3. Despite the differences in risk assessment methods and assumptions, all of the derived OELs for CNT are low airborne mass concentrations relative to OELs for larger respirable carbon-based particles. For example, the current U.S. OELs for graphite or carbon black are approximately 2.5 to 5 mg/m3. Each of these CNT risk assessments supports the need to control exposures to CNT in the workplace to low airborne mass concentrations (µg/m3) to protect workers' health.

Animal data have been used to evaluate the health hazard and risk of occupational exposure to CNT and CNF. Limited human exposure data are available, and NIOSH is not aware of any studies or reports at this time of any adverse health effects in workers producing or using CNT or CNF. The best available scientific information for developing recommended exposure limits is from the subchronic (13-wk) animal inhalation studies of two types of MWCNT and the shorter-term animal studies of SWCNT and other types of MWCNT.
The analysis of animal data in this risk assessment includes: (1) identifying the adverse health effects that are associated with exposure to CNT or CNF in laboratory animals; (2) evaluating the severity of the response and the relevance to humans; and (3) estimating the human-equivalent dose and likelihood (risk) of adverse effects in workers. Ideally, sufficient evidence is desired to derive exposure limits that are estimated to be associated with essentially zero risk of an adverse health effect even if a worker is exposed 8 hr/day, 40 hr/week over a 45-yr working lifetime. However, limitations in the scientific data result in uncertainties about those hazard and risk estimates. Characterizing the degree of that uncertainty, and the extent to which those data are useful for occupational health risk management decision-making, is an important step in risk assessment. Alternative models and methods contribute to the differences in the risk estimates, and there is uncertainty about which biological endpoints, animal models, and interspecies and dose-rate extrapolation methods may be most predictive of possible human health outcomes. These uncertainties can result in either underestimation or overestimation of the true health risk to workers at a given exposure scenario. Each of these areas is discussed further below.
# Major Areas of Uncertainty

(1) Lung response and severity level
The REL is based on estimates of excess risk of early-stage noncancer lung effects, which NIOSH has determined are relevant to human health risk assessment (Section A.2.1.3). The extent to which these lung responses would be associated with functional deficits in animals or clinically significant effects in humans is uncertain. However, these lung responses include early-onset fibrosis that persisted or progressed after the end of exposure.
Limited evidence in animals suggests that these effects may be associated with some decrement in lung function (reduced breathing rate in mice). A quantitative measure of pulmonary fibrosis, alveolar interstitial (septal, or connective tissue) thickening, has also been reported in these studies [Shvedova et al. 2005, 2008].
(2) Dose rate and retention
Appendix A provides analyses showing the quantitative influence of dose rate and lung retention assumptions on the risk estimates and REL derivation (Tables A-5 and A-6; Section A.6.3.2.2). The lung effects were assumed to be associated with the total lung dose, regardless of the dose rate. If the average daily deposited lung dose is assumed (i.e., no difference in rat or human clearance rates), then the human-equivalent concentration would be ~30 times higher than that based on the ICRP clearance model, and ~10 times higher than that assuming simple first-order kinetics. The human-equivalent working lifetime (8-hr TWA) estimates based on deposited lung dose (assuming no clearance) are lower by a factor of ~5-7 than the estimates based on retained lung burden (assuming normal clearance) (Tables A-5 and A-6).
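A minimal sketch of the deposited-versus-retained comparison, assuming simple first-order clearance with an illustrative half-time (not a value taken from this document), is shown below.

```python
# Deposited vs. retained lung burden under first-order clearance.
# The daily deposit and clearance half-time are illustrative only.
import math

def lung_burden(daily_deposit_ug, work_days, halftime_days=None):
    """Cumulative deposited dose (halftime_days=None, i.e., no clearance),
    or retained burden applying one first-order clearance step per
    working day (a simplification that ignores non-working days)."""
    if halftime_days is None:
        return daily_deposit_ug * work_days
    k = math.log(2) / halftime_days   # first-order rate constant (1/day)
    burden = 0.0
    for _ in range(work_days):
        burden = (burden + daily_deposit_ug) * math.exp(-k)
    return burden

deposit = 1.0            # hypothetical ug deposited per working day
days = 45 * 250          # 45-year working lifetime
print(lung_burden(deposit, days))                     # deposited: 11250 ug
print(lung_burden(deposit, days, halftime_days=600))  # retained: ~865 ug
```

The ratio of the two results illustrates how strongly the clearance assumption drives the estimated lung burden, and hence the human-equivalent concentration.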
(3) Inter-species dose normalization
Alternative assumptions about the biologically relevant measure of equivalent dose can result in considerable differences in the human-equivalent dose. NIOSH normalized the inter-species lung dose based on the ratio of the human-to-animal average alveolar surface area. Alternatively, Pauluhn normalized the dose from rats to humans based on the average total alveolar macrophage cell volume. This difference resulted in a factor of ~4 in the human-equivalent lung burdens and working lifetime 8-hr TWA concentration estimates (Table A-13).
(4) Low dose extrapolation
# Minor Areas of Uncertainty
(1) Effect level estimation
An effect level is a dose associated with a specified effect (or lack of observed effect). A BMD is a statistical estimate of an effect level.

(2) Route of exposure

Among the various studies and routes of exposure (Tables A-3 through A-5), no clear differences in exposure and risk estimates are seen for route of exposure (versus variability due to other differences among studies).
(3) Rat lung dose estimation

In the absence of CNT-specific lung models, standard rat and human lung dosimetry models were used to estimate either the deposited or the retained lung dose. These two estimates are considered to represent the upper and lower bounds on the possible lung burden estimates. Assuming deposited dose (no clearance) versus retained dose (normal clearance) resulted in a difference of approximately five-fold; the actual values are expected to lie within this range. The working lifetime exposure estimates (associated with 10% excess risk of early-stage lung effects) were lower when based on deposited dose estimates than when based on retained dose estimates (Appendix A, Table A-5).

An important first step in applying any strategy is to develop an inventory of the processes and job activities (e.g., handling of dry powders, use of composite materials) that place workers at risk of exposure. This inventory can be used to determine the number of workers potentially exposed and to make a qualitative assessment of the workers and processes with the highest potential for exposure.
The strategy should also incorporate provisions to quantify the airborne release of CNT and CNF occurring at specific processes or job activities, to provide "activity pattern data".
Activity pattern data are useful for identifying possible causes of high exposure for remediation; however, these data are vulnerable to spatial variation in exposure concentrations and should not be used to predict worker exposures. For example, in the study by Birch et al., personal exposure to CNF was much higher than the area samples, depending on location. Respirable EC exposures for two employees, one working mainly in a thermal treatment area and one in a CNF reactor area, were approximately 45 µg/m3 and 80 µg/m3, respectively, while the corresponding area samples were about 32 µg/m3 in the thermal treatment area and 13 µg/m3 in the CNF reactor area. The area EC concentration in the reactor area was less than half that in the thermal treatment area, but the personal sample collected in the reactor area was nearly twice as high. Because area samples are often not predictive of personal exposure, extrapolating personal exposure from area concentrations should not be done without a thorough assessment of the workplace to establish whether a valid extrapolation is possible. NIOSH and others have developed exposure assessment guidance for determining the release of engineered nanoparticles that can be adapted to determining sources of exposure to CNT and CNF.
To ensure that worker exposure to CNT or CNF is being maintained below the REL, several exposure measurement strategies are available. These strategies can be tailored to the specific workplace depending on the number of workers, the complexity of the work environment (e.g., process type and rate of operation, exposure control methods, physical state and properties of material), and the available resources. One approach for determining worker exposure would be to initially target similarly exposed groups of workers. This initial sampling effort may be more time-efficient and require fewer resources for identifying workers with exposures to CNT or CNF above the REL. However, this measurement strategy may produce incomplete and upwardly biased exposure estimates if the exposures are highly variable. Therefore, repeated measurements on randomly selected workers may be required to account for between- and within-worker variation in exposure concentrations. Because there is no 'best' exposure measurement strategy that can be applied to all workplaces, multi-day random sampling of workers (all workers, if the exposed workforce is small) may be required for an accurate assessment of worker airborne exposure concentrations of CNT and CNF.
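As an illustration of summarizing repeated personal samples against the REL, the sketch below assumes exposures are approximately lognormal and estimates the fraction of daily exposures exceeding the REL; the sample values are invented.

```python
# Lognormal summary of hypothetical repeated personal EC samples (ug/m3).
import math
from statistics import NormalDist

samples = [0.4, 0.9, 1.6, 0.7, 2.2, 0.5]    # hypothetical 8-hr TWA results
logs = [math.log(x) for x in samples]
n = len(logs)
mu = sum(logs) / n
sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / (n - 1))

gm, gsd = math.exp(mu), math.exp(sigma)     # geometric mean and SD
rel = 1.0                                   # REL: 1 ug/m3 EC, 8-hr TWA
exceedance = 1.0 - NormalDist(mu, sigma).cdf(math.log(rel))
print(f"GM = {gm:.2f} ug/m3, GSD = {gsd:.2f}, "
      f"estimated P(exposure > REL) = {exceedance:.0%}")
```

Note that in this invented data set the geometric mean is below the REL, yet a substantial fraction of daily exposures is estimated to exceed it; this is the kind of pattern that motivates the repeated, randomized sampling described above.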
# CNT and CNF Measurement
A multi-tiered exposure measurement strategy is recommended for determining worker exposure to CNT and CNF (see Figure 6-1). The selection of workers and the frequency with which they should be sampled should follow guidelines established for the exposure monitoring program (Section 6.1.1). Initially, more samples may be required to characterize the workplace thoroughly. This initial assessment will help refine the sampling approach and determine whether EC interference is an issue. Careful consideration of the environmental background is essential. For example, outdoor EC may sometimes be higher than indoor background, depending on the facility's air handling system. If so, the indoor EC background may be more representative for area and worker samples.
In workplaces where exposure to other types of EC (e.g., diesel soot, carbon black) may occur, the initial evaluation of a worker's exposure should include the simultaneous collection of a personal respirable EC sample and a personal sample for electron microscopy analysis (e.g., TEM, SEM). Electron microscopy analysis, in conjunction with energy dispersive X-ray spectroscopy (EDS), can be used for CNT and CNF identification. In addition, consideration should be given to the sizing and counting of CNT and CNF structures during electron microscopy analysis, should future efforts to control occupational exposures be based on a different exposure metric (e.g., number concentrations of airborne CNT and CNF structures in a given size bin). While no specific electron microscopy (e.g., TEM, SEM) method exists for the sizing and counting of CNT and CNF structures, methods used in the analysis of other 'fibrous' materials are available [NIOSH 1994a; ISO 1999, 2002] and could be adapted for the characterization of exposures.
NIOSH investigators have conducted a number of surveys at CNT and CNF producers and/or secondary users [Evans et al. 2010; Birch 2011a; Birch et al. 2011b; Dahm et al. 2011].
# Method 5040 Limit of Detection
As with all analytical methods, the LOD is not a fixed value. However, the airborne EC LOD originally reported for NIOSH Method 5040 (i.e., about 2 µg/m3, or an LOQ of 7 µg/m3) was a high estimate. That LOD was based on analysis of pre-cleaned media blanks from different filter lots, over a 6-month period, and by different analysts at two different laboratories. Further, variability in the total carbon (TC) results, rather than the EC results, was used to estimate the LOD. These combined factors gave a conservative (high) estimate of the EC LOD.
In practice, a much lower EC LOD is obtained with NIOSH Method 5040 than was originally reported, because the variability of EC results for a set of media blanks submitted (with the sample set) for the LOD (LOQ) determination is much lower than that reported for the total carbon (TC) results. Thus, if EC is of primary interest, as with CNT/CNF measurement, and the level of organic carbon (OC) contamination is acceptable (with respect to the OC and TC LOD), EC results for as-received filters should be used to determine the EC LOD (Appendix C).
Estimates of the EC LODs and LOQs (in units of µg EC/cm2 and µg EC/m3 of air) determined with 25-mm and 37-mm quartz filter media from a given lot, and with manual splits assigned, are reported in Tables 6-1 and 6-3. As stated above, the NIOSH Method 5040 LOD depends on the air volume, filter size, sample portion analyzed (usually 1.5 cm2), and the media blank variability. The latter is used to determine the LOD in units of µg/cm2. Expressed as an air concentration, the EC LOD (µg/m3) corresponding to the EC LOD (µg/cm2) determined with media blanks (i.e., LOD = 3 times the standard deviation for the blanks) can be calculated by the following equation:
EC LOD (µg/m3) = EC LOD (µg/cm2) × filter deposit area (cm2) / air volume (m3)
This equation explains why a lower LOD (µg/m3) can be obtained by reducing the filter size (deposit area), increasing the air volume, and minimizing the variability of the media blanks (i.e., the EC LOD in µg/cm2). The LOD is improved by using a smaller filter size because the deposit density is higher for an equivalent mass deposited. The same applies to the LOQ, commonly defined as 10 times the standard deviation (SD) for the blanks, or 3.3 times the LOD.
If 0.02 µg EC/cm2 is taken as the SD for media blanks (with manual OC-EC split adjustment), the LODs and LOQs (in µg EC/m3) for different air volumes, 25-mm and 37-mm filters, and a 1.5 cm2 filter portion analyzed would be as listed in Table 6-4. Results for an SD double this value (i.e., SD = 0.04 µg EC/cm2) are also reported as worst-case estimates, but SDs are seldom this high. If SDs for media blanks are frequently above 0.02 µg EC/cm2, the cause of the high blank variability should be identified and corrected.
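The sketch below implements the LOD/LOQ air-concentration calculation just described. The effective filter deposit diameters are nominal assumptions for the two cassette sizes and should be confirmed for the actual samplers used.

```python
# EC LOD/LOQ (ug/m3) from media-blank variability, per the equation above.
import math

# assumed effective deposit diameters (mm) -- nominal values only
DEPOSIT_DIAMETER_MM = {25: 21.6, 37: 33.0}

def ec_lod_ug_m3(blank_sd_ug_cm2, filter_mm, air_volume_m3, k=3.0):
    """LOD (k=3) or LOQ (k=10) as an air concentration:
    k * SD(blanks, ug/cm2) * deposit area (cm2) / air volume (m3)."""
    radius_cm = DEPOSIT_DIAMETER_MM[filter_mm] / 20.0
    deposit_area_cm2 = math.pi * radius_cm ** 2
    return k * blank_sd_ug_cm2 * deposit_area_cm2 / air_volume_m3

# SD = 0.02 ug EC/cm2, 25-mm filter, 0.5 m3 air sample
print(ec_lod_ug_m3(0.02, 25, 0.5))           # LOD
print(ec_lod_ug_m3(0.02, 25, 0.5, k=10.0))   # LOQ
# Air volume from a cyclone at 4.2 L/min run for 2 hours: ~0.5 m3
print(4.2 * 120 / 1000)
```

Note how the smaller deposit area of the 25-mm filter lowers the LOD relative to a 37-mm filter for the same collected air volume, consistent with the discussion above.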
Based on EC results for media blanks (Tables 6-1 and 6-3), a filter loading of about 0.3 µg/cm2 (i.e., at or above the EC LOQs reported in Tables 6-1 and 6-3) will provide quantitative results.
As described above (Section 6.1.2), higher-flow-rate respirable samplers (cyclones) with a 25-mm cassette can improve sample collection to permit measurement of CNT and CNF above the LOQ (i.e., 1 µg/m3) for samples collected over less than a full work shift. Examples of sampling periods and flow rates that provide air volumes of about a half cubic meter or higher are listed in Table 6-5. A larger filter portion can also be used to further lower the LOD, but the instrument's small (quartz tube) oven and the need for proper sample alignment limit the amount of sample that can be analyzed.
# 6.3 Worker Education and Training
Establishing a program that includes the education and training of workers on the potential hazards of CNT and CNF and their safe handling is critical to preventing adverse health effects from exposure.
Research has shown that training can attain immediate and long-term objectives when (1) workers are educated about the potential hazards of their job, (2) there are improvements in knowledge and work practices, (3) workers are provided the necessary skills to perform their job safely, and (4) the training is reinforced over time.

Local exhaust ventilation (LEV) (e.g., fume hoods)

Advantages:
- Hoods can be tailored to the process or work task to optimize the capture of emissions.
- Usually requires lower overall exhaust airflow rates than dilution ventilation systems.

Disadvantages:
- Air volumes and face velocity of the LEV must be maintained to ensure the capture of emissions.
- Workers must be trained in the correct use.
- The fume hood sash opening needs to be adjusted to ensure proper hood face velocity.
- The system exhaust flow rate may need careful evaluation to ensure adequate capture while minimizing loss of product.

C. Downflow booths

Small room or enclosure with low-velocity (100 ft/min) downward airflow to push/pull contaminants away from the worker's breathing zone.

Advantages:
- Emissions are pushed away from the worker's breathing zone.
- Flexible control that can be used for several tasks/operations.
- Useful for manual operations for which a more contained enclosure is not feasible (e.g., larger amounts of materials or equipment).

Disadvantages:
- Air volumes and control velocities of the booth must be monitored and maintained to ensure proper performance.
- Worker technique and interface with the work process can interfere with the capture of emissions.
- Workers must be trained in the correct use.
A program for educating workers should also include both instruction and "hands-on" training that addresses the following:
- The potential health risks associated with exposure to CNT and CNF.
- The safe handling of CNT, CNF, and CNT- and CNF-containing materials to minimize the likelihood of inhalation exposure and skin contact, including the proper use of engineering controls, PPE (e.g., respirators, gloves), and good work practices.
# 6.4 Cleanup and Disposal
Procedures should be developed to protect workers from exposure to CNT and CNF during the cleanup of CNT and CNF spills and of CNT- or CNF-contaminated surfaces. Inhalation and dermal exposures will likely present the greatest risks. The potential for inhalation exposure during cleanup will be influenced by the likelihood of CNT and CNF becoming airborne, with bulk CNT and CNF (powder form) presenting a greater inhalation potential than CNT and CNF in solution (liquid form), and liquids in turn presenting a greater potential risk than CNT- and CNF-encapsulated materials.
It would be prudent to base strategies for dealing with spills and contaminated surfaces on the use of current good practices, together with available information on exposure risks. Given the limited amount of data on dermal exposure to CNT and CNF, it would be prudent to wear protective clothing and gloves when

- all technical measures to eliminate or control the release of CNT and CNF have not been successful, or
- in emergencies.
If protective clothing and/or gloves are worn, particular attention should be given to preventing CNT and CNF exposure to abraded or lacerated skin. Based on limited experimental evidence, airtight fabrics made of nonwoven textile appear to be more efficient in protecting workers against nanoparticles than fabrics made of woven cotton or polyester. A study designed to evaluate the penetration of nano- and submicron particles through various nonwoven fabrics found minimal penetration (< 5%) of iron oxide particles (< 100 nm) through nonwoven fabrics typically used for hospital frocks, hoodless coveralls, and firefighter ensemble insulation. The challenge when selecting appropriate protective apparel is to strike a balance between comfort and protection. Garments that provide the highest level of protection (e.g., an impermeable Level A suit) are also the least comfortable to wear for long periods of time, while garments that are probably the least protective (e.g., a thin cotton lab coat) are the most breathable and comfortable to wear. The efficiency of commercial gloves in preventing dermal exposure to nanoparticles varies depending on the glove material, its thickness, the manner in which it is used (e.g., long exposure times, other chemical exposures), and the presence of other workplace aerosols.

Based on this information, the respirator program manager may decide to choose a respirator with a higher assigned protection factor (APF) or a respirator with a higher level of filtration performance (e.g., changing from an N95 to a P100). Studies on the filtration performance of N95 filtering facepiece respirators have found that mean penetration levels for 40-nm particles range from 1.4% to 5.2%, indicating that N95 and higher-performing respirator filters would be effective at capturing airborne CNT and CNF [Balazy et al. 2006; Rengasamy et al. 2007, 2008]. Recent studies also show that nanoparticles < 20 nm are effectively captured by NIOSH-approved filtering facepiece respirators, as predicted by single-fiber filtration theory [Rengasamy et al. 2008, 2009].
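As a simple illustration of how an assigned protection factor translates into expected exposure, the sketch below divides a hypothetical ambient concentration by candidate APFs and compares the result with the REL; the ambient value is invented, and the APFs follow the usual OSHA convention (e.g., 10 for a half-mask such as an N95 filtering facepiece).

```python
# Expected in-facepiece concentration = ambient / APF, compared to the REL.

def in_facepiece_ug_m3(ambient_ug_m3, apf):
    """Expected concentration inside the respirator at a given APF."""
    return ambient_ug_m3 / apf

ambient = 25.0   # hypothetical airborne EC concentration, ug/m3 (8-hr TWA)
rel = 1.0        # NIOSH REL, ug/m3
for apf in (10, 25, 50):
    inside = in_facepiece_ug_m3(ambient, apf)
    verdict = "OK" if inside <= rel else "above REL"
    print(f"APF {apf}: {inside:.1f} ug/m3 -> {verdict}")
```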
# 6.7 Medical Screening and Surveillance
The toxicological evidence summarized in this document leads to the conclusion that workers occupationally exposed to CNT and CNF may be at risk of adverse respiratory effects. These workers may benefit from inclusion in a medical screening and surveillance program, which is recommended to help protect their health (Figure 6-2).
# Worker Participation
Workers who could receive the greatest benefit from medical screening include the following:
- Workers exposed to concentrations of CNT or CNF in excess of the REL (i.e., workers exposed to airborne CNT or CNF at concentrations above 1 µg/m3 EC as an 8-hr TWA).
# Program Oversight
Oversight of the medical surveillance program should be assigned to a qualified health-care professional who is informed and knowledgeable about potential workplace exposures, routes of exposure, and potential health effects related to CNT and CNF.
# Screening Elements
# Initial evaluation
- An initial (baseline) evaluation should be conducted by a qualified health professional and should consist of the following:
- An occupational and medical history, with respiratory symptoms assessed by use of a standardized questionnaire such as the American Thoracic Society Respiratory Questionnaire, or the most recent equivalent.
- A physical examination with an emphasis on the respiratory system.

# Periodic evaluations

- An occupational and medical history update, including a respiratory symptom update, and a focused physical examination, performed annually.
- Spirometry testing; testing less frequently than every 3 years is not recommended.
- Consideration of specific medical tests (e.g., chest X-ray).
# Written reports of medical findings
- The health-care professional should give each worker a written report containing the following:
- The individual worker's medical examination results.
- Medical opinions and/or recommendations concerning any relationships between the individual worker's medical conditions and occupational exposures, any special instructions on the individual's exposures and/or use of personal protective equipment, and any further evaluation or treatment.
- For each examined employee, the health-care professional should give the employer a written report specifying the following:
- Any work or exposure restrictions based on the results of the medical evaluations.
- Any recommendations concerning use of personal protective equipment.
- A medical opinion as to whether any of the worker's medical conditions is likely to have been caused or aggravated by occupational exposures.
- Findings from the medical evaluations having no bearing on the worker's ability to work with CNT and CNF should not be included in any reports to employers. Confidentiality of the worker's medical records should be enforced in accordance with all applicable regulations and guidelines.
# Worker Education
Workers should be provided information sufficient to allow them to understand the nature of potential workplace exposures, routes of exposure, and instructions for reporting health symptoms. Workers should also be given information about the purposes of medical screening, the health benefits of the program, and the procedures involved.
# Periodic Evaluation of Data and Surveillance Program
Standardized medical screening data should be periodically aggregated and evaluated to identify patterns of worker health that may be linked to work activities, exposures, and practices requiring additional primary prevention efforts. This analysis should be performed by a qualified health-care professional or other knowledgeable person. Confidentiality of workers' medical records should be enforced in accordance with all applicable regulations and guidelines.
Employers should periodically evaluate the elements of the medical screening program to ensure that the program is consistent with current knowledge related to exposures and health effects associated with occupational exposure to CNT and CNF.
Other important components related to occupational health surveillance programs, including medical surveillance and screening, are discussed in Appendix B.
# Research Needs
Additional data and information are needed to assist NIOSH in evaluating the occupational safety and health concerns of working with CNT and CNF. Data are particularly needed on workplace exposures to CNT and CNF, as well as information on whether in-place exposure control measures (e.g., engineering controls) and work practices are effective in reducing worker exposures. Additional assessment of NIOSH Method 5040 is needed to better understand potential interferences or other method limitations, improve the sensitivity and precision of the analytical method, and establish validity through the use of reference materials. Experimental animal studies with various types of CNT and CNF would help to explain potential mechanisms of toxicity and would provide a better understanding of the exposure parameters (e.g., mass, fiber/structure number, and particle size) that best describe the toxicological responses. Chronic studies in animals are needed to better estimate the long-term risks of lung disease in workers.
The following types of information and research are needed:

- Evaluate NIOSH Method 5040 and other appropriate sampling and analytical methods in CNT and CNF workplaces. For example, validate Method 5040 against EC reference material and ruggedize it against several CNT and CNF types.
- Improve the sensitivity and precision of NIOSH Method 5040 and other appropriate methods for measuring airborne concentrations of CNT and CNF, including those based on metrics that may be more closely associated with the potential adverse effects (e.g., electron microscopy-based CNT or CNF structure counts).
- Develop improved sampling and analytical methods for measuring airborne exposures to CNT and CNF. Apply these different methods in toxicological studies to determine which exposure metric best predicts the health endpoints in laboratory animal studies.
- Determine the effectiveness of engineering controls in reducing airborne exposures to CNT and CNF below the NIOSH REL of 1 µg/m3.
- Confirm the effectiveness of using HEPA filters in an exhaust ventilation system for removing airborne CNT and CNF.
- Determine the effectiveness of gloves and other PPE barrier materials in preventing dermal exposure to CNT and CNF.
- Identify, quantify, and develop CNT and CNF reference materials for toxicology studies and for measurement quality control.
- Conduct workplace studies to measure the total inward leakage (TIL) of respirators for workers exposed to nanoparticles (e.g., CNT/CNF).
# 7.2 Experimental and Human Studies
- Conduct chronic animal inhalation studies to assess respiratory and other organ (e.g., heart and other circulatory system) effects. Special emphasis should be placed on assessing the risk of developing lung fibrosis and cancer. Studies should evaluate different types of CNT and CNF and use various exposure metrics (e.g., mass, tube and structure counts, surface area) for assessing toxicological responses.
- Determine the mechanisms and other causative factors (e.g., tube, fiber, and agglomerate size, surface area, and surface reactivity) by which CNT and CNF induce adverse effects (e.g., lung fibrosis) in animals.
- Develop early markers of exposure and pulmonary response to CNT and CNF, given evidence from animal studies that CNT and CNF persist in the lungs and result in the development and progression of pulmonary fibrosis and/or cancer at relatively low mass doses.
- Quantitatively and qualitatively compare the CNT and CNF materials used in the animal studies with the CNT and CNF materials found in workplace air.
- Determine the potential for CNT and CNF to penetrate the skin and cause toxicity.
- Evaluate the predictive value of using in vitro screening tests for assessing the hazard (e.g., fibrogenic potential) of various types of CNT and CNF.
- Assess the feasibility of establishing exposure registries for workers potentially exposed to CNT and CNF for conducting future epidemiologic studies and surveillance activities.
- Conduct cross-sectional and prospective studies of workers exposed to CNT and CNF.

# References

Ellinger-Ziegelbauer H, Pauluhn J [2009]. Pulmonary toxicity of multi-walled carbon nanotubes (Baytubes) relative to quartz following a single 6h inhalation exposure of rats and a 3 months post-exposure period. Toxicology 266(1-3):16-29.

Erdely A, Hulderman T, Salmen R, Liston A, Zeidler-Erdely PC, Schwegler-Berry D, Castranova V, Koyama S, Kim YA, Endo M, Simeonova PP [2009]. Cross-talk between lung and systemic circulation during carbon nanotube respiratory exposure. Potential biomarkers. Nano Lett 9(1).

Golanski L, Guiot A, Rouillon F, Pocachard J, Tardif F [2009]. Experimental evaluation of personal protection devices against graphite nanoaerosols: fibrous filter media, masks, protective clothing, and gloves. Hum Exp Toxicol 28(6-7):353-359.

Grubek-Jaworska H, Nejman P, Czuminska K, Przybylowski T, Huczko A, Lange H, Bystrzejewski M, Baranowski P, Chazan R [2006]. Preliminary results on the pathogenic effects of intratracheal exposure to one-dimensional nanocarbons.

Cytotoxicity of multi-walled carbon nanotubes in three skin cellular models: effects of sonication, dispersive agents and corneous layer of reconstructed epidermis. Nanotoxicol 4(1):84-97.
# APPENDIX A: Quantitative Risk Assessment of CNT
# A.1 Introduction
The increasing production and use of CNT and the preliminary significant toxicology findings necessitate an assessment of the potential adverse health effects in workers who produce or use these materials. Risk assessment is a process that uses standardized tools and procedures to characterize the health risk of exposure to a hazardous substance, as well as the uncertainties associated with those risk estimates. Research studies in toxicology, epidemiology, exposure measurement, and other areas provide the data needed to perform the risk assessment. The standard risk assessment paradigm in the United States includes four basic steps: hazard assessment, exposure assessment, dose-response analysis, and risk characterization. The most recent guidance recommends asking these questions: "What are the options available to reduce the hazards or exposures that have been identified, and how can risk assessment be used to evaluate the merits of the various options?" Risk assessment is intended to provide the information needed to determine risk management options.
Risk assessment practice seeks to use the best available data and scientific methods as the basis for public and occupational health decision-making. When sufficient dose-response data are available (e.g., from animal studies), quantitative risk assessment can be performed. Quantitative risk assessment provides estimates of the severity and likelihood of an adverse response associated with exposure to a hazardous agent. Risk assessments are used in developing occupational exposure limits and in selecting and evaluating the effectiveness of exposure controls and other risk management strategies to protect workers' health.
The best data available for risk assessment, in the absence of epidemiological studies of workers producing or using CNT, are from animal studies with CNT. These studies include two subchronic inhalation studies of MWCNT in rats and several short-term studies of SWCNT, MWCNT, or CNF in rats or mice. These studies provide the data and information on the dose-response relationships and the biological mechanisms of early-stage inflammatory and fibrotic lung effects from exposure to CNT. No chronic animal studies of CNT were available for this risk assessment.
The biological mode of action for CNT and CNF, as for inhaled particles and fibers, generally relates to their physical and chemical properties. These properties include: (1) the nano-structure, which increases the surface area and the associated inflammogenic and fibrogenic response; (2) the fiber shape, which may decrease clearance of long structures, resulting in translocation to the interstitial and pleural tissues of the lungs; and (3) the graphitic structure of CNT and CNF, which influences their durability and biopersistence. However, some evidence suggests that functionalization may increase biodegradation of CNT.
Dose metrics that have been associated with lung responses to CNT or CNF in animal studies include mass, volume, number, and surface area. The CNT volume dose was associated with the overloading of CNT clearance from rat lungs and with the lung responses. The specific surface area (m2/g) dose of various types of CNTs was associated with the pulmonary inflammation response in rats. Mercer et al. found that the effective surface area (estimated from the geometry of the structures observed by electron microscopy) was more closely associated with the pulmonary inflammation and fibrosis in mice.
In addition to exhibiting some of the same physical-chemical properties of other poorly soluble particles and/or fibers, the nanoscale structure of CNT and CNF may relate to more specific biological modes (or mechanisms) of action. For example, in vitro evidence suggests that disperse CNT may act as a basement membrane, which enhances fibroblast proliferation and collagen production. This mechanism is consistent with the observation in mice of the rapid onset of diffuse interstitial fibrosis, which progressed in the absence of persistent inflammation, following exposure to SWCNT or MWCNT by pharyngeal aspiration. As fibrosis progresses, it causes thickening of the alveolar septal air/blood barrier, which can result in a decrease of gas exchange between lung and blood.
The focus of this quantitative risk assessment is on the early-stage noncancer lung responses (fibrotic and inflammatory) from studies in rats and mice, for which dose-response data are available. These responses are relevant to humans, as observed in workers in dusty jobs. Dose-response relationships are based on mass dose, because the mass of CNT and CNF was associated with lung responses in all of the animal studies and because it is the metric typically used to measure airborne exposure in the workplace (Section 6 and Appendix C). The evidence for cancer effects from CNT and CNF (Sections 3 and 4) is insufficient for quantitative risk assessment and may also depend on specific types of CNT or CNF structures.
# A.2 Methods
NIOSH used dose-response data from subchronic and short-term studies in rats and mice exposed to SWCNT or MWCNT to estimate the lung doses associated with early-stage inflammatory and fibrotic lung responses, using benchmark dose (BMD) modeling. It is preferable to have data with one or more doses near the benchmark response (e.g., 10%); however, in some studies the response proportions were quite high at each dose (e.g., 30-100%). In addition, one study had only one dose group in addition to the control, but the study was included because it is the only animal inhalation study of SWCNT currently available and because it provides a useful comparison by route of exposure.
No other deficiencies were noted in the selected studies that would have resulted in their omission.
Either the individual animal dose-response data or the mean and standard deviation of the group response are required for BMD model fitting. The dose was either the intratracheal instillation (IT) or pharyngeal aspiration (PA) administered mass dose (mg/lung) or the inhaled mass concentration (mg/m3). Datasets with treatment-related mortality of animals were not used. Data on special preparations of CNT (e.g., ground CNT) or studies using sensitive animal models (e.g., vitamin E-deficient) were not included (although these data may be of interest for subsequent analyses using animal models to investigate biological mechanisms, including in sensitive human populations, or to evaluate the effect of specific alterations in CNT properties on hazard potential).
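For illustration, the sketch below fits a simple quantal-linear (one-hit) model to hypothetical dichotomous response data by maximum likelihood and computes the BMD at 10% extra risk. The data are invented, and this single model form is only an example; it is not necessarily the model set or software used in this assessment.

```python
# Quantal-linear BMD fit to hypothetical dichotomous dose-response data.
import numpy as np
from scipy.optimize import minimize

doses = np.array([0.0, 0.1, 0.4, 1.5, 6.0])  # mg/m3 (hypothetical)
n_resp = np.array([0, 1, 5, 9, 10])           # animals responding
n_total = np.array([10, 10, 10, 10, 10])

def neg_loglik(params):
    g, b = params                               # background, slope
    p = g + (1 - g) * (1 - np.exp(-b * doses))  # quantal-linear model
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_resp * np.log(p) + (n_total - n_resp) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[0.01, 1.0],
               bounds=[(0.0, 0.99), (1e-6, None)])
g_hat, b_hat = fit.x
# Extra risk ER(d) = [P(d) - P(0)] / [1 - P(0)] = 1 - exp(-b d) for this
# model, so the BMD at 10% extra risk has a closed form:
bmd10 = -np.log(1 - 0.10) / b_hat
print(f"background = {g_hat:.3f}, slope = {b_hat:.3f} per mg/m3, "
      f"BMD10 = {bmd10:.3f} mg/m3")
```

A lower confidence limit on the BMD (the BMDL) would additionally require profiling the likelihood or bootstrapping, which is omitted here for brevity.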
Study details of the data selected for this risk assessment are provided in Table A-1. These studies include the two recently published subchronic inhalation studies of MWCNT in rats and several IT, PA, or short-term inhalation studies in rats or mice exposed to SWCNT, with post-exposure durations and examination from 4 to 26 weeks after exposure. In the subchronic inhalation studies, rats were head-nose exposed or nose-only exposed to three or four different airborne mass concentrations (6 hr/d, 5 d/wk) for 13 weeks. Lung responses were examined at the end of the 13-week exposure in both studies; post-exposure follow-up was extended to 6 months in the Pauluhn study.
The IT, PA, and short-term inhalation studies provide additional dose-response data for comparison of other MWCNT or SWCNT with different types and amounts of metal contaminants. Although both the IT and PA routes bypass the head region and deliver the CNT material directly to the trachea and lung airways, PA is considered more similar to inhalation than IT because PA provides greater dispersion of the deposited material in the lungs. Two other short-term exposure studies (Porter et al. and Ellinger-Ziegelbauer and Pauluhn), which were included in the external review draft of the CIB, have been omitted from this analysis. This is because the dose-response data were of equivocal fit to the minimum data criteria for BMD analysis and because updates of these studies are available for the same CNT material and from the same laboratory (i.e., Mercer et al. and Pauluhn).
Those studies are included in these risk analyses.
# A.2.1.2 Dose Rate Evaluation
A study of 1-day inhalation exposure to MWCNT (Baytubes) in rats, with examination 13 weeks after the end of exposure, provided an opportunity to compare the dose-response relationship of the 1-day inhalation exposure with that of the 13-week (subchronic) inhalation study, in order to examine the influence of dose rate on the rat lung responses (Section A.3.2). These findings are relevant to interpreting and using the results from the short-term exposure studies of the SWCNT and other MWCNT.
# A.2.1.3 Lung Responses Evaluated
The lungs are the target organ for adverse effects, as shown in animal studies of CNT (Sections 3 and 4). Granulomatous inflammation, alveolar interstitial thickening, and pulmonary fibrosis are among the benchmark responses evaluated in this risk assessment (Table A-1). These responses are considered relevant to workers, since inflammatory and fibrotic lung diseases have been associated with occupational exposure to various types of inhaled particles and fibers. These pulmonary inflammation and fibrotic effects in animals were observed at relatively early stages, although they developed earlier in mice exposed to SWCNT than in mice exposed to crystalline silica, a known fibrogenic particle.
The most quantitative measure of fibrosis was reported by the studies that measured the thickening of the gas-exchange region of the lungs (alveolar interstitial or septal connective tissue) due to increased collagen (as observed by lung tissue staining in histopathology examination). This alveolar thickening was observed to progress with time after administration of a single dose in mice exposed by PA. Alveolar thickening was also observed in a subchronic study, persisting up to 6 months after the end of exposure in a 13-wk inhalation study in rats. Alveolar interstitial (epithelial cell) thickness has been used as the adverse response in another risk assessment (of ozone) because it indicates "fundamental structural remodeling".
Alveolar interstitial fibrosis can be detected by Sirius red staining of septal collagen. Interstitial thickening with fibrosis has been demonstrated by Sirius red staining of lungs from mice exposed to SWCNT or MWCNT. In SWCNT-exposed mice, the septal fibrosis has been further confirmed by transmission electron microscopy. Pauluhn reported: "Increased interstitial collagen staining (Sirius red) occurred at 1.5 and 6 mg/m3. Focal areas of increased collagen staining were adjacent to sites of increased particle deposition and inflammatory infiltrates (onset at 0.4 mg/m3, see Table 3). Increased septal collagen staining was depicted as equal to interstitial fibrosis (for details, see Fig 12)." In that study, a severity level of minimal (category 1) or greater persisted or progressed up to 26 weeks after the end of the 13-week inhalation exposure to 0.4, 1.5, or 6 mg/m3. Hypercellularity in the bronchial alveolar junctions was observed in these same dose groups; this effect persisted after the end of exposure but resolved by the 39th week in the 0.4 mg/m3 group. The 0.4 mg/m3 dose group was considered the LOAEL for inflammatory lung effects, while 0.1 mg/m3 was considered the NOAEL. Concerning the focal septal thickening observed at 0.4 mg/m3, pathologists' interpretations may differ as to whether these early-stage responses would be considered adverse or to have the potential to become adverse. NIOSH interpreted the alveolar septal thickening (and associated effects) in the 0.4 mg/m3 and higher dose groups as adverse changes of relevance to human health risk assessment because of their persistence and their consistency with the early-stage changes in the development of pulmonary fibrosis. For these reasons, the alveolar septal thickening of minimal or higher grade (i.e., the proportion of rats with this response, which included rats exposed at 0.4 mg/m3 and higher doses) was selected as the benchmark response in the Pauluhn study. Although these data were reported as the average histopathology score in each dose group, NIOSH requested the response proportion data, which were needed for the dichotomous BMD modeling. These data were provided by Dr. Pauluhn in response to this request.
Pulmonary inflammation has been associated with exposure to airborne particles and fibers, and it is a hallmark of occupational lung disease in humans. It is also a precursor to particle-associated lung cancer in rats. Pulmonary inflammation can be measured by the increase in polymorphonuclear leukocytes (PMNs) in bronchoalveolar lavage (BAL) fluid following exposure to various particles, including CNT. However, for some CNT, the inflammation resolves while the fibrosis continues to develop. This indicates that neutrophilic inflammation in BAL fluid may not be a good predictor of adverse lung effects for some CNT, which appear to cause fibrosis by a different mechanism than other types of particles and fibers (by resembling the lung basement membrane and serving as a framework for fibroblast cell growth, without eliciting a persistent inflammatory response). In other studies, the inflammatory effects of MWCNT were associated with granuloma development and with alveolar lipoproteinosis, a more severe inflammatory lung response observed at higher doses of MWCNT.
Minimal or higher levels of severity of these lung responses were selected as the benchmark responses. This included a minimal level (grade 1 or higher) of pulmonary inflammation or alveolar septal thickening as observed by histopathology. The incidence data on a minimal level of effect that is persistent provide a sensitive measure of a critical effect, which is of interest for health risk assessment. It is not known whether human-equivalent effects to those observed in the animal studies would be associated with abnormal lung function or clinical disease, or whether progression to more severe levels could occur if these effects developed as a result of chronic exposure. To evaluate the sensitivity of the risk estimates to the selection of a minimal level of disease, risk estimates were also derived for the next level of response (grade 2 or higher) in the subchronic animal studies.
The lung response measures in this risk assessment are either dichotomous (proportion of animals observed with the response endpoint) or continuous (amount or level of response in individual animals) (Table A-1). The dichotomous responses include the incidence of lung granulomas; granulomatous inflammation; and histopathology grade of alveolar interstitial (septal) thickening. The continuous responses include the amount of hydroxyproline (as mass) and the alveolar interstitial connective tissue thickness.
# A.2.1.4 Summary of Dose-response Data
Collectively, the data available for CNT risk assessment include dose-response data from several rodent species and strains, both males and females, and three routes of exposure to several types of SWCNT and MWCNT with varying types and amounts of metal contaminants (Table A-1). The dose metric used in this risk assessment is the mass dose of CNT in the lungs, either the administered dose (IT or PA studies) or the lung burden (deposited or retained) estimated from the airborne particle size distribution and exposure concentration data (inhalation studies). Mass dose was used because all of the studies reported this dose metric and because mass dose was associated with the inflammatory and fibrotic lung responses in the animal studies.
# A.2.2 Estimated Lung Dose in Animals
For the IT and PA studies, the administered CNT mass dose was assumed to be equivalent to the deposited lung dose. In the inhalation studies, the deposited lung dose was estimated from the exposure concentration and duration, the species-specific ventilation rate, and the alveolar deposition fraction (estimated from the CNT aerodynamic particle size data), as follows:

Equation A-1: Deposited Lung Dose (mg) = Exposure Concentration (mg/m3) x Minute Ventilation (L/min) x 0.001 m3/L x 60 min/hr x Exposure Duration (hr/d x d of exposure) x Alveolar Deposition Fraction
The exposure concentration and duration, as reported in the animal studies, are shown in Table A-1. The values used for respiratory minute ventilation were based on the species and body weight: 0.037 L/min for mice; 0.21 L/min for male and female rats in Ma-Hock et al., assuming an average body weight of 300 g; and 0.25 L/min for male rats (369-g body weight). The alveolar deposition fraction was estimated using the mass-median aerodynamic diameter (MMAD) and geometric standard deviation (GSD) data reported for SWCNT and MWCNT (Table A-2).
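To make this calculation concrete, the following sketch applies Equation A-1 to the subchronic rat exposure conditions described above (6 hr/d, 5 d/week for 13 weeks; 0.21 L/min minute ventilation). The exposure concentration and alveolar deposition fraction used here are illustrative placeholders, not values taken from Tables A-1 or A-2.

```python
def deposited_lung_dose_mg(conc_mg_m3, minute_vent_L_min, hours_per_day,
                           exposure_days, alv_deposition_fraction):
    """Equation A-1: deposited alveolar lung dose (mg), assuming no clearance."""
    air_m3_per_day = minute_vent_L_min * 0.001 * 60 * hours_per_day  # L/min -> m3/day
    return conc_mg_m3 * air_m3_per_day * exposure_days * alv_deposition_fraction

# Subchronic rat exposure: 6 hr/d, 5 d/week for 13 weeks (65 exposure days),
# 0.21 L/min minute ventilation (300-g rat). Concentration and deposition
# fraction below are illustrative only.
dose = deposited_lung_dose_mg(conc_mg_m3=0.45, minute_vent_L_min=0.21,
                              hours_per_day=6, exposure_days=65,
                              alv_deposition_fraction=0.05)
print(f"Estimated deposited lung dose: {dose * 1000:.1f} ug")
```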
In the mouse inhalation study, an alveolar deposition fraction of 0.01 was estimated based on the MMAD (Table A-2) and by interpolating from the deposition fractions for monodisperse spherical particles reported in Table 2 of Raabe et al. For the two subchronic inhalation studies of MWCNT, the retained lung dose was estimated in addition to the deposited dose.
The MPPD 2.0 model was used to estimate the lung burden at the end of the 13-week exposure based on the particle MMAD and GSD (Table A-2) reported in those studies, assuming unit density (the lowest density accepted by MPPD 2.0). However, Ma-Hock et al. reported an MWCNT particle density of 0.043 g/ml, and Pauluhn reported an MWCNT particle density of 0.1-0.3 g/ml. The sensitivity of the lung dose estimates to the assumption of a density of 1 g/ml or lower is evaluated in Section A.6.1.1, as is the effect of the MPPD model version (2.0 or 2.1) on the lung dose estimates. A recent update of MPPD 2.0 to MPPD 2.1 included revised estimates of the rat head/extrathoracic deposition efficiency based on the equations in Raabe et al., which resulted in lower predicted deposition fractions in the rat pulmonary region [personal communication, O. Price and E. Kuempel, 9/24/10] (Section A.6.1.1).
These lung dosimetry models have not been evaluated for CNT. However, the measured aerodynamic diameter is considered to provide a reasonable estimate of the deposition efficiency in the respiratory tract for CNT with MMADs in the micrometer size range (Table A-2) (see Section A.6.1.1 for further discussion). The estimates of CNT clearance and retention in the lungs may be more uncertain than those for the deposition fraction, given the slower clearance reported for CNT in some animal studies. The deposited (no clearance) and retained (normal clearance) lung dose estimates are considered to provide reasonable bounds on the uncertainty of the CNT lung dose estimates. This is because the CNT deposited in the lungs may undergo some clearance, although evidence from animal studies suggests the clearance rate may be slower than for other poorly soluble particles at relatively low-mass doses in rats and mice.

[Table A-2 footnotes: *MPPD 2.0 human; Yeh and Schum deposition model; 9.6 m3/8-hr day (20 L/min, or 1,143-ml tidal volume at 17.5 breaths/min); inhalability adjustment; assumed unit density. †MMAD and GSD in Shvedova et al. were estimated from data reported in Baron et al. ‡Mouse alveolar deposition fraction interpolated from values in Table 2 of Raabe et al. §MPPD 2.0 rat; 0.21 L/min or 2.45-ml tidal volume (assuming 300-g male and female rats); and 0.25 L/min or 2.45-ml tidal volume (369-g male rats).]
# A.2.3 Animal Dose-response Modeling and BMD Estimation
The dose-response data in rats and mice exposed to SWCNT or MWCNT were modeled using benchmark dose methods. A benchmark dose has been defined as ". . . a statistical lower confidence limit for the dose corresponding to a specified increase in level of health effect over the background level". The increased level of adverse effect (called a benchmark response, or BMR) associated with a BMD is typically in the low region of the dose-response data (e.g., a 10% excess risk). In this document, the term BMD is used to describe the point estimate based on maximum likelihood estimation, and the term BMDL is used to describe the lower 95% confidence limit (i.e., as originally defined by Crump). A 10% excess risk, based on dichotomous or quantal data, is used because it is at or near the limit of sensitivity in the animal bioassay. The BMDL associated with a 10% BMR is used as a point of departure (POD) for low-dose extrapolation using linear or nonlinear methods (depending on the mode-of-action evidence). The low-dose extrapolation may include estimation of the probability of effects at low doses, or derivation of a reference value (not risk-based) by accounting for uncertainties in the dose estimation (e.g., extrapolation from animal to human, inter-individual variability, limitations in the animal data).
# A.2.3.1 Dichotomous Response Data
For dichotomous data (yes/no response), a BMD is defined as the dose associated with a specified increase in the probability of a given response, either as an excess risk (i.e., additional probability above background) or as a relative risk (i.e., relative to the background probability of having a normal response).

In this analysis, the BMD (using dichotomous data) is the dose d corresponding to a specified excess (added) risk (e.g., 10%) in the proportion of animals with a given adverse lung response (BMR), as follows:
Equation A-2: BMR = P(d) - P(0)
where P(d) is the probability of an adverse response at the BMD, and P(0) is the probability of that adverse response in an unexposed population.
The dichotomous BMR lung responses include the presence or absence of granulomatous inflammation or alveolar septal thickening (Table A-1). The proportion of animals responding with minimal or higher severity was selected as the benchmark response. The BMD(L) estimates are expressed as the mass dose of SWCNT or MWCNT in rodent lungs associated with the specified BMR. These animal-based BMD(L)s are extrapolated to humans based on species-specific differences in the estimated deposition and retention of CNT in the lungs (Section A.2.4).
# A.2.3.2 Continuous Response Data
BMD estimation using continuous data requires specifying a BMR level along the continuum of responses. Continuous response data provide information on the amount or degree of a biological response. Continuous response measures may include nonzero levels that are associated with normal structure or function (e.g., a certain number of immune cells or amount of protein in healthy lungs). These levels can become elevated in response to a toxicant, and at some point they may result in irreversible, functional impairment of the lungs. If data are available, the BMRs can be based on a biologically significant response that is associated with, or expected to result in, a material impairment of health. However, there may be insufficient data to determine a specific level that is associated with a measurable adverse response. In that case, a statistical criterion may be used as a BMR for continuous data.
A statistical method (originally referred to as a "hybrid" method) is described by Crump to provide BMD(L) estimates from continuous data that are equivalent to a 10% excess risk based on dichotomous data, assuming that an abnormal or biologically significant response is defined as the upper 99th percentile of the unexposed (control) distribution. According to this method, "for a normal distribution with constant variance, setting BMR = 0.1 and P0 = 0.01 is equivalent to choosing the BMD to be the dose that results in an increase in the mean equal to 1.1 times the standard deviation". That is, if one assumes that the probability of the specified adverse response in the unexposed population is the upper 1% of a normal distribution of responses, then selecting a BMR of 1.1 standard deviations above the control mean response is equivalent to a 10% BMD as estimated from dichotomous data.
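The following sketch, which is illustrative only and not part of the NIOSH analysis, verifies this equivalence numerically for a standard normal control distribution:

```python
from scipy.stats import norm

# Control responses ~ Normal(0, 1); "abnormal" = above the control 99th percentile.
cutoff = norm.ppf(0.99)                    # ~2.326 standard deviations
p_background = 1 - norm.cdf(cutoff)        # 0.01 by construction
p_at_bmd = 1 - norm.cdf(cutoff, loc=1.1)   # same distribution, mean shifted by 1.1 SD
excess_risk = p_at_bmd - p_background
print(f"excess risk = {excess_risk:.3f}")  # ~0.100, i.e., a 10% excess risk
```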
In evaluating possible BMRs for the continuous data on CNT in mice, earlier studies of chronic ozone exposure in rats were examined to determine whether a biologically based BMR could be identified for pulmonary fibrosis (measured as alveolar connective tissue thickening) associated with abnormal pulmonary function. However, those rat findings did not appear to extrapolate well to the mice in Shvedova et al.: that amount of response occurred in up to 30% of the control (unexposed) mice in Shvedova et al., in part due to the greater variability in alveolar tissue thickness in the unexposed mice. In addition, no data were found on a biologically relevant BMR for the amount of hydroxyproline in the lungs of rats or mice. In the absence of an identified biological basis for a BMR for the continuous response measures of alveolar connective tissue thickening or the amount of hydroxyproline, NIOSH used the statistical criterion described by Crump, in which a BMR of 1.1 standard deviations above the control mean response is equivalent to a 10% excess risk in the dichotomous data, assuming the 99th percentile of the distribution of control responses is abnormal or biologically significant.
That is, the BMR for the continuous data (alveolar connective tissue thickness and hydroxyproline amount) is defined as follows:

Equation A-3: BMR = μ(d) - μ(0)

where μ(d) is the mean response at the BMD (d); μ(0) is the control mean response; and BMR is the specified number of standard deviations (SDs) (i.e., 1.1 in these analyses). Thus, the continuous data-based BMD is the dose associated with a 10% increase in the proportion of animals exposed at dose d with a response greater than the 99th percentile of the control distribution. The estimates of μ(d) and μ(0) are derived from the fitted dose-response models (polynomial) (Section A.2.3.3).
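A minimal sketch of this BMD calculation for a continuous endpoint is shown below; the fitted polynomial coefficients and control standard deviation are hypothetical, not values from the fitted models in Table A-3:

```python
import numpy as np

# Hypothetical degree-2 polynomial fit: mu(d) = b0 + b1*d + b2*d**2
b0, b1, b2 = 0.30, 0.012, 0.0004   # control mean and dose coefficients (illustrative)
sd_control = 0.05                  # control standard deviation (illustrative)
bmr = 1.1 * sd_control             # Equation A-3 target: mu(d) - mu(0) = 1.1 SD

# Solve b2*d**2 + b1*d - bmr = 0 for the benchmark dose d
roots = np.roots([b2, b1, -bmr])
bmd = min(r.real for r in roots if r.real > 0 and abs(r.imag) < 1e-9)
print(f"BMD = {bmd:.2f} (same dose units as the fitted data)")
```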
# A.2.3.3 BMD Model Fitting
The animal dose-response data were fit using the benchmark dose modeling software (BMDS 2.1.2). The dichotomous data were fit with a multistage (polynomial degree 2) model. This was the only model that provided an adequate fit to the subchronic inhalation data, each of which had only one dose between zero and 100% response for the endpoints evaluated (granulomatous inflammation or alveolar septal thickening, histopathology grade 1 or higher). The other BMDS models failed to converge or, in further statistical evaluation, showed non-unique parameter solutions. The continuous dose-response data were fit with a polynomial model of degree 2 for all data with three or more dose groups, and degree 1 (linear) for data with two groups (see Table A-1 for dose groups).
P values for goodness of fit were computed for the individual BMDS models (based on likelihood methods). Model fit was considered adequate at P > 0.05 (i.e., testing for lack of fit), although P values based on likelihood ratio tests may not be a reliable indicator of model fit in studies with few animals per group. The number of animals per dose group in each study is given in Table A-1. EPA typically uses a P > 0.1 criterion for BMD model fit. Either criterion is considered reasonable; the choice represents a trade-off between type I and type II error. That is, P > 0.1 provides more power to reject an incorrect model, while P > 0.05 provides less chance of rejecting a correct model. The BMD model fits to each data set are shown in Figures A-1 through A-3.
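The actual model fitting was performed in BMDS; the sketch below is a conceptual re-implementation showing how a 10% excess-risk BMD is obtained from a fitted multistage (degree 2) model. The parameter values are hypothetical, not the BMDS-fitted values for any data set in this appendix:

```python
import math

def multistage2(d, g, b1, b2):
    """Multistage (degree 2) model: P(d) = g + (1-g)*(1 - exp(-(b1*d + b2*d**2)))."""
    return g + (1.0 - g) * (1.0 - math.exp(-(b1 * d + b2 * d * d)))

def excess_risk(d, g, b1, b2):
    """Equation A-2: BMR = P(d) - P(0)."""
    return multistage2(d, g, b1, b2) - multistage2(0.0, g, b1, b2)

def bmd_for_bmr(bmr, g, b1, b2, hi=1e6):
    """Find the dose giving the target excess risk by bisection
    (excess risk is monotonically increasing in dose)."""
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if excess_risk(mid, g, b1, b2) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical fitted parameters (BMDS reports these for the actual data sets)
g, b1, b2 = 0.0, 0.015, 0.0008
print(f"BMD(10%) = {bmd_for_bmr(0.10, g, b1, b2):.1f} ug/lung")
```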
# A.2.3.4 Human-equivalent Dose and Working Lifetime Exposure
The rodent BMD(L)s were extrapolated to humans based on species-specific differences in the alveolar epithelial surface area of the lungs (i.e., by normalizing the dose per unit of cell surface area). It is assumed that humans and animals would have an equal response to an equivalent dose (i.e., mass of CNT per unit surface area of lungs). The human-equivalent BMD and BMDL estimates were the target lung doses used to estimate, respectively, the maximum likelihood estimate (MLE) and the 95% lower confidence limit (95% LCL) of the 8-hr TWA exposure concentration during a 45-year working lifetime.
The human-equivalent BMD and BMDL estimates were calculated as follows:

Human-equivalent BMD(L) = Animal BMD(L) x (Human AlvSA / Animal AlvSA)

where the values used for alveolar lung surface area (AlvSA) were 102 m2 (human), 0.4 m2 (rat), and 0.055 m2 (mouse). In Tables A-3 through A-5, the human-equivalent BMD(L)s were multiplied by 0.001 mg/µg to obtain units of mg per lung.
The human-equivalent BMD(L)s are expressed as the mass (mg) of CNT in the lungs. The working lifetime airborne mass concentration that would result in the BMD(L) human-equivalent lung mass dose was calculated based on either deposition only (no lung clearance) or retention (lung deposition and clearance), as described below.
# (a) Deposited lung dose

The values assumed include a 9.6-m3 8-hr air intake (reference worker); the alveolar deposition fraction based on the aerodynamic particle size (Table A-2); and working lifetime days (250 days/yr x 45 yr).
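The sketch below combines the surface-area scaling and the deposited-dose working lifetime calculation, using the alveolar surface areas and reference worker values stated above; the rat BMDL and the alveolar deposition fraction are hypothetical placeholders:

```python
def human_equivalent_dose_mg(rat_bmd_ug, rat_alv_sa_m2=0.4, human_alv_sa_m2=102.0):
    """Scale a rat lung dose to humans by the ratio of alveolar surface areas."""
    return rat_bmd_ug * (human_alv_sa_m2 / rat_alv_sa_m2) * 0.001  # ug -> mg

def working_lifetime_twa_ug_m3(human_dose_mg, alv_dep_fraction,
                               air_m3_per_day=9.6, days_per_year=250, years=45):
    """8-hr TWA concentration whose 45-yr deposited dose equals the target
    lung dose, assuming no clearance."""
    total_air_m3 = air_m3_per_day * days_per_year * years
    return human_dose_mg * 1000.0 / (total_air_m3 * alv_dep_fraction)

# Illustrative only: a hypothetical rat BMDL of 20 ug/lung and an alveolar
# deposition fraction of 0.05 (placeholder, not a Table A-2 value)
dose_mg = human_equivalent_dose_mg(20.0)
print(f"human-equivalent dose: {dose_mg:.2f} mg/lung")
print(f"working lifetime 8-hr TWA: {working_lifetime_twa_ug_m3(dose_mg, 0.05):.2f} ug/m3")
```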
# (b) Retained lung dose
The MPPD 2.0 human model for inhaled poorly soluble spherical particles was used to estimate the working lifetime exposure concentration that would result in the human-equivalent BMD(L) lung burden estimates. This was done by a systematic search to identify the 8-hr time-weighted average (TWA) airborne concentration over a 45-year working lifetime that predicted the target lung burden. The input parameters used in the MPPD human model (Yeh and Schum human deposition model option) include the CNT aerodynamic particle size (MMAD, GSD) (Table A-2); inhalability adjustment; and oronasal-normal augmenter breathing.
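Conceptually, the search can be implemented as a bisection over concentration, as sketched below; the `retained_burden_mg` function is a hypothetical stand-in for an actual MPPD model run, since MPPD is separate software and its predictions are not reproduced here:

```python
def retained_burden_mg(conc_ug_m3):
    """Stand-in for an MPPD run: predicted end-of-working-lifetime lung burden
    (mg) for a given 8-hr TWA concentration. Linear here only for illustration;
    the actual MPPD prediction is nonlinear and comes from the model itself."""
    return 0.3 * conc_ug_m3

def find_twa(target_burden_mg, lo=0.0, hi=1e4, tol=1e-6):
    """Bisection search for the 8-hr TWA whose predicted burden hits the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if retained_burden_mg(mid) < target_burden_mg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"TWA giving a 5.1-mg target burden: {find_twa(5.1):.1f} ug/m3")
```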
Figure A-1. Benchmark dose model (multistage, polynomial degree 2) fit to rodent dose-response data from the two subchronic inhalation studies of MWCNT in rats: Ma-Hock et al. (response: granulomatous inflammation) and Pauluhn (response: alveolar septal thickening, minimal or greater). P values are 0.99 for Ma-Hock et al. and 0.88 for Pauluhn.

Figure A-3. Benchmark dose model fit to rodent dose-response data from short-term studies with continuous response: Shvedova et al. (SWCNT, mouse, pharyngeal aspiration; response: alveolar connective tissue thickening; P value 0.23; polynomial model, degree 2, coefficients restricted to non-negative, fit to all data except the top dose group due to nonhomogeneous variance); and Shvedova et al. (SWCNT, mouse, inhalation; response: alveolar connective tissue thickening; P value not applicable, linear model). Benchmark response level: 1.1 standard deviations above the control mean response.

All dose-response models used in this risk assessment provided adequate fit (P > 0.05) to the rodent data for BMD(L) estimation (P values for the Pearson chi-square goodness-of-fit test are shown in Tables A-3 through A-5).
In Table A-3, the BMD(L) and BMC(L) estimates (where "BMC(L)" abbreviates both the BMC and BMCL estimates) are based on the IT, PA, or short-term inhalation exposure studies of SWCNT or MWCNT with continuous response measures. Lung responses in rodents were evaluated at 32 to 60 days after the first exposure. Rodent dose is the administered dose (IT or PA) or the estimated deposited dose (inhalation). The BMR is the specified adverse lung response at 1.1 standard deviations above the estimated rodent control mean response (i.e., alveolar connective tissue thickness or amount of hydroxyproline) (as explained in Section A.2.3.2). Considerably higher 8-hr TWA concentrations are estimated based on the endpoint of lung hydroxyproline amount compared with those based on the alveolar connective tissue thickness endpoint, which is a more sensitive (earlier) indicator of fibrosis.
In Table A-4, the BMD(L) and BMC(L) estimates are based on the IT exposure study of SWCNT with dichotomous response measures. Lung responses were evaluated 90 days after the first exposure. The BMR is the 10% excess risk of the specified adverse lung response (proportion of mice with lung granulomas). Although Lam et al. report dose-response data for three different preparations of SWCNT (containing either 2% Fe, 27% Fe, or 26% Ni), the BMD(L) and BMC(L) estimates are provided only for the SWCNT with 2% Fe, which was the only dataset of the three reported by Lam et al. that was adequately fit by the BMD model (Table A-4).
Table A-5 provides the BMD(L) and BMC(L) estimates based on the two subchronic inhalation studies of MWCNT, which also report dichotomous response measures. Lung responses were evaluated at the end of the 13-week (91-d) exposure period. Rodent dose is either the total deposited lung dose or the retained lung dose at the end of exposure. The BMR is the 10% excess risk of the specified adverse lung response (granulomatous inflammation or alveolar septal thickening of histopathology grade 1 or higher). As expected, the estimates based on the deposited lung dose are lower than those based on the retained lung dose, because the assumption of no clearance in the deposited lung dose results in a lower estimated 8-hr TWA concentration to attain the human-equivalent BMD(L) lung burdens. The estimates for MWCNT (with 9.6% Al2O3) based on the rat granulomatous inflammation response are lower than those for MWCNT (Baytubes) (with 0.53% Co) based on the rat alveolar septal thickening response.
Table A-6 shows the animal and human BMD(L) estimates and the equivalent working lifetime 8-hr TWA concentration estimates, BMC(L), associated with grade 2 (slight/mild) or higher lung responses in the subchronic inhalation studies, based on the estimated deposited lung dose. As expected, higher BMD(L)s and BMC(L)s are estimated from the histopathology grade 2 or higher lung responses (Table A-6) compared with those estimated from the histopathology grade 1 (minimal) or higher responses (Table A-5), because more animals developed the grade 1 or higher response at a given dose (i.e., histopathology grade 1 or higher is a more sensitive response).

[Table A-3 title: Benchmark dose estimates and associated human working lifetime airborne concentrations: continuous response data in rats or mice exposed to SWCNT or MWCNT by IT, PA, or short-term inhalation (dose metric: administered or estimated deposited lung dose).]

[Table A-5 title: Benchmark dose estimates and associated working lifetime airborne concentrations: grade 1 or higher severity of lung responses in rats after subchronic inhalation of MWCNT (dose metric: estimated deposited or retained dose in lungs).]
# A.3.2 Dose Rate Comparison

Histopathology severity scores for alveolar septal thickening were available for each study. The numbers of male rats with alveolar septal thickening (of minimal or higher grade), and the respective exposure concentrations, are as follows:
- Ellinger-Ziegelbauer and Pauluhn: 1, 0, and 6 rats (6 total per group) at 0, 11.0, and 241.3 mg/m3.
- Pauluhn: 0, 0, 9, 10, and 10 rats (10 total per group) at 0, 0.1, 0.45, 1.62, and 5.98 mg/m3.
The dose metric used for this comparison was the deposited lung dose, estimated from MPPD 2.0 based on the particle size data (MMAD and GSD) and the rat exposure conditions reported in each study.
To evaluate whether these data can be described by the same dose-response relationship, a multistage (polynomial degree 2) model was fit to the combined data. This model provided adequate fit to the data (P = 0.37), suggesting that these data can be described by the same dose-response model using the estimated total deposited lung dose, regardless of the dose rate differences (i.e., dose administered in 1 day vs. during 91 days). This finding is consistent with the impaired clearance and biopersistence of the deposited MWCNT in the rat lungs at these doses, as shown in Pauluhn.
# A.3.3 Low-dose Extrapolation

NRC and others have recommended using risk-based low-dose extrapolation for noncancer endpoints. NIOSH practice has also included risk-based low-dose extrapolation for noncancer endpoints. In the absence of information on the shape of the dose-response relationship in the low-dose region, assumptions can include linear and nonlinear model-based extrapolation. Linear extrapolation is the most protective (i.e., unlikely to underestimate the risk). However, the actual risk could be much lower, including zero.
Low-dose linear extrapolation of the working lifetime-equivalent 10% excess risk estimates in Table A-5 (deposited dose assumption) results in BMC (BMCL) estimates of 0.051 (0.019) µg/m3 or 0.077 (0.038) µg/m3 associated with a 1% excess risk (Ma-Hock et al. or Pauluhn, respectively). The corresponding BMC (BMCL) estimates associated with a 0.1% excess risk are 0.0051 (0.0019) µg/m3 or 0.0077 (0.0038) µg/m3. Multistage model-based estimates are higher for the BMCs but nearly identical for the BMCLs: 0.16 (0.019) µg/m3 or 0.24 (0.042) µg/m3 associated with a 1% excess risk; and 0.050 (0.0020) µg/m3 or 0.075 (0.0042) µg/m3 associated with a 0.1% excess risk (Ma-Hock et al. or Pauluhn, respectively).

Tables A-7 and A-8 provide working lifetime excess risk estimates of early-stage lung effects (minimal or higher histopathology grade of granulomatous inflammation or alveolar septal thickening) associated with 1, 2, or 7 µg/m3 as an 8-hr TWA concentration. These concentrations were selected as possible limits of quantification (LOQs) that were under evaluation for the analytical method to measure airborne CNT in the workplace (NIOSH Method 5040). These estimates are based on lung dose estimates assuming either total deposited lung dose (no clearance) or retained dose (normal, spherical particle-based clearance). Risk estimates are higher under the no-clearance assumption than under the normal-clearance assumption, for either the minimal (grade 1) (Table A-7) or slight/mild (grade 2) (Table A-8) lung responses. These excess (exposure-attributable) risk estimates were derived from the multistage (degree 2) model fit to the rat subchronic dose-response data, or by linear extrapolation below the 10% BMC(L) estimates shown in Tables A-5 and A-6.
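As a worked check, linear extrapolation from a 10% excess-risk point of departure scales risk proportionally with concentration; using the 0.51 µg/m3 MLE based on Ma-Hock et al. (Table A-5, deposited dose) reproduces the 1% and 0.1% values cited above:

```python
# Linear low-dose extrapolation from a 10% excess-risk point of departure:
# excess_risk(c) = 0.10 * c / BMC10
bmc10 = 0.51   # ug/m3, MLE for Ma-Hock et al. (Table A-5, deposited dose)
for target in (0.01, 0.001):
    c = bmc10 * target / 0.10
    print(f"{target:.1%} excess risk at about {c:.4f} ug/m3")
```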
# A.4 Discussion
NIOSH conducted a quantitative risk assessment of CNT by evaluating dose-response data of early-stage adverse lung effects in rats and mice exposed to several types of SWCNT or MWCNT (with different metal contaminants), by several routes of exposure (inhalation, PA, or IT), duration of exposure (single day or subchronic), and post-exposure period (up to 26 weeks). Because of the different study designs and response endpoints used in the rodent studies, limited information was available to evaluate the extent to which the differences in the risk estimates across studies are due to differences in the CNT material or are attributable to other study differences. Some evidence indicates that CNT with certain metals (nickel, 26%) or with higher metal content (18% vs. 0.2% Fe) are more toxic and fibrogenic. However, some studies have shown that both unpurified and purified (low metal content) CNT were associated with early-onset and persistent pulmonary fibrosis at relatively low-mass doses (LOAELs of 0.1 and 0.4 mg/m3), which are more than an order of magnitude lower than the LOAEL of 7 mg/m3 for ultrafine carbon black in the same animal species and study design (13-week inhalation studies in rats, although with different strains: Wistar (male and female) and F-344 (female)).
Because no chronic animal studies or epidemiological studies of workers producing or using CNT have been published to date, the best available data for risk assessment were the subchronic inhalation studies of MWCNT in rats. For SWCNT, no subchronic studies were available, and several short-term studies (IT, PA, or inhalation exposure) in rats or mice provide the only available dose-response data (Table A-1).
All of these studies reported inflammatory, granulomatous, and/or fibrotic lung effects of relevance to human health risk assessment. These lung effects in the animal studies were relatively early-stage and were not reversible after exposure ended (up to approximately 6 months post-exposure). In the studies with multiple post-exposure follow-up times, the amount of pulmonary fibrosis persisted or progressed with longer follow-up. One of the measures of pulmonary fibrosis used in the short-term studies, alveolar epithelial cell thickness (due to increased collagen deposition associated with CNT mass lung dose), was also used to develop the EPA ozone standard. This response endpoint was selected by EPA as the adverse lung response for cross-species dose-response extrapolation because it indicates "fundamental structural remodeling".
The excess risk estimates based on the subchronic and short-term studies of MWCNT and SWCNT suggest that workers are at >10% excess risk of developing early-stage adverse lung effects (pulmonary inflammation, granulomas, alveolar septal thickening, and/or fibrosis) if exposed for a working lifetime at the estimated upper LOQ of 7 µg/m3, based on NIOSH Method 5040 for measuring the airborne concentration of CNT (Appendix C; Tables A-3 through A-8). Working lifetime airborne concentration (8-hr TWA) estimates of 0.51-4.2 µg/m3 (MLE) and 0.19-1.9 µg/m3 (95% LCL) were associated with a 10% excess risk of early-stage lung effects (histopathology grade 1, minimal, or higher) based on the subchronic inhalation studies (Table A-5). For histopathology grade 2 (slight or slight/mild), the working lifetime 8-hr TWA concentrations associated with an estimated 10% excess risk are 1.0 to 44 µg/m3 (MLE) and 0.69 to 19 µg/m3 (95% LCL) (Table A-6).
As discussed in Section A.2.3, the 10% BMDL estimates are a typical POD for extrapolation to lower risk. NIOSH does not consider 10% or greater excess risk levels of these early-stage lung effects to be acceptable if equivalent effects were to occur in workers as a result of working lifetime exposures to CNT. Linear extrapolation or application of uncertainty factors (e.g., Table A-14) would result in lower 8-hr TWA concentrations. However, the lowest LOQ of NIOSH Method 5040 (1 µg/m3) is the best that can be achieved at this time in most workplaces and is similar to or greater than the 8-hr TWA concentrations estimated to be associated with a 10% excess risk of minimal (grade 1) effects (Table A-7). Some of the risk estimates are less than 10% at the LOQ of 1 µg/m3 (8-hr TWA), in particular those based on the slight/mild (grade 2) rat lung effects and assumed normal clearance (Table A-8).
Although uncertainties and limitations exist in these animal studies, the evidence supports the health-based need to reduce exposures below 1 µg/m3. These risk estimates indicate the need for research to develop more sensitive measurement methods for airborne CNT in the workplace, to demonstrate effective exposure control, and to evaluate the need for additional risk management measures such as the use of respirators and other personal protective equipment and medical screening (Section 6, Appendix B). Chronic bioassay data are also needed to reduce the uncertainty concerning the potential for chronic adverse health effects from long-term exposure to CNT. The factors that influence the risk estimates and the areas of uncertainty are discussed below.
# A.4.1 Use of Short-term and Subchronic Animal Data

Several factors suggest that, in the absence of chronic data, these short-term and subchronic animal data may be reasonable for obtaining initial estimates of the risk of human noncancer lung effects from exposure to CNT. First, some fraction of the CNT that deposit in the lungs are likely to be biopersistent, based on studies in animals and studies of other poorly soluble particles in human lungs. Second, the pulmonary fibrosis developed earlier and was of equal or greater severity than that observed from exposure to the same mass dose of other inhaled particles or fibers (silica, carbon black, asbestos) examined in the same study. Third, the adverse lung responses persisted or progressed after the end of exposure, up to 90 days after a single- or multiple-day exposure to SWCNT or MWCNT [Lam et al. 2004; Muller et al. 2005; Shvedova et al. 2005, 2008].

There is uncertainty in estimating working-lifetime health risk from subchronic animal studies, and perhaps more so from the shorter-term studies. The strength of the subchronic inhalation studies is that they provide exposure conditions that are more similar to those that may be encountered by workers exposed to airborne CNT. However, there is some uncertainty about the deposited and retained dose in the rat lungs (see Section A.6.1 for a sensitivity analysis of the lung dose estimates). In the PA or IT studies, the administered lung dose is known, although the pattern of lung deposition (especially for IT administration) may differ from that of inhalation.
The subchronic inhalation studies and some of the PA studies include multiple doses, which can provide better information about the shape of the dose-response relationship. However, in the subchronic studies, steep dose-response relationships were observed for the lung response proportions based on histopathology score, reaching 100% response for minimal or higher severity (grade 1) (Figure A-1). Although the data are sparse in the low-dose region (near a 10% response level), the BMD(L) estimates are generally similar to the LOAEL and NOAEL values reported in those studies (Section A.6.2 and Table A-12).
A comparison of data from 1-day and 13-week inhalation exposures in rats indicates that the dose-response relationship was consistent despite the differences in dose rate in those two studies. This finding indicates that it may be reasonable to assume that the dose-response relationships for the IT, PA, and short-term inhalation exposure studies would be consistent with the subchronic study results if the same response is examined at the same time point, although additional study is needed to confirm this finding. The BMC(L) estimates among the subchronic and short-term studies (Tables A-3 through A-5) are relatively consistent, all being low mass concentrations.

# A.4.2 Physical-chemical Properties and Lung Responses

There are limited data to evaluate the role of the physical-chemical properties of CNT in the lung responses. Although the dose estimates vary for the early-stage lung effects in rats and mice (and in the human-equivalent concentrations) (Tables A-3 through A-6), all estimates are relatively low mass concentrations. It is difficult to tease out the CNT-specific factors affecting these estimates from those due to the other study differences (e.g., exposure route, duration, animal species, lung response measures).
The two subchronic inhalation studies of MWCNT, based on the same study design (13-week inhalation) and animal species/strain (Wistar rats), facilitate comparison. Different types of MWCNT and different methods for generating the aerosol exposures were used in each study, although the primary particle sizes reported were similar: approximately 10 nm in width and 0.1-10 µm in length, with a specific surface area of approximately 250-300 m2/g. The aerodynamic diameter (and resulting alveolar deposition fraction) estimates were also fairly similar (Table A-2); yet the bulk densities differed (approximately 0.04 and 0.2 g/ml, respectively, in Ma-Hock et al. and Pauluhn). The metal content also differed, with 9.6% Al2O3 in the MWCNT in the Ma-Hock et al. study vs. 0.5% Co in the MWCNT (Baytubes) in the Pauluhn study. The lung responses differed both qualitatively and quantitatively, including "pronounced granulomatous inflammation, diffuse histiocytic and neutrophilic inflammation, and intra-alveolar lipoproteinosis" with a LOAEL of 0.1 mg/m3 in Ma-Hock et al., vs. "inflammatory changes in the bronchioloalveolar region and increased interstitial collagen staining" with a LOAEL of 0.45 mg/m3 in Pauluhn. Yet both MWCNT studies reported LOAELs that are lower by more than an order of magnitude compared to the LOAEL (7 mg/m3) reported in a 13-week inhalation study of ultrafine carbon black.
A recent study provides a quantitative comparison of the effects of SWCNT and MWCNT on pulmonary interstitial fibrosis. In this study, MWCNT were administered to mice by pharyngeal aspiration at several different doses (0, 10, 20, 40, or 80 µg); the lung tissues (stained for collagen using Sirius red) were examined at up to 56 days post-exposure. At the 80-µg dose of MWCNT, the average thickness of the alveolar interstitial connective tissue was significantly increased at 28 days, and a progressive increase in thickness was observed at 56 days. The 40-µg MWCNT dose group also showed a significant increase in the interstitial connective tissue thickness at 56 days. These data were compared with those of an earlier study of SWCNT using the same study design. The individual MWCNT had a mean diameter of 49 nm and a mean length of 3.9 µm. The individual SWCNT were 1-4 nm in diameter and several hundred nanometers in length. Both SWCNT and MWCNT were rapidly incorporated into the alveolar interstitial spaces (within 1 hour, individual CNT or small clumps of CNT were observed), although the percentage of the administered SWCNT observed in the alveolar interstitium (~90%) was much higher than that for MWCNT (~8%). After accounting for the differences in the target tissue dose, SWCNT were still ~8.5-fold more fibrogenic than MWCNT. However, the surface area of SWCNT was ~20-fold greater per unit mass than that of MWCNT (508 m2/g for SWCNT vs. 26 m2/g for MWCNT), suggesting that the greater fibrogenic potency of SWCNT may be due to its greater surface area. When the lung response was evaluated per unit CNT surface area dose, SWCNT was no longer more potent, and the MWCNT were 2.5-fold more potent on a surface area basis. There is uncertainty about the degree of dispersion (and hence the available surface area) of these materials in vivo, which precludes assigning exact potency factors. However, these findings suggest that the greater fibrotic potency of SWCNT on a mass basis is likely due to its greater surface area available to react with lung tissue.
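A short arithmetic check of this comparison, using the values reported above, is sketched below; the result (~2.3-fold) agrees with the cited ~2.5-fold value to within rounding of the reported inputs:

```python
# Potency comparison from the values reported in this section
mass_potency_ratio = 8.5          # SWCNT vs. MWCNT, per unit interstitial mass dose
sa_swcnt, sa_mwcnt = 508.0, 26.0  # specific surface area, m2/g
sa_ratio = sa_swcnt / sa_mwcnt    # ~19.5-fold more surface area per unit mass

# Per unit surface area, SWCNT potency is diluted by the surface-area ratio:
sa_potency_swcnt_vs_mwcnt = mass_potency_ratio / sa_ratio
print(f"surface-area ratio: {sa_ratio:.1f}-fold")
print(f"MWCNT vs. SWCNT potency per unit surface area: "
      f"{1 / sa_potency_swcnt_vs_mwcnt:.1f}-fold")
```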
Comparison of other CNT types and metal content is generally impeded by differences in study design. In one of the few studies to investigate CNT with different metal content, Lam et al. reported lung granuloma and inflammation responses in mice administered IT doses of SWCNT containing either 2% Fe, 27% Fe, or 26% Ni. The numbers of mice developing granulomas by group (each containing 5 mice) were the following:
- 0.1 mg dose: 2 (2% Fe); 5 (27% Fe); and 0 (26% Ni)
- 0.5 mg dose: 5 (2% Fe); 5 (27% Fe); and 5 (26% Ni)
In addition, three mice died in the first week in the 0.5-mg dose group of the 26% Ni SWCNT.
Because of the sparse data and the steep dose-response relationship, only the data for the SWCNT containing 2% Fe were adequately fit by the BMDS model. The high mortality in mice exposed to the SWCNT containing Ni suggests this material is highly toxic. The greater response proportion in the mice exposed to 0.1 mg of SWCNT with 27% Fe (5/5), compared with mice exposed to the same dose of SWCNT with 2% Fe (2/5), suggests that CNT with higher Fe content are more toxic than CNT with lower Fe content.
In Shvedova et al. [2005, 2008], higher iron content was also associated with greater lung response and thus lower BMD(L) estimates. The BMD(L) estimates for SWCNT with 18% Fe were lower than those for SWCNT with 0.2% Fe (Table A-3), even though the post-exposure time was longer (60 vs. 28 days) for the 0.2% Fe SWCNT [Shvedova et al. 2005, 2008]. All types of CNT (including SWCNT and MWCNT, purified or unpurified, and with various types and percentages of metals) were of similar or greater potency (i.e., similar or greater lung responses at the same mass dose) in these animal studies compared to the other types of particles or fibers tested (asbestos, silica, ultrafine carbon black) [Lam et al. 2004; Muller et al. 2005; Shvedova et al. 2005, 2008].
# A.4.3 Lung Dose Estimation
In any CNT risk assessment, there may be greater uncertainty in the estimated lung dose of respirable CNT than there is for spherical airborne particles, for which lung dosimetry models have been developed and validated. Evaluations have not been made of the influence of particle characteristics (e.g., shape and density) on the inhalability and deposition of CNT in the human respiratory tract, or on the clearance or biopersistence of CNT.

However, the available data on the aerodynamic size of CNT provide an initial estimate (based on validated models for spherical particles) of the deposited mass fraction of airborne CNT in the human respiratory tract, and specifically in the alveolar (gas-exchange) region. The clearance rate of CNT from the lungs may be more uncertain than the deposition efficiency, as animal studies indicate that CNT clearance becomes impaired in rat lungs at lower mass doses than for larger particles of greater density. The NIOSH risk assessment helps to characterize this uncertainty by providing bounds on the range of possible lung dose estimates, from assuming normal clearance to assuming no clearance of the deposited CNT. This approach also provides a framework for introducing improved dose estimates when validated lung dosimetry models for CNT become available.
The assumptions used in the lung dose estimation have a large influence on the animal and human-equivalent BMD(L) or BMC(L) estimates (Tables A-5 and A-6), as well as on the estimated human-equivalent NOAEL (Section A.6.3). The rat BMD(L) estimates based on the estimated retained lung dose after subchronic inhalation exposure are lower than those based on the estimated deposited lung dose (Table A-5). This is because the retained dose estimates allow for some lung clearance to occur during the 13-week exposure in rats, and a lower dose estimate is therefore associated with a given fixed response proportion. The human-equivalent BMD(L) estimates based on retained dose are also lower because they are proportional to the rat BMD(L)s (i.e., calculated based on the ratio of the human to rat alveolar epithelial cell surface area). However, the working lifetime 8-hr TWA concentrations, BMC(L)s, based on the estimated retained lung doses are higher than those based on the estimated deposited lung dose. This is because the retained dose estimates (which assume some particle clearance from workers' lungs during the 45 years of exposure) require a higher inhaled airborne concentration to reach the estimated human-equivalent BMD(L) lung doses.
The estimated deposited lung dose of CNT (assuming no clearance) may overestimate the actual CNT lung dose, given that short-term kinetic data have shown some CNT clearance in rats and mice. On the other hand, the estimated retained lung dose of CNT, based on models for poorly soluble spherical particles, may underestimate the retained CNT lung burden, given that overloading of rat lung clearance has been observed at lower mass doses of MWCNT (Baytubes) than for other poorly soluble particles. Thus, although there is uncertainty in the deposition and retention of CNT in animal and human lungs, the deposited and retained lung dose estimates reported in this risk assessment may represent reasonable upper and lower bounds of the actual lung doses.
# A.4.4 Critical Effect Level Estimates
The response endpoints in these animal studies of CNT are all relatively early-stage effects. Although these effects were persistent or progressive after the end of exposure in some studies, there was no information on whether these responses were associated with adverse functional effects. More advanced-stage responses (grade 2 or higher severity on histopathology examination) were also evaluated, and as expected, these responses resulted in lower risk estimates (Table A-6). It is expected that exposure limits derived from these early response data would be more protective than those based on frank adverse effects. On the other hand, because of the lack of chronic studies, there is considerable uncertainty about the potential chronic adverse health endpoints.
The excess risk estimates at the lower LOQ (1 µg/m3) are considerably lower than those at the upper LOQ (7 µg/m3) of NIOSH Method 5040, for either minimal (Table A-7) or slight/mild (Table A-8) lung effects based on the rat subchronic inhalation data. The range in the estimates in Tables A-7 and A-8 reflects the low precision in the animal data and the uncertainty about CNT retention in the lungs. There is also uncertainty about the relationship between the lung dose and response, including whether there is a threshold. For example, for slight/mild lung effects (Table A-8), the actual risk could be as low as zero or as high as 16% at the REL of 1 µg/m3.
NIOSH utilized BMD modeling methods to estimate the critical effect level (i.e., the dose associated with the critical effect or benchmark response) in order to provide a standardized method for risk estimation across studies. In contrast, NOAEL-based approaches do not estimate risk but may assume safe exposure or zero risk below the derived OEL. BMD modeling also uses all of the dose-response data, rather than only a single dose for a NOAEL or LOAEL, and takes appropriate statistical account of sample size, unlike NOAEL-based approaches. However, the BMD modeling options for some of these CNT data were limited because of sparse data, and the dose groups with 100% response (observed in the subchronic inhalation studies) contribute little information to the BMD estimation. A common challenge in risk assessment is defining a biologically relevant response for continuous endpoints, which was also encountered in this risk assessment. In the absence of data on the functional significance of the early-stage pulmonary inflammation and fibrotic responses, the standard practice of using a statistical definition of the benchmark response was applied for the continuous BMD estimation (Section A.2.3.2).
For CNT, as with other chemicals, there is uncertainty in whether a NOAEL or a BMDL from a short-term or subchronic study in animals would also be observed in a chronic study. For example, in the Pauluhn study, 0.1 mg/m3 was the NOAEL based on subchronic inhalation exposure in rats, but there was some indication that lung clearance overloading may have already begun (i.e., a retention half-time about two-fold higher than normal, although imprecision in the low-dose measurement was noted). A comparison of the BMD and NOAEL estimates shows that these estimates are statistically consistent (Section A.6.2). Thus, there is uncertainty as to whether chronic exposure at 0.1 mg/m3 might result in adverse lung effects that were not observed during subchronic exposure. It is also uncertain whether these subchronic effects (without additional exposure) would resolve with a longer post-exposure duration (beyond the 26-week post-exposure period in the Pauluhn study). Yet workers may be exposed to CNT daily for many years, e.g., up to a working lifetime. The NIOSH REL is intended to reduce the risk of lung disease from exposures to CNT and CNF up to a 45-year working lifetime.
# A.4.5 Animal Dose-response Data Quality
In the absence of epidemiological data for CNT, the two subchronic inhalation studies of two types of MWCNT, in addition to the short-term studies of SWCNT and MWCNT, provide the best available dose-response data to develop initial estimates of the risk of early-stage adverse lung responses associated with exposure to CNT. The availability of animal dose-response data for different types of CNT, and the consistently low mass-concentration BMC(L) estimates, suggests these risk estimates are relatively robust across a range of CNT types, including SWCNT or MWCNT, either purified or unpurified (containing different types and amounts of metal), dispersed or agglomerated. Although a formal comparison of the potency of the different CNT is not feasible because of differences in study design, these studies consistently show that relatively low-mass doses of CNT are associated with early-stage adverse lung effects in rats and mice. Consequently, the human-equivalent benchmark dose and working lifetime exposure estimates derived from these studies are also relatively low on a mass basis. The excess risk estimates of early-stage adverse lung responses to CNT generally indicate >10% excess risk (lower 95% confidence limit estimates) at the upper LOQ (7 µg/m3) of the measurement method (NIOSH Method 5040), regardless of the CNT type or purification (Tables A-3 through A-5). Lower risks are estimated at the optimal LOQ (1 µg/m3), depending on lung dose assumptions (Tables A-7 through A-8).
A more in-depth analysis of specific areas of uncertainty in this CNT risk assessment is provided in Section A.6. This includes quantitative evaluation of the methods and assumptions used in the CNT risk assessment for the derivation of a REL.
# A.5 Conclusions
Risk estimates were developed using benchmark dose methods applied to rodent dose-response data of adverse lung effects following subchronic or short-term exposure to various types of SWCNT and MWCNT.
In the absence of validated lung dosimetry models for CNT, lung doses were estimated assuming either the deposited or the retained lung dose in animals and humans. These findings suggest that workers are at risk of developing adverse lung effects, including pulmonary inflammation and fibrosis, if exposed to CNT over a working lifetime. Based on the two rat subchronic inhalation studies of two types of MWCNT (with different metal content), working lifetime exposures of 0.2-2 µg/m3 (8-hr TWA; 95% LCL estimates) are estimated to be associated with a 10% excess risk of early-stage lung effects (minimal severity, grade 1) (Table A-5). For a severity level of slight/mild (grade 2), the 8-hr TWA concentrations associated with an estimated 10% excess risk over a 45-year working lifetime are approximately 0.7-19 µg/m3 (95% LCL estimates) (Table A-6).
These working lifetime 8-hr TWA concentrations are below the estimated upper LOQ (7 µg/m3) of NIOSH Method 5040 for measuring the respirable mass concentration of CNT in air as an 8-hr TWA. Similar risk estimates relative to the LOQ were also derived for SWCNT and MWCNT from the short-term studies, regardless of whether the CNT were purified or unpurified (with different types and amounts of metals), i.e., 0.08-12 µg/m3 (Tables A-3 and A-4). Lower risks are estimated at the lower LOQ of 1 µg/m3: approximately 0.5% to 16%, based on the rat subchronic dose-response data for the slight/mild lung effects and different lung dose estimation assumptions (95% UCL estimates) (Table A-8). Higher risks are estimated for the more sensitive endpoint of minimal grade 1 lung effects (Table A-7). Additional analyses and risk estimates based on other methods and assumptions are provided in Section A.6.

The lung dose estimates depend on several factors, including (a) the deposition fraction; (b) the ventilation rate; (c) the clearance and retention kinetics; and (d) interspecies dose normalization. The deposition fraction is based on the airborne particle size (and, to some extent, shape for nonspherical particles), on the breathing pattern (nasal, oral, or a combination) and minute ventilation, and on the lung airway geometry. The ventilation rate depends on the species and on the activity level. Reference values are available for the average ventilation rates in rats and humans. The airborne particle size data (as reported in the animal studies) (Table A-2) were used to estimate the deposited lung dose of CNT in rats and humans, using spherical particle-based models. The long-term clearance kinetics have been well studied and validated for inhaled poorly soluble spherical particles in rats [Anjilvel and Asgharian 1995; Asgharian et al. 2001, 2003] and in humans [ICRP 1994; Kuempel et al. 2001a,b; Gregoratto 2010, 2011], but models specifically for CNT are not yet available.
# A.6 Sensitivity Analyses
This section examines some of the key parameter values used in the lung dose estimation and characterizes the quantitative influence of alternative models and assumptions. Two studies were available to evaluate the lung dose estimates in rats: Pauluhn and Ellinger-Ziegelbauer and Pauluhn provided cobalt tracer-based measurements of the CNT lung burden. These data were used to evaluate the MPPD model-based estimates. Because of prediction-equation changes from version 2.0 to 2.1 of the MPPD model, which affect the model-predicted rat alveolar deposition fraction (discussed further in Section A.2.2), the cobalt tracer-based estimates are compared to each model version (Section A.6.1.2). The influence of the assumed density on the CNT lung deposition fraction is quantified in addition to the evaluation of the MPPD model version 2.0 vs. 2.1 predictions (Section A.6.1.1). The derivation of allometric (body weight-scaled) lung ventilation rate estimates is also discussed (Section A.6.1.3).
# A.6.1.1 Lung Dosimetry Model-based Deposition Fraction and Dose Estimates
The fraction of inhaled CNT that is deposited in the respiratory tract is predicted from the aerosol characteristics. The deposition mechanisms include impaction, sedimentation, interception, and diffusion. The aerodynamic diameter, by definition, represents the gravitational settling (sedimentation) behavior of particles. The definition of aerodynamic diameter standardizes the shape (to spherical) and the density (to that of water, 1 g/ml). The aerodynamic diameter of a particle, regardless of its shape and density, is the diameter of a sphere with the same gravitational settling velocity as the particle in question. Conventionally, the aerodynamic diameter has been used as a reference diameter to represent total particle deposition in the respiratory system over a wide particle size range. Models such as MPPD use the particle density (specified by the user) to convert aerodynamic to physical diameter and vice versa, and in this manner capture the key particle deposition mechanisms for spherical particles.
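The following sketch illustrates the density effect using the simplified spherical relation d_ae = d_p x sqrt(density / 1 g/ml), neglecting slip and shape factors; the 2.5-µm MMAD is an illustrative value in the range reported in Table A-2:

```python
import math

def physical_diameter_um(d_aero_um, density_g_ml):
    """Simplified spherical conversion (slip and shape factors neglected):
    d_ae = d_p * sqrt(density / 1 g/ml)  =>  d_p = d_ae / sqrt(density)."""
    return d_aero_um / math.sqrt(density_g_ml)

# A 2.5-um MMAD agglomerate at two assumed bulk densities
for rho in (1.0, 0.2):
    print(f"density {rho} g/ml -> physical diameter "
          f"{physical_diameter_um(2.5, rho):.1f} um")
```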
However, for high-aspect-ratio particles and particles less than 500 nm in diameter, including some individual or airborne agglomerates of CNT, the aerodynamic diameters are much smaller than their diffusion-equivalent diameters (i.e., the measure of diameter that captures the diffusional deposition mechanism). When the different equivalent diameters may differ significantly, it is recommended that these property-equivalent diameters be measured experimentally, and that the measured diameters subsequently be used in the lung deposition models, to provide a reliable representation of each relevant deposition mechanism.
In the animal inhalation studies of CNT, the airborne particle sizes (MMAD) were in the micrometer size range (~1-3 µm) (Table A-2), and the airborne CNT structures in those studies were roughly spherical agglomerates, suggesting that deposition from diffusional mechanisms may be negligible and the aerodynamic diameter may provide a reasonable estimate of the deposition efficiency of CNT in the respiratory tract. However, the density of the airborne structures can affect the deposition efficiency predictions in MPPD. An evaluation of the effect of the CNT density assumptions on the rat alveolar deposition fraction is provided in this section.
In the rat model, MPPD version 2.1 (but not 2.0) accepts density values less than one. The MMAD (GSD) values reported in the subchronic rat inhalation studies varied slightly with particle concentration and sampling device. The central MMAD (GSD) values were used for the deposition fraction and lung burden estimates. The influence of the alternative particle size estimates was not fully evaluated but appeared to be minimal compared with other factors (MPPD rat model version and assumed density).
In addition, the MPPD model estimates of CNT lung burden in rats are compared to the measured CNT lung burdens from two rat inhalation studies. Pauluhn reported the amount of cobalt tracer in the rat lungs as well as the amount of Co that was matrix-bound to the CNT. The Ellinger-Ziegelbauer and Pauluhn 1-day inhalation study with 91-day post-exposure follow-up also reported Co data. These data provided a basis for comparison to lung burden estimates from the MPPD models.
Results in Table A-9 show that the rat deposition estimates (at the same density) vary by a factor of approximately two depending on the version of the MPPD model (2.0 or 2.1). As discussed in Section A.2.2, this is apparently because of a change in MPPD 2.1 in the deposition efficiency equations for the head region of the rat model, which reduces the deposition efficiency of the alveolar region. The lower density further reduces the alveolar deposition efficiency estimates. These findings suggest that rat alveolar lung dose estimates based on MPPD 2.1 (regardless of density assumption) would result in greater estimated potency of the CNT (because the response proportions do not change) and thus lower BMD(L) estimates in rats and lower OEL estimates (by approximately a factor of two) than those shown in the main analyses.

Footnotes to Tables A-10 and A-11: The mass of cobalt was estimated from Figure 6 in Pauluhn to be approximately 10, 125, 450, and 1,650 ng, respectively, by increasing exposure concentration. The CNT amount in the lungs was estimated from the reported 0.115% Co that was matrix-bound to the CNT; the remaining mass (99.885%) was assumed to be CNT. The CNT mass was thus calculated as CNT (ng) = Co (ng) / 0.00115; CNT (ng) × 0.001 µg/ng equals CNT (µg). The MMAD (GSD) values were 2.9 (1.8) and 2.2 (2.6), respectively, for 11 and 241 mg/m3, from Ellinger-Ziegelbauer and Pauluhn; the alveolar deposition fractions were 0.050 and 0.043, respectively, for 11 and 241 mg/m3, at an assumed density of 1 g/ml and a tidal volume of 2.45 ml. At an assumed density of 0.2 g/ml (tidal volume 2.45 ml), the alveolar deposition fractions were 0.019 and 0.026, respectively, for 11 and 241 mg/m3. The mass of cobalt at 91 d post-exposure was estimated from Figure 2 in Ellinger-Ziegelbauer and Pauluhn to be approximately 0.03 µg (11 mg/m3) and 0.39 µg (241 mg/m3), with the CNT amount in the lungs estimated from the reported 0.115% Co matrix-bound to the CNT.

Table A-10 shows that the cobalt-based estimate of CNT in the rat lungs is numerically between the deposited and retained dose estimated by MPPD 2.0 (density of 1). The MPPD 2.1 model (density of 0.2) underestimated the Co-based lung burden, even for the deposited dose estimate (assuming no clearance). These findings suggest that the model-based estimates of the deposited and retained rat lung doses in the main analyses (MPPD 2.0, density 1) provided reasonable estimates of the bounds on the estimated lung burden. Moreover, these findings are consistent with the animal toxicokinetic data showing that CNT overloads alveolar clearance at lower mass doses than particles with lower total surface area or volume lung dose, resulting in greater retention of CNT in the lungs of rats and mice than expected for other poorly soluble respirable particles. The finding that the cobalt-tracer estimates were between the deposited and retained lung doses is consistent with reduced clearance of CNT compared with spherical particles.
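As a minimal illustration of the tracer arithmetic in the footnotes above, the sketch below converts the cobalt masses read from Figure 6 of Pauluhn into estimated CNT lung burdens; note that the lowest exposure group works out to ~8.7 µg, the value cited later in this appendix.

```python
# Sketch: convert cobalt-tracer masses to estimated CNT lung burdens,
# assuming 0.115% of the CNT mass is matrix-bound cobalt (as reported).

CO_FRACTION = 0.00115  # matrix-bound Co mass fraction of the CNT

def cnt_mass_ug(co_mass_ng: float) -> float:
    """Estimate CNT mass (µg) from measured matrix-bound Co mass (ng)."""
    cnt_ng = co_mass_ng / CO_FRACTION   # CNT (ng) = Co (ng) / 0.00115
    return cnt_ng * 0.001               # ng -> µg

# Approximate Co masses read from Figure 6 of Pauluhn (ng), by exposure group
for co in (10, 125, 450, 1650):
    print(f"Co {co:>5} ng -> CNT ~{cnt_mass_ug(co):.1f} µg")
```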
Similar comparisons were made of the cobalt-tracer and lung-model estimated lung doses of MWCNT in a study of rats exposed for 1 day (Table A-11). Results show that the MPPD 2.0 model overestimated the retained lung dose of CNT by nearly a factor of two (at the higher dose) compared with the estimates based on the cobalt tracer in the Ellinger-Ziegelbauer and Pauluhn study (Table A-11). This suggests greater clearance than would be predicted at this high dose (241 mg/m3) based on overloading of lung clearance in the rat model (MPPD 2.0). If the retained lung dose estimated by cobalt tracer is the best estimate (closest to actual), this suggests that the BMD estimates using the model-estimated lung burdens may be overestimates (i.e., they underestimate potency because the response proportion is constant while the actual lung burden causing the effect may be lower). Some error may also exist in the cobalt-tracer measurements of the MWCNT mass (estimated from Figure 2 in Ellinger-Ziegelbauer and Pauluhn).

# A.6.1.3 Ventilation Rates

Minute ventilation was estimated from body weight using an allometric equation:

Equation A-6: VE = Exp[b0 + b1 × Ln(BW)]

where VE is the minute ventilation (L/min); BW is body weight (kg); and b0 and b1 are species-specific parameters.
For the rat, b0 and b1 are -0.578 and 0.821, respectively (Table 4-6 of US EPA). For a 300 g (0.3 kg) rat, the ventilation rate can be calculated as follows:
Equation A-7: 0.21 L/min = Exp[-0.578 + 0.821 × Ln(0.3)]
This is also the default minute ventilation in MPPD.
Rat mean body weights in Pauluhn were reported as 369 g (male) and 245 g (female) in the control (unexposed) group at 13 weeks. Because the alveolar septal thickening response data in Pauluhn were based on male rats only, a male rat minute ventilation of 0.25 L/min (calculated from Equation A-6) was used to estimate lung dose in that study.
Ma-Hock et al. did not report the rat body weight, although the rat strain (Wistar) and study duration (13 weeks) were the same as in Pauluhn. Because the granulomatous inflammation response data in Ma-Hock et al. were combined for the 10 male and 10 female rats in each dose group (since the response proportions were statistically consistent), an average rat body weight of 300 g was assumed, which is similar to the male and female average body weight of 307 g reported in Pauluhn and to the default value of 300 g in MPPD. Subsequently, body weights were obtained for the Ma-Hock et al. study. The average male and female rat body weight at 13 weeks was nearly identical (305 g) to that reported in Pauluhn. Other rat minute ventilation rates of 0.8-1 L/min per kg [Pauluhn 2010a, citing Snipes 1989] would result in somewhat higher lung dose estimates.
Based on Equation A-6, a minute ventilation of 0.21 L/min is calculated for the female and male rats in Ma-Hock et al., and 0.25 L/min for the male rats in Pauluhn. Minute ventilation is the product of tidal volume and breathing frequency. Assuming the same breathing frequency (102 min-1), a tidal volume of 2.45 ml is calculated for the Pauluhn study and used instead of the default value in MPPD 2.0 in estimating the rat lung dose in the Pauluhn data.
In humans, based on the MPPD 2.0 model, the default pulmonary ventilation rate is 7.5 L/min, based on default values of 12 min-1 breathing frequency and 625 ml tidal volume.
The "reference worker" ventilation rate is 20 L/min or 9.6 m3/8 hr (given 0.001m3/L, and 480 min/8-hr). In these estimates, 17.5 min-1 breath ing frequency and 1143 ml tidal volume were used in MPPD 2.0 to correspond to a 20 L/min reference-worker ventilation rate.
# A.6.2 Critical Effect Level Selection
A key step in the dose-response analyses of any risk assessment is estimating the critical effect level.
A critical effect level from an animal study is extrapolated to humans to derive a POD for low-dose extrapolation (Section A.2.3). A critical effect is typically the most sensitive effect associated with exposure to the toxicant (i.e., the effect observed at the lowest dose) which is adverse or is causally linked to an adverse effect. The early-stage lung effects discussed in Section A.2.1.3 are the critical effects used in both the main risk assessment and in these sensitivity analyses. The primary critical effect selected is the proportion of rats with minimal (grade 1) or higher severity of pulmonary inflammation or alveolar septal thickening (as reported by Ma-Hock et al. and Pauluhn). In addition, grade 2 (slight/mild) or greater effects (as reported in the same studies) were also evaluated as a response endpoint, since the interpretation of the histopathology for a slight or mild response may be less variable than that for a minimal response, and may also be more relevant to a potential adverse health effect in humans.

Abbreviations in Table A-12: LOAEL: lowest observed adverse effect level; BMC: benchmark concentration (maximum likelihood estimate) associated with 10% excess risk of the specified BMR; BMCL: 95% lower confidence limit of the BMC (based on a multistage model, polynomial degree 2, P = 0.88); BMR: benchmark response; nd: not determined. *Same response proportion per dose, and therefore the same BMD(L) estimates, for alveolar lipoproteinosis.
The critical effect levels in the main analysis are the BMD(L) estimates from the dose-response modeling of the rat estimated deposited or retained lung dose, extrapolated to the human-equivalent lung dose estimates. The working lifetime exposure concentration that would result in that equivalent lung dose was then calculated, assuming either particle-size-specific lung deposition only (assuming no clearance) or the estimated retained lung dose (assuming normal spherical-particle clearance).
In the main risk analysis, BMD methods were selected over NOAELs or LOAELs because of several statistical advantages (Section A.2). However, BMD(L) estimates may also be uncertain, for example, when the dose spacing is not optimal, as occurred in the CNT subchronic studies (Figures A-1 and A-4). In this sensitivity analysis, the NOAELs and LOAELs reported in the subchronic inhalation studies are used as the effect levels in evaluations of alternative methods to derive OEL estimates. A quantitative comparison of possible critical effect levels is shown in Table A-12. The BMDL estimates are generally similar to the NOAEL estimates (within a factor of approximately 1 to 4), which suggests that the BMDL estimates may be reasonable despite the sparse data in the low-dose region of the subchronic inhalation studies (Figure A-1).
A statistical analysis was performed to compare the NOAEL and BMD estimates (in this example, the BMD is an exposure concentration, or BMC). The maximum likelihood estimate of the excess risk (of a minimal or higher grade of alveolar septal thickening) at 0.1 mg/m3 is 0.10 (i.e., 10%), based on the BMD model fitted to the dose-response data in the Pauluhn study (Table A-12). Yet, 0.1 mg/m3 was identified as a NOAEL based on zero adverse response being observed. In order to assess the precision of the estimate of the excess risk associated with this NOAEL, the likelihood of the data in the NOAEL and control groups was reparameterized in terms of the respective sum and difference of the expected response proportions, and an upper confidence limit for the difference was assessed by inverting its likelihood ratio test statistic. When a nominal confidence coefficient of 95% for a two-sided interval was applied, a value of 0.17 (i.e., 17%) was obtained for the UCL of the difference. Hence, the results supporting the use of 0.1 mg/m3 as a NOAEL are also statistically consistent with the results from the BMD model, since the MLE of excess risk based on the model is less than the UCL.
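A sketch of the profile-likelihood UCL computation just described, assuming (hypothetically) 10 animals per group with zero responders in both the control and NOAEL groups; with those inputs the code returns roughly the 0.17 reported in the text. The group sizes are an assumption for illustration, not values stated in this section.

```python
import numpy as np
from scipy.stats import chi2
from scipy.optimize import minimize_scalar

def binom_loglik(p, x, n):
    """Binomial log-likelihood, guarding against log(0)."""
    p = min(max(p, 1e-12), 1 - 1e-12)
    return x * np.log(p) + (n - x) * np.log(1 - p)

def ucl_difference(x0, n0, x1, n1, conf=0.95):
    """Profile-likelihood UCL for the difference in response proportions
    (exposed minus control), by inverting the likelihood ratio test."""
    ll_hat = binom_loglik(x0 / n0, x0, n0) + binom_loglik(x1 / n1, x1, n1)
    crit = chi2.ppf(conf, df=1) / 2.0
    for d in np.arange(0.0, 1.0, 0.0005):
        # profile out the nuisance parameter p0, with p1 = p0 + d
        neg = lambda p0: -(binom_loglik(p0, x0, n0) + binom_loglik(p0 + d, x1, n1))
        res = minimize_scalar(neg, bounds=(0.0, 1.0 - d), method="bounded")
        if ll_hat + res.fun > crit:   # LR statistic exceeds critical value
            return d
    return 1.0

# 0/10 responders assumed in both the control and NOAEL (0.1 mg/m3) groups
print(f"95% UCL on excess risk: {ucl_difference(0, 10, 0, 10):.3f}")  # ~0.17
```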
In a standard risk assessment approach, BMDL estimates may be considered equivalent to a NOAEL for use as a POD in risk assessment. Once an effect level is selected in a given animal study, it is extrapolated to a human-equivalent effect level (e.g., as an 8-hr TWA concentration), or human-equivalent concentration (HEC). This HEC_POD (human-equivalent point of departure) is the POD for either extrapolating to a lower (acceptable) risk level or applying uncertainty factors in the derivation of an OEL. These steps are discussed further in Section A.6.3.
# A.6.3 Alternative OEL Estimation Methods
As mentioned in the previous section, a standard risk assessment method using animal data typically involves first identifying a critical effect level in animals (e.g., NOAEL or BMDL), which is the POD. A HEC_POD is estimated by extrapolating the animal dose to humans by accounting for the biological and physical factors that influence the lung dose across species. Lung dosimetry models can account for these interspecies differences and provide equivalent dose estimates in animals and humans, given the exposure concentration and duration, the breathing rates and patterns, and the physical properties of the aerosol. A simplified standard approach, in lieu of a lung dosimetry model, is to apply a total dosimetric adjustment factor to the animal effect level (Section A.6.3.1). It is useful to evaluate both approaches given that the lung dosimetry models have not been specifically validated for respirable CNT.

The interspecies normalizing factor (NF) adjusts for the size difference in the lung (surface area or volume) into which the CNT dose deposits. Studies of other inhaled particles or fibers are relevant to evaluating mechanisms that may also apply to CNT in the lungs. Possible dose metrics related to the modes of action for pulmonary inflammation and fibrosis include the CNT mass, surface area, or volume dose per alveolar epithelial cell surface area or per alveolar macrophage cell volume in each species. Normalizing the dose (e.g., NOAEL) across species to the total average alveolar macrophage cell volume in rat or human lungs is based on the experimental observation of overloading of alveolar clearance in rats and mice exposed to respirable poorly soluble particles or fibers.
# (a) Alveolar macrophage cell volume
At a sufficiently high particle dose, pulmonary clearance can become impaired due to overloading of alveolar macrophage-mediated clearance. In rats, the overloading dose has been observed as particle mass (~1 mg/g lung), volume (~1 µl/g lung for unit-density particles), or surface area (200-300 cm2 of particles per rat lung). On a volume basis, an overloading particle dose corresponds to approximately 6%-60% of total alveolar macrophage cell volume, at the points where overloading begins and is complete, respectively. The 60% value has been observed experimentally, although particle clearance impairment may start at a lower particle volume lung dose. Biological responses to overloading include accumulation of particle-filled macrophages in the alveoli, increased permeability of the epithelial cell barrier, persistent inflammation, increased particle translocation to the alveolar interstitium and lung-associated lymph nodes, as well as increasing alveolar septal thickening, lipoproteinosis, impaired lung function, and fibrosis [Muhle et al. 1990, 1991].
Although the overload mode of action in the rat has been well studied, the extent to which overloading is involved in human lung responses to inhaled particles is not as clear, due to observed differences in both the kinetics and the pattern of particle retention in the lungs of rats and humans. Whereas particle clearance in rats is first-order at doses below overloading, studies in workers have shown that human lung clearance of respirable particles is not first-order even at relatively low retained particle mass lung doses. Humans also apparently retain a greater portion of the particles in the alveolar interstitium, whereas rats retain more particles in the alveolar space. The greater interstitial particle retention may increase the dose to the target tissue for pulmonary fibrosis in humans relative to that for the same deposited dose in rat lungs. Given the differences in the particle clearance kinetics and retention patterns in rats and humans, normalizing the dose across species based on the total alveolar macrophage volume may not be the best dose metric for predicting adverse lung responses in humans.
# (b) Alveolar epithelial cell surface area
Another dose metric that may be relevant to the inflammatory and fibrotic lung responses is the particle or CNT dose per surface area of alveolar epithelial cells.
It is the epithelial cell surface with which particles interact when they migrate through the epithelial cell layer into the interstitium, and epithelial cells are also involved in the recruitment of inflammatory and fibrotic cells. For this reason, normalizing the dose based on the total alveolar epithelial cell surface area may be more predictive of the human lung response. However, since both the alveolar macrophages and the epithelial cells are involved in the lung responses to inhaled particles, some combination of dose metrics may ultimately be most predictive in this dynamic biological system.
In the absence of a more complete biologically based model, an evaluation of the quantitative influence of each assumed dosimetric mode of action (e.g., based on either the alveolar macrophage cell volume or the epithelial cell surface area) provides information on the sensitivity of the risk assessment and OEL derivation to the interspecies dose normalization factor. Thus, replacing the alveolar macrophage volume ratio in Equation A-10 with an NFa/NFr of 0.4 m2/102 m2 results in a total AF that is 4.5 × greater. That is:

Equation A-12: AF = (9.6 m3/0.102 m3) × (0.118/0.057) × (10/1) × (0.4 m2/102 m2) = 7.7

Equation A-13: HEC_NOAEL = 0.1 mg/m3 / 7.7 = 0.013 mg/m3
The larger AF results in a correspondingly smaller human-equivalent concentration. This illustrates that the risk estimates for CNT, as for other inhaled particles, are sensitive to the assumed mode of action underlying the interspecies normalizing factor.
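A minimal check of the arithmetic in Equations A-12 and A-13; the variable names below are informal labels for the ratio terms written in Equation A-12.

```python
# Check of Equation A-12 (total adjustment factor, AF) and Equation A-13
# (human-equivalent NOAEL). Each factor appears as written in Equation A-12.

air_intake_term = 9.6 / 0.102        # m3 (human) / m3 (rat)
deposition_term = 0.118 / 0.057      # alveolar deposition fractions
retention_term = 10 / 1              # assumed human/rat retention half-time ratio
normalizing_term = 0.4 / 102         # alveolar epithelial surface area, m2 (rat/human)

af = air_intake_term * deposition_term * retention_term * normalizing_term
print(f"AF = {af:.1f}")                       # ~7.7
print(f"HEC_NOAEL = {0.1 / af:.3f} mg/m3")    # ~0.013 mg/m3
```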
# A.6.3.2.2 Interspecies Dose Retention Factor

The retained dose to the target tissue is influenced by the clearance mechanism in the lung region in which the particles deposit. RT in Equation 2 (the kinetic factor in Pauluhn) is intended to account for the differences in the rat and human particle retention half-times. This factor is also dependent on the assumptions concerning the biological mode of action. In the rat, evidence suggests that doses of poorly soluble, low-toxicity particles below those causing overloading of lung clearance (i.e., at steady state) would not be associated with adverse lung effects. A steady-state lung burden means that the rate of particle deposition equals the rate of clearance, such that once the steady-state burden has been achieved, the lung burden would be the same over time if exposure conditions did not change. For example, if a steady-state lung burden was reached after subchronic (13-week) exposure to a given exposure concentration, then the chronic (2-yr) lung burden would be the same given the same rates of exposure and clearance. However, the steady-state lung burden may not have been entirely reached by 13 weeks in the rat or in an equivalent time in humans. Based on the rat overload mode of action, Pauluhn assumed that humans would achieve a steady-state lung burden if exposed at an equivalent total particle volume dose in the alveolar macrophages (over a human exposure duration of roughly 10 years, equivalent to a rat 3-month exposure). A ratio of 10/1 for the human/rat retention half-time was used, based on a simple first-order model of particle clearance from the lungs in both rats and humans. The volumetric dose of CNT associated with overloading in the rat was equivalent to a relatively low mass dose compared to other poorly soluble particles. That is, the human long-term retained lung burden would be expected to exceed a steady-state lung burden predicted from the rat model (i.e., low-dose first-order clearance with dose-dependent impairment, or overloading, of particle clearance after reaching a critical lung dose).
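A minimal sketch of the steady-state concept described above, using a one-compartment, first-order clearance model; the deposition rate and clearance half-time below are illustrative placeholders, not values from the studies cited.

```python
import math

# One-compartment lung burden model: dL/dt = d - k*L
# d: deposition rate (µg/day); k: first-order clearance rate constant (1/day)

def lung_burden(t_days: float, dep_rate: float, half_time_days: float) -> float:
    """Retained lung burden (µg) after t days of continuous exposure."""
    k = math.log(2) / half_time_days
    return (dep_rate / k) * (1 - math.exp(-k * t_days))

dep = 0.13      # illustrative deposition rate, µg/day
t_half = 60.0   # illustrative rat-like clearance half-time, days

for t in (91, 365, 730):  # 13 weeks, 1 year, 2 years
    steady_state = dep * t_half / math.log(2)
    print(f"t = {t:>3} d: L = {lung_burden(t, dep, t_half):.1f} µg "
          f"(steady state = {steady_state:.1f} µg)")
```

The printout shows the burden approaching its steady-state value; under first-order clearance, continued exposure beyond that point no longer raises the lung burden, which is the assumption behind extrapolating a 13-week rat burden to chronic exposure.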
An alternative approach evaluated was to use the MPPD 2.0 human lung dosimetry model to directly estimate the retained lung burden in humans over a working lifetime. This approach assumes a mode of action in which the cumulative retained particle dose is related to the adverse lung responses, regardless of the dose rate (i.e., the time required to reach that dose). The cumulative exposure concept (concentration × time), known as "Haber's Law," is a typical default assumption in risk assessment for long-term exposures in the absence of other data. Some studies in workers (coal miners) have shown that the working lifetime cumulative exposure and the retained lung dose are better predictors of pulmonary fibrosis than the average exposure concentration without consideration of duration. Yet, there remains uncertainty about how well a cumulative dose received over a short duration may predict the response to the same cumulative dose received over a longer duration (i.e., at a lower dose rate). The direction of error could go either way, depending on the biological mechanisms of response. For example, a lower dose rate may allow the lung defense mechanisms to adapt to the exposure (e.g., by increasing clearance or repair mechanisms), which could reduce the adverse response at a later time point. On the other hand, a longer time in which a substance is in contact with the tissue may exacerbate the response, resulting in a more severe effect at the later time point. The actual lung response may be some combination of these effects.
To evaluate the assumptions used to estimate the human and rat retention kinetics, estimates from the MPPD 2.0 lung dosimetry model were compared to the ratio of 10/1 for RTh/RTa used by Pauluhn. The rat and human lung dosimetry models take into account the ventilation rates, the deposition fraction by respiratory tract region (predicted from particle size and breathing rate and pattern, nasal vs. oronasal), and the normal average clearance rates. Using the particle size and breathing rate values for the Pauluhn 2010a study (Table A-2), the rat retained lung burden at the end of the 13-week exposure to 0.1 mg/m3 was estimated to be ~12 µg (Table A-10). This is similar to the 8.7 µg lung burden estimated from the cobalt tracer-based measurement (Table A-10).
Assuming that the rat has achieved a steady-state lung burden after 13-wk exposure to 0.1 mg/m3, the chronic lung burden should also be approximately 12 µg.§ Extrapolating the rat lung dose of 0.012 mg to the human-equivalent lung burden would result in either:
- 13.5 mg, estimated by dividing the rat lung dose by an interspecies NF for the average total alveolar macrophage cell volume (i.e., 3.03 × 10^10 µm3/3.49 × 10^13 µm3) (rat/human); or
- 3.0 mg, based on the average total alveolar epithelial cell surface area (0.4 m2/102 m2) (rat/human).
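The two bullet values follow from simple ratio scaling of the rat lung burden; the sketch below uses the unrounded rat burden of 11.7 µg (as shown in Table A-13) with the cell-volume and surface-area values quoted above.

```python
# Scale the rat steady-state lung burden to a human-equivalent burden
# using two alternative interspecies normalizing factors (rat/human).

rat_burden_mg = 0.0117                      # ~11.7 µg rat retained lung burden

nf_macrophage_volume = 3.03e10 / 3.49e13    # alveolar macrophage cell volume, µm3
nf_epithelial_area = 0.4 / 102              # alveolar epithelial surface area, m2

print(f"macrophage-volume basis: {rat_burden_mg / nf_macrophage_volume:.1f} mg")  # ~13.5
print(f"epithelial-area basis:   {rat_burden_mg / nf_epithelial_area:.1f} mg")    # ~3.0
```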
The associated 8-hr TWA concentration for 45 yr would result in human-equivalent lung burdens (estimated from the MPPD 2.0 human model) of 16 µg/m3 and 3.5 µg/m3, respectively, for the normalized lung burdens based on the alveolar macrophage cell volume or the alveolar epithelial cell surface area (Table A-13). The value of 16 µg/m3 is approximately 3-fold lower than the ~50 µg/m3 human-equivalent concentration to the rat NOAEL reported in Pauluhn (or 3.5 × lower than the 58 µg/m3 HEC_LOAEL obtained by applying an AF of 1.7 without rounding to 2). This difference is due to the approximately 3× higher retained lung dose estimate after a 45-year working lifetime (Table A-14) compared to that estimated as a 10-year steady-state lung burden. This suggests that the RT of 10 may underestimate the rat-to-human lung retention kinetics, and that a factor of 35 (i.e., 10 × 3.5) may be more realistic. Since the MPPD model already takes into account the ventilation rate and deposition fraction, the difference in the human retained lung dose estimates is due to the greater particle retention predicted by the MPPD model (which includes the ICRP clearance model) compared to that of the first-order kinetic model used to estimate the factor of 10/1.

§At 2 years, the MPPD model predicted a lung burden of 13 µg and a lung burden plus lung-associated lymph node burden of 23 µg.
When this same method was applied to the rat LOAEL of 0.1 mg/m3 in the Ma-Hock et al. subchronic inhalation study, but using the particle size data and rat minute ventilation specific to that study (Table A-2 and Section A.2.2), similar human-equivalent estimates were obtained. The slightly higher doses are due to the greater deposition fraction for the MWCNT in the Ma-Hock et al. study. In addition, the POD from the Ma-Hock et al. study is based on a LOAEL (vs. a NOAEL in Pauluhn), so an additional uncertainty factor would be applied (as discussed in the next section). In each case, the estimates using an interspecies normalizing factor based on the alveolar epithelial cell surface area are lower by a factor of approximately four. The estimates are summarized in Table A-13 (excerpted below). Excess risk estimates based on the short-term studies of SWCNT and MWCNT in rats and mice (Tables A-3 and A-4) are consistent with those from the rat subchronic inhalation studies (Tables A-5 and A-6). However, the uncertainty factors applied to the short-term studies would be expected to be higher (e.g., by a factor of 2) than those for the subchronic studies.

Table A-13 (excerpt). Human-equivalent estimates by interspecies normalizing factor:

| Study | Normalizing factor | Rat lung burden (µg) | Human-equivalent burden (mg) | 8-hr TWA (µg/m3) |
|---|---|---|---|---|
| Pauluhn | Alveolar macrophage volume | 11.7 | 13.5 | 16 |
| Pauluhn | Alveolar epithelial cell surface area | 11.7 | 3.0 | 3.5 |
| Ma-Hock et al. | Alveolar macrophage volume | 16.0 | 18 | 18 |

# A.6.4 Summary of Sensitivity Analyses Findings
Many of the areas of uncertainty in these risk estimates for CNT also occur in other standard risk assessments based on subchronic or short-term animal study data. Potential chronic effects of CNT are an important area of uncertainty because no chronic study results were available. Uncertainty exists about the estimated lung doses for the inhalation studies because of the lack of experimental evaluation or validation of lung dosimetry models to predict deposition and retention of CNT. Information is also limited on the relative potency of different types of CNT to cause specific lung effects in animals because of study differences. Despite the variability in the risk estimates across the various types of CNT, all of the risk estimates were associated with low mass concentrations (below the upper and lower LOQ of 7 or 1 µg/m3, respectively).
In conclusion, these sensitivity analyses show that the estimates of a health-based OEL are not strongly dependent on the BMD-based risk assessment methods, and the use of an alternative (POD/UF) method provides supporting evidence indicating the need for a high level of exposure containment and control for all types of CNT.
# A.7 Evaluation of Carbon Nanofiber Studies in Mice and Rats
Two in vivo studies of carbon nanofibers (CNF) in mice and rats have recently been published. In order to compare the lung responses to CNF observed in these studies, estimates of the lung doses normalized across species are provided in this section.

# A.7.1 Particle Characteristics

Both types of CNF were vapor grown, but obtained from different sources. In Murray et al., the CNF was supplied by Pyrograf Products, Inc. The chemical composition was 98.6% wt. elemental carbon and 1.4% wt. iron. CNF structures were 80 to 160 nm in diameter and 5 to 30 µm in length. The specific surface area (SSA) measured by BET was 35-45 m2/g; the effective SSA was estimated as ~21 m2/g. In DeLorme et al., the CNF was supplied by Showa Denko KK, Tokyo, Japan. The chemical composition was >99.5% carbon, 0.03% oxygen, and <0.003% iron. CNF structures were 40-350 nm (158 nm average) in diameter and 1-14 µm in length (5.8 µm average). The BET SSA was 13.8 m2/g.
# A.7.2 Experimental Design and Animals
The species and route of exposure also differed in the two studies. In Murray et al., six female C57BL/6 mice (8-10 wk of age, 20.0 ± 1.9 g body weight) were administered a single dose (120 µg) of CNF by pharyngeal aspiration; mice were examined at 1, 7, and 28 days post-exposure. In DeLorme et al., female and male Crl:CD Sprague Dawley rats (5 wk of age) were exposed to CNF by nose-only inhalation at exposure concentrations of 0, 0.54, 2.5, or 25 mg/m3 (6 hr/d, 5 d/wk, 13 wk). The rats were examined 1 d after the end of the 13-wk exposure and 3 months post-exposure. Body weights were reported as 252 ± 21.2 g (female) and 520 ± 63.6 g (male) in unexposed controls at 1 d post-exposure, and 329 ± 42.2 g (female) and 684 ± 45.8 g (male) in unexposed controls at 3 mo. post-exposure.
# A.7.3 Lung Responses
In mice, the lung responses to CNF included pulmonary inflammation (polymorphonuclear leukocytes, PMNs, measured in bronchoalveolar lavage fluid, BALF); PMN accumulation in CNF-exposed mice was 150-fold vs. controls on day 1. By day 28 post-exposure, PMNs in BALF of CNF-exposed mice had decreased to 25-fold vs. controls. Additional lung effects included increased lung permeability (elevated total protein in BALF) and cytotoxicity (elevated lactate dehydrogenase, LDH), which remained significantly elevated compared to controls at day 28 post-exposure. Oxidative damage (elevated 4-hydroxynonenal, 4-HNE, and oxidatively modified proteins, i.e., protein carbonyls) was significantly elevated at days 1 and 7, but not at day 28. Collagen accumulation at day 28 post-exposure was 3-fold higher in CNF-exposed mice vs. controls by biochemical measurements. Consistent with the biochemical changes, morphometric measurement of Sirius red-positive type I and III collagen in alveolar walls (septa) was significantly greater than in controls at day 28 post-exposure.
In rats, the respiratory effects observed in the DeLorme et al. study were qualitatively similar to those found in the Murray et al. study. The wet lung weights were significantly elevated compared to controls in male rats at 25 mg/m3 CNF and in female rats at 2.5 and 25 mg/m3 CNF at 1 day post-exposure; lung weights remained elevated in each sex in the 25 mg/m3 exposure group at 3 mo. post-exposure. Histopathologic changes at 1 day post-exposure included inflammation in the terminal bronchiolar and alveolar duct region in the 2.5 and 25 mg/m3 exposure groups, and interstitial thickening with type II pneumocyte proliferation in the 25 mg/m3 exposure group. Cell proliferation assays confirmed increased cell proliferation in that highest dose group in the subpleural, parenchymal, and terminal bronchiolar regions; the subpleural proliferation in this dose group did not resolve in the females by the end of the 3-month recovery period. Cell proliferation appeared to resolve in males after a 3-month recovery period but remained numerically higher in the parenchymal and subpleural regions. Histopathologic evidence of inflammation and the presence of fiber-laden macrophages were reported to be reduced but still present in the high-dose group after a 3-month recovery period. Inflammation within the alveolar space (as measured by PMN levels in BALF) was statistically significant only in the rats exposed to 25 mg/m3 CNF. However, the percent PMNs increased in a dose-responsive manner: 1.2 (± 0.81), 1.4 (± 0.79), 2.7 (± 0.67), and (± 2.0), respectively, in the 0, 0.54, 2.5, and 25 mg/m3 exposure groups. LDH and other BALF markers were elevated at the end of the 13-wk exposure only in the 25 mg/m3 exposure group, and LDH remained elevated at 3 mo. post-exposure in that group. The no observed adverse effect level (NOAEL) in rats was reported to be 0.54 mg/m3. The lowest observed adverse effect level (LOAEL) was reported to be 2.5 mg/m3 "...based on the minimal inflammation observed in terminal bronchioles and alveolar ducts of male and female rats."
The sample size and the sensitivity of the markers or assays are factors that could influence the statistical power and the likelihood of observing exposure-related effects in these animal studies. In Murray et al., six animals per group were used for the BAL analysis, histopathology evaluation, oxidative stress markers, and lung collagen measurements. Five animals per group were used for the BAL and cell proliferation assays in the DeLorme et al. study (male and female data were analyzed separately). The Murray et al. study used a more sensitive marker of interstitial fibrosis in measuring the average thickness of the alveolar connective tissue, while the DeLorme et al. study did not report using that assay.
# A.7.4 Effects in Other Tissues
In rats, CNF were observed in the nasal turbinates of the high-dose group (25 mg/m3) at 1 day post-exposure, which was accompanied by hyaline droplet formation in the epithelium; CNF persisted in the nasal turbinates at 3 mo. post-exposure in the high-dose group. In all exposure groups, CNF translocated to the tracheobronchial lymph nodes, and CNF fibers were seen in the brain, heart, liver, kidneys, spleen, intestinal tract, and mediastinal lymph nodes, but no associated histopathologic abnormalities were detected. In CNF-exposed mice, T cell mitogen (concanavalin A) responsiveness was also assessed.

In order to quantitatively compare the results of the two CNF studies in mice and rats, equivalent lung doses were estimated by accounting for differences in route of exposure and particle size characteristics and by normalizing to either the mass or the alveolar surface area of the lungs in each species. The respiratory tract region where the adverse effects were observed is the pulmonary (a.k.a. alveolar) region, which is where gas exchange occurs between the lungs and the blood circulatory system across the alveolar septal walls. In mice, the lung dose estimate is simply the proportion of the administered dose (by pharyngeal aspiration) that is estimated to deposit in the alveolar region. Mercer et al. reported that 81% of MWCNT administered by pharyngeal aspiration deposited in the alveolar region of the mouse. If this figure applies to the CNF reported in Murray et al., then approximately 97 µg of the 120 µg administered dose would be deposited in the alveolar region. In the absence of CNF-specific data, 100% alveolar deposition of the administered dose was also assumed.
In rats, the airborne particle size data were used to estimate the inhalable, deposited, and retained lung doses of CNF, based on the exposure concentrations and particle size characteristics reported. The multiple-path particle dosimetry (MPPD) model, version 2.90, was used to estimate these lung doses. MPPD version 2.11 was originally used to obtain some particle deposition estimates, but some output indicated errors in estimating the tracheobronchial regional deposited dose, which appeared to lower the alveolar deposition estimates. This issue was apparently resolved in the updated version (2.90).
Particle characteristic input values used in MPPD include the mass median aerodynamic diameter (MMAD), geometric standard deviation (GSD), and density. The following MMAD and GSD values were reported by airborne exposure concentration: 0.54 mg/m3 (MMAD 1.9 µm; GSD 3.1); 2.5 mg/m3 (MMAD 3.2 µm; GSD 2.1); and 25 mg/m3 (MMAD 3.3 µm; GSD 2.0). The density assumed for this CNF is 0.08 g/ml. Density was not reported in DeLorme et al. and was obtained from the manufacturer's data analysis sheet, which indicates it is the same material as that reported in DeLorme et al.
The default breathing rates and parameters were assumed, and the inhalability adjustment was selected. In MPPD 2.90, nonspherical particle shape can be taken into account in the respiratory tract deposition estimates, but some of the required input parameters (GSD of structure diameter and length, and their correlation) were not reported in DeLorme et al. So, the spherical particle assumption (aspect ratio of 1.0) was used, which may not be unreasonable given that the fiber interception mechanism may matter less for CNF structures of 5.8 µm length than for longer fibers. The default breathing parameters (including 2.1 ml tidal volume and 102 breaths/min) may be reasonable for the female Sprague Dawley rats in the DeLorme et al. study, based on the similar body weight (300 g) associated with the default values, but may be too low for the male Sprague Dawley rats. The average body weights in control rats (air-only exposed) at the end of the 13-wk exposure period and the 90-d post-exposure period, respectively, were 252 and 329 g (females) and 520 and 684 g (males). The retained lung burden at the end of the 13-wk exposure was also estimated in MPPD 2.90 using the particle size data for each exposure concentration (the MMAD and GSD values reported above).
The lung dose estimates in rats and mice were normalized by the lung weight or alveolar surface area to estimate the equivalent dose across species. The average lung weights of rats were those reported in DeLorme et al. at 1 d post-exposure in the control rats (1.9 g and 1.3 g in males and females, respectively). The average mouse lung weight was 0.15 g. The average alveolar surface area assumed for the rat lungs was 0.4 m2, and that of mice was 0.055 m2.
The total deposited CNF dose in the alveolar region of rats in the DeLorme et al. study was estimated with the following equation:

Deposited alveolar dose (mg) = exposure concentration (mg/m3) × minute ventilation (m3/min) × exposure duration (min) × alveolar deposition fraction

The inhalable fraction estimates of CNF in rats were 0.79, 0.73, and 0.72, respectively, at the reported particle sizes for concentrations of 0.54 mg/m3 (MMAD 1.9 µm; GSD 3.1), 2.5 mg/m3 (MMAD 3.2 µm; GSD 2.1), and 25 mg/m3 (MMAD 3.3 µm; GSD 2.0) in DeLorme et al. (based on MPPD v. 2.90 [ARA 2009], as described in Section A.7.4). The alveolar deposition fraction estimates were 0.0715, 0.0608, and 0.054, respectively, for the 0.54, 2.5, and 25 mg/m3 exposure concentrations.
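A rough illustration of the deposited-dose equation above: the minute ventilation (0.21 L/min, the MPPD rat default) is an assumption here, while the deposition fractions and exposure schedule are those given in the text, so the printed doses are approximate.

```python
# Sketch: deposited alveolar CNF dose in rats over the 13-wk exposure,
# using the equation above with the MPPD default rat minute ventilation.

VE_M3_PER_MIN = 0.21 / 1000.0   # 0.21 L/min -> m3/min (assumed)
MINUTES = 6 * 60 * 5 * 13       # 6 hr/d, 5 d/wk, 13 wk

groups = [                      # (exposure, mg/m3; alveolar deposition fraction)
    (0.54, 0.0715),
    (2.5, 0.0608),
    (25.0, 0.054),
]

for conc, df_alv in groups:
    dose_mg = conc * VE_M3_PER_MIN * MINUTES * df_alv
    print(f"{conc:>5} mg/m3 -> deposited alveolar dose ~{dose_mg * 1000:.0f} µg")
```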
The normalized dose estimates in mice and rats (as CNF mass per alveolar surface area or per mass of lungs) and the associated lung responses are shown in Tables A-15 and A-16. In mice, these lung dose estimates are similar to or higher than the deposited lung dose estimated in the rat at the LOAEL (2.5 mg/m3), but less than the deposited lung doses estimated in rats at the highest concentration (25 mg/m3) (Tables A-15 and A-16). The mouse deposited lung burden estimates are higher than the rat retained lung burden estimates at all doses, assuming spherical-particle model clearance in MPPD 2.90. If CNF is cleared in a similar manner as that reported for MWCNT in Pauluhn, the actual retained lung dose in rats may be intermediate between the estimated deposited and retained lung burdens. Thus, the mouse fibrotic lung response was observed at an administered lung dose that was similar to, or higher than, the rat lung doses estimated at the LOAEL. This suggests a roughly similar dose-response relationship to CNF in the rat and mouse lungs, based on the limited data in these two studies.
As discussed above (Section A.7.3), the mouse lung responses to CNF (at a 120 µg dose) included alveolar septal thickening identified as pulmonary fibrosis, based on the collagen deposition observed by Sirius red staining and the measured thickness of the alveolar connective (septal) tissue. In the DeLorme et al. study, qualitatively similar lung responses were observed at 25 mg/m3 (as discussed in Section A.7.3). DeLorme et al. did not report fibrosis at 25 mg/m3, although the description of the responses is consistent with the early-stage fibrosis reported in the Murray et al. study.
NOAELs were reported for one type of CNF in DeLorme et al. and for one type of MWCNT in Pauluhn, which were 0.54 and 0.1 mg/m3, respectively. It follows that the human-equivalent working lifetime exposure estimates at the NOAEL would be roughly 5-fold higher for the CNF than for the MWCNT (although not exactly, due to particle size differences and lung deposition estimates). Table A-13 shows estimates of human-equivalent concentrations at the effect levels in the Pauluhn and Ma-Hock subchronic inhalation studies, based on different assumptions in extrapolating the rat lung dose to humans. The application of uncertainty factors (e.g., Table A-14) to the CNF used in the DeLorme et al. study would result in estimated working lifetime no-effect levels in humans of roughly 1-4 µg/m3.

Footnotes to Tables A-15 and A-16: In rats, the pulmonary deposition fraction and 13-wk retained lung burdens were estimated from MPPD 2.9. In mice, the estimates assume 100% alveolar deposition of the dose administered by pharyngeal aspiration; if 81% alveolar deposition is assumed, as for MWCNT, these estimates would be 1.8 mg/m2 lung (Table A-15) and 0.65 mg/g lung (Table A-16).

Occupational health surveillance has been recommended for workers exposed to engineered nanomaterials [Trout and Schulte 2009], and NIOSH continues to recommend occupational health surveillance as an important part of an effective risk management program.
Hazard surveillance includes elements of hazard and exposure assessment:
- The hazard assessment involves reviewing the best available information concerning the toxicity of materials. Such an assessment may come from databases, texts, published literature, or available regulations or guidelines (e.g., from NIOSH or the Occupational Safety and Health Administration).
Human studies, such as epidemiologic investigations and case series or reports, as well as animal studies, may also provide valuable information. In most instances involving CNT, there are limited toxicological data and a lack of epidemiologic data with which to make a complete hazard assessment.
- The exposure assessment involves evaluating relevant exposure routes (inhalation, ingestion, dermal, and/or injection), amount, duration, and frequency (i.e., dose), as well as whether exposure controls are in place and how protective they are. When data are not available, this will be a qualitative process.
# B.2 Medical Surveillance

Medical surveillance targets actual health events or a change in a biologic function of an exposed person or persons. Medical surveillance involves the ongoing evaluation of the health status of a group of workers through the collection and aggregate analysis of health data, for the purpose of preventing disease and evaluating the effectiveness of intervention programs (primary prevention). NIOSH recommends medical surveillance of workers when they are exposed to hazardous materials and therefore are at risk of adverse health effects from such exposures. Medical screening is one form of medical surveillance that is designed to detect early signs of work-related illness in individual workers by administering tests to apparently healthy persons to detect those with early stages of disease or risk of disease. Medical screening generally represents secondary prevention.
Medical surveillance is a second line of defense behind the implementation of engineering, administrative, and work practice controls (including personal protective equipment). Integration of hazard and medical surveillance is important to an effective occupational health surveillance program, and surveillance of disease or illness should not proceed without a hazard surveillance program in place.
# B.2.1 Planning and Conducting Medical Surveillance
Important factors when considering medical surveillance include the following:
1. A clearly defined purpose or objective.
2. A clearly defined target population.
3. The availability of testing modalities to accomplish the defined objective. Testing modalities may include such tools as questionnaires, physical examinations, and medical testing.
A clear plan should be established before beginning a medical surveillance program. The plan should include the following:
1. A rationale for the type of medical surveillance.
2. Provisions for interpreting the results.
3. Presentation of the findings to workers and management of the affected workplace.
4. Implementation of all the other steps of a complete medical surveillance program.
The elements for conducting a medical surveillance program generally include the following:
1. An initial medical examination and collection of medical and occupational histories.
2. Periodic medical examinations at regularly scheduled intervals, including specific medical screening tests when warranted.
3. More frequent and detailed medical examinations, as indicated, based on findings from these examinations.
4. Post-incident examinations and medical screening following uncontrolled or nonroutine increases in exposures, such as spills.
5. Worker training to recognize symptoms of exposure to a given hazard.
6. A written report of medical findings.
7. Employer actions in response to identification of potential hazards.

NIOSH Method 5040 is based on a thermal-optical analysis [Birch and Cary 1996] for organic and elemental carbon (OC and EC). The analysis quantifies total carbon (TC) in a sample as the sum of OC and EC. The method was developed to measure diesel particulate matter (DPM) in occupational settings, but it can be applied to other types of carbonaceous aerosols. It is widely used for environmental and occupational monitoring.
For the thermal-optical analysis, a portion (typically a 1.5-cm2 rectangular punch) of a quartz-fiber filter sample is removed and placed on a small quartz spatula. The spatula is inserted into the instrument's sample oven, and the oven is tightly sealed. Quartz-fiber filters are required for sample collection because temperatures of 850 °C and higher are employed during the analysis. The thermal-optical analyzer is equipped with a pulsed diode laser and photodetector that permit continuous monitoring of the filter transmittance. This optical feature corrects for the "char" that forms during the analysis because of carbonization of some materials.
Thermal-optical analysis proceeds in inert and oxidizing atmospheres. In both, the evolved carbon is catalytically oxidized to carbon dioxide (CO2). The CO2 is then reduced to methane (CH4), and the CH4 is quantified with a flame ionization detector (FID). The OC (and carbonate, if present) is first removed in helium as the temperature is increased to a preset maximum. If sample charring occurs, the filter transmittance decreases as the temperature is stepped to the maximum. After the OC is removed in helium, an oxygen-helium mix is introduced, and the temperature is again stepped to a maximum (850 °C or higher, depending on the sample) to effect combustion of the remaining material. As the light-absorbing carbon (mainly EC and char) is oxidized from the filter, the filter transmittance increases. The split between OC and EC is assigned when the initial (baseline) value of the filter transmittance is reached. All carbon removed before the OC-EC split is considered organic, and that removed after the split is considered elemental.
If no charring occurs, the split is assigned before removal of EC. If the sample chars, the split is not assigned until enough light-absorbing carbon is removed to increase the transmittance to its initial value.
OC and EC results are reported as micrograms per square centimeter (µg/cm2) of sample deposit.
The total OC and EC on the filter are calculated by multiplying the reported values by the deposit area.
Because only a portion of the sample is analyzed, it must be representative of the entire deposit. Thus, a homogeneous deposit is assumed. The entire filter must be analyzed (in portions if a 37-mm filter is used) if the filter deposit is uneven.
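The per-area results convert to filter totals and air concentrations with simple arithmetic, as described above; in the sketch below the EC loading and air volume are illustrative values, and the 8.55-cm2 deposit area is a typical figure for a 37-mm cassette rather than one stated here.

```python
# Sketch: convert a reported EC result (µg/cm2) to a filter total and an
# air concentration. Loading, deposit area, and air volume are examples.

ec_per_cm2 = 0.5         # reported EC, µg/cm2 (from analysis of a filter portion)
deposit_area_cm2 = 8.55  # typical deposit area of a 37-mm cassette filter
air_volume_L = 960       # e.g., 2 L/min for 480 min

ec_total_ug = ec_per_cm2 * deposit_area_cm2          # assumes an even deposit
ec_conc_ug_m3 = ec_total_ug / (air_volume_L / 1000)  # L -> m3

print(f"EC on filter: {ec_total_ug:.2f} µg")
print(f"EC air concentration: {ec_conc_ug_m3:.1f} µg/m3")
```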
# C.2 Method Evaluation
The reported accuracy of NIOSH 5040 is based on analysis of TC in different sample types. Accuracy was based on TC because there is no analytical standard for determining the OC-EC content of a complex carbonaceous aerosol. In the method evaluation, five different organic compounds were analyzed to examine whether the instrument response is compound dependent. Linear regression of the data (43 analyses total) for all five compounds gave a slope and correlation coefficient (r) near unity, indicating a compound-independent response. Eight different carbonaceous materials also were analyzed by three methods: in-house by thermal-optical analysis and by two other methods used by two external laboratories. Sample materials included DPM, coals, urban dust, and humic acid. Thermal-optical results agreed well with those reported by the two other laboratories. The variability of the TC results for the three laboratories ranged from about 1%-7%. These findings demonstrate that carbon can be accurately quantified irrespective of the compound or sample type.
In sampling DPM, different samplers gave comparable EC results because particles from combustion sources are generally less than 1 µm in diameter.
As such, the particles are collected with high efficiency (near 100%) and evenly deposited on the filter. In the method evaluation, different sampler types (open-face 25-mm and 37-mm cassettes, Model 298 personal cascade impactors, and four prototype impactors) were used to collect diesel exhaust aerosol at an express mail facility. The relative standard deviation (RSD) for the mean EC concentration was 5.6%. Based on the 95% confidence limit (19%; 13 degrees of freedom, n = 14) on the accuracy, the NIOSH accuracy criterion was fulfilled. Variability for the OC results was higher (RSD = 12.3%), which is to be expected when different samplers are used to collect aerosols that contain semi-volatile (and volatile) components, because these may have a filter face velocity dependence. The method precision (RSD) for triplicate analyses (1.5-cm2 filter portions) of a 37-mm quartz-fiber filter sample of DPM was normally better than 5%, and often 2% or less.
In the method evaluation, the limit of detection (LOD) was estimated in two ways: (1) through analysis of low-level calibration standards, and (2) through analysis of pre-cleaned media blanks. In the first approach, OC standard solutions (sucrose and ethylenediaminetetraacetic acid [EDTA]) covering a range from 0.23 to 2.82 µg C (or from 0.15 to 1.83 µg C per cm2 of filter) were analyzed. An aliquot (usually 10 µL) of the standard was applied to one end of a 1.5-cm2 rectangular filter portion that was pre-cleaned in the sample oven just before application of the aliquot. The filter portion was pre-cleaned to remove any OC contamination, which can greatly increase the EC LOD when TC results are used for its estimation.
After cleaning the filter portion, metal tweezers are used to remove from the sample oven the quartz spatula that holds the portion. External to the oven, the spatula is held in place by a metal bracket such that the standard can be applied without removing the filter portion from the spatula. This avoids potential contamination from handling.
Results of linear regression of the low-level calibration data were used to calculate the LOD as 3σy/m, where σy is the standard error of the regression and m is the slope of the regression line. TC results were used rather than OC because the pyrolysis correction may not account for all of the char formed during analysis of the standard (because of low sample loading and/or the position of the aliquot in the laser). If not, a small amount of the OC will be reported as EC, introducing variability in the OC results and increasing the LOD. The LOD estimated through the linear regression results was 0.24 µg C per filter portion, or 0.15 µg/cm2.
A simpler approach for LOD determination is through analysis of media blanks. In the method evaluation, TC results for pre-cleaned, 1.5-cm2 portions of the filter media were used to calculate the LOD estimate. The mean (n = 40) TC blank was 0.03 ± 0.1 µg TC. Thus, the LOD estimated as three times the standard deviation for pre-cleaned media blanks (3σblank) was about 0.3 µg C. This result agrees well with the value (0.24 µg C) estimated through analysis of the standard solutions.
Considering a 960-L air sample collected on a 37-mm filter and a 1.5-cm2 sample portion, this LOD translates to an air concentration of about 2 µg/m3 (0.2 µg/cm2 × 8.55 cm2 / 0.960 m3 = 1.78 µg/m3), corresponding to the reported upper LOQ of about 7 µg/m3 (LOQ = 3.3 × LOD).
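Continuing the arithmetic above, a short sketch of the LOD-to-air-concentration conversion; the 8.55-cm2 deposit area is a typical value for a 37-mm cassette, assumed here rather than stated in the source.

```python
# Sketch: translate the analytical LOD (µg C per 1.5-cm2 filter portion)
# into an air-concentration LOD and LOQ for a typical 37-mm filter sample.

lod_ug_per_portion = 0.3   # ~3 x SD of pre-cleaned media blanks
portion_area_cm2 = 1.5
deposit_area_cm2 = 8.55    # assumed typical 37-mm cassette deposit area
air_volume_m3 = 0.960      # 960-L air sample

lod_conc = (lod_ug_per_portion / portion_area_cm2) * deposit_area_cm2 / air_volume_m3
loq_conc = 3.3 * lod_conc

print(f"LOD ~{lod_conc:.2f} µg/m3; LOQ ~{loq_conc:.1f} µg/m3")  # ~1.8 and ~5.9
```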
As with all analytical methods, the LOD is a varying number. However, the EC LOD (about 2 µg/m3, or an LOQ of 7 µg/m3) reported for NIOSH Method 5040 is a high estimate. As discussed in Section 6 of the CIB, it was based on analysis of pre-cleaned media blanks from different filter lots, over a 6-month period, and by different analysts at two different laboratories. Further, variability for the TC results, rather than the EC results, was used to estimate the LOD. These combined factors gave a conservative (high) estimate of the EC LOD. More typical values, under different sampling conditions, are discussed in Section 6.1 of the CIB.

When the results of the initial method evaluation were published, an interlaboratory comparison was not possible because the thermal-optical instrument was available in only one laboratory. After additional laboratories acquired thermal-optical instruments, a round-robin comparison was conducted. Matched sets of filter samples containing different types of complex carbonaceous aerosols were distributed to 11 laboratories. Six of the eleven analyzed the samples according to NIOSH 5040, while five used purely thermal (i.e., no char correction) methods. Good interlaboratory agreement was obtained among the six laboratories that used NIOSH 5040. In the analysis of samples containing DPM, the variability (RSD) for the EC results ranged from 6% to 9%. Only low EC fractions were found in wood and cigarette smoke; thus, these materials pose minimal interference in the analysis of EC. In addition, only minor amounts of EC were found in two OC standards that char: about 1% for sucrose and 0.1% for the disodium salt of ethylenediaminetetraacetic acid (EDTA). Two aqueous solutions of OC standards were included in the comparison as a check on the validity of the char correction and the accuracy of the TC results. Variability (RSD) of the TC results for the two standard solutions and five filter samples ranged from 3% to 6%.
A second interlaboratory comparison study using NIOSH 5040 was also conducted. Seven environmental aerosol samples were analyzed in duplicate by eight laboratories. Four samples were collected in U.S. cities, and three were collected in Asia. Interlaboratory variability for the EC results ranged from 6% to 21% for six samples having EC loadings from 0.7 to 8.4 µg/cm2. Four of the six had low EC loadings (0.7 µg/cm2 to 1.4 µg/cm2). The variability for the OC results ranged from 4% to 13% (OC loadings ranged from about 1 to 25 µg/cm2). Results for TC were not reported, but the variability reported for the OC results should be representative of that for TC, because the samples were mostly OC (75% to 92%). Similar findings were also reported by Chai et al. from seven laboratories in which analysis was performed using Method 5040 on four sample filter sets containing OC and EC. The summary RSDs for EC results were <12% for all four sample sets.

If carbonate is present in a sample, it is important to ensure that all of it is removed during the first stage of the analysis. If it is not completely removed (because of high loading), the sample should be acidified.
# C.5 Organic Carbon Sampling Artifacts

Problems commonly referred to as "sampling artifacts" have been reported when collecting particulate OC on quartz-fiber filters. These artifacts do not affect the EC results, but they cause positive or negative bias in the measurement of particulate OC (and TC). Eatough et al. observed loss of semi-volatile OC from particles during sampling, referred to as the "negative" or evaporation artifact. This artifact causes a negative bias in the particulate OC (and TC) concentration, because OC initially collected as condensed matter is subsequently lost through evaporation from the filter during sampling. Conversely, several studies have demonstrated a "positive" or adsorption artifact because of filter adsorption of gas-phase OC. A quartz-fiber filter collects airborne particulate matter and allows gases and vapors to pass through, but some adsorption of gas-phase (and vapor) OC occurs, resulting in overestimation of the true airborne particulate OC concentration.
Most of the studies on sampling artifacts apply to environmental air sampling. Occupational sampling methods and conditions are generally much different from environmental ones. Environmental samples are usually collected at much higher face velocities: 20-80 cm/s, as opposed to 3-4 cm/s for occupational samples. In addition, the concentrations of carbon are much lower in environmental air than in most occupational settings, and the types of aerosols sampled are different (e.g., aged aerosol from multiple environmental sources, as opposed to aerosols close to the source). These differences are important because OC sampling artifacts depend upon conditions such as filter face velocity, air contaminants present, sampling time, and filter media. Given the much lower filter face velocities typical of occupational sampling, adsorption (i.e., the positive artifact) is expected to dominate over evaporation for occupational samples. Turpin et al., Kirchstetter et al., Noll and Birch, and Schauer et al. have reported adsorption as the dominant artifact.
To correct for the positive adsorption artifact, tandem quartz filters have been applied. When sampling with tandem filters, particulate matter is collected by the first filter, while both the first and second filters are exposed to and adsorb gaseous and vaporous OC. For the correction to be effective, both filters must be in equilibrium with the sampled airstream, adsorb the same amount of gas/vapor OC, and not have a significant amount of OC loss through evaporation. The OC on the second filter can then be subtracted from the OC on the first filter to account for the adsorbed OC. Several studies have found the tandem filter correction to underestimate the adsorption artifact, while others have shown effective correction.
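The arithmetic of the correction is simple subtraction. The sketch below (Python; the loadings and function name are hypothetical, not values from the studies cited) illustrates it under the stated assumptions of equal adsorption on both filters and negligible evaporation loss:

```python
def tandem_filter_correction(oc_top_ug_cm2: float, oc_bottom_ug_cm2: float) -> float:
    """Correct particulate OC for the positive adsorption artifact.

    The top filter carries particulate OC plus adsorbed gas/vapor OC;
    the backup filter, which sees essentially no particles, carries only
    adsorbed OC. Subtracting the two estimates the particulate OC,
    assuming both filters adsorbed equal amounts and evaporation loss
    is negligible.
    """
    corrected = oc_top_ug_cm2 - oc_bottom_ug_cm2
    # A negative difference would indicate a blank or evaporation problem.
    return max(corrected, 0.0)

# Example: 12.0 ug/cm2 OC on the top filter, 2.5 ug/cm2 on the backup filter
print(tandem_filter_correction(12.0, 2.5))  # -> 9.5 ug/cm2 particulate OC
```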
Air samplers containing a Teflon® and a quartz filter also have been used for correction of the positive OC artifact. In theory, the Teflon top filter collects particulate matter with negligible OC gas/vapor adsorption, so only the quartz filter beneath it adsorbs gas and vapor OC. Studies on tandem filter corrections have shown the quartz filter beneath Teflon to have a greater OC value than quartz beneath quartz. This finding was attributed to the quartz beneath quartz not reaching equilibrium with the sampling stream and underestimating the adsorption artifact. Others have attributed it to the evaporation artifact being more prevalent when using a Teflon filter instead of a quartz filter, and they reported the quartz behind Teflon to overestimate the adsorption artifact.

Several studies have shown no difference when using either type of correction.
Noll and Birch conducted studies on OC sampling artifacts for occupational samples to test the accuracy of the tandem quartz-filter correction.
In practice, using two quartz filters for air sampling is preferable to the Teflon-quartz combination because both the collection and blank filters are in the same sampler. The tandem quartz correction effectively reduced positive bias for both laboratory and field samples. Laboratory samples were collected under conditions that simulated DPM sampling in underground mines. Without correction, TC on the sample filter was 30% higher than the actual particulate TC for 50% of the samples, but it was within 11% of the particulate TC after the tandem quartz-fiber correction. For field samples, this correction significantly reduced positive bias due to the OC adsorption artifact; little artifact effect was found after the correction was made.

Method 5040 was developed to measure DPM in occupational environments, but it can be applied to other types of carbonaceous aerosols. When applied to materials such as carbon black or CNT/CNF, particle deposition on a filter may be more variable because particles in these materials are much larger than DPM. Variability depends on the sampler type, and as expected, different samplers (e.g., cyclones, open- and closed-face cassettes) will give different air concentration results, depending on the particle size distribution. Diesel emissions, and combustion aerosols generally, are composed of ultrafine (<100 nm diameter) particles. Because of this small size, DPM normally deposits evenly across the quartz-fiber filter used for sample collection. As already discussed, even deposition is required because only a portion of the filter is normally analyzed; that portion must be representative of the entire sample deposit.
When applying NIOSH 5040 to CNT/CNF, it is important to verify an even filter deposit so that an accurate air concentration (based on results for the filter portion) can be calculated. Alternatively, the entire filter can be analyzed if the deposit is uneven, but this requires analysis of multiple portions of a 37-mm filter because of the relatively small diameter (about 1 cm) of the carbon analyzer's quartz sample oven. Quality assurance procedures should include duplicate analyses of the 37-mm filter to check precision, especially if the deposit appears uneven. If a 25-mm filter is used, the entire filter can be analyzed, which improves the LOD and obviates the need for an even deposit; however, a repeat analysis (or other chemical analysis) of the sample is not possible if the entire filter is analyzed. In addition, the filter must be cut into portions, and the portions must be properly loaded in the analyzer so the sample transmittance can be monitored. Additional details on the evaluation and use of NIOSH 5040 are provided elsewhere.
As discussed in the CIB, NIOSH 5040 has been applied in several field studies on CNT/CNF.

In one study, it was employed for area monitoring at a laboratory facility that processes CNF in the production of polymer composites. Carbon nanofibers and CNT have negligible (if any) OC content, making EC a good indicator of these materials. Survey results were reported in terms of TC, which is subject to OC interferences, but the OC results were blank-corrected by the tandem filter method described in the preceding section (organic carbon sampling artifacts) to minimize the positive sampling artifact. Further, based on the thermal profiles for the air samples and the bulk materials (CNF and composite product), the blank-corrected TC was a good measure of the CNF air concentration, except in an area where a wet saw was operating. In that area, TC was a measure of the composite aerosol released during the sawing operation, which contained a high OC fraction due to the composite matrix.
There are several issues and limitations when analyzing dusts generated during cutting, sanding, or grinding CNT/CNF composites. First, the accuracy of determining the EC fraction of a polymer composite is questionable and expected to vary, depending on polymer type and sample loading. Further, EC in both the polymer and the bulk CNT/CNF materials will be measured (i.e., not speciated) if both are present. In addition, the EC loading in a polymer composite is usually a low percentage (e.g., 1%). Therefore, if the composite dust is the only EC source, and if its EC content is determined accurately, an EC concentration of 2 µg/m3 would correspond to a dust concentration (at 1% EC) of 200 µg/m3, considerably higher than the EC concentration. As such, the sample can easily be overloaded with OC because of the high relative OC content, which can both overload the analyzer and cause positive bias in the EC result. Further, in a composite particle, the CNT/CNF is bound within a polymer (or resin) matrix, which is dissimilar to a particle of unbound material. An effort to improve the analysis of samples containing dusts of polymer composites is ongoing; however, in the context of the NIOSH REL, the intended application is CNT/CNF in powder form, purified or unpurified. Whenever possible, a bulk sample of the material (and, if available, other materials that may be aerosolized) should be analyzed, as the thermal properties of CNT/CNF are material dependent (e.g., CNF, SWCNT, MWCNT, functionalized or not functionalized). The OC-EC split for a bulk material is not reliable because it depends on how the powder is applied to the filter punch, but a small amount of the CNT/CNF should be analyzed to determine the onset of oxidation of the material and confirm its complete oxidation.
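To make the dust-loading arithmetic explicit (the 2 µg/m3 and 1% figures come from the text above; the function name and validity check are ours), a minimal Python sketch:

```python
def dust_conc_from_ec(ec_ug_m3: float, ec_fraction: float) -> float:
    """Convert a measured EC air concentration to the implied composite-dust
    concentration, assuming the composite dust is the only EC source and its
    EC mass fraction is known (e.g., 0.01 for a 1% EC content)."""
    if not 0 < ec_fraction <= 1:
        raise ValueError("EC fraction must be in (0, 1]")
    return ec_ug_m3 / ec_fraction

# Example from the text: 2 ug/m3 EC at a 1% EC content implies 200 ug/m3 dust
print(dust_conc_from_ec(2.0, 0.01))  # -> 200.0
```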
NIOSH investigators also conducted extensive air monitoring at a facility that manufactures and processes CNFs. Both personal breathing zone and area samples were collected. To evaluate the method precision, paired samples were collected and repeat analyses of the filters were performed. The relative percent difference (RPD) and RSD (%) for repeat analyses of 12 samples collected in different areas of the facility are listed in Table 1. Total, thoracic, and respirable dust samples are included. Total (inhalable equivalent) dust was collected with 37-mm cassettes, while cyclones were used to collect thoracic and respirable dust. The RPD was determined by analyzing either two punches from the same filter (duplicates) or one punch from each of two different filters (paired samplers); the RSD was determined by analyzing one filter in triplicate. The precision for the EC results ranged from about 3% to 14%, except for one respirable sample, where the RPD was about 22%. Higher variability for the latter may relate to spatial variation, because the two filter punches analyzed were from different samplers. Spatial variation, rather than sampler variability, is the likely explanation for this particular result, as two other sets of paired samplers do not show higher variability; the RPDs for these are about 8% and 13%, comparable to results for multiple punches from the same filter.
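For reference, the RPD and RSD quoted here are the standard definitions; a minimal Python sketch with made-up replicate values:

```python
import statistics

def rpd(a: float, b: float) -> float:
    """Relative percent difference (%) for a duplicate pair."""
    return abs(a - b) / ((a + b) / 2) * 100

def rsd(values: list[float]) -> float:
    """Relative standard deviation (%) for replicate analyses (n >= 3)."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Duplicate punches from one filter: EC results in ug/cm2 (hypothetical)
print(round(rpd(4.1, 4.4), 1))         # -> 7.1 (%)
# Triplicate analysis of one filter (hypothetical)
print(round(rsd([4.1, 4.4, 4.0]), 1))  # -> 5.0 (%)
```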
# I. BACKGROUND
As a result of several major disease outbreaks on cruise vessels, the Centers for Disease Control and Prevention (CDC) established the Vessel Sanitation Program (VSP) in 1975 as a cooperative activity with the cruise ship industry. This program assists the cruise ship industry in fulfilling its responsibility for developing and implementing comprehensive sanitation programs to protect the health of the traveling public. The VSP fosters cooperation between government and industry in order to define and reduce health risks associated with cruise ships and to ensure a healthful and clean environment for ships' passengers and crew. Every vessel that has a foreign itinerary and carries thirteen (13) or more passengers is subject to twice-yearly inspections and, when necessary, re-inspection by VSP personnel. VSP operations are supported entirely by user fees.
The VSP also provides construction plan reviews for "new buildings" and "major retrofits," and on-site construction reviews when VSP determines it is necessary. Construction reviews are normally conducted when a ship is near completion or when it first enters a U.S. port. As a public health agency, CDC places a high value on this service, especially as it relates to the prevention of illness aboard cruise ships. Shipbuilders pay the costs and expenses of VSP staff traveling to shipyards to conduct on-site construction reviews.
The primary objective of this document is to provide a framework for consistency in the sanitary design, construction, and construction inspections of cruise ships. CDC is committed to promoting the highest construction standards for public health related areas and believes compliance with these construction guidelines will help ensure a healthful environment on cruise ships. In developing this document CDC reviewed several standards, regulations, and criteria from a variety of sources for general guidance. These sources are listed under Acknowledgments part B.
New cruise ships must comply with all international code requirements (e.g., International Maritime Organization (IMO) Conventions, including the Safety of Life at Sea Convention (SOLAS), the International Convention for the Prevention of Pollution from Ships (MARPOL), the Tonnage and Load Line Convention, International Electric Code (IEC), International Standards Organization (ISO)). This document does not cross reference related, and sometimes overlapping, standards that new cruise ships must meet.
Construction guidelines are provided for various components of the ship's facilities that relate to public health, such as food service and water. CDC also believes that ship owners and operators should have the option of selecting the type of equipment that meets their individual needs. They should keep in mind, however, that the equipment chosen must be maintained over time to meet the VSP routine inspection criteria.
It is not CDC's intention to limit the introduction of new technology or new designs for shipbuilding. A shipbuilder, owner, manufacturer, or other interested party may request VSP to review a construction guideline based on new technologies, concepts and/or methods. VSP will review the request and respond in writing as to the functional merit of the proposed changes.
The CDC Recommended Shipbuilding Construction Guidelines for Passenger Vessels Destined to Call on U.S. Ports will apply to all new buildings (i.e., ships) in which the keel is laid after February 1, 1997. The construction guidelines will also apply to major retrofits planned after February 1, 1997. A major retrofit is defined as any change in the structural elements of the ship (e.g., galleys, pantries, dining rooms, water treatment systems, plumbing systems, waste management systems, pools, spas). These guidelines will not apply to minor retrofits. Minor retrofits are small changes like equipment replacement, installation or removal of single-use equipment (e.g., refrigerator units, bains-marie units), or single pipe runs.
CDC recognizes that the shipbuilding and cruise industries are constantly evolving and that these guidelines may require periodic revision. Our intent is to periodically ask ICCL and other knowledgeable parties to meet with us to review the guidelines and determine whether changes are necessary to keep up with the innovations in the industry.
# II. GENERAL DEFINITIONS
Accessible --Capable of being exposed for cleaning and inspection with the use of simple tools such as a screwdriver, pliers, or an open end wrench.
Air-break --A piping arrangement in which a drain from a fixture, appliance, or device discharges indirectly into another fixture, receptacle, or interceptor at a point below the flood-level rim (Figure 1).
Air gap --The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, plumbing fixture, or other device and the flood-level rim of the receptacle or receiving fixture. The air gap must be at least twice the diameter of the supply pipe or faucet (Figure 2).

# Figure 2
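For illustration only (a Python sketch, not part of the definition; the function name and units are ours), the air-gap criterion reduces to a single comparison:

```python
def air_gap_ok(gap_mm: float, supply_pipe_diameter_mm: float) -> bool:
    """Check the air-gap rule: the vertical distance between the supply
    outlet and the flood-level rim must be at least twice the diameter
    of the supply pipe or faucet."""
    return gap_mm >= 2 * supply_pipe_diameter_mm

# A 20 mm supply line needs at least a 40 mm air gap
print(air_gap_ok(50, 20))  # -> True
print(air_gap_ok(30, 20))  # -> False
```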
Backflow --The flow of water or other liquids, mixtures, or substances into the distribution pipes of a potable supply of water from any source or sources other than the potable water supply.
Back-siphonage is one form of backflow.
Backflow, check, or nonreturn valve --A mechanical device installed in a waste line to prevent the reversal of flow under conditions of back pressure. In the check-valve type, the flap should swing into a recess when the line is flowing full, to preclude obstructing the flow.
Backflow preventer --An approved backflowprevention plumbing device that must be used on potable water distribution lines where there is a direct connection or a potential connection between the potable water distribution system and other liquids, mixtures or substances from any source other than the potable water supply. Some devices are designed for use under continuous water pressure, while others are nonpressure types. To ensure proper protection of the water supply, a thorough review of the water system should be made to confirm that the appropriate device is selected for each specific application. The following lists general types and uses:
Atmospheric vacuum breaker --An approved backflow prevention plumbing device utilized on potable water lines where shut-off valves do not exist downstream from the device. The device is not approved for use when installed in a manner such that it will be under continuous water pressure. An atmospheric vacuum breaker must be installed at least 6 inches above the flood level rim of the fixture or container to which it is supplying water.
Hose bib connection vacuum breaker --An approved backflow prevention plumbing device that attaches directly to a hose bib via a threaded head. This device utilizes a single check valve and vacuum-breaker vent. It is not approved for use under continuous pressure, for example, when a shut-off valve is located downstream from the device.
Continuous pressure backflow preventer --An approved backflow prevention plumbing device that is designed and approved for use under continuous water pressure, for example, when shut-off valves or other restrictions such as filters are located downstream from the device.
Back-siphonage --The flowing back of used, contaminated, or polluted water from a plumbing fixture or vessel or other source into a water supply pipe as a result of negative pressure in the pipe.
Corrosion-resistant --Capable of maintaining original surface characteristics under prolonged influence of the use environment, including the expected food contact and the normal use of cleaning compounds and sanitizing (bactericidal) solutions.
Coved --Having a concave surface or molding that eliminates the usual angles of ninety degrees or less.
Cross-connection --Any physical connection between two otherwise separate piping systems that allows a flow from one system to the other. These cross-connections are particularly important when one of the piping systems carries potable water.
Easily cleanable --Readily accessible and fabricated with a material, finish, and design that allows for cleaning by normal methods.
Food contact surfaces --Surfaces of equipment and utensils with which food normally comes in contact, and surfaces from which food may drain, drip, or splash back onto surfaces normally in contact with food.
Food handling areas --Any area where food is stored, processed, prepared, transported, or served.
Food preparation areas --Any area where food is processed, cooked, or prepared for service.
Food service areas --Any area where food is presented to passengers or ship personnel.
Food storage areas --Any area where food or food products are stored.
Food transport areas --Any area through which unprepared or prepared food is transported during food preparation, storage and service operations.
Nonfood contact surfaces --All exposed surfaces, other than food contact or splash contact surfaces, of equipment located in food storage, preparation, and service areas.
Nonpotable fresh water --Fresh water intended for use in technical and other areas where potable water is not required, e.g., laundries, the engine room, toilets, waste treatment areas, and for washing decks in areas other than the ship's hospital, food service, preparation and storage areas.
Potable water (PW) --Fresh water intended for drinking, washing, bathing, or showering; for use in the ship's hospital; for handling, preparing, or cooking food; and for cleaning food storage and preparation areas, utensils, and equipment. Potable water must meet the International Standards for Drinking Water, especially the bacteriological, chemical, and physical requirements.
Potable water tanks --All tanks into which potable water is bunkered for distribution and used as potable water.
Readily accessible --Exposed or capable of being exposed for cleaning or inspection without the use of tools.
Readily (or easily) removable --Capable of being detached from the main unit without the use of tools.
Removable --Capable of being detached from the main unit with the use of simple tools such as a screwdriver, pliers, or an open end wrench.
Sealant --Material approved by the National Sanitation Foundation, the United States Department of Agriculture (USDA), or the Food and Drug Administration (FDA) for the filling in of seams 1/32-inch (0.8 mm) or less.
Sealed --Having no openings that will permit the entry of soil or seepage of liquids.
Sealed Seam --A seam having no openings that will permit the entry of soil or liquid seepage.
Seam --An open juncture between two similar or dissimilar materials. Continuously welded junctures, ground and polished smooth, are not considered seams.
Sewage --Any liquid waste containing animal or vegetable matter in suspension or solution, including liquids containing chemicals in solution.
Smooth --A surface, free of pits and inclusions, having a cleanability equal to a No. 3 finish (100 grit) on stainless steel.
Splash contact surfaces --Surfaces that are subject to routine splash, spillage, or other soiling during normal use.
Direct splash surfaces --Areas adjacent to food contact surfaces that are subject to splash, drainage, or drippage onto food contact surfaces.
Indirect splash surfaces --Areas adjacent to food contact surfaces that are subject to splash, drainage, drippage, condensation, or spillage from food preparation and storage.
# III. GENERAL FACILITIES REQUIREMENTS
# A. Sizing
Sizing and flow, whenever possible, are determined during the plan review process. The adequacy of size and the appropriateness of the flow are dependent on numerous factors, e.g., ship's total size, the number of passengers and crew, and the ship's itinerary. In general, food storage, preparation, and service areas; dish washing areas; and waste management areas must be of adequate size to accommodate the number of passengers being served, the type of menu, and the type of operations. Food storage areas (frozen, dry, and refrigerated) must be designed to meet maximum expected itineraries allowing for scheduled re-provisioning. Adequate cold and hot storage, including temporary storage, must be available for each type of service and for foods being transported to service areas remote from the galley.
# B. Flow
Functions and work stations must be arranged in a logical sequence that minimizes cross-traffic and backtracking and allows for adequate separation of clean and soiled operations. An orderly flow of food from the purveyor through the storage, processing, and preparation areas to the service areas, and finally to the waste management area, must be provided. The goal is smooth, rapid production and service, conducted in accordance with strict temperature control requirements and with minimum expenditure of worker time and energy and minimal food handling. Flow patterns are discussed during the plan review process.
# C. Equipment/Devices
The following equipment is required in galleys and lido food service areas:

i. Drinking fountains.

ii. Blast chillers are to be incorporated into the design of each crew and passenger galley. Two or more units may be required depending on the size of the vessel, their intended application, and the distances between the chillers and the storage and service areas.

iii. Food preparation sinks are to be located in as many areas as necessary, i.e., in all meat, fish, and vegetable preparation rooms, cold pantries or garde mangers, and in any other areas where washing or soaking of food is conducted. An automatic vegetable washing machine may replace food preparation sinks in vegetable preparation rooms.

iv. Storage cabinets, shelves, and/or racks are to be provided for food products, condiments, and equipment in preparation areas, beverage container storage rooms, and bar storage rooms.

v. Portable tables, carts, or pallets are needed in areas where food is dispensed from cooking equipment, such as soup kettles, steamers, braising pans, and tilting skillets. They are also needed for ice bins.

vi. Easily cleanable knife lockers that meet food contact standards are to be provided.

vii. Storage areas, cabinets, or shelves are to be provided for waiter trays.

viii. Dishware lowerators or similar dish storage and dispensing cabinets are to be provided.
Bakeries, pot wash stations, and other heavy use areas shall have a prewash station (including overhead spray) or a four-compartment sink with an insert pan and an overhead spray. In addition, the main pot washing station(s) should have an automatic washing machine designed to handle the largest piece of equipment for that area. Automatic washing machines can be substituted for three-compartment sinks with separate prewash stations provided they are sized to the equipment being washed and have a prewash area with a sink. A single-door pass through ware washing machine is preferable to an undercounter model.
All preparation areas shall have easy access to a three-compartment utensil washing sink or a ware washing machine equipped with a dump sink and a prewash hose.
Soup stations and other bulk cooking stations shall have portable or stationary stands for removal of cooked product and a storage location or rack for large items such as ladles, paddles, whisks, and spatulas.
Bulk milk and juice stations and other beverage dispensing equipment shall have readily removable drain pans. Coffee, water, and ice dispensing equipment may have built-in drains in the tabletop.
Storage areas must be provided for all equipment and utensils, such as ladles and cutting blades used in food preparation areas (e.g., vegetable preparation, bakery, and cold pantry areas).
All installed equipment requiring a drain must be designed so that food and wash water drainage flows into a container, floor drain scupper, or floor sink, rather than directly onto a deck.
Top openings and rims of food cold tops, bains-marie, ice wells, and other food and ice holding units must be protected by a raised rim of at least 3/16-inch (4.8 mm), flanged upward above the level at which liquids may accumulate.
# D. Equipment Surfaces
All food contact surfaces shall consist of materials that are appropriate for food contact and shall be: smooth; easily cleanable and maintainable; provided with coved corners; and preferably seamless. External corners and angles shall be formed with sufficient radii to permit proper drainage and exhibit no sharp edges. Seams sealed with approved sealant may be utilized in limited application when practical function and/or design requires. Questions as to the applicability of the limited use of sealed seams may be directed to the VSP.
Splash contact surfaces shall consist of materials that are appropriate for food contact and shall have smooth, easily cleanable surfaces exhibiting no sharp edges.
Nonfood contact surfaces shall be durable and non-corroding. Exposed surfaces shall be smooth and easily cleanable. Floor material shall be non-skid and non-absorbent.
In general, all food contact, splash contact and nonfood contact surfaces shall be smooth, durable, and noncorroding. Surfaces shall be designed to preclude unnecessary edges, projections or crevices and shall be readily accessible.
# E. Bulkheads, Deckheads, and Decks
Bulkhead and deckhead construction shall preclude the use of exposed fasteners. All seams between adjoining panels that are more than 1/32-inch (0.8 mm) wide shall be covered with profile strips.
All bulkheads shall be sufficiently reinforced to prevent panels from buckling or becoming detached under operating conditions.
Door penetrations shall be completely welded indentations and not open voids. Locking pins shall be inserted into inverted nipples. This also applies to the penetrations around fire doors, in the thresholds, and in bulkhead openings.
Coving of at least a 3/8-inch (9.5 mm) radius shall be provided where decks and bulkheads interface, and at the juncture between decks and equipment foundations. Stainless steel coving, if applied, shall be of sufficient thickness so as to be durable and shall be adequately secured.
# F. Floor Drains and Scuppers
Floor drains, scuppers, and sink covers shall be of stainless steel, or other approved material which meets the requirements of a smooth, easy to clean surface, strong enough to maintain its original shape, and exhibit no sharp edges. They should be tight fitting, removable for cleaning, and uniform in length (e.g., 3 feet or 1 meter) so they are interchangeable.
Floor drains, scuppers, and sinks shall be sized to eliminate spillage from overflow to adjacent deck surfaces, and they should be located in nontraffic spaces, such as in front of soup kettles, boilers, tilting pans, and braising pans.
Floor scupper channels shall be of stainless steel, with smooth finished surfaces, and be sized to preclude ponding and spillage.
Deck scupper drain lines should be a minimum of 2 ½-inches (6.4 cm) in diameter and sloped to the collecting drain. Cross-drain connections should be provided to preclude ponding and spillage from the gutter when the ship is listing.
Ramps over thresholds shall be easily removable or sealed in place, sloped for easy roll-in and roll-out of trolleys, and be strong enough to maintain their shape. Ramps over scupper covers can be constructed as an integral part of the gutter system provided they are cleanable and durable.
If deck drains are provided in walk-in refrigerators and freezers, they shall have air breaks or air gaps in the drain lines below the deck level in which the rooms are located.
# IV. GENERAL HYGIENE FACILITIES
# A. Hand-Washing Facilities
Ensure that hand washing facilities are constructed of stainless steel and provide hot and cold running water from a single mixing faucet.
Ensure that hand washing facilities include a suitable soap dispenser, a paper towel dispenser, a corrosion-resistant waste receptacle, and splash panels where necessary to protect adjoining equipment. It is recommended that a waste receptacle be attached to the bulkhead and be readily removable for cleaning.
Provide a hot and cold water supply complete with faucet and mixing valve below the hand washing sink for the filling of cleaning buckets.
Install hand washing sinks throughout the food service, preparation, and ware washing areas in such a manner that no food handler has to walk more than 25-feet (7.6 m) to reach a station.
Install a sufficient number of hand washing sinks at the soiled dish drop-off area in the main galley to allow adequate turn around time for individuals bringing soiled dishware back from the dining rooms or other food service areas.
Install foot pedals, knee pedals, elephant ears, or electronic sensors on hand washing facilities in food service areas.
Install permanent signs in English indicating that hand washing is required.
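As an illustrative aid for plan review (not a requirement of these guidelines), straight-line distances from workstations to the nearest sink can be screened against the 25-foot (7.6 m) criterion above; the coordinates below are hypothetical, and true walking paths around equipment will be longer, so this is only a lower-bound screen:

```python
import math

def max_walk_to_sink(workstations, sinks, limit_m=7.6):
    """Return (worst_distance_m, within_limit) for straight-line distances
    from each workstation to its nearest hand washing sink. Points are
    (x, y) deck positions in meters."""
    worst = max(
        min(math.dist(w, s) for s in sinks)
        for w in workstations
    )
    return worst, worst <= limit_m

stations = [(0, 0), (5, 3), (9, 1)]
sinks = [(2, 1), (8, 2)]
print(max_walk_to_sink(stations, sinks))  # -> (3.162..., True)
```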
# B. Toilet Facilities
Install toilet facilities in close proximity to the contiguous work area, including all galley and lido food service areas.
Ensure that the toilet room is well ventilated and equipped with a hand washing station. Install permanent signs in English indicating that hand washing is required.
Ensure that the door to the toilet room is self-closing.
Ensure that the decks are constructed of hard, durable tile and are coved to provide at least a 3/8-inch (9.5 mm) radius.
Ensure that deckheads and bulkheads are easily cleanable.
Provide toilet facilities and diaper changing stations with a covered waste receptacle inside all child care or child activity areas (where children are separated from their parent or guardian). These facilities are to be located in a way that does not require children or providers to exit the immediate care area.
# C. Child Care Facilities
Child care facilities and children's play areas shall be provided with child-size toilets and hand washing facilities that are easily accessible to children.
Separate toilet and hand washing facilities shall be provided for child care workers.
Hand washing sinks shall be accessible without barriers, such as doors, to each child care area.
If diaper changing facilities are provided, hand washing sinks should be provided adjacent to diaper changing tables.
Diaper changing tables shall be easily cleanable and constructed of nonabsorbent materials.
Durable, easily cleanable waste containers with tight fitting lids for disposing of soiled diapers should be provided for each diaper changing table.
Contamination of hands, toys, and equipment in child play areas appears to play a role in the transmission of diseases in child care settings. The provision of toys and equipment that are easy to clean and sanitize must therefore be considered.
# V. EQUIPMENT MOUNTING AND PLACEMENT
A.
Permanently installed equipment may be sealed to the bulkhead and/or to adjacent equipment. For permanently installed equipment that is not sealed to bulkheads and adjacent equipment, spacing shall be based on the following to allow accessibility for cleaning. These guidelines do not apply to open racks or other equipment of open design.
For single pieces of equipment less than 2-feet long (0.61 m), provide at least 6-inches (15.2 cm) of clear unobstructed space between adjacent equipment and between the equipment and bulkheads.
For pieces of equipment more than 2-feet long (0.61 m) but less than 4-feet long (1.22 m), provide at least 8-inches (20.3 cm) of clear unobstructed space between adjacent equipment and between the equipment and bulkheads.
For pieces of equipment more than 4-feet long (1.22 m) but less than 6-feet long (1.83 m), provide at least 12-inches (30.5 cm) of clear unobstructed space between adjacent equipment and between the equipment and bulkheads. This specification does not apply to open racks or open designs.
For pieces of equipment more than 6-feet long (1.83 m), provide at least 18-inches (46 cm) of clear unobstructed space between adjacent equipment and between the equipment and bulkheads.
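For plan review purposes, the spacing rules above reduce to a simple lookup. The sketch below (Python; the function name is ours, and boundary lengths falling exactly on a threshold are read conservatively toward the larger clearance) is one rendering:

```python
def required_clearance_cm(equipment_length_m: float) -> float:
    """Minimum clear unobstructed space (cm) between closed-design equipment
    and adjacent equipment/bulkheads, per the spacing rules above. Does not
    apply to open racks or other equipment of open design."""
    if equipment_length_m < 0.61:    # under 2 ft
        return 15.2                  # 6 in
    if equipment_length_m < 1.22:    # 2-4 ft
        return 20.3                  # 8 in
    if equipment_length_m < 1.83:    # 4-6 ft
        return 30.5                  # 12 in
    return 46.0                      # over 6 ft: 18 in

print(required_clearance_cm(1.5))  # -> 30.5 cm for a 1.5 m unit
```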
# B.
All equipment that is not classified as portable shall be fixed by continuous welding to stainless steel pads or plates on the deck. The stainless steel welding shall have smooth edges, rounded corners, and no gaps. Equipment may also be attached as an integral part of the deck surface by the use of glue, epoxy, or other durable, adhesive product provided the arrangement is smooth and easily cleanable. Equipment that locks in place shall be constructed to be free of gaps and crevices and be easily cleanable.
# D.
When mounting equipment on a foundation or coaming, ensure that the foundation/coaming is at least 4-inches (10.2 cm) above the finished deck. Use cement or a continuous weld to seal equipment to the foundation/coaming. Provide a sealed-type foundation/coaming for equipment not mounted on legs. Ensure that the overhang of the equipment from the foundation/coaming does not exceed 4-inches (10.2 cm). Completely seal any overhang of equipment along the bottom (Figure 3).
# Figure 3
E. Ensure that table-mounted equipment, unless easily movable, is either sealed to the tabletop or mounted on legs at least 4-inches (10.2 cm) above the tabletop.
# VI. FASTENERS AND REQUIREMENTS FOR SECURING EQUIPMENT
A.
The back splash attachment to the bulkhead must be continuous or tack-welded, polished, and sealed tight.
# B.
Use continuous welds for attaching all food contact surfaces, or connections from food contact surfaces, to adjacent splash zones to ensure a seamless coved corner. Use only continuous polished welds for food contact surfaces and splash zones adjacent to food contact surfaces.
For splash zone attachments to the bulkhead, decking, or other equipment, use a continuous or tack-weld, polished and sealed tight. All gaps shall be less than 1/32-inch (0.8 mm) prior to being sealed. If used, fasteners must be low profile, nonslotted, and noncorroding such that the resulting gap is less than 1/32-inch (0.8 mm). All bulkheads, deckheads, or decks receiving such attachments should be reinforced.
# C.
Do not leave gaps or seams or use exposed slotted screws, Phillips head screws, or pop rivets in food splash zones or on food contact surfaces.
# D.
For non-food contact surfaces of equipment, the gaps and seams must not exceed 1/32-inch (0.8 mm). Gaps less than 1/8-inch (3.2 mm) shall be sealed with an approved sealant.

For those surfaces exposed to extreme temperatures or for gaps greater than 1/8-inch (3.2 mm), use only stainless steel profile strips.
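The gap and seam rules in this paragraph amount to a small decision rule; one possible reading is sketched below (illustrative Python only; boundary cases are sent to the sturdier treatment):

```python
def gap_treatment(gap_mm: float, extreme_temperature: bool = False) -> str:
    """Suggest a treatment for a non-food-contact gap or seam, following
    one reading of the rules above."""
    if extreme_temperature:
        return "stainless steel profile strip"
    if gap_mm <= 0.8:    # within 1/32-inch: acceptable as a seam
        return "acceptable as-is"
    if gap_mm < 3.2:     # under 1/8-inch: approved sealant
        return "approved sealant"
    return "stainless steel profile strip"

print(gap_treatment(1.5))        # -> approved sealant
print(gap_treatment(1.5, True))  # -> stainless steel profile strip
```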
# E.
Ensure that pop rivets, Phillips head and slotted screws, and other fasteners used in non-food contact areas are constructed of corrosion-resistant materials.
# VII. LATCHES AND HINGES
Ensure that built-in equipment hinges and latches are durable, noncorroding, and capable of being easily cleaned.
# VIII. GASKETS
A.
Ensure that equipment gaskets for reach-in refrigerators, steamers, ice bins, and ice cream freezers are constructed of smooth, nonabsorbent, nonporous materials.
# B.
Close and seal gaskets at their ends and corners, and seal hollow sections.
# C.
Ensure that refrigerator gaskets are designed to be removable.
# D.
Ensure that fasteners used to install gaskets conform to the requirements specified in Section VI.
# IX. EQUIPMENT DRAIN LINES
A.
Construct a minimum 1-inch (2.5 cm) internal diameter drain line from cold top tables, bains-marie, ice cream scoop dipper wells, and food preparation sinks, so that the lines can either be cleaned in place with a long brush or be readily removable for cleaning.
# B.
Ensure that drain lines with angles, corners, or sections longer than 3-feet (0.9 m) are readily removable for cleaning.
C. Drain lines may be constructed of stainless steel or easily cleanable flexible or rigid materials. Air breaks are acceptable for equipment drain lines.
# D.
All installed equipment drain lines including condensate drain lines from refrigeration units must minimize the piping distance from the drain line outlet to the deck scupper drain.
When possible, drain lines should extend in a straight vertical line to a deck scupper drain. When this is not possible, the horizontal distance of the line should be kept to a minimum.
Drain lines which run horizontally under equipment mounted on legs shall not extend for a distance of greater than 12-inches (30.5 cm) and shall be positioned at least 4-inches (10.2 cm) above the deck.
# X. ELECTRICAL CONNECTIONS, PIPELINES, AND OTHER ATTACHED EQUIPMENT
A.
Ensure that the electrical connections and control panels on all equipment and on the bulkhead are watertight and drip proof (i.e., electrical enclosures located in catering spaces shall meet the International Electrical Code). Use stainless steel to encase electrical wiring from permanently installed equipment.
# B.
Do not install ozone or ultra-violet equipment in provisions rooms or food preparation areas unless such equipment is constructed of noncorroding stainless steel with fasteners meeting the requirements under Section VI.
# C.
Ensure that other bulkhead mounted equipment installations, such as phones, speakers, and cameras, are sealed-tight with the bulkhead panels and are not placed in areas exposed to moisture, food splash, or grease.
# D.
Tightly seal any areas where electrical lines, steam, or water pipelines penetrate the panels or tiles of the deck, bulkhead, or deckhead. Also, seal any openings between the electrical lines or the steam or water pipelines and the surrounding conduit or pipelines.
# E.
Encase steam and water pipelines to kettles and boilers in stainless steel cabinets, or position the pipelines behind bulkhead panels. A minimal amount of exposed pipeline is acceptable.
# XI. HOOD SYSTEMS
A.
Install hood systems and/or direct duct exhaust over ware washing equipment, including three compartment sinks in pot wash areas (does not apply to undercounter dishwashing machines).
For ware washing machines with direct duct exhaust, such exhaust should be directly connected to the hood exhaust trunk.
All exhaust hoods over ware washing equipment or three-compartment sinks should be designed with a minimum 6-inch (15.2 cm) overhang from the edge of the equipment so as to capture excess steam and vapors.
Ware washing machines with direct duct exhaust to the ventilation system shall have a clean-out port in each duct, located between the top of the ware washing machine and the hood system or deckhead.
The flat condensate drip pans located in the ducts from the ware washing machines shall be removable for cleaning.
# B.
Install hood systems above cooking equipment to ensure they adequately remove excess steam and grease laden vapors. For bains-marie or steam tables, excess heat and steam will be controlled by either hood systems or dedicated local ventilation extraction.
# C.
Select properly sized exhaust vents and locate them appropriately so as to capture heat and steam.
# D.
Where filters are used, ensure that they are easily removable.
# E.
Ensure that vents and duct work are accessible for cleaning. (Hood washing systems are recommended for removal of grease generated from cooking equipment.)
# F.
In constructing hood systems, use stainless steel with coved corners to provide at least a 3/8-inch (9.5 mm) radius. Use continuous welds or profile strips on adjoining pieces of stainless steel. A drainage system is not required for draining grease or manually applied cleaning solutions from hood assemblies. Drainage systems are required for hood assemblies utilizing automatic washdown systems.
# G.
Ventilation systems shall be in compliance with manufacturers' recommendations.
# XII. PROVISIONS ROOMS
# A. Bulkheads and Deckheads
Tight-fitting (i.e., seams less than 1/32-inch (0.8 mm)) stainless steel panels are required in walk-in refrigerators and freezers. Stainless steel panels are preferable for dry storage areas.
Painted steel may be used for provisions passageways and drystores areas.
# Figure 4
# B. Deck Covering
Either hard, durable, nonabsorbent tiles, durable epoxy decking, or corrugated (e.g., diamond plate) stainless steel deck panels are to be used in provisions rooms. All bulkheads and deck junctures shall be coved and sealed tight. If a forklift will be used in this area, the stainless steel panels should be sufficiently reinforced to prevent buckling. (Note: Corrugated stainless steel panels or painted steel must be used in all provisions passageways).
# C. Provision Evaporators, Drip Pans, and Drain Lines
Ensure that the evaporators located in the walk-in refrigerators, freezers, and dry stores are constructed with stainless steel panels that cover piping, wiring, coils, and other difficult-to-clean components.
Ensure that the drip pans are constructed of stainless steel, have coved corners, are sloped to drain, and are of sufficient strength to maintain slope.
Place non-metal spacers between the drip pan brackets and the interior edges of the pans.
Ensure that all fasteners comply with the guidelines in Section VI.
For freezer drip pans, provide a heater coil and attach it to a stainless steel insert panel or to the underside of the drip pan. The panel must be easily removable for cleaning of the drip pan. Ensure that heating coils provided for drain lines are installed inside the lines.
Ensure that drain lines from the evaporators are sloped and extend through the bulkheads or deck and drain to a deck scupper or that they drain through an accessible air gap or air break.
The thermometer probe shall be encased in stainless steel conduit.
# XIII. GALLEYS, FOOD PREPARATION ROOMS, AND PANTRIES
# A. Bulkheads and Deckheads
Construct bulkheads and deckheads with a high quality, noncorroding stainless steel.
Ensure that the gauge is thick enough so that the panels do not warp, flex, or separate under normal conditions. Seams must be less than 1/32-inch (0.8 mm). For seams larger than 1/32-inch (0.8 mm) but smaller than 1/8-inch (3.2 mm), use an approved sealant. For gaps greater than 1/8-inch (3.2 mm), use only stainless steel profile strips.
All bulkheads to which equipment is attached shall be of sufficient thickness and/or reinforcement to allow for the reception of fasteners or welding without compromising the quality and construction of the panels.
Utility line connections should be through a stainless steel conduit that is mounted away from bulkheads for ease in cleaning.
Back splash attachments to the bulkhead must be continuous or tack welded, polished, and made watertight with an approved (e.g., NSF, FDA, USDA) sealant.
# B. Deck Covering
Use hard, durable, nonabsorbent, non-skid tiles or a durable epoxy material in galleys, food preparation rooms, and all pantries. Stainless steel panels, which are continuously welded and are proven non-skid may also be used. All bulkheads and deck junctures shall be coved and sealed tight.
Seal all deck tiling with a durable, water-tight grouting material. Seal stainless steel panels with a continuous weld.
In technical spaces between undercounter cabinets or refrigerators, the deck covering may be tile, non-skid stainless steel, or other hard, durable, and easily cleanable surfaces.
# XIV. BUFFET LINES, BAR SERVICE AREAS, WAITER STATIONS, AND OTHER FOOD SERVICE AREAS
# A. Bulkheads and Deckheads
Bulkheads and deckheads may be constructed of decorative tiles, pressed metal panels, or other hard, durable, noncorroding materials. Stainless steel is not required in these areas; however, the materials used shall be easily cleanable.
# B. Deck Covering
Ensure that buffet lines located in crew and officers' mess rooms have a hard, durable, nonabsorbent deck covering that is at least 3-feet (0.9 m) in width measured from the edge of the service counter or from the outside edge of the tray rail, if such a rail is present.
Ensure that the dining room service stations have a hard, durable, nonabsorbent deck covering (e.g., sealed granite, marble) at least 2 feet (0.6m) from the edge of the working sides of the service station.
Ensure that the deck surfaces behind the bar service counters and under equipment are constructed of hard, durable, nonabsorbent tiles, epoxy, or stainless steel.
Durable linoleum tile or durable vinyl deck covering materials may be used only in crew and officer dining areas, except as specified in Section XIII. B.1.
All bulkheads and deck junctures shall be coved and sealed tight.
# C. Sneeze Guards/Sneeze Shields
Sneeze guards shall be provided in all areas from which food will be displayed for consumer self-service.
Sneeze guards may be built-in, permanent, and integral parts of display tables, bains-marie, or cold top tables.
The sneeze guard panels must be durable plastic or glass that is smooth and easily cleanable. Sections in manageable lengths must be removable for cleaning.
# Figure 5
The sneeze guards shall be positioned in such a way that the shielding panels intercept the line between the consumer's mouth and the displayed foods (Figure 5). This should take into account such factors as the height of the display table, the presence or absence of a tray rail, and the distance between the edge of the display table and the actual placement of the food (e.g., the bains-marie well).
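In a two-dimensional side view, the interception requirement can be checked geometrically; the sketch below (Python, with hypothetical mouth, food, and panel positions) illustrates the idea, not an official sizing method:

```python
def panel_intercepts(mouth, food, panel_x, panel_bottom, panel_top):
    """Side-view check that a sneeze guard panel intercepts the sight line
    from a consumer's mouth to the displayed food. `mouth` and `food` are
    (horizontal, height) points in meters; the panel is a vertical span
    at horizontal position `panel_x`. All geometry is illustrative."""
    (x1, z1), (x2, z2) = mouth, food
    if not min(x1, x2) < panel_x < max(x1, x2):
        return False  # panel is not between consumer and food
    t = (panel_x - x1) / (x2 - x1)
    z_at_panel = z1 + t * (z2 - z1)  # sight-line height at the panel
    return panel_bottom <= z_at_panel <= panel_top

# Mouth 1.5 m high at the tray rail; food 0.9 m high, 0.6 m in from the
# rail; panel mounted 0.25 m in, spanning 1.10-1.40 m above the deck.
print(panel_intercepts((0.0, 1.5), (0.6, 0.9), 0.25, 1.10, 1.40))  # -> True
```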
# XV. BARS
A.
Install a stainless steel, vented, double-check valve backflow prevention device in all bars that have carbonation systems (e.g., multiflow beverage dispensing systems). Install the device before the carbonator and downstream from any copper in the potable water supply line.
# B.
Encase supply lines to the dispensing guns for the carbonating system in a single tube. If the tube penetrates through any bulkhead or counter top, grommet the intersection.
# C.
Bulk dispensers of soft drinks should be designed and located so as to minimize the length of the dispensing lines carrying the liquid mixes. The systems shall provide a means for flushing the interior of the lines carrying liquids.
# D.
Bar construction shall follow the guidelines relating to equipment and to food and splash contact surfaces noted in Sections III and V.
# XVI. WARE WASHING
A.
For pre-washing, provide a rinse hose(s). Provide a garbage grinder or pulper system with adequate table space for all food preparation areas. Grinders are optional in pantries and bars. If a sink is to be used for pre-rinsing, provide a removable strainer. (Note: Shock absorbing materials may be used for the installation of pulpers and grinders to protect against vibration damage and to dampen noise.)
# B.
For soiled dish tables with pulper systems, ensure that the trough extends the full length of the table and that the trough slopes toward the pulper.
# C.
To prevent water from pooling, equip clean landing tables with across-the-counter scuppers with drains at the exit from the machine, sloped to the scupper. Install a second scupper and drain line if the length of the table is such that the scupper at the exit from the machine will not effectively eliminate all pooling. (The length of drain lines shall be minimized; when possible, they shall be straight vertical lines with no angles.)

# XVII. LIGHTING

A.

Provide adequate lighting in all food service, preparation, storage, and ware washing areas at all times.

In bars and dining room waiters' stations designed for lowered lighting during normal operations, 20 foot candles (220 lux) must be provided during cleaning operations.
# B.
Ensure that light fixtures are installed tightly against the bulkhead and deckhead panels or in a manner that allows easy cleaning around the fixtures.
# C.
Ensure that each light shield is shatterproof, easily removable, and completely covers and encloses the entire bulb.
# D.
For effective illumination, it is recommended that the deckhead mounted light fixtures be placed above the work surfaces and positioned in an "L" pattern rather than an in-line arrangement.
# E.
Heat lamps do not have to meet the shielding requirements; however, the bulbs should be shatterproof or recessed within the outer shell of the lamp.
# F.
Deckhead mounted lights for bars and the lido buffet areas may be recessed within the deckhead panels without being shielded.
# G.
Ensure that light bulbs, including fluorescent lights, installed in or near bar counters are effectively shielded.
# XVIII.WASTE MANAGEMENT
A.
# Food and Garbage Lifts
Ensure that the interiors of food and garbage lifts are constructed of stainless steel and meet the same facility requirements as other food service areas.
Ensure that the decks are constructed of a durable, non-absorbent, non-corroding material (e.g., stainless steel, diamond plate aluminum) and coved to at least a 3/8-inch (9.5 mm) radius all along the sides.
Position bulkhead mounted vents in the upper third of the panels.
Install a floor drain at the bottom of the lift shaft. Avoid open channels in the shaft.
Ensure that the interiors of dumbwaiters are constructed of stainless steel and meet the same facility requirements as other food service areas. Ensure that the bottom of the dumbwaiter is a stainless steel panel coved to provide a 3/8-inch (9.5 mm) radius.
Ensure that electrical panel controls are watertight (i.e., refer to IEC IP-44).
Ensure that lighting fixtures are recessed or fitted with stainless steel guards to prevent breakage.
Trash or garbage chutes for transfer of waste material to storage or processing areas are prohibited.
# B.
Trolley and Waste Container Wash Rooms
Construct decks, bulkheads, and deckheads according to facility requirements for food service areas. Provide a bulkhead mounted pressure washing system with a deck basin and drain connected directly to the waste system. (An enclosed automatic washing machine may be used in place of the pressure washer and deck basin.)
Provide hand washing facilities.
Provide adequate ventilation and extraction of steam and heat.
Provide cleaning lockers. If wet storage of brooms, mops, or other equipment is intended for the cleaning lockers, then the lockers shall be vented.
Facilities such as a deep utility sink provided with hot and cold water or a pressure washing system with a deck basin and drain shall be provided for cleaning of maintenance equipment such as brooms and mops. Wall mounted racks or hooks shall be provided for hanging the equipment for drying. A room(s) designated for this purpose shall be provided separate from food preparation and ware washing areas.
# C.
Garbage Holding Facilities
Construct a garbage and refuse storage or holding room of adequate size to hold unprocessed waste for the longest expected period between points of disposal. The refuse storage room shall be physically separated from all food preparation and storage areas.
Ensure that the storage room is well ventilated, is temperature and humidity controlled, and contains a sealed, refrigerated space for storage of wet garbage.
Provide hand washing facilities and a potable hot and cold water tap for a hose connection.
Provide deck drainage to prevent pooling of any water.
Ensure that deckheads, bulkheads, and decks (other than the refrigerated spaces) are easily cleanable, with a berm/coaming provided around all waste-processing equipment, and that they have proper deck drainage.
# D.
Garbage Processing Areas
Ensure that the garbage processing area is of adequate size for the operation and has a sufficient number of tables for sorting.
Ensure that the sorting tables are constructed of stainless steel and have rounded edges. Deck coaming, if provided, should be at least 3-inches (7.6 cm) high and coved. If the tables are provided with drains, direct the table drains to a deck drain and provide the deck drain with a strainer.
Ensure that the processing area includes hand washing facilities, a potable hot and cold water tap for a hose connection, and an adequate number of deck drains.
Provide a cleaning materials storage locker.
Ensure that all decks and bulkheads are easily cleanable. Deck drains shall be provided.
Ensure adequate lighting of at least 20 foot candles (220 lux) is provided.
A sink equipped with a pressure washer or an automatic washing machine shall be provided for the washing of equipment, storage containers, and garbage cans.
Black and grey water lines that are above or that penetrate into the decks containing galleys or other food preparation or storage areas must not have any mechanical couplings. Press-fitted piping is not acceptable for these areas.
Black and grey waste drain systems from cabins, catering areas, and public spaces shall be designed to prevent the back-up of waste and the emission of odors or gases into these areas.
Sewage holding tanks shall be vented independent of all other tanks, to the outside of the vessel and away from any air intakes.
# XIX. WATER SYSTEM
# A. Bunker Stations
Ensure that the filling line is positioned at least 18-inches (46 cm) above the deck and is painted blue.
Ensure that the filling line has a screw cap fastened by a noncorroding chain so it does not touch the deck when hanging.
Ensure that the screw connections for the hose attachments are unique to only fit potable water hoses.
Label the filling line "POTABLE WATER FILLING" in letters at least ½-inch (12.7 mm) high, stamped, stenciled, or painted on the bulkhead in the area of the bunker line.
ii. Ensure that the coating of the tanks is approved for use in potable water tanks, that specifications are provided to VSP, and that all manufacturers' recommendations for application and drying or curing are followed.

iii. Coat all items that penetrate the tank (e.g., bolts, pipes, pipe flanges) with the same product as the tank interior.

iv. Ensure that the system is designed to be superchlorinated one tank at a time through the filling line.

v. Ensure that lines for nonpotable liquids do not pass through potable water tanks. Minimize the use of nonpotable lines above potable water tanks. All lines above tanks shall not have any mechanical couplings. If coaming along the edges of the tank is present, provide slots along the top of the tank to allow leaking liquid to run off and be detected.

vi. Re-treat welded pipes to make them corrosion resistant.

vii. Treat all potable water lines inside potable water tanks so as to make them jointless and corrosion resistant.

viii. Identify each tank with a number and the words "POTABLE WATER" in letters ½-inch (12.7 mm) high.

ix. Install sample cocks above the deck plating on each tank.

x. Ensure that sample cocks point downward and are identified and numbered.
# Storage Tank Manholes
Install manholes on the sides of potable water tanks.
# Storage Tank Water Level
Provide an automated method for determining the water level for potable water tanks.
# Storage Tank Vents
i. Ensure that air relief vents end well above the water line outside the ship. The cross-sectional area of the vent must be equal to or greater than that of the filling line to the tank. The vent shall terminate with the open end pointing down and shall be screened with 16-mesh corrosion-resistant screen.

ii. A single pipe may be used as a combination vent and overflow.

iii. Do not connect the vent of a potable water tank to the vent of a nonpotable water tank.
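Because cross-sectional area scales with the square of the diameter, vent sizing can be screened as follows (an illustrative Python sketch; the dimensions shown are hypothetical):

```python
import math

def vent_area_ok(vent_diameter_cm: float, filling_line_diameter_cm: float) -> bool:
    """Check that a tank vent's cross-sectional area is at least equal to
    that of the filling line (item i above)."""
    vent_area = math.pi * (vent_diameter_cm / 2) ** 2
    fill_area = math.pi * (filling_line_diameter_cm / 2) ** 2
    return vent_area >= fill_area

# A 7 cm vent on a tank filled through an 8 cm line is undersized
print(vent_area_ok(7, 8))   # -> False
print(vent_area_ok(10, 8))  # -> True
```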
# Storage Tank Drains

i. Design tanks to drain completely.

ii. Ensure that the drain opening is at least 4-inches (10.2 cm) in diameter, ideally the same diameter as that of the inlet pipe.

iii. When drainage is by suction pump, the liquid shall drain from a sump. Use separate pumps to drain tanks. In addition, locate the drain in the pump discharge line ahead of any branch take-off to the distribution system. Install a valve on the main immediately beyond the drain line take-off (Figure 6).
Ensure that the potable water pumps have adequate capacity for service demands and are not used for any other purpose.
Ensure that pumps are automatic priming, not manual. Use a direct connection, not an airgap, when supplying water to a potable water pump.
Ensure that pumps and distribution lines are large enough so that pressure will be maintained at all times and at levels adequate to operate all equipment.
Provide a pressure-type backflow prevention device in the potable water line prior to the automatic ejectors.
# K. Evaporators/Distillation
Locate the seawater inlet line forward from the overboard discharge pipes.
Use only direct connections to the potable water system. Swing lines are not allowed.
Provide an air gap or RPZ backflow preventer between the potable water system and the nonpotable water system.
Post manufacturer's instructions near the evaporator or distillation plant.
Ensure that high and low pressure units connected directly to the potable water lines have the ability to go to the waste system if the distillate is not fit for use.
Ensure that units have a low range salinity indicator, an operation temperature indicator, an automatic discharge to waste, and an alarm with trip setting.
Ensure that the high saline discharge goes to bilge through an airgap or goes overboard through an RPZ backflow preventer.
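The salinity requirements above amount to a simple interlock: if the distillate's salinity exceeds the alarm trip setting, the unit must alarm and divert automatically to waste. A minimal sketch of that logic (the function name and the 2.0-ppm trip value are illustrative assumptions, not VSP figures):

```python
def route_distillate(salinity_ppm: float, trip_setpoint_ppm: float = 2.0) -> str:
    """Divert distillate to waste and alarm when salinity exceeds the trip setting."""
    if salinity_ppm > trip_setpoint_ppm:
        # Automatic discharge to waste plus an alarm, per the requirements above.
        return "waste (alarm raised)"
    return "potable system"

print(route_distillate(0.5))  # potable system
print(route_distillate(4.8))  # waste (alarm raised)
```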
# L. Halogenation
# Bunkering and Production

i. Provide labeled potable water taps with appropriate backflow preventers at each halogen supply tank.
ii. Provide a labeled sample cock at least 10 feet (3 m) after the halogen injection point.
iii. Ensure that halogen injection is flow meter or analyzer controlled.
iv. Ensure that pH adjustment equipment is provided for bunkering. The analyzer, controller (e.g., proportional integral derivative), and dosing pump should be balanced to accommodate changes in flow rates.
# Distribution

i. Provide a completely automatic halogenation system that is controlled by an analyzer (a minimal control sketch follows this list).
ii. Ensure that the halogenation probe measures free or active halogen and is linked to the analyzer, controller, dosing pump, and flow meter (if a flow meter is used in conjunction with the analyzer).
iii. Provide a backup system with automatic switch-over.
iv. Ensure that an analyzer and recorder are located at a distant point in the system. The analyzer shall measure and indicate free halogen.
v. Provide an audible alarm in the engine control room to indicate low halogen residual readings at the distant-point analyzer.
vi. Provide labeled potable water taps with appropriate backflow preventers at halogen injection points.
vii. Locate a labeled sample cock at least 10 feet (3 m) after the halogen injection point.
viii. Ensure that chart recorders are circular, have a minimum of 24-hour recording capacity, and record halogen content over a range of at least 0-5 ppm.
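As a rough illustration of the control chain above (analyzer -> controller -> dosing pump, with a low-residual alarm at the distant point), here is a minimal proportional-dosing sketch. All names, the 0.5-ppm setpoint, the gain, and the 0.2-ppm alarm threshold are invented for illustration; a real installation would use a tuned PID controller, as noted in the bunkering section.

```python
def halogen_dose_rate(free_halogen_ppm: float, flow_m3_per_h: float,
                      setpoint_ppm: float = 0.5, gain: float = 2.0) -> float:
    """Proportional dosing: feed rate scales with residual error and flow."""
    error = max(setpoint_ppm - free_halogen_ppm, 0.0)
    return gain * error * flow_m3_per_h  # arbitrary dosing units

def check_distant_point(free_halogen_ppm: float, low_alarm_ppm: float = 0.2) -> None:
    """Audible alarm in the engine control room on low residual at the distant point."""
    if free_halogen_ppm < low_alarm_ppm:
        print("ALARM: low halogen residual at distant-point analyzer")

print(halogen_dose_rate(0.3, 50.0))  # 20.0 dosing units
check_distant_point(0.1)             # triggers the alarm message
```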
# XX. BACKFLOW PREVENTION
# A.
All nonpotable connections to the potable water system must use appropriate backflow prevention (e.g., airgaps, reduced pressure principle backflow preventers, pressure vacuum breakers, atmospheric vacuum breakers, pressure-type backflow preventers, or double-check valves with intermediate atmospheric vents).
# B.
Ensure that airgaps (i.e., the most reliable method of backflow protection) are twice the diameter of the supply pipe.
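A minimal sketch of that sizing rule (a hypothetical helper, not part of any VSP tooling):

```python
def minimum_air_gap_mm(supply_pipe_diameter_mm: float) -> float:
    """Return the minimum air gap: twice the supply pipe diameter."""
    return 2.0 * supply_pipe_diameter_mm

# Example: a 50-mm supply line requires an air gap of at least 100 mm
# between the outlet and the flood level rim of the receiving fixture.
print(minimum_air_gap_mm(50.0))  # 100.0
```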
# C.
In high hazard situations where airgaps are impractical or cannot be installed, use an RPZ backflow preventer.
# D.
If RPZ backflow preventers are used, provide a test kit for testing the device annually, and keep records of such tests.
# E.
Use air gaps or mechanical backflow prevention devices when water must be supplied under pressure.
# F.
Install atmospheric vacuum breakers 6 inches (15.2 cm) above the fixture flood level rim, with no valves downstream from the device.
# G.
Pressure-type backflow preventers (e.g., carbonator backflow preventer) or double-check valves with intermediate atmospheric vents prevent both back-siphonage and backflow caused by back pressure, and must be used in continuous pressure-type applications.
# H.
Ensure that the following connections to the potable water system are protected against backflow or back-siphonage by mechanical backflow prevention devices or air gaps:
The connection between potable water tanks and nonpotable water tanks.
The connection between evaporators and any nonpotable water system.
The potable water supply to the boiler or boiler feed tanks.
The potable water supply to priming pumps used for non-potable applications.
The potable water supply to the lube and fuel oil separators.
The potable water supply to beverage system carbonators.
Flexible shower hoses in cabin showers, if the hoses can be submerged.
The connection between potable water and air conditioning supply or expansion tanks.
The potable water supply to the beauty salon rinse hoses.
The potable water supply line to photo developing equipment and on all potable water
# K.
Ensure that three-compartment ware washing sinks are of the correct size for their intended use and that they are large enough to submerge the largest piece of equipment. Ensure that the sinks have coved, continuously welded, internal corners that are integral to the interior surfaces.
# L.
Install either: a) an across-the-counter scupper with a drain dividing the wash compartment from the rinse compartment; b) a splash shield at least 4 inches (10.2 cm) above the flood level rim of the sink between the wash and rinse compartments; or c) an overflow drain in the wash compartment 4 inches (10.2 cm) below the flood level.
# M.
Equip hot water sanitizing sinks with thermometers, a long-handled stainless steel wire basket, and a jacketed or coiled steam supply with a temperature control valve to control water temperature so as to prevent condensation on the deckhead.
Provide three-compartment ware washing sinks with a separate pre-wash station for the main galley and crew galley pot washing areas.
For meat, fish, and vegetable preparation areas, provide at least one three-compartment sink or an automatic dishwashing machine(s) with a pre-wash station.
Provide ware washing facilities accessible to all food preparation areas, such as the bakery, lido, and pantries.
# N.
Construct overhead racks according to an open tubular design (unless the racks are of solid panel construction, in which case they should drain at each end to the landing table below).
# O.
Provide for adequate ventilation to prevent condensation on the deckhead or adjacent bulkheads.
# XVII. LIGHTING
# A.
Ensure that a minimum of 20 foot candles (220 lux) is available at counter level in all food preparation and ware washing areas. For equipment storage and food storage areas, including provisions, galley walk-in refrigerators and freezers, garbage and food lifts, and garbage and toilet rooms, 20 foot candles (220 lux) of lighting shall be provided at a distance of 30 inches (76.2 cm) above the deck.
Lighting levels of 20 foot candles (220 lux) in provision rooms are based on measurements taken while the rooms are empty. Lighting levels of at least 10 foot candles must be maintained
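For reference, 1 foot candle equals about 10.764 lux, so the 20-foot-candle requirement corresponds to roughly 215 lux, which this document rounds to 220 lux. A quick conversion sketch:

```python
LUX_PER_FOOT_CANDLE = 10.764  # 1 fc = 1 lumen/ft^2, approximately 10.764 lux

def foot_candles_to_lux(fc: float) -> float:
    """Convert an illuminance value from foot candles to lux."""
    return fc * LUX_PER_FOOT_CANDLE

print(foot_candles_to_lux(20))  # ~215.3 lux, rounded to 220 lux in the guideline
print(foot_candles_to_lux(10))  # ~107.6 lux
```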
Filters may be used in the bunkering line prior to halogenation. Filters must be accessible for inspection and removable for cleaning.
# B. Filling Hoses
Provide special hoses that are durable and have smooth, impervious linings, caps on each end, and unique fittings for potable water.
Provide at least two 50-foot (15.2-m) hoses per bunker station.
Ensure that each hose dedicated to potable water filling is properly labeled or tagged so that it is not used for any other purpose.
# C. Filling Hose Storage
Provide storage space for at least four 50-foot (15.2-m) potable water bunker hoses per bunker station.
# D. Fire and Technical Connections
Install a reduced pressure zone (RPZ) backflow prevention device where hoses from shore will be connected.
# E. Storage Capacity for Potable Water
1. Provide a minimum of 2 days' storage capacity.
# F. Potable Water Storage Tanks
General Requirements:
i. Ensure that the tanks are independent of the shell of the ship. Skin or double-bottom tanks are not allowed for potable water storage. Provide an 18-inch (46-cm) cofferdam above and between other tanks and also between the tanks and the hull.
# G. Suction Lines
Locate suction lines at least 6 inches (15.2 cm) from the tank bottom or sump bottom.
# H. Distribution System
Locate distribution lines at least 18 inches (46 cm) above the bilge plating or the normal bilge water level.
Ensure that the distribution lines are not cross-connected with the piping of any nonpotable water systems.
Ensure that no lead or cadmium pipes or fittings are used.
Ensure that only potable water taps are installed in the galleys, the hospital, and the cabin showers and sinks.
Paint potable water piping and fittings blue or stripe them with a blue band at 15-foot (4.6-m) intervals and on each side of partitions, decks, and bulkheads.
Ensure that technical steam to be applied indirectly to food, utensils, and equipment is made from potable water and provided through coils, tubes, or separate chambers.
Steam applied directly to food and food contact surfaces shall be produced from potable water and generated locally by food service equipment designed for this purpose (e.g., vegetable steamers and combi-ovens).
Ensure that an air gap or approved backflow prevention device is present if water is supplied to a bilge, waste, ballast, or laundry tank.
# I. Potable-Water Pressure Tanks
Ensure that potable water hydrophore tanks are not cross-connected to nonpotable water tanks through the main air compressor.
Provide a filtered air supply from a nonpermanent, quick-disconnect, or independent compressor. The compressor must not emit oil into the final air product.
# J. Pumps
# Q.
Ensure that all filter accessories, such as pressure gauges, air-relief valves, and rate-of-flow indicators, are provided.
# R.
Ensure that pool overflows are either directed by gravity to the make-up tank for recirculation through the filter system or disposed of as waste.
# S.
The make-up tank may be used to replace water lost by splashing and evaporation. If the tank is supplied with potable water, ensure that the supply enters through an airgap or backflow preventer. An overflow line at least twice the diameter of the supply line and located below the tank supply line may be used.
# T.
Provide automatic dosing of chemicals for disinfection and pH adjustment.
# U.
Provide easy access to the sand filters so that they can be monitored weekly and changed frequently.
V.
Ensure that drains are installed so as to allow for rapid drainage of the entire pump and filter system, and that a minimum 3-inch (7.6-cm) drain is installed on the lowest point of the system.
# W.
Ensure that disinfection is accomplished by chlorination or bromination and is controlled by an analyzer.
# X.
Ensure that pH adjustment is accomplished by using appropriate acids and bases and that a buffering agent (alkalinity) is used to stabilize the pH. Injection must be controlled by an analyzer.
# Y.
Ensure that the recirculation/halogenation room is accessible and well ventilated, and provide a potable water tap in this room.
# Z.
Mark all piping with directional-flow arrows.
1A. Clearly post a flow diagram and operational instructions.
1B. Ensure that the system is designed for easy and safe storage of chemicals and refilling of chemical feed tanks.
# 1C.
Sample points shall be provided on the system for the testing of halogen levels and routine calibration of the analyzer.
# 1D.
Wading pools may be part of the main swimming pool recirculation system.
# XXIII. MISCELLANEOUS
# A. Facilities and Lockers for Cleaning Materials
Provide stainless steel cleaning lockers with coved floor and wall junctures for the storage of buckets, detergents, sanitizers, cloths, and sponges.
Provide wall-mounted racks on which to hang brooms and mops or provide sufficient space and hanging brackets within a cleaning locker. Wall-mounted units may be located only in nonfood preparation or service areas.
The number, location, and size of lockers are determined during the plan review process.
# B. Filters
Chlorine-removal filters may be used only on coffee machines, juice machines, ice machines, and soda dispensing machines.
# C. Drinking Fountains
Fan rooms shall be maintained free of accumulations of moisture. Condensate drainage from air chiller units shall be through closed piping to prevent pooling of wastewater on the decks.
Fan rooms shall be located so that any ventilation or processed exhaust air may not be drawn back into the ship spaces.
All food preparation, ware washing, and toilet rooms shall have a sufficient air supply.
All cabin air diffusers shall be designed for easy removal and allow for easy access for cleaning.
All air supply trunks shall have access panels to allow for periodic inspection and cleaning.
A single independent air supply system shall be provided for the engine room and other mechanical compartments, such as fuel separation or purifying rooms, which are located in and around the engine room.
# B.
Air Exhaust Systems
# INTRODUCTION
The HL7 Batch Protocol may be employed by any HL7 messaging implementation to make messaging with the Public Health Information Network Messaging System (PHINMS) more efficient. There are no restrictions on the types of messages sent in a particular batch.
Due to size limitations on PHINMS, a single batch file should not exceed 10 MB.
The HL7 file header and trailer and the batch header and trailer segments are defined in exactly the same manner as the HL7 message segments; hence, the same HL7 message construction rules used for individual messages can be used to encode and decode HL7 batch files.
Implementation details such as the use of acknowledgments and expected payloads must be negotiated between the sender and receiver systems.
# AUDIENCE
This document is not intended as a tutorial for either HL7 or interfacing in general. The reader is expected to have a basic understanding of interface concepts and HL7.
This specification is designed for use by messaging analysts and technical implementers working to send or receive a specific PHIN notification. It must be used with the companion Message Mapping Guide to populate the specified structure with the content for the condition being passed.
# CONTACTS
# PHIN Help Desk National Center for Public Health Informatics
Phone: 1-800-532-9929 Email: [email protected]
The following message structure description portrays the HL7 batch file structure, constrained for use as a Case Notification container. The static definition is based on a message structure defined in the HL7 Standard. It is in compliance with HL7 messaging profiles and may also define additional constraints on the standard HL7 message. In the abstract message syntax, square brackets [ ] mark an element as optional and curly braces { } mark it as repeating. Note that for Segment Groups there will not be a segment code present, but the square and curly braces will still be present.
# ABSTRACT MESSAGE ATTRIBUTES
# Name
Name of the Segment or Segment Group element.
# Usage
Use of the segment for this guide. Indicates if the segment is required, optional, or conditional in a message. Legal values are:

R - Required. Must always be populated. Conformant sending applications shall be capable of sending this message element, and the message element must always be populated with non-empty values. Conformant receiving applications shall not reject a message containing this message element. Conformant receivers may reject the message because this message element is not present or empty. The receiver may process or ignore this message element.

RE - Required, but can be empty. Conformant sending applications shall be capable of sending this message element, although the message element may be empty or not present in a message instance. Conformant sending applications should send this message element when they have data available to send. For example, an application that has data for a particular patient for this message element stored in its data store, but does not send the data in the message, would be non-conformant. Conformant receiving applications shall not reject a message containing or missing this message element. The receiver may process or ignore this message element.

O - Optional. Use of optional message elements must be negotiated between the sender and receiver.

C - Conditional. Must be populated based on a computable conditionality statement. If the conditionality statement is true, the message element is required; otherwise the message element is optional.

[m..n] - Element must appear at least m, and at most, n times.
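Read operationally, these usage codes map to simple receiving-side checks. The sketch below is a simplified illustration only (invented function names; real conformance checking must follow the full HL7 rules and any sender-receiver agreements):

```python
def element_conforms(usage: str, value: str | None, condition_true: bool = False) -> bool:
    """Minimal reading of the usage codes above for a single message element."""
    if usage == "R":    # must be present and non-empty
        return bool(value)
    if usage == "RE":   # may be empty or absent
        return True
    if usage == "O":    # negotiated between sender and receiver
        return True
    if usage == "C":    # required only when the conditionality statement holds
        return bool(value) if condition_true else True
    raise ValueError(f"unknown usage code: {usage}")

def cardinality_ok(occurrences: int, m: int, n: int) -> bool:
    """[m..n]: element must appear at least m and at most n times."""
    return m <= occurrences <= n

print(element_conforms("R", "some value"))  # True
print(cardinality_ok(2, 1, 3))              # True
```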
# Section
Indicator of the part of this guide that describes the segment.
# Description
A short description of the use of the segment.
Note: In the tables throughout this document, items in yellow are not supported by the PHIN Standard.
# ABSTRACT MESSAGE SYNTAX
# HL7 BATCH FILE SEGMENTS
# FHS -FILE HEADER SEGMENT
This segment is used as the lead-in to a file (group of batches).
# BHS -BATCH HEADER SEGMENT
The BHS segment is used to head a group of messages that comprise a batch.
# BTS -BATCH TRAILER SEGMENT
The BTS segment defines the end of a batch of messages.
# FTS -FILE TRAILER SEGMENT
The FTS segment defines the end of a file (group of batches).
Recent immigrants and refugees constitute a substantial proportion of malaria cases in the United States, accounting for nearly one in 10 imported malaria cases involving persons with known resident status in 2006 (1). This report describes three cases of Plasmodium falciparum malaria and two cases of Plasmodium ovale malaria that occurred during June 27-October 15, 2007, in King County, Washington. The infections were diagnosed in Burundian refugees who had recently arrived in the United States from two refugee camps in Tanzania. Since 2005, CDC has recommended presumptive malaria treatment with artemisinin-based combination therapy (ACT) (e.g., artemether-lumefantrine) for refugees from sub-Saharan Africa before their departure for the United States (2). Rising levels of resistance to the previous mainstays of treatment, chloroquine and sulfadoxine-pyrimethamine, prompted CDC to make this recommendation. Implementation has been delayed in some countries, including Tanzania, where predeparture administration of presumptive ACT for refugees started in July 2007. The cases in this report highlight the need for health-care providers who care for recently arrived Burundian and other refugee populations to be vigilant for malaria, even among refugees previously treated for the disease.

Washington state law requires health-care providers, hospitals, and laboratories to report malaria and certain other conditions to the local health department.* This report summarizes the findings from five cases reported to the local health department by health-care providers and laboratories (Table). After these cases were reported, the patients' medical records were obtained from two local hospitals and reviewed to assist in case investigations. Initial investigations were limited to case investigation forms completed by public health officials based on available medical records.

*Notifiable conditions. Ch. 246-101, Washington Administrative Code. Available at .

Case 1. A female aged 3 years was diagnosed with P. falciparum malaria in May 2007 while in Tanzania. At that time, she was placed on a quinine-based regimen (formulation, date of administration, and method of administration unknown) and clinically recovered. During an overseas predeparture exam, a requirement for entry into the United States, she received presumptive malaria treatment with a course of sulfadoxine-pyrimethamine. She arrived in the United States on June 12, 2007, and became ill on June 25, 2007, with fevers, chills, and cough. On June 27, 2007, she was admitted to the local children's hospital. A blood smear revealed 7% hyperparasitemia (>5% = hyperparasitemia) with P. falciparum. Other laboratory findings included anemia, thrombocytopenia, and elevated aspartate aminotransferase. She received oral atovaquone-proguanil, clinically improved, and was discharged July 2, 2007, after 5 days in the hospital.

Case 2. A female aged 9 years arrived in the United States on July 23, 2007. Before leaving Tanzania, she received presumptive 3-day treatment of twice-daily artemether-lumefantrine; the last doses were administered on July 19, 2007. She became ill on August 11, 2007, with fever, headache, malaise, and cough. She was evaluated in the local county hospital emergency department on August 14, 2007. Blood smear (percent parasitemia unknown) and polymerase chain reaction (PCR) test results were positive for P. ovale.
[Table excerpt: treatments administered in the United States were, in case order, atovaquone-proguanil; mefloquine and primaquine; chloroquine and primaquine; and, for cases 4 and 5, quinidine and clindamycin followed by atovaquone-proguanil.]

Editorial Note: CDC recommends presumptive treatment of P. falciparum malaria in United States-bound refugees at high risk for infection rather than waiting for development of symptoms and risking severe complications or death after arrival in the United States (2). To be considered adequate presumptive therapy, the regimen must be completed no sooner than 3 days before departure (2). This approach reduces the risk for malaria-related morbidity and mortality among these refugees. Refugees are typically a medically underserved population with difficulty accessing care, which can lead to delays in diagnosis and treatment. Even if refugees are able to obtain care, health-care providers in the United States might not be familiar with recommended malaria treatment regimens. For example, the patient in case 1 did not receive adequate treatment for severe infection with P. falciparum. Instead, she received oral atovaquone-proguanil, which would have been appropriate for uncomplicated malaria.
The recommended regimens for severe infection with P. falciparum include either intravenous quinidine or artesunate (3). The latter is available from CDC via an investigational new drug protocol. Presumptive predeparture treatment for malaria in a geographically clustered population of refugees, as in a refugee camp, is easier logistically and less costly than treatment of symptomatic cases dispersed throughout the United States after arrival. Presumptive treatment also can reduce the risk for reintroduction of malaria into the United States. Reintroduction is a concern given that the malaria vector, the female Anopheles mosquito, is widespread in the United States. A recent malaria outbreak in the Caribbean resulting from reintroduction is an example of this possibility (4). The International Organization for Migration (IOM) is an intergovernmental agency that screens and treats most refugees bound for the United States. This is done at the request of the United States in an effort to reduce the incidence of infectious disease among refugees after they reach the United States. IOM administers presumptive treatment against P. falciparum malaria (and intestinal parasites) to refugees resettling from Tanzania before departure for the United States. In 2005, CDC recommended ACT as presumptive P. falciparum treatment for refugees resettling in the United States from sub-Saharan Africa. However, presumptive P. falciparum malaria treatment using sulfadoxine-pyrimethamine was used for Tanzanian refugees until July 7, 2007. CDC surveillance data indicate that among 1,805 Burundian refugees from Tanzania who resettled to 34 U.S. states during May 4-July 7, 2007, 29 symptomatic cases of malaria were identified in 12 states, including Washington. Twenty-six of these refugees (including the patient in case 1) were infected with P. falciparum alone, and two had mixed infections (P. falciparum and P. ovale or Plasmodium malariae). Speciation was not performed for the remaining case. Twenty-four of the 29 (82%) patients were hospitalized; none died (CDC, unpublished data). These 29 refugees departed for the United States before July 7, 2007, the date when IOM implemented the CDC recommendations that refugees from Tanzania receive presumptive treatment with 6-dose artemether-lumefantrine within 3 days before departure for the United States. Instead, they all received sulfadoxine-pyrimethamine before departure; high rates of resistance to sulfadoxine-pyrimethamine have been reported (5), but the artemether-lumefantrine regimen has been effective in field settings in Africa (6).
Two of the patients in this report who were infected with P. falciparum, the patients in cases 4 and 5, were resettled to the United States after July 7, 2007, the date when IOM instituted the change to artemether-lumefantrine treatment. These two patients received a complete artemether-lumefantrine presumptive treatment course before departure from Tanzania, yet both were diagnosed with P. falciparum after arrival in the United States. Possible explanations include incomplete treatment or nonadherence to the medication regimen (only 3 of 6 doses were directly observed in these two patients, and in the patients in cases 2 and 3), poor medication absorption, reinfection after treatment, or treatment during a time in the parasite's lifecycle when it would be unaffected by this regimen. In response to such continuing cases, IOM now directly observes all 6 doses of artemether-lumefantrine treatment and provides milk with each dose to improve absorption.
Current IOM policy targets infection with P. falciparum only. However, cases 2 and 3 in this series involved relapses of P. ovale after arrival in the United States. Infection with P. ovale (or Plasmodium vivax) generally results in less severe disease than infection with P. falciparum. Hypnozoites of P. ovale or P. vivax can remain dormant in the liver for months or years before causing relapse, and primaquine is the only agent available that can eliminate malaria parasites at this stage of their life cycle (7,8). However, predeparture presumptive treatment with primaquine to prevent relapse of P. ovale or P. vivax currently is not recommended because the cost, logistics of implementing a 14-day medication course, and risk for severe hemolytic anemia in glucose-6-phosphate dehydrogenase (G6PD)-deficient patients outweigh the potential benefit of avoiding a small number of non-P. falciparum malaria cases.
Up to 10,000 Burundian refugees from Tanzania will have been resettled in the United States during 2007-2008 (9). Health-care providers in the United States caring for refugee populations resettling from malarial regions should remain aware of the possibility of malaria in these groups, regardless of prior treatment.
# Syphilis Testing Algorithms Using Treponemal Tests for Initial Screening - Four Laboratories, New York City, 2005-2006
In the United States, testing for syphilis traditionally has consisted of initial screening with an inexpensive nontreponemal test, then retesting reactive specimens with a more specific, and more expensive, treponemal test. When both test results are reactive, they indicate present or past infection. However, for economic reasons, some high-volume clinical laboratories have begun using automated treponemal tests, such as automated enzyme immunoassays (EIAs) or immunochemiluminescence tests, and have reversed the testing sequence: first screening with a treponemal test and then retesting reactive results with a nontreponemal test. This approach has introduced complexities in test interpretation that did not exist with the traditional sequence. Specifically, screening with a treponemal test sometimes identifies persons who are reactive to the treponemal test but nonreactive to the nontreponemal test. No formal recommendations exist regarding how such results derived from this new testing sequence should be interpreted, or how patients with such results should be managed. To begin an assessment of how clinical laboratories are addressing this concern, CDC reviewed the testing algorithms used and the test interpretations provided in four laboratories in New York City. Substantial variation was found in the testing strategies used, which might lead to confusion about appropriate patient management. A total of 3,664 (3%) of 116,822 specimens had test results (i.e., reactive treponemal test result and nonreactive nontreponemal test result) that would not have been identified by the traditional testing algorithms, which end testing if the nontreponemal test result is nonreactive. If they have not been previously treated, patients with reactive results from treponemal tests and nonreactive results from nontreponemal tests should be treated for late latent syphilis.
Four New York City laboratories that routinely conduct syphilis testing using EIA treponemal screening tests were able to provide their testing algorithms, test volume, and test results for a convenience sample of specimens. Each laboratory used a slightly different testing algorithm and tested approximately 26,000-130,000 specimens for syphilis per year. CDC reviewed test results from a convenience sample of 116,822 specimens tested at these four laboratories during October 1, 2005-December 1, 2006.
In all four laboratories, no further testing was done on specimens that were nonreactive with the treponemal screening EIA. In all four laboratories, specimens considered reactive by EIA test were next tested with a rapid plasma reagin (RPR) test. However, the approach to follow-up testing then differed. At two laboratories, specimens that were reactive with EIA and nonreactive with RPR were retested using a different treponemal test: Treponema pallidum particle agglutination (TP-PA) or fluorescent treponemal antibody (FTA-ABS). At a third laboratory, specimens that were reactive to both the EIA test and the RPR test were retested using a different treponemal test (i.e., FTA-ABS or TP-PA). At the fourth laboratory, no further testing was done after the EIA and RPR tests. Of the 116,822 specimens included in the convenience sample, 6,587 (6%) were initially reactive to the EIA test (Figure). When 6,548 of the EIA-reactive specimens were tested with an RPR test, 2,884 (44%) were reactive and 3,664 (56%) were nonreactive to the RPR test. Further testing with FTA-ABS or TP-PA tests on 2,512 of the specimens reactive to the EIA test but nonreactive to the RPR test found 2,079 (83%) specimens reactive to the second treponemal tests (i.e., FTA-ABS or TP-PA). In addition, the one laboratory that performed TP-PA testing on specimens that were reactive to both the EIA and RPR tests found 78 of 80 (98%) specimens were reactive to the TP-PA test.
One laboratory provided limited interpretation of the various permutations of syphilis test results. The other three laboratories gave providers an objective summary of the test results (e.g., EIA reactive, RPR reactive, or EIA reactive and RPR nonreactive) with no interpretation. No additional information was available from the four laboratories regarding patient treatment.

Editorial Note: In the four New York City laboratories studied, reversing the traditional order of screening and confirmatory tests for syphilis resulted in 3,664 (3%) of 116,822 specimens with test results (i.e., reactive treponemal test result and nonreactive nontreponemal test result) that would not have been identified by the traditional testing algorithm. The importance of these test results is unclear because no specific prognostic information exists to guide patient evaluation and treatment.
Treponemal tests detect antibodies specific to T. pallidum. In addition to T. pallidum pallidum, which causes syphilis, other treponemal subspecies (e.g., pertenue, which causes yaws, and carateum, which causes pinta) also can produce reactive results to treponemal tests, but these subspecies are rare in the United States (1). A reactive treponemal test result indicates that treponemal infection has occurred at some point in the past but cannot distinguish between treated and untreated infections. As such, treponemal tests, such as the T. pallidum EIA test, TP-PA test, and FTA-ABS test, can produce reactive results for life, even after adequate treatment for syphilis.
Nontreponemal tests, such as the RPR test and venereal disease research laboratory (VDRL) test, detect antibodies to cardiolipin and are not specific for treponemal infection. Nontreponemal tests are more likely than treponemal tests to produce nonreactive results after treatment; therefore, reactive results from nontreponemal tests are more reliable indicators of untreated infection. Quantitative nontreponemal tests also are used to monitor responses to treatment or to indicate new infections. False-positive nontreponemal tests occur in 1%-2% of the U.S. population, and have been associated with multiple conditions, including pregnancy, human immunodeficiency virus (HIV) infection, intravenous drug use, tuberculosis, rickettsial infection, spirochetal infection other than syphilis, bacterial endocarditis, and disorders of immunoglobulin production (2,3). Nontreponemal test results might be falsely negative in longstanding latent infection (4). Both treponemal and nontreponemal tests can produce nonreactive results when the infection has been acquired recently; approximately 20% of test results are negative when patients have primary syphilis (4).
The four New York City laboratories in this report used various algorithms to evaluate specimens that were reactive to treponemal tests and nonreactive to nontreponemal tests. The different algorithms might lead to confusion in the interpretation of test results and, in turn, in the management and treatment of patients. Test results that would not have been identified by the traditional algorithm were obtained for 3% of the specimens tested for syphilis; thus, such results might be expected to occur several thousand times per year in New York City alone.
When results are reactive to both treponemal and RPR tests, persons should be considered to have untreated syphilis unless it is ruled out by treatment history. Persons who were treated in the past are considered to have a new syphilis infection if quantitative testing on an RPR test or another nontreponemal test reveals a fourfold or greater increase in titer (health departments maintain registries of past positive tests). When results are reactive to the treponemal test but nonreactive to the RPR test, persons with a history of previous treatment will require no further management. For persons without a history of treatment, a second, different treponemal test should be performed (5). If the second treponemal test is nonreactive, the clinician may decide that no further evaluation or treatment is indicated, or may choose to perform a third treponemal test to help resolve the discrepancy.
If the second treponemal test is reactive, clinicians should discuss the possibility of infection and offer treatment to patients who have not been previously treated. Unless history or results of a physical examination suggest a recent infection, such patients are unlikely to be infectious and should be treated for late latent infections, even though they do not meet the surveillance case definition (7). Treatment can prevent severe (i.e., tertiary) complications that can result from untreated syphilis, although the probability of such complications occurring without treatment, while unknown, likely is small (6). Treatment also allows patients to report that they have been treated for syphilis if they ever receive similar results from future treponemal screening tests. Public health departments determine their own priorities for partner notification and other prevention activities; however, because late infections are unlikely to be infectious, they would likely be considered low priority for health department intervention activities.
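The management logic in the preceding two paragraphs can be summarized programmatically. The sketch below is an illustration of the described algorithm only, not clinical software; the function name and return labels are invented:

```python
def interpret_reverse_sequence(eia_reactive: bool, rpr_reactive: bool,
                               second_treponemal_reactive: bool | None,
                               previously_treated: bool) -> str:
    """Simplified reading of the reverse-sequence syphilis results discussed above."""
    if not eia_reactive:
        return "no further testing"
    if rpr_reactive:
        return ("evaluate for new infection (fourfold or greater titer rise)"
                if previously_treated else "treat as untreated syphilis")
    # EIA reactive, RPR nonreactive:
    if previously_treated:
        return "no further management required"
    if second_treponemal_reactive is None:
        return "perform a second, different treponemal test"
    if second_treponemal_reactive:
        return "discuss possible infection; offer treatment for late latent syphilis"
    return "no further evaluation, or a third treponemal test to resolve the discrepancy"

print(interpret_reverse_sequence(True, False, None, previously_treated=False))
# perform a second, different treponemal test
```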
Reversal of the traditional syphilis screening sequence has been driven by economics. For high-volume laboratories, an automated treponemal test can be less expensive than using an RPR test for the initial screening. An important consequence of this reversal is the identification of a combination of reactive and nonreactive test results that would not otherwise have been identified. The clinical interpretation of these results is complicated by the lack of standardized follow-up testing algorithms among the four laboratories, and by the lack of an evidence base with which to judge the merits of each algorithm. Consequently, use of a reversed sequence of syphilis testing might result in overdiagnosis and overtreatment of syphilis in some clinical settings.
The recommendations in this report might not be appropriate in countries with different patterns of seroreactivity, systems of health care, and epidemiology of disease. Furthermore, additional analyses are needed that further elucidate the use and total costs of these alternative screening approaches for syphilis, given the anticipated increase in use of treponemal tests for screening in the United States.
# Infection Control Requirements for Dialysis Facilities and Clarification Regarding Guidance on Parenteral Medication Vials
In April 2008, the Centers for Medicare and Medicaid Services (CMS) published in the Federal Register its final rule on Conditions for Coverage for End-Stage Renal Disease (ESRD) Facilities (1). The rule establishes new conditions dialysis facilities must meet to be certified under the Medicare program and is intended to update CMS standards for delivery of quality care to dialysis patients. CDC's 2001 Recommendations for Preventing Transmission of Infections among Chronic Hemodialysis Patients (2) have been incorporated by reference into the new CMS conditions for coverage. Thus, effective October 14, 2008, all ESRD facilities are expected to follow the CDC recommendations as a condition for receiving Medicare payment for outpatient dialysis services.
In recent years, outbreak investigations in dialysis and other health-care settings have demonstrated that mishandling of parenteral medication vials can contribute to the risk for hepatitis C virus (HCV) infection and bacterial and other infections (3-7). In 2002, a CDC communication to CMS suggested that reentry into single-use parenteral medication vials (i.e., to administer medication to more than one patient), when performed on a limited basis and under strict conditions in hemodialysis settings, likely would result in low risk for bacterial infection (8). However, the 2002 communication did not address risks for bloodborne viral infections (e.g., HCV and hepatitis B virus infection). This report is intended to clarify and restate CDC's recommendation on parenteral medication to include bloodborne viral infections. The recommendations in this report supersede the 2002 CDC communication to CMS.
To prevent transmission of both bacteria and bloodborne viruses in hemodialysis settings, CDC recommends that all single-use injectable medications and solutions be dedicated for use on a single patient and be entered one time only. Medications packaged as multidose should be assigned to a single patient whenever possible. All parenteral medications should be prepared in a clean area separate from potentially contaminated items and surfaces. In hemodialysis settings where environmental surfaces and medical supplies are subjected to frequent blood contamination, medication preparation should occur in a clean area removed from the patient treatment area. Proper infection control practices must be followed during the preparation and administration of injected medications (9). This is consistent with official CDC recommendations for infection control precautions in hemodialysis (2) and other health-care settings (9).
Health departments and other public health partners should be aware of the new CMS conditions for ESRD facilities. All dialysis providers are advised to follow official CDC recommendations regarding Standard Precautions and infection control in dialysis settings (2,9). Specifically, CDC has recommended the following: "Intravenous medication vials labeled for single use, including erythropoietin, should not be punctured more than once. Once a needle has entered a vial labeled for single use, the sterility of the product can no longer be guaranteed" (2). Additional guidance on safe injection practices can be found in the Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings 2007 (9).
Dialysis providers also should be aware of their responsibility to report clusters of infections or other adverse events to the appropriate local or state public health authority. Failure to report illness clusters to public health authorities can result in delays in recognition of disease outbreaks (10).
# Notice to Readers
# Preventive Medicine Residency Application Deadline -October 1, 2008
CDC's Preventive Medicine Residency (PMR) program is accepting applications from physicians with public health and applied epidemiology experience. Application materials must be postmarked by October 1, 2008 for the 12-month program that begins in mid-June 2009.
The PMR prepares physicians for leadership roles in public health at federal, state, and local levels through instruction and supervised practical experiences focused on translating epidemiology to public health practice, management, and policy and program development. Residents spend the practicum year at CDC or in a state or local health department.
PMR alumni occupy leadership positions at CDC, at state and local health departments, in academia, and in private-sector agencies. Completion of the residency, which is accredited by the Accreditation Council for Graduate Medical Education for 12 months of practicum training, qualifies graduates to apply for certification by the American Board of Preventive Medicine in Public Health and General Preventive Medicine.
Additional information regarding the residency, eligibility criteria, and application process is available at cdc.gov/epo/dapht/pmr/pmr.htm or by calling 404-498-6140.
# Erratum: Vol. 57, No. SS-6
In the MMWR Surveillance Summary (Vol. 57, No. SS-6), "Epilepsy Surveillance Among Adults - 19 States, Behavioral Risk Factor Surveillance System, 2005," an error occurred on page 1 in the fourth sentence of the second paragraph of the Results/Interpretation. The sentence should read, "Among adults with active epilepsy with recent seizures, 16.1% reported not currently taking their epilepsy medication, and 65.1% reported having had more than one seizure in the past 3 months."

# QuickStats from the National Center for Health Statistics

Age-Adjusted Death Rates* by Race and Sex - United States, 2006†

* Per 100,000 standard population.
† Preliminary data.
In 2006, age-adjusted death rates were higher for males (924.6 per 100,000 population) than females (657.8 per 100,000 population) overall and within black and white populations. By race, death rates were higher for blacks than for whites.
Hepatitis C virus (HCV) is an increasing cause of morbidity and mortality in the United States. Many of the 2.7-3.9 million persons living with HCV infection are unaware they are infected and do not receive care (e.g., education, counseling, and medical monitoring) and treatment. CDC estimates that although persons born during 1945-1965 comprise an estimated 27% of the population, they account for approximately three fourths of all HCV infections in the United States and 73% of HCV-associated mortality, and are at greatest risk for hepatocellular carcinoma and other HCV-related liver disease. With the advent of new therapies that can halt disease progression and provide a virologic cure (i.e., sustained viral clearance following completion of treatment) in most persons, targeted testing and linkage to care for infected persons in this birth cohort is expected to reduce HCV-related morbidity and mortality. CDC is augmenting previous recommendations for HCV testing (CDC. Recommendations for prevention and control of hepatitis C virus (HCV) infection and HCV-related chronic disease. MMWR 1998;47) to recommend one-time testing without prior ascertainment of HCV risk for persons born during 1945-1965, a population with a disproportionately high prevalence of HCV infection and related disease. Persons identified as having HCV infection should receive a brief screening for alcohol use and intervention as clinically indicated, followed by referral to appropriate care for HCV infection and related conditions. These recommendations do not replace previous guidelines for HCV testing that are based on known risk factors and clinical indications. Rather, they define an additional target population for testing: persons born during 1945-1965. CDC developed these recommendations with the assistance of a work group representing diverse expertise and perspectives. The recommendations are informed by the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework, an approach that provides guidance and tools to define the research questions, conduct the systematic review, assess the overall quality of the evidence, and determine strength of the recommendations. This report is intended to serve as a resource for health-care professionals, public health officials, and organizations involved in the development, implementation, and evaluation of prevention and clinical services. These recommendations will be reviewed every 5 years and updated to include advances in the published evidence.

# Introduction
In the United States, an estimated 2.7-3.9 million persons (1.0%-1.5%) are living with hepatitis C virus (HCV) infection (1), and an estimated 17,000 persons were newly infected in 2010, the most recent year that data are available (2). With an HCV antibody prevalence of 3.25%, persons born during 1945-1965 account for approximately three fourths of all chronic HCV infections among adults in the United States (3). Although effective treatments are available to clear HCV infection from the body, most persons with HCV do not know they are infected (4-7), do not receive needed care (e.g., education, counseling, and medical monitoring), and are not evaluated for treatment. HCV testing is the first step toward improving health outcomes for persons infected with HCV.
Since 1998, routine HCV testing has been recommended by CDC for persons most likely to be infected with HCV (8) (Box). These recommendations were made on the basis of a known epidemiologic association between a risk factor and acquiring HCV infection. However, many persons with HCV infection do not recall or report having any of these specific risk factors.
In a recent analysis of data from a national health survey, 55% of persons ever infected with HCV reported an exposure risk (e.g., injection-drug use or blood transfusion before July 1992), and the remaining 45% reported no known exposure risk (CDC, unpublished data, 2012). Other potential exposures include ever having received chronic hemodialysis, being born to an HCV-infected mother, intranasal drug use, acquiring a tattoo in an unregulated establishment, being incarcerated, being stuck by a needle (e.g., in health care, emergency medical, home, or public safety settings), and receiving invasive health-care procedures (i.e., those involving a percutaneous exposure, such as surgery before implementation of universal precautions).
# Recommendations for the Identification of Chronic Hepatitis C Virus Infection Among Persons Born during 1945-1965*
- Adults born during 1945-1965 should receive one-time testing for HCV without prior ascertainment of HCV risk. - All persons with identified HCV infection should receive a brief alcohol screening and intervention as clinically indicated, followed by referral to appropriate care and treatment services for HCV infection and related conditions.
# Guidelines for Prevention and Treatment of Opportunistic Infections in HIV-Infected Adults and Adolescents †
- HIV-infected patients should be tested routinely for evidence of chronic HCV infection. Initial testing for HCV should be performed using the most sensitive immunoassays licensed for detection of antibody to HCV (anti-HCV) in blood.
# Recommendations for Prevention and Control of Hepatitis C Virus (HCV) Infection and HCV-Related Chronic Disease §
Routine HCV testing is recommended for
- Persons who ever injected illegal drugs, including those who injected once or a few times many years ago and do not consider themselves as drug users.
- Persons with selected medical conditions, including
  - persons who received clotting factor concentrates produced before 1987;
  - persons who were ever on chronic (long-term) hemodialysis; and
  - persons with persistently abnormal alanine aminotransferase levels.
- Prior recipients of transfusions or organ transplants, including
  - persons who were notified that they received blood from a donor who later tested positive for HCV infection;
  - persons who received a transfusion of blood or blood components before July 1992; and
  - persons who received an organ transplant before July 1992.

Routine HCV testing is recommended for persons with recognized exposures, including
- Health care, emergency medical, and public safety workers after needle sticks, sharps, or mucosal exposures to HCV-positive blood.
- Children born to HCV-positive women.
# BOX. Recommendations for prevention and control of hepatitis C virus (HCV) infection and HCV-related chronic diseases
Although HCV is inefficiently transmitted through sexual activity, the prevalence of HCV antibodies among persons who report having had ≥20 sex partners is 4.5 times greater than in the general population (1).
These birth-year-based recommendations are intended to augment, not replace, the 1998 HCV testing guidelines (8). They were developed by the HCV Birth Cohort Testing Work Group, which consisted of experts from CDC and other federal agencies, professional associations, community-based organizations, and medical associations. The Work Group used the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework (9-17) to inform the development of these recommendations. The GRADE approach provides guidance and tools to define the research questions, conduct systematic reviews, assess the overall quality of the evidence, and determine the direction and strength of the recommendations. Following this evidence review, CDC's Division of Viral Hepatitis (DVH) developed this report, which was then peer-reviewed by external experts and posted for public comment (www.regulations.gov). CDC reviewed and considered all public comments in developing the final recommendations.
# Background
HCV causes acute infection, which can be characterized by mild to severe illness but is usually asymptomatic. In approximately 75%-85% of persons, HCV persists as a chronic infection, placing infected persons at risk for liver cirrhosis, hepatocellular carcinoma (HCC), and extrahepatic complications that develop over the decades following onset of infection (18).
Because HCV is a bloodborne infection, risks for HCV transmission are primarily associated with exposures to contaminated blood or blood products (8). In 1998, the highest prevalence of antibody to HCV (anti-HCV) was documented among persons with substantial or repeated direct percutaneous exposures, such as persons who inject drugs (PWID), those who received blood from infected donors, and persons with hemophilia (60%-90%); moderate rates were found among those with repeated direct or unapparent percutaneous exposures involving smaller amounts of blood, such as hemodialysis patients (10%-30%). Persons with unapparent percutaneous or mucosal exposures, including those with high-risk sexual behaviors, sexual and household contacts of persons with chronic HCV infection (1%-10%), and persons with sporadic percutaneous exposures (e.g., health-care workers), had lower rates. According to American Red Cross Blood Service systems in the United States, prevalence among first-time blood donors was even lower (0.16% in 2008) (19). Before 1965, the estimated incidence of HCV infection (then known as non-A, non-B hepatitis) was low (18 cases per 100,000 population). However, the incidence of HCV infection increased steadily into the 1980s and remained high (130 cases per 100,000 population), representing an average of 230,000 infections per year during that decade (20). In 1988, HCV was identified, and by 1992, sensitive multiantigen serologic assays for testing the blood supply had been developed and licensed. During 1992-2004, the number of reported cases of new HCV infection decreased 78.4% (2), and during 1999-2008, HCV prevalence among first-time blood donors decreased 53%. Much of this decline can be attributed to a decrease in cases among PWID (21). Safer injection practices among PWID contributed to some of this decline, but the downward trend was most likely related to HCV infection saturation of the injection-drug-using population (21). A smaller proportion of the overall decline in HCV infection incidence was attributed to effective screening of blood donors to prevent HCV transmission. Since 2004, HCV incidence has remained stable (21). In 2010, the estimated number of newly acquired (i.e., acute) infections in the United States was 17,000 (2,22).
The overall prevalence of anti-HCV in the general population of the United States can be estimated by analyzing National Health and Nutrition Examination Survey (NHANES) data, a representative sample of the civilian noninstitutionalized population. NHANES data indicate that HCV infection prevalence was 1.6%-1.8% during 1988-2002, consistent with the finding that the incidence of infection declined and then remained stable during this time (1,20,21,23). Considering NHANES data collected from 1999-2008, the anti-HCV prevalence estimate is 1.5%, or 3.9 million persons (95% confidence interval = 1.3-1.7; 3.4-4.4 million persons). NHANES data underestimate the actual national prevalence because these surveys do not include samples of incarcerated or homeless persons, populations known to have high prevalence of HCV infection. Although no systematic surveys comparable to NHANES have sampled these populations, their inclusion has been estimated to increase the number of infected persons by 500,000-1,000,000 (24).
# Rationale for Augmenting HCV Testing Recommendations
In 1998, recommendations for identifying HCV-infected persons were issued as part of a comprehensive strategy for the prevention and control of HCV infection and HCV-related chronic disease (8). HCV testing was recommended for persons at high risk for HCV infection, including persons who 1) had ever injected drugs, 2) had ever received chronic hemodialysis, 3) received blood transfusions or organ transplants before July 1992, or 4) received clotting factor concentrates produced before 1987 (Box). Screening also was recommended for persons with a recognized exposure (i.e., health-care, emergency medical, and public safety workers after needlestick, sharps, or mucosal exposures, and children born to HCV-infected mothers) and for persons with laboratory evidence of liver inflammation (i.e., persistently elevated alanine aminotransferase levels). In 1999, HCV testing also was recommended for persons infected with HIV (25).
# Limited Effectiveness of Current Testing Strategies
Current risk-based testing strategies have had limited success, as evidenced by the substantial number of HCV-infected persons who remain unaware of their infection (26). Of the estimated 2.7-3.9 million persons living with HCV infection in the United States, 45%-85% are unaware of their infection status (4-7); this proportion varies by setting, risk level in the population, and site-specific testing practices. Studies indicate that even among high-risk populations for whom routine HCV testing is recommended, the proportion tested for HCV seromarkers varies from 17% to 87% (4,5); according to one study, 72% of HCV-infected persons with a history of injection-drug use remain unaware of their infection status (27). Barriers to testing include inadequate health insurance coverage and limited access to regular health care (7); however, risk-based testing has failed to identify most HCV-infected persons, even those covered by health insurance (6).
Barriers at the provider level also limit the success of the risk-based approach to HCV testing. Studies indicate that providers' knowledge of HCV infection prevalence, natural history, available tests, and testing procedures is low (28-30). Although up-to-date professional guidelines on HCV testing are available from the American Association for the Study of Liver Diseases (AASLD) (18,31), one survey found that 41.7% of primary care physicians reported being unfamiliar with these guidelines (32). In addition, the accuracy of patient recall of risk behaviors, including drug use and sexual encounters, decreases over time (33).
# Increasing HCV-Associated Morbidity and Mortality
HCV-associated disease is the leading indication for liver transplantation and a leading cause of HCC in the United States (26,34-36). HCC and cirrhosis have been increasing among persons infected with HCV (37,38), and these outcomes are projected to increase substantially in the coming decade (39,40). HCC is the fastest-growing cause of cancer-related mortality, and HCV infection accounts for approximately 50% of incident HCC (41). A CDC review of death certificate data found that the hepatitis C mortality rate increased substantially during 1999-2007 (annual change: +0.18 deaths per 100,000 population per year); in 2007, HCV caused 15,106 deaths (42). Of the HCV-related deaths, 73.4% occurred among persons aged 45-64 years, with a median age at death of 57 years (approximately 20 years less than the average lifespan of persons living in the United States).
On the basis of data from prospective and retrospective cohorts, an estimated 20% of infected persons will progress to cirrhosis within 20 years of infection, and up to 5% will die from HCV-related liver disease (43). Modeling studies forecast substantial increases in morbidity and mortality among persons with chronic hepatitis C as they age into their third, fourth, and fifth decades of living with the disease (44,45). These models project that during the next 40-50 years, 1.76 million persons with untreated HCV infection will develop cirrhosis, with a peak prevalence of 1 million cases occurring from the mid-2020s through the mid-2030s, and approximately 400,000 will develop HCC (40). Of persons with hepatitis C who do not receive needed care and treatment, approximately 1 million will die from HCV-related complications (40,46).
# Benefits of HCV Testing and Care
Clinical preventive services, regular medical monitoring, and behavioral changes can improve health outcomes for persons with HCV infection. HCV care and treatment recommendations have been issued by AASLD and endorsed by the Infectious Diseases Society of America (IDSA) and the American Gastroenterological Association (AGA) (18). Because co-infection with HIV, hepatitis A virus (HAV), or hepatitis B virus (HBV) and consumption of alcohol hasten the progression of HCV-related disease (47), professional practice guidelines (18) include counseling to decrease or eliminate alcohol consumption and vaccination against HAV and HBV for susceptible persons. Additional guidance includes counseling and education to reduce interactions between herbal supplements and over-the-counter and prescription medications (18,31). Because elevated body mass index (BMI) (weight [kg]/height [m]²) has been linked to increased disease progression among HCV-infected persons, counseling to encourage weight loss is recommended for persons with BMI ≥25 to reduce the likelihood of insulin resistance and disease progression (18,48). As HCV-associated liver disease progresses, the likelihood of sustaining a treatment response decreases (48,49); therefore, early identification, linkage to care, and clinical evaluation are critical disease prevention interventions.
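Because the BMI threshold above triggers a specific counseling recommendation, the calculation is worth spelling out. A minimal sketch follows, assuming metric units (kilograms and meters); the patient values are purely illustrative.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Illustrative patient only: 82 kg and 1.75 m give a BMI of ~26.8,
# above the >=25 threshold at which weight-loss counseling is
# recommended to reduce insulin resistance and disease progression.
value = bmi(82.0, 1.75)
print(f"BMI = {value:.1f}")
if value >= 25:
    print("Counsel on weight loss (BMI >= 25)")
```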
# Benefits of HCV Treatment
AASLD recommends considering antiviral treatment for HCV-infected persons with histologic evidence of bridging fibrosis, septal fibrosis, or cirrhosis (18). In 2011, the first generation of direct-acting antiviral agents (DAAs), the HCV NS3/4A protease inhibitors telaprevir and boceprevir, were licensed in the United States for treatment of HCV genotype 1 (the most common genotype in the United States). Compared with conventional pegylated interferon and weight-based ribavirin therapy (PR) alone, the addition of telaprevir or boceprevir in clinical trials increased rates of sustained virologic response (SVR) (i.e., viral clearance following completion of treatment) from 44% to 75% and from 38% to 63%, respectively (50,51). In a study of veterans with multiple comorbidities, achieving an SVR after treatment was associated with a reduction in risk for all-cause mortality of >50% (52) and with substantially lower rates of liver-related death and decompensated cirrhosis (i.e., cirrhosis with a diagnosis of at least one of the following: ascites, variceal bleeding, encephalopathy, or impaired hepatic synthetic function) (18). Because these treatment regimens were introduced only recently, the long-term effects of DAA treatment in clinical practice have yet to be established, and the benefits might differ in community settings. In addition to the newly Food and Drug Administration (FDA)-approved medications, approximately 20 HCV treatments (protease and polymerase inhibitors) are undergoing Phase II or Phase III clinical trials (53); treatment recommendations are expected to change as new medications become available for use in the United States.
# Consideration of a New HCV Testing Strategy
Because of the limited effectiveness of risk-based HCV testing, the rising HCV-associated morbidity and mortality, and advances in HCV care and treatment, CDC has evaluated public health strategies to increase the proportion of infected persons who know their HCV infection status and are linked to care. Several analyses of nationally representative data have found a disproportionately high prevalence of HCV infection among persons who were born during the mid-1940s through the mid-1960s. In an analysis of 1988-1994 NHANES data, 65% of 2.7 million persons with HCV infection were aged 30-49 years (23), roughly corresponding to this birth cohort.
In an analysis of NHANES data collected during 1999-2002, a similarly high proportion of persons with HCV antibody had been born during 1945-1964 (Figures 1 and 2) (1). A more recent analysis of 1999-2008 NHANES data found that the prevalence of HCV antibody among persons in the 1945-1965 birth cohort was 3.25% (95% CI = 2.80%-3.76%); persons born during these years accounted for more than three-fourths (76.5%) of the total anti-HCV prevalence in the United States (3).
# Selection of a Target Birth Cohort
To select a target birth cohort for an expanded testing strategy, CDC considered various birth cohorts with increased HCV prevalence (Table 1). For each proposed cohort, CDC determined the weighted, unadjusted anti-HCV prevalence and the size of the population. On the basis of HCV prevalence and disease burden, the 1945-1965 birth cohort was selected as the target population. Three birth cohorts (1945-1965, 1950-1970, and 1945-1970) were additionally stratified by race/ethnicity and sex (Table 2). Differences in the male-to-female ratio were not substantial and were not critical in selecting the birth cohort. However, the difference in prevalence by race/ethnicity between the birth cohorts is notable: both the 1950-1970 and 1945-1970 cohorts have a lower prevalence of HCV infection among non-Hispanic black persons than does the 1945-1965 cohort. Of the 210,000 anti-HCV-positive persons in the 1945-1949 cohort, approximately 71,000 (35%) were black. Because non-Hispanic black persons account for a substantial proportion of the 1945-1965 birth cohort, these birth years were included to better address this health disparity.
CDC also examined the effect of extending the target population (i.e., the 1945-1965 cohort) to include persons born during 1966-1970. Such a strategy would direct testing to approximately 20 million additional persons at a cost of approximately $1.08 billion and would identify an additional 300,000 persons with chronic infection. The number needed to screen to avert a single HCV-related death was lower in the 1945-1965 birth cohort than in the 1945-1970 birth cohort (607 versus 679). In addition, data collected through a series of 12 consumer focus groups in three U.S. cities demonstrated that the 1945-1965 birth cohort is a recognized subpopulation known as the "baby boomers"; familiarity with this subpopulation and the term used to describe it likely will facilitate adoption of the recommendation. On the basis of these assessments, CDC selected the 1945-1965 birth cohort as the target population.
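The incremental comparison above rests on simple arithmetic, reproduced in the sketch below from the figures cited in the text; the derived per-person and per-case costs are implied values, not numbers reported by the studies.

```python
# Sketch: incremental arithmetic for extending testing to persons born
# during 1966-1970, using only figures cited in the text.

additional_persons = 20e6        # additional persons directed to testing
additional_cost = 1.08e9         # estimated additional cost (USD)
additional_chronic = 300_000     # additional chronic infections identified

print(f"~${additional_cost / additional_persons:.0f} per additional person tested")
print(f"~${additional_cost / additional_chronic:,.0f} per additional "
      "chronic infection identified")

# Number needed to screen (NNS) to avert one HCV-related death, as cited:
nns = {"1945-1965": 607, "1945-1970": 679}
for cohort, n in nns.items():
    print(f"{cohort} cohort: screen {n} persons to avert one HCV-related death")
```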
# Prevalence of HCV Infection in the 1945-1965 Birth Cohort
The prevalence of anti-HCV among persons born during 1945-1965 is 3.25% (3), five times higher than among adults born in other years. The high prevalence in this birth cohort reflects the substantial number of incident infections during the 1970s and 1980s and the persistence of HCV as a chronic infection. Males in this cohort had almost twice the prevalence of their female counterparts; prevalence was highest among non-Hispanic black males (8.12%), followed by non-Hispanic white males (4.05%) and Mexican-American males (3.41%).
Health outcomes among HCV-infected persons born during 1945-1965 are further complicated by lack of health insurance (31.5% are uninsured) and alcohol use (3). Of all anti-HCV-positive persons in the 1945-1965 birth cohort who self-reported alcohol use, 57.8% reported consuming an average of two or more alcoholic drinks per day (3).
# Methods
CDC used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology to inform the guideline development process. In April 2011, CDC convened the HCV Birth Cohort Testing Work Group to explore the practicality of developing a recommendation for one-time HCV testing of persons unaware of their infection status. Epidemiologic data support consideration of a birth-year testing strategy; however, the GRADE process required a formal review of the literature to examine the effect that such testing would have on diagnosing persons unaware of their HCV infection status, as well as the potential benefits and harms for persons tested. The Work Group consisted of 1) a steering committee within CDC's Division of Viral Hepatitis (DVH), which led and conducted the evidence reviews; 2) representatives from DVH's Laboratory, Prevention, and Epidemiology and Surveillance Branches, who reviewed and provided input on the evidence compiled by the steering committee through biweekly meetings; and 3) representatives external to CDC, who provided input on materials compiled by the steering committee through teleconferences, an evidence-grading methodology training workshop, and a consultation. External representatives were selected on the basis of expertise in viral hepatitis; members included representatives of hepatitis C-related community-based organizations, persons living with HCV infection, hepatologists, economists, infectious disease specialists, and guideline methodologists. A wide range of disciplines, organizations, and geographic regions was represented, including the professional societies involved in HCV consultation and referral (AASLD, AGA, and IDSA). Work Group participants were required to disclose conflicts of interest and were notified of the restrictions regarding lobbying during the recommendation development process (Appendix A). No members' activities were restricted on the basis of the information disclosed.
Comprehensive systematic reviews of the literature were conducted, analyzed, and assessed in two stages to examine the availability and quality of the evidence regarding HCV infection prevalence and the health benefits and harms associated with one-time HCV testing for persons unaware of their status.
Work Group members communicated through teleconferences and attended an in-person workshop on GRADE methodology. Initial evidence from the systematic review of the prevalence data was shared during the teleconferences, and the target birth years were selected. Following that selection, the systematic review focused on the HCV-associated morbidity and mortality that might be altered by a recommendation for one-time testing of persons born during 1945-1965.
In August 2011, CDC convened a 2-day consultation with Work Group members to 1) review and evaluate the quality of the evidence for the proposed birth cohort-based strategy, 2) consider benefits versus harms for patient-important outcomes, 3) weigh the variability in values and preferences regarding HCV testing among potential patients, and 4) consider resource implications. During the consultation, a summary-of-findings table addressing each patient-important outcome was presented to attendees for discussion (Appendix B). Work Group members later provided input on the quality of the evidence and the strength of the recommendations. Following the consultation, the DVH Steering Committee and other DVH representatives reviewed the information and reached a decision regarding the strength of the recommendations. A recommendations statement and qualifying remarks were then developed in accordance with GRADE methodology.
Feedback from the public was solicited through conference presentations, meetings with national stakeholders, and public comment. Further, the proposed guidelines were peer-reviewed by external experts in viral hepatitis. A Federal Register notice was released on May 18, 2012, announcing the availability of the draft recommendations for public comment through June 8, 2012. In addition, external Work Group members were asked to comment on the recommendations statement and remarks during the public comment process. Feedback from the public comment period was reviewed by the DVH Steering Committee, and the draft was modified accordingly. Throughout the development process, CDC also sought input from participants at national conferences, including AASLD's 2011 Single Topic Conference, the 2010 Annual Meeting of the American Public Health Association, the 2010 AASLD Conference, the 2011 Guidelines International Network Conference, and Digestive Disease Week 2012.
# GRADE Methodology
These recommendations were developed using GRADE methodology (9-17), which has been adopted by approximately 60 organizations, including CDC federal advisory committees (i.e., the Advisory Committee on Immunization Practices and the Healthcare Infection Control Practices Advisory Committee), the World Health Organization, IDSA, AGA, and the Cochrane Collaboration (www.gradeworkinggroup.org). GRADE provides guidance and tools to define research questions, develop an analytic framework, conduct systematic reviews, assess the overall quality of the evidence, and determine the direction and strength of the recommendations.
Research questions were formulated to guide the development of the recommendations using a population, intervention, comparator, and outcome (PICO) format (9). The research questions were developed to support a two-stage approach to the evidence review: 1) determine the baseline prevalence of HCV infection and 2) measure the effects of an intervention (i.e., patient-important benefits and harms).
Per the GRADE process, the HCV Birth Cohort Testing Work Group designed an analytic framework (Appendix C), which was used to examine patient-important outcomes associated with each step of the testing effort, from identification of the target population to treatment of persons found to be infected with HCV. To measure the benefits and harms of HCV screening and treatment, patient-important outcomes were compiled and ranked according to their relevance to the recommendation (a rating of 1-3 indicating low importance; 4-6, important but not critical to decision making; and 7-9, critical to decision making). Literature reviews were conducted for outcomes identified as important or critical to decision making. Work Group members had three opportunities to rank the outcomes: 1) when the outcomes were first identified, 2) after the evidence was presented, and 3) during the discussion of benefits and harms, allowing the Work Group to weigh the relative importance of the outcomes in light of the evidence presented.
The quality of the evidence was assessed collectively for each patient-important outcome, not for individual studies, using the GRADE profiler software (GRADEpro 3.6). The quality of the evidence was categorized as "high," "moderate," "low," or "very low" according to established criteria for rating quality up or down. The quality of evidence for an outcome was rated down if it met at least one of five criteria: 1) risk of bias, 2) inconsistency or heterogeneity, 3) indirectness (addressing a different population than the one under consideration), 4) imprecision, or 5) publication bias. Conversely, the quality of the evidence was rated up if it met any of three criteria: 1) large effect size, 2) dose-response relationship, or 3) plausible residual confounding (i.e., when residual biases would, if anything, diminish the observed intervention effect) (Appendix B). Outcomes were re-ranked for importance after the Work Group's consideration of the evidence.
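The rating bookkeeping described above can be summarized in a short sketch. The starting levels and criterion names mirror the text, but whether a criterion applies is a judgment that cannot be automated, and the one-level step per criterion is an assumption (GRADE permits rating down or up by more than one level).

```python
# Sketch of GRADE quality bookkeeping: evidence starts at a level set by
# study design and is rated down or up by the criteria listed in the text.
LEVELS = ["very low", "low", "moderate", "high"]

RATE_DOWN = {"risk of bias", "inconsistency", "indirectness",
             "imprecision", "publication bias"}
RATE_UP = {"large effect", "dose-response", "plausible residual confounding"}

def grade_quality(start_level: str, applicable: set) -> str:
    idx = LEVELS.index(start_level)
    idx -= len(applicable & RATE_DOWN)  # one level down per criterion (assumed)
    idx += len(applicable & RATE_UP)    # one level up per criterion (assumed)
    return LEVELS[max(0, min(idx, len(LEVELS) - 1))]

# Mirrors the HCC outcome below: observational evidence ("low") rated up
# for a large effect size -> "moderate".
print(grade_quality("low", {"large effect"}))
# Mirrors the SVR outcome below: trial evidence ("high") rated down for
# indirectness -> "moderate".
print(grade_quality("high", {"indirectness"}))
```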
Four factors are considered when determining the direction and strength of a GRADE-based recommendation: 1) quality of evidence, 2) balance between benefits and harms, 3) values and preferences, and 4) resource implications. During the consultation, the Work Group considered each of these factors in light of the evidence presented. A statement reflecting the direction and strength of the recommendation was developed using GRADE criteria; statements were either "for" or "against" an intervention and were either strong (designated by a "should" statement) or conditional (designated by a "may consider" statement).
# Research Questions
To facilitate a succinct, systematic review of the evidence, the Work Group developed the following review questions to be considered when examining prevalence data and patientimportant outcomes:
- What is the effect of a birth-year-based testing strategy versus the standard of care (i.e., risk-based testing) for identification of hepatitis C virus (HCV) infection?
- Should HCV testing (versus no testing) be conducted among adults at average risk for infection who were born during 1945-1965?
- Among persons tested and identified with HCV infection, is treatment-related SVR (versus treatment failure) associated with reduced liver-related morbidity and all-cause mortality?
- Should HCV testing followed by brief alcohol interventions (versus no intervention) be carried out to reduce or stop drinking among HCV-infected persons?

Review questions were aligned with the analytic framework and were formed in accordance with PICO. The division of these questions into two topics, prevalence data and patient-important outcomes, reflects the two-stage approach used to 1) define the testing strategy and birth years of interest and 2) examine the effects of testing persons born during 1945-1965 for HCV infection. Because the patient-important outcomes questions encompass many outcomes, they are formed without listing any one specific outcome; they present only the population, intervention, and comparator.
# Literature Review
The DVH Steering Committee reviewed current HCV testing guidelines (8,18,54-58) and existing scientific evidence; systematic reviews and meta-analyses were conducted to synthesize the evidence available for the review questions. This evidence was compiled and presented to the Work Group throughout the development process.
The systematic review process for these recommendations was separated into two stages: 1) a review of HCV infection prevalence to determine the effect of a birth-year testing strategy, and 2) a review of the effects of testing persons born during 1945-1965 on patient-important outcomes. Search strategies varied for each stage; however, following the initial collection of results from the search, titles and abstracts were reviewed by two persons. If disagreement on the inclusion of an article occurred, an independent third reviewer decided whether the article would be included. For the titles and abstracts that met the inclusion criteria, the full article was retrieved and reviewed. Information from the full articles was extracted for the GRADE profiles to conduct the meta-analyses.
# Prevalence Data
The review of prevalence data was conducted to identify literature addressing a birth-year-based strategy or providing additional support for the prevalence estimates (see Selection of a Target Birth Cohort). The DVH Steering Committee reviewed all literature regarding the effect of a birth-year-based testing strategy for HCV infection that had been considered and published after CDC's 1998 recommendation. To be selected for review, articles had to have been published during 1995-2011, describe results of U.S.-based studies, and include participants within the target population (i.e., the 1945-1965 birth cohort). Case studies and studies of persons co-infected with HBV or HIV were excluded. Six databases were searched for primary research, including grey literature and conference abstracts: MEDLINE, EMBASE, Sociological Abstracts, Cochrane Library (e.g., Database of Systematic Reviews, Central Register of Controlled Trials, and Economic Evaluation Database), CINAHL, and Database of Abstracts of Reviews of Effects (DARE) (Appendix D).
# Patient-Important Outcomes
A literature search for the effect of HCV testing and treatment on patient-important outcomes was conducted (Appendix E). A search for previously published systematic reviews and meta-analyses was conducted first, and these were used to address the patient-important outcomes when available and of high quality. When systematic reviews or meta-analyses were unavailable, primary studies were sought and added to the results. When possible, data from primary studies were entered into systematic review software (Review Manager, 2008) to produce meta-analyses for estimation of effect sizes. Otherwise, effect size data were extracted directly from published meta-analyses.
Separate, targeted literature reviews were conducted for the outcomes considered important or critical to decision making (i.e., given a GRADE rating of ≥4); these outcomes included:
- all-cause mortality;
- HCC;
- SVR (a marker of virologic cure);
- serious adverse events (SAEs) (i.e., treatment-related side effects);
- quality of life (QoL);
- HCV transmission; and
- alcohol use.

The selection criteria for the primary literature search included intervention studies (i.e., controlled trials, cohort studies, and case-control studies) conducted worldwide and published in English. Case studies were excluded, along with studies of transplant recipients and persons co-infected with HBV or HIV if co-infection was not controlled for in the analysis. To be selected, studies needed to present data inclusive of persons born during 1945-1965. Because DAAs had only recently been licensed, evidence on their long-term effect on the patient-important outcomes was insufficient; therefore, only studies of treatment regimens with pegylated interferon (with or without ribavirin) or interferon (with or without ribavirin) were examined.
A systematic, targeted review was conducted to examine potential harmful and beneficial patient-important outcomes associated with HCV testing and treatment. A similar review also was conducted to examine reduction or cessation of alcohol use associated with brief interventions provided to persons identified as HCV-infected. Only those outcomes considered critical to decision making (i.e., all-cause mortality, HCC, SVR, treatment-related SAEs, QoL, HCV transmission, and alcohol use) were graded on their quality and used to inform the strength of the recommendations.
# Results
# Review of HCV Infection Prevalence Data
Of the 10,619 articles that met the search criteria for the HCV infection prevalence review, 31 provided data on HCV infection prevalence by birth year (Appendix F). Three of those articles (1,23,59) examined nationally representative data from NHANES. Because NHANES data are population based and nationally representative, their quality was deemed higher than that of the other 28 articles; therefore, NHANES data for 1999-2008 were used to determine the most effective birth years to target when testing persons for HCV infection. The NHANES analysis revealed a 3.25% prevalence of anti-HCV among persons born during 1945-1965 (95% CI = 2.80%-3.76%). The prevalence data were presented to the Work Group early in the development process, and the results were reviewed again during the discussion of patient-important outcomes at the consultation.
# Patient-Important Outcomes
Of the patient-important outcomes determined by the Work Group to be either important or critical for decision making (see GRADE Methodology), evidence was found in the literature for all-cause mortality, HCC, SVR, SAEs, QoL, and alcohol use. However, for several other important or critical outcomes (i.e., HCV transmission, insurability, reassurance from testing negative, false reassurance from testing negative, and worry or anxiety caused by testing true positive), no studies examining their importance and relevance to a birth cohort recommendation could be identified. With the exception of HCV transmission, these outcomes were re-ranked as not critical to decision making. For HCV transmission, the Work Group kept the categorization as critical to decision making to highlight the need for future research.
# All-Cause Mortality
Previously published systematic reviews and meta-analyses did not provide all-cause mortality information relevant to this population, so a systematic review was conducted (Appendix G). A total of 22 published articles examined all-cause mortality among persons tested and treated for HCV infection. However, review of the full articles revealed weaknesses in 21 of these studies, including insufficient sample sizes, unrepresentative study populations, and other sources of confounding; thus, they did not meet the inclusion criteria (Appendix G). One study was identified as directly applicable to the target population (52). This study had a large sample size and rigorously controlled for covariates in post hoc analysis, which improved the Work Group's confidence in the estimate of effect. The study, which included 16,864 HCV-infected persons identified through the U.S. Department of Veterans Affairs, found that treatment-related SVR was associated with a reduced risk for mortality among persons with diagnosed HCV infection (relative risk [RR] = 0.45; 95% CI = 0.41-0.51). However, the study compared only persons who responded to therapy with those who did not; it did not address a screened or untreated population. Differences in stage of liver disease between the groups had the potential to bias the findings, but those data were not available. Therefore, confidence in the estimate of effect was deemed low, and the quality of the evidence was not rated up despite the large estimated treatment effect (Appendix B).
# Hepatocellular Carcinoma
A meta-analysis was conducted to examine HCC as a patient-important outcome. A total of 12 observational studies (n = 25,752) providing adjusted relative risk measures examined the incidence of HCC among persons achieving an SVR versus those who did not respond to treatment (60-71) (Appendices H, I, and J). Data from these studies revealed that treatment-related SVR was associated with a >75% reduction in risk for HCC among persons at all stages of fibrosis (RR = 0.24; 95% CI = 0.18-0.31). Minimal heterogeneity was observed (I² = 22%), attributed mainly to the few occurrences of HCC and the small sample sizes of the studies. No criteria were fully met to justify rating down the quality of the evidence for this outcome; instead, the quality of the evidence was rated up to moderate because of the substantial relative risk reduction (Appendix B).
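Pooled relative risks of this kind are conventionally obtained by inverse-variance weighting of log relative risks, with Cochran's Q and I² summarizing heterogeneity. The sketch below illustrates those mechanics on made-up study inputs; it is not a re-analysis of the 12 studies cited above.

```python
# Sketch: fixed-effect inverse-variance pooling of log relative risks,
# with an I^2 heterogeneity estimate. Study inputs are hypothetical.
import math

studies = [  # (relative risk, standard error of log RR) -- illustrative
    (0.21, 0.20), (0.30, 0.25), (0.22, 0.15), (0.27, 0.30),
]

log_rrs = [math.log(rr) for rr, _ in studies]
weights = [1.0 / se**2 for _, se in studies]

pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
lo, hi = math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled)
print(f"Pooled RR = {math.exp(pooled):.2f} (95% CI = {lo:.2f}-{hi:.2f})")

# Cochran's Q and I^2: share of variability beyond what chance explains.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_rrs))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
print(f"I^2 = {i2:.0%}")
```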
# Sustained Virologic Response
Achieving SVR is the first step toward reducing future HCV morbidity and mortality. The combination of PR with a DAA increases the rate of SVR in treated persons with hepatitis C genotype 1 compared with PR alone. Pooled estimates comparing boceprevir- and telaprevir-based regimens with PR suggest that these regimens are associated with an absolute increase of 28 percentage points in SVR rates (risk difference = 0.28; 95% CI = 0.24-0.32) (50,51,72-74). Although SVR was initially judged by the Work Group to be directly associated with patient-important outcomes (e.g., reduced viral transmission), further deliberation resulted in SVR being defined as an intermediary outcome predictive of a reduction in morbidity and mortality, particularly from HCC. Thus, rating down the quality of the evidence for SVR from high to moderate was justified given the indirectness of the outcome (Appendix B).
# Treatment-Related Serious Adverse Events
Treatment for HCV infection with PR can result in serious adverse events (SAEs).* In May 2011, triple-drug therapy with PR and a DAA became the standard of care for patients with HCV genotype 1, but limited data are available for systematic reviews of SAEs for regimens that include these new agents. In the telaprevir phase III clinical trial, the most common adverse events included gastrointestinal disorders, pruritus, rash, and anemia; 11% of those receiving telaprevir discontinued therapy because of SAEs, compared with 1% of those receiving PR alone (50). In the boceprevir phase III clinical trial, the most common adverse events included fatigue, headache, nausea, and anemia; no differences in discontinuation rates between study arms were observed (51). The harms of these new treatments might be different in community settings. Although the addition of boceprevir or telaprevir to standard treatment with PR increases the rate of SVR in persons with HCV genotype 1, it also results in an increased rate of adverse events severe enough to lead to treatment discontinuation (RR = 1.34; 95% CI = 0.95-1.87) (50,51,72-74). The quality of the evidence for SAEs was rated down because of imprecision and judged to be moderate (Appendix B).

* Defined by the Food and Drug Administration (FDA) as any undesirable experience associated with the use of a medical product in a patient. The event is serious and should be reported to FDA when the patient outcome is death, hospitalization, disability or permanent damage, congenital anomaly/birth defect, or required intervention to prevent permanent impairment or damage. SAEs can include nausea, anemia, rash, and neuropsychiatric disturbances.
# Quality of Life
One systematic review, comprising seven observational studies, examined the effect of HCV testing and treatment on patients' QoL (75). Although analysis of these studies did not yield an effect size, the mean QoL associated with SVR in the intervention group was 6.6 points higher on the SF-36 Health Survey (a standard instrument for measuring QoL) than in the control group. On the basis of study design and the limited evidence available regarding QoL, the quality of the evidence for this outcome was rated as low (Appendix B).
# HCV Transmission
Literature searches were conducted for previously published systematic reviews, meta-analyses, and articles that addressed HCV transmission. No intervention studies examining the effect of HCV testing on the patient-important outcome of HCV transmission were identified. However, HCV transmission was a critical factor when determining the strength of the recommendations, despite the absence of related intervention studies. Future research is needed to address this gap in knowledge.
# Alcohol Use
A literature search was conducted for systematic reviews, meta-analyses, and articles on the effect of interventions to reduce alcohol use among persons found to be infected with HCV. Because this evidence was limited, the search was broadened to include reviews of alcohol interventions for persons tested for HCV, not only those testing positive. A recent meta-analysis of 22 randomized controlled trials (n = 7,619) examined the effects of HCV testing followed by a brief alcohol intervention (i.e., assessment of patients' drinking behaviors and provision of brief, one-on-one counseling if the health-care provider determines it to be clinically indicated) on drinking behaviors versus testing alone (76). Mean alcohol consumption in the intervention groups was 38.42 grams/week lower (95% CI = 30.91-65.44) than in the control groups at follow-up of ≥1 year. The quality of this evidence was initially rated as high because it was derived from randomized controlled trials without major risk for bias; however, because the body of evidence was not derived specifically from persons with HCV infection, the quality was rated down to moderate for indirectness (Appendix B).
# Factors Considered When Determining the Recommendations
Four factors must be considered when determining the relevance and strength of a GRADE-based recommendation: quality of evidence, balance between benefits and harms, values and preferences, and resource implications. During the consultation, the Work Group considered each of these factors in light of the evidence presented.
# Determining the Quality of the Evidence Across Outcomes Critical for Decision Making
The systematic reviews revealed a lack of evidence directly comparing the effectiveness of birth-year-based testing with risk-based testing. Thus, the Work Group considered available evidence from studies examining 1) nationally representative observational data on HCV prevalence among various birth cohorts, 2) clinical trial data on the effect of HCV treatment on achieving SVR, 3) observational data on the association of SVR with HCC and all-cause mortality, and 4) data from a meta-analysis of randomized controlled trials on the effectiveness of brief alcohol interventions in reducing alcohol use. Evidence from these studies was reviewed comprehensively to infer that birth-year-based testing, in combination with alcohol-reduction interventions, will lead to enhanced identification and treatment of the infected population and result in reduced morbidity and mortality.
The GRADE framework follows the principle that the overall quality of evidence should be determined based on the lowest quality of evidence of any outcome deemed critical for decision making. For the proposed HCV testing recommendation, critical factors included all-cause mortality, HCC, SVR, and SAEs (77). However, two factors were considered when rating the overall quality of evidence: 1) the desirable effects of testing and treatment (the low quality evidence of mortality reduction and the moderate quality evidence of reducing HCC) and 2) the harms of testing and treatment (the moderate quality evidence of adverse events associated with HCV eradication). Thus, if the reduction in HCC alone is a sufficiently desirable outcome to support testing and treatment (moderate quality evidence), and minimal uncertainty exists regarding the effect of the undesirable consequences (i.e., moderate quality evidence of SAEs), then the overall quality of evidence supporting testing and treatment in this cohort is determined to be moderate.
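The "lowest critical outcome" principle in the preceding paragraph is mechanical enough to sketch. The ratings below are those reported in this document; treating the HCC benefit as sufficient on its own, as the Work Group reasoned, is encoded here simply by excluding the low-rated mortality outcome, which is an illustrative simplification of that judgment.

```python
# Sketch: GRADE's rule that overall quality equals the lowest rating among
# outcomes critical for decision making, using ratings reported in the text.
ORDER = {"very low": 0, "low": 1, "moderate": 2, "high": 3}

critical = {
    "all-cause mortality": "low",
    "HCC": "moderate",
    "SVR": "moderate",
    "SAEs": "moderate",
}

strict = min(critical.values(), key=ORDER.get)
print(f"Strict lowest-outcome rule: {strict}")

# The Work Group judged the moderate-quality HCC benefit sufficient to
# support testing, yielding an overall rating of moderate.
without_mortality = {k: v for k, v in critical.items()
                     if k != "all-cause mortality"}
print(f"As determined: {min(without_mortality.values(), key=ORDER.get)}")
```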
# Benefits versus Harms
A review of published and anecdotal evidence conducted in accordance with GRADE methodology indicated that the benefits of testing and treating persons with HCV infection were greater than the harms. Published evidence was predominantly drawn from the summary of findings tables (Appendix B) and additional literature shared by the Work Group. To supplement that information, anecdotal evidence on the benefits and harms associated with several factors was considered, including undergoing a liver biopsy, the receipt of a false-positive test result, the need to wait or return for test results, access to treatment, and the effect of HCV-infection notification on insurance and employment.
Although certain harms (i.e., worry or anxiety while waiting for test results, concern about insurability, and occurrence of SAEs during treatment) can be uncomfortable for patients, effective treatment can result in SVR, which is associated with reductions in liver-related morbidity and all-cause mortality. Liver biopsy also can result in complications, the most common of which is pain; less common complications include bleeding, intestinal perforation, and death (reported in <0.1% of persons) (78). On balance, the benefits associated with HCV treatment were judged to be greater than the harms. Additional factors support this judgment: concerns about receipt of inaccurate HCV antibody test results can be assuaged by the accuracy of HCV RNA testing, and the time and resources needed to screen, provide a brief alcohol intervention, and refer patients to care are outweighed by the efficacy of these interventions in reducing alcohol use.
# Values and Preferences
Available data regarding the acceptability of HCV testing to patients in the United States are limited (79). However, individual values and preferences can be addressed during physician-patient discussions about preventive care.
# Resource Implications
Only two U.S.-based studies have specifically examined the cost-effectiveness and resource implications of birth-year-based HCV testing linked to HCV care and treatment; both found the interventions to be cost effective (46,80). These studies, which evaluated slightly different definitions of the birth cohort, compared birth-cohort testing and treatment with the status quo of risk- and medical indication-based testing recommendations; both demonstrated nearly identical cost-effectiveness results. The first study, which defined the birth cohort as persons born during 1945-1965, estimated a cost per quality-adjusted life year (QALY) gained of $35,700 on the basis of a 12-week, response-guided course of telaprevir and PR; cost per QALY was an estimated $15,700 when assuming treatment with PR alone (46). The second study defined the birth cohort as persons born during 1946-1970 and estimated a cost per QALY gained of $39,963 for patients treated with telaprevir in addition to PR (80). Both modeling studies assumed that liver disease progression would not continue in persons who achieve SVR.
These cost-effectiveness studies had different assumptions about the timing of HCV testing and treatment. The study that examined the 1945-1965 birth cohort included all possible costs and benefits in a single year (46), whereas the study that examined the 1946-1970 birth cohort assumed 20% of the eligible population would be screened and treated each year for 5 years (46,80). Testing costs (including antibody testing, nucleic acid testing of antibody positives, and posttest counseling) were estimated at $54 per person tested (40).
The birth-cohort testing strategy will reduce morbidity and mortality (Table 3), saving future HCV-related medical expenditures. In the immediate future, however, expanded testing and treatment of persons born during 1945-1965 will cost more than current risk-based testing and treatment strategies. Several factors contribute to the projected increase in costs, including an expected increase in the number of persons tested and treated for HCV and the higher cost of combination PR/DAA therapy relative to PR alone (Table 4). Costs can be compared across four scenarios: risk-based testing with PR therapy; risk-based testing with PR/DAA therapy; birth-cohort testing with PR therapy; and birth-cohort testing with PR/DAA therapy, the current standard of care (Table 4).
To inform cost projections for the birth-cohort HCV testing strategy, colorectal cancer screening rates were reviewed to estimate the testing costs associated with one-time HCV testing of persons in the 1945-1965 birth cohort. Both interventions focus on screening at a single point in time (i.e., at age 50 years for colorectal screening); therefore, data from colorectal screening programs are useful for estimating the rate of adoption of a recommendation for one-time preventive services. In an analysis of 2005 National Health Interview Survey (NHIS) data (a nationally representative household survey), 19.8% of women and 23.7% of men reported receiving colorectal screening during the preceding 3 years (the time since implementation of the U.S. Preventive Services Task Force screening recommendation) (78,81). These percentages were observed after years of updated colorectal screening recommendations and educational campaigns, so they likely are higher than the uptake expected to follow adoption of HCV testing recommendations. Nevertheless, adoption of the birth-cohort recommendations at the same level would result in testing approximately 5.6 million women and 6.7 million men for HCV within the first 3 years of implementation, at a cost of $664 million; approximately 400,000 persons with HCV infection would be identified.
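The cost figure above follows directly from the cited per-test cost and uptake-based volumes; the sketch below reproduces that arithmetic, and the cost per infection identified is an implied value rather than a number reported in the studies.

```python
# Sketch: testing-cost projection from figures cited in the text.
cost_per_test = 54          # USD: antibody test, reflex NAT, counseling (40)

tested_women = 5.6e6        # projected tests in first 3 years, women
tested_men = 6.7e6          # projected tests in first 3 years, men
infections_found = 400_000  # projected HCV infections identified

total_tested = tested_women + tested_men
total_cost = total_tested * cost_per_test
print(f"{total_tested / 1e6:.1f} million persons tested")
print(f"Total testing cost: ~${total_cost / 1e6:.0f} million")   # ~$664 million
print(f"Implied cost per infection identified: "
      f"~${total_cost / infections_found:,.0f}")                 # ~$1,660
```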
# Recommendations
The following recommendations for HCV testing are intended to augment the Recommendations for Prevention and Control of Hepatitis C Virus (HCV) Infection and HCV-Related Chronic Disease issued by CDC in 1998 (8). In addition to testing adults of all ages at risk for HCV infection, CDC recommends that:

- Adults born during 1945-1965 should receive one-time testing for HCV without prior ascertainment of HCV risk, and
- All persons identified with HCV infection should receive a brief screening for alcohol use and intervention as clinically indicated, followed by referral to appropriate care and treatment services for HCV infection and related conditions (31).

Treatment decisions should be made by the patient and provider after consideration of several factors, including stage of disease, hepatitis C genotype, comorbidities, therapy-related adverse events, and benefits of treatment.
# Public Health Testing Criteria
HCV testing of persons in the 1945-1965 birth cohort is consistent with established public health screening criteria (82), as evidenced by the following factors: 1) HCV infection is a substantial health problem that affects a large number of persons, causes negative health outcomes, and can be diagnosed before symptoms appear; 2) testing for HCV infection is readily available, minimally invasive, and reliable; 3) benefits include limiting disease progression and facilitating early access to treatments that can add substantial life-years; and 4) testing is cost effective. Such testing would help identify unrecognized infections, limit transmission, and enable HCV-infected persons to receive beneficial care and treatment before onset of severe HCV-related disease (82).
# Testing Methods
# Hepatitis C Antibody Testing
Laboratory testing methods for HCV included in these recommendations were established by CDC's Guidelines for Laboratory Testing and Result Reporting of Antibody to Hepatitis C Virus in 2003 (83). No new methods are introduced in these recommendations. HCV testing should be initiated with an FDA-approved test for antibody to HCV (anti-HCV). These assays are highly sensitive and specific. An HCV point-of-care assay that can provide results in <1 hour is available for clinical use (84). An immunocompetent person without risks for HCV infection who tests anti-HCV negative is not HCV-infected and no further testing for HCV is necessary. Additional testing might be needed for persons who have ongoing or recent risks for HCV exposure (e.g., injection-drug use) and persons who are severely immunocompromised (e.g., certain patients with HIV/AIDS or those on hemodialysis).
A person whose anti-HCV test is reactive should be considered to either 1) have current HCV infection or 2) have had HCV infection in the past that has subsequently resolved (i.e., cleared). To identify persons with active HCV infection, persons who initially test anti-HCV positive should be tested by an HCV nucleic acid test (NAT).
# Hepatitis C Nucleic Acid Testing
An FDA-approved HCV NAT (also referred to as an "HCV RNA test") should be used to identify active HCV infection among persons who have tested anti-HCV positive; FDA-approved tests include both quantitative HCV NATs (for HCV viral load) and qualitative NATs (for presence or absence of viremia). Persons who test anti-HCV positive or have indeterminate antibody test results who are also positive by HCV NAT should be considered to have active HCV infection; these persons need referral for further medical evaluation and care. A person who is anti-HCV positive but who tests negative by HCV NAT should be considered to not have active HCV infection.
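The two-step interpretation described above is, in effect, a small decision procedure, sketched below. The result labels and the reflex from a reactive or indeterminate antibody result to NAT follow the text; the function itself is illustrative, not a laboratory protocol.

```python
from typing import Optional

def interpret_hcv_results(anti_hcv: str, hcv_rna: Optional[str] = None) -> str:
    """Interpret anti-HCV antibody and reflex HCV NAT (RNA) results."""
    if anti_hcv == "negative":
        # Sufficient for immunocompetent persons without ongoing or recent
        # risk; others may need repeat or NAT testing, per the text.
        return "not HCV infected; no further testing needed"
    # Reactive or indeterminate antibody result: reflex to HCV NAT.
    if hcv_rna is None:
        return "anti-HCV reactive; HCV NAT needed to assess active infection"
    if hcv_rna == "positive":
        return "active HCV infection; refer for medical evaluation and care"
    return "resolved (past) HCV infection; no active infection"

print(interpret_hcv_results("negative"))
print(interpret_hcv_results("reactive"))
print(interpret_hcv_results("reactive", "positive"))
print(interpret_hcv_results("indeterminate", "negative"))
```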
# Other HCV-Related Testing Issues
Quantitative NATs assess the level of viremia in the bloodstream, expressed as HCV viral load. Although viral load is a critical marker of treatment effectiveness, it is not a reliable indicator of stage of disease. Similarly, liver enzyme tests (i.e., alanine aminotransferase [ALT]) reflect the level of liver inflammation at the time of the test but do not correlate consistently with the stage of liver disease. ALT levels fluctuate with many factors other than infection, including BMI and use of alcohol or medications.
# Management of Persons Tested for HCV Infection
# Communicating Test Results to Persons Tested for HCV
# Negative Anti-HCV Test Results
Persons with negative anti-HCV test results should be informed of their test results and reassured that they are not infected unless they were recently at risk for infection (e.g., current injection-drug use). Repeat testing should be considered for persons with ongoing risk behaviors.
# Positive Anti-HCV and Negative HCV RNA Test Results
Persons who are anti-HCV positive but HCV RNA negative should be informed that they do not have active HCV infection and do not need follow-up testing.
# Positive Anti-HCV and HCV RNA Test Results
Persons who test positive for both HCV antibody and HCV RNA should be informed that they have HCV infection and need further medical evaluation for liver disease, ongoing medical monitoring, and possible treatment. When positive test results are communicated, health-care providers should evaluate the patient's level of alcohol use and provide a brief alcohol intervention if clinically indicated (see Alcohol-Use Reduction). Persons with HCV infection also should be provided information (through face-to-face sessions, video, or written materials) about 1) HCV infection, 2) risk factors for disease progression, 3) preventive self-care and treatment options, and 4) how to prevent transmission of HCV to others. HCV-infected persons also should be informed about resources available within their communities, including providers of medical evaluation and social support.
# Post-Test Counseling Messages
Persons infected with HCV can benefit from the following counseling messages.
- Contact a health-care provider (either a primary-care clinician or a specialist) for:
  - medical evaluation of the presence or development of chronic liver disease;
  - advice on possible treatment options and strategies; and
  - advice on how to monitor liver health, even if treatment is not recommended.
# Alcohol-Use Reduction
Messages to decrease alcohol use should be provided to persons infected with HCV. Alcohol screening and brief intervention (SBI), with referral to treatment when indicated, can reduce the number of drinks consumed per week and episodes of binge drinking. SBI includes screening patients for excessive alcohol consumption, brief counseling for those who screen positive, and referral to specialized alcohol treatment for patients with possible alcohol dependence. The brief intervention is also an opportunity to communicate the HCV-associated risks posed by alcohol consumption and to offer options for behavioral change. The U.S. Preventive Services Task Force (USPSTF) recommends screening and behavioral counseling interventions to reduce alcohol misuse by adults in primary-care settings (86). Screening tools shown to be effective in eliciting a history of alcohol use include the Alcohol Use Disorders Identification Test (AUDIT). Screening tools are available from the National Institute on Alcohol Abuse and Alcoholism (/Practitioner/CliniciansGuide2005/clinicians_guide.htm), and WHO has published intervention tools to help patients adopt healthy behaviors regarding alcohol use (/substance_abuse/activities/sbi/en/index.html).
# Linkage to Care and Treatment
Many persons identified as HCV-infected do not receive the recommended medical evaluation and care after diagnosis (30); this gap in linkage to care is attributable to several factors, including being uninsured or underinsured, failure of providers to make a referral, failure of patients to follow up on a referral, drug or alcohol use, and other barriers. The lack of such care, or substantial delay before care is received, negatively affects the health outcomes of infected persons. Routine testing of persons born during 1945-1965 is expected to identify more HCV-infected persons earlier in the course of disease; however, to improve health outcomes, persons testing positive must be provided with appropriate care and treatment. Linking patients to care and treatment is a critical component of the strategy to reduce the burden of disease.
Strategies are needed for HCV-infected persons who experience barriers to care. These persons might benefit from replication of effective linkage-to-care models and development of other evidence-based interventions. Active linkage-to-care programs provided in a culturally sensitive manner (87-89) (e.g., use of case managers to schedule appointments, accompany infected patients to appointments, and follow up with patients) have been found to be more effective (87) than passive referral methods (e.g., providing patients with information about the disease and a list of resources or referrals to medical care). Such linkage creates opportunities for patients to receive information, vaccinations, and prevention counseling messages and to engage more fully in care (90). Once patients are in care, case management can provide active linkage (91-93) to social services (88,94), referral to substance abuse services (95-98), and assistance with transportation and housing (92,95). Recommendations for the medical management of HCV infection and disease are updated regularly by AASLD, and notable advances in the care, management, and treatment of HCV infection are being made as of the publication of this recommendation. Although primary-care clinicians can readily provide much of the care necessary for initial evaluation and management of persons with HCV infection, antiviral treatment is complex, and collaboration between primary-care providers and specialists facilitates delivery of optimal care. CDC is working with academic and clinical partners and with other federal and state agencies to replicate best practices and develop new models for HCV care (99).
# Future Directions
CDC will conduct demonstration projects to expand access to HCV testing and evaluate implementation of HCV testing in clinical and public health settings; data from these projects will identify best practices. In addition, CDC will employ national health surveys (e.g., NHIS) to assess implementation of this recommendation at the national level.
CDC is conducting systematic reviews of other testing and prevention recommendations that were included in the 1998 HCV testing recommendations (3). In addition, CDC will review evidence related to the potential benefits and harms of testing persons who were determined to be of "uncertain need" in the 1998 recommendations (i.e., those with risks that have not been well defined, such as intranasal drug use or a history of multiple sex partners). On completion of these reviews, recommendations for HCV testing and linkage to care will be revised as necessary. The revised guidelines, which will incorporate the present birth-cohort-based recommendations as well as risk-based strategies, will provide updated, comprehensive recommendations for the identification and management of HCV infection in the United States.

Footnotes to the summary-of-findings table (Appendix B):
† Rated down for imprecision; the 95% CI includes harms as well as benefits. In a sensitivity analysis excluding one trial (SPRINT-2) that showed a lower discontinuation rate in one triple-therapy arm than with the standard of care, RR = 1.60 (95% CI = 1.16-2.22) (no imprecision).
§ Failure of viral negativity at 24 weeks posttreatment.
¶ Rated down for indirectness; SVR was considered an intermediary outcome for long-term benefit.
"id": "1adf759aadc54b8b8393b6c3f191dff1a571ca35",
"source": "cdc",
"title": "None",
"url": "None"
} |
The Committee has reviewed and taken into consideration the recent report by the Institute of Medicine entitled "Adverse Effects of Pertussis and Rubella Vaccines" in making these recommendations. Those who have not completed the four-dose primary series should complete the series using the minimal intervals (Table 1). Those who have completed a primary series but have not received a dose of DTP vaccine within 3 years of exposure should be given a booster dose. Prophylactic postexposure passive immunization is not recommended: the use of human pertussis immune globulin neither prevents illness nor reduces its severity, and this product is no longer available in the United States.

# Definition of Abbreviations
# INTRODUCTION
Simultaneous vaccination against diphtheria, tetanus, and pertussis during infancy and childhood has been a routine practice in the United States since the late 1940s. This practice has played a major role in markedly reducing the incidence of cases and deaths from each of these diseases.
# DIPHTHERIA
At one time, diphtheria was common in the United States. More than 200,000 cases, primarily among children, were reported in 1921. Approximately 5%-10% of cases were fatal; the highest case-fatality ratios were recorded for the very young and the elderly. Reported cases of diphtheria of all types declined from 306 in 1975 to 59 in 1979; most were cutaneous diphtheria reported from a single state (3). After 1979, cutaneous diphtheria was no longer notifiable. From 1980 to 1989, only 24 cases of respiratory diphtheria were reported; two cases were fatal, and 18 (75%) occurred among persons ^20 years of age.
Diphtheria is currently a rare disease in the United States, primarily because of the high level of appropriate vaccination among children (97% of children entering school have received ≥3 doses of diphtheria and tetanus toxoids and pertussis vaccine) and because of an apparent reduction in the circulation of toxigenic strains of Corynebacterium diphtheriae. Most cases occur among unvaccinated or inadequately vaccinated persons. The age distribution of recent cases and the results of serosurveys indicate that many adults in the United States are not protected against diphtheria. Limited serosurveys conducted since 1977 indicate that 22%-62% of adults 18-39 years of age and 41%-84% of those ≥60 years of age may lack protective levels of circulating antitoxin against diphtheria (4-7). Thus, further reductions in the incidence of diphtheria will require more emphasis on adult immunization programs. Both toxigenic and nontoxigenic strains of C. diphtheriae can cause disease, but only strains that produce toxin cause myocarditis and neuritis. Furthermore, toxigenic strains are more often associated with severe or fatal illness in noncutaneous (respiratory or other mucosal surface) infections and are more commonly recovered from respiratory than from cutaneous infections.
C. diphtheriae can contaminate the skin, usually at the site of a wound. Although a sharply demarcated lesion with a pseudomembranous base often results, the appearance may not be distinctive, and infection can be confirmed only by culture. Usually other bacterial species can also be isolated. Cutaneous diphtheria has most commonly affected indigent adults and certain groups of American Indians.
A complete vaccination series substantially reduces the risk of developing diph theria, and vaccinated persons who develop disease have milder illnesses. Protection lasts at least 10 years. Vaccination does not, however, eliminate carriage of C. diphtheriae in the pharynx or nose or on the skin.
# TETANUS
The occurrence of tetanus in the United States has decreased dramatically from 560 reported cases in 1947, when national reporting began, to a record low of 48 reported cases in 1987 (8). The decline has resulted from widespread use of tetanus toxoid and improved wound management, including use of tetanus prophylaxis in emergency rooms.
Tetanus in the United States is primarily a disease of older adults. Of 99 tetanus patients with complete information reported to CDC during 1987 and 1988, 68% were ≥50 years of age, while only six were <20 years of age. No cases of neonatal tetanus were reported. Overall, the case-fatality rate was 21% (8). The age distribution of recent cases and the results of serosurveys indicate that many U.S. adults are not protected against tetanus. Serosurveys undertaken since 1977 indicate that 6%-11% of adults 18-39 years of age and 49%-66% of those ≥60 years of age may lack protective levels of circulating tetanus antitoxin (4-7). The disease continues to occur almost exclusively among persons who are unvaccinated or inadequately vaccinated or whose vaccination histories are unknown or uncertain (8).
Surveys of emergency rooms suggest that 1%-6% of all persons who receive medical care for injuries that can lead to tetanus receive less than the recommended prophylaxis (9,10). In 1987-1988, 58% of tetanus patients with acute injuries did not seek medical care for their injuries; of those who did, 81% did not receive prophylaxis as recommended by ACIP guidelines (8).
In 4% of tetanus cases reported during 1987 and 1988, no wound or other condition was implicated. Nonacute skin lesions (such as ulcers) or medical conditions (such as abscesses) were reported in association with 14% of cases.
Neonatal tetanus occurs among infants born under unhygienic conditions to inadequately vaccinated mothers. Vaccinated mothers confer protection to their infants through transplacental transfer of maternal antibody. From 1972 through 1984, 29 cases of neonatal tetanus were reported in the United States (11). No cases of neonatal tetanus were reported during 1985-1989. Spores of Clostridium tetani are ubiquitous. Serologic tests indicate that naturally acquired immunity to tetanus toxin does not occur in the United States. Thus, universal primary vaccination, with subsequent maintenance of adequate antitoxin levels by means of appropriately timed boosters, is necessary to protect persons in all age groups. Tetanus toxoid is a highly effective antigen; a completed primary series generally induces protective levels of serum antitoxin that persist for ≥10 years.
# PERTUSSIS
Disease caused by Bordetella pertussis was once a major cause of infant and childhood morbidity and mortality in the United States (12,13). Pertussis became a nationally notifiable disease in 1922, and reports reached a peak of 265,269 cases and 7,518 deaths in 1934. The highest number of reported pertussis deaths (9,269) occurred in 1923. The introduction and widespread use of standardized whole-cell pertussis vaccines combined with diphtheria and tetanus toxoids (DTP) in the late 1940s resulted in a substantial decline in pertussis disease, a decline which continued without interruption for nearly 30 years.
By 1970, the annual reported incidence of pertussis had been reduced by 99%. During the 1970s, the annual number of reported cases stabilized at an average of approximately 2,300 cases each year. During the 1980s, however, the annual number of reported cases gradually increased, from 1,730 cases in 1980 to 4,157 cases in 1989. An average of eight pertussis-associated fatalities was reported each year throughout the 1980s. It is not clear whether the increase in reported pertussis reflects a true increase in the incidence of the disease or improvement in the reporting of pertussis. However, these data underestimate the true number of cases, because many are unrecognized or unreported, and diagnostic tests for B. pertussis (culture and direct-immunofluorescence assay) may be unavailable, difficult to perform, or incorrectly interpreted. Because direct-fluorescent-antibody testing of nasopharyngeal secretions has been shown in some studies to have low sensitivity and variable specificity, it should not be relied on as a criterion for laboratory confirmation (14,15). In addition, reporting criteria have varied widely among the different states. Laboratory diagnosis based on serologic testing is not widely available and is still considered experimental (16). In 1990, to improve the accuracy of reporting, the U.S. Council of State and Territorial Epidemiologists adopted uniform case definitions for pertussis (17).
Before the widespread use of DTP, pertussis was far more common than it is today. Pertussis is highly communicable (attack rates of >90% have been reported among unvaccinated household contacts) and can cause severe disease, particularly among very young children. Of 10,749 patients reported to CDC, a high proportion were hospitalized, some had at least one seizure, 0.9% had encephalopathy, and 0.6% died (19). The high rate of hospitalization for infants with pertussis has been observed in several population-based studies (20-22). Because of the substantial risks of complications of the disease, completion of a primary series of DTP vaccine early in life is essential.
Among older children and adults, including those previously vaccinated, B. pertussis infection may result in symptoms of bronchitis or upper-respiratory-tract infection. Pertussis may not be diagnosed because classic signs, especially the inspiratory whoop, may be absent. Older preschool children and school-age siblings who are not fully vaccinated and who develop pertussis can be important sources of infection for infants <1 year of age. Adults also play an important role in the transmission of pertussis to unvaccinated or incompletely vaccinated infants and young children (23).
Controversy regarding the safety of pertussis vaccine during the 1970s led to several studies of the benefits and risks of this vaccination during the 1980s. These epidemiologic analyses clearly indicate that the benefits of pertussis vaccination outweigh any risks (24-28).
# PREPARATIONS USED FOR VACCINATION
Diphtheria and tetanus toxoids are prepared by formaldehyde treatment of the respective toxins and are standardized for potency according to the regulations of the U.S. Food and Drug Administration. The limit of flocculation (Lf) content of each toxoid (quantity of toxoid as assessed by flocculation) may vary among different products. The concentration of diphtheria toxoid in preparations intended for adult use is reduced because adverse reactions to diphtheria toxoid are apparently directly related to the quantity of antigen and to the age or previous vaccination history of the recipient, and because a smaller dosage of diphtheria toxoid produces an adequate immune response among adults.
Pertussis vaccine is a suspension of inactivated B. pertussis cells. Potency is assayed by comparison with the U.S. standard pertussis vaccine in the intracerebral mouse protection test. The protective efficacy of pertussis vaccines for humans has been shown to correlate with this measure of vaccine potency.
Diphtheria and tetanus toxoids and pertussis vaccine, as single antigens or in various combinations, are available as aluminum-salt-adsorbed preparations. Only tetanus toxoid is available in nonadsorbed (fluid) form. Although the rates of seroconversion are essentially equivalent with either type of tetanus toxoid, the adsorbed toxoid induces a more persistent level of antitoxin antibody. The following preparations are currently available in the United States:
1. Diphtheria and Tetanus Toxoids and Pertussis Vaccine Adsorbed (DTP) and Diphtheria and Tetanus Toxoids Adsorbed (DT) (for pediatric use) are for use among infants and children <7 years of age. Each 0.5-mL dose is formulated to contain 6.7-12.5 Lf units of diphtheria toxoid, 5 Lf units of tetanus toxoid, and ≤16 opacity units of pertussis vaccine. A single human immunizing dose of DTP contains an estimated 4-12 protective units of pertussis vaccine.
2. Tetanus and Diphtheria Toxoids Adsorbed for Adult Use (Td) is for use among persons ≥7 years of age. Each 0.5-mL dose is formulated to contain 2-10 Lf units of tetanus toxoid and ≤2 Lf units of diphtheria toxoid.
3. Pertussis Vaccine Adsorbed (P), Tetanus Toxoid (fluid), Tetanus Toxoid Adsorbed (T), and Diphtheria Toxoid Adsorbed (D) (for pediatric use) are single-antigen products for use in special instances when combined-antigen preparations are not indicated.

Work is in progress to study the effectiveness of improved acellular pertussis vaccines that have reduced adverse-reaction rates. Currently, several candidate vaccines containing at least one of the bacterial components thought to provide protection are undergoing clinical trials. Candidate antigens include filamentous hemagglutinin, lymphocytosis-promoting factor (pertussis toxin), a recently identified 69-kilodalton outer-membrane protein (pertactin), and agglutinogens (23). In published studies, some of these vaccines are less prone to cause common adverse reactions than the current whole-cell preparations, and they are immunogenic (29-36). Whether their clinical efficacy among infants is equivalent to that of the whole-cell preparations remains to be established.
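The formulations listed above lend themselves to a simple reference table. The following sketch records the quantities quoted in the list as a Python dictionary; it is illustrative only, and the field names are our own, not from any official labeling.

```python
# Quantities per 0.5-mL dose, as quoted in the product list above.
# Lf = limit of flocculation unit; ranges are given as (low, high) tuples.
PREPARATIONS = {
    "DTP (pediatric)": {"diphtheria_Lf": (6.7, 12.5), "tetanus_Lf": 5.0,
                        "pertussis_opacity_units_max": 16},
    "DT (pediatric)":  {"diphtheria_Lf": (6.7, 12.5), "tetanus_Lf": 5.0},
    "Td (adult use)":  {"diphtheria_Lf": (0.0, 2.0), "tetanus_Lf": (2.0, 10.0)},
}

for name, contents in PREPARATIONS.items():
    print(name, contents)
```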
# VACCINE USAGE
The standard, single-dose volume of each of DTP, DT, Td, single-antigen adsorbed preparations of pertussis vaccine, tetanus toxoid, and diphtheria toxoid, and of the fluid tetanus toxoid is 0.5 mL. Adsorbed preparations should be administered intramuscularly (IM). Vaccine administration by jet injection may be associated with more frequent local reactions (37).
# Primary Vaccination

# Children 6 weeks through 6 years old (up to the seventh birthday)
Table 1 details a routine vaccination schedule for children <7 years of age. One dose of DTP should be given IM on four occasions: the first three doses at 4- to 8-week intervals, beginning when the infant is approximately 6 weeks-2 months old; customarily, doses are given at 2, 4, and 6 months of age. Individual circumstances may warrant giving the first three doses at 6, 10, and 14 weeks of age to provide protection as early as possible, especially during pertussis outbreaks (38). The fourth dose is given approximately 6-12 months after the third dose to maintain adequate immunity.
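As a rough illustration of the timing rules just described, the sketch below computes candidate dates for the four-dose primary series, using the customary 2-, 4-, and 6-month visits and the lower end of the 6-12 month window for dose four. It is a scheduling sketch only; the 30-day "month" and the chosen fourth-dose interval are simplifying assumptions.

```python
from datetime import date, timedelta

MONTH = timedelta(days=30)  # coarse 30-day month, for illustration only

def dtp_primary_series(birth: date) -> list[date]:
    """Candidate dates for the four-dose DTP primary series:
    doses at about 2, 4, and 6 months of age (4- to 8-week intervals),
    and a fourth dose 6-12 months after the third (here: 6 months)."""
    dose1 = birth + 2 * MONTH
    dose2 = dose1 + 2 * MONTH
    dose3 = dose2 + 2 * MONTH
    dose4 = dose3 + 6 * MONTH
    return [dose1, dose2, dose3, dose4]

# Example: a child born January 15 would be scheduled in mid-March,
# mid-May, mid-July, and early the following January.
for d in dtp_primary_series(date(1991, 1, 15)):
    print(d.isoformat())
```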
# Children ≥7 years of age and adults
Table 2 details a routine vaccination schedule for persons ≥7 years of age. Because the severity of pertussis decreases with age, and because the vaccine may cause side effects and adverse reactions, pertussis vaccination has not been recommended for children after their seventh birthday or for adults. For primary vaccination, a series of three doses of Td should be given IM; the second dose is given 4-8 weeks after the first, and the third dose 6-12 months after the second. Td rather than DT is the preparation of choice for vaccination of all persons ≥7 years of age because side effects from the higher dose of diphtheria toxoid are more common among older persons than among younger children.
# Interruption of primary vaccination schedule
Interrupting the recommended schedule or delaying subsequent doses does not reduce the level of immunity reached on completion of the primary series. Therefore, there is no need to restart a series if more than the recommended time between doses has elapsed.

† Use DT if pertussis vaccine is contraindicated. If the child is ≥1 year of age at the time that primary dose three is due, a third dose given 6-12 months after the second completes primary vaccination with DT.
# Booster Vaccination

# Children 4-6 years old (up to the seventh birthday)
Those who received all four primary vaccination doses before their fourth birthday should receive a fifth dose of DTP before entering kindergarten or elementary school. This booster dose is not necessary if the fourth dose in the primary series was given on or after the fourth birthday.
# Children ≥7 years of age and adults
Tetanus toxoid should be given with diphtheria toxoid as Td every 10 years. If a dose is given sooner as part of wound management, the next booster is not needed until 10 years thereafter. (See Tetanus Prophylaxis in Wound Management). More frequent boosters are not indicated and can result in an increased occurrence and severity of adverse reactions. One means of ensuring that persons receive boosters every 10 years is to vaccinate them routinely at mid-decade ages, i.e., 15 years old, 25 years old, 35 years old, etc.
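The 10-year booster rule, including the reset after a wound-management dose, reduces to a one-line date computation. A minimal sketch follows; the function name is ours, and the Feb 29 handling is a detail the text does not address.

```python
from datetime import date

def next_td_booster(most_recent_td: date) -> date:
    """A routine Td booster falls due 10 years after the most recent dose;
    a dose given sooner for wound management restarts the 10-year clock."""
    try:
        return most_recent_td.replace(year=most_recent_td.year + 10)
    except ValueError:                      # most_recent_td was February 29
        return most_recent_td.replace(year=most_recent_td.year + 10, day=28)

# A wound-management dose given in mid-1985 makes the next booster due in 1995.
print(next_td_booster(date(1985, 6, 1)))   # 1995-06-01
```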
# Special Considerations
# Children with contraindications to pertussis vaccination
For children <7 years of age with a contraindication to pertussis vaccine (see Precautions and Contraindications), DT should be used instead of DTP. To ensure that there will be no interference with the response to DT antigens from maternal antibodies, previously unvaccinated children who receive their first DT dose when <1 year of age should receive a total of four doses of DT as the primary series, the first three doses at 4-to 8-week intervals and the fourth dose 6-12 months later (similar to the recommended DTP schedule) (Table 1). If additional doses of pertussis vaccine become contraindicated after a DTP series is begun in the first year of life, DT should be substituted for each of the remaining scheduled DTP doses.
Unvaccinated children ≥1 year of age for whom pertussis vaccine is contraindicated should receive two doses of DT 4-8 weeks apart, followed by a third dose 6-12 months later to complete the primary series. Children who have already received one or two doses of DT or DTP after their first birthday and for whom further pertussis vaccine is contraindicated should receive a total of three doses of a preparation containing diphtheria and tetanus toxoids appropriate for age, with the third dose administered 6-12 months after the second dose.
Children who complete a primary series of DT before their fourth birthday should receive a fifth dose of DT before entering kindergarten or elementary school. This dose is not necessary if the fourth dose of the primary series was given after the fourth birthday.
# Pertussis vaccination for persons ≥7 years of age
Routine vaccination against pertussis is not currently recommended for persons ≥7 years of age. It should be noted, however, that adolescents and adults with waning immunity, whether derived from disease or vaccination, are a major reservoir for transmission of pertussis (23). For this reason, booster doses of acellular pertussis vaccine may be recommended in the future for persons ≥7 years of age.
# Persons who have recovered from tetanus or diphtheria

Tetanus or diphtheria infection may not confer immunity; therefore, active vaccination should be initiated at the time of recovery from the illness, and arrangements made to ensure that all doses of a primary series are administered on schedule.
# Children who have recovered from pertussis
Children who have recovered from satisfactorily documented pertussis do not need pertussis vaccine. Satisfactory documentation includes recovery of B. pertussis on culture or typical symptoms and clinical course when epidemiologically linked to a culture-proven case, as may occur during outbreaks. When such confirmation of the diagnosis is lacking, DTP vaccination should be completed, because a presumed pertussis syndrome may have been caused by other Bordetella species, Chlamydia, or certain viruses.
# Prevention of neonatal tetanus
A previously unvaccinated pregnant woman whose child might be born under unhygienic circumstances (without sterile technique) should receive two doses of Td 4-8 weeks apart before delivery, preferably during the last two trimesters. Pregnant women in similar circumstances who have not had a complete vaccination series should complete the three-dose series. Those vaccinated more than 10 years previously should have a booster dose. No evidence exists to indicate that tetanus and diphtheria toxoids administered during pregnancy are teratogenic.
# Adult vaccination with Td
The proportions of persons lacking protective levels of circulating antitoxins against diphtheria and tetanus increase with age; at least 40% of those ≥60 years of age may lack protection. Every visit of an adult to a health-care provider should be regarded as an opportunity to assess the person's vaccination status and, if indicated, to provide protection against tetanus and diphtheria. Adults with uncertain histories of a complete primary vaccination series should receive a primary series using the combined Td toxoid. To ensure continued protection, booster doses of Td should be given every 10 years.
# Use of Single-Antigen Preparations
A single-antigen adsorbed pertussis vaccine preparation can be used to complete vaccination against pertussis for children <7 years of age who have received fewer than the recommended number of doses of pertussis vaccine but have received the recommended number of doses of diphtheria and tetanus toxoids for their age. Alternately, DTP can be used, although the total number of doses of diphtheria and tetanus toxoids should not exceed six each before the seventh birthday.
Available data do not indicate substantially more adverse reactions following receipt of Td than following receipt of single-antigen, adsorbed tetanus toxoid. Furthermore, adults may be even less likely to have adequate levels of diphtheria antitoxin than of tetanus antitoxin. The routine use of Td in all medical settings, including office practices, clinics, and emergency rooms, for all persons ≥7 years of age who need primary vaccination or booster doses will improve levels of protection against both tetanus and diphtheria, especially among adults.
# SIDE EFFECTS AND ADVERSE REACTIONS FOLLOWING DTP VACCINATION
Local reactions (generally erythema and induration with or without tenderness) are common after the administration of vaccines containing diphtheria, tetanus, or pertussis antigens. Occasionally, a nodule may be palpable at the injection site of adsorbed products for several weeks. Sterile abscesses at the injection site have been reported rarely (6-10 per million doses of DTP). Mild systemic reactions such as fever, drowsiness, fretfulness, and anorexia occur frequently. These reactions are substantially more common following the administration of DTP than of DT, but they are self-limited and can be safely managed with symptomatic treatment.
Acetaminophen is frequently given by physicians to lessen the fever and irritability associated with DTP vaccination, and it may be useful in preventing seizures among febrile-convulsion-prone children. However, fever that does not begin until ≥24 hours after vaccination, or that persists for more than 24 hours after vaccination, should not be assumed to be due to DTP vaccination. These new or persistent fevers should be evaluated for other causes so that treatment is not delayed for serious conditions such as otitis media or meningitis. Moderate-to-severe systemic events include high fever (i.e., temperature of ≥40.5 C); persistent, inconsolable crying lasting ≥3 hours; collapse (hypotonic-hyporesponsive episode); and short-lived convulsions (usually febrile). These events occur infrequently and appear to be without sequelae (39-41). Other more severe neurologic events, such as a prolonged convulsion or encephalopathy, although rare, have been reported in temporal association with DTP administration.
Approximate rates for the occurrence of adverse events following receipt of DTP vaccine (regardless of dose number in the series or age of the child) are shown in Table 3 (42,43). The frequencies of local reactions and fever are substantially higher with increasing numbers of doses of DTP vaccine, while other mild-to-moderate systemic reactions (e.g., fretfulness, vomiting) are substantially less frequent (41-43). Concern about the possible role of pertussis vaccine in causing neurologic reactions has been present since the earliest days of vaccine use. Rare but serious acute neurologic illnesses, including encephalitis/encephalopathy and prolonged convulsions, have been anecdotally reported following receipt of whole-cell pertussis vaccine given as DTP vaccine (28,44). Whether pertussis vaccine causes such illnesses, is only coincidentally related to them, or merely precipitates an inevitable event has been difficult to determine conclusively for the following reasons: a) serious acute neurologic illnesses often occur or become manifest among children during the first year of life irrespective of vaccination; b) no specific clinical sign, pathological finding, or laboratory test can determine whether the illness is caused by the DTP vaccine; c) it may be difficult to determine with certainty whether infants <6 months of age are neurologically normal, which complicates assessment of whether vaccinees were already neurologically impaired before receiving DTP vaccine; and d) because these events are exceedingly rare, appropriately designed large studies are needed to address the question.
To determine whether DTP vaccine causes serious neurologic illness and brain damage, the National Childhood Encephalopathy Study (NCES) was undertaken during 1976-1979 in Great Britain (27,45-47). This large case-control study attempted to identify every patient with serious, acute, childhood neurologic illness admitted to a hospital in England, Scotland, and Wales. A total of 1,182 young children 2-36 months of age was identified. Excluding those with infantile spasms (an illness shown in a separate analysis not to be attributable to DTP vaccine), 30 of these children (18 with prolonged convulsions and 12 with encephalitis/encephalopathy) had received DTP vaccine within 7 days of the reported onset of their neurologic illness (48). Analysis of the data from these patients and from age-matched control children showed a significant association (odds ratio = 3.3; 95% confidence interval = 1.7-6.5) between the development of serious acute neurologic illness and receipt of DTP vaccine. Most of these events were prolonged seizures with fever. The attributable risk for all neurologic events was estimated to be 1:140,000 doses of DTP vaccine administered. These 30 children were followed up for at least 12 months to determine whether they had neurologic sequelae. Seven of the children presumed to have been neurologically normal before vaccination had died or had subsequent neurologic impairment, suggesting a causal relation between receipt of DTP vaccine and permanent neurologic injury. The estimated attributable risk for DTP vaccine was 1:330,000 doses, with a wide confidence interval.
The methods and results of the NCES have been thoroughly scrutinized since publication of the study. This reassessment by multiple groups has determined that the number of patients was too small, and their classification subject to enough uncertainty, to preclude drawing valid conclusions about whether a causal relation exists between pertussis vaccine and permanent neurologic damage (49-54). Preliminary data from a 10-year follow-up study of some of the children studied in the original NCES also suggested a relation between symptoms following DTP vaccination and permanent neurologic disability (55). However, details are not available to evaluate this study adequately, and the same concerns remain about DTP vaccine precipitating the initial manifestations of pre-existing neurologic disorders.
Subsequent studies have failed to provide evidence to support a causal relation between DTP vaccination and either serious acute neurologic illness or permanent neurologic injury; these include the 1979 Hospital Activity Analysis of the North West Thames Study in England, which reviewed hospital records, as well as several other studies. Although each of these studies individually contained too few subjects to provide definitive conclusions, taken together they stand in contrast to the original NCES findings. A recent study of neurologic illness among children, performed in 1987-1988 in Washington and Oregon, did not provide evidence of a significantly increased risk of all serious acute neurologic illnesses within 7, 14, or 28 days of DTP vaccination (60). However, as a pilot effort, this study had limited power to detect significantly increased risks for individual conditions. The NCES was the basis of prior ACIP statements suggesting that on rare occasions DTP vaccine could cause brain damage. However, on the basis of a more detailed review of the NCES data as well as data from other studies, the ACIP has revised its earlier view and now concludes:
1. Although DTP may rarely produce symptoms that some have classified as acute encephalopathy, a causal relation between DTP vaccine and permanent brain damage has not been demonstrated. If the vaccine ever causes brain damage, the occurrence of such an event must be exceedingly rare. A similar conclusion has been reached by the Committee on Infectious Diseases of the American Academy of Pediatrics, the Child Neurology Society, the Canadian National Advisory Committee on Immunization, the British Joint Committee on Vaccination and Immunization, the British Pediatric Association, and the Institute of Medicine (49-54).
2. The risk estimate from the NCES study of 1:330,000 for brain damage should no longer be considered valid, on the basis of continuing analyses of the NCES and other studies.

In addition to these considerations, acute neurologic manifestations related to DTP vaccine are mainly febrile seizures. In an individual case, the role of pertussis vaccine as a cause of serious acute neurologic illness or permanent brain damage is impossible to determine on the basis of clinical or laboratory findings. Anecdotal reports of DTP-induced acute neurologic disorders with or without permanent brain damage can have one of several alternate explanations. Some instances may represent simple coincidence, because DTP is administered at a time in infancy when previously unrecognized underlying neurologic and developmental disorders first become manifest. Some patients may have short-lived seizures with prompt recovery, and these events may represent the first seizure of a child with underlying epilepsy. When epilepsy has its onset in infancy, it is frequently associated with severe mental retardation and developmental delay; these conditions become apparent over a period of several months. The known febrile and other systemic effects of DTP vaccination may stimulate or precipitate inevitable symptoms of underlying central-nervous-system disorders, particularly since DTP may be the first pyrogenic stimulus an infant receives. When children who experience acute, severe central-nervous-system disorders in association with DTP vaccination are studied promptly and carefully, an alternate cause is often found.
Among a subset of NCES children with infantile spasms, both DTP and DT vaccination appeared either to precipitate early manifestations of the condition or to cause its recognition by parents (48). This and other studies suggest that neither vaccine causes this illness (59,61).
Approximately 5,200 infants succumb to sudden infant death syndrome (SIDS) in the United States each year. Because the peak incidence of SIDS is between 2 and 3 months of age, many instances of a close temporal relation between SIDS and receipt of DTP are to be expected by simple chance. Only one methodologically rigorous study has suggested that DTP vaccine might cause SIDS (62). In that study, a total of four deaths were reported within 3 days of DTP vaccination, compared with 1.36 expected deaths. However, these deaths were unusual in that three of the four occurred within a 13-month interval during the 12-year study. These four children also tended to be vaccinated at older ages than their controls, suggesting that they might have had other unrecognized risk factors for SIDS independent of vaccination. In contrast, DTP vaccination was not associated with SIDS in several larger studies performed in the past decade (28,63-65). In addition, none of three studies that examined unexpected infant deaths not classified as SIDS found an association with DTP vaccination (62,64,65).
Claims that DTP may be responsible for transverse myelitis, other more subtle neurologic disorders (such as hyperactivity, learning disorders and infantile autism), and progressive degenerative central-nervous-system conditions have no scientific basis. Furthermore, one study indicated that children who received pertussis vaccine exhibited fewer school problems than those who did not, even after adjustment for socioeconomic status (66).
Recent data suggest that infants and young children who have ever had convulsions (febrile or afebrile), or who have immediate family members with such histories, are more likely to have seizures following DTP vaccination than those without such histories (67,68). For those with a family history of seizures, the increased risks of seizures occurring within 3 days of receipt of DTP and 4-28 days following receipt of DTP are identical, suggesting that these histories are nonspecific risk factors unrelated to DTP vaccination (68).
Rarely, immediate anaphylactic reactions (i.e., swelling of the mouth, breathing difficulty, hypotension, or shock) have been reported after receipt of preparations containing diphtheria, tetanus, and/or pertussis antigens. However, no deaths caused by anaphylaxis following DTP vaccination have been reported to CDC since the inception of vaccine-adverse-events reporting in 1978, a period during which more than 80 million doses of publicly purchased DTP vaccine were administered. While substantial underreporting exists in this passive surveillance system, the severity of anaphylaxis and its immediacy following vaccination suggest that such events are likely to be reported. Although no causal relation to any specific component of DTP has been established, the occurrence of true anaphylaxis usually contraindicates further doses of any one of these components. Rashes that are macular, papular, petechial, or urticarial and that appear hours or days after a dose of DTP are frequently antigen-antibody reactions of little consequence, or are due to other causes such as viral illnesses, and they are unlikely to recur following subsequent injections (69,70). In addition, there is no evidence for a causal relation between DTP vaccination and hemolytic anemia or thrombocytopenic purpura.
# REPORTING OF ADVERSE EVENTS
The U.S. Department of Health and Human Services has established a new Vaccine Adverse Event Reporting System (VAERS) to accept all reports of suspected adverse events after the administration of any vaccine, including but not limited to the reporting of events required by the National Childhood Vaccine Injury Act of 1986 (71). The telephone number to call for answers to questions and to obtain VAERS forms is 1-800-822-7967.
The National Vaccine Injury Compensation Program, established by the National Childhood Vaccine Injury Act of 1986, requires physicians and other health-care providers who administer vaccines to maintain permanent vaccination records and to report occurrences of certain adverse events to the U.S. Department of Health and Human Services. These requirements took effect March 21, 1988. Reportable events include those listed in the Act for each vaccine and events specified in the manufacturer's vaccine package insert as contraindications to further doses of that vaccine (72,73).
# REDUCED DOSAGE SCHEDULES OR MULTIPLE SMALL DOSES OF DTP
The ACIP recommends giving only full doses (0.5 mL) of DTP vaccine; if a specific contraindication to DTP exists, the vaccine should not be given.
Concern about adverse events following pertussis vaccine has led some practitioners to reduce the volume of DTP vaccine administered to <0.5 mL/dose in an attempt to reduce side effects. No evidence exists to show that this practice decreases the frequency of uncommon severe adverse events, such as seizures and hypotonic-hyporesponsive episodes. Two studies have reported substantially lower rates of local reactions with the use of one half the recommended dose (0.25 mL) compared with a full dose (43,74). However, a study among preterm infants showed that the incidence of side effects was unaltered when a reduced dosage of DTP vaccine was used (75). Two studies also showed substantially lower pertussis agglutinin responses after the second and third half-doses, although in one of the studies the differences were small (74,75). These investigations used pertussis agglutinins as a measure of clinical protection; however, agglutinins are not satisfactory measures of protection against pertussis disease. Further, no evidence exists to show that the low screening dilution used (1:16) indicates protection. Currently, no reliable measures of efficacy other than clinical protection exist. Other evidence against the use of reduced doses comes from earlier studies of DTP vaccine preparations with potencies equivalent to that of half-doses of current vaccine (76,77). The risk of pertussis for exposed household members who received these lower-potency vaccines was approximately twice as high as the risk for those who received vaccines as potent as full doses of current vaccine (29% compared with ≤14%).
The use of an increased number of reduced-volume doses of DTP in order to equal the total volume of the five recommended doses of DTP vaccine is not recommended. Whether this practice reduces the likelihood of vaccine-related adverse events is unknown. In addition, the likelihood of a temporally associated but etiologically unrelated event may be enhanced by increasing the number of vaccinations.
# SIMULTANEOUS ADMINISTRATION OF VACCINES
The simultaneous administration of DTP, oral poliovirus vaccine (OPV), and measles-mumps-rubella vaccine (MMR) has resulted in seroconversion rates and rates of side effects similar to those observed when the vaccines are administered separately (78). Simultaneous vaccination with DTP, MMR, OPV or inactivated poliovirus vaccine (IPV), and Haemophilus b conjugate vaccine (HbCV) is also acceptable (79). The ACIP recommends the simultaneous administration of all vaccines appropriate to the age and previous vaccination status of the recipient, including the special circumstance of simultaneous administration of DTP, OPV, HbCV, and MMR at ≥15 months of age.
# PRECAUTIONS AND CONTRAINDICATIONS
# General Considerations
The decision to administer or delay DTP vaccination because of a current or recent febrile illness depends largely on the severity of the symptoms and their etiology. Although a moderate or severe febrile illness is sufficient reason to postpone vaccination, minor illnesses such as mild upper-respiratory infections with or without low-grade fever are not contraindications. If ongoing medical care cannot be assured, taking every opportunity to provide appropriate vaccinations is particularly important.
Children with moderate or severe illnesses, with or without fever, can receive DTP as soon as they have recovered. Waiting a short period before administering DTP vaccine avoids superimposing the adverse effects of the vaccination on the underlying illness or mistakenly attributing a manifestation of the underlying illness to vaccination.
Routine physical examinations or temperature measurements are not prerequisites for vaccinating infants and children who appear to be in good health. Appropriate immunization practice includes asking the parent or guardian if the child is ill, postponing DTP vaccination for those with moderate or severe acute illnesses, and vaccinating those without contraindications or precautionary circumstances.
When an infant or child returns for the next dose of DTP, the parent should always be questioned about any adverse events that might have occurred following the previous dose.
A history of prematurity generally is not a reason to defer vaccination (75,80,81). Preterm infants should be vaccinated according to their chronological age from birth.
Immunosuppressive therapies, including irradiation, antimetabolites, alkylating agents, cytotoxic drugs, and corticosteroids (used in greater than physiologic doses), may reduce the immune response to vaccines. Short-term (<2-week) corticosteroid therapy or intra-articular, bursal, or tendon injections with corticosteroids should not be immunosuppressive. Although no specific studies with pertussis vaccine are available, if immunosuppressive therapy will be discontinued shortly, it is reasonable to defer vaccination until the patient has been off therapy for 1 month; otherwise, the patient should be vaccinated while still on therapy (82).
# Special Considerations for Preparations Containing Pertussis Vaccine
Precautions and contraindications guidelines that were previously published regarding the use of pertussis vaccine were based on three assumptions about the risks of pertussis vaccination that are not supported by available data: a) that the vaccine on rare occasions caused acute encephalopathy resulting in permanent brain damage; b) that pertussis vaccine aggravated preexisting central-nervous-system disease; and c) that certain nonencephalitic reactions are predictive of more severe reactions with subsequent doses (7). In addition, children from whom pertussis vaccine was withheld were thought to be well protected by herd immunity, a belief that is no longer valid. The current revised ACIP recommendations reflect better understanding of the risks associated not only with pertussis vaccine but also with pertussis disease.
# Contraindications
If any of the following events occur in temporal relationship to the administration of DTP, further vaccination with DTP is contraindicated (see Table 4):
1. An immediate anaphylactic reaction. The rarity of such reactions to DTP is such that they have not been adequately studied. Because of uncertainty as to which component of the vaccine might be responsible, no further vaccination with any of the three antigens in DTP should be carried out. Alternatively, because of the importance of tetanus vaccination, such individuals may be referred for evaluation by an allergist and desensitized to tetanus toxoid if a specific allergy can be demonstrated (83,84).
2. Encephalopathy (not due to another identifiable cause), defined as an acute, severe central-nervous-system disorder occurring within 7 days following vaccination, generally consisting of major alterations in consciousness, unresponsiveness, or generalized or focal seizures that persist more than a few hours, with failure to recover within 24 hours. Even though causation by DTP cannot be established, no subsequent doses of pertussis vaccine should be given. It may be desirable to delay for months before administering the balance of the DT doses necessary to complete the primary schedule; such a delay allows time for the child's neurologic status to clarify.
# Precautions (Warnings)
If any of the following events occur in temporal relation to receipt of DTP, the decision to give subsequent doses of vaccine containing the pertussis component should be carefully considered (Table 4). Although these events were considered absolute contraindications in previous ACIP recommendations, there may be circumstances, such as a high incidence of pertussis, in which the potential benefits outweigh the possible risks, particularly because these events have not been associated with permanent sequelae (7). The following events were previously considered contraindications and are now considered precautions:

1. Temperature of ≥40.5 C (105 F) within 48 hours, not due to another identifiable cause. Such a temperature is considered a precaution because of the likelihood that fever following a subsequent dose of DTP vaccine also will be high. Because such febrile reactions are usually attributed to the pertussis component, vaccination with DT should not be discontinued.
2. Collapse or shock-like state (hypotonic-hyporesponsive episode) within 48 hours. Although these uncommon events have not been recognized to cause death or to induce permanent neurologic sequelae, it is prudent to continue vaccination with DT, omitting the pertussis component (40,85).
3. Persistent, inconsolable crying lasting ≥3 hours and occurring within 48 hours. Follow-up of infants who have cried inconsolably following DTP vaccination has indicated that this reaction, though unpleasant, is without long-term sequelae and is not associated with other reactions of greater significance (47). Inconsolable crying occurs most frequently following the first dose and is reported less frequently after subsequent doses of DTP vaccine (42). However, crying for >30 minutes following DTP vaccination can be a predictor of an increased likelihood of recurrent persistent crying following subsequent doses (47). Children with persistent crying have had a higher rate of substantial local reactions than children who had other DTP-associated reactions (including high fever, seizures, and hypotonic-hyporesponsive episodes), suggesting that prolonged crying was really a pain reaction (85).
4. Convulsions, with or without fever, occurring within 3 days. Short-lived convulsions, with or without fever, have not been shown to cause permanent sequelae (39,86). Furthermore, the occurrence of prolonged febrile seizures (i.e., status epilepticus*), irrespective of their cause, involving an otherwise normal child does not substantially increase the risk for subsequent febrile (brief or prolonged) or afebrile seizures; the risk is significantly increased (p = 0.018) only among those children who are neurologically abnormal before their episode of status epilepticus (87). Accordingly, although a convulsion following DTP vaccination has previously been considered a contraindication to further doses, under certain circumstances subsequent doses may be indicated, particularly if the risk of pertussis in the community is high. If a child has a seizure following the first or second dose of DTP, it is desirable to delay subsequent doses until the child's neurologic status is better defined. By the end of the first year of life, the presence of an underlying neurologic disorder has usually been determined and appropriate treatment instituted. DT vaccine should not be administered before a decision has been made about whether to restart the DTP series. Regardless of which vaccine is given, it is prudent also to administer acetaminophen, 15 mg/kg of body weight, at the time of vaccination and every 4 hours subsequently for 24 hours (88,89).
*Any seizure lasting >30 minutes or recurrent seizures lasting a total of 30 minutes without the child fully regaining consciousness.
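The distinction drawn above between contraindications and precautions is essentially a two-tier decision rule. The sketch below encodes it using hypothetical event codes of our own; it is illustrative only, not clinical software.

```python
# Event codes are invented for this sketch; the sets mirror Table 4 as
# summarized in the text above.
CONTRAINDICATIONS = {
    "immediate_anaphylaxis",
    "encephalopathy_within_7d",
}
PRECAUTIONS = {
    "temp_40_5C_within_48h",
    "hypotonic_hyporesponsive_within_48h",
    "inconsolable_crying_ge_3h_within_48h",
    "convulsion_within_3d",
}

def pertussis_component_decision(events: set[str]) -> str:
    """Classify a child's post-DTP events for subsequent dosing."""
    if events & CONTRAINDICATIONS:
        return "no further pertussis vaccine; consider DT for remaining doses"
    if events & PRECAUTIONS:
        return ("weigh benefits against risks (e.g., DTP may still be "
                "indicated if community pertussis incidence is high)")
    return "continue the routine DTP schedule"

print(pertussis_component_decision({"convulsion_within_3d"}))
```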
# Vaccination of infants and young children who have underlying neurologic disorders
Infants and children with recognized, possible, or potential underlying neurologic conditions present a unique problem. They seem to be at increased risk for the appearance of manifestations of the underlying neurologic disorder within 2-3 days after vaccination. However, more prolonged manifestations, increased progression of the disorder, or exacerbation of the disorder have not been recognized (90). In addition, most neurologic conditions in infancy and young childhood are associated with evolving, changing neurologic findings. Functional abnormalities are often unmasked by progressive neurologic development. Thus, confusion over the interpretation of progressive neurologic signs may arise when DTP vaccination or any other therapeutic or preventive measure is carried out.
Protection against diphtheria, tetanus, and pertussis is as important for children with neurologic disabilities as for other children, and may be even more important. Such children often receive custodial care or attend special schools where the risk of pertussis is greater because DTP vaccination is avoided for fear of adverse reactions. Also, if pertussis affects a neurologically disabled child who has difficulty handling secretions and cooperating with symptomatic care, it may aggravate preexisting neurologic problems because of anoxia, intracerebral hemorrhage, and other manifestations of the disease. Whether and when to administer DTP to children with proven or suspected underlying neurologic disorders must be decided on an individual basis. Important considerations include the current local incidence of pertussis, the near absence of diphtheria in the United States, and the low risk of infection with Clostridium tetani. On the basis of these considerations and the nature of the child's disorder, the following approaches are recommended:
1. Infants and children with previous convulsions. Infants and young children who have had prior seizures, whether febrile or afebrile, appear to be at increased risk for seizures following DTP vaccination compared with those without such histories (68). A convulsion within 3 days of DTP vaccination in a child with a history of convulsions may be initiated by fever caused by the vaccine in a child prone to febrile seizures, may be induced by the pertussis component, or may be unrelated to the vaccination. As noted earlier, current evidence indicates that seizures following DTP vaccination do not cause permanent brain damage. Among infants and children with a history of previous seizures, it is prudent to delay DTP vaccination until the child's status has been fully assessed, a treatment regimen established, and the condition stabilized. It should be noted, however, that delaying DTP vaccination until the second 6 months of life will increase the risk of febrile seizures among persons who are predisposed. When DTP or DT is given, acetaminophen, 15 mg/kg, should also be given at the time of the vaccination and every 4 hours for the ensuing 24 hours (88,89).
2. Infants as yet unvaccinated who are suspected of having underlying neurologic disease. It is prudent to delay initiation of vaccination with DTP or DT (but not other vaccines) until further observation and study have clarified the child's neurologic status and the effect of treatment. The decision whether to begin vaccination with DTP or DT should be made no later than the child's first birthday.
3. Children who have not received a complete series of vaccine and who have a neurologic event occurring between doses. Infants and children who have received at least one dose of DTP and who experience a neurologic disorder (e.g., a seizure) not temporally associated with vaccination, but before the next scheduled dose, present a special management challenge. If the seizure or other disorder occurs before the first birthday and before completion of the first three doses of the primary series of DTP, further doses of DTP or DT (but not other vaccines) should be deferred until the infant's status has been clarified. The decision whether to use DTP or DT to complete the series should be made no later than the child's first birthday and should take into consideration the nature of the child's problem and the benefits and possible risks of the vaccine.
If the seizure or other disorder occurs after the first birthday, the child's neurologic status should be evaluated to ensure that the disorder is stable before a subsequent dose of DTP is given.
4. Children with resolved or corrected neurologic disorders. DTP vaccination is recommended for infants with certain neurologic problems, such as neonatal hypocalcemic tetany or hydrocephalus (following placement of a shunt and without seizures), that have been corrected or have clearly subsided without residua.
# Vaccination of infants and young children who have a family history of convulsion or other central nervous system disorders
A family history of convulsions or other central-nervous-system disorders is not a contraindication to pertussis vaccination (2). Acetaminophen should be given at the time of DTP vaccination and every 4 hours for 24 hours to reduce the possibility of postvaccination fever (88,89).
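The acetaminophen regimen cited here and earlier (15 mg/kg at vaccination and every 4 hours for 24 hours) is simple weight-based arithmetic; the sketch below makes the computation explicit. The function name and output format are ours.

```python
def acetaminophen_regimen(weight_kg: float) -> str:
    """15 mg/kg per dose, given at vaccination time (t = 0) and every
    4 hours through the following 24 hours: 7 doses in all."""
    per_dose_mg = 15 * weight_kg
    n_doses = 1 + 24 // 4          # initial dose plus six 4-hourly doses
    return f"{per_dose_mg:.0f} mg per dose, {n_doses} doses over 24 hours"

# Example: a 10-kg infant -> 150 mg per dose, 7 doses.
print(acetaminophen_regimen(10.0))
```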
# Preparations Containing Diphtheria Toxoid and Tetanus Toxoid
The only contraindication to tetanus and diphtheria toxoids is a history of a neurologic or severe hypersensitivity reaction following a previous dose. Vaccination with tetanus and diphtheria toxoids is not known to be associated with an increased risk of convulsions. Local side effects alone do not preclude continued use. If an anaphylactic reaction to a previous dose of tetanus toxoid is suspected, intradermal skin testing with appropriately diluted tetanus toxoid may be useful before a decision is made to discontinue tetanus toxoid vaccination (83). In one study, 94 of 95 persons with histories of anaphylactic symptoms following a previous dose of tetanus toxoid were nonreactive following intradermal testing and tolerated further tetanus toxoid challenge without incident (83). One person had erythema and induration immediately following skin testing but tolerated a full IM dose without adverse effects. Mild, nonspecific skin-test reactivity to tetanus toxoid, particularly if the toxoid is used undiluted, appears to be fairly common. Most vaccinees develop inconsequential cutaneous delayed hypersensitivity to the toxoid.
Persons who experienced Arthus-type hypersensitivity reactions or a temperature of >103 F (39.4 C) following a prior dose of tetanus toxoid usually have high serum tetanus antitoxin levels and should not be given even emergency doses of Td more frequently than every 10 years, even if they have a wound that is neither clean nor minor.
If a contraindication to using tetanus toxoid-containing preparations exists for a person who has not completed a primary series of tetanus toxoid immunization and that person has a wound that is neither clean nor minor, only passive immunization should be given using tetanus immune globulin (TIG). (See Tetanus Prophylaxis in Wound Management).
Although no evidence exists that tetanus and diphtheria toxoids are teratogenic, waiting until the second trimester of pregnancy to administer Td is a reasonable precaution for minimizing any concern about the theoretical possibility of such reactions.
# Misconceptions Concerning Contraindications to DTP
Some health-care providers inappropriately consider certain conditions or circumstances to be contraindications to DTP vaccination. These include the following:
1. Soreness, redness, or swelling at the DTP vaccination site, or temperature of <40.5 C (105 F).
2. Mild, acute illness with low-grade fever or mild diarrheal illness affecting an otherwise healthy child.
3. Current antimicrobial therapy or the convalescent phase of an acute illness.
4. Recent exposure to an infectious disease.
5. Prematurity. The appropriate age for initiating vaccination for prematurely born infants is the usual chronological age from birth (75,80,81). Full doses (0.5 mL) of vaccine should be used.
6. History of allergies or relatives with allergies.
7. Family history of convulsions.
8. Family history of SIDS.
9. Family history of an adverse event following DTP vaccination.
# PREVENTION OF DIPHTHERIA AMONG CONTACTS OF A DIPHTHERIA PATIENT
# Identification of Close Contacts
The primary purpose of contact investigation is to prevent secondary transmission of C. diphtheriae and the occurrence of additional diphtheria cases. Only close contacts of a patient with culture-confirmed or suspected* diphtheria should be considered at increased risk for acquiring secondary disease. Such contacts include all household members and other persons with a history of habitual, close contact with the patient, as well as those directly exposed to oral secretions of the patient. Identification of close contacts of a diphtheria patient should be promptly initiated.

*For example, a patient for whom the decision has been made to treat with diphtheria antitoxin. Antitoxin can be obtained either from a manufacturer (Connaught Labs, Inc., or Sclavo, Inc.) or from the Division of Immunization, CDC (telephone: 404-639-2888).
# Cultures and Antimicrobial Prophylaxis
All close contacts (regardless of their vaccination status) should have samples taken for culture, receive prompt antimicrobial chemoprophylaxis, and be examined daily for 7 days for evidence of disease. Awaiting culture results before administering antimicrobial prophylaxis to close contacts is not warranted. The identification of carriers among close contacts may support the diagnosis of diphtheria for a patient whose cultures are negative because of prior antimicrobial therapy or for other reasons. Antimicrobial prophylaxis should consist of either an IM injection of benzathine penicillin (600,000 units for persons <6 years old and 1,200,000 units for those ≥6 years old) or a 7- to 10-day course of oral erythromycin (children: 40 mg/kg/day; adults: 1 g/day). Erythromycin may be slightly more effective, but IM benzathine penicillin may be preferred because it avoids possible noncompliance with a multi-day oral regimen. The efficacy of antimicrobial prophylaxis in preventing secondary disease is presumed but not proven. Identified carriers of C. diphtheriae should have follow-up cultures done after they complete antimicrobial therapy. Those who continue to harbor the organism after treatment with either penicillin or erythromycin should receive an additional 10-day course of oral erythromycin and follow-up cultures.
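The chemoprophylaxis choices above branch only on age (and on weight for pediatric erythromycin dosing). A minimal sketch follows; the 18-year adult cutoff for erythromycin dosing is an assumption of ours that the text does not state.

```python
from typing import Optional

def diphtheria_contact_prophylaxis(age_years: float,
                                   weight_kg: Optional[float] = None) -> str:
    """Options from the text: one IM dose of benzathine penicillin
    (600,000 units if <6 years old, else 1,200,000 units), or a 7- to
    10-day course of oral erythromycin (children 40 mg/kg/day, adults
    1 g/day). An adult cutoff of 18 years is assumed here."""
    penicillin = 600_000 if age_years < 6 else 1_200_000
    if age_years < 18 and weight_kg is not None:
        erythromycin = f"{40 * weight_kg:.0f} mg/day orally for 7-10 days"
    else:
        erythromycin = "1 g/day orally for 7-10 days"
    return (f"benzathine penicillin {penicillin:,} units IM once, OR "
            f"erythromycin {erythromycin}")

print(diphtheria_contact_prophylaxis(4, weight_kg=16))
```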
# Immunization

# Active
All household and other close contacts who have received fewer than three doses of diphtheria toxoid, or whose vaccination status is unknown, should receive an immediate dose of a diphtheria toxoid-containing preparation and should complete the primary series according to schedule (Tables 1 and 2). Close contacts who have completed a primary series of three or more doses and who have not been vaccinated with diphtheria toxoid within the previous 5 years should receive a booster dose of a diphtheria toxoid-containing preparation appropriate for their age.
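The active-immunization rule for close contacts reduces to two checks: documented doses and time since the last dose. A sketch under the assumptions named in the comments (the 365-day year and the function name are ours):

```python
from datetime import date
from typing import Optional

def contact_toxoid_action(documented_doses: Optional[int],
                          last_dose: Optional[date],
                          today: date) -> str:
    """Close-contact rule from the paragraph above: <3 documented doses
    (or unknown status) -> immediate dose plus completion of the primary
    series; >=3 doses -> booster only if the last dose was >5 years ago.
    A 365-day year is assumed for the interval check."""
    if documented_doses is None or documented_doses < 3:
        return "immediate dose; complete the primary series on schedule"
    if last_dose is None or (today - last_dose).days > 5 * 365:
        return "booster dose of an age-appropriate diphtheria toxoid preparation"
    return "adequately vaccinated; no dose needed now"

# Example: 3 doses, last given in 1984, assessed in August 1991 -> booster.
print(contact_toxoid_action(3, date(1984, 3, 1), date(1991, 8, 8)))
```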
# Passive
The only preparation available for passive immunization against diphtheria is equine diphtheria antitoxin. Even when close surveillance of unvaccinated close contacts is impossible, use of this preparation is not generally recommended because of the risk of allergic reaction to horse serum. Immediate hypersensitivity reactions occur among approximately 7%, and serum sickness among 5%, of adults receiving the recommended prophylactic dose of equine antitoxin. The risk of an adverse reaction to equine antitoxin must be weighed against the small risk that an unvaccinated household contact who receives chemoprophylaxis will contract diphtheria. No evidence exists to support any additional benefit of diphtheria antitoxin for contacts who have received antimicrobial prophylaxis. If antitoxin is to be used, the usually recommended dosage is 5,000-10,000 units IM, after appropriate testing for sensitivity, given at a site different from that of the toxoid injection. Diphtheria antitoxin is unlikely to impair the immune response to simultaneously administered diphtheria toxoid, but this has not been adequately studied.
A serum specimen collected from a patient with suspected diphtheria (before antitoxin therapy is initiated) may be helpful in supporting the diagnosis of diphtheria if a level of diphtheria antitoxin below that considered to be protective (i.e., <0.01 IU/mL) can be demonstrated. Such testing may be particularly helpful for a patient in whom antimicrobial therapy was initiated before diphtheria cultures were obtained.
# Cutaneous Diphtheria
Cases of cutaneous diphtheria generally are caused by infections with nontoxigenic strains of C. diphtheriae. If a toxigenic C. diphtheriae strain is isolated from a cutaneous lesion, investigation and prophylaxis of close contacts should be undertaken, as with respiratory diphtheria. If a cutaneous case is known to be due to a nontoxigenic strain, routine investigation or prophylaxis of contacts is not necessary.
# TETANUS PROPHYLAXIS IN WOUND MANAGEMENT
Chemoprophylaxis against tetanus is neither practical nor useful in managing wounds. Wound cleaning, debridement when indicated, and proper immunization are important. The need for tetanus toxoid (active immunization), with or without TIG (passive immunization), depends on both the condition of the wound and the patient's vaccination history (Table 5; see also Precautions and Contraindications). Rarely has tetanus occurred among persons with documentation of having received a primary series of toxoid injections.
A thorough attempt must be made to determine whether a patient has completed primary vaccination. Patients with unknown or uncertain previous vaccination histories should be considered to have had no previous tetanus toxoid doses. Persons who had military service since 1941 can be considered to have received at least one dose. Although most people in the military since 1941 may have completed a primary series of tetanus toxoid, this cannot be assumed for each individual. Patients who have not completed a primary series may require tetanus toxoid and passive immunization at the time of wound cleaning and debridement (Table 5). Available evidence indicates that complete primary vaccination with tetanus toxoid provides long-lasting protection (10 years or more) for most recipients. Consequently, after complete primary tetanus vaccination, boosters (even for wound management) need be given only every 10 years when wounds are minor and uncontaminated. For other wounds, a booster is appropriate if the patient has not received tetanus toxoid within the preceding 5 years. Persons who have received at least two doses of tetanus toxoid rapidly develop antitoxin antibodies.
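For a patient who has completed primary vaccination, the booster logic above reduces to a two-branch rule. The sketch below is a minimal illustration; the function name is hypothetical, and Table 5, which also covers TIG for incompletely vaccinated patients, is not modeled:

```python
def tetanus_booster_indicated(years_since_last_toxoid: float,
                              clean_minor_wound: bool) -> bool:
    """Booster decision for a patient with a completed primary series,
    per the intervals stated above."""
    # Clean, minor wounds: booster only if >10 years since the last dose.
    # All other wounds: booster if >5 years since the last dose.
    threshold_years = 10 if clean_minor_wound else 5
    return years_since_last_toxoid > threshold_years

# A contaminated wound 6 years after the last dose calls for a booster;
# a clean, minor wound at the same interval does not.
print(tetanus_booster_indicated(6, clean_minor_wound=False))  # True
print(tetanus_booster_indicated(6, clean_minor_wound=True))   # False
```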
Td is the preferred preparation for active tetanus immunization in wound management of patients ≥7 years of age. Because a large proportion of adults are susceptible, using Td rather than single-antigen tetanus toxoid also enhances diphtheria protection. Thus, by taking advantage of acute health-care visits, such as for wound management, some patients can be protected who otherwise would remain susceptible. For routine wound management among children <7 years of age who are not adequately vaccinated, DTP should be used instead of single-antigen tetanus toxoid. DT may be used if pertussis vaccine is contraindicated or if individual circumstances are such that potential febrile reactions following DTP might confound the management of the patient. For inadequately vaccinated patients of all ages, completion of primary vaccination at the time of discharge or at follow-up visits should be ensured (Tables 1 and 2).
If passive immunization is needed, human TIG is the product of choice. It provides protection longer than antitoxin of animal origin and causes few adverse reactions. The TIG prophylactic dose that is currently recommended for wounds of average severity is 250 units IM. When tetanus toxoid and TIG are given concurrently, separate syringes and separate sites should be used. The ACIP recommends the use of only adsorbed toxoid in this situation.
# PROPHYLAXIS FOR CONTACTS OF PERTUSSIS PATIENTS
Spread of pertussis can be limited by decreasing the infectivity of the patient and by protecting close contacts. To reduce infectivity as quickly as possible, a course of oral erythromycin (children: 40 mg/kg/day; adults: 1 g/day) or trimethoprim-sulfamethoxazole (children: trimethoprim 8 mg/kg/day, sulfamethoxazole 40 mg/kg/day; adults: trimethoprim 320 mg/day, sulfamethoxazole 1,600 mg/day) is recommended for patients with clinical pertussis. Antimicrobial therapy should be continued for 14 days to minimize any chance of treatment failure. It is generally accepted that symptoms may be ameliorated when effective therapy is initiated during the catarrhal stage of disease (91). Some evidence suggests erythromycin therapy can alter the clinical course of pertussis when initiated early in the paroxysmal stage (19,92,93).
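As a worked example of the weight-based dosing above, the following sketch computes the daily amounts for either regimen. The function name is hypothetical, and no ceiling is applied to pediatric calculations, since whether the adult doses cap large children is not addressed in the text:

```python
def pertussis_regimen(weight_kg: float, is_adult: bool) -> dict:
    """Daily doses for the 14-day pertussis treatment/prophylaxis
    regimens stated above."""
    if is_adult:
        erythromycin_mg = 1_000        # 1 g/day
        tmp_mg, smx_mg = 320, 1_600    # trimethoprim / sulfamethoxazole
    else:
        erythromycin_mg = 40 * weight_kg          # 40 mg/kg/day
        tmp_mg, smx_mg = 8 * weight_kg, 40 * weight_kg
    return {
        "duration_days": 14,
        "erythromycin_mg_per_day": erythromycin_mg,
        "trimethoprim_mg_per_day": tmp_mg,
        "sulfamethoxazole_mg_per_day": smx_mg,
    }

# Example: a 10-kg child -> erythromycin 400 mg/day, or trimethoprim
# 80 mg/day with sulfamethoxazole 400 mg/day, continued for 14 days.
print(pertussis_regimen(10, is_adult=False))
```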
Erythromycin or trimethoprim-sulfamethoxazole prophylaxis should be administered for 14 days to all household and other close contacts of persons with pertussis, regardless of age and vaccination status. Although data from controlled clinical trials are lacking, prophylaxis of all household members and other close contacts may prevent or minimize transmission (92,94-96). All close contacts <7 years of age who
"id": "0f280f0be1c86d67d16c8b2fdd21022b9758330f",
"source": "cdc",
"title": "None",
"url": "None"
} |
Microbial contamination-both bacterial and viral-of flood waters can cause great concern for use of previously flooded outdoor areas. Limited guidance exists on how to determine safe use of these areas. This guidance was developed for public health authorities, emergency response managers, and government decision makers. This document defines how to assess the public health risks for using outdoor areas after a flood event where potential exposure to microbial contamination exists. This guidance is not intended to serve as a conclusive determination on public access and use of previously flooded outdoor areas.

# Introduction and Background
After a flood event, questions arise about health risks associated with using outdoor areas such as ball fields, playgrounds, and residential yards. Microbial exposure is a concern because wastewater treatment plants, residential septic systems, municipal sanitary sewer systems, and agricultural operations can be affected by flood waters and can contaminate flooded areas. This document addresses concerns associated only with microbial contamination after a flood event. Chemical contamination issues associated with flood events are not addressed in this document.
Due to many variables, health authorities should characterize potential health exposure risks posed by flood waters on a case-by-case basis. Risk characterization involves identifying potential contamination sources, determining factors that may influence microbial concentration and survival, determining the potential effect on exposed populations, and considering the intended use for previously flooded outdoor areas. A discussion about safely occupying previously flooded areas is provided later in this document in the risk assessment section.
Flood waters commonly contain microbial contaminants and can directly affect public health. Increased levels of microbes in floodwaters increase the risk of human exposure and the likelihood for infection. A study (1) after Hurricane Katrina determined that microbial contaminants, specifically fecal coliforms, were elevated and considered consistent with levels detected historically in typical storm-water discharges in the area. A study (2) conducted during the Midwest flooding of 2001 identified an increased incidence of gastrointestinal illness during the flood event.
# Microbes and Viability
Floodwater contaminated by microbes may contain bacteria, viruses, protozoa, and helminths (3). Exposure to these pathogens can cause illnesses ranging from mild gastritis to serious diseases such as dysentery, infectious hepatitis, and severe gastroenteritis (4). The concentration of microbes in flood water depends on how many and what kind of sources contributed to the contamination, the volume of contaminants released and the degree of their dispersion in the environment, and the level of treatment of the affected wastewater-treatment facilities before the flooding (3,5).
Typically, 2-3 months are needed for enteric bacteria in soil to decline significantly, with certain exceptions (6). Environmental factors including temperature, soil desiccation, pH, soil characteristics, and sunlight influence microbial survival and persistence (5-9). Microbial survival in soil and the resulting potential for human exposure is difficult to predict because of natural variability in those environmental factors and varying microbial susceptibilities. For example, Shigella has survived in soil at room temperature for 9-12 days (10), and Cryptosporidium oocysts may survive in a moist environment for 60-180 days (3). Spore-forming microbes such as Coccidioides, a fungus that exists in semiarid southwestern U.S. soil (11), and anthrax spores can survive in soil for many years (12). Aside from the microbe's ability to survive, availability is another important factor to consider. Certain microbes can sorb to stable soil, which may lengthen their survival time.
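For quick reference, the survival durations quoted above can be collected into a simple lookup table. The keys and structure below are illustrative; the values are the ranges reported for the specific conditions cited and do not generalize across environments:

```python
# Reported soil-survival durations from the paragraph above.
SOIL_SURVIVAL = {
    "enteric bacteria (typical)": "2-3 months",
    "Shigella (room temperature)": "9-12 days",
    "Cryptosporidium oocysts (moist environment)": "60-180 days",
    "Coccidioides and anthrax spores": "many years",
}

for organism, duration in SOIL_SURVIVAL.items():
    print(f"{organism}: {duration}")
```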
Because microbes respond differently to the environment, providing universal guidance is difficult. The intensity of sunlight exposure, level of soil desiccation, and ambient temperature necessary to effectively kill all microbes within a specified time vary among microbes. Survival characteristics for microbes under specified conditions have been reported; however, generalizing those study results is difficult. This inability to generalize microbial viability reinforces the need to implement a risk-assessment approach that considers all variables that could influence potential exposure.
# Control and Remediation
Exposure risk from microbes in soil after a flood event can be reduced by emphasizing personal hygiene. Public health education efforts should include personal hygiene precautions and guidance. Education efforts should emphasize proper handwashing, and adequate handwashing and drying supplies and equipment should be provided in public restrooms and at temporary handwashing facilities. Education efforts should also caution people to avoid standing water, areas saturated with floodwater, and areas with visible debris. Those areas create concern for microbial exposure and may also raise public safety concerns.
Signs may be used to indicate public health and safety concerns and to discourage use of potentially hazardous areas. Intended use of outdoor areas (e.g., grass-covered high school soccer field versus daycare outdoor play area), with special consideration for areas where young children are likely to play, should be determined and considered. For example, sand in sandboxes and soil, mulch, and wood chips around outdoor playground equipment may need to be removed. All outdoor items with cleanable surfaces that were in contact with flood water should be adequately cleaned before they are used.
Small areas of gross contamination (i.e., sewage with visible solid material) should be cleaned, and treatment with hydrated lime may be considered. Hydrated lime can be applied to increase pH to a level that kills microbes. The U.S. Environmental Protection Agency (EPA) requires that the pH of sewage sludge treated for land application be held at 12 for a minimum of 2 hours to kill microbes, and be held at a minimum of 11.5 for 22 additional hours to reduce vector attraction (13). In addition to maintaining an adequate pH level, sludge dryness can affect how easily and quickly microbes die (14). Applying quicklime, which can help dry areas of gross contamination, may be considered. The National Lime Association promotes using quicklime to expedite drying of mudded areas (15).
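The two-stage pH criterion above can be expressed as a simple check. The sketch below is illustrative only; the function name is hypothetical and, as the next paragraph notes, the EPA requirement applies to sewage sludge, not to soil:

```python
def meets_lime_treatment_criterion(hours_at_ph_12: float,
                                   additional_hours_at_ph_11_5: float) -> bool:
    """EPA two-stage pH criterion for lime-treated sewage sludge (13):
    pH held at 12 for at least 2 hours to kill microbes, then held at
    a minimum of 11.5 for 22 additional hours to reduce vector attraction."""
    return hours_at_ph_12 >= 2 and additional_hours_at_ph_11_5 >= 22

# Example: 3 hours at pH 12 followed by 24 hours at pH 11.6 passes;
# skipping the 22-hour hold fails.
print(meets_lime_treatment_criterion(3, 24))  # True
print(meets_lime_treatment_criterion(3, 10))  # False
```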
Importantly, the pH requirements discussed earlier pertain to treating sewage sludge, not soil. The effectiveness of lime for treating microbially contaminated soils was not demonstrated in the literature reviewed. Wide-scale application of lime could affect human health and the environment, and those effects could outweigh the potential risks posed by a flood event. Exposure to hydrated lime or quicklime may be hazardous to applicators and the public. Exposure routes include inhalation, ingestion, and skin or eye contact. Exposure to hydrated lime or quicklime may cause skin and eye irritation, irritation of the upper respiratory system, skin vesiculation, cough, bronchitis, and pneumonitis, and may burn the eyes and skin (16).
If lime is applied in small, heavily contaminated areas, applicators should wear appropriate personal protective equipment as required by occupational health and safety regulations and described in the manufacturer's Material Safety Data Sheet and product label. In addition to health hazards, the inappropriate use of lime can cause damage to personal property (17). Environmental effects may include damaged vegetation (increasing potential for soil erosion), excessive soil dehydration, and lime in run-off waters.
Other remedial and control options may be considered. Exposure to potential pathogens in soil may be controlled by depositing new soil on top of the affected soil and compacting it; planting new grass; watering to flush organisms out of the upper soil layers; covering the affected ground with asphalt, brick, stone, cement, or other solid paving material; and applying dust-suppressant products where air dispersion is a concern.
# Risk-assessment Approach
After a flood event, health authorities should assess human health risk by using a systematic approach because many variables must be considered. Following a risk-assessment process will help authorities determine how to safely use previously flooded outdoor areas.
The four steps of the risk-assessment process (18) (Figure 1) are:

1. Hazard identification: determines whether adverse health effects may be caused by exposure to the contaminant (Can the contaminants found affect human health?).
2. Dose-response assessment: examines the magnitude of the exposure and the probability of adverse health effects (Are contaminants present at levels that can affect health?).
3. Exposure assessment: measures or estimates the extent of human exposure to the contaminant (Who may be exposed, for how long or how frequently, and how much?).
4. Risk characterization: interprets information from the preceding steps to form an overall conclusion about human risk.
This comprehensive approach also considers risks to flora and fauna, and the effect of remedial action on human health and the environment.
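One way to make the four steps concrete is to track them as fields of a record, drawing a conclusion only once every step is documented. The class below is a minimal sketch; the field names and the `complete` check are illustrative conveniences, not part of the cited process (18):

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """One record per flood-affected site, following the four steps above."""
    hazard_identification: str = ""   # Can the contaminants affect health?
    dose_response: str = ""           # Are levels high enough to cause harm?
    exposure_assessment: str = ""     # Who is exposed, how long, how much?
    risk_characterization: str = ""   # Overall conclusion about human risk

    def complete(self) -> bool:
        # Draw a conclusion only when every step has been documented.
        return all([self.hazard_identification, self.dose_response,
                    self.exposure_assessment, self.risk_characterization])

site = RiskAssessment(
    hazard_identification="Fecal coliforms detected in receded floodwater",
    dose_response="Levels consistent with typical storm-water discharge",
    exposure_assessment="Children using a previously flooded playground",
)
print(site.complete())  # False: risk characterization not yet documented
```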
# Conclusion
Determining when to allow use of previously flooded public areas requires analyzing and considering many variables. This guidance is intended to help health authorities assess the level of risk posed by microbial contamination after a flood event. This guidance is not intended to represent all variables that should be considered-any flood event may present many complexities. The following flow chart may help prompt discussion and consideration of various risk factors.
[Flow chart (not reproduced): 3. Exposure Assessment -> 4. Risk Characterization (consider all information gathered in previous steps and determine the magnitude of the public health problem) -> Decision and Actions/Interventions (determine whether to allow occupancy of flooded areas and whether intervention or precautionary actions are necessary, e.g., promoting personal hygiene, signage, or remedial actions).]
"id": "a53d117321bc47f3dc5413baf4019f61e4a99c3c",
"source": "cdc",
"title": "None",
"url": "None"
} |
These recommendations represent the first statement by the Advisory Committee on Immunization Practices (ACIP) on the use of an oral, live rotavirus vaccine licensed by the Food and Drug Administration on August 31, 1998, for use among infants. This report reviews the epidemiology of rotavirus, describes the licensed rotavirus vaccine, and makes recommendations regarding its use for the routine immunization of infants in the United States. These recommendations are based on estimates of the disease burden of rotavirus gastroenteritis among children in the United States and on the results of clinical trials of the vaccine. Rotavirus affects virtually all children during the first 5 years of life in both developed and developing countries, and rotavirus infection is the most common cause of severe gastroenteritis in the United States and worldwide. In the United States, rotavirus is a common cause of hospitalizations, emergency room visits, and outpatient clinic visits, and it is responsible for considerable health-care costs. Because of this large burden of disease, several rotavirus vaccines have been developed. One of these vaccines -an oral, live, tetravalent, rhesus-based rotavirus vaccine (RRV-TV) -was found to be safe and efficacious in clinical trials among children in North America, South America, and Europe and on the basis of these studies is now licensed for use among infants in the United States. The vaccine is an oral, live preparation that should be administered to infants between the ages of 6 weeks and 1 year. The recommended schedule is a three-dose series, with doses to be administered at ages 2, 4, and 6 months. The first dose may be administered from the ages of 6 weeks to 6 months; subsequent doses should be administered with a minimum interval of 3 weeks between any two doses. The first dose should not be administered to children aged ≥7 months because of an increased rate of febrile reactions after the first dose among older infants. Second and third doses should be administered before the first birthday. Implementation of these recommendations in the United States should prevent most physician visits for rotavirus gastroenteritis and at least two-thirds of hospitalizations and deaths related to rotavirus.

# CLINICAL AND EPIDEMIOLOGIC FEATURES OF ROTAVIRUS DISEASE
Rotavirus is the most common cause of severe gastroenteritis in infants and young children in the United States. Worldwide, rotavirus is a major cause of childhood death. The spectrum of rotavirus illness ranges from mild, watery diarrhea of limited duration to severe, dehydrating diarrhea with vomiting and fever, which can result in death (1-5). Virtually all children become infected in the first 3-5 years of life, but severe diarrhea and dehydration occur primarily among children aged 3-35 months.
Rotaviruses are shed in high concentrations in the stools of infected children and are transmitted by the fecal-oral route, both through close person-to-person contact and through fomites (6). Rotaviruses also might be transmitted by other modes, such as respiratory droplets (7). In the United States, rotavirus causes seasonal peaks of gastroenteritis from November to May each year, with activity beginning in the Southwest United States and spreading to the Northeast (8-10).
Rotavirus appears to be responsible for approximately 5%-10% of all diarrheal episodes among children aged <5 years in the United States, and for a much higher proportion of severe diarrheal episodes (2,11). Although rotavirus gastroenteritis results in relatively few deaths in the United States (approximately 20 per year among children aged <5 years) (12), it accounts for more than 500,000 physician visits (13,14) and approximately 50,000 hospitalizations each year among children aged <5 years (4,9,15). Rotavirus is responsible for 30%-50% of all hospitalizations for diarrheal disease among children aged <5 years, and more than 50% of hospitalizations for diarrheal disease during the seasonal peaks (11,16-18). Among children aged <5 years in the United States, 72% of rotavirus hospitalizations occur during the first 2 years of life, and 90% occur by age 3 years (15).
In the first 5 years of life, four out of five children in the United States will develop rotavirus diarrhea (2,19); one in seven will require a clinic or emergency room visit; one in 78 will require hospitalization; and one in 200,000 will die from rotavirus diarrhea (4,14). The risk for rotavirus diarrhea and its outcomes does not appear to vary by geographic region within the United States. Limited data suggest that children from disadvantaged socioeconomic backgrounds and premature infants have an increased risk for hospitalization from diarrheal disease, including rotavirus diarrhea (20). In addition, some children and adults who are immunocompromised because of congenital immunodeficiency, hematopoetic transplantation, or solid organ transplantation experience severe, prolonged, and sometimes fatal rotavirus diarrhea (21-23). Rotavirus is also an important cause of nosocomial gastroenteritis (1,11,16,24,25). Among adults in the United States, rotavirus infection infrequently causes diarrhea in travelers, persons caring for children with rotavirus diarrhea, and the elderly (26). Each year in the United States, rotavirus diarrhea results in $264 million in direct medical costs and more than $1 billion in total costs to society (14). Direct medical costs are primarily the result of hospitalizations for severe diarrhea and dehydration, and societal costs are attributable primarily to loss of work time among parents and other caregivers.
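The per-child risks quoted above can be cross-checked against the annual totals with simple arithmetic. The sketch below assumes a U.S. birth cohort of roughly 4 million children, a figure not taken from this report:

```python
# Rough consistency check of the 1-in-N risks against the annual totals.
birth_cohort = 4_000_000                      # assumed annual U.S. birth cohort

clinic_or_er_visits = birth_cohort / 7        # 1 in 7 -> ~571,000 visits
hospitalizations = birth_cohort / 78          # 1 in 78 -> ~51,000 admissions
deaths = birth_cohort / 200_000               # 1 in 200,000 -> ~20 deaths

# These land near the ~500,000 physician visits, ~50,000 hospitalizations,
# and ~20 deaths per year cited earlier in this section.
print(round(clinic_or_er_visits), round(hospitalizations), round(deaths))
```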
Several reasons exist to adopt immunization of infants as the primary public health intervention to prevent rotavirus disease in the United States. First, similar rates of illness among children in industrialized and less developed countries indicate that clean water supplies and good hygiene have not decreased the incidence of rotavirus diarrhea in developed countries, so further improvements in water or hygiene are unlikely to have a substantial impact (2,27-31). Second, in the United States, a high level of rotavirus morbidity continues to occur despite currently available therapies. For example, hospitalizations for diarrhea in young children declined only 16% from 1979 to 1992 (9), despite the widespread availability of oral rehydration solutions and recommendations by experts, including the American Academy of Pediatrics, for the use of oral rehydration solutions in the treatment of dehydrating gastroenteritis (32-34). Third, studies of natural rotavirus infection indicate that initial infection protects against subsequent severe diarrheal disease, although subsequent asymptomatic infections and mild disease might still occur (30,35). Thus, immunization early in life, which mimics a child's first natural infection, will not prevent all subsequent disease but should prevent most cases of severe rotavirus diarrhea and its sequelae (e.g., dehydration, physician visits, and hospitalizations).
# Laboratory Testing for Rotavirus
Because the clinical features of rotavirus gastroenteritis are nonspecific, confirmation of rotavirus infection in children with gastroenteritis by laboratory testing of fecal specimens will be necessary for reliable rotavirus surveillance and could be useful in clinical settings (1,36). The most widely available method is antigen detection by enzyme immunoassay directed at a group antigen common to all Group A rotaviruses. Several commercial enzyme immunoassay test kits are available that are inexpensive, easy to use, rapid, and highly sensitive (approximately 90% compared with detection by electron microscopy); these properties make rapid antigen detection kits suitable for use in rotavirus surveillance systems. Other techniques -including electron microscopy, reverse transcription-polymerase chain reaction, nucleic acid hybridization, polyacrylamide gel electrophoresis, and culture -are used primarily in research settings.
Serologic methods that detect a rise in serum antibodies, primarily enzyme immunoassay for rotavirus serum immunoglobulin G (IgG) and immunoglobulin A (IgA) antibodies, have been used to confirm recent infections. In vaccine trials, detection of rotavirus-specific IgA and neutralizing antibodies to vaccine strains has been used to study the immunogenicity of rotavirus vaccines (37).
# Morphology, Antigen Composition, and Immune Response
Rotaviruses are 70-nm nonenveloped RNA viruses in the family Reoviridae. The viral nucleocapsid is composed of three concentric shells that enclose 11 segments of double-stranded RNA. The outermost layer contains two structural proteins: VP7, the glycoprotein (G protein), and VP4, the protease-cleaved protein (P protein). These two proteins define the serotype of the virus and are considered critical to vaccine development because they are targets for neutralizing antibodies that might be important for protection (38,39 ). Because the two gene segments that encode these proteins can, in theory, segregate independently, a typing system has been developed to specify each protein; 14 VP7 (G) serotypes and 20 VP4 (P) genotypes have been described. Only viruses containing four distinct combinations of G and P proteins are known to commonly circulate in the United States -G1P1A, G2P1B, G3P1A, G4P1A (40 ); these strains are generally designated by their G serotype specificity (serotypes 1-4). In some areas of the United States, recent surveillance has detected strains with additional combinations -G9P6 and G9P8 (serotype 9) (41 ). In addition to these human strains, animal strains of rotavirus that are antigenically distinguishable are found in many species of mammals; these strains only rarely appear to cause infection in humans.
Although children can be infected with rotavirus several times during their lives, initial infection after age 3 months is most likely to cause severe diarrhea and dehydration (30,42,43 ). After a single natural infection, 40% of children are protected against any subsequent infection with rotavirus, 75% are protected against diarrhea from a subsequent rotavirus infection, and 88% are protected against severe diarrhea. Second, third, and fourth infections confer progressively greater protection (30 ).
The immune correlates of protection from rotavirus infection and disease are not completely understood. Both serum and mucosal antibodies are probably associated with protection from disease, and in some studies, serum antibodies against VP7 and VP4 have correlated with protection. However, in other studies, including vaccine studies, correlation between serum antibody and protection has been poor (44 ). The first infection with rotavirus elicits a predominantly homotypic, serum-neutralizing antibody response to the virus, and subsequent infections elicit a broader, heterotypic response (1,45 ). The influence of cell-mediated immunity is less clearly understood, but likely is related both to recovery from infection and to protection against subsequent disease (44,46 ).
# ROTAVIRUS VACCINE

# Background
Research to develop a safe, effective rotavirus vaccine began in the mid-1970s when investigators demonstrated that previous infection with animal rotavirus strains protected laboratory animals from experimental infection with human rotaviruses (47 ). During the past two decades, two types of rotavirus vaccines have been evaluated, and one vaccine has been licensed for use in the United States.
Monovalent vaccines. The first candidate rotavirus vaccines were derived from monovalent rotavirus strains isolated from either bovine or rhesus hosts. Trials, often with a single dose, demonstrated that these live, oral vaccines were safe and could prevent rotavirus diarrhea in young children (48-51). However, the efficacy of these vaccines varied in trials. Because these vaccines had relied on heterotypic protection, researchers postulated that a multivalent vaccine that provided serotype-specific immunity against all common human rotavirus strains might be more effective.
Multivalent vaccines. Multivalent vaccine candidates were developed in 1985 by using gene reassortment (52 ). This process produces vaccine virus strains that have been modified from parent animal strains by single gene reassortment so that each strain contains 10 genes from the animal strain along with a single gene from a human rotavirus strain; this single gene encodes the VP7 protein. In theory, a reassortant strain maintains the attenuation of the parent animal strain in the human host but also has the neutralization specificity of a major G serotype of human rotavirus (53 ). The only rotavirus vaccine currently licensed by the Food and Drug Administration for use in the United States is rhesus-based rotavirus vaccine-tetravalent. A reassortant vaccine that is based on a bovine rotavirus parent strain (WC-3) is undergoing clinical trials (54 ).
Rhesus-based rotavirus vaccine-tetravalent (RRV-TV). The licensed tetravalent vaccine RRV-TV (RotaShield™) is produced by Wyeth-Lederle Vaccines and Pediatrics. RRV-TV is a live, oral vaccine that incorporates rhesus rotavirus strain MMU 18006 (with human serotype G3 specificity) and three single-gene human-rhesus reassortants: D x RRV (human serotype G1), DS-1 x RRV (human serotype G2), and ST3 x RRV (human serotype G4). The parent rhesus rotavirus strain MMU 18006 was isolated from a rhesus monkey with diarrhea at the California Regional Primate Center in Davis and was passed nine times in monkey kidney cells and seven times in normal fetal rhesus diploid (FRhL-2) cells. The vaccine virus strains are grown in FRhL-2 cells.
RRV-TV is supplied as a lyophilized pink solid. Because the vaccine strains are acid-labile, RRV-TV is reconstituted with 2.5 mL of irradiated sterile diluent containing citrate-bicarbonate. When reconstituted, the vaccine might contain a fine precipitate, and it usually is yellow-orange in color but occasionally is purple. Each dose of vaccine contains 1 x 10^5 plaque-forming units (pfu) of each component rotavirus strain. Trace amounts of fetal bovine serum, neomycin sulfate, and amphotericin B are present in the vaccine (<1 µg per dose). The vaccine does not contain preservatives.
Studies to evaluate the safety, immunogenicity, and efficacy of RRV-TV have involved 17,963 infants in the United States, Venezuela, and Finland. The efficacy of this vaccine has been evaluated in four field trials, two in the United States (55,56 ) and one each in Venezuela (57 ) and Finland (58 ). Three additional trials have been conducted with lower doses of RRV-TV in the United States (59 ), Brazil (60 ), and Peru (61 ).
# Immunogenicity
The immunogenicity of rotavirus vaccines is generally measured by detecting rotavirus group-specific serum IgA seroconversion or by detecting serum-neutralizing antibodies to vaccine strains and to prevalent human strains. In industrialized countries, immunogenicity studies of RRV-TV have produced consistent and reproducible results similar to those found in U.S. trials (Table 1) (55) (unpublished data, Wyeth-Lederle, 1997). In all studies, vaccinated children developed significantly higher IgA enzyme-linked immunosorbent assay (ELISA) and neutralizing antibody titers to rotavirus than did children who received placebo, and more than 90% of children who received RRV-TV demonstrated a serologic response to vaccination that included a neutralizing antibody response to rhesus rotavirus (83%-90%) or at least a fourfold rise in rotavirus-specific IgA titers (56%-93%) (55,56,59). Neutralizing antibody responses to human rotavirus strains were less common (14%-43%).
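Two of the summary statistics used above are easy to make explicit: the fourfold-rise criterion for IgA seroconversion and the geometric mean titer (GMT) reported in the immunogenicity tables. The helper names below are illustrative:

```python
import math

def seroconverted(pre_titer: float, post_titer: float) -> bool:
    """Fourfold-rise criterion: the post-vaccination rotavirus-specific
    IgA titer is at least four times the pre-vaccination titer."""
    return post_titer >= 4 * pre_titer

def geometric_mean_titer(titers: list) -> float:
    """GMT: the mean computed on the log scale, then exponentiated."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

# A rise from 1:20 to 1:80 meets the fourfold criterion, and the GMT of
# reciprocal titers 20 and 80 is 40 (up to floating-point rounding).
print(seroconverted(20, 80))               # True
print(geometric_mean_titer([20.0, 80.0]))  # ~40.0
```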
When administered simultaneously, a three-dose series of RRV-TV does not diminish the immune response to oral poliovirus vaccine (OPV) (62), diphtheria and tetanus toxoids and whole-cell pertussis vaccine (DTP) (63), Haemophilus influenzae type b conjugate (Hib) vaccine (63), inactivated poliovirus vaccine (IPV), or hepatitis B vaccine (unpublished data, Wyeth-Lederle, 1998). Studies of simultaneous administration of RRV-TV with diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP)
have not yet been completed, but no diminished immune response is expected on the basis of findings regarding the administration of RRV-TV with DTP. Concurrent administration of RRV-TV with OPV does not affect the immunogenicity and efficacy of a three-dose series of rotavirus vaccine (64,65 ). Breastfeeding does not appear to significantly diminish either the immune response to or the efficacy of the three-dose series (p>0.9) (64,66,67 ).
# Efficacy
Four efficacy trials of RRV-TV have been completed in the United States and Finland: three trials with the 4 x 10^5 pfu dose submitted for licensure (55,56,58) and one trial with a lower dose (4 x 10^4 pfu) (Table 2) (59). The findings of all four studies were similar: the vaccine demonstrated 49%-68% efficacy against any rotavirus diarrhea, 69%-91% efficacy against severe diarrhea, and 50%-100% efficacy in preventing doctor visits for evaluation and treatment of rotavirus diarrhea. The vaccine was also effective in reducing the duration of rotavirus diarrhea. The trial in Finland was large enough to examine the vaccine's efficacy in preventing rotavirus hospitalizations: protection was 100% (13 children in the placebo group were hospitalized compared with zero children in the vaccine group) (58). In this study, vaccinated children also were protected from nosocomially acquired rotavirus diarrhea. Extended follow-up in the study in Finland demonstrated that protection against severe disease persisted through three rotavirus seasons (55,68).

[Table 1 (not reproduced): Geometric mean titers and seroconversion rates for children participating in an efficacy trial and a large-scale consistency lot trial of rhesus-based rotavirus vaccine-tetravalent (RRV-TV), United States. Footnotes: all comparisons between vaccine and placebo recipients showed statistically significant differences (p<0.01); in the mean titer calculation, n=142 for vaccinated children and n=108 for children receiving placebo; in the seroconversion rate calculation, n=185 for vaccinated children and n=193 for children receiving placebo; consistency lot data are unpublished (Wyeth-Lederle Vaccines and Pediatrics, 1997).]

Because infections with serotype G1 viruses have predominated in most studies, the efficacy of RRV-TV against this serotype is well established. In studies conducted in the United States and Finland, RRV-TV was also effective in preventing non-serotype G1 disease (55,56,58). In each study, the efficacy of the vaccine was high despite low neutralizing antibody responses to human strains among the vaccinated children, a finding that illustrates the variable correlation between serologic responses and efficacy. No data are available on the efficacy of administration of fewer than three doses of RRV-TV.
# Transmission of Attenuated Rotavirus Vaccine Strains
In studies performed in U.S. day care centers, no evidence of seroconversion to, or shedding of, vaccine strains was observed among unvaccinated children (69)(70)(71)(72)(73). However, in a large vaccine trial in Venezuela (57 ), stool samples from study children who had rotavirus diarrhea were tested by multiple methods. Wild-type rotavirus was found in high concentration in all samples. In addition, rotavirus vaccine strains were detected by polymerase chain reaction in stool samples from 15% of vaccinated and 13% of nonvaccinated children in concentrations too low to be detected by enzyme immunoassay or polyacrylamide gel electrophoresis. These data support the possibility that vaccine strains spread to some unvaccinated children but indicate that the vaccine strains alone were not the cause of diarrhea.
# Vaccine Distribution, Handling, and Storage
Each dose of RRV-TV is approximately 2.5 mL in volume, supplied as a lyophilized vaccine containing 4 x 10^5 pfu total virus and one dispette of buffer diluent for reconstitution; the diluent contains 9.6 mg/mL of citric acid and 25.6 mg/mL of sodium bicarbonate. Neither vaccine nor diluent contains preservatives. Before reconstitution, RRV-TV is stable for at least 24 months when stored at room temperatures <25°C (77°F). The lyophilized vaccine and diluent may be refrigerated at temperatures between 2°C and 8°C (36°F and 45°F) but should not be frozen. Once reconstituted, the vaccine is stable for up to 60
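The storage conditions for the lyophilized vaccine can be expressed as a one-line check. The sketch below is illustrative; it deliberately omits the reconstituted case because the stability statement above is truncated in this copy (the unit after "60" is not recoverable):

```python
def lyophilized_storage_ok(temp_c: float, frozen: bool) -> bool:
    """Storage check per the conditions stated above: stable at room
    temperatures below 25 C, may be refrigerated at 2-8 C, never frozen."""
    return (not frozen) and temp_c < 25

print(lyophilized_storage_ok(4.0, frozen=False))   # True: refrigerated
print(lyophilized_storage_ok(22.0, frozen=False))  # True: room temperature
print(lyophilized_storage_ok(-5.0, frozen=True))   # False: freezing not allowed
```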
# Cost-Effectiveness of a Universal Childhood Immunization Program to Prevent Rotavirus
In a recent study that used current estimates of rotavirus disease burden, vaccine efficacy, vaccine coverage rates, and health costs, investigators estimated that a national rotavirus immunization program in which three doses of RRV-TV are administered at ages 2, 4, and 6 months would result in 227,000 fewer physician visits, 95,000 fewer emergency room visits, 34,000 fewer hospitalizations, and 13 fewer deaths per year (14 ). After revising this study model by incorporating the costs of adverse events, researchers estimated that a national rotavirus immunization program would yield savings in direct medical costs if the vaccine cost $8 or less per dose and would yield savings in total societal costs if the vaccine cost $41 or less per dose (CDC, unpublished data, 1998).
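The break-even prices above translate directly into a simple comparison. The function below is a sketch; the $8 and $41 thresholds are the cited study's estimates, not a new analysis, and the function name is illustrative:

```python
def national_program_savings(cost_per_dose_usd: float,
                             doses_per_child: int = 3) -> dict:
    """Compare a candidate per-dose price against the break-even prices
    estimated above for a 2-, 4-, 6-month three-dose schedule."""
    return {
        "saves_direct_medical_costs": cost_per_dose_usd <= 8,   # $8/dose threshold
        "saves_total_societal_costs": cost_per_dose_usd <= 41,  # $41/dose threshold
        "series_cost_usd": cost_per_dose_usd * doses_per_child,
    }

# Example: at $20/dose, the program saves total societal costs but not
# direct medical costs, and a full three-dose series costs $60.
print(national_program_savings(20))
```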
Note: The Advisory Committee on Immunization Practices (ACIP) has summarized the following rotavirus vaccine recommendations, contraindications, and precautions (see Summary Table on page 23). To provide further guidance to practitioners, the ACIP has rated the evidence for each recommendation.
# OBJECTIVE
This MMWR provides recommendations regarding rotavirus vaccine for the prevention of rotavirus gastroenteritis among children. These recommendations were developed by CDC staff members and the Rotavirus Working Group of the ACIP. This report is intended to guide clinical practice and policy development related to administration of the rotavirus vaccine to infants. Upon completion of this educational activity, the reader should be able to describe the disease burden of rotavirus in the United States; describe the characteristics and use of rhesus-based rotavirus vaccine-tetravalent (RRV-TV); identify the contraindications and precautions for the use of RRV-TV; and recognize the most common adverse events that can occur after administration of RRV-TV.
# CONTINUING EDUCATION QUESTIONS (EXPIRATION: March 19, 2000)

To receive continuing education credit, answer all of the following questions.
1. Which of the following statements is NOT true concerning the burden of rotavirus disease in the United States among children aged <5 years?
A. Rotavirus diarrhea results in more than 500,000 physician visits per year.
B. Rotavirus diarrhea is responsible for an estimated 50,000 hospitalizations per year.
C. Rotavirus accounts for 5%-10% of all diarrhea episodes.
D. Rotavirus accounts for 30%-50% of hospitalizations for diarrheal disease.
E. More than 100 deaths per year are attributed to rotavirus diarrhea.
2. Which of the following statements is true concerning rotavirus infection in children?
A. Children can be infected with rotavirus several times during their lives.
B. The first infection with rotavirus after 3 months of age is usually the most severe.
C. After a single natural infection, 40% of children are protected against any subsequent infection with rotavirus.
D. Subsequent infections with rotavirus confer progressively greater protection from rotavirus infection.
E. All the above statements are true.
3. What is the recommended route of administration of rhesus-based rotavirus vaccine-tetravalent (RRV-TV)?
4. What is the recommended course of action if an infant regurgitates or spits up all or part of a dose of RRV-TV rotavirus vaccine?
A. Repeat the dose immediately, but only if more than half of the dose was regurgitated.
B. Repeat the dose immediately regardless of the amount that was regurgitated.
C. Request that the child return the next day, and repeat the dose at that time.
D. Do not repeat the dose, and administer the remaining doses on the usual schedule.
E. Do not repeat the dose, and discontinue the vaccination series.
5. What is the most common adverse event following RRV-TV rotavirus vaccine?
A. Diarrhea
# RECOMMENDATIONS FOR THE USE OF ROTAVIRUS VACCINE

# Routine Administration
Routine immunization with three oral doses of RRV-TV is recommended for infants at ages 2, 4, and 6 months. Because natural rotavirus infections occur early in life, RRV-TV should be incorporated into the routine childhood immunization schedule. The first dose should be administered at age 2 months, the second dose at age 4 months, and the third dose at age 6 months. However, RRV-TV vaccination can be initiated at any time between the ages of 6 weeks and 6 months, with second and third doses following the preceding dose by a minimum of 3 weeks. Vaccination should not be initiated for children aged ≥7 months because these older infants might have an increased risk of fever occurring 3-5 days after receiving the first dose of vaccine (74-76). All doses of vaccine should be administered during the first year of life because data regarding the safety and efficacy of RRV-TV among children aged ≥1 year are lacking. Special efforts should be made to vaccinate children before onset of the winter rotavirus season. Infants documented to have had rotavirus gastroenteritis before receiving the full course of rotavirus vaccinations should still complete the three-dose schedule because the initial infection frequently provides only partial immunity.
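The age and interval rules above can be checked mechanically. The sketch below works in weeks and approximates a month as 4.35 weeks; both the function name and that conversion are illustrative choices, not part of the recommendation:

```python
WEEKS_PER_MONTH = 4.35  # rough conversion used for this illustration

def valid_rrv_tv_schedule(dose_ages_weeks: list) -> bool:
    """Check a proposed three-dose RRV-TV schedule against the rules
    stated above (all ages in weeks)."""
    if len(dose_ages_weeks) != 3:
        return False
    first, second, third = dose_ages_weeks
    if first < 6 or first >= 7 * WEEKS_PER_MONTH:
        return False                 # first dose: 6 weeks to <7 months
    if third >= 52:
        return False                 # all doses before the first birthday
    # Minimum 3-week interval between consecutive doses.
    return (second - first) >= 3 and (third - second) >= 3

# The recommended 2-, 4-, 6-month schedule (~9, 17, 26 weeks) passes;
# a first dose at ~8 months (35 weeks) does not.
print(valid_rrv_tv_schedule([9, 17, 26]))   # True
print(valid_rrv_tv_schedule([35, 40, 44]))  # False
```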
RRV-TV is recommended for children who are breastfed. Although breastfeeding can slightly decrease the child's humoral immune response to RRV-TV after a first dose, no significant decrease in immune response or in overall efficacy has been observed among breastfed babies compared with nonbreastfed babies after three doses (p>0.9) (64,66,77,78 ).
RRV-TV can be administered together with DTP (or DTaP), Hib vaccine, OPV, IPV, and hepatitis B vaccine. RRV-TV is safe and effective when administered with other vaccines. Available evidence suggests that the vaccine does not interfere significantly with the immune response to DTP, Hib vaccine, IPV, or hepatitis B vaccine, and interference with DTaP is not expected to occur (63 ) (unpublished data, Wyeth-Lederle, 1998). Some children who receive RRV-TV and OPV concurrently have slightly decreased immune responses to RRV-TV and serotype 1 poliovirus after the first dose of vaccine, but no decrease is evident after three doses of these vaccines (56,62,64 ). No decrease in efficacy against rotavirus has been found among children receiving OPV compared with children not receiving OPV, although the sample size in this study was limited (64 ).
Like other vaccines, RRV-TV can be administered to infants with transient, mild illnesses, with or without low-grade fever.
# Contraindications

# Altered Immunity
RRV-TV is not recommended for infants who have known or suspected immunodeficiency. Children with primary immunodeficiency disorders and both children and adults who have received hematopoetic, hepatic, or renal transplants are at risk for severe or prolonged rotavirus gastroenteritis and can shed rotavirus for prolonged periods (20-22,79-81). One study also identified rotavirus infection of liver and kidney tissue in a small number of severely immunodeficient children (79). Because the safety and efficacy of RRV-TV are not established in these populations, RRV-TV should not be administered to infants with compromised immune status because of immunosuppressive disease or therapies, leukemia, lymphoma, or other malignancies. The safety of RRV-TV has not been established in children with chronic granulomatous disease and other primary disorders of neutrophil function, but no evidence of increased severity of rotavirus infection has been observed in these children. RRV-TV should not be administered to infants born to mothers with human immunodeficiency virus (HIV) infection, unless a clinician has established that the infant is not HIV-infected.
# Allergy to Vaccine Components
RRV-TV should not be administered to persons who have hypersensitivity to any component of the vaccine (e.g., aminoglycoside antibiotics, monosodium glutamate, or amphotericin B) or who have experienced an anaphylactic reaction to a previous dose of RRV-TV.
# Acute Gastrointestinal Disease
RRV-TV should not be administered to infants with acute, moderate to severe vomiting or diarrhea until the condition resolves; however, vaccination might be warranted for infants with mild gastrointestinal illness. RRV-TV has not been studied among infants with concurrent gastrointestinal disease. Although RRV-TV is probably safe for infants with gastrointestinal disease, immunogenicity and efficacy can theoretically be compromised. For example, infants who receive OPV during an acute diarrheal illness might have diminished poliovirus antibody responses to OPV (82 ). Although similar studies with RRV-TV have not been reported, health-care providers should be aware of the theoretical potential for diminished immunogenicity and efficacy among infants with diarrhea. Therefore, RRV-TV should be withheld from infants with acute, moderate to severe vomiting or diarrhea. Vaccination of infants with mild gastrointestinal illness might be warranted if the delay in vaccination against rotavirus is expected to be substantial. Otherwise, infants with acute gastroenteritis should be vaccinated as soon as the condition resolves.
# Moderate to Severe Febrile Illness
Infants with moderate to severe febrile illness should be vaccinated as soon as they have recovered from the acute phase of the illness (83 ). This precaution avoids superimposing adverse effects of the vaccine on the underlying illness or mistakenly attributing a manifestation of the underlying illness to the vaccine.
# Precautions and Special Situations
# Premature Infants (i.e., those born at <37 weeks' gestation)

Practitioners should consider the potential risks and benefits of vaccinating premature infants against rotavirus. Limited data suggest that premature infants are at increased risk for hospitalization from diarrheal disease during their first year of life. The ACIP supports immunization of prematurely born infants if they a) are at least 6 weeks of age, b) are being or have been discharged from the hospital nursery, and c) are clinically stable. However, the number of premature infants studied in clinical trials is insufficient to confidently establish the safety and efficacy of RRV-TV for all premature infants. The lower level of maternal antibody to rotaviruses in very-low-birthweight, premature infants theoretically could increase the risk of fever from rotavirus vaccine. Until further data are available, the ACIP considers that the benefits of RRV-TV vaccination of premature infants outweigh the theoretical risks.
# Exposure of Immunocompromised Persons to Vaccinated Infants
Infants living in households with persons who have or are suspected of having an immunodeficiency disorder or impaired immune status can be vaccinated. Most experts believe the protection of the immunocompromised household member afforded by immunization of young children in the household probably outweighs the small risk of transmitting vaccine virus to the immunocompromised household member and any subsequent theoretical risk of vaccine virus-associated disease. To minimize potential virus transmission, all members of the household should employ measures such as good hand washing after contact with the feces of the vaccinated infant (e.g., after changing a diaper).
# Recent Administration of Antibody-Containing Blood Products
No restrictions are necessary regarding the timing of administering RRV-TV and antibody-containing blood products. Although no data are available concerning the efficacy of RRV-TV administered simultaneously with antibody-containing blood products, data from studies of OPV indicate that simultaneous administration of OPV with these products does not affect OPV immunogenicity.
# Preexisting Chronic Gastrointestinal Disease
Practitioners should consider the potential risks and benefits of administering rotavirus vaccine to infants. Infants with preexisting chronic gastrointestinal conditions might benefit from RRV-TV vaccination. However, the safety and efficacy of RRV-TV have not been established for infants with these preexisting conditions (e.g., congenital malabsorption syndromes, Hirschsprung's disease, short-gut syndrome, or persistent vomiting of unknown cause).
# Regurgitation of Vaccine
The practitioner should not readminister a dose of vaccine to an infant who regurgitates, spits out, or vomits during or after administration of vaccine. The infant can receive the remaining recommended doses of RRV-TV at appropriate intervals outlined previously (see Routine Administration). Data are limited regarding the safety of administering a dose of RRV-TV higher than the recommended dose and on the efficacy of administering a partial dose. Additional data on safety and efficacy are needed to evaluate the benefits and risks of readministration.
# Late or Incomplete Immunization
Pending additional data, initial vaccination of children aged ≥7 months or administration of any dose of RRV-TV to children on or after their first birthday is not recommended. If a child fails to receive RRV-TV on the recommended schedule of 2, 4, and 6 months together with other routine immunizations, the child can receive the first dose of vaccine at any time after age 6 weeks but before age 7 months. Second and third doses of RRV-TV can be administered at any time during the first year of life as long as at least a 3-week interval separates doses. Data from the efficacy trials regarding administration of second and third doses are limited to children aged ≤8 months.
# Hospitalization After Vaccination
If a recently vaccinated child is hospitalized for any reason, no precautions other than routine universal precautions need be taken to prevent the spread of vaccine virus in the hospital setting.
# Latex Hypersensitivity
Health-care workers with a history of latex sensitivity should handle this vaccine with caution because its packaging contains dry natural rubber.
# ADVERSE EVENTS AFTER ROTAVIRUS VACCINATION
Serious adverse events that occur after administration of rotavirus vaccine should be reported to the Vaccine Adverse Events Reporting System (VAERS). The National Childhood Vaccine Injury Act of 1986 requires health-care providers to report to VAERS any serious adverse events that occur after vaccination, but persons other than health-care workers can also report adverse events. Adverse events that must be reported after rotavirus vaccination are those described in the manufacturer's package insert as contraindications to additional doses of vaccine (84 ). Other adverse events occurring after administration of a vaccine, especially events that are serious or unusual, also should be reported to VAERS, regardless of the provider's opinion about whether the association is causal. VAERS reporting forms and information can be requested 24 hours a day by calling (800) 822-7967 or by accessing the VAERS World-Wide Web site at .
RRV-TV has been administered to almost 7,000 infants aged 6-28 weeks in three doses of at least 4 x 10^5 pfu, including 2,208 infants in placebo-controlled studies (55,56,58,76) (unpublished data, Wyeth-Lederle, 1997), and 4,740 infants in three studies that were not placebo-controlled (unpublished data, Wyeth-Lederle, 1997). The vaccine has been associated with a statistically significant excess of fever following the first dose compared with placebo (>38°C in 21% of vaccinated children versus 6% of placebo recipients; higher fevers in 2% versus 1%). A smaller excess of fever also was noted after the second dose of RRV-TV; no increase in any symptoms was noted after the third dose of RRV-TV.
In the placebo-controlled trials, investigators found no overall difference in the rate of diarrhea (55,56,58,76 ) (unpublished data, Wyeth-Lederle, 1997). However, in the efficacy study in Finland (58 ), vaccinated children had a significantly increased rate of diarrhea after the first dose of vaccine compared with placebo recipients (2.8% versus 1.4% ) (Table 3); the diarrhea was associated with the presence of fever (85 ). No evidence exists that RRV-TV causes vomiting.
Initial reports noted failure to thrive or growth delay rarely, but more frequently among RRV-TV recipients than among placebo recipients, in the Finland and U.S. efficacy trials (18/2,015 vaccinated children versus 6/2,023 placebo recipients) (unpublished data, Wyeth-Lederle, 1997). On blinded expert review, most cases were found to represent normal variation in growth rates; five cases (three among vaccinated children and two among placebo recipients) were suspected of representing abnormal growth delays.
In all studies of rhesus rotavirus vaccines combined, intussusception was noted in five of 10,054 (0.05%) recipients of any reassortant rhesus vaccine (two of these five children received RRV-TV) compared with one of 4,633 placebo recipients. The difference between the rates of intussusception in these groups was not statistically significant (p=0.92 for children receiving vaccine; p=0.45 for children receiving placebo), and the rates observed among vaccinated children were similar to those seen in comparison populations (86 ). Although the association of these events with RRV-TV appears to be temporal rather than causal, postlicensure surveillance is needed for these and other rare adverse events that might occur. Data are limited on adverse events after RRV-TV is administered to premature infants. Of 23 premature infants who were ≤35 weeks' gestational age and who received RRV-TV, one infant developed fever (38.6 C on day 2 after vaccination) and two infants developed diarrhea (one infant on days 2 and 5 after vaccination and the other infant on days 6 and 12) (unpublished data, Wyeth-Lederle, 1997).
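For readers who want the counts above on a common scale, the following sketch converts them to rates per 10,000 infants. This is descriptive arithmetic only; the p-values quoted in the text come from the report's own analysis, not from this snippet:

```python
# Intussusception counts reported above, by study arm.
vaccine_cases, vaccine_n = 5, 10_054
placebo_cases, placebo_n = 1, 4_633

rate_vaccine = 10_000 * vaccine_cases / vaccine_n   # ~5.0 per 10,000
rate_placebo = 10_000 * placebo_cases / placebo_n   # ~2.2 per 10,000

print(f"vaccine arm: {rate_vaccine:.1f} per 10,000; "
      f"placebo arm: {rate_placebo:.1f} per 10,000")
```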
The recommendation for routine rotavirus immunization is made in view of the high morbidity associated with rotavirus gastroenteritis and the favorable cost-effectiveness of immunization. Among approximately 20,000 children immunized to date, the vaccine has been found to be generally safe and well tolerated. As with any new vaccine, rare adverse events might be identified when many more children are immunized, and postlicensure surveillance will be required to identify such rare events.
# Detection of Unusual Strains of Rotavirus
A national strain surveillance system of sentinel laboratories has been established to monitor the prevalence of rotavirus strains before and after the introduction of rotavirus vaccines. This system is designed to detect unusual strains that might not be effectively prevented by vaccination and that might affect the success of the immunization program.
# Research
Future research should include studies to determine the safety and efficacy of RRV-TV administered to infants born prematurely, infants with immune deficiencies, infants who live in households with immunocompromised persons, infants with chronic gastrointestinal disease, and children aged >1 year. Postlicensure studies also should be conducted to determine the relative efficacy of fewer than three doses of vaccine and to address the cost-effectiveness of vaccination programs in various settings.
# Education of Health-Care Providers and Parents
The success of a rotavirus immunization program depends on the acceptance and enthusiasm of physicians and other health-care providers who care for children. Vaccination program personnel will benefit from education about rotavirus disease and rotavirus vaccine. Parental education on rotavirus diarrhea and on the vaccine also will be essential to establish and maintain public confidence in this vaccine and to avoid confusion when cases of diarrhea in early childhood result from nonrotaviral etiologies that RRV-TV cannot prevent.
# Implementation
Physicians and health-care providers will require time and resources to incorporate this new vaccine into practice. Therefore, full implementation of these recommendations will not be achieved immediately. During the period of implementation, postmarketing surveillance should be conducted to further delineate the benefits and risks of rotavirus vaccine.
# FUTURE NEEDS IN ROTAVIRUS SURVEILLANCE, RESEARCH, EDUCATION, AND IMPLEMENTATION
# Surveillance Incidence of Rotavirus Gastroenteritis
Rotavirus gastroenteritis is not a reportable disease, and testing for rotavirus infection is not always performed when a child seeks medical care for acute gastroenteritis. Therefore, additional efforts will be needed to establish rotavirus disease surveillance systems that are adequately sensitive and specific to document the effectiveness of immunization programs. Current national surveillance systems include a) review of national hospital discharge databases for rotavirus-specific or rotavirus-compatible diagnoses and b) reports of rotavirus isolation from a sentinel system of laboratories. Additional systems will be needed to provide the timely representative data necessary | 8,937 | {
"id": "13822a18c90165bae8b9202110a2d25675e51872",
"source": "cdc",
"title": "None",
"url": "None"
} |
In recent years, concern has increased regarding use of biologic materials as agents of terrorism, but these same agents are often necessary tools in clinical and research microbiology laboratories. Traditional biosafety guidelines for laboratories have emphasized use of optimal work practices, appropriate containment equipment, well-designed facilities, and administrative controls to minimize risk of worker injury and to ensure safeguards against laboratory contamination. The guidelines discussed in this report were first published in 1999 (U.S. Department of Health and Human Services/CDC and National Institutes of Health. Biosafety in microbiological and biomedical laboratories [BMBL]. Richmond JY, McKinney RW, eds. 4th ed. Washington, DC: US Department of Health and Human Services, 1999). In that report, physical security concerns were addressed, and efforts were focused on preventing unauthorized entry to laboratory areas and preventing unauthorized removal of dangerous biologic agents from the laboratory. Appendix F of BMBL is now being revised to include additional information regarding personnel, risk assessments, and inventory controls. The guidelines contained in this report are intended for laboratories working with select agents under biosafety-level 2, 3, or 4 conditions as described in Sections II and III of BMBL. These recommendations include conducting facility risk assessments and developing comprehensive security plans to minimize the probability of misuse of select agents. Risk assessments should include systematic, site-specific reviews of 1) physical security; 2) security of data and electronic technology systems; 3) employee security; 4) access controls to laboratory and animal areas; 5) procedures for agent inventory and accountability; 6) shipping/transfer and receiving of select agents; 7) unintentional incident and injury policies; 8) emergency response plans; and 9) policies that address breaches in security. The security plan should be an integral part of daily operations. All employees should be well trained and equipped, and the plan should be reviewed at least annually.

Recommendation: Develop procedures for transferring or shipping select agents from the laboratory.

- Package, label, and transport select agents in conformance with all applicable local, federal, and international transportation and shipping regulations, including U.S. Department of Transportation (DOT) regulations. ¶ Materials that are transported by airline carrier should also comply with packaging and shipping regulations set by

# Introduction
Traditional laboratory biosafety guidelines have emphasized use of optimal work practices, appropriate containment equipment, well-designed facilities, and administrative controls to minimize risks of unintentional infection or injury for laboratory workers and to prevent contamination of the outside environment (1). Although clinical and research microbiology laboratories might contain dangerous biologic, chemical, and radioactive materials, to date, only a limited number of reports have been published of materials being used intentionally to injure laboratory workers or others (2)(3)(4)(5)(6)(7). However, recently, concern has increased regarding possible use of biologic, chemical, and radioactive materials as terrorism agents (8,9). In the United States, recent terrorism incidents (10) have resulted in the substantial enhancement of existing regulations and creation of new regulations governing laboratory security to prevent such incidents.
The Public Health Security and Bioterrorism Preparedness and Response Act of 2002* (the Act) required institutions to notify the U.S. Department of Health and Human Services (DHHS) or the U.S. Department of Agriculture (USDA) of the possession of specific pathogens or toxins (i.e., select agents†), as defined by DHHS, or certain animal and plant pathogens or toxins (i.e., high-consequence pathogens), as defined by USDA. The Act provides for expanded regulatory oversight of these agents and a process for limiting access to them to persons who have a legitimate need to handle or use such agents. The Act also requires specified federal agencies to promulgate implementing regulations. Security guidance for laboratories was published previously in BMBL (1); however, that publication primarily addressed physical security concerns (e.g., preventing unauthorized entry to laboratory areas and preventing unauthorized removal of dangerous biologic agents from the laboratory). The guidelines presented here are provided to assist facility managers with meeting the regulatory mandate of 42 Code of Federal Regulations (CFR) Part 73 and, therefore, include information regarding personnel, risk assessments, and inventory controls. These guidelines are intended for laboratories where select agents are used under biosafety levels (BSL) 2, 3, or 4 as described in Sections II and III of BMBL. Appendix F of BMBL is being revised to include consideration of the following biosecurity policies and procedures:
- risk and threat assessment;
- facility security plans;
- physical security;
- data and electronic technology systems;
- security policies for personnel;
- policies regarding accessing the laboratory and animal areas;
- specimen accountability;
- receipt of agents into the laboratory;
- transfer or shipping of select agents from the laboratory;
- emergency response plans; and
- reporting of incidents, unintentional injuries, and security breaches.
# Definitions
Biosafety: Development and implementation of administrative policies, work practices, facility design, and safety equipment to prevent transmission of biologic agents to workers, other persons, and the environment.
Biosecurity: Protection of high-consequence microbial agents and toxins, or critical relevant information, against theft or diversion by those who intend to pursue intentional misuse.
Biologic Terrorism: Use of biologic agents or toxins (e.g., pathogenic organisms that affect humans, animals, or plants) for terrorist purposes.
Responsible official: A facility official who has been designated the responsibility and authority to ensure that the requirements of Title 42, CFR, Part 73, are met.
Risk: A measure of the potential loss of a specific biologic agent of concern, on the basis of the probability of occurrence of an adversary event, effectiveness of protection, and consequence of loss.
Select agent: Specifically regulated pathogens and toxins as defined in Title 42, CFR, Part 73, including pathogens and toxins regulated by both DHHS and USDA (i.e., overlapping agents or toxins).
Threat: The capability of an adversary, coupled with intentions, to undertake malevolent actions.
Threat assessment: A judgment, based on available information, of the actual or potential threat of malevolent action.
Vulnerability: An exploitable capability, security weakness, or deficiency at a facility. Exploitable capabilities or weaknesses are those inherent in the design or layout of the biologic laboratory and its protection, or those existing because of the failure to meet or maintain prescribed security standards when evaluated against defined threats.
Vulnerability assessment: A systematic evaluation process in which qualitative and quantitative techniques are applied to arrive at an effectiveness level for a security system to protect biologic laboratories and operations from specifically defined acts that can oppose or harm a person's interest.
# Risk Assessment
Recommendation: Conduct a risk assessment and threat analysis of the facility as a precursor to the security plan.
Background: In April 1998, the General Accounting Office issued a report regarding terrorism (11). A key finding of that report was that threat and risk assessments are widely recognized as valid decision-support tools for establishing and prioritizing security program requirements. A threat analysis, the first step in determining risk, identifies and evaluates each threat on the basis of different factors (e.g., the capability and intent to attack an asset, the likelihood of a successful attack, and the attack's probable lethality). Risk management is the deliberate process of understanding risk (i.e., the likelihood that a threat will harm an asset with certain severity of consequences) and deciding on and implementing actions to reduce that risk. Risk management principles are based on acknowledgment that 1) although risk usually cannot be eliminated, it can be reduced by enhancing protection from validated and credible threats; 2) although threats are possible, certain threats are more probable than others; and 3) all assets are not equally critical. Therefore, each facility should implement certain measures to enhance security regarding select agents. The following actions should assist decision-makers in implementing this recommendation:
- Each facility should conduct a risk assessment and threat analysis of its assets and select agents; a minimal risk-scoring sketch follows this list. The threat should be defined against the vulnerabilities of the laboratory to determine the necessary components of a facility security plan and system (12,13).
- The risk assessment should include a systematic approach in which threats are defined and vulnerabilities are examined; risks associated with those vulnerabilities are mitigated with a security systems approach (12,13).
- Ensure the security plan includes collaboration between senior management, scientific staff, human resource officials, information technology (IT) staff, engineering officials, and security officials. This coordinated approach is critical to ensuring that security recommendations provide a reasonable and adequate assurance of laboratory security without unduly impacting the scientific work.

§ Public Law 107-56, October 26, 2001.
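One way to make the threat-vulnerability-consequence logic concrete is to score assets and rank them for the security plan. The sketch below is illustrative only: the 1-5 scales, the multiplicative model, and the asset names are assumptions for demonstration, not prescriptions from 42 CFR Part 73 or the GAO report.

```python
# Illustrative risk scoring: risk = threat likelihood x vulnerability x consequence.
# Scales (1-5) and assets are hypothetical; real assessments must be
# site-specific and developed with security professionals.
from dataclasses import dataclass

@dataclass
class AssetRisk:
    asset: str
    threat: int         # 1 (unlikely) .. 5 (capable, motivated adversary)
    vulnerability: int  # 1 (well protected) .. 5 (easily exploited weakness)
    consequence: int    # 1 (minor loss) .. 5 (catastrophic loss or misuse)

    @property
    def risk_score(self) -> int:
        return self.threat * self.vulnerability * self.consequence

assets = [
    AssetRisk("select-agent freezer, BSL-3 suite", threat=3, vulnerability=2, consequence=5),
    AssetRisk("agent inventory database", threat=3, vulnerability=4, consequence=3),
    AssetRisk("loading dock receiving area", threat=2, vulnerability=4, consequence=4),
]

# Highest-risk assets first, to prioritize mitigations in the security plan.
for a in sorted(assets, key=lambda a: a.risk_score, reverse=True):
    print(f"{a.risk_score:3d}  {a.asset}")
```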
# Facility Security Plans
Recommendation: Establish a facility security plan.
# Security Policies for Personnel
Recommendation: Establish security-related policies for all personnel.
- Honest, reliable, and conscientious workers represent the foundation of an effective security program. Facility administrators and laboratory directors should be familiar with all laboratory workers.
- Establish a policy for screening employees who require access to select agent areas, to include full- and part-time employees, contractors, emergency personnel, and visitors. Additional screening might be necessary for employees who require access to other types of sensitive or secure data and work areas. These screening procedures should be commensurate with the sensitivity of the data and work areas (e.g., federal security clearances for government employees and contractors).
- Ensure that all workers approved for access to select agents (e.g., students, research scientists, and other short-term employees) wear visible identification badges that include, at a minimum, a photograph, the wearer's name, and an expiration date. Facility administrators should consider using easily recognizable marks on the identification badges to indicate access to sensitive or secure areas.
# Access Control
Recommendation: Control access to areas where select agents are used or stored.
- Consolidate laboratory work areas to the greatest extent possible to implement security measures more effectively. Separate select agent areas from the public areas of the buildings. Lock all select agent areas when unoccupied. Use keys or other security devices to permit entry into these areas.
- Methods of secure access and monitoring controls can include key or electronic locking pass keys, combination keypads, use of lock-boxes to store materials in freezers or refrigerators, video surveillance cameras, or other control requirements. Protocols for periodically changing combination keypad access numbers should be developed.
- Assess the need for graded levels of security protection on the basis of site-specific risk and threat analysis. This security can be accomplished through card access systems, biometrics, or other systems that provide restricted access.
- Lock all freezers, refrigerators, cabinets, and other containers where select agents are stored when they are not in direct view of a laboratory worker.
- Limit access to select agent areas to authorized personnel who have been cleared by the U.S. Department of Justice as indicated in 42 CFR Part 73. All others entering select agent areas must be escorted and monitored by authorized personnel.
- Record all entries into these areas, including entries by visitors, maintenance workers, service workers, and others needing one-time or occasional entry (see the logging sketch after this list).
- Limit routine cleaning, maintenance, and repairs to hours when authorized employees are present and able to serve as escorts and monitors.
- Establish procedures and training for admitting repair personnel or other contractors who require repetitive or emergency access to select agent areas.
- Ensure visitors are issued identification badges, including name and expiration date, and are escorted and monitored into and out of select agent areas. Such visits should be kept to a minimum.
- Ensure procedures are in place for reporting and removing unauthorized persons. These procedures should be developed through collaboration among senior scientific, administrative, and security management personnel. They should be included in security training and reviewed for compliance at least annually.
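As a minimal sketch of the entry-recording idea from the list above, the snippet below appends entries to a log and enforces the escort rule for unauthorized persons. The file path, field names, and escort check are illustrative assumptions, not requirements quoted from the guidelines; a real system would also need tamper protection and retention policies.

```python
# Sketch of an append-only entry log for a select agent area.
# Path, fields, and the escort rule are illustrative assumptions.
import csv
from datetime import datetime, timezone

LOG_PATH = "select_agent_area_entries.csv"  # hypothetical path

def record_entry(person: str, badge_id: str, authorized: bool,
                 escort: str | None = None) -> None:
    # Visitors, maintenance, and service workers must be escorted
    # and monitored by authorized personnel.
    if not authorized and not escort:
        raise ValueError("unescorted entry by unauthorized person")
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            person, badge_id,
            "authorized" if authorized else "escorted",
            escort or "",
        ])

record_entry("A. Technician", "B-1041", authorized=True)
record_entry("HVAC contractor", "V-17", authorized=False, escort="A. Technician")
```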
# Select Agent Accountability
Recommendation: Establish a system of accountability for select agents.
- Establish an accounting procedure to ensure adequate control of select agents and maintain an up-to-date inventory of seed stocks, toxins, and agents in long-term storage (a minimal record sketch follows this list). Records should include data regarding the agent's location, use, storage method, inventory, external transfers (sender/receiver, transfer date, and amount), internal transfers (sender/receiver, transfer date, and amount), further distribution, and destruction (method, amount, date, and a point of contact).
- Establish procedures that maintain accurate and up-to-date records of authorizations for entry into limited access areas (i.e., a current list of persons who possess door keys and those who have knowledge of keypad access numbers or the security system).
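The record-keeping fields named above map naturally onto a structured inventory entry. This is a minimal sketch with hypothetical values; the field names follow the elements listed in the first bullet, and an operational system would additionally need access controls and a tamper-evident audit trail.

```python
# Sketch of a select agent inventory record with a transfer audit trail.
# Field names follow the elements listed above; all values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Transfer:
    sender: str
    receiver: str
    date: str    # ISO date, e.g. "2002-09-15"
    amount: str  # e.g. "2 vials"
    kind: str    # "internal" or "external"

@dataclass
class AgentRecord:
    agent: str
    location: str
    storage_method: str
    quantity_on_hand: str
    transfers: list[Transfer] = field(default_factory=list)
    destruction: str | None = None  # method, amount, date, point of contact

stock = AgentRecord(
    agent="Example select agent (hypothetical)",
    location="Freezer 2, Room 214",
    storage_method="lock-box in -80 C freezer",
    quantity_on_hand="10 vials",
)
stock.transfers.append(
    Transfer("Lab A", "Lab B", "2002-09-15", "2 vials", kind="internal")
)
```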
# Receiving Select Agents
Recommendation: Develop procedures for bringing select agent specimens into the laboratory.
- A centralized receiving area for select agents is recommended to maximize safety and minimize security hazards associated with damaged or unknown packages.
- Consider circumstances that might require the emergency relocation of select agents to another secure location.
- Reevaluate and train employees and conduct exercises of the emergency response plan at least annually.
# Incident Reporting
Recommendation: Establish a protocol for reporting adverse incidents.
- Ensure that laboratory directors, in cooperation with facility safety, security, and public relations officials, have policies and procedures in place for reporting and investigating unintentional injuries, incidents (e.g., unauthorized personnel in restricted areas, missing biologic agents or toxins, and unusual or threatening phone calls), or breaches in security measures.
- DHHS or USDA should be notified immediately if select agents are discovered to be missing, released outside the laboratory, involved in worker exposures or infections, or misused. Additionally, all incidents involving select agents (e.g., occupational exposure or breaches of primary containment) should be reported to local and state public health authorities.
"id": "705ceb5967953eabce9f7f71fc2b6f8048537aba",
"source": "cdc",
"title": "None",
"url": "None"
} |
epfl-llm/guidelines | # Updated Recommendations for Use of VariZIG -United States, 2013
In December 2012, the Food and Drug Administration (FDA) approved VariZIG, a varicella zoster immune globulin preparation (Cangene Corporation, Winnipeg, Canada) for use in the United States for postexposure prophylaxis of varicella for persons at high risk for severe disease who lack evidence of immunity to varicella* and for whom varicella vaccine is contraindicated (1). Previously available under an investigational new drug (IND) expanded access protocol, VariZIG, a purified immune globulin preparation made from human plasma containing high levels of anti-varicella-zoster virus antibodies (immunoglobulin G), is the only varicella zoster immune globulin preparation currently available in the United States. VariZIG is now approved for administration as soon as possible following varicella-zoster virus exposure, ideally within 96 hours (4 days) for greatest effectiveness (2). CDC recommends administration of VariZIG as soon as possible after exposure to the varicella-zoster virus and within 10 days. CDC also has revised the patient groups recommended by the Advisory Committee on Immunization Practices (ACIP) to receive VariZIG by extending the period of eligibility for previously recommended premature infants from exposures to varicella-zoster virus during the neonatal period to exposures that occur during the entire period for which they require hospital care for their prematurity. The CDC recommendations for VariZIG use are now harmonized with the American Academy of Pediatrics (AAP) recommendations (3). This report summarizes data on the timing of administration of varicella zoster immune globulin in relation to exposure to varicella-zoster virus and provides the CDC updated recommendations for use of VariZIG that replace the 2007 ACIP recommendations.
# Background
Studies conducted in the late 1960s indicated that clinical varicella was prevented in susceptible, healthy children by administration of zoster immune globulin (ZIG) (prepared from patients recovering from herpes zoster) within 72 hours of household exposure (4). ZIG also lowered attack rates and modified disease severity among susceptible immunocompromised children when administered within 72 hours after household exposure (5,6). The definitions for susceptible children varied across studies and included children with negative or unknown history of varicella or those who were seronegative for varicella-zoster antibodies. The first commercial varicella zoster immune globulin preparation available in the United States, VZIG, was prepared from plasma obtained from healthy, volunteer blood donors identified by routine screening to have high antibody titers to varicella-zoster virus, and became available in 1978. Both serologic and clinical evaluations demonstrated that VZIG was equivalent to ZIG in preventing or modifying clinical illness in susceptible, immunocompromised children if administered within 96 hours of exposure to varicella (7,8). In a study of immunocompromised children who were administered VZIG within 96 hours of exposure, approximately one in five exposed children developed clinical varicella, and one in 20 developed subclinical disease compared with 65%-85% attack rates among historical controls (8). Among those in the study who became ill, the severity of clinical varicella (evaluated by percentage of patients with >100 lesions or with complications) was lower than expected on the basis of historic controls. The effectiveness of VZIG when administered >96 hours after initial exposure was not evaluated. Based on these findings and the licensure indications of the VZIG available in the United States, ACIP recommended VZIG for use within 96 hours of exposure (9). In February 2006, the VZIG supply was discontinued and a new product, VariZIG, became available under an IND protocol for administration within 96 hours of exposure (9,10).
# Methods
These recommendations reflect the ACIP work group discussions and review of scientific evidence related to use of varicella zoster immune globulin conducted during the development of the ACIP statements on prevention of varicella, as well as a review of published literature, including reports on immune globulins with high anti-varicella-zoster virus antibodies used outside the United States >4 days after exposure to varicella-zoster virus. When data were not available, expert opinion was considered.
# Summary of Rationale for VariZIG Recommendations
Timing of VariZIG administration. In May 2011, the FDA approved amendment of the IND protocol to extend the period for administration of VariZIG after exposure to varicella-zoster virus from 4 days (96 hours) to 10 days. Subsequently, in 2012, CDC published notification of FDA agreement with administration of investigational VariZIG as soon as possible after exposure and within 10 days (11). Limited experience from outside the United States with use of other immune globulin products with high levels of anti-varicella-zoster virus antibodies suggested that, compared with administration of the immune globulins within 4 days of exposure, administration >4 days (up to 10 days) after exposure resulted in comparable incidence of varicella and attenuation of disease (12)(13)(14)(15). One study indicated an increase in varicella incidence with increasing time between exposure and administration of ZIG, but disease was attenuated in all cases (16). Considering these data, CDC recommends that VariZIG be administered as soon as possible after exposure and within 10 days. AAP also recommends administration of VariZIG within 10 days of exposure (3).

* Evidence of immunity to varicella includes 1) documentation of age-appropriate vaccination with varicella vaccine, 2) laboratory evidence of immunity or laboratory confirmation of disease, 3) birth in the United States before 1980 (except for health-care personnel, pregnant women, and immunocompromised persons), or 4) health-care provider diagnosis or verification of a history of varicella or herpes zoster. For immunocompromised children aged 12 months to 6 years, 2 doses of varicella vaccine are considered age-appropriate vaccination.
Patient groups for whom VariZIG is recommended. In anticipation of availability of a licensed product for which the supply is projected to be adequate and to harmonize with recommendations from AAP, CDC revised the patient groups previously recommended by ACIP for use of VariZIG. The change refers to extending the period of eligibility for VariZIG administration for previously recommended premature infants from exposures to varicella-zoster virus during the neonatal period to exposures that occurred during the entire period for which they require hospital care for their prematurity. The risk for complications of postnatally acquired varicella in premature infants is unknown. Because the immune systems of premature infants (some of whom might be extremely low birthweight and spend months in neonatal intensive care units) might be compromised, they are considered, on the basis of expert opinion, at high risk for severe varicella; this increased risk is likely continued for as long as these infants remain hospitalized. Patients receiving monthly high-dose (≥400 mg/kg) immune globulin intravenous (IGIV) are likely to be protected and probably do not require VariZIG if the most recent dose of IGIV was administered ≤3 weeks before exposure (9).
# CDC Recommendations for Use of VariZIG
The decision to administer VariZIG depends on three factors: 1) whether the patient lacks evidence of immunity to varicella, 2) whether the exposure is likely to result in infection, and 3) whether the patient is at greater risk for varicella complications than the general population. For high-risk patients who have additional exposures to varicella-zoster virus ≥3 weeks after initial VariZIG administration, another dose of VariZIG should be considered.
Timing of VariZIG administration. CDC recommends administration of VariZIG as soon as possible after exposure to varicella-zoster virus and within 10 days.
Patient groups for whom VariZIG is recommended. Patients without evidence of immunity to varicella who are at high risk for severe varicella and complications, who have been exposed to varicella or herpes zoster, and for whom varicella vaccine is contraindicated, should receive VariZIG. Patient groups recommended by CDC to receive VariZIG include the following:
- Immunocompromised patients without evidence of immunity.
- Newborn infants whose mothers have signs and symptoms of varicella around the time of delivery (i.e., 5 days before to 2 days after).
- Hospitalized premature infants born at ≥28 weeks of gestation whose mothers do not have evidence of immunity to varicella.
- Hospitalized premature infants born at <28 weeks of gestation or who weigh ≤1,000 g at birth, regardless of their mothers' evidence of immunity to varicella.
- Pregnant women without evidence of immunity.
# VariZIG Administration
VariZIG is supplied in 125-IU vials and should be administered intramuscularly as directed by the manufacturer. The recommended dose is 125 IU/10 kg of body weight, up to a maximum of 625 IU (five vials). The minimum dose is 62.5 IU (0.5 vial) for patients weighing ≤2.0 kg and 125 IU (one vial) for patients weighing 2.1-10.0 kg (2).
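The weight-band arithmetic above is easy to get wrong at the margins, so a small sketch may help. The rounding of intermediate weights up to the next whole 125-IU vial is an assumption consistent with the 125 IU per 10 kg rule stated above; the manufacturer's package insert, not this sketch, governs actual dosing.

```python
# Sketch of the VariZIG dose rule described above.
# Assumption: weights between bands round up to the next whole vial
# (125 IU per started 10 kg), capped at five vials (625 IU).
# The manufacturer's package insert governs actual dosing.
import math

def varizig_dose_iu(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 2.0:
        return 62.5        # half of a 125-IU vial
    if weight_kg <= 10.0:
        return 125.0       # one vial
    vials = min(math.ceil(weight_kg / 10.0), 5)
    return vials * 125.0   # up to the 625-IU maximum

for w in (1.5, 8.0, 24.0, 70.0):
    print(f"{w:5.1f} kg -> {varizig_dose_iu(w):5.1f} IU")
```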
Unchanged from previous recommendations (9), for patients who become eligible for vaccination, varicella vaccine should be administered ≥5 months after VariZIG administration. Because varicella zoster immune globulin might prolong the incubation period by ≥1 week, any patient who receives VariZIG should be observed closely for signs and symptoms of varicella for 28 days after exposure. Antiviral therapy should be instituted immediately if signs or symptoms of varicella occur. The most common adverse reactions following VariZIG administration were pain at the injection site (2%) and headache (2%) (2). Contraindications for VariZIG administration include a history of anaphylactic or severe systemic reactions to human immune globulins and IgA-deficient patients with antibodies against IgA and a history of hypersensitivity (2).
# How to Obtain VariZIG
VariZIG can be ordered from the exclusive U.S. distributor, FFF Enterprises (Temecula, California; telephone, 800-843-7477).
# Comment
The demand for VariZIG has declined significantly, commensurate with declining incidence of varicella (9). Nevertheless, exposures from varicella and from herpes zoster might still occur. Extending the time window for administration of VariZIG should increase availability of postexposure prophylaxis with VariZIG for persons at high risk for severe varicella. However, physicians are reminded that VariZIG should be administered as soon as possible following exposure. CDC recommendations for use of this product are now harmonized with those of AAP (3).
# Reported by
Mona
"id": "c90a6542a5546ee4ce56a17bd138700adfbba8de",
"source": "cdc",
"title": "None",
"url": "None"
} |
This NIOSH CIB, based on our assessment of the currently available scientific information about this widely used material, (1) reviews the animal and human data relevant to assessing the carcinogenicity and other adverse health effects of TiO2, (2) provides a quantitative risk assessment using dose-response information from the rat and human lung dosimetry modeling and recommended occupational exposure limits for fine and ultrafine (including engineered nanoscale) TiO2, and (3) describes exposure monitoring techniques, exposure control strategies, and research needs. This report only addresses occupational exposures by inhalation, and conclusions derived here should not be inferred to pertain to nonoccupational exposures. NIOSH recommends exposure limits of 2.4 mg/m3 for fine TiO2 and 0.3 mg/m3 for ultrafine (including engineered nanoscale) TiO2, as time-weighted average (TWA) concentrations for up to 10 hours per day during a 40-hour work week. NIOSH has determined that ultrafine TiO2 is a potential occupational carcinogen but that there are insufficient data at this time to classify fine TiO2 as a potential occupational carcinogen. However, as a precautionary step, NIOSH used all of the animal tumor response data when conducting dose-response modeling and determining separate RELs for ultrafine and fine TiO2. These recommendations represent levels that over a working lifetime are estimated to reduce risks of lung cancer to below 1 in 1,000. NIOSH realizes that knowledge about the health effects of nanomaterials is an evolving area of science. Therefore, NIOSH intends to continue dialogue with the scientific community and will consider any comments about nano-size titanium dioxide for future updates of this document.

# Foreword
The purpose of the Occupational Safety and Health Act of 1970 (Public Law 91-596) is to assure safe and healthful working conditions for every working person and to preserve our human resources. In this Act, the National Institute for Occupational Safety and Health (NIOSH) is charged with recommending occupational safety and health standards and describing exposures that are safe for various periods of employment, including (but not limited to) the exposures at which no worker will suffer diminished health, functional capacity, or life expectancy as a result of his or her work experience.
Current Intelligence Bulletins (CIBs) are issued by NIOSH to disseminate new scientific information about occupational hazards. A CIB may draw attention to a formerly unrecognized hazard, report new data on a known hazard, or disseminate information about hazard control. CIBs are distributed to representatives of academia, industry, organized labor, public health agencies, and public interest groups as well as to federal agencies responsible for ensuring the safety and health of workers.

Titanium dioxide (TiO2), an insoluble white powder, is used extensively in many commercial products, including paint, cosmetics, plastics, paper, and food, as an anticaking or whitening agent. It is produced and used in the workplace in varying particle-size fractions, including fine and ultrafine sizes. The number of U.S. workers currently exposed to TiO2 dust is unknown.
# Executive Summary
In this Current Intelligence Bulletin, the National Institute for Occupational Safety and Health (NIOSH) reviews the animal and human data relevant to assessing the carcinogenicity of titanium dioxide (TiO2) (Chapters 2 and 3), presents a quantitative risk assessment using dose-response data in rats for both cancer (lung tumors) and noncancer (pulmonary inflammation) responses and extrapolation to humans with lung dosimetry modeling (Chapter 4), provides recommended exposure limits (RELs) for fine and ultrafine (including engineered nanoscale) TiO2 (Chapter 5), describes exposure monitoring techniques and exposure control strategies (Chapter 6), and discusses avenues of future research (Chapter 7). This report only addresses occupational exposures by inhalation, and conclusions derived here should not be inferred to pertain to nonoccupational exposures.

TiO2 (Chemical Abstract Service Number 13463-67-7) is a noncombustible, white, crystalline, solid, odorless powder. TiO2 is used extensively in many commercial products, including paints and varnishes, cosmetics, plastics, paper, and food as an anticaking or whitening agent. Production in the United States was an estimated 1.45 million metric tons per year in 2007. The number of U.S. workers currently exposed to TiO2 dust is not available.
TiO2 is produced and used in the workplace in varying particle size fractions, including fine (which is defined in this document as all particle sizes collected by respirable particle sampling) and ultrafine (defined as the fraction of respirable particles with a primary particle diameter of <0.1 µm). Particles <100 nm are also defined as nanoparticles.
The Occupational Safety and Health Administration (OSHA) permissible exposure limit for TiO2 is 15 mg/m3, based on the airborne mass fraction of total TiO2 dust (Chapter 1). In 1988, NIOSH recommended that TiO2 be classified as a potential occupational carcinogen and that exposures be controlled as low as feasible. This recommendation was based on the observation of lung tumors (nonmalignant) in a chronic inhalation study in rats at 250 mg/m3 of fine TiO2 [Lee et al. 1985, 1986a] (Chapter 3).
Later, a 2-year inhalation study showed a statistically significant increase in lung cancer in rats exposed to ultrafine TiO2 at an average concentration of 10 mg/m3. Two recent epidemiologic studies have not found a relationship between exposure to total or respirable TiO2 and lung cancer, although an elevation in lung cancer mortality was observed among male TiO2 workers in the latter study when compared to the general population (standardized mortality ratio 1.23; 95% confidence interval = 1.10-1.38) (Chapter 2). However, there was no indication of an exposure-response relationship in that study. Nonmalignant respiratory disease mortality was not increased significantly (P <0.05) in any of the epidemiologic studies.
In 2006, the International Agency for Research on Cancer (IARC) reviewed TiO2 and concluded that there was sufficient evidence of carcinogenicity in experimental animals and inadequate evidence of carcinogenicity in humans (Group 2B), "possibly carcinogenic to humans." TiO2 and other poorly soluble, low-toxicity (PSLT) particles of fine and ultrafine sizes show a consistent dose-response relationship for adverse pulmonary responses in rats, including persistent pulmonary inflammation and lung tumors, when dose is expressed as particle surface area. The higher mass-based potency of ultrafine TiO2 compared to fine TiO2 is associated with the greater surface area of ultrafine particles for a given mass. The NIOSH RELs for fine and ultrafine TiO2 reflect this mass-based difference in potency (Chapter 5). NIOSH has reviewed and considered all of the relevant data related to respiratory effects of TiO2, including results from animal inhalation studies and epidemiologic studies. NIOSH has concluded that TiO2 is not a direct-acting carcinogen, but acts through a secondary genotoxicity mechanism that is not specific to TiO2 but primarily related to particle size and surface area. The most relevant data for assessing the health risk to workers are results from a chronic animal inhalation study with ultrafine (<100 nm) TiO2 in which a statistically significant increase in adenocarcinomas was observed. This is supported by a pattern of TiO2-induced responses that include persistent pulmonary inflammation in rats and mice and cancer responses for PSLT particles related to surface area. Therefore, on the basis of the study by Heinrich et al. and the pattern of pulmonary inflammatory responses, NIOSH has determined that exposure to ultrafine TiO2 should be considered a potential occupational carcinogen.
For fine size (pigment grade) TiO2 (>100 nm), the data on which to assess carcinogenicity are limited. Generally, the epidemiologic studies for fine TiO2 are inconclusive because of inadequate statistical power to determine whether they replicate or refute the animal dose-response data. This is consistent for carcinogens of low potency. The only chronic animal inhalation study, which demonstrated the development of lung tumors (bronchioalveolar adenomas) in response to inhalation exposure of rats to fine sized TiO2, did so at a dose of 250 mg/m3 but not at 10 or 50 mg/m3. The absence of lung tumor development for fine TiO2 was also reported by Muhle et al. in rats exposed at 5 mg/m3. However, the responses observed in animal studies exposed to ultrafine and fine TiO2 are consistent with a continuum of biological response to TiO2 that is based on particle surface area. In other words, all the rat tumor response data on inhalation of TiO2 (ultrafine and fine) fit on the same dose-response curve when dose is expressed as total particle surface area in the lungs. However, exposure concentrations greater than 100 mg/m3 are generally not considered acceptable inhalation toxicology practice today. Consequently, in a weight-of-evidence analysis, NIOSH questions the relevance of the 250 mg/m3 dose for classifying exposure to TiO2 as a carcinogenic hazard to workers and, therefore, concludes that there are insufficient data at this time to classify fine TiO2 as a potential occupational carcinogen. Although data are insufficient on the cancer hazard for fine TiO2, the tumor-response data are consistent with that observed for ultrafine TiO2 when converted to a particle surface area metric. Thus, to be cautious, NIOSH used all of the animal tumor response data when conducting dose-response modeling and determining separate RELs for ultrafine and fine TiO2.
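The surface-area argument can be made concrete with the geometric relation for smooth monodisperse spheres, specific surface area = 6/(density × diameter). The sketch below uses the rutile density given later in this document (4.25 g/ml) and illustrative diameters of 250 nm (fine, pigment grade) and 25 nm (ultrafine); it is a back-of-envelope illustration of why ultrafine particles carry roughly ten times the surface area per unit mass, not a reproduction of the NIOSH dosimetry model.

```python
# Back-of-envelope: surface area per unit mass of monodisperse TiO2 spheres.
# SSA = 6 / (rho * d); the diameters chosen here are illustrative.
def specific_surface_area_m2_per_g(diameter_nm: float, density_g_cm3: float) -> float:
    d_m = diameter_nm * 1e-9          # nm -> m
    rho_kg_m3 = density_g_cm3 * 1e3   # g/cm3 -> kg/m3
    ssa_m2_per_kg = 6.0 / (rho_kg_m3 * d_m)
    return ssa_m2_per_kg / 1e3        # m2/kg -> m2/g

rho_rutile = 4.25                                             # g/cm3
fine = specific_surface_area_m2_per_g(250.0, rho_rutile)      # ~5.6 m2/g
ultrafine = specific_surface_area_m2_per_g(25.0, rho_rutile)  # ~56 m2/g

print(f"fine (250 nm):     {fine:5.1f} m2/g")
print(f"ultrafine (25 nm): {ultrafine:5.1f} m2/g")
print(f"ratio:             {ultrafine / fine:.0f}x")
```

Under these assumed diameters, the fine REL (2.4 mg/m3 × roughly 5.6 m2/g) and the ultrafine REL (0.3 mg/m3 × roughly 56 m2/g) imply airborne surface-area concentrations of the same order of magnitude, which is what a common surface-area dose metric would predict.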
NIOSH also considered the crystal structure as a modifying factor in TiO2 carcinogenicity and inflammation. The evidence for crystal-dependent toxicity is from observed differences in reactive oxygen species (ROS) generated on the surface of TiO2 of different crystal structures (e.g., anatase, rutile, or mixtures) in cell-free systems, with differences in cytotoxicity in in vitro studies and with greater inflammation and cell proliferation at early time points following intratracheal instillation in rats. However, when rats were exposed to TiO2 in subchronic inhalation studies, no difference in pulmonary inflammation response to fine and ultrafine TiO2 particles of different crystal structure (i.e., 99% rutile vs. 80% anatase/20% rutile) was observed once dose was adjusted for particle surface area. Therefore, NIOSH concludes that the scientific evidence supports surface area as the critical metric for occupational inhalation exposure to TiO2.
NIOSH also evaluated the potential for coatings to modify the toxicity of TiO2, as many industrial processes apply coatings to TiO2 particles. TiO2 toxicity has been shown to increase after coating with various substances. However, the toxicity of TiO2 has not been shown to be attenuated by application of coatings. NIOSH concluded that the TiO2 risk assessment could be used as a reasonable floor for potential toxicity, with the notion that toxicity may be substantially increased by particle treatment and process modification. These findings are based on the studies in the scientific literature and may not apply to other formulations, surface coatings, or treatments of TiO2 for which data were not available. An extensive review of the risks of coated TiO2 particles is beyond the scope of this document.
NIOSH recommends airborne exposure limits of 2.4 mg/m3 for fine TiO2 and 0.3 mg/m3 for ultrafine (including engineered nanoscale) TiO2, as time-weighted average (TWA) concentrations for up to 10 hr/day during a 40-hour work week. These recommendations represent levels that over a working lifetime are estimated to reduce risks of lung cancer to below 1 in 1,000. The recommendations are based on using chronic inhalation studies in rats to predict lung tumor risks in humans.
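Because the RELs are expressed as TWA concentrations, comparing a shift's exposure against them reduces to a time-weighted average of the sampled concentrations. A minimal sketch follows; the sample concentrations and durations are hypothetical.

```python
# Time-weighted average over a shift; sample values are hypothetical.
# The RELs above are TWAs for up to 10 hr/day during a 40-hr work week.
def twa_mg_m3(samples: list[tuple[float, float]]) -> float:
    """samples: (concentration in mg/m3, duration in hours) pairs."""
    total_hours = sum(hours for _, hours in samples)
    return sum(conc * hours for conc, hours in samples) / total_hours

shift = [(0.4, 3.0), (0.1, 4.0), (0.6, 3.0)]  # hypothetical 10-hour shift
REL_ULTRAFINE = 0.3                           # mg/m3

exposure = twa_mg_m3(shift)
print(f"TWA = {exposure:.2f} mg/m3; exceeds ultrafine REL: {exposure > REL_ULTRAFINE}")
```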
In the hazard classification (Chapter 5), NIOSH concludes that the adverse effects of inhaling TiO2 may not be material-specific but appear to be due to a generic effect of PSLT particles in the lungs at sufficiently high exposure. While NIOSH concludes that there is insufficient evidence to classify fine TiO2 as a potential occupational carcinogen, NIOSH is concerned about the potential carcinogenicity of ultrafine and engineered nanoscale TiO2 if workers are exposed at the current mass-based exposure limits for respirable or total mass fractions of TiO2. NIOSH recommends controlling exposures as low as possible, below the RELs. Sampling recommendations based on current methodology are provided (Chapter 6).
Although sufficient data are available to assess the risks of occupational exposure to TiO2, additional research questions have arisen. There is a need for exposure assessment for workplace exposure to ultrafine TiO2 in facilities producing or using TiO2. Other research needs include evaluation of the (1) exposure-response relationship of TiO2 and other PSLT particles and human health effects, (2) fate of ultrafine particles in the lungs and the associated pulmonary responses, and (3) effectiveness of engineering controls for controlling exposures to fine and ultrafine TiO2. (Research needs are discussed further in Chapter 7.)

TiO2 is insoluble in water, hydrochloric acid, nitric acid, or alcohol, and it is soluble in hot concentrated sulfuric acid, hydrogen fluoride, or alkali. TiO2 has several naturally occurring mineral forms, or polymorphs, which have the same chemical formula and different crystalline structure. Common TiO2 polymorphs include rutile (CAS Number 1317-80-2) and anatase (CAS Number 1317-70-0). While both rutile and anatase belong to the tetragonal crystal system, rutile has a denser arrangement of atoms (Figure 1).
At temperatures greater than 915°C, anatase reverts to the rutile structure. The luster and hardness of anatase and rutile are also similar, but the cleavage differs. The density (specific gravity) of rutile is 4.25 g/ml, and that of anatase is 3.9 g/ml. Common impurities in rutile include iron, tantalum, niobium, chromium, vanadium, and tin, while those in anatase include iron, tin, vanadium, and niobium.
[Figure 1. Crystal structures of rutile and anatase.]

The sulfate process and the chloride process are the two main industrial processes that produce TiO2 pigment. In the sulfate process, anatase or rutile TiO2 is produced by digesting ilmenite (iron titanate) or titanium slag with sulfuric acid. In the chloride process, natural or synthetic rutile is chlorinated at temperatures of 850 to 1,000°C, and the titanium tetrachloride (TiCl4) is converted to the rutile form by vapor-phase oxidation. Both anatase and rutile are used as white pigment. Rutile TiO2 is the most commonly used white pigment because of its high refractive index and relatively low absorption of light. Anatase is used for specialized applications (e.g., in paper and fibers). TiO2 does not absorb visible light, but it strongly absorbs ultraviolet (UV) radiation. Commercial rutile TiO2 is prepared with an average particle size of 0.22 µm to 0.25 µm. Pigment-grade TiO2 refers to anatase and rutile pigments with a median particle size that usually ranges from 0.2 µm to 0.3 µm. Particle size is an important determinant of the properties of pigments and other final products.
# Uses
TiO2 is used mainly in paints, varnishes, lacquer, paper, plastic, ceramics, rubber, and printing ink. TiO2 is also used in welding rod coatings, floor coverings, catalysts, coated fabrics and textiles, cosmetics, food colorants, glassware, pharmaceuticals, roofing granules, rubber tire manufacturing, and in the production of electronic components and dental impressions. Both the anatase and rutile forms of TiO2 are semiconductors. TiO2 white pigment is widely used due to its high refractive index. Since the 1960s, TiO2 has been coated with other materials (e.g., silica, alumina) for commercial applications.
# Production and Number of Workers Potentially Exposed
An estimate of the number of U.S. workers currently exposed to TiO2 dust is not available. The only current information is an unreferenced estimate submitted by industry to NIOSH in response to a request for public comment on the draft document. Industry estimates that the number of U.S. workers in the "so-called 'white' end of TiO2 production plants" is "approximately 1,100 workers nationwide" and that there is no reliable estimate of the number of workers involved in the "initial compounding of downstream products."
In 2007, an estimated 1.45 million metric tons of TiO2 pigment were produced by four U.S. companies at eight facilities in seven states that employed an estimated 4,300 workers (jobs not described). The paint (including varnishes and lacquers), plastic and rubber, and paper industries accounted for an estimated 95% of TiO2 pigment used in the United States in 2004. In 2006, the U.S. Bureau of Labor Statistics estimated that there were about 68,000 U.S. workers in all occupations (excluding self-employed workers) in paint, coating, and adhesive manufacturing (North American Industry Classification System [NAICS] code 325500), 803,000 in plastics and rubber products manufacturing (NAICS code 326000), and about 138,000 employed in pulp, paper, and paperboard mills (NAICS code 322100). In 1991, TiO2 was the 43rd highest-volume chemical produced in the United States.
# Current Exposure Limits and Particle Size Definitions
Occupational exposure to TiO2 is regulated by the Occupational Safety and Health Administration (OSHA) under the permissible exposure limit (PEL) of 15 mg/m3 for TiO2 as total dust (8-hr time-weighted average concentration).
The OSHA PEL for particles not otherwise regulated (PNOR) is 5 mg/m3 as respirable dust. These and other exposure limits for TiO2 and PNOR or particles not otherwise specified (PNOS) are listed in Table 1. PNOR/S are defined as all inert or nuisance dusts, whether mineral, inorganic, or organic, not regulated specifically by substance name by OSHA (PNOR) or classified by the American Conference of Governmental Industrial Hygienists (ACGIH) (PNOS). The same exposure limits are often given for TiO2 and PNOR/PNOS (Table 1). OSHA definitions for the total and respirable particle size fractions refer to specific sampling methods and devices, while the maximum concentration value in the workplace (MAK) and the ACGIH definitions for respirable and inhalable particle sizes are based on the internationally developed definitions of particle size selection sampling.
Aerodynamic diameter affects how a particle behaves in air and determines the probability of deposition at locations within the respiratory tract. Aerodynamic diameter is defined as the diameter of a spherical particle that has the same settling velocity as a particle with a density of 1 g/cm3 (the density of a water droplet).
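For a smooth sphere that is large compared with the gas mean free path, this definition reduces to the standard approximation d_ae = d_p × sqrt(rho_p / (chi × rho_0)), with rho_0 = 1 g/cm3 and chi the dynamic shape factor (1 for spheres). The sketch below applies that approximation; it deliberately ignores the slip correction, which matters for ultrafine particles, so treat it as illustrative only.

```python
# Aerodynamic diameter, continuum-regime approximation for spheres.
# d_ae = d_p * sqrt(rho_p / (chi * rho_0)); slip correction is ignored,
# so this is a poor approximation for ultrafine (<0.1 um) particles.
import math

def aerodynamic_diameter_um(physical_diameter_um: float,
                            density_g_cm3: float,
                            shape_factor: float = 1.0) -> float:
    rho_0 = 1.0  # g/cm3, unit-density reference (water droplet)
    return physical_diameter_um * math.sqrt(density_g_cm3 / (shape_factor * rho_0))

# A 0.25-um rutile sphere (density 4.25 g/cm3) settles like a ~0.5-um
# unit-density droplet.
print(f"{aerodynamic_diameter_um(0.25, 4.25):.2f} um")
```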
"Respirable" is defined as the aerodynamic size of particles that, when inhaled, are capable of depositing in the gas-exchange (alveolar) region of the lungs . Sampling methods have been developed to estimate the airborne mass concentration of respirable particles .
"Fine" is defined in this document as all parti cle sizes that are collected by respirable particle sampling (i.e., 50% collection efficiency for par ticles of 4 ^m, with some collection of particles up to 10 ^m). Fine is sometimes used to refer to the particle fraction between 0.1 ^m and ap proximately 3 ^m , and to pigment-grade TiO2 . The term "fine" has been replaced by "respirable" by some organizations, e.g., MAK , which is consistent with international sampling conventions .
"Ultrafine" is defined as the fraction of respirable particles with prim ary particle diame ter <0.1 ^m (<100 nm), which is a widely used definition. Particles <100 nm are also defined as nanoparticles. A primary particle is defined as the smallest identifiable subdivision of a par ticulate system . Additional methods are needed to determine if an airborne respirable particle sample includes ultrafine TiO2 (Chapter 6). In this document, the terms fine and respirable are used interchangeably to re tain both the common term inology and the international sampling convention.
In 1988, NIOSH classified TiO2 as a potential occupational carcinogen and did not establish a recommended exposure limit (REL) for TiO2. This classification was based on the observation that TiO2 caused lung tumors in rats in a long-term, high-dose bioassay.

[Table 1. Exposure limits for TiO2 and PNOR/S; the table itself did not survive conversion. Recoverable notes: ACGIH lists TiO2 as not classifiable as a human carcinogen, and TiO2 is under study by ACGIH; the MAK classification for ultrafine particles is suspected carcinogen (Category 3A), with inhalable and respirable fractions distinguished; the TiO2 MAK value has been withdrawn, and pregnancy risk group C is not applicable. Abbreviations: ACGIH = American Conference of Governmental Industrial Hygienists; MAK = Federal Republic of Germany Maximum Concentration Values in the Workplace; NIOSH = National Institute for Occupational Safety and Health; OSHA = Occupational Safety and Health Administration; PNOR/S = particles not otherwise regulated or specified; TWA = time-weighted average; TLV = threshold limit value. Total, inhalable, and respirable refer to the particulate size fraction as defined by the respective agencies. The PNOS guideline (too little evidence to assign a TLV) applies to insoluble or poorly soluble, low-toxicity (PSLT) particles without an applicable TLV. MAK values are long-term averages; single-shift excursions are permitted within a factor of 2 of the MAK value.]
# Human Studies
# Case Reports
Case reports can provide information about the potential health effects of exposure to titanium dioxide (TiO2) that may lead to formal epidemiologic studies of a relationship between occupational exposure and observed cases.
A few case reports described adverse health effects in workers with potential TiO2 exposure. These effects included adenocarcinoma of the lung and TiO2-associated pneumoconiosis in a male TiO2 packer with 13 years of potential dust exposure and a 40-year history of smoking. Pulmonary fibrosis or fibrotic changes and alveolar macrophage responses were identified by thoracotomy or autopsy tissue sampling in three workers with 6 to 9 years of dusty work in a TiO2 factory. No workplace exposure data were reported. Two workers were "moderate" or "heavy" smokers (pack-years not reported), and smoking habits were not reported for the other worker. Small amounts of silica were present in all three lung samples, and significant nickel was present in the lung tissue of the autopsied case. Exposure was confirmed using sputum samples that contained macrophages with high concentrations of titanium 2 to 3 years after their last exposure. Titanium particles were identified in the lymph nodes of the autopsied case. The lung concentrations of titanium were higher than the lung concentration range of control autopsy specimens from patients not exposed to TiO2 (statistical testing and number of controls not reported).

Moran et al. presented cases of TiO2 exposure in four males and two females. However, occupation was unknown for one male and one female, and the lung tissue of one worker (artist/painter) was not examined (skin biopsy of arm lesions was performed). Smoking habits were not reported. Diffuse fibrosing interstitial pneumonia, bronchopneumonia, and alveolar metaplasia were reported in three male patients (a TiO2 worker, a painter, and a paper mill worker) with lung-deposited TiO2 (rutile) and smaller amounts of tissue-deposited silica. Titanium was also identified in the liver, spleen, and one peribronchial lymph node of the TiO2 worker, and talc was identified in the lungs of that patient and the paper mill worker.
A case of pulmonary alveolar proteinosis (i.e., deposition of proteinaceous and lipid material within the airspaces of the lung) was reported in a worker employed for more than 25 years as a painter, with 8 years of spray painting experience. He smoked two packs of cigarettes per day until he was hospitalized. Titanium was the major type of metallic particle found in his lung tissues.
According to a four-sentence abstract from the Toxic Exposure Surveillance System (TESS) of the American Association of Poison Control Centers, death occurred suddenly in a 26-year-old male worker while pressure-cleaning inside a tank containing TiO2; death was "felt to be due to inhalation of this particulate chemical." There was no other information about the cause of death or indication that an autopsy was conducted.

TESS data are used for hazard identification, education, and training.
In pathology studies of TiO2 workers, tissue-deposited titanium was often used to confirm exposure. In many cases, titanium, rather than TiO2, was identified in lung tissues; the presence of TiO2 was inferred when a TiO2-exposed worker had pulmonary deposition of titanium (e.g., Ophus et al.; Rode et al.; Maatta and Arstila; Elo et al.; Humble et al.). In other case reports, X-ray crystallography identified TiO2 (i.e., anatase) in tissue digests, and X-ray diffraction distinguished rutile from anatase. Similarly, with the exception of one individual in whom talc was identified, pathology studies (i.e., Elo et al.; Moran et al.) identified the silica as "SiO2" (silicon dioxide) or "silica" in tissue and did not indicate whether it was crystalline or amorphous.
In summary, few TiO2-related health effects were identified in case reports. None of the case reports provided quantitative industrial hygiene information about workers' TiO2 dust exposure. Lung particle analyses indicated that workers exposed to respirable TiO2 had particle retention in their lungs that included titanium, silica (form not specified), and other minerals, sometimes years after cessation of exposure. The chronic tissue reaction to lung-deposited titanium is distinct from chronic silicosis. Most cases of tissue-deposited titanium presented with a local macrophage response with associated fibrosis that was generally mild, but of variable severity, at the site of deposition. More severe reactions were observed in a few cases. The prevalence of similar histopathologic responses in other TiO2-exposed populations is not known. The effects of concurrent or sequential exposure to carcinogenic particles, such as crystalline silica, nickel, and tobacco smoke, were not determined.
# Epidemiologic Studies
A few epidemiologic studies have evaluated the carcinogenicity of TiO2 in humans; they are described here and in Table 2-1. Epidemiologic studies of workers exposed to related compounds, such as TiCl4 or titanium metal dust (i.e., Fayerweather et al. and Garabrant et al.), were not included because those compounds may have properties and effects that differ from those of TiO2, and discussion of those differences is beyond the scope of this document.
# Chen and Fayerweather
Chen and Fayerweather conducted a mortality, morbidity, and nested case-control study of 2,477 male wage-grade workers employed for more than 1 year before January 1, 1984, in two TiO2 production plants in the United States. The objectives of the study were to determine if workers potentially exposed to TiO2 had higher risks of lung cancer, chronic respiratory disease, pleural thickening/plaques, or pulmonary fibrosis than referent groups.
Of the 2,477 male workers, 1,576 were potentially exposed to TiO2. Other exposures included TiCl4, pigmentary potassium titanate (PKT), and asbestos. (The TiCl4-exposed workers were evaluated in Fayerweather et al.)

[Table 2-1. Epidemiologic studies of workers exposed to TiO2; the table itself did not survive conversion. Recoverable fragments: one study was limited by a small number of cases ever exposed to TiO2 (n = 33) and by self- or proxy-reporting of occupational exposures, and most TiO2 fume-exposed cases (n = 5) and controls (n = 1) were also exposed to chromium and nickel. A column of SMRs (1.0, 1.0, 0.8, 0.4, 0.8, 0.7) with corresponding intervals (0.8-1.3, 0.5-1.7, 0.6-1.2, 0.1-1.3, 0.8-0.9, 0.6-0.9) was reported for another study, in which lung cancer and nonmalignant respiratory disease SMRs were not elevated significantly, with the exception of lung cancer death in the subgroup of shortest-term workers (0-9 years worked) with >20 years since first hire (SMR for cancer of the trachea, bronchus, and lung = 1.5; 95% CI 1.0-2.3; P < 0.05; number of deaths in subgroup not reported); internal analyses with models found no significant exposure-response trends for those diseases. Noted study limitations: (1) short follow-up period (average 21 years), with about half the cohort born after 1940; (2) more than half worked fewer than 10 years; (3) limited data on nonoccupational factors (e.g., smoking). Boffetta et al. subgroups: 43 ever exposed; 9 substantial exposure; 29 low exposure; 9 medium exposure; 5 high exposure; 22 worked 1-21 years; 21 worked ≥22 years. Table footnotes: 90% acceptance range for the expected number of deaths or cases; reported as "not statistically significantly elevated"; 90% CI.]
A cumulative exposure index, duration of exposure, and TWA exposure were derived and used in the analyses (details not provided).
Chest radiographic examination was used to detect fibrosis and pleural abnormalities, and the most recent chest X-ray of active employees (on 1/1/1984) was read blindly by two B readers. Chest X-ray films were not available for retired and terminated workers.
Observed numbers of cancer morbidity cases (i.e., incident cases) were compared to expected numbers based on company rates. Observed numbers of deaths were compared to expected numbers from company rates and national rates. Ninety percent (90%) "acceptance ranges" were calculated for the expected numbers of cases or deaths. The nested case-control study investigated decedent lung cancer and chronic respiratory disease, incident lung cancer and chronic respiratory disease (not described), and radiographic chest abnormalities. Incidence data from the company's insurance registry were available from 1956 to 1985 for cancer and chronic respiratory disease. Mortality data from 1957 to 1983 were obtained from the company mortality registry. The study reported the number of observed deaths for the period 1935-1983; the source for deaths prior to 1957 is not clear. Vital status was determined for "about 94%" of the study cohort, and death certificates were obtained for "about 94%" of workers known to be deceased.
The observed number of deaths from all cancers was lower than the expected number based on U.S. mortality rates; however, the observed number of deaths from all causes was greater than the expected number when based on company mortality rates (194 deaths observed; 175.5 expected; 90% acceptance range for the expected number of deaths = 154-198). Lung cancer deaths were lower than the expected number based on national rates (9 deaths observed/17.3 expected = 0.52; 90% acceptance range for the expected number of deaths = 11-24) and company rates (9 deaths observed/15.3 deaths expected = 0.59; 90% acceptance range for the expected number of deaths = 9-22). Lung cancer morbidity was not greater than expected (company rates; 8 cases observed; 7.7 expected; 90% acceptance range for the expected number of cases = 3-13).
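The arithmetic behind these comparisons is straightforward; the sketch below (illustrative only, not code from the study) computes the SMR for the national-rate comparison above and approximates the 90% "acceptance range" for the expected count with central Poisson quantiles. The function names are hypothetical.

```python
from scipy.stats import poisson

def smr(observed: float, expected: float) -> float:
    """Standardized mortality ratio: observed deaths / expected deaths."""
    return observed / expected

def acceptance_range(expected: float, level: float = 0.90):
    """Central Poisson quantiles around the expected count; one way to
    approximate the 90% 'acceptance range' described in the text."""
    alpha = (1 - level) / 2
    return poisson.ppf(alpha, expected), poisson.ppf(1 - alpha, expected)

# Lung cancer deaths vs. national rates (Chen and Fayerweather):
print(round(smr(9, 17.3), 2))   # 0.52, as reported
print(acceptance_range(17.3))   # roughly (11, 24), as reported
```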
Nested case-control analyses found no association between TiO2 exposure and lung cancer morbidity after adjusting for age and exposure to TiCl4, PKT, and asbestos (16 lung cancer cases; 898 controls; TiO2 odds ratio = 0.6). The OR did not increase with increasing average exposure, duration of exposure, or cumulative exposure index. No statistically significant positive relationships were found between TiO2 exposure and cases of chronic respiratory disease (88 cases; 898 noncancer, nonrespiratory disease controls; TiO2 OR = 0.8). Chest X-ray findings from 398 films showed few abnormalities: there were four subjects with "questionable nodules" but none with fibrosis. Pleural thickening or plaques were present in 5.6% (n = 19) of the workers potentially exposed to TiO2 compared with 4.8% (n = 3) in the unexposed group. Case-control analyses of 22 cases and 372 controls with pleural abnormalities found a nonstatistically significant OR of 1.4 for those potentially exposed and no consistent exposure-response relationship. This study did not report statistically significant increased mortality from lung cancer, chronic respiratory disease, or fibrosis associated with titanium exposure. However, it has limitations (note: the study component or information affected by the limitation is mentioned, when possible): (1) Existence of quantitative exposure data for respirable TiO2 after 1975 is implied; the type of measurement (e.g., total, respirable, or submicrometer), type of sample (e.g., area or personal), number of samples, sampling location and times, nature of samples (e.g., epidemiologic study or compliance survey), and breathing zone particle sizes were not reported. (Exposure data were used in the nested case-control analyses of morbidity and mortality.) (2) The report did not describe the number of workers, cases, or deaths in each exposure duration quartile, which could contribute to understanding of all component results. (3) The presence of other chemicals and asbestos could have acted as confounders. (4) Incidence and mortality data were not described in detail and could have been affected by the healthy worker effect.
(5) Company registries were the only apparent source for some incidence and mortality information (e.g., company records may have been based on those workers eligible for pensions and thus not typical of the general workforce).
# Fryzek et al.
Fryzek et al. conducted a retrospective cohort mortality study of 4,241 workers with potential exposure to TiO2 employed on or after 1/1/1960 for at least 6 months at four TiO2 production plants in the United States.
The plants used either a sulfate process or a chloride process to produce TiO2 from the original ore. Nearly 2,400 records of air sampling measurements of sulfuric acid mist, sulfur dioxide, hydrogen sulfide, hydrogen chloride, chlorine, TiCl4, and TiO2 were obtained from the four plants. Most were area samples and many were of short duration. Full-shift or near full-shift personal samples (n = 914; time-weighted averaging not reported) for total TiO2 dust were used to estimate relative exposure concentrations between jobs over time. Total mean TiO2 dust levels declined from 13.7 mg/m3 in 1976-1980 to 3.1 mg/m3 during 1996-2000. Packers, micronizers, and addbacks had about 3 to 6 times higher exposure concentrations than other jobs. Exposure categories, defined by plant, job title, and calendar years in the job, were created to examine mortality patterns in those jobs where the potential for TiO2 exposure was greatest.
Mortality of 409 female workers and 3,832 male workers was followed until 12/31/2000 (average follow-up time = 21 years; standard deviation = 11 years). The number of expected deaths was based on mortality rates by sex, age, race, time period, and the state where the plant was located, and standardized mortality ratios (SMRs) and confidence intervals (CIs) were calculated. Cox proportional hazards (PH) models, which adjusted for effects of age, sex, geographic area, and date of hire, were used to estimate relative risks (RRs) of TiO2 exposure (i.e., average intensity, duration, and cumulative exposure) in medium or high exposure groups versus the lowest exposure group.
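For readers unfamiliar with this class of internal analysis, the sketch below fits a Cox PH model of the general form described above using the lifelines package. The data are synthetic and the column names hypothetical; it illustrates only the modeling step, not the study's actual data or results.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 3, n)  # cumulative exposure: 0 = low (referent), 1 = medium, 2 = high

# One synthetic row per worker, with covariates like those adjusted for above.
df = pd.DataFrame({
    "years_followed": rng.exponential(21, n),    # follow-up time
    "died": rng.integers(0, 2, n),               # event indicator
    "age_at_hire": rng.normal(30, 8, n),
    "male": rng.integers(0, 2, n),
    "hired_after_1970": rng.integers(0, 2, n),
    "exposure_medium": (group == 1).astype(int), # dummy-coded vs. "low"
    "exposure_high": (group == 2).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="died")
cph.print_summary()  # exp(coef) column gives hazard ratios with 95% CIs
```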
Of the 4,241 workers (58% white, 22% nonwhite, 20% unknown race, 90% male), 958 did not have adequate work history information and were omitted from some plant analyses. Some company records from the early period may have been lost or destroyed; however, the study authors "found no evidence to support such an assumption". Wage status (i.e., hourly or salaried) was not described. Thirty-five percent of workers had been employed in jobs with the highest potential for TiO2 exposure. Workers experienced a significantly low overall mortality (533 deaths; SMR = 0.8; 95% CI = 0.8-0.9; P < 0.05). No significantly increased SMRs were found for any specific cause of death, and there were no trends with exposure. (Results were not reported by category of race.) The number of deaths from trachea, bronchus, or lung cancer was not greater than expected (i.e., 61 deaths; SMR = 1.0; 95% CI = 0.8-1.3). However, there was a significant 50% elevated SMR for workers employed 9 years or less with at least 20 years since hire (SMR = 1.5; 95% CI = 1.0-2.3; number of deaths not reported). SMRs for this cancer did not increase with increasing TiO2 concentrations (i.e., as evidenced by job category in Table 6 of the study). Workers in jobs with greatest TiO2 exposure had significantly fewer than expected total deaths (112 deaths; SMR = 0.7; 95% CI = 0.6-0.9), and mortality from cancers of trachea, bronchus, or lung was not greater than expected (11 deaths; SMR = 1.0; 95% CI = 0.5-1.7). Internal analyses (i.e., Cox PH models) revealed no significant trends or exposure-response associations for total cancers, lung cancer, or other causes of death. No association between TiO2 exposure and increased risk of cancer death was observed in this study (i.e., Fryzek et al.).
Limitations of this study include (1) about half the cohort was born after 1940; lung cancer in these younger people would be less frequent, and the latency from first exposure to TiO2 would be short; (2) duration of employment was often quite short; (3) no information about ultrafine exposures (probably because collection methods were not available over the course of the study); and (4) limited data on nonoccupational factors (e.g., smoking). Smoking information abstracted from medical records from 1960 forward of 2,503 workers from the four plants showed no imbalance across job groups. In all job groups, the prevalence of smoking was about 55%, and it declined over time by decade of hire. However, the information was inadequate for individual adjustments for smoking.
Fryzek et al. performed additional analyses in response to a suggestion that the RRs may have been artificially low, especially in the highest category of cumulative exposure, because of the statistical methods used. These analyses yielded hazard ratios similar to those in the original analysis and found no significant exposure-response relationships for lung cancer mortality and cumulative TiO2 exposure (i.e., "low," "medium," "high") with either a time-independent exposure variable or a time-dependent exposure variable and a 15-year exposure lag (adjusted for age, sex, geographic area, and date of hire). The hazard ratio for trachea, bronchus, and lung cancer from "medium" cumulative TiO2 exposure (15-year lag) was greater than 1.0 (hazard ratio for "medium" cumulative exposure, time-dependent exposure variable and 15-year lag = 1.3; 95% CI = 0.6-2.8) and less than 1.0 for "high" (hazard ratio = 0.7; 95% CI = 0.2-1.8; "low" was the referent group).
# Boffetta et al.
Boffetta et al. reevaluated lung cancer risk from exposure to TiO2 in a subset of a population-based case-control study of 293 substances including TiO2 (i.e., Siemiatycki et al.; see Table 2-1 for description of Siemiatycki et al.). Histologically confirmed lung cancer cases (n = 857) from hospitals and noncancer referents were randomly selected from the population of Montreal, Canada. Cases were male, aged 35 to 70, diagnosed from 1979 to 1985, and controls were 533 randomly selected healthy residents and 533 persons with cancer in other organs.
Job information was translated into a list of potential exposures, including all Ti compounds and TiO2 as dust, mist, or fumes. Using professional judgment, industrial hygienists assigned qualitative exposure estimates to industry and job combinations worked by study subjects, based on information provided in interviews with subjects, proxies, and trained interviewers and recorded on a detailed questionnaire. The exposure assessment was conducted blindly (i.e., case or referent status not known). Duration, likelihood (possible, probable, definite), frequency (30%), and extent (low, medium, high) of exposure were assessed. Those with probable or definite exposure for at least 5 years before the interview were classified as "exposed." Boffetta et al. classified exposure as "substantial" if it occurred for more than 5 years at a medium or high frequency and level. (Siemiatycki et al. used a different definition and included five workers exposed to titanium slag who were excluded by Boffetta et al.; see Table 2-1.) Only 33 cases and 43 controls were classified as ever exposed to TiO2 (OR = 0.9; 95% CI = 0.5-1.5). Results of unconditional logistic models were adjusted for age, socioeconomic status, ethnicity, respondent status (i.e., self or proxy), tobacco smoking, asbestos, and benzo(a)pyrene (BAP) exposure. No trend was apparent for estimated frequency, level, or duration of exposure. The OR was 1.0 (95% CI = 0.3-2.7) for medium or high exposure for at least 5 years. Results did not depend on choice of referent group, and no significant associations were found with TiO2 exposure and histologic type of lung cancer. The likelihood of finding a small increase in lung cancer risk was limited by the small number of cases assessed. However, the study did find an excess risk for lung cancer associated with both asbestos and BAP, indicating that the study was able to detect risks associated with potent carcinogens. The study had a power of 86% to detect an OR of 2 at the 5% level, and 65% power for an OR of 1.5.
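As a rough check on the reported ever-exposed result, the crude (unadjusted) odds ratio can be computed directly from the 2x2 counts given above. The sketch below is illustrative and uses a Wald confidence interval; the published OR of 0.9 was additionally adjusted for smoking, asbestos, BAP, and other covariates.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# 33 of 857 cases and 43 of 1,066 controls (533 + 533) ever exposed to TiO2:
print(odds_ratio_wald_ci(33, 857 - 33, 43, 1066 - 43))
# ~ (0.95, 0.60, 1.51) crude, consistent with the adjusted OR of 0.9 (0.5-1.5)
```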
Limitations of this study include (1) self-reporting or proxy reporting of exposure information; (2) use of surrogate indices for exposure; (3) absence of particle size characterization; and (4) the nonstatistically significant lung cancer OR for exposure to TiO2 fumes, which was based on a small group of subjects, most of whom were also exposed to nickel and chromium (5 cases; 1 referent; OR = 9.1; 95% CI = 0.7-118). In addition, exposures were limited mainly to those processes, jobs, and industries in the Montreal area. For example, the study probably included few, if any, workers who manufactured TiO2. Most workers classified as TiO2-exposed were painters and motor vehicle mechanics and repairers with painting experience; the highly exposed cases mixed raw materials for the manufacture of TiO2-containing paints and plastics.
# Boffetta et al.

Boffetta et al. conducted a retrospective cohort mortality study of lung cancer in 15,017 workers (14,331 men, 686 women) employed at least 1 month in 11 TiO2 production facilities in six European countries. The factories produced mainly pigment-grade TiO2. Estimated cumulative occupational exposure to respirable TiO2 dust was derived from job title and work history. Observed numbers of deaths were compared with expected numbers based on national rates; exposure-response relationships within the cohort were evaluated using the Cox PH model. Few deaths occurred in female workers (n = 33); therefore, most analyses did not include female deaths. The follow-up period ranged from 1950-1972 until 1997-2001; 2,619 male and 33 female workers were reported as deceased. (The follow-up periods probably have a range of years because the follow-up procedures varied with the participating countries.) The cause of death was not known for 5.9% of deceased cohort members. Male lung cancer was the only cause of death with a statistically significant SMR (SMR = 1.23; 95% CI = 1.10-1.38; 306.5 deaths). However, the Cox regression analysis of male lung cancer mortality found no evidence of increased risk with increasing cumulative respirable TiO2 dust exposure (P-value for test of linear trend = 0.5). There was no consistent and monotonic increase in SMRs with duration of employment, although workers with more than 15 years of employment had slightly higher SMRs than workers with 5 to 15 years of employment, and an effect of time since first employment was suggested for workers employed more than 10 years. (The authors indicated that the increase in lung cancer mortality with increasing time since first employment could be "explained by the large contribution of person-years to the categories with longest time since first employment from countries such as Germany, with increased overall lung cancer mortality".) For male nonmalignant respiratory disease mortality, the number of observed deaths was lower than the expected number (SMR = 0.88; 95% CI = 0.77-1.02; 201.9 deaths observed; 228.4 expected), and there was no evidence of an exposure-response relationship.
The authors suggested that the lack of exposure-response relationships may have been related to a lack of (1) statistical power or (2) inclusion of workers who were employed before the beginning of the follow-up period, when exposure concentrations tended to be high. (Regarding the latter point, the authors stated that this phenomenon could have occurred and resulted in a loss of power, but "the results of the analysis on the inception cohort, composed of workers whose employment is entirely covered by the follow-up, are remarkably similar to the results of the whole cohort, arguing against survival bias".) The authors also suggested that the statistically significant SMR for male lung cancer could represent (1) heterogeneity by country, which the authors thought should be explained by chance and differences in effects of confounders (see next item), rather than factors of TiO2 dust exposure; (2) differences in the effects of potential confounders, such as smoking or occupational exposure to lung carcinogens; or (3) use of national reference rates instead of local rates.

# Ramanakumar et al.

Ramanakumar et al. analyzed data from two large, population-based case-control studies conducted in Montreal, Canada, and focused on lung cancer risk from occupational exposure to four agents selected from a large set of agents and mixtures: carbon black, TiO2, industrial talc, and cosmetic talc. Results from the first study (i.e., Study I by Boffetta et al.) involving TiO2 were published and are described above (see Section 2.2.3). Subject interviews for Study I were conducted from 1982-1986 and included 857 lung cancer cases in men aged 35-70 years. Study II's interviews were conducted from 1995-2001 and included men and women aged 35-75 years (765 male lung cancer cases, 471 female lung cancer cases). In both studies, lung cancer cases were obtained from 18 of the largest hospitals in the metropolitan Montreal area and confirmed histologically. Controls were randomly sampled from the population. Study I had an additional control group of persons with nonlung cancers (Study I: 533 population controls and 1,349 cancer controls; Study II: 899 male controls and 613 female controls). Exposure assessment methods were similar to those described in Boffetta et al. and Siemiatycki et al. and included estimates by industrial hygienists based on job histories (see Section 2.2.3 and Table 2-1). Major occupations of cases and controls with TiO2 exposure (n = 206) were painting, paper hanging, and related occupations (37%); construction laborer, grinder/chipper, and related occupations; motor-body repairmen; and paint plant laborer. Unconditional multivariate logistic regression models estimated the association between the exposure and lung cancer and included potential confounders of age, ethnicity, education, income, type of respondent (i.e., self or surrogate), smoking history, and exposure to at least one other known occupational hazard (i.e., cadmium compounds, silica, or asbestos). Association of substance with cell type (i.e., squamous cell, adenocarcinoma, small cell) was also assessed. ORs for lung cancer and exposure to TiO2 (i.e., "any," "nonsubstantial," or "substantial") were not statistically significantly increased, exposure-response trends were not apparent, and there was no evidence of a confounding effect of smoking or another confounder or of an association with histologic type (results by histologic type were not shown).
In the pooled analysis of cases, controls, and sexes from both studies, the ORs were 1.0 (95% CI = 0.8-1.5), 1.0 (95% CI = 0.6-1.7), and 1.2 (95% CI = 0.4-3.6) for TiO2 exposure categories of "any," "nonsubstantial," and "substantial," respectively. ORs were adjusted for the possible confounders mentioned above.
Limitations of this study are similar to those of the Boffetta et al. study and include (1) self-reporting or proxy reporting of exposure information, (2) use of surrogate indices for exposure, (3) absence of particle size characterization, (4) few lung cancer cases in the "substantial exposure" to TiO2 category (n = 8, both studies combined) and no female cases in that category, and (5) exposures limited mainly to those processes, jobs, and industries in the Montreal area.
# Summary of Epidemiologic Studies
In general, the five epidemiologic studies of TiO2-exposed workers represent a range of study designs, from industry-based cohorts to population-based case-control studies, and appear to be reasonably representative of worker exposures over several decades. One major deficiency is the absence of any cohort studies of workers who handle or use TiO2 (rather than production workers).
Overall, these studies provide no clear evidence of elevated risks of lung cancer mortality or morbidity among those workers exposed to TiO2 dust.
Nonmalignant respiratory disease mortality was not increased significantly (i.e., P < 0.05) in any of the three epidemiologic studies that investigated it. Two of the three retrospective cohort mortality studies found small numbers of deaths from respiratory diseases other than lung cancer, and the number of pneumoconiosis deaths within that category was not reported, indicating that these studies may have lacked the statistical power to detect an increased risk of mortality from TiO2-associated pneumoconiosis (i.e., Chen and Fayerweather: 11 deaths from nonmalignant diseases of the respiratory system; Fryzek et al.: 31 nonmalignant respiratory disease deaths).
The third study had a larger number of male deaths from nonmalignant respiratory disease and found no excess mortality (SMR = 0.88; 201.9 deaths observed; 95% CI = 0.77-1.02). None of the studies reported an SMR for pneumoconiosis mortality; Boffetta et al. did discuss four pleural cancer deaths, although the number of observed deaths was lower than the expected number based on national rates. Boffetta et al. suggested that "mortality data might not be very sensitive to assess risks of chronic respiratory diseases."
In addition to the methodologic and epidemiologic limitations of the studies, they were not designed to investigate the relationship between TiO2 particle size and lung cancer risk, an important question for assessing the potential occupational carcinogenicity of TiO2. Further research is needed to determine whether such epidemiologic studies of TiO2-exposed workers can be designed and conducted and also to study workers who manufacture or use products that contain TiO2 (see Chapter 7, Research Needs).
# Experimental Studies in Animals and Comparison to Humans
# In Vitro Studies
# Genotoxicity and Mutagenicity
Titanium dioxide (TiO2) did not show genotoxic activity in several standard assays: cell killing in deoxyribonucleic acid (DNA)-repair-deficient Bacillus subtilis, mutagenesis in Salmonella typhimurium or E. coli, or transformation of Syrian hamster embryo cells (particle size and crystal form not specified). TiO2 was not genotoxic in a Drosophila wing spot test or mutagenic in mouse lymphoma cells (particle size and crystal structure not provided in either study). More recent genotoxicity studies have shown that TiO2 induced chromosomal changes, including micronuclei in Chinese hamster ovary cells (particularly when a cytokinesis-block technique was employed) and sister chromatid exchanges and micronuclei in lymphocytes (particle size and crystal form not provided in either study). Photo-illumination of TiO2 (anatase/rutile samples of various ratios; particle size not known) catalyzed oxidative DNA damage in cultured human fibroblast cells, which the assay indicated was due to hydroxyl radicals. Ultrafine TiO2 (particle diameter < 100 nm; crystal structure not provided) induced apoptosis in Syrian hamster embryo cells and in cultured human lymphoblastoid cells. Sanderson et al. provided additional physical-chemical data on the ultrafine TiO2 material studied in Wang et al. (anatase > 99%; mean particle diameter = 6.57 nm; surface area = 147.9 m2/g). DNA damage (micronuclei) was produced in human lymphoblastoid cells at a 65 µg/ml dose without excessive cell killing (20%).
Ultrafine TiO2 (80% anatase, 20% rutile) was not genotoxic without UV/vis light irradiation in treated cells but did show a dose-dependent increase in chromosome aberrations in a Chinese hamster cell line with photoactivation. Greater photocatalytic activity by mass was observed for anatase than for rutile TiO2 (specific surface areas: 14.6 and 7.8 m2/g, respectively) and for anatase-rutile mixtures (e.g., 80% anatase, 20% rutile) compared to either particle type alone (0.5% wt anatase, due to photoinduced interfacial electron transfer from anatase to rutile). Particle size influenced oxidative DNA damage in cultured human bronchial epithelial cells, which was detected for 10- and 20-nm but not 200-nm diameter anatase TiO2. In the absence of photoactivation, an anatase-rutile mixture (50%, 50%; diameter = 200 nm) induced higher oxidative DNA damage than did the pure anatase or rutile (200 nm each). Overall, these studies indicate that TiO2 exhibits genotoxicity (DNA damage) under certain conditions but not mutagenicity (genetic alteration) in the assays used.
# Oxidant Generation and Cytotoxicity
TiO2 is considered to be of relatively low inherent toxicity, although the crystal phase can influence the particle surface properties and cytotoxicity in vitro. Sayes et al. reported that nano-anatase produced more ROS and was more cytotoxic than nano-rutile, but only after UV irradiation. ROS generation by cells in vitro (mouse BV2 microglia) treated with P25 ultrafine TiO2 (70% anatase and 30% rutile) was suggested as the mechanism for damaging neurons in complex grain cell cultures. In contrast, Xia et al. reported that TiO2 (80% anatase, 20% rutile; ~25 nm diameter) did not induce oxidative stress or increase heme oxygenase 1 (HO-1) expression in phagocytic cells (RAW 264.7), which the authors suggested may be due to passivation of the particle surfaces by culture medium components or neutralization by available antioxidants. In a study comparing in vitro cellular responses to P25 ultrafine TiO2 (21 nm particle size) and fine TiO2 (1 µm particle size) at exposure concentrations of 0.5-200 µg/ml, the generation of ROS was significantly elevated relative to controls after 4-hr exposure to either fine or ultrafine TiO2, although the ROS induced by ultrafine TiO2 was greater than that of fine TiO2 at each exposure concentration; no cytotoxicity was observed 24 hours after treatment at any of these doses. Thus, photoactivation appears to be an important mechanism for increasing the cytotoxicity of TiO2, especially for formulations containing anatase. However, TiO2 cytotoxicity is low relative to more inherently cytotoxic particles such as crystalline silica.
# Effects on Phagocytosis
Renwick et al. reported that both fine and ultrafine TiO2 particles (250 and 29 nm mean diameter; 6.6 and 50 m2/g surface area, respectively) reduced the ability of J774.2 mouse alveolar macrophages to phagocytose 2-µm latex beads at doses of 0.39 and 0.78 µg/mm2. Ultrafine particles (TiO2 and carbon black) impaired macrophage phagocytosis at lower mass doses than did their fine particle counterparts, although this effect was primarily seen with ultrafine carbon black (254 m2/g surface area).
Moller et al. found that ultrafine TiO2 (20 nm diameter), but not fine TiO2 (220 nm diameter), significantly retarded relaxation and increased cytoskeletal stiffness in mouse alveolar macrophages (J774A.1 cell line) at a dose of 320 µg/ml. Ultrafine TiO2 inhibited proliferation in J774A.1 cells to a greater extent than did fine TiO2 (100 µg/ml dose; 50% and 90% of control proliferation, respectively). In primary alveolar macrophages (BD-AM, isolated from beagle dogs by bronchoalveolar lavage*), either fine or ultrafine TiO2 (100 µg/ml dose) caused moderate retardation of relaxation. Neither fine nor ultrafine TiO2 caused impaired phagocytosis of latex microspheres in BD-AM cells, but both particle sizes significantly impaired phagocytosis in J774A.1 cells (~65% of control level by fine or ultrafine TiO2). Ultrafine TiO2 reduced the fraction of viable cells (either J774A.1 or BD-AM) to a greater extent than did fine TiO2 (at a 100 µg/ml dose).
These in vitro studies provide mechanistic information about how particle-macrophage interactions may influence cell function and disease processes in vivo. Overall, ultrafine TiO2 impairs alveolar macrophage function to a greater extent than does fine TiO2, which may also relate to the greater inflammatory response to ultrafine TiO2 at a given mass dose.

*Bronchoalveolar lavage (BAL) is a procedure for washing the lungs to obtain BAL fluid (BALF), which contains cellular and biochemical indicators of lung health and disease status.
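The mass-versus-surface-area point is easy to quantify: at equal mass, the surface area delivered scales with the specific surface area (SSA) of the particles. A small illustrative sketch follows, using the particle descriptions from Renwick et al. above; the function name is hypothetical.

```python
def surface_area_dose_cm2(mass_ug: float, ssa_m2_per_g: float) -> float:
    """Particle surface area (cm2) delivered by a mass dose, given the
    specific surface area (SSA, m2/g): g x m2/g x 1e4 cm2/m2."""
    return mass_ug * 1e-6 * ssa_m2_per_g * 1e4

# Equal 100-ug doses of fine (6.6 m2/g) and ultrafine (50 m2/g) TiO2:
print(surface_area_dose_cm2(100, 6.6))   # 6.6 cm2
print(surface_area_dose_cm2(100, 50.0))  # 50.0 cm2, ~7.6x more surface area
```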
# In Vivo Studies in Rodent Lungs
# Intratracheal Instillation
# Short-term follow-up
Studies with male Fischer 344 rats instilled with 500 µg of TiO2 of four different particle sizes and two crystal structures (rutile: 12 and 230 nm; anatase: 21 and 250 nm) indicate that the ultrafine TiO2 particles (~20 nm) are translocated to the lung interstitium (interstitialized) to a greater extent and cleared from the lungs more slowly than fine TiO2 particles (~250 nm). Other intratracheal instillation (IT) studies conducted by the same laboratory showed that ultrafine TiO2 particles produced a greater pulmonary inflammation response than an equal mass dose of fine TiO2 particles. The greater toxicity of the ultrafine particles was related to their larger particle surface area dose and their increased interstitialization.
In a study of four types of ultrafine particles (TiO2, nickel, carbon black, and cobalt; particle sizes 14-20 nm), male Wistar rats (aged 10-12 weeks) were administered IT doses of 125 µg of particles, and BAL was performed after 4 or 18 hours. Ultrafine TiO2 was the least toxic and inflammogenic, although its response was still significantly greater than that of the saline-treated controls. The adverse pulmonary responses were associated with the free radical activity of the particles, which was very low for ultrafine TiO2. Höhr et al. observed that, for the same surface area, the inflammatory response (as measured by BAL fluid (BALF) markers of inflammation) in female Wistar rats to uncoated TiO2 particles covered with surface hydroxyl groups (hydrophilic surface) was similar to that of TiO2 particles with surface OCH3 groups (hydrophobic surface) replacing OH groups. The IT doses in this study were 1 or 6 mg of fine (180 nm particle size; 10 m2/g specific surface area) or ultrafine (20-30 nm; 40 m2/g specific surface area) TiO2, and BAL was performed 16 hours after the IT.
Ultrafine TiO2 was more damaging than fine TiO2 in the lungs of male Wistar rats. The particle mean diameters of the ultrafine and fine TiO2 were 29 and 250 nm, respectively, and the specific surface areas were 50 and 6.6 m2/g; the crystal structure was not specified. Twenty-four hours after instillation of ultrafine or fine TiO2, rats treated with ultrafine TiO2, but not fine TiO2, had BALF elevations in the percentage of neutrophils (indicating inflammation), γ-glutamyl transpeptidase concentration (a measure of cell damage), protein concentration (a measure of epithelium permeability), and lactate dehydrogenase (LDH) (an indicator of cytotoxicity and cell death). The 125 µg IT dose of either ultrafine or fine TiO2 did not cause any significant adverse lung response within 24 hours. At 500 µg, the phagocytic ability of the alveolar macrophages was significantly reduced by exposure to particles of either size. The 500 µg IT dose of ultrafine TiO2, but not fine TiO2, was associated with an increased sensitivity of alveolar macrophages to a chemotactic stimulus, an effect that can reduce macrophage mobility and clearance of particles from the lungs.
In a study that included both inhalation (see Section 3.2.3) and IT exposures to six different formulations of fine rutile TiO2 (including uncoated or alumina- or amorphous silica-coated; particle size 290-440 nm; specific surface area 6-28 m2/g), male, 8-week-old Sprague Dawley rats were dosed with 2 or 10 mg/kg by IT. Pulmonary inflammation (measured by polymorphonuclear leukocytes in BALF) was statistically significantly elevated at 24 hours in the rats administered 10 mg/kg of each of the coated or uncoated formulations. The coated TiO2 formulations produced higher inflammation than the uncoated TiO2.
In a study of rutile TiO2 nanorods, inflammation responses were examined in BALF and whole blood in Wistar rats 24 hours after an IT dose of 1 or 5 mg/kg. At both doses, the neutrophilic inflammation in BALF was significantly greater than in the vehicle controls. The numbers of monocytes and granulocytes in blood were dose-dependently elevated, while the platelets were significantly reduced at the higher dose, indicating platelet aggregation.
Mice instilled with 1 mg fine TiO2 (250 nm mean diameter) showed no evidence of inflammation at 4, 24, or 72 hours after instillation, as assessed by inflammatory cells in BALF and expression of a variety of inflammatory cytokines in lung tissue.
Adult male ICR mice (2 months old, 30 g) were exposed to ultrafine (nanoscale) TiO2 (rutile, 21 nm average particle size; specific surface area of 50 m2/g) or fine (microscale) TiO2 (180-250 nm diameter; specific surface area of 6.5 m2/g) by a single IT dose of either 0.1 or 0.5 mg per mouse. One week later, the lungs showed "significant changes in morphology and histology" in the mice receiving the 0.1-mg dose of nanoscale TiO2, including disruption of the alveolar septa and alveolar enlargement (indicating emphysema), type II pneumocyte proliferation, increased alveolar epithelial thickness, and accumulation of particle-laden alveolar macrophages. Nanoscale TiO2 elicited a significantly greater increase in chemokines associated with pulmonary emphysema and alveolar epithelial cell apoptosis than did the microscale TiO2.
A dose-response relationship was not seen, as the adverse effects of the 0.1-mg dose of nanoscale TiO2 exceeded those of the 0.5-mg dose. "No significant pathological changes" were observed at either dose of the microscale TiO2.
In summary, these short-term studies show that while TiO2 was less toxic than several other particle types tested, TiO2 did elicit pulmonary inflammation and cell damage at sufficiently high surface area doses (i.e., greater response to ultrafine TiO2 at a given mass dose). Rehn et al. observed an acute (3-day) inflammatory response to instillation of ultrafine TiO2 and found that the response from a single instillation decreased over time, returning to control levels by 90 days after the instillation. The reversibility of the inflammatory response to ultrafine TiO2 contrasted with the progressive increase in inflammation over 90 days that was seen with crystalline silica (quartz) in the same study. This study also compared a silanized hydrophobic preparation of ultrafine TiO2 to an untreated hydrophilic form and concluded that alteration of surface properties by silanization does not greatly alter the biological response of the lung to ultrafine TiO2.
# Intermediate-term follow-up
Three recent studies of various types of nanoscale or microscale TiO2 used a similar experimental design, which involved IT dosing of male Crl:CD(SD)IGS BR rats (approximately 8 weeks of age; 240-255 g body weight). Instilled particle doses were either 1 or 5 mg/kg, and BAL was performed at 24 hours, 1 week, 1 month, and 3 months after instillation [Warheit et al. 2006a,b]. Cell proliferation assays and histopathological examination were also performed. Min-U-Sil quartz was used as a positive control, and phosphate-buffered saline (PBS) was the instillation vehicle in controls.
In the first study, of two hydrophilic types of TiO2 ("R-100" or "Pigment A"), rats were administered IT doses of either 1 or 5 mg/kg of either type of TiO2, carbonyl iron, or Min-U-Sil quartz. Primary average particle sizes were 300 nm, 290 nm, ~1.2 µm, or ~1.5 µm, respectively. Significantly elevated PMNs in BALF were observed for the two types of TiO2 and for carbonyl iron at 24 hours postexposure, but not at the later time points.
The second study compared nanoscale TiO2 rods (anatase, 92-233 nm length, 20-35 nm width; 26.5 m2/g specific surface area), nanoscale TiO2 dots (anatase, 5.8-6.1 nm spheres; 169 m2/g specific surface area), and microscale rutile TiO2 (300 nm primary particle diameter; 6 m2/g specific surface area). A statistically significant increase in the percentage of PMNs in BALF was seen at the 5 mg/kg dose for all three TiO2 materials tested (and was higher in the rats administered the nanoscale TiO2) but returned to control levels at the 1-week time point. There were no statistically significant lung responses (inflammation or histopathology) to either the fine or the ultrafine TiO2 at either dose (1 or 5 mg/kg) compared to controls at the 1-week to 3-month time points.
In the third study, comparisons were made of the lung inflammation, cytotoxicity, cell proliferation, and histopathological responses to two types of ultrafine rutile TiO2, a fine rutile TiO2, an ultrafine anatase-rutile mixture (80%, 20%) TiO2, and Min-U-Sil quartz particles. Although the surface area of these particles varied from 5.8 to 53 m2/g, the median particle sizes in the PBS instillation vehicle were similar (2.1-2.7 µm), perhaps due to agglomeration. The pulmonary inflammation (% PMNs) in the 5 mg/kg dose group of anatase/rutile TiO2 was statistically significantly greater than in the PBS controls 24 hours and 1 week after IT (but not at 1 or 3 months), while the ultrafine and fine rutile TiO2 groups did not differ significantly from controls at any time point. The tracheobronchial epithelial cell proliferation (% proliferating cells) in the 5 mg/kg dose group of anatase/rutile TiO2 was also statistically significantly greater than in controls 24 hours after IT (but not at the later time points), while the ultrafine and fine rutile TiO2 groups did not differ significantly from controls at any time point. The two ultrafine rutile TiO2 preparations were passivated with amorphous silica and alumina coatings to reduce their chemical reactivity and photoreactivity to a low level similar to that of the fine rutile TiO2, while the ultrafine anatase/rutile TiO2 was not passivated and was also more acidic. The ultrafine anatase/rutile TiO2 was more chemically reactive in a Vitamin C assay measuring oxidation potential. These results suggest that the crystal phase and surface properties can influence the lung responses to TiO2. In both studies, the Min-U-Sil quartz-instilled rats showed the expected persistent inflammation.
# Long-term follow-up
In a study of the role of lung phagocytic cells in oxidant-derived mutations, rats were dosed by IT with either 10 or 100 mg/kg of fine TiO2 (anatase; median diameter: 180 nm; surface area: 8.8 m2/g) and held for 15 months. Type II cells isolated from the rats dosed with 100 mg/kg fine TiO2 exhibited an increased hypoxanthine-guanine phosphoribosyl transferase (hprt) mutation frequency, but type II cells isolated from rats treated with 10 mg/kg fine TiO2 did not. Neutrophil counts were significantly elevated in the BALF isolated from rats instilled 15 months earlier with 100 mg/kg fine TiO2, as well as by 10 or 100 mg/kg of α-quartz or carbon black. Hprt mutations could be induced in RLE-6TN cells in vitro by cells from the BALF isolated from the 100 mg/kg fine TiO2-treated rats. The authors concluded that the results supported a role for particle-elicited macrophages and neutrophils in the in vivo mutagenic effects of particle exposure, possibly mediated by cell-derived oxidants.
An IT study in hamsters suggested that fine TiO2 (97% < 5 µm, including 51% < 0.5 µm) may act as a co-carcinogen. When BAP and fine TiO2 (< 0.5 µm particle size) were administered by IT to 48 hamsters (male and female Syrian golden hamsters, 6-7 weeks of age), 16 laryngeal, 18 tracheal, and 18 lung tumors developed, compared to only 2 laryngeal tumors found in the BAP-treated controls. In hamsters receiving an IT dose of 3 mg fine TiO2 alone in 0.2 ml saline once a week for 15 weeks, no respiratory tract tumors were found. The animals were kept until death, which occurred by 80 weeks in treated hamsters and by 120 weeks in controls.
TiO2 was included in an IT study of 19 different dusts in female SPF Wistar rats. The types of TiO2 tested were ultrafine hydrophilic (P25; "majority anatase"; ~0.025 µm mean particle size; 52 m2/g specific surface area), ultrafine hydrophobic (coated) (P805; 21 nm particle size; 32.5 m2/g specific surface area), and small-fine anatase (hydrophilic) (200 nm particle size; 9.9 m2/g specific surface area). Each type of TiO2 was tested at two or three IT dosing regimens in groups of 48 rats, which were then maintained for 26 weeks before terminal sacrifice. The IT doses (number of doses x mass per dose) and the corresponding lung tumor responses (percentages of rats with benign or malignant tumors) were as follows: ultrafine hydrophilic TiO2 (5 doses of 3 mg, 5 doses of 6 mg, and 10 doses of 6 mg: 52%, 67%, and 69% tumors, respectively); ultrafine hydrophobic (coated) TiO2 (15 doses of 0.5 mg and 30 doses of 0.5 mg: 0% and 6.7% tumors, respectively); and small-fine anatase TiO2 (10 doses of 6 mg and 20 doses of 6 mg: 30% and 64% tumors, respectively). The original 6-mg dose for hydrophobic coated TiO2 was reduced to 0.5 mg because of acute mortality at the higher dose. The TiO2 data were analyzed with the tumor data for the other poorly soluble particles (1,002 rats surviving 26 weeks), and the dose metric that provided the best fit to the tumor data was particle volume combined with particle size. Borm et al. and Morfeld et al. analyzed a subset of these data (709 rats) for five different poorly soluble particles of different sizes (TiO2 of low and high surface area, carbon black, diesel exhaust particles, and amorphous silica). Morfeld et al. fit a multivariate Cox model to these pooled data (excluding silica) and reported a threshold dose of 10 mg and a saturation dose of 20 mg for lung tumors. Although ultrafine particles were more tumorigenic than fine particles, in their multivariate model no difference was seen between particle mass, volume, or surface area after accounting for particle type and rat survival time; they suggested that a high degree of agglomeration in these IT preparations may have reduced the effective particle surface area relative to that estimated from Brunauer, Emmett, and Teller (BET) analysis. Roller considered the threshold findings to be inconsistent with the statistically significant lung tumor incidences in three dose groups that were within the 95% confidence interval of the estimated threshold in Morfeld et al. All of the analyses of these pooled data including TiO2 showed a greater tumor potency of the ultrafine versus fine particles, whether the dose was expressed as particle volume and size or as particle surface area. This was considered to be due to the greater translocation of ultrafine particles to the lung interstitium. Although these IT studies used relatively high mass doses, increasing dose-response relationships were observed for particles of a given size; greater tumor responses were also observed for the ultrafine compared to fine particles at a given mass dose. One study suggests a genotoxic mechanism involving DNA damage from oxidants produced by phagocytic and inflammatory cells in response to the TiO2 particles in the lungs.
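Because these pooled analyses turn on the choice of dose metric (mass, volume, surface area, or count), the sketch below shows how a single mass dose maps onto the other metrics under the idealization of monodisperse solid spheres; as Morfeld et al. noted, agglomeration makes real instilled material deviate from this. The values and function name are illustrative, not taken from the studies.

```python
import math

def sphere_dose_metrics(mass_mg: float, diameter_nm: float, density_g_cm3: float):
    """Volume, surface area, and particle count implied by a mass dose,
    assuming monodisperse solid spheres (an idealization)."""
    d_cm = diameter_nm * 1e-7
    volume_cm3 = mass_mg * 1e-3 / density_g_cm3
    n_particles = volume_cm3 / (math.pi / 6 * d_cm**3)
    surface_cm2 = n_particles * math.pi * d_cm**2   # equals 6 * V / d
    return volume_cm3, surface_cm2, n_particles

# A 3-mg dose of 25-nm vs. 200-nm anatase (density ~3.9 g/cm3):
for d in (25, 200):
    v, s, n = sphere_dose_metrics(3, d, 3.9)
    print(f"{d} nm: volume {v:.2e} cm3, surface {s:.0f} cm2, count {n:.1e}")
# Same mass and volume, but the 25-nm particles carry 8x the surface area.
```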
# Acute or Subacute Inhalation
Nurkiewicz et al. investigated the role of particle size in systemic microvascular function in male SPF Sprague Dawley rats inhaling either fine TiO2 (1 µm; 2.34 m2/g) or P25 ultrafine TiO2 (21 nm; 48.08 m2/g) at airborne exposures aimed at achieving similar particle mass deposition in the lungs (ultrafine: 1.5-12 mg/m3, 240-720 min; fine: 3-15 mg/m3, 240-480 min). No evidence of pulmonary inflammation or lung damage (based on BALF markers) was observed at these exposures. However, 24 hours after exposure, the arteriolar vasodilation response (to intraluminal Ca2+ infusion of the spinotrapezius muscle) was found to be significantly impaired in rats exposed to ultrafine TiO2 compared to either the control rats or the rats exposed to fine TiO2 with the same retained mass dose in the lungs. On an equivalent mass basis, ultrafine TiO2 was approximately an order of magnitude more potent than fine TiO2 in causing systemic microvascular dysfunction. When converted to surface area dose, the potency of the fine TiO2 was greater, which the authors suggested was due to overestimation of the ultrafine particle surface area delivered to the lungs because of agglomeration. Either fine or ultrafine TiO2 caused systemic microvessel dysfunction at inhalation doses that did not cause marked lung inflammation. This effect was related to the adherence of PMNs to the microvessel walls and production of ROS in the microvessels. This study indicates that cardiovascular effects may occur at particle exposure concentrations below those causing adverse pulmonary effects.
Rats (male WKY/Kyo@Rj, 246-316 g body weight) were exposed by inhalation (endotracheal intubation) to ultrafine TiO2 (20 nm count median diameter) at an airborne mass concentration of approximately 0.1 mg/m3 (mean number concentration 7.2 x 10^6 particles/cm3) for 1 hour. BAL was performed either 1 or 24 hours after the inhalation exposure. Elemental microanalysis of the particles provided evidence that the alveolar macrophages did not efficiently phagocytize the particles; rather, particle uptake was "sporadic" and "unspecific."
Mice (C57Bl/6 male, 6 weeks of age) were exposed by whole-body inhalation to TiO2 nanoparticles (2-5 nm primary particle size; 210 m2/g specific surface area) for either 4 hours (acute) or 4 hr/day for 10 days (subacute). Airborne TiO2 concentrations were 0.77 or 7.22 mg/m3 for the acute exposure and 8.88 mg/m3 for the subacute exposures. In the subacute study, groups of mice were necropsied at the end of the exposure period and at 1, 2, and 3 weeks postexposure. No adverse effects were observed after the 4-hour exposure. A "significant but modest" inflammatory response was observed in the mice at 0, 1, or 2 weeks after the subacute exposures, with recovery by the 3rd week postexposure (the number of alveolar macrophages in BALF was statistically significantly greater than in controls in the 1- and 2-week postexposure groups).
# Short-Term Inhalation
Short-term exposure to respirable fine TiO2 has been shown to result in particle accumulation in the lungs of rodents inhaling relatively high particle concentrations. The pulmonary retention of these particles increased as exposure concentrations increased. In one study, after 4 weeks of exposure to 5 mg/m3, 50 mg/m3, and 250 mg/m3, the fine TiO2 retention half-life in the lung increased (~68 days, ~110 days, and ~330 days, respectively), which indicates overloading of alveolar macrophage-mediated clearance of particles from the lungs.
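To see what these half-lives imply, the sketch below applies a simple first-order (single-compartment exponential) clearance model, a common idealization of alveolar clearance; the real kinetics, especially under overload, are more complex.

```python
import math

def retained_fraction(days_postexposure: float, half_life_days: float) -> float:
    """Fraction of the initial lung burden remaining, assuming simple
    first-order (exponential) clearance: exp(-ln(2) * t / t_half)."""
    return math.exp(-math.log(2) * days_postexposure / half_life_days)

# Half-lives reported after 4 weeks at 5, 50, and 250 mg/m3:
for t_half in (68, 110, 330):
    print(t_half, round(retained_fraction(180, t_half), 2))
# After ~6 months, ~16% of the burden remains at the lowest exposure,
# but ~69% remains at the overloaded 250 mg/m3 exposure.
```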
In multiple studies, the most frequently noted change after 1 to 4 weeks of fine TiO2 inhalation was the appearance of macrophages laden with particles, which were principally localized to the alveoli, bronchus-associated lymphoid tissue, and lung-associated lymph nodes. Particle-laden macrophages increased in number with increasing intensity of exposure and decreased in number after cessation of exposure. Alveolar macrophages from rats inhaling 250 mg/m3 fine TiO2 for 4 weeks also appeared to be functionally impaired, as demonstrated by persistently diminished chemotactic and phagocytic capacity.
Inflammation in the lungs of rats exposed to fine rutile TiO2 was dependent upon exposure concentration and duration. Rats exposed to 250 mg/m3 fine TiO2 6 hr/day, 5 days/week for 4 weeks had markedly increased numbers of granulocytes in BALF.
The granulocytic response was muted after recovery, but numbers did not approach control values until 6 months after exposures ceased. Rats exposed to 50 mg/m3 fine TiO2 6 hr/day, 5 days/wk for 4 weeks had a small but significantly increased number of granulocytes in the BALF that returned to control levels by 3 months after exposures ceased. This study showed that high concentrations of poorly soluble, low-toxicity (PSLT) dust (e.g., fine TiO2 and carbonyl iron) caused impaired pulmonary clearance and persistent inflammation in rats.
In a study of male Fischer 344 rats inhaling 22.3 mg/m3 of ultrafine (20 nm particle size, anatase) TiO2 6 hr/day, 5 days/wk for up to 12 weeks, expression of the antioxidant enzyme manganese-containing superoxide dismutase (MnSOD) in the lungs increased dramatically and was correlated with pulmonary inflammation indicators. Fine TiO2 (23.5 mg/m3 of 250 nm particle size, anatase) did not produce this response under the exposure conditions in this study. Follow-up observation of the rats in this study showed that the inflammatory lesions "regressed" during a 1-year period following cessation of exposure. This observation suggests that the inflammatory response from short-term exposures to TiO2 may be reversible to some degree, if there is a cessation of exposure.
In a separate study, rats exposed to airborne concentrations of 50 mg/m3 fine TiO2 7 hr/day, 5 days/week for 75 days had significantly elevated neutrophil numbers, LDH concentration (a measure of cell injury), and n-acetylglucosaminidase concentration (a measure of inflammation) in BALF. However, in this study the BALF of rats inhaling 10 mg/m3 or 50 mg/m3 fine TiO2 7 hr/day, 5 days/week for 2 to 32 days had PMN numbers, macrophage numbers, and LDH concentrations that were indistinguishable from control values.
Rats exposed to airborne concentrations of 51 mg/m3 fine TiO2 (1.0 µm mass median aerodynamic diameter [MMAD]) 6 hr/day for 5 days (whole-body exposures) had no significant changes in BALF neutrophil number, macrophage number, lymphocyte number, LDH concentration, n-acetylglucosaminidase concentration, or measures of macrophage activation 1 to 9 weeks after exposure. The TiO2 lung burden at the end of exposure was 1.8 mg/lung, and the retention was 39% 28 days after the end of exposure. Similarly, rats exposed to 0.1, 1, or 10 mg/m3 6 hr/day, 5 days/week for 4 weeks had no evidence of lung injury as assessed by BAL 1 week to 6 months after exposure or by histopathology 6 months after exposure.
Pulmonary responses to six different formulations of fine rutile TiO2 (including uncoated or alumina- or amorphous silica-coated; particle size 290-440 nm; specific surface area 6-28 m2/g) were investigated in male, 8-week-old Sprague Dawley rats, with both IT (see Section 3.2.1.1) and inhalation exposures. Rats were exposed to very high airborne concentrations (1130-1310 mg/m3) of the different formulations of fine TiO2 for 30 days (6 hr/day, 5 days/week). The pulmonary inflammation response (assessed by histopathology) remained significantly elevated 1 month after exposure to the TiO2 coated with alumina (7%) and amorphous silica (8%). The coated TiO2 formulations produced higher inflammation than the uncoated TiO2 in the inhalation study (as in the IT study).
# Subchronic Inhalation
In a study of two fine-sized PSLT particles, TiO2 and barium sulfate (BaSO4), no-observed-adverse-effect levels (NOAELs) were estimated based on the relationship between the particle surface area dose, overloading of lung clearance, and neutrophilic inflammation in rats. These two PSLT particles had similar densities (4.25 and 4.50 g/cm3, respectively) but different particle sizes (2.1 and 4.3 µm MMAD); since both of these factors influence particle deposition in the lungs, the exposure concentrations were adjusted to provide similar particle mass deposition in the lungs. Male Wistar rats (age 12 weeks, specific-pathogen free) were exposed by whole-body inhalation (7 hr/day, 5 days/wk) to either 25 mg/m3 for 7.5 months (209 calendar days) or to 50 mg/m3 for 4 months (118 calendar days). The findings showed that the retardation of alveolar macrophage-mediated clearance, particle transfer to the lung-associated lymph nodes, and influx of PMNs were related to the lung burden as particle surface area dose. A mean airborne concentration of 3 mg/m3 of fine-sized TiO2 was estimated as the NOAEL, which was defined as a 95% probability that the lung responses would be below those predicted using the "no overload level" for the average animal.
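This kind of comparison depends on converting an airborne concentration into a deposited lung dose. A minimal sketch of the standard arithmetic follows; the rat minute ventilation and alveolar deposition fraction used here are illustrative round numbers, not values from the study.

```python
def deposited_dose_mg(conc_mg_m3: float, hours_per_day: float, days: float,
                      minute_vent_L: float = 0.2, dep_fraction: float = 0.1) -> float:
    """Cumulative deposited lung dose (mg): concentration x air volume
    inhaled x deposition fraction. Ventilation (L/min) and alveolar
    deposition fraction are assumed rat values, for illustration only."""
    air_m3 = (minute_vent_L / 1000) * 60 * hours_per_day * days
    return conc_mg_m3 * air_m3 * dep_fraction

# 25 mg/m3, 7 hr/day, 5 days/wk for 7.5 months (~149 exposure days):
print(round(deposited_dose_mg(25, 7, 149), 1))  # ~31.3 mg deposited (illustrative)
```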
The relationship between subchronic inhalation of TiO2 and neutrophilic inflammation was also investigated in a study of Montserrat volcanic ash. Male Wistar rats (225 g, specific-pathogen free) were exposed by nose-only inhalation (6 hr/day, 5 days/wk) to 140 mg/m3 of TiO2 (1.2 µm MMAD) for up to 2 months. The concentration of ash (5.3 µm MMAD) was 253 mg/m3, which was predicted to provide the same retained lung burden as the TiO2. After the 8-week exposure, histopathological examination showed relatively minor pathology changes in the lungs of rats treated with TiO2, although the pathological changes in the lungs of the ash-exposed rats were generally "more marked." The pulmonary inflammation response was also greater in the ash-exposed rats, based on the percentage of PMNs in the BALF, which ranged from 19%-53% for the ash and from 0.2%-16% for TiO2 at 14-56 days of exposure.
Fine TiO2 (pigmentary; 1.4 µm MMAD) was studied in female rats (CDF/Crl-BR), mice (B6C3F1/CrlBR), and hamsters, 6 weeks old, after a 13-week inhalation exposure to 10, 50, or 250 mg/m3, followed by up to 52 weeks without TiO2 exposure.
Retained particle burdens in the lungs and lung-associated lymph nodes were measured at the end of exposure and at 4, 13, 26, and 52 weeks postexposure. The lung particle retention patterns indicated that after the 13 weeks of exposure to 50 or 250 mg/m3, clearance overload had occurred in rats and mice but not in hamsters. In mice and rats, the numbers of macrophages and the percentage of neutrophils were significantly increased in BALF after 13 weeks of exposure to 50 or 250 mg/m3 fine TiO2 and remained elevated through 52 weeks postexposure. These BALF cell responses were significantly elevated in hamsters after exposure to 250 mg/m3 but were no longer significantly elevated by 26 weeks postexposure.
Alveolar cell proliferation was significantly elevated only at 52 weeks postexposure in the rats. Histopathology showed alveolar hypertrophy and hyperplasia of type II epithelial cells in rats after the 13-week exposure to 50 or 250 mg/m3 fine TiO2. In mice, alveolar type II cell hypertrophy was observed (dose not given), while in hamsters minimal type II epithelial cell hypertrophy and hyperplasia were observed at 50 or 250 mg/m3. Foci of alveolar epithelial cell hypertrophy and hyperplasia were often associated with aggregates of particle-laden alveolar macrophages in all three species. In rats, but not mice and hamsters, these foci of alveolar epithelial hypertrophy became increasingly more prominent with time, even after cessation of exposure, and at the high dose, rats progressed to bronchiolization of alveoli (metaplasia) and to fibrotic changes with focal interstitialization of TiO2 particles. Alveolar lipoproteinosis and cholesterol clefts were observed in rats 52 weeks after the 13-week exposure to 250 mg/m3. Although "high particle burdens" were associated with "proliferative epithelial changes associated with particle-induced inflammation" in all three species, only rats developed metaplastic and fibrotic lesions.
P25 ultrafine TiO2 (21 nm primary particle size; 1.37 µm MMAD) was studied in female rats, mice, and hamsters after a 13-week inhalation exposure to 0.5, 2, or 10 mg/m3, followed by a recovery period (without TiO2 exposure) of up to 52 weeks (same rodent strains as Bermudez et al. 2002). Pulmonary responses and retained particle burdens in the lungs and lung-associated lymph nodes were measured at the end of exposure and at 4, 13, 26, and 52 weeks postexposure. Retardation of pulmonary clearance following exposure to 10 mg/m3 was observed in rats and mice but not in hamsters. Pulmonary inflammation was also observed at the 10 mg/m3 dose in rats and mice but not in hamsters. The total number of cells in BALF was significantly elevated in rats and mice after 13 weeks of exposure to 10 mg/m3, and the percentages of neutrophils and macrophages remained statistically significantly elevated through 52 weeks postexposure in both species. The BALF cell responses were not statistically significant following exposure to 0.5 or 2 mg/m3 in mice or rats (except for significantly elevated neutrophils in rats immediately after the 13-wk exposure to 2 mg/m3) (Tables S1-S3 in Bermudez et al. 2004). BALF cell responses were not significantly elevated in hamsters at any exposure or observation time.
The alveolar cell replication index was statistically significantly increased at 0, 4, and 13 weeks after the 13-wk exposure to 10 mg/m3 ultrafine TiO2 in rats, while in mice it was significantly increased at 13 and 26 weeks after exposure cessation. In rats inhaling 10 mg/m3, the histopathologic responses included epithelial and fibroproliferative changes, interstitial particle accumulation, and alveolar septal fibrosis. Although most of the epithelial proliferative lesions had regressed postexposure, some remaining lesions were believed to have progressed. At 52 weeks postexposure, minimal to mild metaplastic changes and minimal to mild particle-induced alveolar septal fibroplasia were seen in rats. In mice inhaling 10 mg/m3, lesions were described as aggregations of heavily particle-laden macrophages in the centriacinar sites. During the postexposure period, these cell aggregates were observed to move to the interstitial areas over time. No epithelial, metaplastic, or fibroproliferative changes were observed by histopathology in the mice or hamsters (although the mice had significantly elevated alveolar cell replication at 13 and 26 weeks postexposure in the 10 mg/m3 dose group, this response apparently did not result in histopathologically visible changes). The absence of adverse pulmonary responses in hamsters was considered to reflect their rapid clearance of particles from the lungs.
# Chronic Inhalation
TiO2 has been investigated in three chronic inhalation studies in rats, including fine TiO2 in Lee et al. and Muhle et al. and ultrafine TiO2 in Heinrich et al. These studies were also reported in other publications, including Lee et al. and Muhle et al. In another 2-year rat inhalation study, an increase in lung tumors was found in rats exposed to TiCl4. TiCl4 is an intermediate in the production of pigment-grade TiO2, including by hydrolysis of TiCl4 to produce TiO2 and HCl. However, TiCl4 is a highly volatile compound with different properties than TiO2, and thus it is not addressed further in this document.
In Lee et al., groups of 100 male and 100 female rats (CD, Sprague-Dawley derived; strain not specified) were exposed by whole-body inhalation to 10, 50, or 250 mg/m3 fine rutile TiO2 (1.5-1.7 µm MMAD; 84% respirable, <13 µm) for 6 hr/day, 5 days/week, for up to two years. A fourth group (control) was exposed to air. In each group, 20 rats were killed at 3, 6, or 12 months; 80 rats were exposed for 2 years, and all surviving rats were killed at the end of exposure. No increase in lung tumors was observed at 10 or 50 mg/m3. At 250 mg/m3, bronchioloalveolar adenomas were observed in 12 out of 77 male rats and 13 out of 74 female rats. In addition, squamous cell carcinomas were reported in 1 male and 13 females at 250 mg/m3. The squamous cell carcinomas were noted as being dermoid, cyst-like squamous cell carcinomas, later reclassified as proliferative keratin cysts, and later still as a continuum ranging from pulmonary keratinizing cysts through pulmonary keratinizing epitheliomas to frank pulmonary squamous carcinomas. A recent reanalysis of the 16 tumors originally classified as cystic keratinizing squamous cell carcinomas in Lee et al. had a similar interpretation: two were reclassified as squamous metaplasia, one as a poorly keratinizing squamous cell carcinoma, and 13 as nonneoplastic pulmonary keratin cysts.
In both the Muhle et al. and Heinrich et al. studies, TiO2 was used as a negative control in 2-year chronic inhalation studies of toner and diesel exhaust, respectively. In Muhle et al., the airborne concentration of TiO2 (rutile) was 5 mg/m3 (78% respirable, according to the 1984 ACGIH criterion). Male and female Fischer 344 rats were exposed for up to 24 months by whole-body inhalation and sacrificed beginning at 25.5 months. No increase in lung tumors was observed in TiO2-exposed animals; the lung tumor incidence was 2/100 in TiO2-exposed animals versus 3/100 in nonexposed controls.
In the Heinrich et al. study, 100 female Wistar rats were exposed to ultrafine TiO2 (80% anatase, 20% rutile; 15-40 nm primary particle size; 0.8 µm MMAD; 48 (± 2.0) m2/g specific surface area) at an average of approximately 10 mg/m3, 18 hr/day, 5 days/wk, for up to 24 months (actual concentrations were 7.2 mg/m3 for 4 months, followed by 14.8 mg/m3 for 4 months, and 9.4 mg/m3 for 16 months).
Following the 2-year exposure, the rats were held without TiO2 exposure for 6 months. At 6 months of exposure, 99/100 of the rats had developed bronchioloalveolar hyperplasia, and by 2 years all rats had developed slight to moderate interstitial fibrosis. After 24 months of exposure, four of the nine rats examined had developed tumors (including a total of two squamous cell carcinomas, one adenocarcinoma, and two benign squamous cell tumors). At 30 months (6 months after the end of exposure), a statistically significant increase in adenocarcinomas was observed (13 adenocarcinomas, in addition to 3 squamous cell carcinomas and 4 adenomas, in 100 rats). In addition, 20 rats had benign keratinizing cystic squamous-cell tumors. Only 1 adenocarcinoma, and no other lung tumors, was observed in 217 nonexposed control rats.
NMRI mice were also exposed to ultrafine TiO2 in Heinrich et al. The lifespan of NMRI mice was significantly decreased by inhaling approximately 10 mg/m3 ultrafine TiO2, 18 hr/day, for 13.5 months. This exposure did not produce an elevated tumor response in the NMRI mice, but the 30% lung tumor prevalence in controls may have decreased the sensitivity for detecting carcinogenic effects in this assay. In a study of several types of particles, 100 female Wistar (Crl:Br) rats were exposed to 10.4 mg/m3 TiO2, 18 hr/day, 5 days/wk, for 24 months (followed by 6 months in clean air). No information was provided on the particle size or crystal structure of the TiO2 used in this study. Cystic keratinizing epitheliomas were observed in 16% of the rats. In addition, 3.0% cystic keratinizing squamous-cell carcinomas and 1% nonkeratinizing squamous-cell carcinomas were observed.
The primary data used in the dose-response model in the TiO2 risk assessment (Chapter 4) include the ultrafine TiO2 data of Heinrich et al. and the fine TiO2 data of Lee et al. Differences in follow-up duration (24 and 30 months, respectively, in Lee et al. and Heinrich et al.) may have increased the likelihood of detecting lung tumors in the ultrafine TiO2-exposed rats. However, the differences in the hours of exposure per day (6 and 18 hr, respectively, in Lee et al. and Heinrich et al.) were accounted for in the risk assessment models, since the retained particle lung burden (at the end of the 2-year inhalation exposure) was the dose metric used in those models.
In summary, the chronic inhalation studies in rodents show dose-related pulmonary responses to fine or ultrafine TiO2. At sufficiently high particle mass or surface area dose, the responses in rats include reduced lung clearance and increased particle retention ("overload"), pulmonary inflammation, oxidative stress, tissue damage, fibrosis, and lung cancer. Studies in mice showed impaired lung clearance and inflammation, but not fibrosis or lung cancer. Studies in hamsters found little adverse effect of TiO2 on either lung clearance or response.
# In Vivo Studies: Other Routes of Exposure
# Acute Oral Administration
The acute toxicity of nanometer (25 and 80 nm) and submicron (155 nm) TiO2 was investigated after a large single dose of TiO2 (5 g/kg body weight) in male and female CD-1 mice. The TiO2 was retained in liver, spleen, kidney, and lung tissues, indicating uptake by the gastrointestinal tract and systemic transport to the other tissues. No acute toxicity was observed. However, statistically significant changes were seen in several serum biochemical parameters, including LDH and alpha-hydroxybutyrate (suggesting cardiovascular damage), which were higher in mice treated with either the 25 or 80 nm nanoscale TiO2 than with the fine TiO2. Pathological evidence of hepatic injury and kidney damage was also observed. In female mice treated with the 80 nm nanoscale and the fine particles, the liver injury included hydropic degeneration around the central vein and spotty necrosis of hepatocytes; renal damage included protein-filled liquid in the renal tubules and swelling of the renal glomeruli.
The tissue injury markers did not always relate to particle size; damage was often, but not always, greater in the mice treated with the nanoscale particles than in those treated with the fine particles, and the 80-nm TiO2 was more damaging than the 25-nm TiO2 by some indicators. Given the single large dose in this study, it was not possible to evaluate the dose-response relationship or to determine whether the effects were specific to the high dose.
# Chronic Oral Administration
The National Cancer Institute conducted a bioassay of TiO2 for possible carcinogenicity by the oral route. TiO2 was administered in feed to Fischer 344 rats and B6C3F1 mice. Groups of 50 rats and 50 mice of each sex were fed either 25,000 or 50,000 parts per million TiO2 (2.5% or 5%) for 103 weeks and then observed for an additional week. In the female rats, C-cell adenomas or carcinomas of the thyroid occurred at an incidence of 1 out of 48 in the control group, 0 out of 47 in the low-dose group, and 6 out of 44 in the high-dose group. It should also be noted that an incidence of thyroid C-cell adenomas or carcinomas similar to that observed in the high-dose group of the TiO2 feeding study has been seen in control female Fischer 344 rats used in other studies. No significant excess of tumors occurred in male or female mice or in male rats. It was concluded that, under the conditions of this bioassay, TiO2 is not carcinogenic by the oral route for Fischer 344 rats or B6C3F1 mice.
In a study of male and female Fischer 344 rats fed diets containing up to 5% TiO2-coated mica for up to 130 weeks, no treatment-related toxicologic or carcinogenic effects were reported.
# Intraperitoneal Injection
Female Wistar rats received intraperitoneal injections of P25 ultrafine anatase TiO2 (in 2 ml of 0.9% NaCl solution) as either (1) a total dose of 90 mg per animal (once per week for five weeks); (2) a single injection of 5 mg; or (3) a series of injections of 2, 4, and 4 mg at weekly intervals. Controls received a single injection of saline alone. The average lifespans of rats in the three treatment groups were 120, 102, and 130 weeks, respectively, versus 120 weeks for controls. In the first treatment group, 6 out of 113 rats developed sarcomas, mesotheliomas, or abdominal cavity carcinomas, compared to 2 carcinomas in controls. No intra-abdominal tumors were found in the other two treatment groups. This study may not be relevant to inhalation exposure to TiO2, since studies have not shown that either fine or ultrafine TiO2 would be likely to reach the peritoneum after depositing in the lungs.
# Particle-Associated Lung Disease Mechanisms
# Role of Pulmonary Inflammation
Chronic pulmonary inflammation is characterized by persistent elevation of the number of PMNs (measured in BALF) or by an increased number of inflammatory cells in interstitial lung tissue (observed by histopathology). Pulmonary inflammation is a defense mechanism against foreign material in the lungs. PMNs are recruited from the pulmonary vasculature in response to chemotactic stimuli (cytokines) generated by lung cells, including alveolar macrophages, which patrol the lungs as part of the normal lung defense to phagocytose and clear foreign material. Additional alveolar macrophages are recruited from blood monocytes into the lung alveoli in response to particle deposition. Typically, the PMN response is short-lived but may become chronic in response to persistent stimuli, e.g., prolonged particle exposure.
Particle-induced pulmonary inflammation, oxidative stress, lung tissue damage, and epithelial cell proliferation are considered to be the key steps leading to lung tumor development in the rat, acting through a secondary genotoxic mechanism. Oxidative stress is considered the underlying mechanism of the proliferative and genotoxic responses to poorly soluble particles, including TiO2 and other PSLT. Reactive oxygen or nitrogen species (ROS/RNS) are released by inflammatory cells (macrophages and PMNs) and/or by reactive particle surfaces. Oxidative stress results from an imbalance between the damaging oxidants and the protective antioxidants. Oxidants can damage the lung epithelial tissue and may also induce genetic damage in proliferating epithelial cells, increasing the probability of neoplastic transformation. The mechanisms linking pulmonary inflammation with lung cancer may involve (1) induction of hprt mutations in the DNA of lung epithelial cells by inflammatory cell-derived oxidants and increased cell proliferation, and (2) inhibition of the nucleotide excision repair of DNA (with adducts from other exposures such as polycyclic aromatic hydrocarbons) in the lung epithelial cells. Both of these mechanisms involve induction of cell-derived inflammatory mediators, without requiring particle surface-generated oxidants.
A secondary genotoxic mechanism would, in theory, involve a threshold dose that triggers inflammation and overwhelms the body's antioxidant and DNA repair mechanisms. Antioxidant defense responses vary across species, and interindividual variability is generally greater in humans than in laboratory animals; moreover, the quantitative aspects of the inflammatory response (level and duration) that are sufficient to cause a high probability of lung tumor development are not known.
While there is a clear association between inflammation and genotoxicity, the specific linkages between key cellular processes such as cell cycle arrest, DNA repair, proliferation, and apoptosis are not well understood.
Thus, chronic pulmonary inflammation appears to be required in the development of lung tumors in rats following chronic inhalation exposure to TiO2; i.e., TiO2 acts through a secondary genotoxic mechanism involving oxidative DNA damage. An implication of this mechanism is that maintaining exposures below those causing inflammation would also prevent tumor development, although the distribution of inflammatory responses in human populations is not known and a direct genotoxic mechanism cannot be ruled out for discrete nanoscale TiO2 (see Section 3.5.2.1).
# Dose Metric and Surface Properties
High mass or volume doses of fine PSLT particles in the lungs have been associated with overloading, while ultrafine particles impair lung clearance at lower mass or volume doses.
The increased lung retention and inflammatory response of ultrafine PSLT particles compared to fine PSLT particles correlate better with the particle surface area dose. Some evidence suggests that reduced lung clearance of ultrafine particles may involve mechanisms other than high-dose overloading, such as altered alveolar macrophage function (phagocytosis or chemotaxis).
The quantitative relationships between the particle dose (expressed as mass or surface area) and the pulmonary responses of inflammation or lung tumors can be determined from the results of subchronic or chronic inhalation studies in rats. When the rat lung dose is expressed as particle mass, several different dose-response relationships are observed for pulmonary inflammation following subchronic inhalation of various types of poorly soluble particles (Figure 3-1). However, when dose is converted to particle surface area, the different types (TiO2 and BaSO4) and sizes (ultrafine and fine TiO2) of PSLT particles can be described by the same dose-response curve (Figure 3-2), while crystalline silica, SiO2 (a high-toxicity particle), demonstrates a more inflammogenic response than PSLT particles at a given mass or surface area dose. Similarly, the rat lung tumor response to various types and sizes of respirable particles (including fine and ultrafine TiO2, toner, coal dust, diesel exhaust particulate, carbon black, and talc) has been associated with the total particle surface area dose in the lungs (Figure 3-3). This relationship, shown by Oberdorster and Yu, was extended by Driscoll to include results from subsequent chronic inhalation studies in rats exposed to PSLT particles and by Miller, who refit these data using a logistic regression model. The lung tumor response in these analyses is based on all lung tumors, since some of the studies did not distinguish the keratinizing squamous cell cysts from the squamous cell carcinomas. Keratinizing squamous cell cysts have been observed primarily in the lungs of female rats exposed to PSLT, including TiO2 (discussed further in Sections 3.2.5, 3.5.2.4, and 3.6).
Figure 3-3. Relationship between particle surface area dose in the lungs of rats after chronic inhalation of various types of poorly soluble, low-toxicity (PSLT) particles and tumor proportion (all tumors including keratinizing squamous cell cysts)
Data source: toner, coal dust, diesel exhaust particulate, TiO2, talc.

Figures 3-4 and 3-5 show the particle mass and surface area dose-response relationships for fine and ultrafine TiO2 for chronic inhalation exposure and lung tumor response in rats.
In these figures, the lung tumor response data are shown separately for male and female rats at 24 months in Lee et al. and for female rats at 24 or 30 months, including either all tumors or tumors without keratinizing cystic tumors, since this study distinguished these tumor types. The data are plotted per gram of lung to adjust for differences in the lung mass of the two strains of rats (Sprague-Dawley and Wistar). Figure 3-4 shows that when the TiO2 dose is expressed as mass, the lung tumor response to ultrafine TiO2 is much greater at a given dose than that for fine TiO2; yet when the TiO2 dose is expressed as particle surface area, both fine and ultrafine TiO2 data fit the same dose-response curve (Figure 3-5). These findings indicate that, like other PSLT particles, TiO2 is a rat lung carcinogen, and for a given surface area dose, the equivalent mass dose associated with an elevated tumor response would be much higher for fine particles than for ultrafine particles.
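To make the dose-metric argument concrete, the mass-to-surface-area conversion can be sketched in a few lines of Python. This is an illustrative sketch, not part of the bulletin's analysis: the specific surface areas are the values cited in Chapter 4 for the chronic studies, while the target dose is a hypothetical number chosen only for illustration.

```python
# Minimal sketch: why surface area unifies the fine/ultrafine dose-response curves.
# Specific surface areas (SSA) are the values cited in Chapter 4; the target
# surface-area dose below is hypothetical.

SSA_FINE = 4.99       # m2/g, fine rutile TiO2 (Lee et al., per Driscoll)
SSA_ULTRAFINE = 48.0  # m2/g, ultrafine TiO2 (Heinrich et al.)

def surface_area_dose(mass_mg_per_g_lung, ssa_m2_per_g):
    """Convert a retained mass burden (mg TiO2/g lung) to a particle
    surface-area dose (m2 TiO2/g lung)."""
    return mass_mg_per_g_lung / 1000.0 * ssa_m2_per_g

# Equal surface-area doses require roughly 10x more mass of fine TiO2:
target_sa_dose = 0.05  # m2 TiO2 per g lung (hypothetical)
for label, ssa in [("fine", SSA_FINE), ("ultrafine", SSA_ULTRAFINE)]:
    mass_needed = target_sa_dose * 1000.0 / ssa
    print(f"{label}: {mass_needed:.1f} mg/g lung -> {target_sa_dose} m2/g lung")
```

Running this shows that about ten times more fine than ultrafine TiO2 mass is needed to reach the same surface-area dose, which is why the mass-based curves in Figure 3-4 diverge while the surface-area-based curves in Figure 3-5 coincide.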
Figure 3-4. TiO2 mass dose in the lungs of rats exposed by inhalation for 2 years and tumor proportion (either all tumors or tumors excluding keratinizing squamous cell cysts)
Note: Spline model fits to Lee data. (Heinrich dose data are jittered, i.e., staggered.)
Data source: Heinrich et al., Lee et al.

Figure 3-5. TiO2 surface area dose in the lungs of rats exposed by inhalation for 2 years and tumor proportion (either all tumors or tumors excluding keratinizing squamous cell cysts)
Note: Spline model fits to Lee data. (Heinrich dose data are jittered, i.e., staggered.)
Data source: Heinrich et al., Lee et al.

The difference in TiO2 crystal structure in these subchronic and chronic studies did not influence the dose-response relationships for pulmonary inflammation and lung tumors. That is, the particle surface area dose and response relationships were consistent for the ultrafine (80% anatase, 20% rutile) and fine (99% rutile) TiO2 despite the differences in crystal structure. In contrast, differences in ROS generation and toxicity have been observed for different TiO2 crystal structures in cell-free, in vitro, and short-term in vivo studies. Cell-free assays have reported that crystal structure (anatase, rutile, or mixtures) influences particle surface ROS generation. In a cell-free study designed to investigate the role of surface area and crystal structure in particle ROS generation, Jiang et al. observed that size, surface area, and crystal structure all contribute to ROS generation. The ROS generation was associated with the number of defective sites per unit surface area, which was constant for many of the particle sizes but varied for some of the smaller particle sizes (10-30 nm) due to differences in particle generation methods. In an in vitro cell assay, cytotoxicity was associated with greater ROS generation from photoactivated TiO2. In a pulmonary toxicity study in rats, the surface activity of the TiO2 particle (related to crystal structure, passivation, and acidity) was associated with the inflammation and cell proliferation responses at early time points, but not at the later time points.
Although these studies (cited in the preceding paragraph) indicate that the particle surface properties pertaining to the crystal structure of TiO2, including photoactivation, can influence ROS generation, cytotoxicity, and acute lung responses, these studies also show that crystal structure does not influence the pulmonary inflammation or tumor responses following subchronic or chronic exposures.
The reasons for these differences between the acute and longer-term responses with respect to TiO2 crystal structure are not known but could relate to immediate effects of exposure to photoactivated TiO2 and to quenching of ROS on the TiO2 surfaces by lung surfactant. Unlike crystalline silica, which elicits a pronounced pulmonary inflammation response, the inflammatory response to TiO2 of either the rutile or anatase/rutile crystal form was much less pronounced after subchronic exposure in rats [Bermudez et al. 2002, 2004] (Figure 3-2).
Reactive species (ROS/RNS) are also produced by alveolar macrophages and inflammatory cells during lung clearance and immunological responses to inhaled particles, and the oxidative stress that can occur when antioxidant defenses are overwhelmed is considered an underlying mechanism of the proliferative and genotoxic responses to inhaled particles. PSLT particles, including TiO2, have relatively low surface reactivity compared to more inherently toxic particles with higher surface activity, such as crystalline silica. These findings are based on the studies in the scientific literature and may not apply to other formulations, surface coatings, or treatments of TiO2 for which data were not available.
# Particle-Associated Lung Responses
# Rodent Lung Responses to Fine and Ultrafine TiO2
Like other PSLT, fine and ultrafine TiO2 can elicit persistent pulmonary inflammation at a sufficiently high dose and/or duration of exposure. This occurs at doses that impair the normal clearance of particles from the alveolar (gas exchange) region of the lungs, i.e., "overloading" of alveolar macrophage-mediated particle clearance from the lungs. As the lung burden increases, alveolar macrophages become activated and release ROS/RNS and cellular factors that stimulate pathological events. Lung overload is characterized in rats, and to some extent in mice and hamsters, by increased accumulation of particle-laden macrophages, increased lung weight, infiltration of neutrophils, increased epithelial permeability, increased transfer of particles to lymph nodes, persistent inflammation, lipoproteinosis, fibrosis, alveolar epithelial cell hyperplasia, and (eventually in rats) metaplasia and nonneoplastic and neoplastic tumors.
Lung inflammation and histopathological responses to TiO2 and other PSLT particles were more severe in the rat than in the mouse or hamster strains studied. Qualitatively similar early lung responses have been observed in rodents, especially rats and mice, although differences in disease progression and severity were also seen. Hamsters continued to effectively clear particles from the lungs, while the mice and rats developed overloading of lung clearance and retained higher lung particle burdens postexposure. Mice and rats developed persistent pulmonary inflammation through 52 weeks after a 13-week exposure to 50 or 250 mg/m3 fine TiO2 or to 10 mg/m3 ultrafine TiO2. The rat lung response to another PSLT (carbon black) was more inflammatory (e.g., greater and more sustained generation of ROS in lung cells and higher levels of some inflammatory cytokines in BALF) than that of mice or hamsters. All three rodent species developed proliferative epithelial changes at the higher doses; however, only the rat developed metaplasia and fibrosis, and only rats had a significantly elevated lung tumor response.
Different strains of mice were used in the subchronic and chronic studies (B6C3F1/CrlBR in Bermudez et al. and NMRI in Heinrich et al.). The NMRI mice had a high background tumor response (in the unexposed control mice), which might have limited the ability to detect any particle-related increase in that study. The B6C3F1/CrlBR mouse strain, which showed a proliferative alveolar cell response to ultrafine TiO2, has not been evaluated for tumor response to TiO2. Thus, the mouse data are of limited utility in evaluating a TiO2 lung tumor response. In addition, the rapid particle lung clearance in hamsters and their relative insensitivity to inhaled particles also limit the usefulness of that species in evaluating lung responses to TiO2. In the rat studies, three different strains were used (Sprague-Dawley, Wistar, and Fischer-344), although no heterogeneity was observed in the dose-response relationship when these data were pooled and the dose was expressed as particle surface area per gram of lung (Appendix A). Based on the available data, the rat is the rodent species most sensitive to the pulmonary effects (inflammation and tumors) of inhaling PSLT particles such as TiO2.
# Comparison of Rodent and Human Lung Responses to PSLT Including TiO2
# Particle retention and lung response
The similarities and differences between human and rat lung structures, particle deposition, and responses to inhaled poorly soluble particles, including TiO2, have been described. For example, there are structural differences in the small airways of the lungs of rats and humans (e.g., lack of well-defined respiratory bronchioles in rats). However, this region of the lungs is a primary site of particle deposition in both species, and particles that deposit in this region can translocate into the interstitium, where they can elicit inflammatory and fibrotic responses.
Both similarities and differences in rat and human nonneoplastic responses to inhaled particles were observed in a comparative pathology study (based on lung tissues from autopsy studies in workers and from chronic inhalation studies in rats, with chronic exposure to either coal dust, silica, or talc in both species). Humans and rats showed some consistency in response by type of dust; i.e., granulomatous nodular inflammation was more severe among workers and rats exposed to silica or talc than with coal dust. Similarly, humans and rats showed some graded response by dose; e.g., more severe centriacinar fibrosis at high versus low coal dust exposure in both humans and rats. In humans, the centriacinar fibrotic response was more severe among individuals exposed to either silica or coal dust compared to the rat response to these dusts. In rats, intra-alveolar acute inflammation, lipoproteinosis, and alveolar epithelial hyperplasia responses were more severe following chronic exposure to silica, talc, or coal dust compared to those responses in humans.
The greater inflammatory and cell proliferation responses in rats suggest that they may be more susceptible than humans to lung tumor responses to inhaled particles via a secondary genotoxic mechanism involving chronic pulmonary inflammation, oxidative stress, and cell damage and proliferation. However, it is also important to note that alveolar epithelial hyperplasia was clearly observed in workers exposed to silica, talc, or coal dust (and this response was more severe in humans than in rats at "low" coal dust exposure). An important consideration in interpreting these study results is that the quantitative dust exposures were well known in rats and poorly known in humans. The rat exposures to these dusts ranged from 2 to 18 mg/m3 for 2 years. In humans, the exposure concentrations and durations were not reported for silica and talc; for coal dust, the durations were not reported, and "high" versus "low" exposure was defined as having worked before or after the 2 mg/m3 standard. Since it is not possible from these data to determine how well the dust exposures and lung burdens compare between the rats and humans, the qualitative comparisons are probably more reliable than the quantitative comparisons; apparent quantitative differences could be due in part to unknown differences in the lung doses of rats and humans.
Different patterns of particle retention have been observed in rats, monkeys, and humans. Although no data are available on TiO2 particle retention patterns, coal dust (fine size) and diesel exhaust particles (ultrafine) were retained at a higher volume percentage in the alveolar lumen in rats and in the interstitium in monkeys. The animals had been exposed by inhalation to 2 mg/m3 of coal dust and/or diesel exhaust particulate for 2 years.
A greater proportion of particles was also retained in the interstitium in humans compared to rats. In humans, the proportion of particles in the interstitium increased as the duration of exposure and estimated coal dust concentration increased. In rats, the particle retention pattern did not vary with increasing concentration of diesel exhaust particulate from 0.35 to 7.0 mg/m3. The increased interstitialization of particles in rats at high doses has also been inferred from the increasing particle mass measured in the lung-associated lymph nodes, since the movement of particles from the alveolar region to the pulmonary lymphatics requires transport through the alveolar epithelium and its basement membrane. Thus, the particle retention observed in rats at overloading doses may better represent the particle retention in the lungs of workers in dusty jobs, such as coal miners, where little or no particle clearance was observed among retired miners. In addition, similar particle retention patterns (i.e., location in lungs) have been observed in rats, mice, and hamsters, yet the rat lung response is more severe. The retained dose is clearly a main factor in the lung response, and the low response in the hamster has been attributed to its fast clearance and low retention of particles. These findings suggest that the particle retention pattern is not the only, or necessarily the most important, factor influencing differences in lung response between species, including rat and human.
Particle size and the ability to translocate from the lung alveolar region into the interstitium may also influence lung tissue responses. Borm et al. noted that the rat lung tumor response increased linearly with chronic pulmonary inflammation (following IT of various types of fine-sized particles), but that the rat lung tumor response to ultrafine TiO2 was much greater relative to the inflammation response. They suggested that tumor development from ultrafine particles may be due to high interstitialization rather than to overload and its sequelae as seen for the fine-sized particles. Although this hypothesis remains unproven, it is clear that the rat lung tumor response to ultrafine TiO2 was greater than that to fine TiO2 on a mass basis. The rat lung tumor response was consistent across particle size on a particle surface area basis, suggesting that the particle surface is key to eliciting the response.
The extent of particle disaggregation in the lungs could influence the available particle surface as well as the ability of particles to translocate from the alveolar lumen into the interstitium, each of which could also influence pulmonary responses, as shown with various sizes of ultrafine and fine particles such as TiO2. In a study using simulated lung surfactant, disaggregation of P25 ultrafine TiO2 was not observed. Another nanoscale TiO2 particle (4 nm primary particle diameter; 22 nm count median diameter; 330 m2/g specific surface area) was observed inside cells and cell organelles, including the nucleus. Based on this observation, it has been suggested that an alternative genotoxic mechanism for nanoscale particles might involve direct interaction with DNA. NIOSH is not aware of any studies of the carcinogenicity of discrete nanoscale TiO2 particles that would provide information to test a hypothesis of direct genotoxicity. However, this possible mechanism has been considered in the risk assessment (Chapters 4 and 5).
# Inflammation in rats and humans

In rats that have already developed particle-associated lung tumors, the percentage of PMNs in the lungs is relatively high (e.g., 40%-60%). However, it is not known what sustained level of PMNs is required to trigger epithelial proliferation and tumorigenic responses. In a chronic inhalation study, an average level of approximately 4% PMNs in BALF was measured in rats at the first dose and time point associated with a statistically significantly increased lung clearance half-time (i.e., a relatively early stage of overloading). Similarly, in a subchronic inhalation study, 4% PMNs was predicted in rats at an average lung dose of TiO2 or BaSO4 that was not yet overloaded (based on measured particle lung burden). The relatively slight differences in the rat responses in these studies may be due in part to differences in rat strains (Fischer-344; Wistar) and in the measures used to assess overloading (clearance half-time and lung burden retention).

In humans, chronic inflammation has been associated with nonneoplastic lung diseases in workers with dusty jobs. Rom found a statistically significant increase in the percentage of PMNs in BALF of workers with respiratory impairment who had been exposed to asbestos, coal, or silica (4.5% PMNs in cases vs. 1.5% PMNs in controls). Elevated levels of PMNs have been observed in the BALF of miners with simple coal workers' pneumoconiosis (31% of total BALF cells vs. 3% in controls) and in patients with acute silicosis (also a ten-fold increase over controls). Humans with lung diseases that are characterized by chronic inflammation and epithelial cell proliferation (e.g., idiopathic pulmonary fibrosis, diffuse interstitial fibrosis associated with pneumoconiosis) have an increased risk of lung cancer. Dose-related increases in lung cancer have been observed in workers exposed to respirable crystalline silica, which can cause inflammation and oxidative tissue damage. Chronic inflammation appears to be important in the etiology of dust-related lung disease, not only in rats, but also in humans with dusty jobs and interstitial fibrosis.
The percentage of PMNs in the BALF of normal controls "rarely exceeded" 4%. In other human studies, "normal" PMN percentages ranged from 2%-17%. An average of 3% PMNs (range 0%-10%) was observed in the BALF of nonsmoker controls (without lung disease), while 0.8% (range 0.2%-3%) was seen in smoker controls (the absolute PMN count in smokers was nevertheless higher; the lower percentage was due to elevation of other lung cell populations in smokers). An elevation of 10% PMNs in the BALF of an individual has also been cited as being clinically abnormal.
# Noncancer responses to TiO2 in rats and humans
Case studies of workers exposed to TiO2 provide some limited information for comparing human lung responses to those observed in the rat. In both human and animal studies, TiO2 has been shown to persist in the lungs. In some workers, extensive pulmonary deposition was observed years after workplace exposure to TiO2 had ceased. This suggests that human lung retention of TiO2 may be more similar to that in rats and mice than to that in hamsters, which continue to clear the particles effectively at exposure concentrations that caused overloading in rats and mice [Everitt et al. 2000; Bermudez et al. 2002, 2004].
Inflammation, observed on pathologic examination of lung tissue, was associated with titanium (by X-ray or elemental analysis) in the majority of human cases with heavy TiO2 deposition in the lung. Pulmonary inflammation has also been observed in studies of rats, mice, and hamsters exposed to TiO2. Continued pulmonary inflammation in the lungs of some exposed workers after exposure cessation also appears to be more consistent with the findings in rats and mice than in hamsters, in which inflammation gradually resolved with cessation of exposure.
In one case study, pulmonary alveolar proteinosis (lipoproteinosis) was reported in a painter whose lung concentrations of titanium (60-129 x 10^6 particles/cm3 lung tissue) were "among the highest recorded" in a database of similar analyses. The titanium-containing particles were "consistent with titanium dioxide" and were the major type of particle found in the worker's lungs. In this case, the lipoproteinosis appeared more extensive than the lipoproteinosis seen in TiO2-exposed rats. Thus, although the rat lipoproteinosis response was generally more severe than that in workers exposed to various fine-sized dusts, this case study illustrates that humans can also experience a severe lipoproteinosis response associated with particle retention in the lungs.
Mild fibrosis has been observed in the lungs of workers exposed to TiO2 and in rats with chronic inhalation exposure to TiO2.
In laboratory animals, alveolar metaplasia has been observed in rats, but not in mice or hamsters, after subchronic inhalation exposure to fine and ultrafine TiO2 [Lee et al. 1985; Everitt et al. 2000; Bermudez et al. 2002, 2004].
Although these studies suggest some similarities between the lung responses reported in case studies of workers exposed to respirable TiO2 and those in experimental studies in mice and rats, the human studies are limited by being observational in nature and by lacking quantitative exposure data. Information was typically not available on various factors, including other exposures, that could have contributed to these lung responses in the workers. Also, systematic histopathological comparisons were not performed, for example, on the specific alveolar metaplastic changes of the rat and human lungs.
# Lung tumor types in rats and humans
Lung tumors observed in rats following chronic inhalation of TiO2 include squamous cell keratinizing cysts, bronchioloalveolar adenomas, squamous cell carcinomas, and adenocarcinomas (see Section 3.2.5). The significance of the squamous cell keratinizing cystic tumor (a.k.a. proliferative keratin cyst) for human risk assessment has been evaluated. Squamous cell keratinizing cystic tumors are most prevalent in female rats exposed to high mass or surface area concentrations of PSLT. In a recent reanalysis of the lung tumors in the Lee et al. chronic inhalation study of fine (pigment-grade) TiO2, the 15 lesions originally recorded as "squamous cell carcinoma" were reclassified as 16 tumors, including two squamous metaplasias, 13 pulmonary keratin cysts, and one squamous cell carcinoma. This reevaluation is consistent with the earlier evaluation of the squamous cell keratinizing cystic tumors by Boorman et al. The classification of the 29 bronchioloalveolar adenomas in Warheit and Frame remained unchanged from that reported in Lee et al.
Human and rat lung cancer cell types have similarities and differences in their histopathologies. The respiratory tracts of humans and rats are qualitatively similar in their major structures and functions, although there are also specific differences, such as the absence of respiratory bronchioles in rats. In humans, the major cell types of lung cancer worldwide are adenocarcinoma and squamous-cell carcinoma (also observed in rats) and small- and large-cell anaplastic carcinoma (not seen in rats). In rats exposed to PSLT, most cancers are adenocarcinomas or squamous-cell carcinomas of the alveolar ducts. In recent years, there has been a shift in the worldwide prevalence of human lung tumors toward adenocarcinomas in the bronchoalveolar region. Maronpot et al. suggest that some of the apparent difference in bronchioloalveolar carcinoma incidence between rodents and humans can be explained by differences in terminology, and that a more accurate comparison would combine the adenocarcinomas and bronchioloalveolar carcinomas in humans, which would significantly reduce the apparent difference. Cigarette smoking in humans is also likely to contribute to the difference between the incidences of human and rodent tumor types. Maronpot et al. suggest that if the smoking-related tumor types were eliminated from the comparison, the major lung tumor types in humans would be adenocarcinomas and bronchioloalveolar carcinomas, which would correspond closely to the types of lung tumors occurring in rodents.
# Rat Model in Risk Assessment of Inhaled Particles
The extent to which the rat model is relevant for predicting human lung doses and responses to inhaled particles, including TiO2, has been the subject of debate. Lung clearance of particles is slower in humans than in rats, by approximately an order of magnitude, and some humans (e.g., coal miners) may be exposed to concentrations resulting in doses that would overload particle clearance from rat lungs. Thus, given these similarities between particle retention in the lungs of rats at overloading doses and in workers with high dust exposures, the doses that cause overloading in rats appear to be relevant to estimating disease risk in workers.
Some have stated that the inhalation dose-response data from rats exposed to PSLT particles should not be used in extrapolating cancer risks to humans, because the rat lung tumor response has been attributed to a rat-specific response to the overloading of particle clearance from the lungs. While some of the tumors (keratinizing cystic tumors) may not be relevant to humans, other types of tumors (adenomas, adenocarcinomas, squamous cell carcinomas) do occur in humans.
Mice and hamsters are known to give false negatives to a greater extent than rats in bioassays for some particulates that have been classified by IARC as human carcinogens (limited or sufficient evidence), including crystalline silica and nickel subsulfide; and the mouse lung tumor response to other known human particulate carcinogens (including beryllium, cadmium, nickel oxide, tobacco smoke, asbestos, and diesel exhaust) is substantially less than that in rats. These particulates may act by various plausible mechanisms, which are not fully understood. The risks of several known human particulate carcinogens would thus be underestimated by using dose-response data from rodent models other than the rat. Although the mechanism of particle-elicited lung tumors remains to be fully elucidated, the rat and human lung responses to poorly soluble particles of low or high toxicity (e.g., coal dust and silica) are qualitatively similar in many of the key steps for which there are data, including pulmonary inflammation, oxidative stress, and alveolar epithelial cell hyperplasia. Case studies of lung responses in humans exposed to TiO2 consistently show some similarities with rat responses (see Section 3.5.2.3). Semiquantitative comparisons of the lungs of rats and humans exposed to diesel exhaust, coal dust, silica, or talc indicate both similarities and differences, including similar regions of particle retention in the lungs but at different proportions, and a greater fibrotic response in humans but greater inflammation and epithelial hyperplasia responses in rats. These sensitive inflammatory and proliferative responses to particles in the lungs are considered key to the rat lung tumor response. Although the data for quantitative comparison of rat and human dose-response relationships to inhaled particles are limited, the available data (e.g., crystalline silica and diesel exhaust particles) indicate that the rat-based estimates are not substantially greater, and some are lower, than the human-based risk estimates, and that, in the absence of mechanistic data to the contrary, it is reasonable to assume that the rat model can identify potential carcinogenic hazards of poorly soluble particles, including PSLT such as TiO2, to humans.
# Quantitative Risk Assessment
# Data and Approach
Dose-response data are needed to quantify the risks of workers exposed to TiO2. Such data may be obtained either from human studies or extrapolated to humans from animal studies. The epidemiologic studies on lung cancer have not shown a dose-response relationship in TiO2 workers. However, dose-response data are available in rats for both cancer (lung tumors) and early, noncancer (pulmonary inflammation) endpoints. The lung tumor data (see Table 4-4) are from chronic inhalation studies and include three dose groups for fine TiO2 and one dose group for ultrafine TiO2 (in addition to controls). The pulmonary inflammation data are from subchronic inhalation studies of fine and ultrafine particles. Various modeling approaches are used to fit these data and to estimate the risk of disease in workers exposed to TiO2 for up to a 45-year working lifetime.
The modeling results from the rat dose-response data provide the quantitative basis for developing the RELs for TiO2, while the mechanistic data from rodent and human studies (Chapter 3) inform the selection of the risk assessment models and methods. The practical aspects of mass-based aerosol sampling and analysis were also considered in the overall approach (i.e., the conversion between particle surface area for the rat dose-response relationships and mass for the human dose estimates and RELs). Figure 4-1 illustrates this risk assessment approach.
# Methods
Dose-response modeling was used to estimate the retained particle burden in the lungs associated with lung tumors or pulmonary inflammation. Both maximum likelihood estimates (MLEs) and 95% lower confidence interval (CI) estimates of the internal lung doses in rats were computed. Particle surface area was the dose metric used in these models because it has been shown to be a better predictor than particle mass of both cancer and noncancer responses in rats (Chapter 3). In the absence of quantitative data comparing rat and human lung responses to TiO2, rat and human lung tissue are assumed to have equal sensitivity to an equal internal dose in units of particle surface area per unit of lung surface area.
# Particle Characteristics
Study-specific values of particle MMAD, geometric standard deviation (GSD), and specific surface area were used in the dosimetric modeling when available (see Tables 4-1 and 4-4). The Heinrich et al. study reported a specific surface area (48 ± 2 m2/g ultrafine TiO2) for the airborne particulate, as measured by the BET N2 adsorption method. For the Lee et al. study, the specific surface area (4.99 m2/g fine TiO2) reported by Driscoll was used; that value was based on measurement of the specific surface area of a rutile TiO2 sample similar to that used in the Lee study. This specific surface area was also assumed for the fine TiO2 in the Muhle et al. study. Otherwise, fine TiO2 was assumed to have the particle characteristics reported by Tran et al. and a specific surface area of 6.68 m2/g, and ultrafine TiO2 was assumed to have the particle characteristics reported by Heinrich et al. and a specific surface area of 48 m2/g.

Figure 4-1. Risk assessment approach using rat dose-response data to derive recommended exposure limits for TiO2.
# Critical Dose
The term "critical dose" is defined as the re tained particle dose in the rat lung (maximum likelihood estimate or 95% lower confi dence limit ) associated with a specified response, including either initiation of inflam mation or a given excess risk of lung cancer. One measure of critical dose is the benchmark dose (BMD), which has been defined as ". . . a statistical lower confidence limit on the dose corresponding to a small increase in effect over the background level" . In cur rent practice, and as used in this document, the BMD refers to the MLE from the model; and the benchmark dose lower bound (BMDL) is the 95% LCL of the BMD , which is equivalent to the BMD as originally defined by Crump . For dichotomous noncancer responses the benchmark response level is typically set at a 5% or 10% excess risk; however, there is less agreement on benchmark response levels for continuous responses. The benchmark response level used in this analysis for pulmonary inflammation, as assessed by el evated levels of PMNs, was 4% PMNs in BALF (see Section 3.5.2.2). Another measure of critical dose, used in an earlier draft of this current intelligence bulletin, was the esti mated threshold dose derived from a piecewise linear model fit to the pulmonary inflamma tion data (Appendix B). As discussed in Section 4.3.1.2, not all of the current data sets are com patible with a threshold model for pulmonary inflammation; however, the threshold estimates described in Appendix B have been retained in this document for comparative purposes. For lung tumors the approach to estimating critical lung doses was to determine the doses associ ated with a specified level of excess risk (e.g., 1 excess case per 1,000 workers exposed over a 45-year working lifetime), either estimated di rectly from a selected model or by model av eraging using a suite of models . The 1 per 1,000 lifetime excess risk level was considered to be a significant risk based on the "benzene" decision .
# Estimating Human Equivalent Exposure
The critical doses were derived using particle surface area per gram of rat lung, which was estimated from the mass lung burden data, rat lung weights, and measurements or estimates of specific surface area (i.e., particle surface area per unit mass). The use of a particle surface area per gram of lung dose for the rat dose-response analyses was necessary in order to normalize the rat lung weights to a common dose measure, since several strains of rats with varying lung weights were used in the analyses. The critical doses were then multiplied by 1.5 in order to normalize them to rats of the size used as a reference for lung surface area, as these were estimated to have lung weights of approximately 1.5 grams, based on the animals' body weights. The critical doses were then extrapolated to humans based on the ratio of rat to human lung surface areas, which were assumed to be 0.41 m2 for Fischer 344 rats, 0.4 m2 for Sprague-Dawley rats, and 102.2 m2 for humans. These critical particle surface area doses were then converted back to particle mass doses for humans, because the current human lung dosimetry models (used to estimate the airborne concentrations leading to the critical lung doses) are all mass-based and because the current occupational exposure limits for most airborne particulates, including TiO2, are also mass-based.
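The extrapolation arithmetic described in this paragraph can be expressed as a short sketch. The constants (1.5 g reference rat lung, 0.4 m2 rat and 102.2 m2 human alveolar surface area) are those stated in the text; the example critical dose and specific surface area in the usage line are hypothetical illustration values.

```python
# Sketch of the rat-to-human critical dose extrapolation described above.
# Lung constants are from the text; the example inputs are hypothetical.

REF_RAT_LUNG_G = 1.5      # g, reference rat lung mass
RAT_LUNG_SA_M2 = 0.4      # m2, rat alveolar surface area (Sprague-Dawley)
HUMAN_LUNG_SA_M2 = 102.2  # m2, human alveolar surface area

def human_equivalent_burden_g(critical_dose_m2_per_g, ssa_m2_per_g):
    """Scale a rat critical dose (m2 TiO2/g lung) to a human lung burden
    (g TiO2), assuming equal response at equal particle surface area per
    unit of alveolar surface area."""
    per_rat_lung_m2 = critical_dose_m2_per_g * REF_RAT_LUNG_G
    per_human_lung_m2 = per_rat_lung_m2 * HUMAN_LUNG_SA_M2 / RAT_LUNG_SA_M2
    return per_human_lung_m2 / ssa_m2_per_g  # surface area back to mass

# Example: hypothetical critical dose of 0.02 m2/g lung, ultrafine TiO2 (48 m2/g):
burden_mg = human_equivalent_burden_g(0.02, 48.0) * 1000.0
print(f"human-equivalent lung burden: {burden_mg:.0f} mg TiO2")
```

The final division by the specific surface area is what converts the surface-area-based critical dose back to the mass units required by the human lung dosimetry models and the mass-based RELs.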
# Particle Dosimetry Modeling
The multiple-path particle dosimetry model, version 2 (MPPD2), human lung dosimetry model was used to estimate the working lifetime airborne mass concentrations associated with the critical doses in human lungs, as extrapolated from the rat dose-response data. The specific MPPD2 module used was the Yeh/Schum Symmetric model, assuming 17.5 breaths per minute and a tidal volume of 1,143 ml, with exposures of 8 hours per day, 5 days per week, for a working lifetime of 45 years. The total of alveolar TiO2 plus TiO2 in the lung-associated lymph nodes was considered to be the critical human lung dose.
The respiratory frequency and tidal volume were chosen to be consistent with the International Commission on Radiological Protection (ICRP) parameter values for occupational exposure, which equate an occupational exposure to 5.5 hours/day of light exercise and 2.5 hours/day of sedentary sitting, with a total inhalation volume of 9.6 m3 in an 8-hour day. The ICRP assumes 20 breaths per minute and a tidal volume of 1,250 ml for light exercise, and 12 breaths per minute and a tidal volume of 625 ml for sedentary sitting. The values assumed in this analysis for modeling TiO2 exposures (i.e., 17.5 breaths per minute and a tidal volume of 1,143 ml) are a weighted average of the respiratory values for the light exercise and sedentary sitting conditions and are designed to match the ICRP value for total daily occupational inhalation volume.
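The weighted values quoted above can be reproduced with a few lines of arithmetic, using only the ICRP figures given in the text (a sketch for verification, not part of the MPPD2 model itself):

```python
# Reproducing the weighted ICRP occupational breathing parameters.
HOURS_LIGHT, HOURS_SIT = 5.5, 2.5   # hours per 8-hour shift
F_LIGHT, F_SIT = 20.0, 12.0         # breaths per minute
DAILY_VOLUME_M3 = 9.6               # ICRP occupational inhalation volume

# Time-weighted breathing frequency over the shift:
f_avg = (F_LIGHT * HOURS_LIGHT + F_SIT * HOURS_SIT) / (HOURS_LIGHT + HOURS_SIT)

# Tidal volume chosen so total inhaled volume matches the ICRP daily value:
breaths_per_shift = f_avg * 60 * 8
vt_ml = DAILY_VOLUME_M3 * 1e6 / breaths_per_shift  # 1 m3 = 1e6 ml

print(f"{f_avg:.1f} breaths/min, {vt_ml:.0f} ml tidal volume")  # 17.5, ~1143
```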
In summary, the dose-response data in rats were used to determine the critical dose associated with pulmonary inflammation or lung tumors, expressed as particle surface area per surface area of lung tissue. The working lifetime airborne mass concentrations associated with the human-equivalent critical lung burdens were estimated using human lung dosimetry models. The results of these quantitative analyses and the derivation of the RELs for fine and ultrafine TiO2 are provided in the remainder of this chapter.
# Dose-Response Modeling of Rat Data and Extrapolation to Humans
# Pulmonary Inflammation

# Rat data
Data from four different subchronic inhalation studies in rats were used to investigate the relationship between particle surface area dose and pulmonary inflammation response: (1) TiO2 used as a control in a study of the toxicity of volcanic ash, (2) fine TiO2 and BaSO4 in a study of particle surface area as the dose metric, (3) fine TiO2 in a multidose study, and (4) ultrafine TiO2 in a multidose study. Details of these studies are provided in Table 4-1.
# Critical dose estimates in rats
The TiO2 pulmonary inflammation data from the Tran et al. and Cullen et al. studies could be fitted with a piecewise linear model that included a threshold parameter (described in Appendix B), and the threshold parameter estimate was significantly different from zero at a 95% confidence level. The MLE of the threshold dose was 0.0134 m2 (particle surface area for either fine or ultrafine TiO2) for TiO2 alone (90% CI = 0.0109-0.0145) based on data from Tran et al. and 0.0409 m2 (90% CI = 0.0395-0.0484) based on data from Cullen et al. However, the fine and ultrafine TiO2 pulmonary inflammation data from the Bermudez et al. 2002 and 2004 data sets provided no indication of a nonzero response threshold and were not consistent with a threshold model. Therefore, critical dose estimation for the pulmonary inflammation data was carried out via a benchmark dose approach, since benchmark dose models could be fit to all three of the data sets (Tran et al., Cullen et al., and the combined data from Bermudez et al. 2002 and 2004).
Continuous models in the benchmark dose software (BMDS) suite were fitted to the pulmonary inflammation data using percent neutrophils as the response and TiO2 surface area (m2) per gram of lung as the predictor. The model that fit best for all three sets of TiO2 data was the Hill model, which converged and provided an adequate fit. Since there appears to be an upper limit to the degree of physiological response, the Hill model is able to capture this behavior better than the linear, quadratic, or power models. Models were fitted using a constant variance for the Cullen et al. data and a nonconstant variance for the Tran et al. data and the combined data from Bermudez et al. 2002 and 2004. The model fits for these data sets are illustrated in Figures 4-2, 4-3, and 4-4, respectively. In all models the critical dose or BMD was defined as the particle surface area per gram of lung tissue associated with a 4% neutrophil inflammatory response, which has been equated to a low-level inflammatory response (see Section 3.5.2.2). The benchmark dose estimates for pulmonary inflammation in rats are shown in Table 4-2.
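As an illustration of this step, a Hill model can be fitted to inflammation data and inverted at the 4% PMN benchmark response. This sketch uses scipy rather than BMDS, and the data points are invented for illustration; only the 4% benchmark response level comes from the text.

```python
# Sketch of the benchmark-dose step: fit a Hill model to % PMNs in BALF vs.
# TiO2 surface-area dose, then invert it at the 4% PMN benchmark response.
# The data points below are hypothetical; NIOSH used the BMDS Hill model.
import numpy as np
from scipy.optimize import curve_fit

def hill(d, y0, v, k, n):
    """Hill model: background y0 plus a saturating dose term."""
    return y0 + v * d**n / (k**n + d**n)

# Hypothetical data (dose in m2 TiO2 per g lung, response in % PMNs):
dose = np.array([0.0, 0.01, 0.05, 0.2, 0.5, 1.0])
pmn  = np.array([0.5, 1.0, 4.0, 20.0, 45.0, 55.0])

(y0, v, k, n), _ = curve_fit(hill, dose, pmn, p0=[0.5, 60.0, 0.3, 1.5])

# Invert the fitted model at the 4% PMN benchmark response level:
bmr = 4.0
frac = (bmr - y0) / v                    # fraction of the maximum response
bmd = k * (frac / (1 - frac))**(1 / n)   # solve hill(d) = bmr for d
print(f"BMD (4% PMNs): {bmd:.3f} m2 TiO2 per g lung")
```

The saturating form of the Hill model is what lets it capture the upper limit on the physiological response mentioned above; a BMDL would additionally require a lower confidence bound on this estimate.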
# Estimated human equivalent exposure
The critical dose estimates from
# Lung Tumors
# Rat data
Dose-response data from chronic inhalation studies in rats exposed to TiO2 were used to estimate working lifetime exposures and lung cancer risks in humans. These studies are described in more detail in Table 4-4 and include fine (pigment-grade) rutile TiO2 and ultrafine anatase TiO2. The doses for fine TiO2 were 5 mg/m3 and 10, 50, and 250 mg/m3. For ultrafine TiO2 there was a single dose of approximately 10 mg/m3. Each of these studies reported the retained particle mass lung burdens in the rats. The internal dose measure of particle burden at 24 months of exposure was used in the dose-response models, either as particle mass or as particle surface area (calculated from the reported or estimated particle surface area per gram).
The relationship between the particle surface area dose of either fine or ultrafine TiO2 and the lung tumor response (including all tumors or tumors excluding the squamous cell keratinizing cysts) in male and female rats was shown in Chapter 3. Statistically significant increases in lung tumors were observed at the highest dose of fine TiO2 (250 mg/m3) or ultrafine TiO2 (approximately 10 mg/m3), whether or not the squamous cell keratinizing cysts were included in the tumor counts.
Different strains and sexes of rats were used in each of these three TiO2 studies. The Lee et al. study used male and female Sprague-Dawley rats (crl:CD strain). The Heinrich et al. study used female Wistar rats. The Muhle et al. study used male and female Fischer-344 rats but reported only the average of the male and female lung tumor proportions.
The body weights and lung weights differed by rat strain and sex (Table 4-4). These lung mass differences were taken into account when calculating the internal doses, either as mass (mg TiO2/g lung tissue) or surface area (m2 TiO2/g lung tissue).
# Critical dose estimates in rats
Statistical models for quantal response were fitted to the rat tumor data, including the suite of models in the BMDS. The response variable used was either all lung tumors or tumors excluding squamous cell keratinizing cystic tumors. Figure 4-5 shows the fit of the various BMD models to the lung tumor response data (without squamous cell keratinizing cysts) in male and female rats chronically exposed to fine or ultrafine TiO2.
The lung tumor response in male and female rats was significantly different for "all tumors" but not when squamous cell keratinizing cystic tumors were removed from the analysis (Appendix A, Table A-2). In other words, the male and female rat lung tumor responses were equivalent except for the squamous cell keratinizing cystic tumor response, which was elevated only in the female rats.

[Table 4-4 appears here in the original. It summarizes the three chronic inhalation studies (Lee et al.; Muhle et al. and Bellman et al.; Heinrich et al.), including rat strain and sex, body and lung weights, exposure concentrations, and lung tumor proportions (e.g., at 24 months: 0/10 in controls and 4/9 for all tumors; at 30 months: 1/217 in controls, 19/100 excluding keratinizing cysts, and 32/100 for all tumors).]

Abbreviations for Table 4-4: GSD = geometric standard deviation; MMAD = mass median aerodynamic diameter; SA = surface area (mean or assumed mean); SD = arithmetic standard deviation; TiO2 = titanium dioxide; crl:CD and crl:(WI)BR are the rat strain names from Charles River Laboratories, Inc.
*Lung particle burdens in controls not reported; assumed to be zero.
Tumor types (Lee et al.): controls, male: 2 bronchioloalveolar adenomas. At 10 mg/m3, males: 1 large cell anaplastic carcinoma and 1 bronchioloalveolar adenoma. At 50 mg/m3, male: 1 bronchioloalveolar adenoma. At 250 mg/m3, females: 13 bronchioloalveolar adenomas and 1 squamous cell carcinoma; males: 12 bronchioloalveolar adenomas. In addition to the tumors listed above, 13 keratin cysts were observed: 1 in a 10 mg/m3 male, 1 in a 250 mg/m3 male, and 11 in 250 mg/m3 females.
Tumor types (Muhle et al.): controls, 2 adenocarcinomas and 1 adenoma; at 5 mg/m3, 1 adenocarcinoma and 1 adenoma. Dose was averaged for male and female rats because the tumor rates were reported only for male and female rats combined.
Tumor types (Heinrich et al.): controls, at 30 months, 1 adenocarcinoma; at approximately 10 mg/m3, 20 benign squamous-cell tumors, 3 squamous-cell carcinomas, 4 adenomas, and 13 adenocarcinomas (includes 8 rats with 2 tumors each).

Figure 4-5. BMD models and three-model average fit to the lung tumor data (without squamous cell keratinizing cysts) in male and female rats chronically exposed to fine or ultrafine TiO2. Note: The specific models used in constructing the model average were the multistage, Weibull, and log-probit models. The confidence intervals represent 95% binomial confidence limits for the individual data points. Data source: BMD models and three-model average, male and female rats exposed to TiO2.

To account for the heterogeneity in the "all tumor" response among male and female rats, a modified logistic regression model was developed (Appendix A); this model also adjusted for the combined mean tumor response for male and female rats reported by Muhle et al. As discussed in Chapter 3, many pathologists consider the rat lung squamous cell keratinizing cystic tumor to be irrelevant to human lung pathology. Excess risk estimates of lung tumors were estimated both ways, either with or without the squamous cell keratinizing cystic tumor data. The full results of the analyses including squamous cell keratinizing cystic tumors can be found in Appendix A. Inclusion of the keratinizing cystic tumors in the analyses resulted in slightly higher excess risk estimates in females, but not males. Since the male and female rat tumor responses may be combined when the squamous cell keratinizing cystic tumors are excluded, and exclusion does not have a major numeric impact on the risk estimates, risk estimates for TiO2-induced lung tumors are based on the combined male and female rat lung tumors, excluding the squamous cell keratinizing cystic tumors. All lung tumor-based risk estimates shown below have been derived on this basis. Classification of the squamous cell keratinizing cystic tumors in the Lee et al. study was based on a reanalysis of these lesions by Warheit and Frame.
The estimated particle surface area dose associated with a 1/1000 excess risk of lung tumors is shown in Table 4-5 for lung tumors excluding squamous cell keratinizing cystic lesions. The estimated particle surface area doses (BMD and BMDL) associated with a 1/1000 excess risk of lung cancer vary considerably depending on the shape of the model in the low-dose region. The model-based estimates were then summarized using a model averaging technique, which weights the various models based on the model fit. Model averaging provides an approach for summarizing the risk estimates from the various models, which differ in the low-dose region that is of interest for human health risk estimation, and also provides an approach for addressing the uncertainty in the choice of model in the BMD approach. The specific model averaging method used was the three-model average procedure described by Wheeler and Bailer, using the multistage, Weibull, and log-probit models; these received weights of 0.14, 0.382, and 0.478, respectively, in the averaging procedure. This approach was considered appropriate for the TiO2 data set because the dose-response relationship is nonlinear, and the specific models used in the three-model average procedure do not impose low-dose linearity on the model average if linearity is not indicated by the data. In this case the best-fitting models are all strongly sublinear; the multistage model is cubic in form, the Weibull model is similar with a power of 2.94, and the log-probit model has a slope of 1.45. Therefore the dose-response relationship is quite steep, and the estimated risk drops sharply as the dose is reduced, as shown in Table 4-7.
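As a rough illustration of how such weights arise, the sketch below computes Akaike weights from AIC values and forms a weighted summary of per-model BMDs. The AIC and BMD inputs are hypothetical (the AICs are chosen so the resulting weights match those reported in the text); the full Wheeler-Bailer procedure averages the fitted dose-response curves themselves and bootstraps the lower bounds, which this simplification does not reproduce.

```python
# Sketch: Akaike-weight model averaging. AIC and BMD values are
# hypothetical; only the weighting arithmetic is illustrated.
import numpy as np

models = ["multistage", "weibull", "log-probit"]
aic = np.array([102.3, 100.3, 99.85])        # hypothetical AIC per model

delta = aic - aic.min()                      # AIC differences
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                     # normalize to sum to 1

bmd = np.array([10.0, 13.0, 16.0])           # hypothetical per-model BMDs
print(dict(zip(models, weights.round(3))))   # ~0.140, 0.382, 0.478
print("weighted BMD:", round(float(weights @ bmd), 2))
```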
# Estimated human equivalent exposure
The critical dose estimates from Table 4-5 were converted to mass dose and extrapolated to humans by adjusting for species differences in lung surface area, as described in Section 4.2.3. Pulmonary dosimetry modeling was used to estimate the occupational exposure concentrations corresponding to the benchmark dose estimates, as described in Section 4.2.4. Lifetime occupational exposure concentrations estimated to produce human lung burdens equivalent to the rat critical dose levels for lung tumors are presented in Table 4-6 for fine and ultrafine TiO2. Estimated occupational exposure levels corresponding to various levels of lifetime excess risk are shown in Table 4-7.
This table shows the risk estimates for both fine and ultrafine TiO2. The 95% LCL estimates of the occupational exposure concentrations expected to produce a given level of lifetime excess risk are shown in the right-hand column. The concentrations shown in bold for fine and ultrafine TiO2 represent 1 per 1000 risk levels, which NIOSH has used as the basis for establishing RELs. The REL for ultrafine TiO2 was rounded from 0.29 mg/m3 to 0.3 mg/m3.

[Footnotes to Tables 4-5 through 4-7 in the original:] Abbreviations: MA = model average; BMD = benchmark dose (maximum-likelihood estimate); BMDL = benchmark dose lower bound (95% lower confidence limit for the benchmark dose); BMDS = benchmark dose software; GSD = geometric standard deviation; LCL = lower confidence limit; MLE = maximum likelihood estimate; MPPD = multiple-path particle dosimetry model; TiO2 = titanium dioxide. Response modeled: lung tumors excluding cystic keratinizing squamous lesions, from two studies of fine TiO2 and one study of ultrafine TiO2; acceptable model fit determined by P > 0.05; BMDS did not converge for one model. Model fitting and BMD and BMDL estimation were carried out as described in Wheeler and Bailer; the average model combined estimates from the multistage, Weibull, and log-probit models, and P-values are not defined in model averaging because the degrees of freedom are unknown. MLE and 95% LCL were determined in rats (Table 4-5) and extrapolated to humans based on species differences in lung surface area, as described in Section 4.2.3; mean concentration estimates were derived from the CIIT and RIVM lung model; estimates are without keratinizing cystic lesions. The agglomerated particle size (as mass median aerodynamic diameter, MMAD) for ultrafine TiO2 was used in the deposition model. Specific surface area was used to convert from particle surface area dose to mass dose; thus airborne particles with different specific surface areas would result in different mass concentration estimates from those shown here. The exposure levels shown in boldface are the 95% LCL estimates of the concentrations of fine and ultrafine TiO2 considered appropriate for establishment of a REL; the ultrafine exposure level of 0.29 mg/m3 was rounded to 0.3 for the REL.
# Alternate Models and Assumptions
The choice of dosimetry models influences the estimates of the mean airborne concentration.
A possible alternative to the MPPD model of particle deposition of CIIT and RIVM, which was used for particle dosimetry modeling in this analysis, would be an interstitialization/sequestration model that was developed and calibrated using data of U.S. coal miners and later validated using data of U.K. coal miners. The MPPD model uses the ICRP alveolar clearance model, which was developed using data on the clearance of radiolabeled tracer particles in humans and has been in use for many years. More data are needed to evaluate the model structures and determine how well each model would describe the retained doses associated with low particle exposures in humans, particularly for ultrafine particles. The MPPD model was selected for use in this analysis on the grounds that it is widely used and well accepted for particle dosimetry modeling in general, while use of the interstitialization/sequestration model has to date been limited primarily to modeling exposures to coal dust. Nevertheless, it should be noted that the interstitialization/sequestration model predicts lung burdens of fine and ultrafine TiO2 that are approximately double those predicted by the MPPD model. Thus, the use of the interstitialization/sequestration model for particle dosimetry would approximately halve the estimates of occupational exposure levels equivalent to the rat critical dose levels, compared to estimates developed using the MPPD model.
The method selected for extrapolating between rats and humans also influences the estimates of occupational exposure levels equivalent to the rat critical dose levels. To extrapolate the critical particle surface area dose in the lungs of rats to the lungs of humans, either the relative mass or surface area of the lungs in each species could be used. The results presented in this analysis are based on the relative alveolar surface area, assuming 0.41 m2 for Fischer 344 rats, 0.4 m2 for Sprague-Dawley rats, and 102.2 m2 for humans. Alternatively, extrapolation could be based on the relative lung weights of rat and human lungs, using a dose metric of particle surface area per gram of lung tissue. In that case, the estimates of the working lifetime occupational exposure levels equivalent to the rat critical dose levels would be higher by a factor of approximately four. The lung surface area-based approach was selected for this analysis because insoluble particles deposit and clear along the surface of the respiratory tract, so dose per unit surface area is often used as a normalizing factor for comparing particle doses across species.
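The surface area scaling described above amounts to a simple ratio adjustment, as in the sketch below, which uses the alveolar surface areas given in the text together with a hypothetical rat critical dose and specific surface area. The document's full conversion also runs the resulting burden through the MPPD dosimetry model to obtain an airborne concentration, which this sketch does not attempt.

```python
# Sketch: rat-to-human extrapolation by relative alveolar surface area.
RAT_ALV_SA_M2 = 0.41     # Fischer 344 rat (from the text)
HUMAN_ALV_SA_M2 = 102.2  # human (from the text)

def human_equivalent_burden(rat_dose_m2, ssa_m2_per_g):
    """Scale a rat critical particle surface area dose (m2 per lung) to a
    human lung burden; returns (surface area in m2, mass in g)."""
    human_sa = rat_dose_m2 * HUMAN_ALV_SA_M2 / RAT_ALV_SA_M2
    return human_sa, human_sa / ssa_m2_per_g

# Hypothetical inputs: a rat BMDL of 0.05 m2 and a specific surface area
# of 6.7 m2/g; actual values depend on the particles in question.
sa, mass = human_equivalent_burden(0.05, 6.7)
print(f"human-equivalent burden: {sa:.1f} m2 = {mass:.2f} g TiO2")
```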
The critical dose estimates in Table 4-5 also vary depending on the dose-response model used and on whether the MLE or the 95% LCL is used as the basis for estimation. The MLE dose estimates associated with a 1/1000 excess risk of lung tumors vary by a factor of 38, while the 95% LCL estimates vary by a factor of 14. All of the models provided statistically adequate fits to the data, and there is little basis to select one model over another. This uncertainty regarding model form has been addressed by the use of MA, as described by Wheeler and Bailer, which weights the models based on the model fit as assessed by the Akaike information criterion. However, any of the models shown could conceivably be selected as the basis for human risk estimation. Use of the model-averaged 95% LCL value, as opposed to the model-averaged MLE, is intended to address both model uncertainty and variability in the rat data; however, it is possible that the 95% LCL may underestimate the true variability of the human population.
# Mechanistic Considerations
The mechanism of action of TiO2 is relevant to a consideration of the associated risks because, as discussed earlier, the weight of evidence suggests that the tumor response observed in rats exposed to fine and ultrafine TiO2 results from a secondary genotoxic mechanism involving chronic inflammation and cell proliferation, rather than via genotoxicity of TiO2 itself. This effect appears related to the physical form of the inhaled particle (i.e., particle surface area) rather than the chemical compound itself. Other PSLT particles, such as BaSO4, carbon black, toner, and coal dust, also produce inflammation and lung tumors in proportion to particle surface area (Figures 3-2 and 3-3) and therefore appear to act via a similar mechanism.

Studies supporting this mechanism include empirical studies of the pulmonary inflammatory response of rats exposed to TiO2 and other PSLT (see Sections 3.2.4 and 3.4.1); the tumor response of TiO2 and other PSLT, which have consistent dose-response relationships (see Section 3.4.2); and in vitro studies, which show that inflammatory cells isolated from BALF from rats exposed to TiO2 released ROS that could induce mutations in naive cells (see Section 3.2.1.3).

There is some evidence, though limited, that inflammation may be a factor in the initiation of human lung cancer as well (see Section 3.5.2.2).

In considering all the data, NIOSH has determined that a plausible mechanism of action for TiO2 in rats can be described as the accumulation of TiO2 in the lungs, overloading of lung clearance mechanisms, followed by increased pulmonary inflammation and oxidative stress, cellular proliferation, and, at higher doses, tumorigenesis. These effects are better described by particle surface area than by mass dose (see Section 3.4.2). The best-fitting dose-response curves for the tumorigenicity of TiO2 are nonlinear; e.g., the multistage model in Table 4-5 is cubic with no linear term, the quantal-quadratic model is quadratic with no linear term, and the gamma and Weibull models have power terms of approximately 4 and 3, respectively. This nonlinearity is consistent with a secondary genotoxic mechanism and suggests that the carcinogenic potency of TiO2 would decrease more than proportionately with decreasing surface area dose, as described in the best-fitting risk assessment models.
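The practical consequence of this sub-linearity can be seen in a small worked example. The sketch below uses a hypothetical cubic multistage model with no linear term (coefficients invented for illustration) to show that halving the dose cuts the extra risk roughly eight-fold.

```python
# Sketch: low-dose behavior of a cubic multistage model in extra-risk
# form. The coefficients q0 and q3 are hypothetical.
import numpy as np

def multistage_cubic(d, q0=0.01, q3=2.0):
    # P(d) = 1 - (1 - P(0)) * exp(-q3 * d^3), with background P(0) = q0.
    return 1.0 - (1.0 - q0) * np.exp(-q3 * d**3)

def extra_risk(d):
    p0 = multistage_cubic(0.0)
    return (multistage_cubic(d) - p0) / (1.0 - p0)

for dose in [0.2, 0.1, 0.05]:    # each halving cuts extra risk ~8-fold
    print(dose, f"{extra_risk(dose):.2e}")
```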
# Quantitative Comparison of Risk Estimates From Human and Animal Data
A quantitative comparison was performed (Appendix C) of the rat-based excess risk estimates for human lung cancer due to exposure to fine TiO2 to the 95% upper confidence limit (UCL) of excess risk from the epidemiologic studies (Appendix D), in order to compare the rat- and human-based excess risks of lung cancer. If the sensitivity of the rat response to inhaled particulates differs from that of humans, then the excess risks derived from the rat data would be expected to differ from the excess risks estimated from the human studies.

The results of the comparison of the rat- and human-based excess risk estimates were used to assess whether or not there was adequate precision in the data to reasonably exclude the rat model as a basis for predicting the excess risk of lung cancer in humans exposed to TiO2.

The results of these comparisons showed that the MLE and 95% UCL excess risk estimates from the rat studies were lower than the 95% UCL from the human studies for an estimated working lifetime (Appendix C, Table C-1). These results indicate that, given the variability in the human studies, the rat-based excess risk estimates cannot reasonably be dismissed from use in predicting the excess risk of lung cancer in humans exposed to TiO2. Thus, NIOSH determined that it is prudent to use these rat dose-response data for risk assessment in workers exposed to TiO2.
# Possible Bases for an REL
# Pulmonary Inflammation
As discussed above, the evidence in rats suggests that the lung tumor mechanism associated with PSLT particles such as TiO2 is a secondary genotoxic mechanism involving chronic inflammation and cell proliferation.

One plausible approach to developing risk estimates for TiO2 is to estimate exposure concentrations that would not be expected to produce an inflammatory response, thus preventing the development of responses that are secondary to inflammation, including cancer. A benchmark dose analysis for pulmonary inflammation in the rat was described in Section 4.3.1, and the results of extrapolating the rat BMDs to humans are presented in Table 4-3. Since two of the three studies available yielded 95% BMDLs of 0.78 and 1.03 mg/m3, a concentration of approximately 0.9 mg/m3 is reasonable as the starting point for development of recommendations for human exposures to fine TiO2. Similarly, a concentration of approximately 0.11 mg/m3 is appropriate as the starting point for developing recommended exposures to ultrafine TiO2.
As noted in Section 4.3.1.3, the human pulmonary inflammation BMDs in Table 4-3 are estimates of frank-effect levels and should be adjusted by the application of uncertainty factors to allow for uncertainty in animal-to-human extrapolation and interindividual variability. These uncertainty factors are commonly assumed to be ten-fold for animal-to-human extrapolation and another ten-fold for interindividual variability; the animal-to-human uncertainty may be subdivided into a factor of 4 for toxicokinetics and 2.5 for toxicodynamics (WHO 1994). Since the rat BMDs were extrapolated to humans using a deposition/clearance model, it is reasonable to assume that the animal-to-human toxicokinetic subfactor of 4 has already been accounted for; therefore, a total uncertainty factor of 25 (2.5 for animal-to-human toxicodynamics times 10 for interindividual variability) should be applied. This results in estimated exposure concentrations designed to prevent pulmonary inflammation of 0.04 mg/m3 for fine TiO2 and 0.004 mg/m3 for ultrafine TiO2.
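The arithmetic behind these values is simple enough to show directly; the sketch below reproduces it, with the rounding convention treated as an assumption.

```python
# Sketch: applying the total uncertainty factor of 25 to the human
# pulmonary inflammation BMDLs given in the text.
UF_TOXICODYNAMICS = 2.5    # animal-to-human toxicodynamic subfactor
UF_INTERINDIVIDUAL = 10    # interindividual variability
# The 4x toxicokinetic subfactor is taken as already covered by the
# deposition/clearance modeling, so it is not applied here.
uf_total = UF_TOXICODYNAMICS * UF_INTERINDIVIDUAL   # = 25

for label, bmdl in [("fine TiO2", 0.9), ("ultrafine TiO2", 0.11)]:
    print(f"{label}: {bmdl} / {uf_total} = {bmdl / uf_total:.4f} mg/m3")
# 0.9/25 = 0.036 (reported as 0.04) and 0.11/25 = 0.0044 (reported as 0.004)
```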
# Lung Tumors
Rather than estimating an exposure concentration designed to avoid secondary toxicity by preventing pulmonary inflammation, another possible basis for developing a REL is to model the risk of lung tumors directly. In the absence of mechanistic data in humans, the tumorigenic mechanism operative in rats cannot be ruled out. Therefore, one approach for estimating recommended levels of occupational exposure to TiO2 is to estimate the pulmonary particle surface area dose associated with a 1/1000 increase in rat lung tumors and to extrapolate that dose to humans on the basis of particle surface area per unit of lung surface area. This approach was used to assess the excess risk of lung cancer at various working lifetime exposure concentrations of fine or ultrafine TiO2 (Table 4-6). Selection of the model for estimating risks has a significant impact on the risk estimates. As shown in Table 4-6, the 95% LCL working lifetime mean concentration of fine TiO2 associated with a 1/1000 excess risk of lung cancer is 0.3 to 4.4 mg/m3, depending on the model used to fit the rat lung tumor data. For ultrafine TiO2, the 95% LCL working lifetime mean concentration associated with a 1/1000 excess risk of lung cancer is 0.04 to 0.54 mg/m3, depending on the model.
Although any of the models evaluated in Table 4-6 could conceivably be used to develop recommendations for occupational exposures to TiO2, the model averaging procedure is attractive since it incorporates both statistical variability and model uncertainty into confidence limit estimation. However, an argument could also be made for basing recommendations on the multistage model, due to its long history of use for carcinogen risk assessment, or on the quantal-linear model, on the grounds that it generates the lowest BMD and BMDL and is thus arguably the most health-protective. The BMD and BMDL derived via each of these models are shown in Table 4-6 for both fine and ultrafine TiO2.
Since the various models produce different risk estimates and there is no clear mechanistically based preference for one model over another, it is appropriate to summarize the results by using a MA technique. MA, as implemented here, uses all the information from the various dose-response models, weighting each model by the Akaike information criterion for model fit and constructing an average dose-response model with lower bounds computed by bootstrapping. This method was described by Wheeler and Bailer, who demonstrated via simulation studies that the MA method has superior statistical properties to a strategy of simply picking the best-fitting model from the BMDS suite. As shown in Table 4-6, the model average estimate of the working lifetime mean concentration of fine TiO2 associated with a 1/1000 excess risk of lung cancer is 13.2 mg/m3, with a 95% LCL of 2.4 mg/m3. The corresponding estimates for ultrafine TiO2 are 1.6 mg/m3, with a 95% LCL of 0.3 mg/m3. NIOSH believes that it is reasonable and prudent to use the 95% LCL model-averaged estimates as the basis for RELs, as opposed to the MLEs, in order to allow for model uncertainty and statistical variability in the estimates.
# Comparison of Possible Bases for an REL
As discussed above, occupational exposure concentrations designed to prevent pulmonary inflammation, and thus prevent the development of secondary toxicity (including lung tumors), are 0.04 mg/m3 for fine TiO2 and 0.004 mg/m3 for ultrafine TiO2. In comparison, modeling of the dose-response relationship for lung tumors indicates that occupational exposure concentrations of 2.4 mg/m3 for fine TiO2 and 0.3 mg/m3 for ultrafine TiO2 would be sufficient to reduce the risk of lung tumors to a 1/1000 lifetime excess risk level. The discrepancy between the occupational exposure concentrations estimated from modeling either pulmonary inflammation or lung tumors raises serious questions concerning the optimal basis for a TiO2 REL. However, it must be acknowledged that the two sets of possible RELs are not based on entirely comparable endpoints. The pulmonary inflammation-based exposure concentrations are expected to entirely prevent the development of toxicity secondary to pulmonary inflammation, resulting in zero excess risk of lung tumors due to exposure to TiO2. In contrast, the lung tumor-based exposure concentrations are designed to allow a small, but nonzero, excess risk of lung tumors due to occupational exposure to TiO2.

As discussed in Section 3.4.1, particle-induced pulmonary inflammation may act as a precursor for lung tumor development; however, pulmonary inflammation itself is not a specific biomarker for lung cancer. As noted in Section 3.5.2.2, the precise level of sustained inflammation necessary to initiate a tumorigenic response is currently unknown. It is possible that the 4% PMN response used in this analysis as the benchmark response level for pulmonary inflammation is overly protective and that a somewhat greater inflammatory response is required for tumor initiation.

It is also possible that the 25-fold uncertainty factor applied to the critical dose estimate for pulmonary inflammation may be overly conservative, since pulmonary inflammation is an early event in the sequence of events leading to lung tumors. However, NIOSH has not previously used early events or secondary toxicity as a rationale for applying smaller than normal uncertainty factors. Given that in this case the primary objective of preventing pulmonary inflammation is to prevent the development of lung tumors, and given that lung tumors can be adequately controlled by exposures many-fold higher than the inflammation-based exposure concentrations, NIOSH has concluded that it is appropriate to base RELs for TiO2 on lung tumors rather than pulmonary inflammation. However, NIOSH notes that extremely low-level exposures to TiO2 (i.e., at concentrations less than the pulmonary inflammation-based RELs) may pose no excess risk of lung tumors.
# Hazard Classification and Recommended Exposure Limits
NIOSH initiated the evaluation of titanium dioxide by considering it as a single substance with no distinction regarding particle size. However, a review of all the relevant scientific literature indicated that there could be a greater occupational health risk with smaller (ultrafine) particles, and therefore NIOSH provides separate recommendations for the ultrafine and fine categories.

NIOSH has reviewed the relevant animal and human data to assess the carcinogenicity of titanium dioxide (TiO2) and has reached the following conclusions. First, the weight of evidence suggests that the tumor response observed in rats exposed to ultrafine TiO2 resulted from a secondary genotoxic mechanism involving chronic inflammation and cell proliferation, rather than via direct genotoxicity of TiO2. This effect appears to be related to the physical form of the inhaled particle (i.e., particle surface area) rather than to the chemical compound itself.
Second, based on the weight of the scientific data (including an increase in adenocarcinoma tumor incidence in rats at 10 mg/m3 in a chronic inhalation study), NIOSH determined that inhaled ultrafine TiO2 is a potential occupational carcinogen and is recommending exposure limits to minimize the cancer risk from exposure to ultrafine TiO2. Finally, because the tumorigenic dose of fine TiO2 (250 mg/m3) in the Lee et al. studies [1985, 1986a] was substantially higher than current inhalation toxicology practice, and because there was no significant increase in tumors at 10 or 50 mg/m3, NIOSH did not use the highest dose in its hazard identification and concluded that there is insufficient evidence to classify fine TiO2 as a potential occupational carcinogen. Although NIOSH has determined that the data are insufficient for cancer hazard classification of fine TiO2, the particle surface area dose and tumor response relationship is consistent with that observed for ultrafine TiO2 and warrants that precautionary measures be taken to protect the health of workers exposed to fine TiO2. Therefore, NIOSH used all of the animal tumor response data to conduct the dose-response modeling, and developed separate mass-based RELs for ultrafine and fine TiO2.
# Hazard Classification
NIOSH reviewed the current scientific data on TiO2 to evaluate the weight of the evidence for the NIOSH designation of TiO2 as a "potential occupational carcinogen." Two factors were considered in this evaluation: (1) the evidence in humans and animals for an increased risk of lung cancer from inhalation of TiO2, including exposure up to a full working lifetime, and (2) the evidence on the biologic mechanism of the dose-response relationship observed in rats, including evaluation of the particle characteristics and dose metrics that are related to the pulmonary effects.
No exposure-related increase in carcinogenicity was observed in the epidemiologic studies conducted on workers exposed to TiO2 dust in the workplace. In contrast, chronic inhalation exposure to ultrafine TiO2 at approximately 10 mg/m3 resulted in a statistically significant increase in malignant lung tumors in rats, although lung tumors in mice were not elevated. The lung tumors observed in rats after exposure to 250 mg/m3 of fine TiO2 were the basis for the original NIOSH designation of TiO2 as a "potential occupational carcinogen." However, because this dose is considered to be significantly higher than currently accepted inhalation toxicology practice, NIOSH concluded that the response at such a high dose should not be used in making its hazard identification. Therefore, NIOSH has come to different conclusions regarding the potential occupational carcinogenicity of fine versus ultrafine TiO2. NIOSH evaluated the dose-response data in humans and animals, along with the mechanistic factors described below, in assessing the scientific basis for the current NIOSH designation of ultrafine but not fine TiO2 as a "potential occupational carcinogen."
In addition, NIOSH used the rat dose-response data in a quantitative risk assessment to develop estimates of excess risk of nonmalignant and malignant lung responses in workers over a 45-year working lifetime. These risk estimates were used in the development of RELs for fine and ultrafine TiO2.
# Mechanistic Considerations
As described in detail in Chapter 3, the mechanistic data considered by NIOSH were obtained from published subchronic and chronic studies in rodents exposed by inhalation to TiO2 or other PSLT particles. These studies include findings on the kinetics of particle clearance from the lungs and on the nature of the relationship between particle surface area and pulmonary inflammation or lung tumor response. The mechanistic issues considered by NIOSH include the influence of particle size or surface area (vs. specific chemical reactivity) on the carcinogenicity of TiO2 in rat lungs, the relationship between particle surface area dose and pulmonary inflammation or lung tumor response in rats, and the mechanistic evidence on the development of particle-elicited lung tumors in rats. These considerations are discussed in detail in Chapter 3.

NIOSH also considered the crystal structure as a modifying factor in TiO2 carcinogenicity and inflammation. As described in Chapter 3, some short-term studies indicate that the particle surface properties pertaining to the crystal structure of TiO2, including photoactivation, can influence ROS generation, cytotoxicity, and acute lung responses. These studies also show that crystal structure does not influence the pulmonary inflammation or tumor responses following subchronic or chronic exposures. These findings are based on the studies in the scientific literature and may not apply to other formulations, surface coatings, or treatments of TiO2 for which data were not available.

After analysis of the issues, NIOSH concluded that the most plausible mechanism for TiO2 carcinogenesis is a nonchemical-specific interaction of the particle with the cells in the lung, characterized by persistent inflammation and mediated by secondary genotoxic processes. The dose-response relationships for both the inflammation and tumorigenicity associated with TiO2 exposure are consistent with those for other PSLT particles. Based on this evidence, NIOSH concluded that the adverse effects produced by TiO2 exposure in the lungs are likely not substance-specific but may be due to a nonchemical-specific effect of PSLT particles in the lungs at sufficiently high particle surface area exposures. However, because the tumorigenic dose for fine TiO2 of 250 mg/m3 was significantly higher than currently accepted inhalation toxicology practice, NIOSH did not use the 250 mg/m3 dose in its hazard identification. Therefore, NIOSH concluded that there are insufficient data to classify fine TiO2 as a potential occupational carcinogen, but there are sufficient data indicating that ultrafine TiO2 has the potential to cause cancer after adequate occupational exposure.
# Limitations of the Rat Tumor Data
NIOSH evaluated all publicly available epidemiology studies and laboratory animal inhalation studies and determined that the best data to support a quantitative risk assessment for TiO2 were from rat inhalation studies [Lee et al. 1985; Muhle et al. 1991; Heinrich et al. 1995; Cullen et al. 2002; Tran et al. 1999; Bermudez et al. 2002, 2004]. These studies provided exposure-response data for both inflammation and tumorigenicity and were used as the basis of the quantitative risk assessment.
NIOSH considered the scientific literature that supported and disputed the rat as an appropriate model for human lung cancer after exposure to inhaled particles and reached the conclusion that the rat is an appropriate species on which to base its quantitative risk assessment for TiO2. Although there is not extensive evidence that the overloading of lung clearance, as observed in rats (Chapter 3), occurs in humans, lung burdens consistent with overloading doses in rats have been observed in some humans with dusty jobs (e.g., coal miners). Rather than excluding the rat as the appropriate model, the lung overload process may cause the rat to attain lung burdens comparable to those that can occur in workers with dusty jobs. In addition, evidence suggests that, as in the rat, inhalation of particles increases the human inflammatory response, and increases in the inflammatory response may increase the risk of cancer (see Section 3.5.2.2). This information provides additional support for the determination that the rat is a reasonable animal model with which to predict human tumor response for particles such as TiO2.
After evaluating all of the available data, NIOSH concluded that the appropriate dose metric in its risk assessment was particle surface area. Both tumorigenicity and inflammation were more strongly associated with particle surface area than with particle mass dose. The separate risk estimates for fine and ultrafine TiO2 are supported by the higher lung cancer potency in rats of ultrafine TiO2 compared to fine TiO2, which was associated with the greater surface area of ultrafine particles for a given mass. In rats chronically exposed to airborne fine TiO2, statistically significant excess lung tumors were observed only in the 250 mg/m3 dose group. Although exposure concentrations greater than 100 mg/m3 are not currently standard methodology in inhalation toxicity studies, and NIOSH questions the relevance of the 250 mg/m3 dose for classifying exposure to TiO2 as a carcinogenic hazard to workers, the tumor-response data are consistent with those observed for ultrafine TiO2 when converted to a particle surface area metric. Thus, to be cautious, NIOSH used all of the animal tumor response data when conducting dose-response modeling and determining separate RELs for ultrafine and fine TiO2. With chronic exposure to airborne ultrafine TiO2, lung tumors were seen in rats exposed to an average of approximately 10 mg/m3 (range: 7.2 to 14.8 mg/m3) over the exposure period.
It would be a better reflection of the entire body of available data to set RELs in terms of the inhaled surface area of the particles rather than the mass of the particles. This would be consistent with the scientific evidence showing an increase in potency with increasing particle surface area (or decreasing particle size) of TiO2 and other PSLT particles. For this reason, the basis of the RELs for fine and ultrafine TiO2 is the rat dose-response data for particle surface area dose and pulmonary response. However, current technology does not permit the routine measurement of the surface area of airborne particles; therefore, NIOSH recommends sampling the airborne mass concentration of TiO2 in two broad primary particle size categories: fine (< 10 µm) and ultrafine (< 0.1 µm). These categories reflect current aerosol size conventions, although it is recognized that actual particle size distributions in the workplace will vary.
# Cancer Classification in Humans
Since the public comment and peer review draft of this document was made available, NIOSH has learned that IARC has reassessed TiO2. IARC now classifies TiO2 as an IARC Group 2B carcinogen, "possibly carcinogenic to humans." NIOSH supports this decision and the underlying analysis leading to this conclusion.
Based on the animal studies described in Chapter 3 and the quantitative risk assessment in Chapter 4, NIOSH has concluded that ultrafine but not fine TiO2 particulate matter is a potential occupational carcinogen. However, as a precautionary step, NIOSH conducted a quantitative risk assessment based on the tumor data for both fine and ultrafine TiO2.

The potency of ultrafine TiO2, which has a much higher surface area per unit mass than fine TiO2, was many times greater than that of fine TiO2, with malignant tumors observed at the lowest dose level of ultrafine TiO2 tested (10 mg/m3).

The lack of an exposure-response relationship in the epidemiologic studies of workers exposed to TiO2 dust in the workplace should not be interpreted as evidence of discordance between the mechanism presumed to operate in rats and the human potential for carcinogenicity. As demonstrated by the quantitative comparison between the animal and human studies (see Section 4.4 and Appendix C), the responses were not statistically inconsistent: the epidemiologic studies were not sufficiently precise to determine whether they replicated or refuted the animal dose-response. This is not a surprising finding for carcinogens of lower potency, such as fine-sized TiO2.

The mechanistic data reviewed above, however, leave open the possibility of species differences beyond what would be anticipated for a genotoxic carcinogen. Although it is plausible that the secondary genotoxic mechanism proposed here operates in humans exposed to TiO2 dust, there is insufficient evidence to corroborate this. In addition, there is limited information on the kinetics or specific physiological response to TiO2 particles in humans.

The evidence suggests that inhalation of lower surface area TiO2 is not likely to result in carcinogenicity in any test species. This concept is reflected in the quantitative risk assessment, in which the curvilinear dose response predicts that lower exposures have disproportionately less risk than higher exposures. For workers, this suggests that exposure to concentrations lower than the RELs will be less hazardous and may pose a negligible risk.

Although the analysis in this document is limited to consideration of the workplace hazard posed by TiO2, these findings suggest that other PSLT particles inhaled in the workplace may pose similar hazards, particularly nano-sized particles with high surface areas. NIOSH is concerned that other nano-sized PSLT particles may have health effects similar to those observed for TiO2, in which ultrafine TiO2 particles were observed to be more carcinogenic and inflammogenic on a mass basis than fine TiO2 [Heinrich et al. 1995; Lee et al. 1985, 1986a].
# Recommended Exposure Limits
NIOSH recommends airborne exposure limits of 2.4 mg/m3 for fine TiO2 and 0.3 mg/m3 for ultrafine (including engineered nanoscale) TiO2 as TWA concentrations for up to 10 hr/day during a 40-hour work week, using the international definitions of respirable dust and NIOSH Method 0600 for sampling airborne respirable particles. NIOSH selected these exposure limits for recommendation because they would reduce working lifetime risks for lung cancer to below 1/1000. Cancer risks greater than 1/1000 are considered significant and worthy of intervention by OSHA. NIOSH has used this risk level in a variety of circumstances, including citing this level as appropriate for developing authoritative recommendations in Criteria Documents and risk assessments published in scientific journal articles. It is noted that the true risk of lung cancer due to exposure to TiO2 at these concentrations could be much lower than 1/1000. To account for the risk that exists in work environments where airborne exposures to both fine and ultrafine TiO2 occur, exposure measurements for each size fraction should be combined using the additive formula and compared to the additive REL of 1 (unitless) (see Figure 6-1, Exposure assessment protocol for TiO2).
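For mixed exposures, the additive comparison works as in the sketch below, a minimal illustration of the formula referenced in Figure 6-1; the function name and example concentrations are invented for illustration.

```python
# Sketch: additive REL comparison for combined fine/ultrafine exposure.
REL_FINE = 2.4        # mg/m3
REL_ULTRAFINE = 0.3   # mg/m3

def additive_index(c_fine, c_ultrafine):
    """Unitless additive index; values above 1 exceed the combined REL."""
    return c_fine / REL_FINE + c_ultrafine / REL_ULTRAFINE

idx = additive_index(1.0, 0.2)   # e.g., 1.0 mg/m3 fine + 0.2 mg/m3 ultrafine
print(f"additive index = {idx:.2f} -> {'exceeds' if idx > 1 else 'within'} REL")
```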
Because agglomerated ultrafine particles are frequently measured as fine-sized but behave biologically as ultrafine particles due to the surface area of the constituent particles, exposures to agglomerated ultrafine particles should be controlled to the ultrafine REL.
"Respirable" is defined as particles of aero dynamic size that, when inhaled, are capable of depositing in the gas-exchange (alveolar) region of the lungs . Sampling methods have been developed to estimate the airborne mass concentration of respirable par ticles . "Fine" is defined in this document as all par ticle sizes that are collected by respirable parti cle sampling (i.e., 50% collection efficiency for particles of 4 ^m, with some collection of par ticles up to10 ^m). "Ultrafine" is defined as the fraction of respirable particles with primary particle diameter < 0.1 ^m (< 100 nm), which is a widely used definition. Additional methods are needed to determ ine if an airborne re spirable particle sample includes ultrafine TiO2 (Chapter 6).
The NIOSH RELs for fine TiO2 of 2.4 mg/m3 and ultrafine TiO2 of 0.3 mg/m3 were derived using the model averaging procedure described in Chapter 4.

Based on the observed relationship between particle surface area dose and toxicity (Chapters 3 and 4), the measurement of aerosol surface area would be the preferred method for evaluating workplace exposures to TiO2. However, personal sampling devices that can be routinely used in the workplace for measuring particle surface area are not currently available. As an alternative, if the airborne particle size distribution of the aerosol in the workplace is known and remains relatively constant with time, mass concentration measurements may be useful as a surrogate for surface area measurements. NIOSH recommends that a mass-based airborne concentration measurement be used for monitoring workplace exposures to fine and ultrafine (including engineered nanoscale) TiO2 until more appropriate measurement techniques can be developed. NIOSH is currently evaluating the efficacy of various sampling techniques for measuring fine and ultrafine TiO2 and may make specific recommendations at a later date.
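When the size distribution (and hence the specific surface area) of the aerosol is known and stable, the mass-to-surface-area conversion is a single multiplication, as in the minimal sketch below. The specific surface area values are hypothetical stand-ins, since the actual values depend on the particles present.

```python
# Sketch: mass concentration as a surrogate for surface area
# concentration, given a known specific surface area (SSA).
def surface_area_conc(mass_mg_m3, ssa_m2_per_g):
    """Convert an airborne mass concentration (mg/m3) to a particle
    surface area concentration (m2 of particle surface per m3 of air)."""
    return mass_mg_m3 * 1e-3 * ssa_m2_per_g    # mg -> g, then g x (m2/g)

print(surface_area_conc(2.4, 6.7))    # hypothetical fine-like SSA at the fine REL
print(surface_area_conc(0.3, 48.0))   # hypothetical ultrafine-like SSA at the ultrafine REL
```

With these particular illustrative SSA values, the two RELs happen to correspond to broadly similar surface area concentrations; actual agreement depends on the true specific surface areas of the aerosols in question.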
In the interim, personal exposure concentrations of fine (pigment-grade) and ultrafine (including engineered nanoscale) TiO2 should be determined with NIOSH Method 0600 using a standard 10-mm nylon cyclone or equivalent particle size-selective sampler. Measurement results from NIOSH Method 0600 should provide a reasonable estimate of the exposure concentration of fine and ultrafine (including engineered nanoscale) TiO2 at the NIOSH RELs of 2.4 and 0.3 mg/m3, respectively, when the predominant exposure to workers is TiO2. No personal sampling devices are available at this time to specifically measure the mass concentrations of ultrafine aerosols; however, the use of NIOSH Method 0600 will permit the collection of most airborne ultrafine (including engineered nanoscale) particles and agglomerates.
In work environments where exposure to other types of aerosols occurs or where the size distribution of TiO2 (fine vs. ultrafine) is unknown, other analytical techniques may be needed to characterize exposures. NIOSH Method 7300 can be used to assist in differentiating TiO2 from other aerosols collected on the filter, while electron microscopy, equipped with X-ray energy dispersive spectroscopy (EDS), may be needed to measure and identify particles. In workplaces where TiO2 is purchased as a single type of bulk powder, the primary particle size of the bulk powder can be used to determine whether the REL for fine or ultrafine should be applied, provided adequate airborne exposure data exist to confirm that the airborne particle size has not been substantially altered during the handling and/or material processing of TiO2. Although NIOSH Methods 0600 and 7300 have not been validated in the field for airborne exposure to TiO2, they have been validated for other substances and, therefore, should provide results for TiO2 within the expected accuracy of the Methods.
# Exposure Assessment
A multitiered workplace exposure assessment might be warranted in work environments where the airborne particle size distribution of TiO2 is unknown (fine vs. ultrafine) and/or where other airborne aerosols may interfere with the interpretation of sample results. Figure 6-1 illustrates an exposure assessment strategy that can be used to measure and identify particles so that exposure concentrations can be determined for fine and ultrafine (including engineered nanoscale) TiO2. An initial exposure assessment should include the simultaneous collection of respirable dust samples, with one sample using a hydrophobic filter (as described in NIOSH Method 0600) and the other sample using a mixed cellulose ester filter (MCEF).* If the respirable exposure concentration for TiO2 (as determined by Method 0600) is less than 0.3 mg/m3, then no further action is required. If the exposure concentration exceeds 0.3 mg/m3, then additional characterization of the sample is needed to determine the particle size and the percentages of TiO2 and other extraneous material in the sample. To assist in this assessment, the duplicate respirable sample collected on a MCEF should be evaluated using transmission electron microscopy (TEM) to size particles and determine the percentages of fine (> 0.1 µm) and ultrafine (< 0.1 µm) TiO2. When feasible, the percentages of fine and ultrafine TiO2 should be determined based on the measurement of "primary" particles (including agglomerates comprised of primary particles). The identification of particles (e.g., TiO2) by TEM can be accomplished using EDS or
energy loss spectroscopy. Once the percentage of TiO2 (by particle size) has been determined, adjustments can be made to the mass concentration (as determined by Method 0600) to assess whether the NIOSH RELs for fine, ultrafine (including engineered nanoscale), or combined fine and ultrafine TiO2 have been exceeded.

*Note: The collection time for samples using a MCEF may need to be shorter than that of the duplicate samples collected and analyzed by Method 0600, to ensure that particle loading on the filter does not become excessive and hinder particle sizing and identification by TEM.
To minimize the future need to collect samples for TEM analysis, samples collected using Method 0600 can also be analyzed using NIOSH Method 7300 or other equivalent methods to determine the amount of TiO2 in the sample. When using NIOSH Method 7300, it is important that steps be taken (i.e., pretreatment with sulfuric or hydrofluoric acid) to ensure the complete dissolution and recovery of TiO2 from the sample. The results obtained using Method 7300, or other equivalent method, should be compared with the respirable mass concentration to determine the relative percentage of TiO2 in the concentration measurement. Future assessments of workplace exposures can then be limited to the collection and analysis of samples using Method 0600 and Method 7300 (or equivalent) to ensure that airborne TiO2 concentrations have not changed over time.
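Putting the pieces of this workflow together, the sketch below shows the kind of adjustment described above: a respirable mass concentration from Method 0600 is apportioned using a Method 7300 TiO2 percentage and a TEM ultrafine fraction, then compared against the RELs with the additive formula. All inputs are hypothetical, and the function is illustrative rather than part of any NIOSH method.

```python
# Sketch: adjusting a Method 0600 respirable result using a Method 7300
# TiO2 fraction and a TEM fine/ultrafine split, then applying the
# additive REL comparison. Inputs are illustrative, not field data.
REL_FINE, REL_ULTRAFINE = 2.4, 0.3    # mg/m3

def assess(respirable_mg_m3, frac_tio2, frac_ultrafine_of_tio2):
    tio2 = respirable_mg_m3 * frac_tio2        # TiO2 portion of the dust
    c_uf = tio2 * frac_ultrafine_of_tio2       # ultrafine TiO2
    c_fine = tio2 - c_uf                       # remaining fine TiO2
    index = c_fine / REL_FINE + c_uf / REL_ULTRAFINE
    return c_fine, c_uf, index

# Example: 0.8 mg/m3 respirable dust, 70% TiO2, 40% of the TiO2 ultrafine.
c_fine, c_uf, index = assess(0.8, 0.70, 0.40)
print(f"fine = {c_fine:.2f} mg/m3, ultrafine = {c_uf:.2f} mg/m3, "
      f"additive index = {index:.2f}")
```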
# Control of Workplace Exposures to TiO2
Given the extensive commercial use of fine (pigment-grade) TiO2, the potential for occupational exposure exists in many workplaces. However, few data exist on airborne concentrations, and information on workplaces where exposure might occur is limited. Most of the available data for fine TiO2 are reported as total dust rather than as the respirable fraction. Historical total dust exposure measurements collected in TiO2 production plants in the United States often exceeded 10 mg/m3, while more contemporary exposure measurements in these plants indicate that mean total inhalable dust concentrations may be below 3 mg/m3 (1.1 mg/m3 median) for most job tasks. Given the particle size dimensions of fine TiO2 (~0.1 µm to 4 µm, average 0.5 µm), it is reasonable to conclude that respirable TiO2 particles comprised a significant fraction of the total dust measurement. Worker exposures to respirable TiO2 determined in 1999 from 8 European plants producing fine TiO2 ranged from 0.1 to 5.99 mg/m3 at the plant with the highest measured concentrations and from 0.1 to 0.67 mg/m3 at the plant with the lowest. The highest reported worker exposure concentrations to TiO2 in both U.S. and European production plants were among packers and micronizers and during maintenance.
Results of a NIOSH health hazard evaluation conducted at a facility using powder paint containing TiO2 found airborne concentrations of respirable TiO2 ranging from 0.01 to 1.0 mg/m3 (9 samples). TEM analysis conducted on one airborne sample found that 42.4% of the TiO2 particles were less than 0.1 µm in diameter. NIOSH is not aware of any extensive commercial production of ultrafine anatase TiO2 in the United States, although it may be imported for use. Ultrafine rutile TiO2 is being commercially produced as an additive for plastics to absorb and scatter ultraviolet light; 10%-20% of the ultrafine TiO2 is reported to be < 100 nm in size. Engineered TiO2 nanoparticles are also being manufactured, and like ultrafine TiO2, they are finding commercial application as a photocatalyst for the destruction of chemical and microbial contaminants in air and water, in light-emitting diodes and solar cells, in plastics, as a UV blocker, and as a "self-cleaning" surface coating. While a paucity of data exist on worker exposure to engineered TiO2, exposure measurements taken at a facility manufacturing engineered TiO2 found respirable exposure concentrations as high as 0.14 mg/m3. The primary particle size determined by TEM analysis ranged from 25 to 100 nm. Ventilation systems must be properly designed, tested, and routinely maintained to provide maximum efficiency. A fit-tested, half-facepiece particulate respirator will provide protection up to 10 times the respective REL. When selecting the appropriate filter and determining filter change schedules, the respirator program manager should consider that particle overloading of the filters may occur because of the size and characteristics of TiO2 particles. Studies on the filtration performance of N95 filtering-facepiece respirators found that mean penetration levels for 40-nm particles ranged from 1.4% to 5.2%, indicating that N95 and higher-performing respirator filters would be effective at capturing airborne ultrafine TiO2 particles [Balazy et al. 2006; Rengasamy et al. 2007, 2008].
In workplaces where there is potential worker exposure to TiO2, employers should establish an occupational health surveillance program. Occupational health surveillance, which includes hazard surveillance (hazard and exposure assessment) and medical surveillance, is an essential component of an effective occupational safety and health program. An important objective of the program is the systematic collection and analysis of exposure and health data for the purpose of preventing illness. In workplaces where exposure to ultrafine or engineered TiO2 occurs, NIOSH has published interim guidance on steps that can be taken to minimize the risk of exposure, and recommendations for medical surveillance and screening that could be used in establishing an occupational health surveillance program.
# Research Needs
Additional data and information are needed to assist NIOSH in evaluating the occupational safety and health issues of working with fine and ultrafine TiO2. Data are particularly needed on the airborne particle size distributions and exposures to ultrafines in specific operations or tasks. These data may be merged with existing epidemiologic data to determine if exposure to ultrafine TiO2 is associated with adverse health effects. Information is needed about whether respiratory health (e.g., lung function) is affected in workers exposed to TiO2. Experimental studies on the mechanism of toxicity and tumorigenicity of ultrafine TiO2 would increase understanding of whether factors in addition to surface area may be important. Although sampling devices for all particle sizes are available for research purposes, practical devices for routine sampling in the workplace are needed.
# Workplace Exposures and Human Health
- Quantify the airborne particle size distribution of TiO2 by job or process and obtain quantitative estimates of workers' exposures to fine and ultrafine TiO2.
- Conduct epidemiologic studies of workers manufacturing or using TiO2-containing products, using quantitative estimates of exposure by particle size, including fine and ultrafine fractions (see bullet above).
- Evaluate the extent to which the specific surface area of bulk TiO2 is representative of the specific surface area of the airborne TiO2 particles that workers inhale and that are retained in the lungs.
- Investigate the adequacy of current mass-based human lung dosimetry models for predicting the clearance and retention of inhaled ultrafine particles.
# Experimental Studies
- Investigate the fate of ultrafine particles (e.g., TiO2) in the lungs and the associated pulmonary responses.
- Investigate the ability of ultrafine particles (e.g., TiO2) to enter cells and interact with organelle structures and DNA in mitochondria or the nucleus.
# Measurement, Controls, and Respirators
- Develop accurate, practical sampling devices for ultrafine particles (e.g., surface area sampling devices).
- Evaluate the effectiveness of engineering controls for controlling exposures to fine and ultrafine TiO2.
- Determine the effectiveness of respirators for ultrafine TiO2.
# References
ACGIH. Particle size-selective sampling in the workplace. Report of the ACGIH Technical Committee on air sampling procedures. Ann Am Conf Gov Ind Hyg 11:23-100.
# APPENDIX A

As seen in Figures 3-4 and 3-5, particle surface area is a much better dose metric than particle mass for predicting the lung tumor response in rats after chronic exposure to fine and ultrafine TiO2. The statistical fit of these models is shown in Table A-1, using either mass or particle surface area dose. These goodness-of-fit tests show that particle surface area dose provides an adequate fit to models using either the all-tumor response or tumors excluding squamous cell keratinizing cysts, and that particle mass dose provides an inadequate fit to these data. The P-values are for statistical tests of the lack of fit; thus, P < 0.05 indicates lack of fit. Because of the observed differences in tumor response in males and females when squamous cell keratinizing cystic tumors were included in the analysis (Table 4-4), it was important to test for heterogeneity in response by rat sex. Since the data were from different studies and rat strains, these factors were also investigated for heterogeneity (the influence of study and strain could not be evaluated separately because a different strain was used in each study). Finally, the possibility of heterogeneity in response to fine and ultrafine TiO2 after adjustment for particle surface area was investigated to determine whether other factors may be associated with particle size that influence lung tumor response and that may not have been accounted for by particle surface area dose. Table A-2 shows that there was statistically significant heterogeneity between male and female rats for the all lung tumors response but not for the tumors excluding squamous cell keratinizing cysts. No heterogeneity in tumor response was observed across study/strain or for fine versus ultrafine TiO2 when dose was expressed as particle surface area per gram of lung. These analyses showed that all of the data from the different studies, rat strains, and both sexes could be pooled, and the model fit was adequate, when the dose metric used is particle surface area per gram of lung and the tumor response is neoplastic tumors (i.e., without squamous cell keratinizing cystic tumors).
In addition, a modified dose-response model was developed to examine the all-tumor response (by adjusting for rat sex and including the averaged male/female lung tumor response data in the Muhle et al. study) (see Section A-3). Data are from two studies of fine TiO2 and one study of ultrafine TiO2.
Table A-4. All tumors: benchmark dose (BMD) and lower 95% confidence limit (BMDL) estimates, expressed as TiO2 particle surface area in the lungs (m²/g), by model (BMDS 2003) fit separately to female and male rat data, with P-values for lack of fit and BMD (BMDL) by excess risk level (1/10, 1/1000).
# APPENDIX B: Threshold Model for Pulmonary Inflammation in Rats
A threshold model (i.e., piecewise linear or "hockey-stick") was examined for its ability to adequately represent TiO2-induced pulmonary inflammation in rat lungs. As described in Section 4.3.1.2, the TiO2 pulmonary inflammation data from the Tran et al. and Cullen et al. studies could be fitted with a piecewise linear model that included a threshold parameter, and the threshold parameter estimate was significantly different from zero at a 95% confidence level. However, the fine and ultrafine TiO2 pulmonary inflammation data from the Bermudez et al. [2002] and Bermudez et al. [2004] data sets provided no indication of a nonzero response threshold and were not consistent with a threshold model. The piecewise linear modeling methodology is detailed below.
In modeling pulmonary inflammation (as neutrophilic cell count in BAL fluid) in rat lungs, the response was assumed to be normally distributed, with the mean response being a function of the dose and the variance proportional to a power of the mean. Thus, for the $i$th rat given the dose $d_i$, the mean neutrophilic cell count would be $\mu_{PMN}(d_i)$ with variance $\alpha\,(\mu_{PMN}(d_i))^{\rho}$, where $\mu_{PMN}$ is any continuous function of dose, $\alpha$ is a proportionality constant, and $\rho$ represents a constant power. The mean response was modeled using a variety of functions of dose; these functions were then used to estimate the critical dose at which the mean neutrophil levels went above the background. For the continuous functions that did not include a threshold parameter, this critical level was found using the BMD method and software. For purposes of calculation, the BMD was defined as the particle surface area dose in the lungs associated with $\mu_{PMN}(d_i)$ corresponding to the upper 5th percentile of the distribution of PMN counts in control rat lungs.
For the piecewise linear model, which is a threshold model, we assumed no dose-response, and thus no additional risk above background, prior to some critical threshold $\gamma$. For points beyond the threshold, the dose-response was modeled using a linear function of dose, for example

$$\mu_{PMN}(d) = \beta_0 + \beta_1 \max(0,\, d - \gamma).$$

Because the parameter $\gamma$ is an unknown term, the above function is nonlinear and is fit using maximum likelihood (ML) estimation. Very approximate $100(1-\alpha)\%$ CIs can be found using profile likelihoods. As the confidence limits are only rough approximations, the limits and significance of the threshold can be cross-validated using parametric bootstrap methods.
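A minimal sketch of this fitting procedure is shown below, assuming hypothetical dose and PMN values rather than the Tran et al. or Cullen et al. data sets; it fits the hockey-stick mean function by maximum likelihood under the power-of-the-mean variance model and then evaluates the BMD at the control 95th percentile, as defined above.

```python
# Hedged sketch of the hockey-stick threshold fit described above; the dose
# and PMN values are hypothetical, not the data sets analyzed in the text.
import numpy as np
from scipy.optimize import minimize

dose = np.array([0.0, 0.0, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2])  # m^2/g lung (hypothetical)
pmn = np.array([1.0, 1.2, 1.1, 1.3, 2.0, 4.1, 7.9, 15.8])       # PMN counts (hypothetical)

def negloglik(params):
    """Normal negative log-likelihood: hockey-stick mean, power-of-mean variance."""
    b0, b1, gamma, log_alpha, rho = params
    mu = b0 + b1 * np.maximum(0.0, dose - gamma)   # flat background, then linear
    if np.any(mu <= 0):
        return np.inf
    var = np.exp(log_alpha) * mu ** rho            # variance proportional to mean^rho
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (pmn - mu) ** 2 / var)

fit = minimize(negloglik, x0=[1.0, 80.0, 0.02, -1.0, 1.0], method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-10, "xatol": 1e-10})
b0, b1, gamma, log_alpha, rho = fit.x

# BMD per the definition above: the dose at which the fitted mean reaches the
# upper 5th percentile of the control PMN distribution (normal approximation).
p95 = b0 + 1.645 * np.sqrt(np.exp(log_alpha) * b0 ** rho)
bmd = gamma + (p95 - b0) / b1
print(f"threshold = {gamma:.4f} m^2/g lung, BMD = {bmd:.4f} m^2/g lung")
```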
# APPENDIX C: Comparison of Rat- and Human-Based Excess Risk Estimates for Lung Cancer Following Chronic Inhalation of TiO2
As described in Chapter 2, the epidemiologic studies of workers exposed to TiO2 did not find a statistically significant relationship between the estimated exposure to total or respirable TiO2 and lung cancer mortality, suggesting that the precision of these studies for estimating excess risks of concern for worker health (e.g., <1/1000) may be limited. The exposure data in Fryzek et al. were based on the total dust fraction, whereas respirable dust data were used to estimate exposures in Boffetta et al. Neither study had exposure data for ultrafine particles. Chronic inhalation studies in rats exposed to fine and ultrafine TiO2 showed statistically significant dose-response relationships for lung tumors (Chapter 3). However, the rat lung tumor response at high particle doses that overload lung clearance has been questioned as to its relevance to humans. Recent studies have shown that rats inhaling TiO2 are more sensitive than mice and hamsters to pulmonary effects including inflammation [Bermudez et al. 2002, 2004], although the hamsters had much faster clearance and lower retained lung burdens of TiO2 compared to rats and mice. Because of the observed dose-response data for TiO2 and lung cancer in rats, it is important to quantitatively compare the rat-based excess risk estimates with excess risk estimates derived from results of the epidemiologic studies.
The purpose of these analyses is to quantitatively compare the rat-based excess risks of lung cancer with results from the human studies. If the sensitivity of the rat response to inhaled particulates differs from that of humans, then the excess risks derived from the rat data would be expected to differ from the excess risks estimated from the human studies. The results of the comparison will be used to assess whether or not the observed differences of excess risks have adequate precision for reasonably excluding the rat model as a basis for predicting the excess risk of lung cancer in humans exposed to TiO2.
# Methods
Excess risk estimates for lung cancer in workers were derived from the epidemiologic studies (Appendix D) and from the chronic inhalation studies in rats. These excess risk estimates and associated standard errors were computed for a mean exposure concentration of 2.4 mg/m³ over a 45-year working lifetime. This exposure concentration was selected to correspond to a low value relative to the rat data (which is also the NIOSH REL; see Chapter 5).
Excess risks were derived from the rat data based on the three-model average procedure described in Chapter 4. The Model Averaging for Dichotomous Response (MADR) software used for the model-average risk estimation was modified to output the model coefficients and model weights associated with each model fit in a 2,000-sample bootstrap, based on the rat data. The model coefficients and weights were then used to construct the distribution of risk estimates from rats exposed to the equivalent of a lifetime occupational exposure to 2.4 mg/m³ fine TiO2. The rat-equivalent exposures were estimated by using the MPPD2 model to estimate the human lung burden associated with a 45-year occupational exposure to 2.4 mg/m³ fine TiO2. This lung burden (2,545 mg TiO2) was then extrapolated to rats following the steps described in Chapter 4 for extrapolating from rats to humans, but in reverse order. This procedure yielded an estimated rat lung burden of 6.64 mg TiO2. Since the rat-based dose-response modeling was based on a particle surface area dose metric, the rat lung burden was then converted to 0.0444 m² particle surface area per gram of lung tissue, assuming a specific surface area of 6.68 m²/g TiO2. The 2,000 sets of model equations from the model average were then solved for a concentration of 0.0444 m²/g lung tissue, yielding a bootstrap estimate of the distribution of the excess risk associated with exposure to TiO2. Excess risks were estimated from each of the two worker cohort studies, using two different methods for each. For the cohort studied by Boffetta et al., two different values for representing the highest cumulative exposure group were separately assumed; and for the cohort studied by Fryzek et al., two different exposure lags (no lag, 15-year lag) were separately used.
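The rat-side unit conversion in this procedure is simple arithmetic and can be checked directly. In the sketch below, the retained lung burden and specific surface area are the values quoted above, while the 1.0 g rat lung mass is an assumption implied, but not stated, by the text.

```python
# Sketch of the surface-area dose conversion described above. The rat lung
# mass of 1.0 g is an assumption; the other values are quoted in the text.
rat_lung_burden_mg = 6.64          # retained TiO2 in the rat lung (mg)
specific_surface_m2_per_g = 6.68   # specific surface area of fine TiO2 (m^2/g)
rat_lung_mass_g = 1.0              # assumed normalizing lung mass (g)

surface_area_m2 = (rat_lung_burden_mg / 1000.0) * specific_surface_m2_per_g
dose_m2_per_g_lung = surface_area_m2 / rat_lung_mass_g
print(f"{dose_m2_per_g_lung:.4f} m^2 TiO2 surface area per g lung")  # ~0.0444
```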
# Results
Table C shows the rat-based maximum likelihood estimate (MLE) and 95% upper confidence limit (UCL) of excess risks for lung cancer and the human-based 95% UCL on excess risk from exposure to TiO2. There is consistency in the estimates of the 95% UCL from these two independent epidemiologic studies at the exposure concentration evaluated for both studies, 2.4 mg/m³ (Boffetta: 0.038 and 0.053; Fryzek: 0.046 and 0.056). The rat-based MLE and 95% UCL excess risk estimates for fine TiO2 exposures are lower than the 95% UCL risk estimates based on the human studies in Table C. This result suggests that the rat-based risk estimates for fine TiO2 are not inconsistent with the human risk estimates derived from the epidemiologic studies.
# Discussion
These two epidemiologic studies are subject to substantially larger variability than are the rat studies. The results of the epidemiologic studies of TiO2 workers by Fryzek et al. and Boffetta et al. [2003, 2004] are consistent with a range of excess risks at given exposures, including the null exposure-response relationship (i.e., no association between the risk of lung cancer and TiO2 exposure) and an exposure-response relationship consistent with the low-dose extrapolations from the rat studies (based on the method used, a three-model average as described in Chapter 4). Both the MLE and 95% UCL excess risk estimates from the rat studies were lower than the 95% UCL from the human studies for fine TiO2.
to differ from the referent population under unexposed conditions.
The estimators of Alpha and Beta are based on iteratively reweighted least squares with weights proportional to the reciprocal of the mean. Although these estimates are equivalent to Poisson regression MLEs, the observed counts are not strictly Poisson. This is due to the adjustments made by Boffetta et al. for missing cause of death arising from the limited time that German death certificates were maintained. The reported observed counts are 53+0.9, 53+2.3, 52+2.7, and 53+2.4, where 0.9, 2.3, 2.7, and 2.4 have been added by Boffetta et al. for missing causes of death that are estimated to have been lung cancer deaths. Invoking a Poisson regression model should work well given such small adjustments having been added to Poisson counts of 53, 53, 52, and 53. Hence, Alpha and Beta are estimated accordingly, but their standard errors and CIs do not rely on the Poisson assumption; instead, standard errors were estimated from the data and CIs were based on the t distribution with 2 degrees of freedom.
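A sketch of this estimation scheme follows. The adjusted observed counts are those quoted above; the expected counts and cumulative-exposure midpoints are hypothetical placeholders, since the Boffetta et al. values are not reproduced here. The loop is ordinary iteratively reweighted least squares with weights equal to the reciprocal of the current fitted mean, which coincides with Poisson maximum likelihood under an identity-link mean model.

```python
# Hedged sketch of the estimation approach described above: weighted least
# squares for the model O_k = E_k*(Alpha + Beta*x_k), iterating with weights
# proportional to 1/mean (equivalent to Poisson ML with an identity link).
# Only the adjusted observed counts are quoted from the text; E and x are
# hypothetical placeholders.
import numpy as np

obs = np.array([53.9, 55.3, 54.7, 55.4])   # observed lung cancer deaths (adjusted)
E = np.array([50.0, 52.0, 51.0, 50.0])     # expected deaths (hypothetical)
x = np.array([0.5, 3.0, 9.0, 25.0])        # cumulative exposure midpoints (hypothetical)

X = np.column_stack([E, E * x])            # mean model: mu = E*Alpha + E*x*Beta
theta = np.array([1.0, 0.0])               # start at SMR = 1, no exposure effect

for _ in range(50):                        # IRLS iterations
    mu = X @ theta
    W = 1.0 / mu                           # weights = reciprocal of the mean
    theta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * obs))
    if np.allclose(theta_new, theta, rtol=1e-10):
        break
    theta = theta_new

alpha, beta = theta
print(f"Alpha = {alpha:.3f}, Beta = {beta:.4f} per unit cumulative exposure")
```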
A similar approach using the results of Table 4-2 was not attempted, since these categorical RR estimates are correlated and information on the correlations was not reported by Boffetta et al.

# Results

Results based on modeling the SMRs in Boffetta et al. with a linear effect of cumulative exposure are presented in Table D-1. These results are sensitive to the value used to represent the highest cumulative exposure category, particularly the estimate of the effect of exposure. However, zero is contained in both of the 95% CIs for Beta, indicating that the slope of the exposure-response is not significant for these data.

Estimates of excess risk based on application of the results given in Table D-1 to U.S. population rates using the method given by BEIR IV appear in Table D-2.
# Discussion
The exposure assessment conducted by Boffetta et al. relies heavily on tours of the factories by two occupational hygienists who first reconstructed historical exposures without using any measurements (as described in Boffetta et al.; Cherrie et al.; Cherrie; Cherrie and Schneider). The sole use of exposure measurements by Boffetta et al. was to calculate a single adjustment factor to apply to the previously constructed exposure estimates so that the average of the measurements coincided with the corresponding reconstructed estimates. However, Boffetta et al. offer no analyses of their data to support this approach. Also, the best value to use to represent the highest exposure interval (i.e., 13.20+ mg/m³·yr) is not known, and the results for the two values examined suggest that there is some sensitivity to this value. Hence, these upper limits, which reflect only statistical variability, are likely to be increased if the effects of other sources of uncertainty could be quantified.

The value representing the highest exposure category is 56.5 mg/m³·yr, based on the conditional mean given exposures greater than 13.20, using the conditional distribution derived from the lognormal distribution having median and 75th percentile equal to 1.98 and 6.88 mg/m³·yr, respectively.
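The footnote's conditional mean can be reproduced from the quoted percentiles alone. The sketch below recovers the lognormal parameters from the median (1.98) and 75th percentile (6.88) and applies the standard lognormal tail-mean formula; it returns roughly 57 mg/m³·yr, consistent with the 56.5 quoted (the small difference plausibly reflects rounding in the reported percentiles).

```python
# Sketch reproducing the footnote's conditional mean for the highest exposure
# category, from the quoted median and 75th percentile of the lognormal
# cumulative-exposure distribution (mg/m^3*yr).
import numpy as np
from scipy.stats import norm

median, p75, cut = 1.98, 6.88, 13.20
mu = np.log(median)                           # lognormal location parameter
sigma = (np.log(p75) - mu) / norm.ppf(0.75)   # scale from the 75th percentile

# E[X | X > cut] for X ~ lognormal(mu, sigma):
z = (np.log(cut) - mu) / sigma
cond_mean = np.exp(mu + sigma**2 / 2) * norm.sf(z - sigma) / norm.sf(z)
print(f"E[X | X > {cut}] = {cond_mean:.1f} mg/m^3*yr")  # ~57, vs. 56.5 quoted
```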
# Delivering on the Nation's promise: safety and health at work for all people through research and prevention
To receive NIOSH documents or more information about occupational safety and health topics, contact NIOSH at
# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR METHYL ALCOHOL
# Section 2 - Medical

Medical surveillance shall be made available as specified below for all employees occupationally exposed to methyl alcohol.
(a) Preplacement medical examinations shall include:
(1) A comprehensive work history.
(2) A complete physical examination which should include an ophthalmologic examination.
# (b)
Medical surveillance and management including ophthalmologic examination shall be promptly provided to any employee who develops ocular symptoms, or has had methyl alcohol splashed in the eyes, or has ingested methyl alcohol, or has been accidentally overexposed by inhalation or dermal contact.
(c) Periodic medical surveillance should be performed annually for all employees occupationally exposed to methyl alcohol.

The employee should be reinformed at least once a year, or whenever there is a process change. This apprisal shall include, as a minimum, all information set forth in Appendix III which is applicable to that specific product or material containing methyl alcohol.

For all work areas in which there is potential for emergencies, procedures as specified below, as well as any other procedures appropriate for a specific operation or process, shall be formulated in advance and employees shall be instructed in their implementation.
(1) Procedures shall include prearranged plans for obtaining emergency medical care and for necessary transportation of injured workers.
(2) Firefighting procedures shall be established and implemented. These shall include procedures for emergencies involving the release of methyl alcohol vapor. In case of fire, methyl alcohol sources shall be shut off or removed. Containers shall be removed or cooled with water spray. Chemical foam, carbon dioxide, or dry chemicals should be used for fighting methyl alcohol fires, and proper respiratory protection and protective clothing shall be worn.
(3) Approved eye, skin, and respiratory protection as specified in Section 4 shall be used by personnel essential to emergency operations.
(4) Nonessential employees shall be evacuated from exposure areas during emergencies. Perimeters of hazardous exposure areas shall be delineated, posted, and secured.
(5) Personnel properly trained in the procedures and adequately protected against the attendant hazards shall shut off sources of methyl alcohol, clean up spills, and immediately repair leaks.

workdays of any change in production, process, or control that might result in an increase in airborne concentrations.

When a fan is located in duct work and where methyl alcohol is likely to be present at concentrations at or above 0.67% (6,700 ppm, one-tenth the lower flammable limit of 6.7%, or 67,000 ppm), the fan rotating element shall be of nonsparking material or the casing shall be coated with, or consist of, a nonsparking material. The ventilation system shall contain devices along the length of the exhaust system intended to prevent the propagation of flashbacks.
# (c) Loading and Unloading
The handling and storage of methyl alcohol shall comply with NFPA Article 30 for flammable and combustible liquids.
(1) Safety showers and eyewash fountains shall be installed in loading and unloading areas.
(2) Fire extinguishers approved for Class IB fires, such as dry chemical extinguishers, shall be available in loading and unloading areas. Fire extinguishers shall be inspected annually and recharged or replaced if necessary.
(3) The equipment required by c(1) of this Section shall be inspected regularly to ensure that it is in working order. The employer shall ensure that such inspection is performed by a qualified person.
(4) In the event of a leak that may lead to airborne concentrations exceeding the environmental limits, the operations shall be stopped and resumed only after necessary repair or replacement has been completed.
(5) Bonding facilities for protection against static sparks during the loading of tank vehicles shall be provided as required in 29 CFR 1910.106(f)(3)(iv).

# (d) Methyl Alcohol Car and Truck Loading Procedure
(1) Smoking, matches, or lighters shall be prohibited in the methyl alcohol car and truck loading area.
(2) The safety shower and eyewash fountain in the loading and unloading area shall be checked regularly.
(3) A wheel chock, a car loading sign, and the derail shall be placed in position and ground cables attached before connecting any lines to the tank car.
(4) Wheel chocks, ground cables, and loading sign shall be in place before connecting any lines to a trailer.
(5) Ground cables shall be removed only when loading or unloading lines have been removed and the dome covers have been secured.
# (h) General Housekeeping
Employers shall ensure that proper maintenance of equipment is provided in order to minimize the accidental escape of methyl alcohol.
Cleanup of spills and repair of equipment and leaks shall be performed as soon as practical.
# (a) Food Facilities

In accordance with the provisions of 29 CFR 1910.141(g)(2) and (g)(4), the consumption or storage of food or beverages shall be prohibited in the worksite.
# (b) Smoking
Smoking shall be prohibited in areas where methyl alcohol is used, transferred, stored, or manufactured.
# (c) Handwashing Facilities
Adequate facilities providing soap and water for handwashing shall be made available.

In addition, they studied some of the chemical and physical properties of wood alcohol.
In 1855, MacFarlan reported on the industrial utility of "methylated spirit" as a substitute for the higher priced, strictly regulated "spirit of wine" (ethyl alcohol). Methylated spirit was a mixture of "wood naphtha" (methyl alcohol) and "spirit of wine" (ethyl alcohol) usually in a proportion of 1 to 9, respectively. MacFarlan also noted the toxic hazard associated with the industrial use of pure methyl alcohol, "as opposed to methylated spirit," indicating that the former affected the eyes of workers while the vapor of the latter rarely did.
This constitutes one of the earliest references to the occupational hazard of methyl alcohol found in the literature. The studies discussed in the remainder of this section are concerned with methyl alcohol absorption, elimination, and metabolism in the human.
The effect of ethyl alcohol on the metabolism and elimination of methyl alcohol and the explanation why ethyl alcohol administration is effective in preventing or ameliorating some of the symptoms of acute methyl alcohol intoxication in humans will also be examined.
During the second 6-hour period after ethyl alcohol administration ceased, however, the formic acid excretion actually increased, presumably as a result of an uninhibited methyl alcohol oxidation process. Another significant conclusion of these authors was that the kidneys must have a considerable power of concentrating formate. The results of the equimolar doses of the alcohols indicated that the peroxidative system is not the primary metabolic pathway for methyl alcohol in the monkey. If it were so, inhibition of methyl alcohol oxidation should have been around 50%. These findings suggested that the alcohol dehydrogenase system, or possibly a system other than the peroxidative system, was responsible for methyl alcohol oxidation in the monkey.
In another study, the effect of 1-butanol on 14C-methyl alcohol metabolism in the monkey was observed. In vitro studies cited by the authors showed that, compared with ethyl alcohol, the reactivity of 1-butanol was greater for the alcohol dehydrogenase system. Moreover, 1-butanol was less reactive with the peroxidase system than either ethyl or methyl alcohol. With a molar ratio of 14C-methyl alcohol to 1-butanol of 1:0.5, the oxidation of methyl alcohol was inhibited 63% during the first 90 minutes following dosing. This finding is in contrast to the results of the rat experiments described earlier, where 1-butanol did not noticeably affect methyl alcohol metabolism. This again supported the view that for monkeys the alcohol dehydrogenase system, or some system not involving catalase, is the primary metabolic pathway for methyl alcohol oxidation.
Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known.
The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt" to avoid disclosure of trade secrets.
Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, ie, "6.8 ml/kg LD50-oral-rat."

Experimental exposure of animals to these oxidation products should be attempted. The occurrence of similar ocular and neurotoxic effects would be supportive evidence that these effects of methyl alcohol in humans are so mediated.
The sampling procedure recommended in this document, while usable, has not been tested in conjunction with the recommended analytic method.
NIOSH is currently testing a modified gas chromatographic method (similar to that in this document) to be used in conjunction with the recommended sampling method.
In view of the demonstrated differences in metabolism of methyl alcohol between primates and lower animals, the utility of mutagenic, teratogenic, or carcinogenic studies in rodents, often the species of choice for such studies, is not clear. Perhaps experimental exposures of rodents to the human metabolites of methyl alcohol would give useful information on these points.
# DEPARTMENT OF HEALTH, EDUCATION, AND WELFARE - PUBLIC HEALTH SERVICE
Collect air samples from within the employee's breathing zone.
# (b)
Record the following on all sampling data sheets:
(1) Date and time of sample collection.
(2) Sampling duration.
(3) Volumetric flowrate of sampling.
(4) Description of sampling location.
(5) Serial number of pump.
(6) Name of person performing the calibration or sampling.
(7) Other pertinent information (temperature, pressure, and information listed in paragraph (i) of Calibration of Equipment).
# Recommended Method
The sampling train consists of a silica gel tube and a vacuum pump.
# Range and Sensitivity
The sampling method is intended to provide a measure of airborne methyl alcohol in the range of 100-1,000 ppm. This method has been validated at methyl alcohol concentrations of 100, 200, and 400 ppm and a sampling time of 60 minutes, and at 1,000 ppm for at least a 15-minute sampling period.
The gas chromatographic method can measure from 1 to 40 µg/ml of methyl alcohol in aqueous solutions.
When used in combination, it is estimated that the sampling and analytic methods will determine as little as 0.8 ppm methyl alcohol in a 3-liter air sample. For aqueous solutions, the working range for methyl alcohol is linear up to concentrations of 40 µg/ml. However, the gas chromatographic method can easily be applied to higher concentrations by appropriate serial dilution of the desorbing solution with distilled water.
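The 0.8-ppm figure follows from ideal-gas arithmetic once a desorption volume is assumed. The sketch below uses a 3-ml desorbing solution, which is an assumption for illustration (the text does not state the volume), together with the 24.45 L/mol molar volume of a gas at 25°C and 760 mmHg.

```python
# Sketch of the air-concentration conversion implied above. The 3-ml desorption
# volume is an assumption (not stated in the text); the molar volume 24.45 L/mol
# corresponds to 25 C and 760 mmHg.
MW_METHANOL = 32.04   # g/mol
MOLAR_VOL_L = 24.45   # L/mol at 25 C, 760 mmHg

def ppm_from_mass(mass_ug: float, air_volume_l: float) -> float:
    """Convert micrograms of methanol collected to ppm in the sampled air."""
    return (mass_ug / MW_METHANOL) * MOLAR_VOL_L / air_volume_l

# A 1 ug/ml detection limit in an assumed 3-ml desorbing solution gives ~3 ug,
# which in a 3-liter air sample corresponds to roughly 0.8 ppm:
print(round(ppm_from_mass(1.0 * 3.0, 3.0), 2))   # ~0.76
```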
# Interferences
Any compound which has the same retention time as methyl alcohol at the operating conditions described in this method will interfere with the analysis.
The retention time of any substance suspected of being present in the sample should be determined to evaluate the likelihood of its interfering with the procedure.
Preliminary data from Botswana suggest that an increased risk of neural tube defects was associated with exposure to antiretroviral (ARV) regimens that include dolutegravir (DTG) at conception. i,ii,iii
CDC makes the following interim recommendations for the use of HIV PEP (occupational or nonoccupational) while the agency prepares a more detailed review of the evidence and recommendations.
Health care providers prescribing PEP should avoid use of DTG for:
- Non-pregnant women of childbearing potential who are sexually active or have been sexually assaulted and who are not using an effective birth control method; and,
- Pregnant women early in pregnancy, since the risk period for development of a neural tube defect in the unborn infant is during the first 28 days of pregnancy.
The preferred PEP regimen for these women is raltegravir, tenofovir, and emtricitabine. iv,v However, individual circumstances may dictate consideration of alternatives (e.g., if raltegravir is not available). Health care providers seeking advice can call the National Clinician Consultation Center's PEPline at (888) 448-4911.
CDC currently recommends that prior to starting PEP all women of childbearing potential should have a pregnancy test performed. iv,v If the PEP regimen for a non-pregnant woman of childbearing potential must include DTG, she should use an effective birth control method until the PEP regimen is completed. Guidance for health care providers regarding contraceptive options for women can be found here: .
Insufficient dietary folate can increase risk for neural tube defects. All women who are of childbearing potential, regardless of pregnancy status, should be provided at least 400 mcg of folic acid daily. Further information regarding folate for women who may become pregnant or are pregnant can be found here:
In 1994, CDC published the Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Facilities, 1994. The guidelines were issued in response to 1) a resurgence of tuberculosis (TB) disease that occurred in the United States in the mid-1980s and early 1990s, 2) the documentation of several high-profile health-care-associated (previously termed "nosocomial") outbreaks related to an increase in the prevalence of TB disease and human immunodeficiency virus (HIV) coinfection, 3) lapses in infection-control practices, 4) delays in the diagnosis and treatment of persons with infectious TB disease, and 5) the appearance and transmission of multidrug-resistant (MDR) TB strains. The 1994 guidelines, which followed statements issued in 1982 and 1990, presented recommendations for TB-infection control based on a risk assessment process that classified health-care facilities according to categories of TB risk, with a corresponding series of administrative, environmental, and respiratory-protection control measures. The TB infection-control measures recommended by CDC in 1994 were implemented widely in health-care facilities in the United States. The result has been a decrease in the number of TB outbreaks in health-care settings reported to CDC and a reduction in health-care-associated transmission of Mycobacterium tuberculosis to patients and health-care workers (HCWs). Concurrent with this success, mobilization of the nation's TB-control programs succeeded in reversing the upsurge in reported cases of TB disease, and case rates have declined in the subsequent 10 years. Findings indicate that although the 2004 TB rate was the lowest recorded in the United States since national reporting began in 1953, the declines in rates for 2003 (2.3%) and 2004 (3.2%) were the smallest since 1993. In addition, TB infection rates greater than the U.S. average continue to be reported in certain racial/ethnic populations. The threat of MDR TB is decreasing, and the transmission of M. tuberculosis in health-care settings continues to decrease because of implementation of infection-control measures and reductions in community rates of TB. Given the changes in epidemiology and a request by the Advisory Council for the Elimination of Tuberculosis (ACET) for review and update of the 1994 TB infection-control document, CDC has reassessed the TB infection-control guidelines for health-care settings. This report updates TB control recommendations reflecting shifts in the epidemiology of TB, advances in scientific understanding, and changes in health-care practice that have occurred in the United States during the preceding decade. In the context of diminished risk for health-care-associated transmission of M. tuberculosis, this document places emphasis on actions to maintain momentum and expertise needed to avert another TB resurgence and to eliminate the lingering threat to HCWs, which is mainly from patients or others with unsuspected and undiagnosed infectious TB disease. CDC prepared the current guidelines in consultation with experts in TB, infection control, environmental control, respiratory protection, and occupational health. The new guidelines have been expanded to address a broader concept; health-care-associated settings go beyond the previously defined facilities.
The term "health-care setting" includes many types, such as inpatient settings, outpatient settings, TB clinics, settings in correctional facilities in which health care is delivered, settings in which home-based health-care and emergency medical services are provided, and laboratories handling clinical specimens that might contain M. tuberculosis. The term "setting" has been chosen over the term "facility," used in the previous guidelines, to broaden the potential places for which these guidelines apply.# Introduction Overview
In 1994, CDC published the Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health Care Facilities, 1994 (1). The guidelines were issued in response to 1) a resurgence of tuberculosis (TB) disease that occurred in the United States in the mid-1980s and early 1990s, 2) the documentation of multiple high-profile health-care-associated (previously "nosocomial") outbreaks related to an increase in the prevalence of TB disease and human immunodeficiency virus (HIV) coinfection, 3) lapses in infection-control practices, 4) delays in the diagnosis and treatment of persons with infectious TB disease (2,3), and 5) the appearance and transmission of multidrug-resistant (MDR) TB strains (4,5).
The 1994 guidelines, which followed CDC statements issued in 1982 and 1990 (1,6,7), presented recommendations for TB infection control based on a risk assessment process. In this process, health-care facilities were classified according to categories of TB risk, with a corresponding series of environmental and respiratory-protection control measures.
The TB infection-control measures recommended by CDC in 1994 were implemented widely in health-care facilities nationwide (8-15). As a result, a decrease has occurred in 1) the number of TB outbreaks in health-care settings reported to CDC and 2) health-care-associated transmission of M. tuberculosis to patients and health-care workers (HCWs) (9,16-23). Concurrent with this success, mobilization of the nation's TB-control programs succeeded in reversing the upsurge in reported cases of TB disease, and case rates have declined in the subsequent 10 years (4,5). Findings indicate that although the 2004 TB rate was the lowest recorded in the United States since national reporting began in 1953, the declines in rates for 2003 (2.3%) and 2004 (3.2%) were the smallest since 1993. In addition, TB rates higher than the U.S. average continue to be reported in certain racial/ethnic populations (24). The threat of MDR TB is decreasing, and the transmission of M. tuberculosis in health-care settings continues to decrease because of implementation of infection-control measures and reductions in community rates of TB (4,5,25).
Despite the general decline in TB rates in recent years, a marked geographic variation in TB case rates persists, which means that HCWs in different areas face different risks (10). In 2004, case rates varied per 100,000 population: 1.0 in Wyoming, 7.1 in New York, 8.3 in California, and 14.6 in the District of Columbia (26). In addition, despite the progress in the United States, the 2004 rate of 4.9 per 100,000 population remained higher than the 2000 goal of 3.5. This goal was established as part of the national strategic plan for TB elimination; the final goal is <1 case per 1,000,000 population by 2010 (4,5,26).
Given the changes in epidemiology and a request by the Advisory Council for the Elimination of Tuberculosis (ACET) for review and updating of the 1994 TB infection-control document, CDC has reassessed the TB infection-control guidelines for health-care settings. This report updates TB-control recommendations, reflecting shifts in the epidemiology of TB (27), advances in scientific understanding, and changes in health-care practice that have occurred in the United States in the previous decade (28). In the context of diminished risk for health-care-associated transmission of M. tuberculosis, this report emphasizes actions to maintain momentum and expertise needed to avert another TB resurgence and eliminate the lingering threat to HCWs, which is primarily from patients or other persons with unsuspected and undiagnosed infectious TB disease.
CDC prepared the guidelines in this report in consultation with experts in TB, infection control, environmental control, respiratory protection, and occupational health. This report replaces all previous CDC guidelines for TB infection control in health-care settings (1,6,7). Primary references citing evidence-based science are used in this report to support explanatory material and recommendations. Review articles, which include primary references, are used for editorial style and brevity.
The following changes differentiate this report from previous guidelines:
- The risk assessment process includes the assessment of additional aspects of infection control.
- The term "tuberculin skin tests" (TSTs) is used instead of purified protein derivative (PPD).
- The whole-blood interferon gamma release assay (IGRA), QuantiFERON®-TB Gold test (QFT-G) (Cellestis Limited, Carnegie, Victoria, Australia), is a Food and Drug Administration (FDA)-approved in vitro cytokine-based assay for cell-mediated immune reactivity to M. tuberculosis and might be used instead of TST in TB screening programs for HCWs. This IGRA is an example of a blood assay for M. tuberculosis (BAMT).
- The frequency of TB screening for HCWs has been decreased in various settings, and the criteria for determination of screening frequency have been changed.
- The scope of settings in which the guidelines apply has been broadened to include laboratories and additional outpatient and nontraditional facility-based settings.
- Criteria for serial testing for M. tuberculosis infection of HCWs are more clearly defined. In certain settings, this change will decrease the number of HCWs who need serial TB screening.
- These recommendations usually apply to an entire health-care setting rather than areas within a setting.
- New terms, airborne infection precautions (airborne precautions) and airborne infection isolation room (AII room), are introduced.
- Recommendations for annual respirator training, initial respirator fit testing, and periodic respirator fit testing have been added.
- The evidence of the need for respirator fit testing is summarized.
- Information on ultraviolet germicidal irradiation (UVGI) and room-air recirculation units has been expanded.
- Additional information regarding MDR TB and HIV infection has been included.

In accordance with relevant local, state, and federal laws, implementation of all recommendations must safeguard the confidentiality and civil rights of all HCWs and patients who have been infected with M. tuberculosis and who develop TB disease.
The 1994 CDC guidelines were aimed primarily at hospital-based facilities, which frequently refer to a physical building or set of buildings. The 2005 guidelines have been expanded to address a broader concept. "Setting" has been chosen instead of "facility" to expand the scope of potential places for which these guidelines apply (Appendix A). "Setting" is used to describe any relationship (physical or organizational) in which HCWs might share air space with persons with TB disease or in which HCWs might be in contact with clinical specimens. Various setting types might be present in a single facility. Health-care settings include inpatient settings, outpatient settings, and nontraditional facility-based settings.
- Inpatient settings include patient rooms, emergency departments (EDs), intensive care units (ICUs), surgical suites, laboratories, laboratory procedure areas, bronchoscopy suites, sputum induction or inhalation therapy rooms, autopsy suites, and embalming rooms.
- Outpatient settings include TB treatment facilities, medical offices, ambulatory-care settings, dialysis units, and dental-care settings.
- Nontraditional facility-based settings include emergency medical service (EMS), medical settings in correctional facilities (e.g., prisons, jails, and detention centers), home-based health-care and outreach settings, long-term-care settings (e.g., hospices, skilled nursing facilities), and homeless shelters. Other settings in which suspected and confirmed TB patients might be encountered might include cafeterias, general stores, kitchens, laundry areas, maintenance shops, pharmacies, and law enforcement settings.
# HCWs Who Should Be Included in a TB Surveillance Program
HCWs refer to all paid and unpaid persons working in health-care settings who have the potential for exposure to M. tuberculosis through air space shared with persons with infectious TB disease. Part-time, temporary, contract, and full-time HCWs should be included in TB screening programs. All HCWs who have duties that involve face-to-face contact with patients with suspected or confirmed TB disease (including transport staff) should be included in a TB screening program.
The following are HCWs who might be included in a TB screening program:
- Technicians (e.g., health, laboratory, radiology, and animal)
- Veterinarians
- Volunteers

In addition, HCWs who perform any of the following activities should also be included in the TB screening program:
- entering patient rooms or treatment rooms whether or not a patient is present;
- participating in aerosol-generating or aerosol-producing procedures (e.g., bronchoscopy, sputum induction, and administration of aerosolized medications) (29);
- participating in suspected or confirmed M. tuberculosis specimen processing; or
- installing, maintaining, or replacing environmental controls in areas in which persons with TB disease are encountered.
# Pathogenesis, Epidemiology, and Transmission of M. tuberculosis
M. tuberculosis is carried in airborne particles called droplet nuclei that can be generated when persons who have pulmonary or laryngeal TB disease cough, sneeze, shout, or sing (30,31). The particles are approximately 1-5 µm; normal air currents can keep them airborne for prolonged periods and spread them throughout a room or building (32). M. tuberculosis is usually transmitted only through air, not by surface contact. After the droplet nuclei are in the alveoli, local infection might be established, followed by dissemination to draining lymphatics and hematogenous spread throughout the body (33). Infection occurs when a susceptible person inhales droplet nuclei containing M. tuberculosis, and the droplet nuclei traverse the mouth or nasal passages, upper respiratory tract, and bronchi to reach the alveoli. Persons with TB pleural effusions might also have concurrent unsuspected pulmonary or laryngeal TB disease.
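The claim that normal air currents keep droplet nuclei airborne for prolonged periods can be sanity-checked with Stokes' law. The sketch below is a back-of-envelope illustration only, assuming unit-density spheres settling in still air at about 20°C and ignoring the slip correction; it shows that a 1-µm particle needs hours to settle one meter.

```python
# Stokes' law sketch of why 1-5 um droplet nuclei remain airborne:
# terminal settling velocity v = rho * g * d^2 / (18 * mu).
# Unit particle density and still air at ~20 C are assumptions.
RHO = 1000.0   # particle density, kg/m^3 (assumed)
G = 9.81       # gravitational acceleration, m/s^2
MU = 1.8e-5    # dynamic viscosity of air, Pa*s (~20 C)

for d_um in (1.0, 5.0):
    d = d_um * 1e-6                        # diameter in meters
    v = RHO * G * d * d / (18.0 * MU)      # settling velocity, m/s
    hours_per_meter = 1.0 / v / 3600.0
    print(f"{d_um:.0f} um: v = {v:.2e} m/s, ~{hours_per_meter:.1f} h to settle 1 m")
```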
Usually within 2-12 weeks after initial infection with M. tuberculosis, the immune response limits additional multiplication of the tubercle bacilli, and immunologic test results for M. tuberculosis infection become positive. However, certain bacilli remain in the body and are viable for multiple years. This condition is referred to as latent tuberculosis infection (LTBI). Persons with LTBI are asymptomatic (they have no symptoms of TB disease) and are not infectious.
In the United States, LTBI has been diagnosed traditionally based on a PPD-based TST result after TB disease has been excluded. In vitro cytokine-based immunoassays for the detection of M. tuberculosis infection have been the focus of intense research and development. One such blood assay for M. tuberculosis (or BAMT) is an IGRA, the QuantiFERON®-TB test (QFT), and the subsequently developed version, QFT-G.
The QFT-G measures cell-mediated immune responses to peptides from two M. tuberculosis proteins that are not present in any Bacille Calmette-Guérin (BCG) vaccine strain and that are absent from the majority of nontuberculous mycobacteria (NTM), also known as mycobacteria other than TB (MOTT). QFT-G was approved by FDA in 2005 and is an available option for detecting M. tuberculosis infection. CDC recommendations for the United States regarding QFT and QFT-G have been published (34,35). Because this field is rapidly evolving, in this report, BAMT will be used generically to refer to the test currently available in the United States.
Additional cytokine-based immunoassays are under development and might be useful in the diagnosis of M. tuberculosis infection. Future FDA-licensed products in combination with CDC-issued recommendations might provide additional diagnostic alternatives. The latest CDC recommendations for guidance on diagnostic use of these and related technologies are available at html/Maj_guide/Diagnosis.htm.
Typically, approximately 5%-10% of persons who become infected with M. tuberculosis and who are not treated for LTBI will develop TB disease during their lifetimes (1). The risk for progression of LTBI to TB disease is highest during the first several years after infection (36-38).
# Persons at Highest Risk for Exposure to and Infection with M. tuberculosis
Characteristics of persons exposed to M. tuberculosis that might affect the risk for infection are not as well defined. The probability that a person who is exposed to M. tuberculosis will become infected depends primarily on the concentration of infectious droplet nuclei in the air and the duration of exposure to a person with infectious TB disease. The closer the proximity and the longer the duration of exposure, the higher the risk is for being infected.
Close contacts are persons who share the same air space in a household or other enclosed environment for a prolonged period (days or weeks, not minutes or hours) with a person with pulmonary TB disease (39). A suspect TB patient is a person in whom a diagnosis of TB disease is being considered, whether or not antituberculosis treatment has been started. Persons generally should not remain a suspect TB patient for >3 months (30,39).
In addition to close contacts, the following persons are also at higher risk for exposure to and infection with M. tuberculosis. Persons listed who are also close contacts should be top priority.
- Foreign-born persons, including children, especially those who have arrived to the United States within 5 years after moving from geographic areas with a high incidence of TB disease (e.g., Africa, Asia, Eastern Europe, Latin America, and Russia) or who frequently travel to countries with a high prevalence of TB disease.
- Residents and employees of congregate settings that are high risk (e.g., correctional facilities, long-term-care facilities, and homeless shelters).
- HCWs who serve patients who are at high risk.
- HCWs with unprotected exposure to a patient with TB disease before the identification and correct airborne precautions of the patient.
- Certain populations who are medically underserved and who have low income, as defined locally.
- Populations at high risk who are defined locally as having an increased incidence of TB disease.
- Infants, children, and adolescents exposed to adults in high-risk categories.
# Persons Whose Condition is at High Risk for Progression From LTBI to TB Disease
The following persons are at high risk for progressing from LTBI to TB disease:
- persons infected with HIV;
- persons infected with M. tuberculosis within the previous 2 years; - infants and children aged <4 years;
- persons with any of the following clinical conditions or other immunocompromising conditions:
  - silicosis,
  - diabetes mellitus,
  - chronic renal failure,
  - certain hematologic disorders (leukemias and lymphomas),
  - other specific malignancies (e.g., carcinoma of the head, neck, or lung),
  - body weight ≥10% below ideal body weight,
  - prolonged corticosteroid use,
  - other immunosuppressive treatments (including tumor necrosis factor-alpha antagonists),
  - organ transplant,
  - end-stage renal disease (ESRD), and
  - intestinal bypass or gastrectomy; and
- persons with a history of untreated or inadequately treated TB disease, including persons with chest radiograph findings consistent with previous TB disease.

Persons who use tobacco or alcohol (40,41), illegal drugs, including injection drugs and crack cocaine (42-47), might also be at increased risk for infection and disease. However, because of multiple other potential risk factors that commonly occur among such persons, use of these substances has been difficult to identify as separate risk factors.
HIV infection is the greatest risk factor for progression from LTBI to TB disease (22,39,48,49). Therefore, voluntary HIV counseling, testing, and referral should be routinely offered to all persons at risk for LTBI (1,50,51). Health-care settings should be particularly aware of the need for preventing transmission of M. tuberculosis in settings in which persons infected with HIV might be encountered or might work (52).
All HCWs should be informed regarding the risk for developing TB disease after being infected with M. tuberculosis (1). However, the rate of TB disease among persons who are HIV-infected and untreated for LTBI in the United States is substantially higher, ranging from 1.7-7.9 TB cases per 100 person-years (53). Persons infected with HIV who are already severely immunocompromised and who become newly infected with M. tuberculosis have a greater risk for developing TB disease, compared with newly infected persons without HIV infection (39,53-57).
The percentage of patients with TB disease who are HIV-infected is decreasing in the United States because of improved infection-control practices and better diagnosis and treatment of both HIV infection and TB. With increased voluntary HIV counseling and testing and the increasing use of treatment for LTBI, TB disease will probably continue to decrease among HIV-infected persons in the United States (58). Because the risk for disease is particularly high among HIV-infected persons with M. tuberculosis infection, HIV-infected contacts of persons with infectious pulmonary or laryngeal TB disease must be evaluated for M. tuberculosis infection, including the exclusion of TB disease, as soon as possible after learning of exposure (39,49,53).
Vaccination with BCG probably does not affect the risk for infection after exposure, but it might decrease the risk for progression from infection with M. tuberculosis to TB disease, preventing the development of miliary and meningeal disease in infants and young children (59,60). Although HIV infection increases the likelihood of progression from LTBI to TB disease (39,49), whether HIV infection increases the risk for becoming infected if exposed to M. tuberculosis is not known.
# Characteristics of a Patient with TB Disease That Increase the Risk for Infectiousness
The following characteristics of a patient with TB disease increase the risk for infectiousness:
- presence of cough;
- cavitation on chest radiograph;
- positive acid-fast bacilli (AFB) sputum smear result;
- respiratory tract disease with involvement of the larynx (substantially infectious); - respiratory tract disease with involvement of the lung or pleura (exclusively pleural involvement is less infectious);
- failure to cover the mouth and nose when coughing;
- incorrect, lack of, or short duration of antituberculosis treatment; and - undergoing cough-inducing or aerosol-generating procedures (e.g., bronchoscopy, sputum induction, and administration of aerosolized medications) (29).
# Environmental Factors That Increase the Probability of Transmission of M. tuberculosis

The probability of transmission of M. tuberculosis is increased by various environmental factors.
- Exposure to TB in small, enclosed spaces.
- Inadequate local or general ventilation that results in insufficient dilution or removal of infectious droplet nuclei. - Recirculation of air containing infectious droplet nuclei.
- Inadequate cleaning and disinfection of medical equipment. - Improper procedures for handling specimens.
# Risk for Health-Care-Associated Transmission of M. tuberculosis
Transmission of M. tuberculosis is a risk in health-care settings (57,61-79). The magnitude of the risk varies by setting, occupational group, prevalence of TB in the community, patient population, and effectiveness of TB infection-control measures. Health-care-associated transmission of M. tuberculosis has been linked to close contact with persons with TB disease during aerosol-generating or aerosol-producing procedures, including bronchoscopy (29,63,80-82), endotracheal intubation, suctioning (66), other respiratory procedures (8,9,83-86), open abscess irrigation (69,83), autopsy (71,72,77), sputum induction, and aerosol treatments that induce coughing (87-90).
Of the reported TB outbreaks in health-care settings, multiple outbreaks involved transmission of MDR TB strains to both patients and HCWs (56,57,70,87,91-94). The majority of the patients and certain HCWs were HIV-infected, and progression to TB and MDR TB disease was rapid. Factors contributing to these outbreaks included delayed diagnosis of TB disease, delayed initiation and inadequate airborne precautions, lapses in AII practices and precautions for cough-inducing and aerosol-generating procedures, and lack of adequate respiratory protection. Multiple studies suggest that the decline in health-care-associated transmission observed in specific institutions is associated with the rigorous implementation of infection-control measures (11,12,18-20,23,95-97). Because various interventions were implemented simultaneously, the effectiveness of each intervention could not be determined.
After the release of the 1994 CDC infection-control guidelines, increased implementation of recommended infection-control measures occurred and was documented in multiple national surveys (13,15,98,99). In a survey of approximately 1,000 hospitals, a TST program was present in nearly all sites, and 70% reported having an AII room (13). Other surveys have documented improvement in the proportion of AII rooms meeting CDC criteria and proportion of HCWs using CDCrecommended respiratory protection and receiving serial TST (15,98). A survey of New York City hospitals with high caseloads of TB disease indicated 1) a decrease in the time that patients with TB disease spent in EDs before being transferred to a hospital room, 2) an increase in the proportion of patients initially placed in AII rooms, 3) an increase in the proportion of patients started on recommended antituberculosis treatment and reported to the local or state health department, and 4) an increase in the use of recommended respiratory protection and environmental controls (99). Reports of increased implementation of recommended TB infection controls combined with decreased reports of outbreaks of TB disease in health-care settings suggest that the recommended controls are effective in reducing and preventing health-care-associated transmission of M. tuberculosis (28).
Less information is available regarding the implementation of CDC-recommended TB infection-control measures in settings other than hospitals. One study identified major barriers to implementation that contribute to the costs of a TST program in health departments and hospitals, including personnel costs, HCWs' time off from work for TST administration and reading, and training and education of HCWs (100). Outbreaks have occurred in outpatient settings (i.e., private physicians' offices and pediatric settings) where the guidelines were not followed (101-103). CDC-recommended TB infection-control measures are implemented in correctional facilities, and certain variations might relate to resources, expertise, and oversight (104-106).
# Fundamentals of TB Infection Control
One of the most critical risks for health-care-associated transmission of M. tuberculosis in health-care settings is from patients with unrecognized TB disease who are not promptly handled with appropriate airborne precautions (56,57,93,104) or who are moved from an AII room too soon (e.g., patients with unrecognized TB and MDR TB) (94). In the United States, the problem of MDR TB, which was amplified by healthcare-associated transmission, has been substantially reduced by the use of standardized antituberculosis treatment regimens in the initial phase of therapy, rapid drug-susceptibility testing, directly observed therapy (DOT), and improved infection-control practices (1). DOT is an adherence-enhancing strategy in which an HCW or other specially trained health professional watches a patient swallow each dose of medication and records the dates that the administration was observed. DOT is the standard of care for all patients with TB disease and should be used for all doses during the course of therapy for TB disease and for LTBI whenever feasible.
All health-care settings need a TB infection-control program designed to ensure prompt detection, airborne precautions, and treatment of persons who have suspected or confirmed TB disease (or prompt referral of persons who have suspected TB disease for settings in which persons with TB disease are not expected to be encountered). Such a program is based on a three-level hierarchy of controls, including administrative, environmental, and respiratory protection (86,107,108).
# Administrative Controls
The first and most important level of TB controls is the use of administrative measures to reduce the risk for exposure to persons who might have TB disease. Administrative controls consist of the following activities:
- assigning responsibility for TB infection control in the setting;
- conducting a TB risk assessment of the setting;
- developing and instituting a written TB infection-control plan to ensure prompt detection, airborne precautions, and treatment of persons who have suspected or confirmed TB disease;
- ensuring the timely availability of recommended laboratory processing, testing, and reporting of results to the ordering physician and infection-control team;
- implementing effective work practices for the management of patients with suspected or confirmed TB disease;
- ensuring proper cleaning and sterilization or disinfection of potentially contaminated equipment (usually endoscopes);
- training and educating HCWs regarding TB, with specific focus on prevention, transmission, and symptoms;
- screening and evaluating HCWs who are at risk for TB disease or who might be exposed to M. tuberculosis (i.e., TB screening program);
- applying epidemiologic-based prevention principles, including the use of setting-related infection-control data;
- using appropriate signage advising respiratory hygiene and cough etiquette; and
- coordinating efforts with the local or state health department.
HCWs with TB disease should be allowed to return to work when they 1) have had three negative AFB sputum smear results (109-112) collected 8-24 hours apart, with at least one being an early morning specimen because respiratory secretions pool overnight; and 2) have responded to antituberculosis treatment that will probably be effective based on susceptibility results. In addition, HCWs with TB disease should be allowed to return to work when a physician knowledgeable and experienced in managing TB disease determines that HCWs are noninfectious (see Treatment Procedures for LTBI and TB Disease). Consideration should also be given to the type of setting and the potential risk to patients (e.g., general medical office versus HIV clinic) (see Supplements, Estimating the Infectiousness of a TB Patient; Diagnostic Procedures for LTBI and TB Disease; and Treatment Procedures for LTBI and TB Disease).
# Environmental Controls
The second level of the hierarchy is the use of environmental controls to prevent the spread and reduce the concentration of infectious droplet nuclei in ambient air.
Primary environmental controls consist of controlling the source of infection by using local exhaust ventilation (e.g., hoods, tents, or booths) and diluting and removing contaminated air by using general ventilation.
Secondary environmental controls consist of controlling the airflow to prevent contamination of air in areas adjacent to the source (AII rooms) and cleaning the air by using high efficiency particulate air (HEPA) filtration or UVGI.
# Respiratory-Protection Controls
The first two control levels minimize the number of areas in which exposure to M. tuberculosis might occur and, therefore, minimize the number of persons exposed. These control levels also reduce, but do not eliminate, the risk for exposure in the limited areas in which exposure can still occur. Because persons entering these areas might be exposed to M. tuberculosis, the third level of the hierarchy is the use of respiratory protective equipment in situations that pose a high risk for exposure. Use of respiratory protection can further reduce risk for exposure of HCWs to infectious droplet nuclei that have been expelled into the air from a patient with infectious TB disease (see Respiratory Protection). The following measures can be taken to reduce the risk for exposure:
- implementing a respiratory-protection program,
- training HCWs on respiratory protection, and
- training patients on respiratory hygiene and cough etiquette procedures.
# Recommendations for Preventing Transmission of M. tuberculosis in Health-Care Settings

# TB Infection-Control Program
Every health-care setting should have a TB infection-control plan that is part of an overall infection-control program. The specific details of the TB infection-control program will differ, depending on whether patients with suspected or confirmed TB disease might be encountered in the setting or whether patients with suspected or confirmed TB disease will be transferred to another health-care setting. Administrators making this distinction should obtain medical and epidemiologic consultation from state and local health departments.
# TB Infection-Control Program for Settings in Which Patients with Suspected or Confirmed TB Disease Are Expected To Be Encountered
The TB infection-control program should consist of administrative controls, environmental controls, and a respiratory-protection program. Every setting in which services are provided to persons who have suspected or confirmed infectious TB disease, including laboratories and nontraditional facility-based settings, should have a TB infection-control plan. The following steps should be taken to establish a TB infection-control program in these settings:
1. Assign supervisory responsibility for the TB infection-control program to a designated person or group with expertise in LTBI and TB disease, infection control, occupational health, environmental controls, and respiratory protection. Give the supervisor or supervisory body the support and authority to conduct a TB risk assessment, implement and enforce TB infection-control policies, and ensure recommended training and education of HCWs.
- Train the persons responsible for implementing and enforcing the TB infection-control program.
- If supervisory responsibility is assigned to a committee, designate one person (with a back-up) as the TB resource person to whom questions and problems should be addressed.
2. Develop a written TB infection-control plan that outlines a protocol for the prompt recognition and initiation of airborne precautions for persons with suspected or confirmed TB disease, and update it annually.
3. Conduct a problem evaluation (see Problem Evaluation) if a case of suspected or confirmed TB disease is not promptly recognized and appropriate airborne precautions are not initiated, or if administrative, environmental, or respiratory-protection controls fail.
4. Perform a contact investigation in collaboration with the local or state health department if health-care-associated transmission of M. tuberculosis is suspected (115). Implement and monitor corrective action.
5. Collaborate with the local or state health department to develop administrative controls consisting of the risk assessment, the written TB infection-control plan, management of patients with suspected or confirmed TB disease, training and education of HCWs, screening and evaluation of HCWs, problem evaluation, and coordination.

# TB Infection-Control Program for Settings in Which Patients with Suspected or Confirmed TB Disease Are Not Expected To Be Encountered

Settings in which TB patients might stay before transfer should still have a TB infection-control program in place consisting of administrative, environmental, and respiratory-protection controls. The following steps should be taken to establish a TB infection-control program in these settings:
1. Assign responsibility for the TB infection-control program to appropriate personnel.
2. Develop a written TB infection-control plan that outlines a protocol for the prompt recognition and transfer of persons who have suspected or confirmed TB disease to another health-care setting. The plan should indicate procedures to follow to separate persons with suspected or confirmed infectious TB disease from other persons in the setting until the time of transfer. Evaluate the plan annually, if possible, to ensure that the setting remains one in which persons who have suspected or confirmed TB disease are not encountered and that they are promptly transferred.
3. Conduct a problem evaluation (see Problem Evaluation) if a case of suspected or confirmed TB disease is not promptly recognized, separated from others, and transferred.
4. Perform an investigation in collaboration with the local or state health department if health-care-associated transmission of M. tuberculosis is suspected.
5. Collaborate with the local or state health department to develop administrative controls consisting of the risk assessment and the written TB infection-control plan.
# TB Risk Assessment
Every health-care setting should conduct initial and ongoing evaluations of the risk for transmission of M. tuberculosis, regardless of whether patients with suspected or confirmed TB disease are expected to be encountered in the setting. The TB risk assessment determines the types of administrative, environmental, and respiratory-protection controls needed for a setting and serves as an ongoing tool for evaluating the quality of TB infection control and identifying needed improvements in infection-control measures. Part of the risk assessment is similar to a program review that is conducted by the local TB-control program (42). The TB Risk Assessment Worksheet (Appendix B) can be used as a guide for conducting a risk assessment. This worksheet frequently does not specify values for acceptable performance indicators because of the lack of scientific data.
# TB Risk Assessment for Settings in Which Patients with Suspected or Confirmed TB Disease Are Expected To Be Encountered
The initial and ongoing risk assessment for these settings should consist of the following steps:
1. Review the community profile of TB disease in collaboration with the state or local health department.
2. Consult the local or state TB-control program to obtain epidemiologic surveillance data necessary to conduct a TB risk assessment for the health-care setting.
3. Review the number of patients with suspected or confirmed TB disease who have been encountered in the setting during at least the previous 5 years.
4. Determine if persons with unrecognized TB disease have been admitted to or were encountered in the setting during the previous 5 years.
5. Determine which HCWs need to be included in a TB screening program and the frequency of screening (based on risk classification) (Appendix C).
6. Ensure the prompt recognition and evaluation of suspected episodes of health-care-associated transmission of M. tuberculosis.
7. Identify areas in the setting with an increased risk for health-care-associated transmission of M. tuberculosis, and target them for improved TB infection controls.
8. Assess the number of AII rooms needed for the setting. The risk classification for the setting should help to make this determination, depending on the number of TB patients examined. At least one AII room is needed for settings in which TB patients stay while they are being treated, and additional AII rooms might be needed, depending on the magnitude of patient-days of cases of suspected or confirmed TB disease. Additional AII rooms might be considered if options are limited for transferring patients with suspected or confirmed TB disease to other settings with AII rooms.
9. Determine the types of environmental controls needed other than AII rooms (see TB Airborne Precautions).
10. Determine which HCWs need to be included in the respiratory-protection program.
11. Conduct periodic reassessments (annually, if possible) to ensure
- proper implementation of the TB infection-control plan,
- prompt detection and evaluation of suspected TB cases,
- prompt initiation of airborne precautions for suspected infectious TB cases,
- recommended medical management of patients with suspected or confirmed TB disease (31),
- functional environmental controls,
- implementation of the respiratory-protection program, and
- ongoing HCW training and education regarding TB.
12. Recognize and correct lapses in infection control.
# TB Risk Assessment for Settings in Which Patients with Suspected or Confirmed TB Disease Are Not Expected To Be Encountered
The initial and ongoing risk assessment for these settings should consist of the following steps:
1. Review the community profile of TB disease in collaboration with the local or state health department.
2. Consult the local or state TB-control program to obtain epidemiologic surveillance data necessary to conduct a TB risk assessment for the health-care setting.
3. Determine if persons with unrecognized TB disease were encountered in the setting during the previous 5 years.
4. Determine if any HCWs need to be included in the TB screening program.
5. Determine the types of environmental controls that are currently in place, and determine if any are needed in the setting (Appendices A and D).
6. Document procedures that ensure the prompt recognition and evaluation of suspected episodes of health-care-associated transmission of M. tuberculosis.
7. Conduct periodic reassessments (annually, if possible) to ensure 1) proper implementation of the TB infection-control plan; 2) prompt detection and evaluation of suspected TB cases; 3) prompt initiation of airborne precautions for suspected infectious TB cases before transfer; 4) prompt transfer of suspected infectious TB cases; 5) proper functioning of environmental controls, as applicable; and 6) ongoing TB training and education for HCWs.
8. Recognize and correct lapses in infection control.
# Use of Risk Classification to Determine Need for TB Screening and Frequency of Screening HCWs
Risk classification should be used as part of the risk assessment to determine the need for a TB screening program for HCWs and the frequency of screening (Appendix C). A risk classification usually should be determined for the entire setting. However, in certain settings (e.g., health-care organizations that encompass multiple sites or types of services), specific areas defined by geography, functional units, patient population, job type, or location within the setting might have separate risk classifications. Examples of assigning risk classifications have been provided (see Risk Classification Examples).
# TB Screening Risk Classifications
The three TB screening risk classifications are low risk, medium risk, and potential ongoing transmission. The classification of low risk should be applied to settings in which persons with TB disease are not expected to be encountered, and, therefore, exposure to M. tuberculosis is unlikely. This classification should also be applied to HCWs who will never be exposed to persons with TB disease or to clinical specimens that might contain M. tuberculosis.
The classification of medium risk should be applied to settings in which the risk assessment has determined that HCWs will or will possibly be exposed to persons with TB disease or to clinical specimens that might contain M. tuberculosis.
The classification of potential ongoing transmission should be temporarily applied to any setting (or group of HCWs) if evidence suggestive of person-to-person (e.g., patient-to-patient, patient-to-HCW, HCW-to-patient, or HCW-to-HCW) transmission of M. tuberculosis has occurred in the setting during the preceding year. Evidence of person-to-person transmission of M. tuberculosis includes 1) clusters of TST or BAMT conversions, 2) an HCW with confirmed TB disease, 3) increased rates of TST or BAMT conversions, 4) unrecognized TB disease in patients or HCWs, or 5) recognition of an identical strain of M. tuberculosis in patients or HCWs with TB disease identified by deoxyribonucleic acid (DNA) fingerprinting.
If uncertainty exists regarding whether to classify a setting as low risk or medium risk, the setting typically should be classified as medium risk.
# TB Screening Procedures for Settings (or HCWs) Classified as Low Risk
- All HCWs should receive baseline TB screening upon hire, using two-step TST or a single BAMT to test for infection with M. tuberculosis.
- After baseline testing for infection with M. tuberculosis, additional TB screening is not necessary unless an exposure to M. tuberculosis occurs.
- HCWs with a baseline positive or newly positive test result for M. tuberculosis infection (i.e., TST or BAMT) or documentation of treatment for LTBI or TB disease should receive one chest radiograph to exclude TB disease (or an interpretable copy within a reasonable time frame, such as 6 months). Repeat radiographs are not needed unless symptoms or signs of TB disease develop or unless recommended by a clinician (39,116).
# TB Screening Procedures for Settings (or HCWs) Classified as Medium Risk

- All HCWs should receive baseline TB screening upon hire, using two-step TST or a single BAMT to test for infection with M. tuberculosis.
- After baseline testing for infection with M. tuberculosis, HCWs should receive TB screening annually (i.e., a symptom screen for all HCWs and testing for infection with M. tuberculosis for HCWs with previously negative test results).
# TB Screening Procedures for Settings (or HCWs) Classified as Potential Ongoing Transmission
- Testing for infection with M. tuberculosis might need to be performed every 8-10 weeks until lapses in infection control have been corrected, and no additional evidence of ongoing transmission is apparent.
- The classification of potential ongoing transmission should be used as a temporary classification only. It warrants immediate investigation and corrective steps. After a determination that ongoing transmission has ceased, the setting should be reclassified as medium risk. Maintaining the classification of medium risk for at least 1 year is recommended.
# Settings Adopting BAMT for Use in TB Screening
Settings that use TST as part of TB screening and want to adopt BAMT can do so directly (without any overlapping TST) or in conjunction with a period of evaluation (e.g., 1 or 2 years) during which both TST and BAMT are used. Baseline testing for BAMT would be established as a single-step test. As with the TST, BAMT results should be recorded in detail. The details should include the date of blood draw, the result in specific units, and the laboratory interpretation (positive, negative, or indeterminate) together with the concentration of cytokine measured (e.g., interferon-gamma).
# Risk Classification Examples
# Inpatient Settings with More Than 200 Beds
If fewer than six TB patients were encountered during the preceding year, classify as low risk. If six or more TB patients were encountered during the preceding year, classify as medium risk.
# Inpatient Settings with Fewer Than 200 Beds
If fewer than three TB patients were encountered during the preceding year, classify as low risk. If three or more TB patients were encountered during the preceding year, classify as medium risk.
# Outpatient, Outreach, and Home-Based Health-Care Settings
If fewer than three TB patients were encountered during the preceding year, classify as low risk. If three or more TB patients were encountered during the preceding year, classify as medium risk.
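These cutoffs amount to a simple decision rule. The following sketch (Python; the function name and parameters are illustrative rather than part of the guidelines) encodes only the patient-volume criteria above; it does not capture the other factors weighed in a risk assessment, and evidence of transmission always overrides it (potential ongoing transmission):

```python
from typing import Optional

def classify_tb_screening_risk(setting_type: str,
                               beds: Optional[int],
                               tb_patients_past_year: int) -> str:
    """Illustrative classification from the patient-volume cutoffs:
    inpatient settings with more than 200 beds use a threshold of six
    TB patients per year; smaller inpatient settings and outpatient,
    outreach, and home-based settings use a threshold of three."""
    if setting_type == "inpatient" and beds is not None and beds > 200:
        threshold = 6
    else:
        # Inpatient settings with fewer than 200 beds and all outpatient,
        # outreach, and home-based settings. (The guidelines do not address
        # exactly 200 beds; the lower threshold is assumed here.)
        threshold = 3
    return "medium risk" if tb_patients_past_year >= threshold else "low risk"

# Example A below: a 150-bed hospital with two TB patients in the
# preceding year classifies as low risk.
print(classify_tb_screening_risk("inpatient", beds=150, tb_patients_past_year=2))
```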
# Hypothetical Risk Classification Examples
The following hypothetical situations illustrate how assessment data are used to assign a risk classification. The risk classifications are for settings in which patients with suspected or confirmed infectious TB disease are expected to be encountered.
Example A. The setting is a 150-bed hospital located in a small city. During the preceding year, the hospital admitted two patients with a diagnosis of TB disease. One was admitted directly to an AII room, and one stayed on a medical ward for 2 days before being placed in an AII room. A contact investigation of exposed HCWs by hospital infection-control personnel in consultation with the state or local health department did not identify any health-care-associated transmission. Risk classification: low risk.
Example B. The setting is an ambulatory-care site in which a TB clinic is held 2 days per week. During the preceding year, care was delivered to six patients with TB disease and approximately 50 persons with LTBI. No instances of transmission of M. tuberculosis were noted. Risk classification: medium risk (because it is a TB clinic).
Example C. The setting is a large publicly funded hospital in a major metropolitan area. The hospital admits an average of 150 patients with TB disease each year, comprising 35% of the city's TB burden. The setting has a strong TB infection-control program (i.e., annually updates the infection-control plan, fully implements the infection-control plan, and has enough AII rooms) and an annual conversion rate (for tests for M. tuberculosis infection) among HCWs of 0.5%. No evidence of health-care-associated transmission is apparent. The hospital has strong collaborative linkages with the state or local health department. Risk classification: medium risk (with close ongoing surveillance for episodes of transmission from unrecognized cases of TB disease, test conversions for M. tuberculosis infection in HCWs as a result of health-care-associated transmission, and specific groups or areas in which a higher risk for health-care-associated transmission exists).
Example D. The setting is an inpatient area of a correctional facility. A proportion of the inmates were born in countries where TB disease is endemic. Two cases of TB disease were diagnosed in inmates during the preceding year. Risk classification: medium risk (Correctional facilities should be classified as at least medium risk).
Example E. A hospital located in a large city admits 35 patients with TB disease per year, uses QFT-G to measure M. tuberculosis infection, and has an overall HCW M. tuberculosis infection test conversion rate of 1.0%. However, on annual testing, three of the 20 respiratory therapists tested had QFT-G conversions, for a rate of 15%. All of the respiratory therapists who tested positive received medical evaluations, had TB disease excluded, were diagnosed with LTBI, and were offered and completed a course of treatment for LTBI. None of the respiratory therapists had known exposures to M. tuberculosis outside the hospital. The problem evaluation revealed that 1) the respiratory therapists who converted had spent part of their time in the pulmonary function laboratory where induced sputum specimens were collected, and 2) the ventilation in the laboratory was inadequate. Risk classification: potential ongoing transmission for the respiratory therapists (because of evidence of health-care-associated transmission). The rest of the setting was classified as medium risk. To address the problem, booths were installed for sputum induction. No conversions were noted on repeat testing for M. tuberculosis infection 3 months later, and the respiratory therapists were then reclassified as medium risk.
Example F. The setting is an ambulatory-care center associated with a large health maintenance organization (HMO). The patient volume is high, and the HMO is located in the inner city where TB rates are the highest in the state. During the preceding year, one patient who was known to have TB disease was evaluated at the center. The person was recognized as a TB patient on his first visit and was promptly triaged to an ED with AII room capacity. While in the ambulatory-care center, the patient was held in an area separate from HCWs and other patients and instructed to wear a surgical or procedure mask, if possible. QFT-G was used for infection-control surveillance purposes; a contact investigation was conducted among exposed staff, and no QFT-G conversions were noted. Risk classification: low risk.
Example G. The setting is a clinic for the care of persons infected with HIV. The clinic serves a large metropolitan area and a patient population of 2,000. The clinic has an AII room and a TB infection-control program. All patients are screened for TB disease upon enrollment, and airborne precautions are promptly initiated for anyone with respiratory complaints while the patient is being evaluated. During the preceding year, seven patients who were encountered in the clinic were subsequently determined to have TB disease. All patients were promptly put into an AII room, and no contact investigations were performed. The local health department was promptly notified in all cases. Annual TST has determined a conversion rate of 0.3%, which is low compared with the rate of the hospital with which the clinic is associated. Risk classification: medium risk (because persons infected with HIV might be encountered).
Example H. A home health-care agency employs 125 workers who perform duties including nursing, physical therapy, and basic home care. The agency did not care for any patients with suspected or confirmed TB disease during the preceding year. Approximately 30% of the agency's workers are foreign-born, many of whom have immigrated within the previous 5 years. At baseline two-step testing, four had a positive initial TST result, and two had a positive second-step TST result. All except one of these workers were foreign-born. Upon further screening, none were determined to have TB disease. The home health-care agency is based in a major metropolitan area and delivers care to a community where the majority of persons are poor and medically underserved and TB case rates are higher than in the community as a whole. Risk classification: low risk (because HCWs might be from populations at higher risk for LTBI and subsequent progression to TB disease because of foreign birth and recent immigration, or because HIV-infected clients might be overrepresented in the community served, medium risk could be considered).
# Screening HCWs Who Transfer to Other Health-Care Settings
All HCWs should receive baseline TB screening, even in settings considered to be low risk. Infection-control plans should address HCWs who transfer from one health-care setting to another and consider that the transferring HCWs might be at an equivalent or higher risk for exposure in different settings. Infection-control plans might need to be customized to balance the assessed risks and the efficacy of the plan based on consideration of various logistical factors. Guidance is provided based on different scenarios.
Because some institutions might adopt BAMT for the purposes of testing for M. tuberculosis infection, infection-control programs might be confronted with interpreting historic and current TST and BAMT results when HCWs transfer to a different setting. On a case-by-case basis, expert medical opinion might be needed to interpret results and refer patients with discordant BAMT and TST baseline results. Therefore, infection-control programs should keep all records when documenting previous test results. For example, an infection-control program using a BAMT strategy should request and keep historic TST results of an HCW transferring from a previous setting. Even if the HCW is transferring from a setting that used BAMT to a setting that uses BAMT, historic TST results might be needed when the HCW transfers in the future to a setting that uses TST. Similarly, historic BAMT results might be needed when the HCW transfers from a setting that used TST to a setting that uses BAMT.
HCWs transferring from low-risk to low-risk settings. After a baseline result for infection with M. tuberculosis is established and documented, serial testing for M. tuberculosis infection is not necessary.
HCWs transferring from low-risk to medium-risk settings. After a baseline result for infection with M. tuberculosis is established and documented, annual TB screening (including a symptom screen and TST or BAMT for persons with previously negative test results) should be performed.
HCWs transferring from low- or medium-risk settings to settings with a temporary classification of potential ongoing transmission. After a baseline result for infection with M. tuberculosis is established, a decision should be made regarding follow-up screening on an individual basis. If transmission seems to be ongoing, consider including the HCW in the screenings every 8-10 weeks until a determination has been made that ongoing transmission has ceased. When the setting is reclassified back to medium risk, annual TB screening should be resumed.
# Calculation and Use of Conversion Rates for M. tuberculosis Infection
The M. tuberculosis infection conversion rate is the percentage of HCWs whose test result for M. tuberculosis infection has converted within a specified period. Timely detection of M. tuberculosis infection in HCWs not only facilitates treatment for LTBI, but also can indicate the need for a source case investigation and a revision of the risk assessment for the setting. Conversion in test results for M. tuberculosis, regardless of the testing method used, is usually interpreted as presumptive evidence of new M. tuberculosis infection, and recent infections are associated with an increased risk for progression to TB disease.
For administrative purposes, a TST conversion is an increase of ≥10 mm in the size of the TST induration during a 2-year period in 1) an HCW with a documented negative (<10 mm) baseline two-step TST result or 2) a person who is not an HCW with a negative (<10 mm) TST result within 2 years.
In settings conducting serial testing for M. tuberculosis infection (medium-risk settings), use the following steps to estimate the risk for test conversion in HCWs:
1. Calculate the conversion rate by dividing the number of HCWs whose test results converted during a specified period (numerator) by the number of HCWs with documented negative baseline results who were tested during that period (denominator), and multiplying by 100 (see Example Calculation of Conversion Rates).
2. Compare the rate with the baseline conversion rate for the setting to identify increases that might indicate health-care-associated transmission.
# Use of Conversion Test Data for M. tuberculosis Infection To Identify Lapses in Infection Control
- Conversion rates above the baseline level (which will be different in each setting) should instigate an investigation to evaluate the likelihood of health-care-associated transmission. When testing for M. tuberculosis infection, if conversions are determined to be the result of well-documented community exposure or probable false-positive test results, then the risk classification of the setting does not need to be adjusted.
- For settings that no longer perform serial testing for M. tuberculosis infection among HCWs, reassessment of the risk for the setting is essential to ensure that the infection-control program is effective. The setting should have ongoing communication with the local or state health department regarding incidence and epidemiology of TB in the population served and should ensure that timely contact investigations are performed for HCWs or patients with unprotected exposure to a person with TB disease.
# Example Calculation of Conversion Rates
Medical Center A is classified as medium risk and uses TST for annual screening. At the end of 2004, a total of 10,051 persons were designated as HCWs. Of these, 9,246 had negative baseline test results for M. tuberculosis infection. Of the HCWs tested, 10 experienced an increase in TST result of ≥10 mm (a conversion). The overall setting conversion rate for 2004 is therefore 10/9,246 = 0.11%. If five of the 10 HCWs whose test results converted were among the 100 HCWs employed in the ICU of Hospital X (in Medical Center A), then the ICU setting-specific conversion rate for 2004 is 5/100 = 5%.
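The same arithmetic as a minimal sketch in Python (the function name is hypothetical); "conversion" here is the administrative TST conversion defined above, i.e., an increase of ≥10 mm against a documented negative baseline:

```python
def conversion_rate(conversions: int, hcws_tested: int) -> float:
    """Percentage of tested HCWs (with documented negative baseline
    results) whose test for M. tuberculosis infection converted."""
    return 100.0 * conversions / hcws_tested

# Overall rate for Medical Center A in 2004:
print(f"{conversion_rate(10, 9246):.2f}%")  # -> 0.11%

# Setting-specific rate for the ICU of Hospital X:
print(f"{conversion_rate(5, 100):.1f}%")    # -> 5.0%
```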
Evaluation of HCWs for LTBI should include information from a serial testing program, but this information must be interpreted as only one part of a full assessment. TST or BAMT conversion criteria for administrative (surveillance) purposes are not applicable for medical evaluation of HCWs for the diagnosis of LTBI (see Supplement, Surveillance and Detection of M. tuberculosis Infections in Health-Care Workers).
# Evaluation of TB Infection-Control Procedures and Identification of Problems
Annual evaluations of the TB infection-control plan are needed to ensure the proper implementation of the plan and to recognize and correct lapses in infection control. For patients with TB disease, previous hospital admissions and outpatient visits before the onset of TB symptoms should be noted. Medical records of a sample of patients with suspected and confirmed TB disease who were treated or examined at the setting should be reviewed to identify possible problems in TB infection control. The review should be based on the factors listed on the TB Risk Assessment Worksheet (Appendix B).
- Time intervals from:
  - suspicion of TB disease until patient triage to a proper AII room or referral center (for settings that do not provide care for patients with suspected or confirmed TB disease);
  - admission until TB disease was suspected;
  - admission until medical evaluation for TB disease was performed;
  - admission until specimens for AFB smears and polymerase chain reaction (PCR)-based nucleic acid amplification (NAA) tests for M. tuberculosis were ordered;
  - admission until specimens for mycobacterial culture were ordered;
  - ordering of AFB smears, NAA tests, and mycobacterial culture until specimens were collected;
  - collection of specimens until AFB smear results were reported;
  - collection of specimens until culture results were reported;
  - collection of specimens until species identification was reported;
  - collection of specimens until drug-susceptibility test results were reported;
  - admission until airborne precautions were initiated; and
  - admission until antituberculosis treatment was initiated.
- Duration of airborne precautions.
- Measurement of meeting criteria for discontinuing airborne precautions. Certain patients might be correctly discharged from an AII room to home.
- Patient history of previous admission.
- Adequacy of antituberculosis treatment regimens.
- Adequacy of procedures for collection of follow-up sputum specimens.
- Adequacy of discharge planning.
- Number of visits to outpatient setting from the start of symptoms until TB disease was suspected (for outpatient settings).

Work practices related to airborne precautions should be observed to determine if employers are enforcing all practices, if HCWs are adhering to infection-control policies, and if patient adherence to airborne precautions is being enforced. Data from the case reviews and observations in the annual risk assessment should be used to determine the need to modify 1) protocols for identifying and initiating prompt airborne precautions for patients with suspected or confirmed infectious TB disease, 2) protocols for patient management, 3) laboratory procedures, or 4) TB training and education programs for HCWs (118).
# Suggested Components of an Initial TB Training and Education Program for HCWs
The following are suggested components of an initial TB training and education program:
# Clinical Information
- Basic concepts of M. tuberculosis transmission, pathogenesis, and diagnosis, including the difference between LTBI and TB disease and the possibility of reinfection after previous infection with M. tuberculosis or TB disease.
- Symptoms and signs of TB disease and the importance of a high index of suspicion for patients or HCWs with these symptoms.
- Indications for initiation of airborne precautions of inpatients with suspected or confirmed TB disease.
- Policies and indications for discontinuing airborne precautions.
- Principles of treatment for LTBI and for TB disease (indications, use, effectiveness, and potential adverse effects).
# Epidemiology of TB
- Epidemiology of TB in the local community, the United States, and worldwide.
- Risk factors for TB disease.
# Infection-Control
# TB and Public Health
- Role of the local and state health department's TB-control program in screening for LTBI and TB disease, providing treatment, conducting contact investigations and outbreak investigations, and providing education, counseling, and responses to public inquiries.
- Roles of CDC and of OSHA.
- Availability of information, advice, and counseling from community sources, including universities, local experts, and hotlines.
- Responsibility of the setting's clinicians and infection-control program to promptly report to the state or local health department a case of suspected TB disease or a cluster of TST or BAMT conversions.
- Responsibility of the setting's clinicians and infection-control program to promptly report to the state or local health department a person with suspected or confirmed TB disease who leaves the setting against medical advice.
# Managing Patients Who Have Suspected or Confirmed TB Disease: General Recommendations
The primary TB risk to HCWs is the undiagnosed or unsuspected patient with infectious TB disease. A high index of suspicion for TB disease and rapid implementation of precautions are essential to prevent and interrupt transmission. Specific precautions will vary depending on the setting.
# Prompt Triage
Within health-care settings, protocols should be implemented and enforced to promptly identify, separate from others, and either transfer or manage persons who have suspected or confirmed infectious TB disease. When patients' medical histories are taken, all patients should be routinely asked about 1) a history of TB exposure, infection, or disease; 2) symptoms or signs of TB disease; and 3) medical conditions that increase their risk for TB disease (see Supplements, Diagnostic Procedures for LTBI and TB Disease; and Treatment Procedures for LTBI and TB Disease). The medical evaluation should include an interview conducted in the patient's primary language, with the assistance of a qualified medical interpreter, if necessary. HCWs who are the first point of contact should be trained to ask questions that will facilitate detection of persons who have suspected or confirmed infectious TB disease. For assistance with language interpretation, contact the local or state health department. Interpretation resources are also available (119), including languageline.com.
A diagnosis of respiratory TB disease should be considered for any patient with symptoms or signs of infection in the lung, pleura, or airways (including larynx), including coughing for ≥3 weeks, loss of appetite, unexplained weight loss, night sweats, bloody sputum or hemoptysis, hoarseness, fever, fatigue, or chest pain. The index of suspicion for TB disease will vary by geographic area and will depend on the population served by the setting. The index of suspicion should be substantially higher in geographic areas and for groups of patients characterized by high TB incidence (26). Special steps should be taken in settings other than TB clinics. Patients with symptoms suggestive of undiagnosed or inadequately treated TB disease should be promptly referred so that they can receive a medical evaluation. These patients should not be kept in the setting any longer than required to arrange a referral or transfer to an AII room. While in the setting, symptomatic patients should wear a surgical or procedure mask, if possible, and should be instructed to observe strict respiratory hygiene and cough etiquette procedures (see Glossary) (120-122).
Immunocompromised persons, including those who are HIV-infected, with infectious TB disease should be physically separated from other persons to protect both themselves and others. To avoid exposing HIV-infected or otherwise severely immunocompromised persons to M. tuberculosis, consider location and scheduling issues when planning their care.
# TB Airborne Precautions
Within health-care settings, TB airborne precautions should be initiated for any patient who has symptoms or signs of TB disease, or who has documented infectious TB disease and has not completed antituberculosis treatment. For patients placed in AII rooms because of suspected infectious TB disease of the lungs, airway, or larynx, airborne precautions may be discontinued when infectious TB disease is considered unlikely and either 1) another diagnosis is made that explains the clinical syndrome or 2) the patient has three consecutive, negative AFB sputum smear results (109-112,123). Each of the three sputum specimens should be collected at 8-24-hour intervals (124), and at least one specimen should be an early morning specimen because respiratory secretions pool overnight. Generally, this method will allow patients with negative sputum smear results to be released from airborne precautions in 2 days.
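Restated as logic, the discontinuation criteria look roughly like the following sketch (Python; all names are illustrative, and this is a reading aid, not a clinical decision tool):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Smear:
    negative: bool            # AFB sputum smear result was negative
    hours_after_prior: float  # hours since the preceding specimen
    early_morning: bool       # collected as an early morning specimen

def may_discontinue_precautions(tb_considered_unlikely: bool,
                                alternative_diagnosis: bool,
                                smears: List[Smear]) -> bool:
    """Sketch of the criteria above: infectious TB must be considered
    unlikely AND (another diagnosis explains the syndrome OR three
    consecutive negative smears were collected 8-24 hours apart, at
    least one of them an early morning specimen)."""
    if not tb_considered_unlikely:
        return False
    if alternative_diagnosis:
        return True
    if len(smears) < 3:
        return False
    last_three = smears[-3:]
    return (all(s.negative for s in last_three)
            and all(8 <= s.hours_after_prior <= 24 for s in last_three[1:])
            and any(s.early_morning for s in last_three))
```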
The classification of the risk assessment of the health-care setting is used to determine how many AII rooms each setting needs, depending on the number of TB patients examined. At least one AII room is needed for settings in which TB patients stay while they are being treated, and additional AII rooms might be needed depending on the magnitude of patient-days of persons with suspected or confirmed TB disease (118). Additional rooms might be considered if options are limited for transferring patients with suspected or confirmed TB disease to other settings with AII rooms. For example, for a hospital with 120 beds, a minimum of one AII room is needed, possibly more, depending on how many TB patients are examined in 1 year.
# TB Airborne Precautions for Settings in Which Patients with Suspected or Confirmed TB Disease Are Expected To Be Encountered
Settings that plan to evaluate and manage patients with TB disease should have at least one AII room or enclosure that meets AII requirements (see Environmental Controls; and Supplement, Environmental Controls). These settings should develop written policies that specify 1) indications for airborne precautions, 2) persons authorized to initiate and discontinue airborne precautions, 3) specific airborne precautions, 4) AII room-monitoring procedures, 5) procedures for managing patients who do not adhere to airborne precautions, and 6) criteria for discontinuing airborne precautions.
A high index of suspicion should be maintained for TB disease. If a patient has suspected or confirmed TB disease, airborne precautions should be promptly initiated. Persons with suspected or confirmed TB disease who are inpatients should remain in AII rooms until they are determined to be noninfectious and have demonstrated a clinical response to a standard multidrug antituberculosis treatment regimen or until an alternative diagnosis is made. If the alternative diagnosis cannot be clearly established, even with three negative sputum smear results, empiric treatment of TB disease should be strongly considered (see Supplement, Estimating the Infectiousness of a TB Patient). Outpatients with suspected or confirmed infectious TB disease should remain in AII rooms until they are transferred or until their visit is complete.
# TB Airborne Precautions for Settings in Which Patients with Suspected or Confirmed TB Disease Are Not Expected To Be Encountered
Settings in which patients with suspected or confirmed TB disease are not expected to be encountered do not need an AII room or a respiratory-protection program for the prevention of transmission of M. tuberculosis. However, the following measures should be in place in these settings.
A written protocol should be developed for referring patients with suspected or confirmed TB disease to a collaborating referral setting in which the patient can be evaluated and managed properly. The referral setting should provide documentation of intent to collaborate. The protocol should be reviewed routinely and revised as needed.
Patients with suspected or confirmed TB disease should be placed in an AII room, if available, or in a room that meets the requirements for an AII room, or in a separate room with the door closed, apart from other patients and not in an open waiting area. Adequate time should elapse to ensure removal of M. tuberculosis-contaminated room air before allowing entry by staff or another patient (Tables 1 and 2).
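The waiting times referenced in Tables 1 and 2 follow from a standard dilution-ventilation decay relationship: assuming perfect mixing, the time in minutes to remove a given fraction of airborne contaminants at a given number of air changes per hour (ACH) is t = (ln(C0/C) / ACH) × 60. A minimal sketch of that formula (Python; the function name is illustrative, and real rooms with imperfect mixing can require longer times):

```python
import math

def clearance_time_minutes(ach: float, removal_fraction: float) -> float:
    """Minutes for a perfectly mixed room to purge the given fraction
    of airborne contaminants at the stated air changes per hour."""
    remaining = 1.0 - removal_fraction
    return math.log(1.0 / remaining) / ach * 60.0

# At 6 ACH: roughly 46 minutes for 99% removal, 69 minutes for 99.9%.
for fraction in (0.99, 0.999):
    print(f"{fraction:.1%} removal at 6 ACH: "
          f"{clearance_time_minutes(6, fraction):.0f} min")
```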
If an AII room is not available, persons with suspected or confirmed infectious TB disease should wear a surgical or procedure mask, if possible. Patients should be instructed to keep the mask on and to change the mask if it becomes wet. If patients cannot tolerate a mask, they should observe strict respiratory hygiene and cough etiquette procedures.
# AII Room Practices
AII rooms should be single-patient rooms in which environmental factors and entry of visitors and HCWs are controlled to minimize the transmission of M. tuberculosis. All HCWs who enter an AII room should wear at least N95 disposable respirators (see Respiratory Protection). Visitors may be offered respiratory protection (i.e., N95) and should be instructed by HCWs on the use of the respirator before entering an AII room. AII rooms have specific requirements for controlled ventilation, negative pressure, and air filtration (118) (see Environmental Controls). Staff should also:
- address problems that could interfere with adherence to airborne precautions (e.g., management of withdrawal from addictive substances, including tobacco); and
- ensure that patients with suspected or confirmed infectious TB disease who must be transported to another area of the setting or to another setting for a medically essential procedure bypass the waiting area and wear a surgical or procedure mask, if possible. Drivers, HCWs, and other staff who are transporting persons with suspected or confirmed infectious TB disease might consider wearing an N95 respirator.

Procedures on patients with suspected or confirmed TB disease should be scheduled when a minimum number of HCWs and other patients are present and as the last procedure of the day to maximize the time available for removal of airborne contamination (Tables 1 and 2).
# Diagnostic Procedures
Diagnostic procedures should be performed in settings with appropriate infection-control capabilities. The following recommendations should be applied for diagnosing TB disease and for evaluating patients for potential infectiousness.
# Clinical Diagnosis
A complete medical history should be obtained, including symptoms of TB disease, previous TB disease and treatment, previous history of infection with M. tuberculosis, and previous treatment of LTBI or exposure to persons with TB disease. A physical examination should be performed, and the diagnostic evaluation should include a chest radiograph and microscopic examination, culture, and, when indicated, NAA testing of sputum (39,53,125,126). Sputum induction with aerosol inhalation is preferred when possible, particularly when the patient cannot produce sputum spontaneously. Gastric aspiration might be necessary for those patients, particularly children, who cannot produce sputum, even with aerosol inhalation (127-130). Bronchoscopy might be needed for specimen collection, especially if sputum specimens have been nondiagnostic and doubt exists as to the diagnosis (90,111,127,128,131-134).
All patients with suspected or confirmed infectious TB disease should be placed under airborne precautions until they have been determined to be noninfectious (see Supplement, Estimating the Infectiousness of a TB Patient). Adult and adolescent patients who might be infectious include persons who are coughing; have cavitation on chest radiograph; have positive AFB sputum smear results; have respiratory tract disease with involvement of the lung, pleura, or airways, including larynx, who fail to cover the mouth and nose when coughing; are not on antituberculosis treatment or are on incorrect antituberculosis treatment; or are undergoing cough-inducing or aerosol-generating procedures (e.g., sputum induction, bronchoscopy, and airway suction) (30,135).
Persons diagnosed with extrapulmonary TB disease should be evaluated for the presence of concurrent pulmonary TB disease. An additional concern in infection control with children relates to adult household members and visitors who might be the source case (136). Pediatric patients, including adolescents, who might be infectious include those who have extensive pulmonary or laryngeal involvement, prolonged cough, positive sputum AFB smear results, or cavitary TB on chest radiograph (as is typically observed in immunocompetent adults with TB disease), or those for whom cough-inducing or aerosol-generating procedures are performed (136,137).
Although children are uncommonly infectious, pediatric patients should be evaluated for infectiousness by using the same criteria as for adults (i.e., on the basis of pulmonary or laryngeal involvement). Patients with suspected or confirmed TB disease should be immediately reported to the local public health authorities so that treatment can be tracked to completion, preferably through a case management system, DOT can be arranged, and standard procedures for identifying and evaluating TB contacts can be initiated. Coordinate efforts with the local or state health department to arrange treatment and long-term follow-up and evaluation of contacts.
# Laboratory Diagnosis
To produce the highest quality laboratory results, laboratories performing mycobacteriologic tests should be skilled in both the laboratory and the administrative aspects of specimen processing. Laboratories should use or have prompt access to the most rapid methods available: 1) fluorescent microscopy and concentration for AFB smears; 2) rapid NAA testing for direct detection of M. tuberculosis in patient specimens (125); 3) solid and rapid broth culture methods for isolation of mycobacteria; 4) nucleic acid probes or high pressure liquid chromatography (HPLC) for species identification; and 5) rapid broth culture methods for drug susceptibility testing. Laboratories should incorporate other more rapid or sensitive tests as they become available, practical, and affordable (see Supplement, Diagnostic Procedures for LTBI and TB Disease) (138,139).
In accordance with local and state laws and regulations, a system should be in place to ensure that laboratories report any positive results from any specimens to clinicians within 24 hours of receipt of the specimen (139,140). Certain settings perform AFB smears on-site for rapid results (and results should be reported to clinicians within 24 hours) and then send specimens or cultures to a referral laboratory for identification and drug-susceptibility testing. This referral practice can speed the receipt of smear results but delay culture identification and drug-susceptibility results. Settings that cannot provide the full range of mycobacteriologic testing services should contract with their referral laboratories to ensure rapid results while maintaining proficiency for on-site testing. In addition, referral laboratories should be instructed to store isolates in case additional testing is necessary.
All drug susceptibility results on M. tuberculosis isolates should be reported to the local or state health department as soon as these results are available. Laboratories that rarely receive specimens for mycobacteriologic analysis should refer specimens to a laboratory that performs these tests routinely. The reference laboratory should provide rapid testing and reporting. Out-of-state reference laboratories should provide all results to the local or state health department from which the specimen originated.
# Special Considerations for Persons Who Are at High Risk for TB Disease or in Whom TB Disease Might Be Difficult to Diagnose
The probability of TB disease is higher among patients who 1) previously had TB disease or were exposed to M. tuberculosis, 2) belong to a group at high risk for TB disease, or 3) have a positive TST or BAMT result. TB disease is strongly suggested if the diagnostic evaluation reveals symptoms or signs of TB disease, a chest radiograph consistent with TB disease, or AFB in sputum or from any other specimen. TB disease can occur simultaneously in immunocompromised persons who have pulmonary infections caused by other organisms (e.g., Pneumocystis jiroveci and M. avium complex) and should be considered in the diagnostic evaluation of all such patients with symptoms or signs of TB disease (53).
TB disease can be difficult to diagnose in persons who have HIV infection (49) (or other conditions associated with severe suppression of cell-mediated immunity) because of nonclassical or normal radiographic presentation or the simultaneous occurrence of other pulmonary infections (e.g., P. jiroveci or M. avium complex) (2). Patients who are HIV-infected are also at greater risk for having extrapulmonary TB (2). The difficulty in diagnosing TB disease in HIV-infected persons can be compounded by the possible lower sensitivity and specificity of sputum smear results for detecting AFB (53,141) and the overgrowth of cultures with M. avium complex in specimens from patients infected with both M. tuberculosis and M. avium complex. The TST in patients with advanced HIV infection is unreliable and cannot be used in clinical decision making (35,53,142).
For immunocompromised patients who have respiratory symptoms or signs that are attributed initially to infections or conditions other than TB disease, conduct an evaluation for coexisting TB disease. If the patient does not respond to recommended treatment for the presumed cause of the pulmonary abnormalities, repeat the evaluation (see Supplement, Diagnostic Procedures for LTBI and TB Disease). In certain settings in which immunocompromised patients and patients with TB disease are examined, implementing airborne precautions might be prudent for all persons at high risk. These persons include those infected with HIV who have an abnormal chest radiograph or respiratory symptoms, symptomatic foreign-born persons who have immigrated within the previous 5 years from TB-endemic countries, and persons with pulmonary infiltrates on chest radiograph, or symptoms or signs of TB disease.
# Initiation of Treatment
For patients who have confirmed TB disease or who are considered highly probable to have TB disease, promptly start antituberculosis treatment in accordance with current guidelines (see Supplements, Diagnostic Procedures for LTBI and TB Disease; and Treatment Procedures for LTBI and TB Disease) (31). In accordance with local and state regulations, local health departments should be notified of all cases of suspected TB.
DOT is the standard of care for all patients with TB disease and should be used for all doses during the course of therapy for treatment of TB disease. All inpatient medication should be administered by DOT and reported to the state or local health department. Rates of relapse and development of drug resistance are decreased when DOT is used (143-145). All patients on intermittent (i.e., once or twice per week) treatment for TB disease or LTBI should receive DOT. Settings should collaborate with the local or state health department on decisions concerning inpatient DOT and arrangements for outpatient DOT (31).
# Managing Patients Who Have Suspected or Confirmed TB Disease: Considerations for Special Circumstances and Settings
The recommendations for preventing transmission of M. tuberculosis are applicable to all health-care settings, including those that have been described (Appendix A). These settings should each have independent risk assessments if they are stand-alone settings, or each setting should have a detailed section written as part of the risk assessment for the overall setting.
# Minimum Requirements
The specific precautions for the settings included in this section vary, depending on the setting.
# Inpatient Settings
# Emergency Departments (EDs)
Patients with symptoms of TB disease commonly seek treatment in EDs. Because TB symptoms are common and nonspecific, infectious TB disease could be encountered in these settings. The use of ED-based TB screening has not been demonstrated to be consistently effective (146).
The amount of time patients with suspected or confirmed infectious TB disease spend in EDs and urgent-care settings should be minimized. Patients with suspected or confirmed infectious TB disease should be promptly identified, evaluated, and separated from other patients. Ideally, such patients should be placed in an AII room. When an AII room is not available, use a room with effective general ventilation, and use air-cleaning technologies (e.g., a portable HEPA filtration system), if available, or transfer the patient to a setting or area with recommended infection-control capacity. Facility engineering personnel with expertise in heating, ventilation, and air conditioning (HVAC) systems and air handlers should evaluate how this option is applied to ensure that no overpressurization of return air or unwanted deviation from the designed airflow in the zone occurs.
EDs with a high volume of patients with suspected or confirmed TB disease should have at least one AII room (see TB Risk Assessment). Air-cleaning technologies (e.g., HEPA filtration and UVGI) can be used to increase equivalent air changes per hour (ACH) in waiting areas (Table 1). HCWs entering an AII room or any room with a patient with infectious TB disease should wear at least an N95 disposable respirator. After a patient with suspected or confirmed TB disease exits a room, allow adequate time to elapse to ensure removal of M. tuberculosis-contaminated room air before allowing entry by staff or another patient (Tables 1 and 2).
Before a patient leaves an AII room, perform an assessment of 1) the patient's need to discontinue airborne precautions, 2) the risk for transmission, and 3) the patient's ability to observe strict respiratory hygiene and cough etiquette procedures. Patients with suspected or confirmed infectious TB who are outside an AII room should wear a surgical or procedure mask, if possible. Patients who cannot tolerate masks because of medical conditions should observe strict respiratory hygiene and cough etiquette procedures.
# Intensive Care Units (ICUs)
Patients with infectious TB disease might become sick enough to require admission to an ICU. Place ICU patients with suspected or confirmed infectious TB disease in an AII room, if possible. ICUs with a high volume of patients with suspected or confirmed TB disease should have at least one AII room (Appendix B). Air-cleaning technologies (e.g., HEPA filtration and UVGI) can be used to increase equivalent ACH in waiting areas (see Environmental Controls).
HCWs entering an AII room or any room with a patient with infectious TB disease should wear at least an N95 disposable respirator. To help reduce the risk for contaminating a ventilator or discharging M. tuberculosis into the ambient air when mechanically ventilating (i.e., with a ventilator or manual resuscitator) a patient with suspected or confirmed TB disease, place a bacterial filter on the patient's endotracheal tube (or at the expiratory side of the breathing circuit of a ventilator) (147)(148)(149)(150)(151). In selecting a bacterial filter, give preference to models specified by the manufacturer to filter particles 0.3 µm in size in both the unloaded and loaded states with a filter efficiency of ≥95% (i.e., filter penetration of <5%) at the maximum design flow rates of the ventilator for the service life of the filter, as specified by the manufacturer.
# Surgical Suites
Surgical suites require special infection-control considerations for preventing transmission of M. tuberculosis. Normally, the direction of airflow should be from the operating room (OR) to the hallway (positive pressure) to minimize contamination of the surgical field. Certain hospitals have procedure rooms with reversible airflow or pressure, whereas others have positive-pressure rooms with a negative pressure anteroom. Surgical staff, particularly those close to the surgical field, should use respiratory protection (e.g., a valveless N95 disposable respirator) to protect themselves and the patient undergoing surgery. When possible, postpone non-urgent surgical procedures on patients with suspected or confirmed TB disease until the patient is determined to be noninfectious or determined to not have TB disease. When surgery cannot be postponed, procedures should be performed in a surgical suite with recommended ventilation controls. Procedures should be scheduled for patients with suspected or confirmed TB disease when a minimum number of HCWs and other patients are present in the surgical suite, and at the end of the day to maximize the time available for removal of airborne contamination (Tables 1 and 2).
If a surgical suite or an OR has an anteroom, the anteroom should be either 1) positive pressure compared with both the corridor and the suite or OR (with filtered supply air) or 2) negative pressure compared with both the corridor and the suite or OR. In the usual design in which an OR has no anteroom, keep the doors to the OR closed, and minimize traffic into and out of the room and in the corridor. Using additional air-cleaning technologies (e.g., UVGI) should be considered to increase the equivalent ACH. Air-cleaning systems can be placed in the room or in surrounding areas to minimize contamination of the surroundings after the procedure (114) (see Environmental Controls).
Ventilation in the OR should be designed to provide a sterile environment in the surgical field while preventing contaminated air from flowing to other areas in the health-care setting. Steps should be taken to reduce the risk for contaminating ventilator or anesthesia equipment or discharging tubercle bacilli into the ambient air when operating on a patient with suspected or confirmed TB disease (152). A bacterial filter should be placed on the patient's endotracheal tube (or at the expiratory side of the breathing circuit of a ventilator or anesthesia machine, if used) (147-151). When selecting a bacterial filter, give preference to models specified by the manufacturer to filter particles 0.3 µm in size in both the unloaded and loaded states with a filter efficiency of ≥95% (i.e., filter penetration of <5%) at the maximum design flow rates of the ventilator for the service life of the filter, as specified by the manufacturer.
When surgical procedures (or other procedures that require a sterile field) are performed on patients with suspected or confirmed infectious TB, respiratory protection should be worn by HCWs to protect the sterile field from the respiratory secretions of HCWs and to protect HCWs from the infectious droplet nuclei generated from the patient. When selecting respiratory protection, do not use valved or positive-pressure respirators, because they do not protect the sterile field. A respirator with a valveless filtering facepiece (e.g., N95 disposable respirator) should be used.
Postoperative recovery of a patient with suspected or confirmed TB disease should be in an AII room in any location where the patient is recovering (118). If an AII or comparable room is not available for surgery or postoperative recovery, air-cleaning technologies (e.g., HEPA filtration and UVGI) can be used to increase the number of equivalent ACH (see Environmental Controls); however, the infection-control committee should be involved in the selection and placement of these supplemental controls.
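"Equivalent ACH" from a supplemental air cleaner is conventionally estimated by converting the unit's filtered airflow into room volumes per hour and adding that to the mechanical ventilation rate. A minimal sketch with hypothetical numbers; actual selection and placement should involve the infection-control committee, as noted above.

```python
def equivalent_ach(cleaner_flow_cfm: float, room_volume_ft3: float,
                   mechanical_ach: float = 0.0) -> float:
    """Equivalent air changes per hour: the room's mechanical ACH plus
    the contribution of a recirculating air cleaner, assuming its
    filtered output mixes well with room air."""
    return mechanical_ach + (cleaner_flow_cfm * 60.0) / room_volume_ft3

# Hypothetical example: a 300-cfm portable HEPA unit in a
# 10 x 12 x 8 ft recovery bay (960 ft^3) that already has 6 mechanical ACH.
print(equivalent_ach(300, 960, mechanical_ach=6))  # 24.75 equivalent ACH
```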
# Laboratories
Staff who work in laboratories that handle clinical specimens encounter risks not typically present in other areas of a health-care setting (153-155). Laboratories that handle TB specimens include 1) pass-through facilities that forward specimens to reference laboratories for analysis; 2) diagnostic laboratories that process specimens and perform acid-fast staining and primary culture for M. tuberculosis; and 3) facilities that perform extensive identification, subtyping, and susceptibility studies.
Procedures involving the manipulation of specimens or cultures containing M. tuberculosis introduce additional substantial risks that must be addressed in an effective TB infection-control program. Personnel who work with mycobacteriology specimens should be thoroughly trained in methods that minimize the production of aerosols and should undergo periodic competency testing, including direct observation of their work practices. Risks for transmission of M. tuberculosis in laboratories include aerosol formation during any specimen or isolate manipulation and percutaneous inoculation from accidental exposures. Biosafety recommendations for laboratories performing diagnostic testing for TB have been published (74,75,138,156,157).
In laboratories affiliated with a health-care setting (e.g., a hospital) and in free-standing laboratories, the laboratory director, in collaboration with the infection-control staff for the setting, and in consultation with the state TB laboratory, should develop a risk-based infection-control plan for the laboratory that minimizes the risk for exposure to M. tuberculosis. Consider factors including 1) incidence of TB disease (including drug-resistant TB) in the community and in patients served by settings that submit specimens to the laboratory, 2) design of the laboratory, 3) level of TB diagnostic service offered, 4) number of specimens processed, and 5) whether aerosol-generating procedures are performed and how frequently they are performed. Referral laboratories should store isolates in case additional testing is necessary.
Biosafety level (BSL)-2 practices and procedures, containment equipment, and facilities are required for non-aerosol-producing manipulations of clinical specimens (e.g., preparing direct smears for acid-fast staining when done in conjunction with training and periodic checking of competency) (138). All specimens suspected of containing M. tuberculosis (including specimens processed for other microorganisms) should be handled in a Class I or II biological safety cabinet (BSC) (158,159). Conduct all aerosol-generating activities (e.g., inoculating culture media, setting up biochemical and antimicrobic susceptibility tests, opening centrifuge cups, and performing sonication) in a Class I or II BSC (158).
For laboratories that are considered at least medium risk (Appendix C), conduct testing for M. tuberculosis infection at least annually among laboratorians who perform TB diagnostics or manipulate specimens from which M. tuberculosis is commonly isolated (e.g., sputum, lower respiratory secretions, or tissues) (Appendix D). More frequent testing for M. tuberculosis is recommended in the event of a documented conversion among laboratory staff or a laboratory accident that poses a risk for exposure to M. tuberculosis (e.g., malfunction of a centrifuge leading to aerosolization of a sample).
Based on the risk assessment for the laboratory, employees should use personal protective equipment (including respiratory protection) recommended by local regulations for each activity. For activities that have a low risk for generating aerosols, standard personal protective equipment consists of protective laboratory coats, gowns, or smocks designed specifically for use in the laboratory. Protective garments should be left in the laboratory before going to nonlaboratory areas.
For all laboratory procedures, disposable gloves should be worn. Gloves should be disposed of when work is completed, the gloves are overtly contaminated, or the integrity of the glove is compromised. Local or state regulations should determine procedures for the disposal of gloves. Face protection (e.g., goggles, full-facepiece respirator, face shield, or other splatter guard) should also be used when manipulating specimens inside or outside a BSC. Use respiratory protection when performing procedures that can result in aerosolization outside a BSC. The minimum level of respiratory protection is an N95 filtering facepiece respirator. Laboratory workers who use respiratory protection should be provided with the same training on respirator use and care and the same fit testing as other HCWs.
After documented laboratory accidents, conduct an investigation of exposed laboratory workers. Laboratories in which specimens for mycobacteriologic studies (e.g., AFB smears and cultures) are processed should follow the AIA and CDC/National Institutes of Health guidelines (118,159) (see Environmental Controls). BSL-3 practices, containment equipment, and facilities are recommended for the propagation and manipulation of cultures of M. tuberculosis complex (including M. bovis) and for animal studies in which primates that are experimentally or naturally infected with M. tuberculosis or M. bovis are used. Animal studies in which guinea pigs or mice are used can be conducted at animal BSL-2. Aerosol infection methods should be conducted at BSL-3 (159).
# Bronchoscopy Suites
Because bronchoscopy is a cough-inducing procedure that might be performed on patients with suspected or confirmed TB disease, bronchoscopy suites require special attention (29,81,160,161). Bronchoscopy can result in the transmission of M. tuberculosis either through the airborne route (29,63,81,86,162) or through a contaminated bronchoscope (80,82,163-170). Closed and effectively filtered ventilatory circuitry and minimizing opening of such circuitry in intubated and mechanically ventilated patients might minimize exposure (see Intensive Care Units) (149).
If possible, avoid bronchoscopy on patients with suspected or confirmed TB disease, or postpone the procedure until the patient is determined to be noninfectious by confirmation of three negative AFB sputum smear results (109-112). When collection of a spontaneous sputum specimen is not adequate or possible, sputum induction has been demonstrated to be equivalent to bronchoscopy for obtaining specimens for culture (110). Bronchoscopy might offer the advantages of confirming the diagnosis with histologic specimens, collecting additional specimens (including post-bronchoscopy sputum) that might increase the diagnostic yield, and providing the opportunity to confirm an alternative diagnosis. If the diagnosis of TB disease is suspected, consideration should be given to empiric antituberculosis treatment.
A physical examination should be performed, and a chest radiograph, microscopic examination, culture, and NAA testing of sputum or other relevant specimens, including gastric aspirates (125), should also be obtained, as indicated (53,126,130,131). Because 15%-20% of patients with TB disease have negative TST results, a negative TST result is of limited value in the evaluation of the patient with suspected TB disease, particularly in patients from high TB incidence groups in whom TST positive rates exceed 30% (31).
Whenever feasible, perform bronchoscopy in a room that meets the ventilation requirements for an AII room (the same parameters specified in the AIA guidelines for bronchoscopy rooms) (see Environmental Controls). Air-cleaning technologies (e.g., HEPA filtration and UVGI) can be used to increase equivalent ACH.
If sputum specimens must be obtained and the patient cannot produce sputum, consider sputum induction before bronchoscopy (111). In a patient who is intubated and mechanically ventilated, minimize the opening of circuitry. At least N95 respirators should be worn by HCWs while present during a bronchoscopy procedure on a patient with suspected or confirmed infectious TB disease. Because of the increased risk for M. tuberculosis transmission during the performance of bronchoscopy procedures on patients with TB disease, consider using a higher level of respiratory protection than an N95 disposable respirator (e.g., an elastomeric full-facepiece respirator or a powered air-purifying respirator [PAPR]) (see Respiratory Protection).
After bronchoscopy is performed on a patient with suspected or confirmed infectious TB disease, allow adequate time to elapse to ensure removal of M. tuberculosis-contaminated room air before performing another procedure in the same room (Tables 1 and 2). During the period after bronchoscopy when the patient is still coughing, collect at least one sputum specimen for AFB smear to increase the diagnostic yield of the procedure. Patients with suspected or confirmed TB disease who are undergoing bronchoscopy should be kept in an AII room until coughing subsides.
# Sputum Induction and Inhalation Therapy Rooms
Sputum induction and inhalation therapy induce coughing, which increases the potential for transmission of M. tuberculosis (87,88,90). Therefore, appropriate precautions should be taken when working with patients with suspected or confirmed TB disease. Sputum induction procedures for persons with suspected or confirmed TB disease should be considered after determination that self-produced sputum collection is inadequate and that the AFB smear result on other specimens collected is negative. HCWs who order or perform sputum induction or inhalation therapy in an environment without proper controls for the purpose of diagnosing conditions other than TB disease should assess the patient's risk for TB disease.
Cough-inducing or aerosol-generating procedures in patients with diagnosed TB should be conducted only after an assessment of infectiousness has been considered for each patient and should be conducted in an environment with proper controls. Sputum induction should be performed by using local exhaust ventilation (e.g., booths with special ventilation) or alternatively in a room that meets or exceeds the requirements of an AII room (see Environmental Controls) (90). At least an N95 disposable respirator should be worn by HCWs performing sputum inductions or inhalation therapy on a patient with suspected or confirmed infectious TB disease. Based on the risk assessment, consideration should be given to using a higher level of respiratory protection (e.g., an elastomeric full-facepiece respirator or a PAPR) (see Respiratory Protection) (90).
After sputum induction or inhalation therapy is performed on a patient with suspected or confirmed infectious TB disease, allow adequate time to elapse to ensure removal of M. tuberculosis-contaminated room air before performing another procedure in the same room (Tables 1 and 2). Patients with suspected or confirmed TB disease who are undergoing sputum induction or inhalation therapy should be kept in an AII room until coughing subsides.
# Autopsy Suites
Autopsies performed on bodies with suspected or confirmed TB disease can pose a high risk for transmission of M. tuberculosis, particularly during the performance of aerosol-generating procedures (e.g., median sternotomy). Persons who handle bodies might be at risk for transmission of M. tuberculosis (77,78,171-177). Because certain procedures performed as part of an autopsy might generate infectious aerosols, special airborne precautions are required.
Autopsies should not be performed on bodies with suspected or confirmed TB disease without adequate protection for those performing the autopsy procedures. Settings in which autopsies are performed should meet or exceed the requirements of an AII room, if possible (see Environmental Controls), and the drawing in the American Conference of Governmental Industrial Hygienists® (ACGIH) Industrial Ventilation Manual VS-99-07 (178). Air should be exhausted to the outside of the building. Air-cleaning technologies (e.g., HEPA filtration or UVGI) can be used to increase the number of equivalent ACH (see Environmental Controls).
As an added administrative measure, when performing autopsies on bodies with suspected or confirmed TB disease, coordination between attending physicians and pathologists is needed to ensure proper infection control and specimen collection. The use of local exhaust ventilation should be considered to reduce exposures to infectious aerosols (e.g., when using a saw, including a Stryker saw). For HCWs performing an autopsy on a body with suspected or confirmed TB disease, at least N95 disposable respirators should be worn (see Respiratory Protection). Based on the risk assessment, consider using a higher level of respiratory protection than an N95 disposable respirator (e.g., an elastomeric full-facepiece respirator or a PAPR) (see Respiratory Protection).
After an autopsy is performed on a body with suspected or confirmed TB disease, allow adequate time to elapse to ensure removal of M. tuberculosis-contaminated room air before performing another procedure in the same room (Tables 1 and 2). If time delay is not feasible, the autopsy staff should continue to wear respirators while they are in the room.
# Embalming Rooms
Tissue or organ removal in an embalming room performed on bodies with suspected or confirmed TB disease can pose a high risk for transmission of M. tuberculosis, particularly during the performance of aerosol-generating procedures. Persons who handle corpses might be at risk for transmission of M. tuberculosis (77,78,171-176). Because certain procedures performed as part of embalming might generate infectious aerosols, special airborne precautions are required.
Embalming involving tissue or organ removal should not be performed on bodies with suspected or confirmed TB disease without adequate protection for the persons performing the procedures. Settings in which these procedures are performed should meet or exceed the requirements of an AII room, if possible (see Environmental Controls), and the drawing in the ACGIH Industrial Ventilation Manual VS-99-07 (178). Air should be exhausted to the outside of the building. Air-cleaning technologies (e.g., HEPA filtration or UVGI) can be used to increase the number of equivalent ACH (see Environmental Controls). The use of local exhaust ventilation should be considered to reduce exposures to infectious aerosols (e.g., when using a saw, including a Stryker saw) and vapors from embalming fluids.
When HCWs remove tissues or organs from a body with suspected or confirmed TB disease, at least N95 disposable respirators should be worn (see Respiratory Protection). Based on the risk assessment, consider using a higher level of respiratory protection than an N95 disposable respirator (e.g., an elastomeric full-facepiece respirator or a PAPR) (see Respiratory Protection).
After tissue or organ removal is performed on a body with suspected or confirmed TB disease, allow adequate time to elapse to ensure removal of M. tuberculosis-contaminated room air before performing another procedure in the same room (see Environmental Controls). If time delay is not feasible, the staff should continue to wear respirators while in the room.
# Outpatient Settings
Outpatient settings might include TB treatment facilities, dental-care settings, medical offices, ambulatory-care settings, and dialysis units. Environmental controls should be implemented based on the types of activities that are performed in the setting.
# TB Treatment Facilities
TB treatment facilities might include TB clinics, infectious disease clinics, or pulmonary clinics. TB clinics and other settings in which patients with TB disease and LTBI are examined on a regular basis require special attention. The same principles of triage used in EDs and ambulatory-care settings (see Minimum Requirements) should be applied to TB treatment facilities. These principles include prompt identification, evaluation, and airborne precautions of patients with suspected or confirmed infectious TB disease.
All TB clinic staff, including outreach workers, should be screened for M. tuberculosis infection (Appendix C). Patients with suspected or confirmed infectious TB disease should be physically separated from all other patients, but especially from those with HIV infection and other immunocompromising conditions that increase the likelihood of development of TB disease if infected. Immunosuppressed patients with suspected or confirmed infectious TB disease need to be physically separated from others to protect both the patient and others. Appointments should be scheduled to avoid exposing HIV-infected or otherwise severely immunocompromised persons to M. tuberculosis. Certain times of the day should be designated for appointments for patients with infectious TB disease, or these patients should be treated in areas in which immunocompromised persons are not treated.
Persons with suspected or confirmed infectious TB disease should be promptly placed in an AII room to minimize exposure in the waiting room and other areas of the clinic, and they should be instructed to observe strict respiratory hygiene and cough etiquette procedures. Clinics that provide care for patients with suspected or confirmed infectious TB disease should have at least one AII room. The need for additional AII rooms should be based on the risk assessment for the setting.
All cough-inducing and aerosol-generating procedures should be performed using environmental controls (e.g., in a booth or an AII room) (see Environmental Controls). Patients should be left in the booth or AII room until coughing subsides. Another patient or HCW should not be allowed to enter the booth or AII room until sufficient time has elapsed for adequate removal of M. tuberculosis-contaminated air (see Environmental Controls). A respiratory-protection program should be implemented for all HCWs who work in the TB clinic and who enter AII rooms, visit areas in which persons with suspected or confirmed TB disease are located, or transport patients with suspected or confirmed TB disease in vehicles. When persons with suspected or confirmed infectious TB disease are in the TB clinic and not in an AII room, they should wear a surgical or procedure mask, if possible.
# Medical Offices and Ambulatory-Care Settings
The symptoms of TB disease are commonly among the symptoms for which patients seek treatment in a medical office. Therefore, infectious TB disease could be encountered in certain medical offices and ambulatory-care settings.
Because of the potential for M. tuberculosis transmission in medical offices and ambulatory-care settings, follow the general recommendations for management of patients with suspected or confirmed TB disease and the specific recommendations for EDs (see Emergency Departments). The risk assessment may be used to determine the need for or selection of environmental controls and the frequency of testing HCWs for M. tuberculosis infection.
# Dialysis Units
Certain patients with TB disease need chronic dialysis for treatment of end-stage renal disease (ESRD) (179-181). The incidence of TB disease and infection in patients with ESRD might be higher than in the general population (181-183) and might be compounded by the overlapping risks for ESRD and TB disease among patients with diabetes mellitus (39). In addition, certain dialysis patients or patients who are otherwise immunocompromised (e.g., patients with organ transplants) might be on immunosuppressive medications (162,183). Patients with ESRD who need chronic dialysis should have at least one test for M. tuberculosis infection to determine the need for treatment of LTBI. Annual re-screening is indicated if ongoing exposure of ESRD patients to M. tuberculosis is probable.
Hemodialysis procedures should be performed on hospitalized patients with suspected or confirmed TB disease in an AII room. Dialysis staff should use recommended respiratory protection, at least an N95 disposable respirator. Patients with suspected or confirmed TB disease who need chronic hemodialysis might need referral to a hospital or other setting with the ability to perform dialysis procedures in an AII room until the patient is no longer infectious or another diagnosis is made. Certain antituberculosis medications are prescribed differently for hemodialysis patients (31).
# Dental-Care Settings
The generation of droplet nuclei containing M. tuberculosis as a result of dental procedures has not been demonstrated (184). Nonetheless, oral manipulations during dental procedures could stimulate coughing and dispersal of infectious particles. Patients and dental HCWs share the same air space for varying periods, which contributes to the potential for transmission of M. tuberculosis in dental settings (185). For example, MDR TB might have been transmitted between two dental workers during routine dental procedures (186).
To prevent the transmission of M. tuberculosis in dental-care settings, certain recommendations should be followed (187,188). Infection-control policies for each dental health-care setting should be developed, based on the community TB risk assessment (Appendix B), and should be reviewed annually, if possible. The policies should include appropriate screening for LTBI and TB disease for dental HCWs, education on the risk for transmission to dental HCWs, and provisions for detection and management of patients who have suspected or confirmed TB disease.
When taking a patient's initial medical history and at periodic updates, dental HCWs should routinely document whether the patient has symptoms or signs of TB disease. If urgent dental care must be provided for a patient who has suspected or confirmed infectious TB disease, dental care should be provided in a setting that meets the requirements for an AII room (see Environmental Controls). Respiratory protection (at least N95 disposable respirator) should be used while performing procedures on such patients.
In dental health-care settings that routinely provide care to populations at high risk for TB disease, using engineering controls (e.g., portable HEPA units) similar to those used in waiting rooms or clinic areas of health-care settings with a comparable community-risk profile might be beneficial.
During clinical assessment and evaluation, a patient with suspected or confirmed TB disease should be instructed to observe strict respiratory hygiene and cough etiquette procedures (122). The patient should also wear a surgical or procedure mask, if possible. Non-urgent dental treatment should be postponed, and these patients should be promptly referred to an appropriate medical setting for evaluation of possible infectiousness. In addition, these patients should be kept in the dental health-care setting no longer than required to arrange a referral.
# Nontraditional Facility-Based Settings
Nontraditional facility-based settings include EMS, medical settings in correctional facilities, home-based health-care and outreach settings, long-term-care settings (e.g., hospices and skilled nursing facilities), and homeless shelters. Environmental controls should be implemented based on the types of activities that are performed in the setting.
TB is more common in the homeless population than in the general population (189-192). Because persons who visit homeless shelters frequently share exposure and risk characteristics of TB patients who are treated in outpatient clinics, homeless shelters with clinics should observe the same TB infection-control measures as outpatient clinics. ACET has developed recommendations to assist health-care providers, health departments, shelter operators and workers, social service agencies, and homeless persons to prevent and control TB in this population (189).
# Emergency Medical Services (EMS)
Although the overall risk is low (193), documented transmission of M. tuberculosis has occurred in EMS occupational settings (194), and approaches to reduce this risk have been described (193,195). EMS personnel should be included in a comprehensive screening program to test for M. tuberculosis infection, with baseline screening and follow-up testing as indicated by the risk classification of the setting. Persons with suspected or confirmed infectious TB disease who are transported in an ambulance should wear a surgical or procedure mask, if possible, and drivers, HCWs, and other staff who are transporting the patient might consider wearing an N95 respirator.
The ambulance ventilation system should be operated in the nonrecirculating mode, and the maximum amount of outdoor air should be provided to facilitate dilution. If the vehicle has a rear exhaust fan, use this fan during transport. If the vehicle is equipped with a supplemental recirculating ventilation unit that passes air through HEPA filters before returning it to the vehicle, use this unit to increase the number of ACH (188). Air should flow from the cab (front of vehicle), over the patient, and out the rear exhaust fan. If an ambulance is not used, the ventilation system for the vehicle should bring in as much outdoor air as possible, and the system should be set to nonrecirculating. If possible, physically isolate the cab from the rest of the vehicle, and place the patient in the rear seat (194).
EMS personnel should be included in the follow-up contact investigations of patients with infectious TB disease. The Ryan White Comprehensive AIDS Resources Emergency Act of 1990 (Public Law 101-381) mandates notification of EMS personnel after they have been exposed to a patient with suspected or confirmed infectious TB disease (Title 42 U.S. Code 1994).
# Medical Settings in Correctional Facilities
TB is a substantial health concern in correctional facilities; employees and inmates are at high risk (105,196-205). TB outbreaks in correctional facilities can lead to transmission in surrounding communities (201,206,207). ACET recommends that all correctional facilities have a written TB infection-control plan (196), and multiple studies indicate that screening correctional employees and inmates is a vital TB control measure (204,208,209).
The higher risk for M. tuberculosis transmission in health-care settings in correctional facilities (including jails and prisons) is a result of the disproportionate number of inmates with risk factors for TB infection and TB disease (203,210). Compared with the general population, TB prevalence is higher among inmates and is associated with a higher prevalence of HIV infection (197), increased illicit substance use, lower socioeconomic status (201), and presence in settings that are at high risk for transmission of M. tuberculosis.
A TB infection-control plan should be developed specifically for each correctional facility, even if the institution is part of a multifacility system (196,211). Medical settings in correctional facilities should be classified as at least medium risk; therefore, all correctional facility health-care personnel and other staff, including correctional officers, should be screened for TB at least annually (201,203,208).
Correctional facilities should collaborate with the local or state health department to decide on TB contact investigations and discharge planning (105,212) and to provide TB training and education to inmates and employees (196). Corrections staff should be educated regarding symptoms and signs of TB disease and encouraged to facilitate prompt evaluation of inmates with suspected infectious TB disease (206).
At least one AII room should be available in correctional facilities. Any inmate with suspected or confirmed infectious TB disease should be placed in an AII room immediately or transferred to a setting with an AII room; base the number of additional AII rooms needed on the risk assessment for the setting. Sputum samples should be collected in sputum induction booths or AII rooms, not in inmates' cells. Sputum collection can also be performed safely outside, away from other persons, windows, and ventilation intakes.
Inmates with suspected or confirmed infectious TB disease who must be transported outside an AII room for medically essential procedures should wear a surgical or procedure mask during transport, if possible. If risk assessment indicates the need for respiratory protection, drivers, medical or security staff, and others who are transporting patients with suspected or confirmed infectious TB disease in an enclosed vehicle should consider wearing an N95 disposable respirator.
A respiratory-protection program, including training, education, and fit testing, should be implemented as part of the correctional facility's TB infection-control program. Correctional facilities should maintain a tracking system for inmate TB screening and treatment and establish a mechanism for sharing this information with state and local health departments and other correctional facilities (196,201). Confidentiality of inmates should be ensured during screening for symptoms or signs of TB disease and risk factors.
# Home-Based Health-Care and Outreach Settings
Transmission of M. tuberculosis has been documented in staff who work in home-based health-care and outreach settings (213,214). The setting's infection-control plan should include training that reminds HCWs who provide medical services in the homes of patients or other outreach settings of the importance of early evaluation of symptoms or signs of TB disease for early detection and treatment of TB disease. Training should also include the role of the HCW in educating patients regarding the importance of reporting symptoms or signs of TB disease and the importance of reporting any adverse effects to treatment for LTBI or TB disease.
HCWs who provide medical services in the homes of patients with suspected or confirmed TB disease can help prevent transmission of M. tuberculosis by 1) educating patients and other household members regarding the importance of taking medications as prescribed, 2) facilitating medical evaluation of symptoms or signs of TB disease, and 3) administering DOT, including DOT for treatment of LTBI whenever feasible.
HCWs who provide medical services in the homes of patients should not perform cough-inducing or aerosol-generating procedures on patients with suspected or confirmed infectious TB disease, because recommended infection controls probably will not be in place. Sputum collection should be performed outdoors, away from other persons, windows, and ventilation intakes.
HCWs who provide medical services in the homes of patients with suspected or confirmed infectious TB disease should instruct TB patients to observe strict respiratory hygiene and cough etiquette procedures. HCWs who enter homes of persons with suspected or confirmed infectious TB disease or who transport such persons in an enclosed vehicle should consider wearing at least an N95 disposable respirator (see Respiratory Protection).
# Long-Term-Care Facilities (LTCFs)
TB poses a health risk to patients, HCWs, visitors, and volunteers in LTCFs (e.g., hospices and skilled nursing facilities) (215,216). Transmission of M. tuberculosis has occurred in LTCFs (217-220), and pulmonary TB disease has been documented in HIV-infected patients and other immunocompromised persons residing in hospices (218,221,222). New employees and residents in these settings should receive a symptom screen and possibly a test for M. tuberculosis infection (see TB Risk Assessment Worksheet).
LTCFs must have adequate administrative and environmental controls, including airborne precautions capabilities and a respiratory-protection program, if they accept patients with suspected or confirmed infectious TB disease. The setting should have 1) a written protocol for the early identification of patients with symptoms or signs of TB disease and 2) procedures for referring these patients to a setting where they can be evaluated and managed. Patients with suspected or confirmed infectious TB disease should not stay in LTCFs unless adequate administrative and environmental controls and a respiratory-protection program are in place. Persons with TB disease who are determined to be noninfectious can remain in the LTCF and do not need to be in an AII room.
# Training and Educating HCWs
HCW training and education regarding infection with M. tuberculosis and TB disease is an essential part of administrative controls in a TB surveillance or infection-control program. Training physicians and nurse managers is especially essential because of the leadership role they frequently fulfill in infection control. HCW training and education can increase adherence to TB infection-control measures. Training and education should emphasize the increased risks posed by an undiagnosed person with TB disease in a health-care setting and the specific measures to reduce this risk. HCWs receive various types of training; therefore, combining training for TB infection control with other related trainings might be preferable.
# Initial TB Training and Education
The setting should document that all HCWs, including physicians, have received initial TB training relevant to their work setting and additional occupation-specific education. The level and detail of baseline training will vary according to the responsibilities of the HCW and the risk classification of the setting.
Educational materials on TB training are available from various sources at no cost in printed copy and on videotape (223). Physicians, trainees, students, and other HCWs who work in a health-care setting but do not receive payment from that setting should receive baseline training in TB infection-control policies and practices, the TB screening program, and procedures for reporting an M. tuberculosis infection test conversion or diagnosis of TB disease. Initial TB training should be provided before the HCW starts working.
# Follow-Up TB Training and Education
All settings should conduct an annual evaluation of the need for follow-up training and education for HCWs based on the number of untrained and new HCWs, changes in the organization and services of the setting, and availability of new TB infection-control information.
If a potential or known exposure to M. tuberculosis occurs in the setting, prevention and control measures should include retraining HCWs in the infection-control procedures established to prevent the recurrence of exposure. If a potential or known exposure results in a newly recognized positive TST or BAMT result, test conversion, or diagnosis of TB disease, education should include information on 1) transmission of M. tuberculosis, 2) noninfectiousness of HCWs with LTBI, and 3) potential infectiousness of HCWs with TB disease.
OSHA requires annual respiratory-protection training for HCWs who use respiratory devices (see Respiratory Protection). HCWs in settings with a classification of potential ongoing transmission should receive additional training and education on 1) symptoms and signs of TB disease, 2) M. tuberculosis transmission, 3) infection-control policies, 4) importance of TB screening for HCWs, and 5) responsibilities of employers and employees regarding M. tuberculosis infection test conversion and diagnosis of TB disease.
# TB Infection-Control Surveillance
# HCW Screening Programs for TB Support Surveillance and Clinical Care
TB screening programs provide critical information for caring for individual HCWs and information that facilitates detection of M. tuberculosis transmission. The screening program consists of four major components: 1) baseline testing for M. tuberculosis infection, 2) serial testing for M. tuberculosis infection, 3) serial screening for symptoms or signs of TB disease, and 4) TB training and education.
Surveillance data from HCWs can protect both HCWs and patients. Screening can prevent future transmission by identifying lapses in infection control and expediting treatment for persons with LTBI or TB disease. Tests to screen for M. tuberculosis infection should be administered, interpreted, and recorded according to procedures in this report (see Supplement, Diagnostic Procedures for LTBI and TB Disease). Protection of privacy and maintenance of confidentiality of HCW test results should be ensured. Methods to screen for infection with M. tuberculosis are available (30,31,39).
# Baseline Testing for M. tuberculosis Infection
Baseline testing for M. tuberculosis infection is recommended for all newly hired HCWs, regardless of the risk classification of the setting, and can be conducted with the TST or BAMT. Baseline testing is also recommended for persons who will receive serial TB screening (e.g., residents or staff of correctional facilities or LTCFs) (39,224). Certain settings, with the support of the infection-control committee, might choose not to perform baseline or serial TB screening for HCWs who will never be in contact with or have shared air space with patients who have TB disease (e.g., telephone operators who work in a separate building from patients) or who will never be in contact with clinical specimens that might contain M. tuberculosis.
Baseline test results 1) provide a basis for comparison in the event of a potential or known exposure to M. tuberculosis and 2) facilitate the detection and treatment of LTBI or TB disease in an HCW before employment begins, reducing the risk to patients and other HCWs. If TST is used for baseline testing, two-step testing is recommended for HCWs whose initial TST results are negative (39,224). If the first-step TST result is negative, the second-step TST should be administered 1-3 weeks after the first TST result was read. If either 1) the baseline first-step TST result is positive or 2) the first-step TST result is negative but the second-step TST result is positive, TB disease should be excluded, and if it is excluded, the HCW should be evaluated for treatment of LTBI. If the first- and second-step TST results are both negative, the person is classified as not infected with M. tuberculosis.
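The baseline two-step sequence above is, in effect, a small decision procedure. The following sketch encodes it for illustration only; the function name and return strings are ours, and clinical judgment together with the cut points in the Supplement determines what counts as a positive result.

```python
from typing import Optional

def two_step_tst_outcome(first_step_positive: bool,
                         second_step_positive: Optional[bool]) -> str:
    """Classify a baseline two-step TST. Pass second_step_positive=None
    when the second step has not yet been administered."""
    if first_step_positive:
        return "exclude TB disease; if excluded, evaluate for LTBI treatment"
    if second_step_positive is None:
        return "administer second-step TST 1-3 weeks after the first result was read"
    if second_step_positive:
        # Likely boosting of a remote infection rather than recent transmission.
        return "exclude TB disease; if excluded, evaluate for LTBI treatment"
    return "classified as not infected with M. tuberculosis"

print(two_step_tst_outcome(False, None))   # second step still needed
print(two_step_tst_outcome(False, False))  # not infected
```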
If the second test result of a two-step TST is not read within 48-72 hours, administer a TST as soon as possible (even if several months have elapsed) and ensure that the result is read within 48-72 hours (39). Certain studies indicate that positive TST reactions might still be measurable from 4-7 days after testing (225,226). However, if a patient fails to return within 72 hours and has a negative test result, the TST should be repeated (42).
A positive result to the second step of a baseline two-step TST is probably caused by boosting as opposed to recent infection with M. tuberculosis. These responses might result from remote infections with M. tuberculosis, infection with an NTM (also known as MOTT), or previous BCG vaccination. Two-step testing will minimize the possibility that boosting will lead to an unwarranted suspicion of transmission of M. tuberculosis with subsequent testing. A second TST is not needed if the HCW has a documented TST result from any time during the previous 12 months (see Baseline Testing for M. tuberculosis Infection After TST Within the Previous 12 Months).
TST reactivity caused by BCG vaccination typically wanes after 5 years. Therefore, HCWs with previous BCG vaccination will frequently have a negative TST result (74,227-232). Because HCWs with a history of BCG vaccination are frequently from countries with a high prevalence of TB, positive test results for M. tuberculosis infection in HCWs with previous BCG vaccination should be interpreted as representing infection with M. tuberculosis (74,227-233). Although BCG reduces the occurrence of severe forms of TB disease in children and overall might reduce the risk for progression from LTBI to TB disease (234,235), BCG is not thought to prevent M. tuberculosis infection (236). Test results for M. tuberculosis infection for HCWs with a history of BCG vaccination should be interpreted by using the same diagnostic cut points used for HCWs without a history of BCG vaccination.
BAMT does not require two-step testing and is more specific than skin testing. A BAMT that uses M. tuberculosis-specific antigens (e.g., QFT-G) is not expected to produce false-positive results in persons vaccinated with BCG. Baseline test results should be documented, preferably within 10 days of HCWs starting employment.
# Baseline Testing for M. tuberculosis Infection After TST Within the Previous 12 Months
A second TST is not needed if the HCW has a documented TST result from any time during the previous 12 months. If a newly employed HCW has had a documented negative TST result within the previous 12 months, a single TST can be administered in the new setting (Box 1). This additional TST represents the second stage of two-step testing. The second test decreases the possibility that boosting on later testing will lead to incorrect suspicion of transmission of M. tuberculosis in the setting.
A recent TST (performed in ≤12 months) is not a contraindication to a subsequent TST unless the test was associated with severe ulceration or anaphylactic shock, which are rare adverse events (30,237-239). Multiple TSTs are safe and do not increase the risk for a false-positive result or a TST conversion in persons without infection with mycobacteria (39).
# Baseline Documentation of a History of TB Disease, a Previously Positive Test Result for M. tuberculosis Infection, or Completion of Treatment for LTBI or TB Disease
Additional tests for M. tuberculosis infection do not need to be performed for HCWs with a documented history of TB disease, documented previously positive test result for M. tuberculosis infection, or documented completion of treatment for LTBI or TB disease. Documentation of a previously positive test result for M. tuberculosis infection can be substituted for a baseline test result if the documentation includes a recorded TST result in millimeters (or BAMT result), including the concentration of cytokine measured (e.g., IFN-γ). All other HCWs should undergo baseline testing for M. tuberculosis infection to ensure that the test result on record in the setting has been performed and measured by using the recommended diagnostic procedures (see Supplement, Diagnostic Procedures for LTBI and TB Disease).
A recent TST (performed in ≤12 months) is not a contraindication to the administration of an additional test unless the TST was associated with severe ulceration or anaphylactic shock, which are rare adverse events (30,237,238). However, the recent test might complicate interpretation of subsequent test results because of the possibility of boosting.

BOX 1. Indications for two-step tuberculin skin tests (TSTs). [The box tabulating situations and recommended testing is not reproduced in this conversion; consult the original document.]
# Serial Follow-Up of TB Screening and Testing for M. tuberculosis Infection
The need for serial follow-up screening for groups of HCWs with negative test results for M. tuberculosis infection is an institutional decision that is based on the setting's risk classification. This decision and changes over time based on updated risk assessments should be official and documented. If a serial follow-up screening program is required, the risk assessment for the setting (Appendix B) will determine which HCWs should be included in the program and the frequency of screening. Two-step TST should not be used for follow-up testing.
If possible, stagger follow-up screening (rather than testing all HCWs at the same time each year) so that all HCWs who work in the same area or profession are not tested in the same month. Staggered screening of HCWs (e.g., on the anniversary of their employment or on their birthdays) increases opportunities for early recognition of infection-control problems that can lead to conversions in test results for M. tuberculosis infection; a simple scheduling sketch follows. Periodic aggregate analysis of TB screening data is important for detecting problems.
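One simple way to operationalize staggered screening is to cohort HCWs by hire-anniversary month, which spreads testing across the year and yields monthly groupings for the periodic aggregate analysis recommended above. A minimal sketch; the data structure and field names are hypothetical.

```python
from collections import defaultdict

def screening_cohorts_by_anniversary(hcws):
    """Group HCWs into monthly screening cohorts by hire-anniversary
    month (1-12), rather than testing everyone in the same month."""
    cohorts = defaultdict(list)
    for name, hire_month in hcws:
        cohorts[hire_month].append(name)
    return dict(cohorts)

staff = [("A. Rivera", 3), ("B. Chen", 3), ("C. Okafor", 11)]
print(screening_cohorts_by_anniversary(staff))
# {3: ['A. Rivera', 'B. Chen'], 11: ['C. Okafor']}
```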
# HCWs with a Newly Recognized Positive Test Result for M. tuberculosis Infection or Symptoms or Signs of TB Disease
# Clinical Evaluation
Any HCW with a newly recognized positive test result for M. tuberculosis infection, test conversion, or symptoms or signs of TB disease should be promptly evaluated. The evaluation should be arranged with employee health, the local or state health department, or a personal physician. Any physicians who evaluate HCWs with suspected TB disease should be familiar with current diagnostic and therapeutic guidelines for LTBI and TB disease (31,39).
The definitions for positive test results for M. tuberculosis infection and test conversion in HCWs are included in this report (see Supplement, Diagnostic Procedures for LTBI and TB Disease). Symptoms of TB disease of the lung, pleura, airways, or larynx include coughing for ≥3 weeks, loss of appetite, unexplained weight loss, night sweats, bloody sputum or hemoptysis, hoarseness, fever, fatigue, or chest pain. The evaluation should include a clinical examination and symptom screen (a procedure used during a clinical evaluation in which patients are asked if they have experienced any symptoms or signs of TB disease), chest radiograph, and collection of sputum specimens.
If TB disease is diagnosed, begin antituberculosis treatment immediately, according to published guidelines (31). The diagnosing clinician (who might not be a physician with the institution's infection-control program) should notify the local or state health department in accordance with disease reporting laws, which generally specify a 24-hour time limit.
If TB disease is excluded, offer the HCW treatment for LTBI in accordance with published guidelines (see Supplements, Diagnostic Procedures for LTBI and TB Disease; and Treatment Procedures for LTBI and TB Disease). If the HCW has already completed treatment for LTBI and is part of a TB screening program, instead of participating in serial skin testing, the HCW should be monitored for symptoms of TB disease and should receive any available training, including information on the symptoms of TB disease and instructions to report any such symptoms immediately to occupational health. In addition, annual symptom screens should be performed, which can be administered as part of other HCW screening and education efforts. Treatment for LTBI should be offered to HCWs who are eligible (39).
HCWs with a previously negative test result who have an increase of ≥10 mm induration when examined on follow-up testing probably have acquired M. tuberculosis infection and should be evaluated for TB disease. When disease is excluded, HCWs should be treated for LTBI unless medically contraindicated (39,240).
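As a worked example, the conversion criterion reduces to comparing the follow-up induration with the prior negative result. A hypothetical helper; establishing that the prior result was truly negative depends on the applicable cut point (see Supplement, Diagnostic Procedures for LTBI and TB Disease).

```python
def tst_conversion(previous_mm: int, current_mm: int) -> bool:
    """Conversion as described above: an increase of >=10 mm induration
    on follow-up testing in an HCW with a previously negative result."""
    return (current_mm - previous_mm) >= 10

print(tst_conversion(0, 12))  # True -> evaluate for TB disease
print(tst_conversion(4, 9))   # False
```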
# Chest Radiography
HCWs with a baseline positive or newly positive TST or BAMT result should receive one chest radiograph to exclude a diagnosis of TB disease (or an interpretable copy within a reasonable time frame, such as 6 months). After this baseline chest radiograph is performed and the result is documented, repeat radiographs are not needed unless symptoms or signs of TB disease develop or a clinician recommends a repeat chest radiograph (39,116). Instead of participating in serial testing for M. tuberculosis infection, HCWs with a positive test result for M. tuberculosis infection should receive a symptom screen. The frequency of this symptom screen should be determined by the risk classification for the setting.
Serial follow-up chest radiographs are not recommended for HCWs with documentation of a previously positive test result for M. tuberculosis infection, treatment for LTBI or TB disease, or for asymptomatic HCWs with negative test results for M. tuberculosis infection. HCWs who have a previously positive test result for M. tuberculosis infection and who change jobs should carry documentation of a baseline chest radiograph result (and the positive test result for M. tuberculosis infection) to their new employers.
# Workplace Restrictions
HCWs with a baseline positive or newly positive test result for M. tuberculosis infection should receive one chest radiograph result to exclude TB disease (or an interpretable copy within a reasonable time frame, such as 6 months).
HCWs with confirmed infectious pulmonary, laryngeal, endobronchial, or tracheal TB disease, or a draining TB skin lesion pose a risk to patients, HCWs, and others. Such HCWs should be excluded from the workplace and should be allowed to return to work when the following criteria have been met: 1) three consecutive sputum samples collected at 8-24-hour intervals are negative for AFB, with at least one sample from an early morning specimen (because respiratory secretions pool overnight) (109-112); 2) the person has responded to antituberculosis treatment that will probably be effective (can be based on susceptibility results); and 3) the person is determined to be noninfectious by a physician knowledgeable and experienced in managing TB disease (see Supplements, Estimating the Infectiousness of a TB Patient; Diagnostic Procedures for LTBI and TB Disease; and Treatment Procedures for LTBI and TB Disease).
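Because all three return-to-work criteria must be satisfied together, a record system could encode them as a single conjunction. A minimal sketch under stated assumptions; the parameter names are ours, and specimen timing and susceptibility interpretation remain clinical record-keeping tasks outside this check.

```python
def may_return_to_work(consecutive_negative_smears: int,
                       responding_to_likely_effective_treatment: bool,
                       physician_judged_noninfectious: bool) -> bool:
    """All three criteria above must be met: >=3 consecutive negative
    sputum samples collected at 8-24-hour intervals (at least one early
    morning specimen), response to probably effective treatment, and a
    determination of noninfectiousness by an experienced physician."""
    return (consecutive_negative_smears >= 3
            and responding_to_likely_effective_treatment
            and physician_judged_noninfectious)

print(may_return_to_work(3, True, True))  # True
print(may_return_to_work(2, True, True))  # False
```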
HCWs with extrapulmonary TB disease usually do not need to be excluded from the workplace as long as no involvement of the respiratory tract has occurred. They can be confirmed as noninfectious and can continue to work if documented evidence is available that indicates that concurrent pulmonary TB disease has been excluded.
HCWs receiving treatment for LTBI can return to work immediately. HCWs with LTBI who cannot take or do not accept or complete a full course of treatment for LTBI should not be excluded from the workplace. They should be counseled regarding the risk for developing TB disease and instructed to report any TB symptoms immediately to the occupational health unit.
HCWs who have a documented positive TST or BAMT result and who leave employment should be counseled again, if possible, regarding the risk for developing TB disease and instructed to seek prompt evaluation with the local health department or their primary care physician if symptoms of TB disease develop. Consider mailing letters to former HCWs who have LTBI. This information should be recorded in the HCWs' employee health record when they leave employment.
Asymptomatic HCWs with a baseline positive or newly positive TST or BAMT result do not need to be excluded from the workplace. Treatment for LTBI should be considered in accordance with CDC guidelines (39).
# Identification of Source Cases and Recording of Drug-Susceptibility Patterns
If an HCW experiences a conversion in a test result for M. tuberculosis infection, evaluate the HCW for a history of suspected or known exposure to M. tuberculosis to determine the potential source. When the source case is identified, also identify the drug susceptibility pattern of the M. tuberculosis isolate from the source. The drug-susceptibility pattern should be recorded in the HCW's medical or employee health record to guide the treatment of LTBI or TB disease, if indicated.
# HCWs with Medical Conditions Associated with Increased Risk for Progression to TB Disease
In settings in which HCWs are severely immunocompromised, additional precautions must be taken. HIV infection is the greatest risk factor for progression from LTBI to TB disease (22,39,42,49). Other immunocompromising conditions, including diabetes mellitus, certain cancers, and certain drug treatments, also increase the risk for rapid progression from LTBI to TB disease. TB disease can also adversely affect the clinical course of HIV infection and acquired immunodeficiency syndrome (AIDS) and can complicate HIV treatment (31,39,53).
Serial TB screening beyond that indicated by the risk classification for the setting is not indicated for persons with the majority of medical conditions that suppress the immune system or otherwise increase the risk for infection with M. tuberculosis progressing to TB disease (58). However, consideration should be given to repeating the TST for HIV-infected persons whose initial TST result was negative and whose immune function has improved in response to highly active antiretroviral therapy (HAART) (i.e., those whose CD4+ T-lymphocyte count has increased to >200 cells/µL).
All HCWs should, however, be encouraged during their initial TB training to determine if they have such a medical condition and should be aware that receiving medical treatment can improve cell-mediated immunity. HCWs should be informed about the availability of counseling, testing, and referral for HIV (50,51). In addition, HCWs should know whether they are immunocompromised, and they should be aware of the risks from exposure to M. tuberculosis (1). In certain cases, reassignment to areas in which exposure is minimized or nonexistent might be medically advisable or desirable.
Immunocompromised HCWs should have the option of an assignment in an area or activity where the risk for exposure to M. tuberculosis is low. This choice is a personal decision for the immunocompromised HCW (241). Health-care settings should provide education and follow infection-control recommendations (70).
Information provided by HCWs regarding their immune status and request for voluntary work assignments should be treated confidentially, according to written procedures on the confidential handling of such information. All HCWs should be made aware of these procedures at the time of employment and during initial TB training and education.
# Problem Evaluation
Contact investigations might be initiated in response to 1) conversions in test results in HCWs for M. tuberculosis infection, 2) diagnosis of TB disease in an HCW, 3) suspected person-to-person transmission of M. tuberculosis, 4) lapses in TB infection-control practices that expose HCWs and patients to M. tuberculosis, or 5) possible TB outbreaks identified using automated laboratory systems (242). In these situations, the objectives of a contact investigation might be to 1) determine the likelihood that transmission of M. tuberculosis has occurred; 2) determine the extent of M. tuberculosis transmission; 3) identify persons who were exposed, and, if possible, the sources of potential transmission; 4) identify factors that could have contributed to transmission, including failure of environmental infection-control measures, failure to follow infection-control procedures, or inadequacy of current measures or procedures; 5) implement recommended interventions; 6) evaluate the effectiveness of the interventions; and 7) ensure that exposure to M. tuberculosis has been terminated and that the conditions leading to exposure have been eliminated.
Earlier recognition of a setting in which M. tuberculosis transmission has occurred could be facilitated through innovative approaches to TB contact investigations (e.g., network analysis and genetic typing of isolates). Network analysis makes use of information (e.g., shared locations within a setting) that might not be collected in traditional TB contact investigations (45). This type of information might be useful during contact investigations involving hospitals or correctional settings to identify any shared wards, hospital rooms, or cells. Genotyping of isolates is universally available in the United States and is a useful adjunct in the investigation of M. tuberculosis transmission (44,89,243,244). Because the situations prompting an investigation are likely to vary, investigations should be tailored to the individual circumstances. Recommendations provide general guidance for conducting contact investigations (34,115).
# General Recommendations for Investigating Conversions in Test Results for M. tuberculosis Infection in HCWs
A test conversion might need to be reported to the health department, depending on state and local regulations. Problem evaluation during contact investigations should be accomplished through cooperation between infection-control personnel, occupational health, and the local or state TB-control program. If a test conversion in an HCW is detected as a result of serial screening and the source is not apparent, conduct a source case investigation to determine the probable source and the likelihood that transmission occurred in the health-care setting (115).
Lapses in TB infection control that might have contributed to the transmission of M. tuberculosis should be corrected. Test conversions and TB disease among HCWs should be recorded and reported, according to OSHA requirements (www.osha.gov/recordkeeping). Consult Recording and Reporting Occupational Injuries and Illness (OSHA standard, 29 Code of Federal Regulations 1904) to determine recording and reporting requirements (245).
# Investigating Conversions in Test Results for M. tuberculosis Infection in HCWs: Probable Source Outside the Health-Care Setting
If a test conversion in an HCW is detected and exposure outside the health-care setting has been documented by the corresponding local or state health department, terminate the investigation within the health-care setting.
# Investigating Conversions in Test Results for M. tuberculosis Infection in HCWs: Known Source in the Health-Care Setting
An investigation of a test conversion should be performed in collaboration with the local or state health department. If a conversion in an HCW is detected and the HCW's history does not document exposure outside the health-care setting but does identify a probable source in the setting, the following steps should be taken: 1) identify and evaluate close contacts of the suspected source case, including other patients and visitors; 2) determine possible reasons for the exposure; 3) implement interventions to correct the lapse(s) in infection control; and 4) immediately screen HCWs and patients if they were close contacts to the source case. For exposed HCWs and patients in a setting that has chosen to screen for infection with M. tuberculosis by using the TST, the following steps should be taken:
- administer a symptom screen;
- administer a TST to those who had previously negative TST results (baseline two-step TST should not be performed in contact investigations);
- repeat the TST and symptom screen 8-10 weeks after the end of exposure, if the initial TST result is negative (33);
- administer a symptom screen, if the baseline TST result is positive;
- promptly evaluate the exposed person for TB disease (including a chest radiograph), if the symptom screen or the initial or 8-10-week follow-up TST result is positive; and
- conduct additional medical and diagnostic evaluation (which includes a judgment about the extent of exposure) for LTBI, if TB disease is excluded.

If no additional conversions in the test results for M. tuberculosis infection are detected in the follow-up testing, terminate the investigation. If additional conversions are detected in the follow-up testing, transmission might still be occurring, and additional actions are needed: 1) implement a classification of potential ongoing transmission for the specific setting or group of HCWs; 2) promptly report the initial cluster of test conversions to the local or state health department; 3) reassess possible reasons for exposure and transmission; and 4) evaluate the degree of adherence to the interventions implemented.
Testing for M. tuberculosis infection should be repeated 8-10 weeks after the end of exposure for HCW contacts who previously had negative test results, and the circle of contacts should be expanded to include other persons who might have been exposed. If no additional TST conversions are detected on the second round of follow-up testing, terminate the investigation. If additional TST conversions are detected on the second round of follow-up testing, maintain a classification of potential ongoing transmission and consult the local or state health department or other persons with expertise in TB infection control for assistance.
The classification of potential ongoing transmission should be used as a temporary classification only. This classification warrants immediate investigation and corrective steps. After a determination has been made that ongoing transmission has ceased, the setting should be reclassified as medium risk. Maintaining the classification of medium risk for at least 1 year is recommended.
# Investigating a Conversion of a Test Result for M. tuberculosis Infection in an HCW with an Unknown Exposure
If a test conversion in an HCW is detected and the HCW's history does not document exposure outside the health-care setting and does not identify a probable source of exposure in the setting, additional investigation to identify a probable source in the health-care setting is warranted.
If no source case is identified, estimate the interval during which the HCW might have been infected. The interval usually extends from 8-10 weeks before the most recent negative test result to 2 weeks before the first positive test result. Laboratory and infection-control records should be reviewed to identify all patients (and any HCWs) who have had suspected or confirmed infectious TB disease and who might have transmitted M. tuberculosis to the HCW. If the investigation identifies a probable source, identify and evaluate contacts of the suspected source. Close contacts should be the highest priority for screening.
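Where screening records are maintained electronically, this window can be computed directly from the documented test dates. The following sketch is a minimal illustration of that arithmetic; the function name and the choice of the conservative 10-week bound are illustrative assumptions, not prescribed by this report.

```python
from datetime import date, timedelta

def probable_infection_window(last_negative: date, first_positive: date):
    """Estimate the interval during which an HCW with a test conversion
    might have been infected: roughly 8-10 weeks before the most recent
    negative result through 2 weeks before the first positive result."""
    start = last_negative - timedelta(weeks=10)  # conservative 10-week bound
    end = first_positive - timedelta(weeks=2)
    return start, end

# Example: last negative test on 2005-01-10, first positive on 2005-06-20.
start, end = probable_infection_window(date(2005, 1, 10), date(2005, 6, 20))
print(start, end)  # review records for infectious TB cases in this interval
```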
The following steps should be taken in a setting that uses TST or BAMT to screen for M. tuberculosis infection: 1) administer a symptom screen and the test routinely used in the setting (i.e., TST or BAMT) to persons who previously had negative results; 2) if the initial result is negative, the test and symptom screen should be repeated 8-10 weeks after the end of exposure; 3) if the symptom screen, the first test result, or the 8-10-week follow-up test result is positive, the presumed exposed person should be promptly evaluated for TB disease, including the use of a chest radiograph; and 4) if TB disease is excluded, additional medical and diagnostic evaluation for LTBI is needed, which includes a judgment regarding the extent of exposure (see Investigating Conversions in Test Results for M. tuberculosis Infection in HCWs: Known Source in the Health-Care Setting).
# Investigations That Do Not Identify a Probable Source
If serial TB screening is performed in the setting, review the results of screening of other HCWs in the same area of the health-care setting or same occupational group. If serial TB screening is not performed in the setting or if insufficient numbers of recent results are available, conduct additional TB screening of other HCWs in the same area or occupational group. If the review and screening yield no additional test conversions, and no evidence to indicate health-care-associated transmission exists, then the investigation should be terminated.
Whether HCW test conversions resulted from exposure in the setting or elsewhere, or whether true infection with M. tuberculosis has even occurred, might remain uncertain. However, the absence of other data implicating health-care-associated transmission suggests that the conversion could have resulted from 1) unrecognized exposure to M. tuberculosis outside the health-care setting; 2) cross-reactivity with another antigen (e.g., BCG or nontuberculous mycobacteria); or 3) errors in applying, reading, or interpreting the test result for M. tuberculosis infection. If the review and screening identify additional test conversions, health-care-associated transmission is more probable.
Evaluation of the patient identification process, TB infection-control policies and practices, and environmental controls to identify lapses that could have led to exposure and transmission should be conducted. If no problems are identified, a classification of potential ongoing transmission should be applied, and the local or state health department or other persons with expertise in TB infection control should be consulted for assistance. If problems are identified, implement recommended interventions and repeat testing for M. tuberculosis infection 8-10 weeks after the end of exposure for HCWs with negative test results. If no additional test conversions are detected in the follow-up testing, terminate the investigation.
# Conversions in Test Results for M. tuberculosis Infection Detected in Follow-Up Testing
If conversions are detected in follow-up testing, a classification of potential ongoing transmission should be maintained. Possible reasons for exposure and transmission should be reassessed, and the appropriateness of and degree of adherence to the interventions implemented should be evaluated. For HCWs with negative test results, repeat testing for M. tuberculosis infection 8-10 weeks after the end of exposure. The local or state health department or other persons with expertise in TB infection control should be consulted.
If no additional conversions are detected during the second round of follow-up testing, terminate the investigation. If additional conversions are detected, continue a classification of potential ongoing transmission and consult the local or state health department or other persons with expertise in TB infection control.
The classification of potential ongoing transmission should be used as a temporary classification only. This classification warrants immediate investigation and corrective steps. After a determination that ongoing transmission has ceased, the setting should be reclassified as medium risk. Maintaining the classification of medium risk for at least 1 year is recommended.
# Investigating a Case of TB Disease in an HCW
Occupational health services and other physicians in the setting should have procedures for immediately notifying the local administrators or infection-control personnel if an HCW is diagnosed with TB disease so that a problem evaluation can be initiated. If an HCW is diagnosed with TB disease and does not have a previously documented positive test result for M. tuberculosis infection, conduct an investigation to identify the probable sources and circumstances for transmission (see General Recommendations for Investigating Conversions in Test Results for M. tuberculosis Infection in HCWs). If an HCW is diagnosed with TB disease, regardless of previous test result status, an additional investigation must be conducted to ascertain whether the disease was transmitted from this HCW to others, including other HCWs, patients, and visitors.
Whether the HCW was potentially infectious and, if so, the probable period of infectiousness (see Contact Investigations) should be determined. For HCWs with suspected or confirmed infectious TB disease, conduct an investigation that includes 1) identification of contacts (e.g., other HCWs, patients, and visitors), 2) evaluation of contacts for LTBI and TB disease, and 3) notification of the local or state health department for consultation and investigation of community contacts who were exposed outside the health-care setting.
M. tuberculosis genotyping should be performed so that the results are promptly available. Genotyping results are useful adjuncts to epidemiologically based public health investigations of contacts and possible source cases (especially in determining the role of laboratory contamination) (89,166,243,246-261). When confidentiality laws prevent the local or state health department from communicating information regarding a patient's identity, health department staff should work with hospital staff, legal counsel, and the HCW to determine how the hospital can be notified without breaching confidentiality.
# Investigating Possible Patient-to-Patient Transmission of M. tuberculosis
Information concerning TB cases among patients in the setting should be routinely recorded for risk classification and risk assessment purposes. Documented information by location and date should include results of sputum smear and culture, chest radiograph, drug-susceptibility testing, and adequacy of infection-control measures.
Each time a patient with suspected or confirmed TB disease is encountered in a health-care setting, an assessment of the situation should be made and the focus should be on 1) a determination of infectiousness of the patient, 2) confirmation of compliance with local public health reporting requirements (including the prompt reporting of a person with suspected TB disease as required), and 3) assessment of the adequacy of infection control.
A contact investigation should be initiated in situations where infection control is inadequate and the patient is infectious. Patients with positive AFB sputum smear results are more infectious than patients with negative AFB sputum smear results, but the possibility exists that patients with negative sputum smear results might be infectious (262). Patients with negative AFB sputum smear results who undergo aerosol-generating or aerosol-producing procedures (including bronchoscopy) without adequate infection-control measures create a potential for exposure. All investigations should be conducted in consultation with the local public health department.
If serial surveillance of these cases reveals one of the following conditions, patient-to-patient transmission might have occurred, and a contact investigation should be initiated:
- A high proportion of patients with TB disease were admitted to or examined in the setting during the year preceding onset of their TB disease, especially when TB disease is identified in patients who were otherwise unlikely to be exposed to M. tuberculosis.
- An increase occurred in the number of TB patients diagnosed with drug-resistant TB, compared with the previous year.
- Isolates from multiple patients had identical and characteristic drug-susceptibility or DNA fingerprint patterns.
# Surveillance of TB Cases in Patients Indicates Possible Patient-to-Patient Transmission of M. tuberculosis
Health-care settings should collaborate with the local or state health department to conduct an investigation. For settings in which HCWs are serially tested for M. tuberculosis infection, review HCW records to determine whether an increase in the number of conversions in test results for M. tuberculosis infection has occurred. Patient surveillance data and medical records should be reviewed for additional cases of TB disease. Settings should look for possible exposures from previous or current admissions that might have exposed patients with newly diagnosed TB disease to other patients with TB disease, and should determine whether the patients were admitted to the same room or area, or whether they received the same procedure or visited the same treatment area on the same day.
If the investigation suggests that transmission has occurred, possible causes of transmission of M. tuberculosis (e.g., delayed diagnosis of TB disease, institutional barriers to implementing timely and correct airborne precautions, and inadequate environmental controls) should be evaluated. Possible exposure to other patients or HCWs should be determined, and if exposure has occurred, these persons should be evaluated for LTBI and TB disease (i.e., test for M. tuberculosis infection and administer a symptom screen).
If the local or state health department was not previously contacted, settings should notify the health department so that a community contact investigation can be initiated, if necessary. The possibility of laboratory errors in diagnosis or the contamination of bronchoscopes (82,169) or other equipment should be considered (136).
# Contact Investigations
The primary goal of contact investigations is to identify secondary cases of TB disease and LTBI among contacts so that therapy can be initiated as needed (263-265). Contact investigations should be collaboratively conducted by both infection-control personnel and local TB-control program personnel.
# Initiating a Contact Investigation
A contact investigation should be initiated when 1) a person with TB disease has been examined at a health-care setting, and TB disease was not diagnosed and reported quickly, resulting in failure to apply recommended TB infection controls; 2) environmental controls or other infection-control measures have malfunctioned while a person with TB disease was in the setting; or 3) an HCW develops TB disease and exposes other persons in the setting.
As soon as TB disease is diagnosed or a problem is recognized, standard public health practice should be implemented to prioritize the identification of other patients, HCWs, and visitors who might have been exposed to the index case before TB infection-control measures were correctly applied (52). Visitors of these patients might also be contacts or the source case.
The following activities should be implemented in collaboration with or by the local or state health department (34,266): 1) interview the index case and all persons who might have been exposed; 2) review the medical records of the index case; 3) determine the exposure sites (i.e., where the index case lived, worked, visited, or was hospitalized before being placed under airborne precautions); and 4) determine the infectious period of the index case, which is the period during which a person with TB disease is considered contagious and most capable of transmitting M. tuberculosis to others.
For programmatic purposes, for patients with positive AFB sputum smear results, the infectious period can be considered to begin 3 months before the collection date of the first positive AFB sputum smear result or the symptom onset date (whichever is earlier). The end of the infectious period is the date the patient is placed under airborne precautions or the date of collection of the first of consistently negative AFB sputum smear results (whichever is earlier). For patients with negative AFB sputum smear results, the infectious period can begin 1 month before the symptom onset date and end when the patient is placed under airborne precautions.
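For record review, these start and end rules can be encoded compactly. The sketch below is one possible encoding of the preceding paragraph, assuming a 3-month offset of 91 days; argument names are illustrative, and clinical judgment governs actual determinations.

```python
from datetime import date, timedelta
from typing import Optional

def infectious_period_start(smear_positive: bool, symptom_onset: date,
                            first_positive_smear: Optional[date]) -> date:
    """Start of the infectious period under the programmatic rules:
    smear-positive: 3 months (~91 days here, an assumption) before the
    first positive AFB smear collection date or symptom onset, whichever
    is earlier; smear-negative: 1 month before symptom onset."""
    if smear_positive and first_positive_smear is not None:
        return min(first_positive_smear - timedelta(days=91), symptom_onset)
    return symptom_onset - timedelta(days=30)

def infectious_period_end(airborne_precautions: date,
                          first_consistent_negative: Optional[date]) -> date:
    """End of the infectious period: date of airborne precautions or the
    first of consistently negative AFB smears, whichever is earlier."""
    if first_consistent_negative is None:
        return airborne_precautions
    return min(airborne_precautions, first_consistent_negative)
```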
For each contact, the exposure period (i.e., the time during which the contact shared air space with the person with TB disease) should be determined, as well as whether transmission occurred from the index patient to persons with whom the index patient had intense contact. In addition, the following should be determined: 1) intensity of the exposure based on proximity, 2) overlap with the infectious period of the index case, 3) duration of exposure, 4) presence or absence of infection-control measures, 5) infectiousness of the index case, 6) performance of procedures that could increase the risk for transmission during contact (e.g., sputum induction, bronchoscopy, and airway suction), and 7) the exposed cohort of contacts for TB screening.
The most intensely exposed HCWs and patients should be screened as soon as possible after exposure to M. tuberculosis has occurred and 8-10 weeks after the end of exposure if the initial TST result is negative. Close contacts should be the highest priority for screening.
For HCWs and patients who are presumed to have been exposed in a setting that screens for infection with M. tuberculosis using the TST, the following activities should be implemented:
- performing a symptom screen;
- administering a TST to those who previously had negative TST results;
- repeating the TST and symptom screen 8-10 weeks after the end of exposure, if the initial TST result is negative;
- promptly evaluating the HCW for TB disease, including performing a chest radiograph, if the symptom screen or the initial or 8-10-week follow-up TST result is positive; and
- providing additional medical and diagnostic evaluation for LTBI, including determining the extent of exposure, if TB disease is excluded.

For HCWs and patients who are presumed to have been exposed in a setting that screens for infection with M. tuberculosis using the BAMT, the corresponding activities should be implemented (see Supplement, Surveillance and Detection of M. tuberculosis Infections in Health-Care Settings). If the most intensely exposed persons have test conversions or positive test results for M. tuberculosis infection in the absence of a previous history of a positive test result or TB disease, expand the investigation to evaluate persons with whom the index patient had less contact. If the evaluation of the most intensely exposed contacts yields no evidence of transmission, expanding testing to others is not necessary.
Exposed persons with documented previously positive test results for M. tuberculosis infection do not require either repeat testing for M. tuberculosis infection or a chest radiograph (unless they are immunocompromised or otherwise at high risk for TB disease), but they should receive a symptom screen. If the person has symptoms of TB disease, 1) record the symptoms in the HCW's medical chart or employee health record, 2) perform a chest radiograph, 3) perform a full medical evaluation, and 4) obtain sputum samples for smear and culture, if indicated.
The setting should determine the reason(s) that a TB diagnosis or initiation of airborne precautions was delayed or procedures failed, which led to transmission of M. tuberculosis in the setting. Reasons and corrective actions taken should be recorded, including changes in policies, procedures, and TB training and education practices.
# Collaboration with the Local or State Health Department
For assistance with the planning and implementation of TB-control activities in the health-care setting and for names of experts to help with policies, procedures, and program evaluation, settings should coordinate with the local or state TB-control program. By law, the local or state health department must be notified when TB disease is suspected or confirmed in a patient or HCW so that follow-up can be arranged and a community contact investigation can be conducted. The local or state health department should be notified as early as possible before the patient is discharged to facilitate follow-up and continuation of therapy by DOT (31). For inpatient settings, coordinate a discharge plan with the patient (including a patient who is an HCW with TB disease) and the TB-control program of the local or state health department.
# Environmental Controls
Environmental controls are the second line of defense in the TB infection-control program, after administrative controls. Environmental controls include technologies for the removal or inactivation of airborne M. tuberculosis. These technologies include local exhaust ventilation, general ventilation, HEPA filtration, and UVGI. These controls help to prevent the spread and reduce the concentration of infectious droplet nuclei in the air. A summary of environmental controls and their use in prevention of transmission of M. tuberculosis is provided in this report (see Supplement, Environmental Controls), including detailed information concerning the application of environmental controls.
# Local Exhaust Ventilation
Local exhaust ventilation is a source-control technique used for capturing airborne contaminants (e.g., infectious droplet nuclei or other infectious particles) before they are dispersed into the general environment. In local exhaust ventilation methods, external hoods, enclosing booths, and tents are used. Local exhaust ventilation (e.g., enclosed, ventilated booth) should be used for cough-inducing and aerosol-generating procedures. When local exhaust is not feasible, perform cough-inducing and aerosol-generating procedures in a room that meets the requirements for an AII room.
# General Ventilation
General ventilation systems dilute and remove contaminated air and control airflow patterns in a room or setting. An engineer or other professional with expertise in ventilation should be included on the staff of the health-care setting, or a consultant with expertise in ventilation engineering specific to health-care settings should be hired. Ventilation systems should be designed to meet all applicable federal, state, and local requirements.
A single-pass ventilation system is the preferred choice in areas in which infectious airborne droplet nuclei might be present (e.g., AII rooms). Use HEPA filtration if recirculation of air is necessary.
AII rooms in health-care settings that existed before the 1994 guidelines were issued should have an airflow of ≥6 ACH. When feasible, the airflow should be increased to ≥12 ACH by 1) adjusting or modifying the ventilation system or 2) using air-cleaning methods (e.g., room-air recirculation units containing HEPA filters or UVGI systems that increase the equivalent ACH). New construction or renovation of health-care settings should be designed so that AII rooms achieve an airflow of ≥12 ACH. Ventilation rates for other areas in health-care settings should meet certain specifications (see Risk Classification Examples). If a variable air volume (VAV) ventilation system is used in an AII room, design the system to maintain the room under negative pressure at all times. The VAV system minimum set point must be adequate to maintain the recommended mechanical and outdoor ACH and a negative pressure ≥0.01 inch of water gauge compared with adjacent areas.
Based on the risk assessment for the setting, the required number of AII rooms, other negative-pressure rooms, and local exhaust devices should be determined. The location of these rooms and devices will depend partially on where recommended ventilation conditions can be achieved. Grouping AII rooms in one area might facilitate the care of patients with TB disease and the installation and maintenance of optimal environmental controls.
AII rooms should be checked for negative pressure by using smoke tubes or other visual checks before occupancy, and these rooms should be checked daily when occupied by a patient with suspected or confirmed TB disease. Design, construct, and maintain general ventilation systems so that air flows from clean to less clean (more contaminated) areas. In addition, design general ventilation systems to provide optimal airflow patterns within rooms and to prevent air stagnation or short-circuiting of air from the supply area to the exhaust area.
Health-care settings serving populations with a high prevalence of TB disease might need to improve the existing general ventilation system or use air-cleaning technologies in general-use areas (e.g., waiting rooms, EMS areas, and radiology suites).
Applicable approaches include 1) single-pass, nonrecirculating systems that exhaust air to the outside, 2) recirculation systems that pass air through HEPA filters before recirculating it to the general ventilation system, and 3) room-air recirculation units with HEPA filters and/or UVGI systems.
# Air-Cleaning Methods
# High-Efficiency Particulate Air (HEPA) Filters
HEPA filters can be used to filter infectious droplet nuclei from the air and must be used 1) when discharging air from local exhaust ventilation booths or enclosures directly into the surrounding room or area and 2) when discharging air from an AII room (or other negative-pressure room) into the general ventilation system (e.g., in settings in which the ventilation system or building configuration makes venting the exhaust to the outside impossible).
HEPA filters can be used to remove infectious droplet nuclei from air that is recirculated in a setting or exhausted directly to the outside. HEPA filters can also be used as a safety measure in exhaust ducts to remove droplet nuclei from air being discharged to the outside. Air can be recirculated through HEPA filters in areas in which 1) no general ventilation system is present, 2) an existing system is incapable of providing sufficient ACH, or 3) air-cleaning (particulate removal) without affecting the fresh-air supply or negative-pressure system is desired. Such uses can increase the number of equivalent ACH in the room or area.
Recirculation of HEPA filtered air can be achieved by exhausting air from the room into a duct, passing it through a HEPA filter installed in the duct, and returning it to the room or the general ventilation system. In addition, recirculation can be achieved by filtering air through HEPA recirculation systems installed on the wall or ceiling of the room or filtering air through portable room-air recirculation units.
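The contribution of such a unit is commonly expressed as equivalent ACH, computed from the unit's filtered airflow and the room volume. A minimal sketch follows, assuming the delivered air is fully HEPA-filtered and well mixed; the numbers in the example are illustrative, not recommendations.

```python
def equivalent_ach(unit_airflow_cfm: float, room_volume_ft3: float) -> float:
    """Equivalent air changes per hour contributed by a room-air
    recirculation unit: eACH = 60 * Q / V, with Q in cubic feet per
    minute and V in cubic feet."""
    return 60.0 * unit_airflow_cfm / room_volume_ft3

# Example: a 300-cfm unit in an 1,800-ft^3 room adds 10 equivalent ACH,
# which supplements the mechanical ACH from the general ventilation system.
print(equivalent_ach(300, 1800))  # -> 10.0
```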
To ensure adequate functioning, install HEPA filters carefully and maintain the filters according to the instructions of the manufacturer. Maintain written records of all prefilter and HEPA maintenance and monitoring (114). Manufacturers of room-air recirculation units should provide installation instructions and documentation of the filtration efficiency and of the overall efficiency of the unit (clean air delivery rate) in removing airborne particles from a space of a given size.
# UVGI
UVGI is an air-cleaning technology that can be used in a room or corridor to irradiate the air in the upper portion of the room (upper-air irradiation), installed in a duct to irradiate air passing through the duct (duct irradiation), or incorporated into room-air recirculation units. UVGI can be used in ducts that recirculate air back into the same room or in ducts that exhaust air directly to the outside. However, UVGI should not be used in place of HEPA filters when discharging air from isolation booths or enclosures directly into the surrounding room or area or when discharging air from an AII room into the general ventilation system. Effective use of UVGI ensures that M. tuberculosis, as contained in an infectious droplet nucleus, is exposed to a sufficient dose of ultraviolet-C (UV-C) radiation at 253.7 nanometers (nm) to result in inactivation. Because dose is a function of irradiance and time, the effectiveness of any application is determined by its ability to deliver sufficient irradiance for enough time to result in inactivation of the organism within the infectious droplet. Achieving a sufficient dose can be difficult for airborne inactivation because the exposure time can be substantially limited; therefore, attaining sufficient irradiance is essential.
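The dose relationship referred to above can be written explicitly. The example numbers below are illustrative only and are not a recommended germicidal dose.

```latex
% UV dose received by an airborne particle: irradiance integrated over
% the particle's residence time in the irradiated zone.
D = E \times t
% where D = dose (\mu J/cm^2), E = irradiance (\mu W/cm^2),
% and t = exposure time (s).
% Illustrative example: E = 50\ \mu W/cm^2 for t = 2\ s
% gives D = 100\ \mu J/cm^2.
```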
For each system, follow design guidelines to maximize UVGI effectiveness in equivalent ACH. Because air velocity, air mixing, relative humidity, UVGI intensity, and lamp position all affect the efficacy of UVGI systems, consult a UVGI system designer before purchasing and installing a UVGI system. Experts who might be consulted include industrial hygienists, engineers, and health physicists.
To function properly and minimize potential hazards to HCWs and other room occupants, upper-air UVGI systems should be properly installed, maintained, and labeled. A person knowledgeable in the use of ultraviolet (UV) radiometers or actinometers should monitor UV irradiance levels to ensure that exposures in the work area are within safe exposure levels. UV irradiance levels in the upper air, where air disinfection occurs, should also be monitored to determine that irradiance levels are within the desired effectiveness range.
UVGI tubes should be changed and cleaned according to the instructions of the manufacturer or when irradiance measurements indicate that output is reduced below effective levels. In settings that use UVGI systems, education of HCWs should include 1) basic principles of UVGI systems (mechanism and limitations), 2) potential hazardous effects of UVGI if overexposure occurs, 3) potential for photosensitivity associated with certain medical conditions or use of certain medications, and 4) the importance of maintenance procedures and record-keeping. In settings that use UVGI systems, patients and visitors should be informed of the purpose of UVGI systems and be warned about the potential hazards and safety precautions.
# Program Issues
Personnel from engineering, maintenance, safety and infection control, and environmental health should collaborate to ensure the optimal selection, installation, operation, and maintenance of environmental controls. A written maintenance plan should be developed that outlines the responsibility and authority for maintenance of the environmental controls and addresses HCW training needs. Standard operating procedures should include the notification of infection-control personnel before performing maintenance on ventilation systems servicing TB patient-care areas.
Personnel should schedule routine preventive maintenance for all components of the ventilation systems (e.g., fans, filters, ducts, supply diffusers, and exhaust grills) and air-cleaning devices. Quality control (QC) checks should be conducted to verify that environmental controls are operating as designed and that records are current. Provisions for emergency electrical power should be made so that the performance of essential environmental controls is not interrupted during a power failure.
# Respiratory Protection
The first two levels of the infection-control hierarchy, administrative and environmental controls, minimize the number of areas in which exposure to M. tuberculosis might occur. In addition, these administrative and environmental controls also reduce, but do not eliminate, the risk in the few areas in which exposures can still occur (e.g., AII rooms and rooms where cough-inducing or aerosol-generating procedures are performed). Because persons entering these areas might be exposed to airborne M. tuberculosis, the third level of the hierarchy is the use of respiratory protective equipment in situations that pose a high risk for exposure (see Supplement, Respiratory Protection).
On October 17, 1997, OSHA published a proposed standard for occupational exposure to M. tuberculosis (267); the rulemaking was later terminated, and respirator use for protection against M. tuberculosis is now regulated under the OSHA general industry standard for respiratory protection (272-274).
# Indications for Use
Respiratory protection should be used by the following persons:

- all persons, including HCWs and visitors, entering rooms in which patients with suspected or confirmed infectious TB disease are being isolated;
- persons present during cough-inducing or aerosol-generating procedures performed on patients with suspected or confirmed infectious TB disease; and
- persons in other settings in which administrative and environmental controls probably will not protect them from inhaling infectious airborne droplet nuclei.

These persons might also include persons who transport patients with suspected or confirmed infectious TB disease in vehicles (e.g., EMS vehicles or, ideally, ambulances) and persons who provide urgent surgical or dental care to patients with suspected or confirmed infectious TB disease (see Supplement, Estimating the Infectiousness of a TB Patient). Laboratorians conducting aerosol-producing procedures might require respiratory protection. A decision concerning use of respiratory protection in laboratories should be made on an individual basis, depending on the type of ventilation in use for the laboratory procedure and the likelihood of aerosolization of viable mycobacteria that might result from the laboratory procedure.
# Respiratory-Protection Program
OSHA requires health-care settings in which HCWs use respiratory protection to develop, implement, and maintain a respiratory-protection program. All HCWs who use respiratory protection should be included in the program (see Supplement, Respiratory Protection).
# Training HCWs
Annual training regarding multiple topics should be conducted for HCWs, including the nature, extent, and hazards of TB disease in the health-care setting. The training can be conducted in conjunction with other related training regarding infectious disease associated with airborne transmission. In addition, training topics should include the 1) risk assessment process and its relation to the respirator program, including signs and symbols used to indicate that respirators are required in certain areas and the reasons for using respirators; 2) environmental controls used to prevent the spread and reduce the concentration of infectious droplet nuclei; 3) selection of a particular respirator for a given hazard (see Selection of Respirators); 4) operation, capabilities, and limitations of respirators; 5) cautions regarding facial hair and respirator use (275,276); and 6) OSHA regulations regarding respirators, including assessment of employees' knowledge.
Trainees should be provided opportunities to handle and wear a respirator until they become proficient (see Fit Testing). Trainees should also be provided with 1) copies or summaries of lecture materials for use as references and 2) instructions to refer all respirator problems immediately to the respiratory program administrator.
# Selection of Respirators
Respiratory protective devices used in health-care settings for protection against M. tuberculosis should meet the following criteria (277,278):

- certified by CDC/National Institute for Occupational Safety and Health (NIOSH) as a nonpowered particulate filter respirator (N-, R-, and P-series; 95%, 99%, and 100% filtration efficiency), including disposable respirators, or PAPRs with high-efficiency filters (279);
- ability to adequately fit respirator wearers (e.g., a fit factor of ≥100 for disposable and half-facepiece respirators; see the worked definition below) who are included in a respiratory-protection program; and
- ability to fit the different facial sizes and characteristics of HCWs (this criterion can usually be met by making respirators available in different sizes and models).

The fit of filtering facepiece respirators varies because of different facial types and respirator characteristics (10,280-289). Assistance with selection of respirators should be obtained through consultation with respirator fit-testing experts, CDC, occupational health and infection-control professional organizations, peer-reviewed research, respirator manufacturers, and advanced respirator training courses.
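The fit factor cited in the criteria above is a quantitative fit-testing measure. As a point of reference, it is the ratio of the challenge-aerosol concentration outside the respirator to the concentration measured inside the facepiece during the test:

```latex
\mathrm{FF} = \frac{C_{\text{outside}}}{C_{\text{inside}}}
% A fit factor of 100 corresponds to roughly 1% inward leakage of the
% ambient challenge concentration under fit-test conditions.
```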
# Fit Testing
A fit test is used to determine which respirator fits the user adequately and to ensure that the user knows when the respirator fits properly. After a risk assessment is conducted to validate the need for respiratory protection, perform fit testing during the initial respiratory-protection program training and periodically thereafter in accordance with federal, state, and local regulations.
Fit testing provides a means to determine which respirator model and size fits the wearer best and to confirm that the wearer can don the respirator properly to achieve a good fit. Periodic fit testing of respirators on HCWs can serve as an effective training tool in conjunction with the content included in employee training and retraining. The frequency of periodic fit testing should be determined by the occurrence of 1) risk for transmission of M. tuberculosis, 2) a change in facial features of the wearer, 3) a medical condition that would affect respiratory function, 4) a change in the physical characteristics of the respirator (despite the same model number), or 5) a change in the model or size of the assigned respirator (281).
# Respirator Options: General Recommendations
In situations that require respiratory protection, the minimum respiratory protection device is a filtering facepiece (nonpowered, air-purifying, half-facepiece) respirator (e.g., an N95 disposable respirator). This CDC/NIOSH-certified respirator meets the minimum filtration performance for respiratory protection in areas in which patients with suspected or confirmed TB disease might be encountered. For situations in which the risk for exposure to M. tuberculosis is especially high because of cough-inducing and aerosol-generating procedures, more protective respirators might be needed (see Respirator Options: Special Circumstances).
# Respirator Options: Special Circumstances
Visitors to AII rooms and other areas with patients who have suspected or confirmed infectious TB disease may be offered respirators and should be instructed by an HCW on the use of the respirator before entering an AII room (see Supplement, Respiratory Protection). Particulate respirators vary substantially by model, and fit testing is usually not easily available to visitors.
The risk assessment for the setting might identify a limited number of circumstances (e.g., bronchoscopy or autopsy on persons with suspected or confirmed TB disease and selected laboratory procedures) for which a level of respiratory protection that exceeds the minimum level provided by an N95 disposable respirator should be considered. In such circumstances, consider providing HCWs with a level of respiratory protection that both exceeds the minimum criteria and is compatible with patient care delivery. Such protection might include more protective respirators (e.g., full-facepiece respirators or PAPRs) (see Supplement, Respiratory Protection). Detailed information regarding these and other respirators has been published (272,273,278,290).
In certain settings, HCWs might be at risk for both inhalation exposure to M. tuberculosis and mucous membrane exposure to bloodborne pathogens. In these situations, the HCW might wear a nonfluid-resistant respirator with a full-face shield or a combination product (surgical mask/N95 disposable respirator) to achieve both respiratory protection and fluid protection.
When surgical procedures (or other procedures requiring a sterile field) are performed on persons with suspected or confirmed infectious TB disease, respiratory protection worn by HCWs must also protect the surgical field. The patient should be protected from the HCW's respiratory secretions and the HCW from infectious droplet nuclei that might be expelled by the patient or generated by the procedure. Respirators with exhalation valves and PAPRs do not protect the sterile field.
Settings in which patients with suspected or confirmed infectious TB disease will not be encountered do not need a respiratory-protection program for exposure to M. tuberculosis. However, these settings should have written protocols for the early identification of persons with symptoms or signs of TB disease and procedures for referring these patients to a setting where they can be evaluated and managed. Filtering facepiece respirators should also be available for emergency use by HCWs who might be exposed to persons with suspected or confirmed TB disease before transfer. In addition, respirators and the associated respiratory-protection program might be needed to protect HCWs from other infectious diseases or exposures to harmful vapors and gases. Their availability or projected need for other exposures should be considered in the selection of respirators for protection against TB to minimize duplication of effort.
Surgical or procedure masks are designed to prevent respiratory secretions of the wearer from entering the air. To reduce the expulsion of droplet nuclei into the air, persons with suspected or confirmed TB disease should be instructed to observe respiratory hygiene and cough etiquette procedures (122) and should wear a surgical or procedure mask, if possible, when they are not in AII rooms. These patients do not need to wear particulate respirators.
Patients with suspected or confirmed TB disease should never wear any kind of respiratory protection that has an exhalation valve. This type of respirator does not prevent droplet nuclei from being expelled into the air.
# Cough-Inducing and Aerosol-Generating Procedures: General Recommendations
Procedures that involve instrumentation of the lower respiratory tract or induction of sputum can increase the likelihood that droplet nuclei will be expelled into the air. These cough-inducing procedures include endotracheal intubation, suctioning, diagnostic sputum induction, aerosol treatments (e.g., pentamidine therapy and nebulized treatments), bronchoscopy, and laryngoscopy. Gastric aspiration and nasogastric tube placement can also induce cough in certain patients. Other procedures that can generate aerosols include irrigating TB abscesses, homogenizing or lyophilizing tissue, performing autopsies on cadavers with untreated TB disease, other processing of tissue that might contain tubercle bacilli, and TB laboratory procedures.
If possible, postpone cough-inducing or aerosol-generating procedures on patients with suspected or confirmed infectious TB disease unless the procedure can be performed with recommended precautions. When a cough-inducing or aerosol-generating procedure must be performed on a patient with suspected or confirmed infectious TB disease, use a local exhaust ventilation device (e.g., booth or special enclosure). If using this device is not feasible, perform the procedure in a room that meets the ventilation requirements for an AII room.
After completion of cough-inducing procedures, keep patients in the AII room or enclosure until coughing subsides. Patients should be given tissues and instructed to cover the mouth and nose with tissues when coughing. Tissues should be disposed of in accordance with the infection-control plan.
Before the booth, enclosure, or room is used for another patient, allow enough time for the removal of ≥99% of airborne contaminants. This interval will vary based on the efficiency of the ventilation or filtration system (Table 1).
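Under an idealized perfect-mixing assumption, this waiting interval follows from first-order dilution kinetics, t = (60 / ACH) × ln(1 / (1 − efficiency)). The sketch below reproduces the familiar figures of roughly 46 minutes at 6 ACH and 23 minutes at 12 ACH for 99% removal; real rooms mix imperfectly, so the published values (Table 1) should govern practice.

```python
import math

def removal_time_minutes(ach: float, removal_efficiency: float) -> float:
    """Minutes for a well-mixed room, with no ongoing generation of
    contaminants, to purge the given fraction of airborne particles:
    t = (60 / ACH) * ln(1 / (1 - efficiency))."""
    return (60.0 / ach) * math.log(1.0 / (1.0 - removal_efficiency))

# 99% removal: ~46 min at 6 ACH, ~23 min at 12 ACH.
for ach in (6, 12):
    print(ach, round(removal_time_minutes(ach, 0.99)))
```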
For postoperative recovery, do not place the patient in a recovery room with other patients; place the patient in a room that meets the ventilation requirements for an AII room. If the room does not meet the ventilation requirements for an AII room, air-cleaning technologies (e.g., HEPA filtration and UVGI) can be used to increase the number of equivalent ACH (see Supplement, Environmental Controls).
Perform all manipulations of suspected or confirmed M. tuberculosis specimens that might generate aerosols in a BSC. When in rooms or enclosures in which cough-inducing or aerosol-generating procedures are being performed, respiratory protection should be worn.
# Special Considerations for Bronchoscopy
Bronchoscopy can result in the transmission of M. tuberculosis either through the airborne route (63,81,86,162) or a contaminated bronchoscope (80,82,163-169). Whenever feasible, perform bronchoscopy in a room that meets the ventilation requirements for an AII room (see Supplement, Environmental Controls). Air-cleaning technologies can be used to increase equivalent ACH. If a bronchoscopy must be performed in a positive-pressure room (e.g., OR), exclude TB disease before performing the procedure. Examine three spontaneous or induced sputum specimens for AFB (if possible) to exclude a diagnosis of TB disease before bronchoscopy is considered as a diagnostic procedure (110,291).
In a patient who is intubated and mechanically ventilated, minimize the opening of circuitry. For HCWs present during bronchoscopic procedures on patients with suspected or confirmed TB disease, a respirator with a level of protection of at least an N95 disposable respirator should be worn. Protection greater than an N95 disposable respirator (e.g., a full-facepiece elastomeric respirator or PAPR) should be considered.
# Special Considerations for Administration of Aerosolized Pentamidine and Other Medications
Patients receiving aerosolized pentamidine (or other aerosolized medications) who are immunocompromised and have a confirmed or suspected pulmonary infection (i.e., Pneumocystis pneumonia, caused by P. jiroveci, formerly P. carinii) are also at risk for TB disease. Patients receiving other aerosolized medications might have an immunocompromising condition that puts them at greater risk for TB disease. Patients should be screened for TB disease before initiating prophylaxis with aerosolized pentamidine; a medical history, test for infection with M. tuberculosis, and a chest radiograph should be performed.
Before each subsequent treatment with aerosolized pentamidine, screen patients for symptoms or signs of TB disease. If symptoms or signs are present, evaluate the patient for TB disease. Patients with suspected or confirmed TB disease should be administered oral prophylaxis for P. jiroveci instead of aerosolized pentamidine if clinically practical. Patients receiving other aerosolized medication might have immunocompromising conditions; therefore, if warranted, they should be similarly screened and evaluated, and treatment with oral medications should be considered.
# Supplements
# Estimating the Infectiousness of a TB Patient
# General Principles
Transmission of M. tuberculosis is most likely to result from exposure to persons who have 1) unsuspected pulmonary TB disease and are not receiving antituberculosis treatment, 2) diagnosed TB disease and are receiving inadequate therapy, or 3) diagnosed TB disease and are early in the course of effective therapy. Administration of effective antituberculosis treatment has been associated with decreased infectiousness among persons who have TB disease (292). Effective treatment reduces coughing, the amount of sputum produced, the number of organisms in the sputum, and the viability of the organisms in the sputum. However, the duration of therapy required to decrease or eliminate infectiousness varies (293). Certain TB patients are never infectious, whereas those with unrecognized or inadequately treated drug-resistant TB disease might remain infectious for weeks or months (2,3,87,94,162,294-297). In one study, 17% of transmission occurred from persons with negative AFB smear results (262). Rapid laboratory methods, including PCR-based techniques, can decrease diagnostic delay and reduce the duration of infectiousness (298).
The infectiousness of patients with TB correlates with the number of organisms they expel into the air (299). The number of organisms expelled is related to the following factors: 1) presence of cough lasting ≥3 weeks; 2) cavitation on chest radiograph; 3) positive AFB sputum smear result; 4) respiratory tract disease with involvement of the lung or airways, including larynx; 5) failure to cover the mouth and nose when coughing; 6) lack of, incorrect, or short duration of antituberculosis treatment (300); or 7) undergoing cough-inducing or aerosol-generating procedures (e.g., sputum induction, bronchoscopy, and airway suction). Closed and effectively filtered ventilatory circuitry and minimized opening of such circuitry in intubated and mechanically ventilated patients might minimize exposure (see Intensive Care Units).
Persons with extrapulmonary TB disease usually are not infectious unless they have concomitant pulmonary disease, nonpulmonary disease located in the oral cavity or the larynx, or extrapulmonary disease that includes an open abscess or lesion in which the concentration of organisms is high, especially if drainage from the abscess or lesion is extensive, or if aerosolization of drainage fluid is performed (69,72,77,83,301). Persons with TB pleural effusions might also have concurrent unsuspected pulmonary or laryngeal TB disease. These patients should be considered infectious until pulmonary TB disease is excluded. Patients with suspected TB pleural effusions or extrapulmonary TB disease should be considered pulmonary TB suspects until concomitant pulmonary disease is excluded (302).
Although children with TB disease usually are less likely than adults to be infectious, transmission from young children can occur (135,137). Therefore, children and adolescents with TB disease should be evaluated for infectiousness by using most of the same criteria as for adults. These criteria include presence of cough lasting ≥3 weeks; cavitation on chest radiograph; or respiratory tract disease with involvement of lungs, airways, or larynx. Infectiousness would be increased if the patient were on nonstandard or short-duration antituberculosis treatment (300) or undergoing cough-inducing or aerosol-generating procedures (e.g., sputum induction, bronchoscopy, and airway suction). Although gastric lavage is useful in the diagnosis of pediatric TB disease, the grade of the positive AFB smear result does not correlate with infectiousness. Pediatric patients who might be infectious include those who are not on antituberculosis treatment, who have just been started on treatment or are on inadequate treatment, and who have extensive pulmonary or laryngeal involvement (i.e., coughing ≥3 weeks, cavitary TB disease, positive AFB sputum smear results, or undergoing cough-inducing or aerosol-generating procedures). Children who have typical primary TB lesions on chest radiograph and do not have any of these indicators of infectiousness might not need to be placed in an AII room.
No data exist on the transmission of M. tuberculosis and its association with the collection of gastric aspirate specimens. Children who do not have predictors for infectiousness do not need to have gastric aspirates obtained in an AII room or other special enclosure; however, the procedure should not be performed in an area in which persons infected with HIV might be exposed. Because the source case for pediatric TB patients might be a member of the infected child's family, parents and other visitors of all hospitalized pediatric TB patients should be screened for TB disease as soon as possible to ensure that they do not become sources of health-care-associated transmission of M. tuberculosis (303-306).
Patients who have suspected or confirmed TB disease and who are not on antituberculosis treatment usually should be considered infectious if characteristics include

- presence of cough;
- cavitation on chest radiograph;
- positive AFB sputum smear result;
- respiratory tract disease with involvement of the lung or airways, including larynx;
- failure to cover the mouth and nose when coughing; and
- undergoing cough-inducing or aerosol-generating procedures (e.g., sputum induction, bronchoscopy, and airway suction).

If a patient with one or more of these characteristics is on standard multidrug therapy with documented clinical improvement, usually in connection with smear conversion over multiple weeks, the risk for infectiousness is reduced.
# Suspected TB Disease
For patients placed under airborne precautions because of suspected infectious TB disease of the lungs, airway, or larynx, airborne precautions can be discontinued when infectious TB disease is considered unlikely and either 1) another diagnosis is made that explains the clinical syndrome or 2) the patient has three negative AFB sputum smear results (109-112). Each of the three consecutive sputum specimens should be collected at 8-24-hour intervals (124), and at least one specimen should be an early morning specimen because respiratory secretions pool overnight. Generally, this method will allow patients with negative sputum smear results to be released from airborne precautions in 2 days.
Hospitalized patients for whom the suspicion of TB disease remains after the collection of three negative AFB sputum smear results should not be released from airborne precautions until they are on standard multidrug antituberculosis treatment and are clinically improving. If the patient is believed not to have TB disease because of an alternate diagnosis or because clinical information is not consistent with TB disease, airborne precautions may be discontinued. However, a patient suspected of having TB disease of the lung, airway, or larynx who is symptomatic with cough and not responding clinically to antituberculosis treatment should not be released from an AII room into a non-AII room, and additional sputum specimens should be collected for AFB examination until three negative AFB sputum smear results are obtained (30,31). Additional diagnostic approaches might need to be considered (e.g., sputum induction) and, after sufficient time on treatment, bronchoscopy.
# Confirmed TB Disease
A patient who has drug-susceptible TB of the lung, airway, or larynx, who is on standard multidrug antituberculosis treatment, and who has had a substantial clinical and bacteriologic response to therapy (i.e., reduction in cough, resolution of fever, and progressively decreasing quantity of AFB on smear result) is probably no longer infectious. However, because culture and drug-susceptibility results are not usually known when the decision to discontinue airborne precautions is made, all patients with suspected TB disease should remain under airborne precautions while they are hospitalized until they have had three consecutive negative AFB sputum smear results, each collected at 8-24-hour intervals, with at least one being an early morning specimen; have received standard multidrug antituberculosis treatment (minimum of 2 weeks); and have demonstrated clinical improvement.
# Discharge to Home of Patients with Suspected or Confirmed TB Disease
If a hospitalized patient who has suspected or confirmed TB disease is deemed medically stable (including patients with positive AFB sputum smear results indicating pulmonary TB disease), the patient can be discharged from the hospital before the positive AFB sputum smear results convert to negative, provided the following parameters have been met:
- a specific plan exists for follow-up care with the local TB-control program;
- the patient has been started on a standard multidrug antituberculosis treatment regimen, and DOT has been arranged;
- no infants and children aged <4 years or persons with immunocompromising conditions are present in the household;
- all immunocompetent household members have been previously exposed to the patient; and
- the patient is willing not to travel outside of the home except for health-care-associated visits until the patient has negative sputum smear results.

Patients with suspected or confirmed infectious TB disease should not be released to health-care settings or homes in which the patient can expose others who are at high risk for progressing to TB disease if infected (e.g., persons infected with HIV or infants and children aged <4 years). Coordination with the local health department TB program is indicated in such circumstances.
# Drug-Resistant TB Disease
Because the consequences of transmission of MDR TB are severe, certain infection-control practitioners might choose to keep persons with suspected or confirmed MDR TB disease under airborne precautions during the entire hospitalization or until culture conversion is documented, regardless of sputum smear results. The role of drug resistance in transmission is complex. Transmission of drug-resistant organisms to persons with and without HIV infection has been documented (54,307-309). In certain cases, transmission from patients with TB disease caused by drug-resistant organisms might be extensive because of prolonged infectiousness resulting from delays in diagnosis and delays in initiation of effective therapy (53,94,98,101,255,310,311).
# HIV-Associated TB Disease
Although multiple TB outbreaks among HIV-infected persons have been reported (51,52,99), the risk for transmission does not appear to be increased from patients with TB disease and HIV infection, compared with TB patients without HIV infection (54,312-315). Whether persons infected with HIV are more likely to become infected with M. tuberculosis if exposed is unclear; however, after infection with M. tuberculosis, the risk for progression to TB disease in persons infected with HIV is high (316). Progression to TB disease can be rapid, occurring as soon as 1 month after exposure (51,53,54,101).
# Diagnostic Procedures for LTBI and TB Disease
LTBI is a condition that develops after exposure to a person with infectious TB disease and subsequent infection with M. tuberculosis; the bacilli are alive but inactive in the body. Persons who have LTBI but who do not have TB disease are asymptomatic, do not feel sick, and cannot spread TB to other persons.
# Use of QFT-G for Diagnosing M. tuberculosis Infections in Health-Care Workers (HCWs)
In the United States, LTBI has traditionally been diagnosed based on a positive TST result after TB disease has been excluded. In vitro cytokine-based immunoassays for the detection of M. tuberculosis infection have been the focus of intense research and development. This document uses the term "BAMT" to refer to blood assays for M. tuberculosis infection currently available in the United States.
One such BAMT is QFT (which is PPD-based) and its subsequently developed version, QFT-G. QFT-G measures cell-mediated immune responses to peptides representative of two M. tuberculosis proteins that are not present in any BCG vaccine strain and are absent from the majority of nontuberculous mycobacteria. This assay was approved by FDA in 2005 and is an available option for detecting M. tuberculosis infection. CDC recommendations for the United States on QFT and QFT-G have been published (35).
QFT-G is an in vitro test based on measuring interferon-gamma (IFN-γ) released in heparinized whole blood when incubated overnight with mitogen (serving as a positive control), Nil (i.e., all reagents except antigens, which sets a baseline), and peptides simulating ESAT-6 (6-kDa early secretory antigenic target) and CFP-10 (10-kDa culture filtrate protein), measured independently; these are two different proteins specific for M. tuberculosis (Box 2), and their sequences are not related to each other. The genes encoding these two proteins are usually found next to each other in an operon (i.e., are coexpressed and translated from an mRNA product containing both genes). Although mycobacterial genomes contain multiple copies of each family, QFT-G and Elispot detect immunoreactivity associated only with the ESAT-6 and CFP-10 proteins encoded by the genes in the region of difference 1 (RD1). In addition, virulence attributes are associated with the RD1 genes only and not the other homologues. Blood tests using IFN-γ methods require one fewer patient visit, assess responsiveness to M. tuberculosis antigens, and do not boost anamnestic immune responses. Interpretation of the BAMT result is less subjective than interpretation of a skin test result, and the BAMT result might be affected less by previous BCG vaccination and sensitization to environmental mycobacteria (e.g., M. avium complex) than the PPD-based TST. BAMT might be more efficient and cost-effective than TST (35). Screening programs that use BAMT might eliminate the need for two-step testing because this test does not boost sensitization.
Other cytokine-based immunoassays are under development and might be useful in the diagnosis of M. tuberculosis infection. Future FDA-approved products, in combination with CDC-issued recommendations, might provide additional diagnostic alternatives. For guidance on the use of these and related technologies, CDC plans to periodically publish recommendations on the diagnosis of M. tuberculosis infection. BAMT can be used in both testing and infection-control surveillance programs for HCWs.
# Use of Tuberculin Skin Test (TST) for Diagnosing M. tuberculosis Infections in HCWs
The TST is frequently the first step of a TB diagnostic evaluation that might lead to diagnosing LTBI. Although currently available preparations of PPD used in TST are <100% sensitive and specific for the detection of LTBI, the TST is currently the most widely used diagnostic test for M. tuberculosis infection in the United States. The TST is less sensitive in patients who have TB disease.
The TST, like all medical tests, is subject to variability (74,228,317), but many of the inherent variations in administering and reading TST results can be avoided by training and attention to detail (318). Details of TST administration and TST result reading procedures are suggested in this report to improve the technical aspects of TST placement and reading, thus reducing observer variations and improving test reliability (Appendix F). These checklists were developed for the National Health and Nutrition Examination Survey (NHANES) to standardize TST placement and reading for research purposes. The suggested TST training recommendations are not mandatory.
# Adherence to TST
Operational policies, procedures, and practices at healthcare settings can enhance HCW adherence to serial TST. In 2002, one focus group study identified potential barriers and facilitators to adherence with routine TST (319). HCWs identified structural factors (e.g., inconvenient TST screening schedules and locations and long waiting times) that negatively affected adherence. Facilitators to help HCWs adhere to routine TST included active follow-up by supervisors and occupational health staff and work-site visits for TST screening. Misinformation and stigma concerning TB also emerged in the discussions, indicating the need for additional training and education for HCWs.
# Administering the TST
For each patient, a risk assessment that takes into consideration recent exposure to M. tuberculosis, clinical conditions that increase the risk for TB disease if infected, and the program's capacity to deliver treatment for LTBI should be conducted to determine whether the TST should be administered.
The recommended method for TST is the Mantoux method (Appendix F) (223,318,320-322). Mantoux TST training materials supporting the guidance in this report are available (223,318,320-325). Multipuncture tests (e.g., Tine® tests) are not as reliable as the Mantoux method of skin testing and should not be used as a diagnostic test in the United States (30). Contact the state and local health department for TST resources.
# BOX 2. Interpretation of QuantiFERON®-TB Gold test (QFT-G) results*

- Positive (ESAT-6 or CFP-10 responsiveness detected): Mycobacterium tuberculosis infection probable.
- Negative (no ESAT-6 and CFP-10 responsiveness detected): M. tuberculosis infection unlikely, but cannot be excluded, especially when 1) any illness is consistent with TB disease, and 2) the likelihood of progression to TB disease is increased (e.g., because of immunosuppression).
- Indeterminate: Test not interpretable.

* ESAT-6 is a 6-kDa early secretory antigenic target, and CFP-10 is a 10-kDa culture filtrate protein.
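Box 2 reduces to a three-way mapping from antigen responsiveness to an interpretation. As an illustration only, the logic can be sketched as follows (Python; the function and its inputs are hypothetical simplifications, and the laboratory's interpretive criteria, including the Nil and mitogen controls, govern actual reporting):

```python
def interpret_qft_g(esat6_responsive: bool,
                    cfp10_responsive: bool,
                    controls_valid: bool = True) -> str:
    """Sketch of the Box 2 mapping. 'controls_valid' stands in for
    whatever Nil/mitogen control criteria the laboratory applies."""
    if not controls_valid:
        return "Indeterminate: test not interpretable"
    if esat6_responsive or cfp10_responsive:
        return "Positive: M. tuberculosis infection probable"
    return ("Negative: M. tuberculosis infection unlikely, "
            "but cannot be excluded")
```

For example, `interpret_qft_g(True, False)` returns the positive interpretation because responsiveness to either antigen suffices.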
# Reading the TST Result
The TST result should be read by a designated, trained HCW 48-72 hours after the TST is placed (39,326,327). If the TST is not read within 48-72 hours, ideally, another TST should be placed as soon as possible and read within 48-72 hours (39). Certain studies indicate that positive TST reactions might still be measurable 4-7 days after testing (225,226,328). However, if a patient fails to return within 72 hours and the late reading is negative, the TST should be repeated (42). Patients and HCWs should not be allowed to read their own TST results; HCWs typically do not measure their own TST results reliably (48).
Reading the TST result consists of first determining the presence or absence of induration (hard, dense, and raised formation) and, if induration is present, measuring the diameter of induration transverse (perpendicular) to the long axis of the forearm (Figure 1) (39,318). Erythema or redness of the skin should not be considered when reading a TST result (Appendix F).
# Interpreting TST Results
The positive-predictive value of a TST is the probability that a person with a positive TST result is actually infected with M. tuberculosis. The positive predictive value is dependent on the prevalence of infection with M. tuberculosis in the population being tested and the sensitivity and specificity of the test (228,329,330).
In populations with a low prevalence of M. tuberculosis infection, the probability that a positive TST result represents true infection with M. tuberculosis can be low, especially if the cut point is set too low (i.e., the test is not adequately specific for the low-prevalence population being tested). In populations with a high prevalence of infection with M. tuberculosis, even with the same cut point and imperfect specificity, the probability that a positive TST result represents true infection is much higher.
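This is Bayes' rule applied to a dichotomous test. Writing sensitivity as $Se$, specificity as $Sp$, and prevalence as $P$, the positive-predictive value is

$$\mathrm{PPV} = \frac{Se \cdot P}{Se \cdot P + (1 - Sp)(1 - P)}.$$

With illustrative values (assumed here for the arithmetic only, not measured properties of the TST) of $Se = 0.90$ and $Sp = 0.99$, a prevalence of 0.5% gives $\mathrm{PPV} \approx 0.0045/(0.0045 + 0.00995) \approx 0.31$, whereas a prevalence of 30% gives $\mathrm{PPV} \approx 0.27/(0.27 + 0.007) \approx 0.97$, which is why the same cut point performs so differently across populations.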
# Interpreting TST Results in HCWs
TST result interpretation depends on two factors: 1) measured TST induration in millimeters and 2) the person's risk for being infected with M. tuberculosis and risk for progression to TB disease if infected.
Interpretations of TST and QFT results vary according to the purpose of testing (Box 3). A TST result with no induration (0 mm) or a measured induration below the defined cut point for each category is considered to signify absence of infection with M. tuberculosis.
In the context of TST screening as part of a TB infection-control program, the interpretation of TST results occurs in two distinct parts. The first is interpretation of the TST result by standard criteria, without regard to personal or setting-specific risk factors, for infection-control, surveillance, and referral purposes. The second is interpretation by individualized criteria to determine the need for treatment of LTBI.
Determining the need for treatment of LTBI is a subsequent and separate task. For infection-control and surveillance purposes, TST results should be interpreted and recorded under strict criteria, without considering setting-based or personal risk factors (see Supplement, Diagnostic Procedures for LTBI and TB Disease). Any HCW with a positive TST result from serial TB screening should be referred to a medical provider for an evaluation and to determine the need for treatment of LTBI based on individual risk (Box 3).
# Interpreting the TST Result for Infection Control and Surveillance
On baseline testing, a TST result of ≥10 mm is considered positive for the majority of HCWs, and a TST result of ≥5 mm is considered positive for HCWs who are infected with HIV or who have other immunocompromising conditions (Box 3). All HCWs with positive baseline TST results should be referred for medical and diagnostic evaluation; additional skin testing does not need to be performed.
On serial screening for the purposes of infection-control surveillance, TST results indicating an increase of ≥10 mm within 2 years should be interpreted and recorded as a TST conversion. For the purposes of assessing and monitoring infection control, TST conversion rates should be regularly determined. Health-care settings with a substantial number of HCWs to be tested might have systems in place that can accurately determine the TST conversion rate every month (e.g., from among a group of HCWs tested annually), whereas smaller settings might have imprecise estimates of their TST conversion rate even with annual assessments.
The precision of the setting's TST conversion rate and of any analysis assessing change from baseline TST results will depend on the number of HCWs tested and the frequency of testing. These factors should be considered when establishing a regular interval for TB screening of HCWs.
After a known exposure in a health-care setting, close HCW contacts who have TST results of ≥5 mm should be considered to have positive TST results, which should be interpreted as new infections only in HCWs whose previous TST result is 0 mm. However, HCWs 1) with a baseline or follow-up TST result of >0 mm but <10 mm with a health-care-associated exposure to M. tuberculosis and 2) who then have an increase of ≥10 mm should be considered to have a TST conversion because of a new infection (Box 3).
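For record-keeping purposes, the baseline and serial interpretation rules above can be expressed compactly. The sketch below (Python; function and argument names are hypothetical) encodes only the infection-control interpretation described in this section, not the individualized decision about treatment of LTBI, and it assumes the serial readings fall within the 2-year window mentioned above:

```python
def baseline_positive(induration_mm: int, immunocompromised: bool = False) -> bool:
    """Baseline TST interpretation for infection-control purposes:
    >=10 mm for most HCWs; >=5 mm for HIV-infected or otherwise
    immunocompromised HCWs."""
    return induration_mm >= (5 if immunocompromised else 10)

def tst_conversion(previous_mm: int, followup_mm: int,
                   known_exposure: bool = False) -> bool:
    """Serial TST interpretation for infection-control surveillance."""
    if known_exposure and previous_mm == 0:
        # After a known exposure, >=5 mm is read as a new infection
        # only when the previous result was 0 mm.
        return followup_mm >= 5
    # Otherwise (routine serial screening, or exposed HCWs with a
    # previous result of >0 mm but <10 mm), an increase of >=10 mm
    # is required.
    return followup_mm - previous_mm >= 10
```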
In a contact investigation, a follow-up TST should be administered 8-10 weeks after the end of exposure (rather than 1-3 weeks later, as in two-step testing). In this instance, a change from a negative TST result to a positive TST result should not be interpreted as a boosted reaction. The change in the TST result indicates a TST conversion, recent exposure, transmission, and infection.
All HCWs who are immunocompromised should be referred for a medical and diagnostic evaluation for any TST result of ≥5 mm on baseline or follow-up screening. Because infection-control staff will usually not know the immune status of the HCWs being tested, HCWs who have TST results of 5-9 mm should be advised that such results can be an indication for referral for medical evaluation for HCWs who have HIV infection or other causes of severe immunosuppression.
After an HCW has met the criteria for a positive TST result, repeat TSTs are not necessary because the results would not provide additional information (30); this applies even to HCWs who, after medical evaluation, will not receive treatment for LTBI. For future TB screening in medium-risk settings, instead of participating in serial skin testing, such HCWs should receive a medical evaluation and a symptom screen annually.
# Interpreting the TST Result for Medical and Diagnostic Referral and Evaluation
HCWs who have positive TST results and who meet the criteria for referral should have a medical and diagnostic evaluation. For HCWs who are at low risk (e.g., those from low-incidence settings), a baseline result of ≥15 mm of induration (instead of ≥10 mm) might be used as the cut point. The criteria used to determine the need for treatment of LTBI have been presented.
When making decisions for the diagnosis and treatment of LTBI, setting-based risk factors (e.g., the prevalence of TB disease and personal risk factors such as having an immunocompromising condition or known contact with a TB case) should be assessed when choosing the cut point for a positive TST result. The medical evaluation can occur in different settings, including an occupational health clinic, local or state health department, or private medical clinic.
When 15 mm is used as the cut point, TST results of 10-14 mm can be considered clinically negative (331). These HCWs should not have repeat TST, and the referring physician might not recommend treatment for LTBI. This issue of false-positive TST results might be especially relevant in areas of the country where the prevalence of infection with NTM is high. HCWs who have TST results of 5-9 mm on baseline two-step testing should be advised that such results might be an indication for treatment of LTBI if the HCW is a contact of a person with infectious TB disease, has HIV infection, or has other causes of severe immunosuppression (e.g., organ transplantation or receipt of the equivalent of ≥15 mg/day of prednisone for ≥1 month). The risk for TB disease in persons treated with corticosteroids increases with higher dose and longer duration of corticosteroid use. TNF-α antagonists also substantially increase the risk for progression to TB disease in persons with LTBI (332).
HCWs with negative baseline two-step TST results who are referred for medical evaluation because of an increase of ≥10 mm induration on follow-up TST screening, including those who are otherwise at low risk for TB disease, probably acquired M. tuberculosis infection since receiving the previous TST and should be evaluated for TB disease. If disease is excluded, the HCW should be offered treatment for LTBI if no contraindication to treatment exists.
# QC Program for Techniques for TST Administration and Reading TST Results
Random variation (i.e., differences in procedural techniques) in TST administration and reading of TST results can cause false-positive or false-negative TST results. Many of the variations in administering and reading TST results can be avoided through training and attention to detail. HCWs who are responsible for TST procedures should be trained to reduce variation by following standardized operational procedures and should be observed by an expert TST trainer. All TST procedures (i.e., administering, reading, and recording the results) should be supervised and controlled to detect and correct variation. Corrective actions might include coaching and demonstration by the TST trainer. Annual retraining is recommended for HCWs responsible for administering and reading TST results.
One strategy for identifying TST procedure variation is to use a QC tool (Appendix F). The expert TST trainer should observe the procedures and indicate procedural variation on the observation checklists. An expert trainer is a person who has documented training experience.
# QC for Administering TST by the Mantoux Method
Ideally, the TST trainer should participate in QC TST administrations with other TST trainers to maintain TST trainer certification. State regulations specify who is qualified to administer the test by injection. The TST trainer should first ensure antigen stability by maintaining the manufacturer's recommended cold chain (i.e., controlling antigen exposure to heat and light from the time it is out of refrigeration until the time it is placed back into refrigeration or until the vial is empty or expired). The TST trainer should prevent infection during an injection by preparing the skin and preventing contamination of solution, needle, and syringe.
The TST trainer should prevent antigen administration errors by controlling the five rights of administration: 1) right antigen; 2) right dose; 3) right patient; 4) right route; and 5) right time for TST administration, reading, and clinical evaluation (333). Finally, the TST trainer should observe and coach the HCW trainee in administering multiple intradermal injections by the Mantoux method. The TST trainer should record procedural variation on the observation checklist (Appendix F). TST training and coaching should continue until more than 10 correct skin test placements (i.e., ≥6 mm wheal) are achieved.
For training purposes, normal saline for injection can be used instead of PPD for intradermal injections. Volunteers are usually other HCWs who agree to be tested. Attempt to recruit volunteers who have known positive TST results so that trainees can practice reading positive TST results. A previous TST is not a contraindication to a subsequent TST unless the test was associated with severe ulceration or anaphylactic shock, which are rare adverse events (30,237,238).
# Model TST Training Program
A model TST training program for placing TST and reading TST results has been produced by NHANES (326). The number of hours, sessions, and blinded independent duplicate reading (BIDR) readings should be determined by the setting's TB risk assessment. The following information can be useful for a model TST training program. The suggested TST training recommendations are not mandatory.
Initial training for a TST placer ideally consists of three components:
- an introductory lecture and demonstration by an expert TST placer or trainer. An expert TST trainer is a qualified HCW who has received training on administering multiple TSTs and reading multiple TST results (consider 3 hours of lecture);
- supervised practical work using procedural checklists, observed and coached by the expert TST trainer (Appendix F) (consider 9 hours of practical work); and
- administration of more than 10 total skin tests on volunteers by using injectable saline and producing more than 10 wheals that measure 6-10 mm.

TST training should include supervised TST administration, a procedure in which an expert TST trainer supervises a TST trainee during all steps on the procedural observation checklist for TST administration (Appendix F). Wheal size should be checked for all supervised TST administrations, and skin tests should be repeated if wheal size is inadequate (i.e., <6 mm). TST training and coaching should continue until more than 10 correct skin test placements (i.e., ≥6 mm wheal) are achieved.
# QC for Reading TST Results by the Palpation Method
The TST trainer should participate in QC readings with other TST trainers to maintain TST trainer certification. When training HCWs to read TST results, providing measurable TST responses is helpful (i.e., attempt to recruit volunteers who have known positive TST results so that the trainees can practice reading positive TST results).
TST readers should correctly read both measurable (>0 mm) and nonmeasurable (0 mm) responses (e.g., consider reading more than 20 TST results, if possible). The TST trainer should observe and coach the HCW in reading multiple TST results by the palpation method and should record procedural variation on the observation checklist (Appendix F).
The TST trainer should conduct BIDRs for comparison with the HCW's reading. BIDRs are performed when two or more consecutive TST readers immediately measure the same TST result by standard procedures, without consulting or observing one another's readings, and record results independently (may use recommended procedural observation checklist; Appendix F). BIDRs help ensure that TST readers continue to read TST results correctly.
Initial training for a TST reader ideally should consist of multiple components:
- receiving an introductory lecture and demonstration by an expert TST reader. Training materials are available from CDC (223,318) and CDC-sponsored Regional Model and Training Centers and should also be available at the local or state health department (consider 6 hours for lecture and demonstration);
- receiving four sessions of supervised practical work using procedural checklists, observed and coached by an expert TST reader (consider 16 hours of practical work);
- performing BIDR readings (consider more than 80, if possible). TST trainers should attempt to organize the sessions so that at least 50% of the TST results read have a result of >0 mm according to the expert TST reader;
- performing BIDR readings on the last day of TST training (consider more than 30 BIDR readings out of the total 80 readings, if possible). TST trainers should attempt to ensure that at least 25% of persons tested have a TST result of >0 mm, according to the expert TST reader;
- missing no more than two items on the procedural observation checklist (Appendix F) during three random observations by an expert TST reader; and
- performing all procedures on the checklist correctly during the final observation.

TST training and coaching should continue until the HCW is able to perform all procedures correctly and until a satisfactory measurement is achieved (i.e., the trainer and the trainee read the TST results within 2 mm of each other). For example, if the trainer reads the TST result as 11 mm (this might be considered the gold standard reading), the trainee's reading should be 9-13 mm to be considered correct. Only a single measurement in millimeters should be recorded (not 11 mm x 11 mm or 11 mm x 15 mm). QC Procedural Observation Checklists (Appendix F) are recommended by CDC as a tool for use during TST training.
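The agreement criterion in the example above (trainer and trainee within 2 mm of each other) is simple to check mechanically; the following one-function sketch (Python; hypothetical helper) shows the arithmetic:

```python
def bidr_reading_acceptable(trainer_mm: int, trainee_mm: int,
                            tolerance_mm: int = 2) -> bool:
    """True if the trainee's reading falls within +/- tolerance of the
    trainer's (gold standard) reading; e.g., a trainer reading of 11 mm
    accepts trainee readings of 9-13 mm."""
    return abs(trainer_mm - trainee_mm) <= tolerance_mm
```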
# Special Considerations in TST
Anergy. The absence of a reaction to a TST does not exclude a diagnosis of TB disease or infection with M. tuberculosis. In immunocompromised persons, delayed-type hypersensitivity (DTH) responses (e.g., tuberculin reactions) can decrease or disappear more rapidly, and a limited number of otherwise healthy persons apparently are incapable of reacting to tuberculin even after diagnosed infection with M. tuberculosis. This condition, called anergy, can be caused by multiple factors (e.g., advanced HIV infection, measles infection, sarcoidosis, poor nutrition, certain medications, vaccinations, TB disease itself, and other factors) (307,334-338). However, anergy testing in conjunction with TB skin testing is no longer routinely recommended for screening for M. tuberculosis infection (336).
Reconstitution of DTH in HIV-infected persons taking antiretroviral therapy (ART). In one prospective study (340), TB patients who initially had negative TST results had positive TST results after initiation of HAART. HCWs must be aware of the potential public health and clinical implications of restored TST reactivity among persons who have not been diagnosed with TB disease but who might have LTBI. After the initiation of HAART, repeat testing for infection with M. tuberculosis is recommended for HIV-infected persons previously known to have negative TST results (58). Recommendations on the prevention and treatment of TB in HIV-infected persons have been published (39,53,240).
Pregnancy. Tens of thousands of pregnant women have received TST since the test was developed, and no documented episodes of TST-related fetal harm have been reported (341). No evidence exists that the TST has adverse effects on the pregnant mother or fetus (39). Pregnant HCWs should be included in serial skin testing as part of an infection-control program or a contact investigation because no contraindication for skin testing exists (342). Guidelines issued by the American College of Obstetricians and Gynecologists (ACOG) emphasize that postponement of the diagnosis of infection with M. tuberculosis during pregnancy is unacceptable (343).
Booster phenomenon and two-step testing. In certain persons with LTBI, the DTH responsible for TST reactions wanes over time. Repeated TST can elicit a reaction called boosting in which an initial TST result is negative, but a subsequent TST result is positive. For example, a TST administered years after infection with M. tuberculosis can produce a false-negative result. This TST might stimulate (or boost) the person's ability to react to tuberculin, resulting in a positive result to a subsequent test (including the second step of a two-step procedure) (36,74,316,342,343). With serial testing, a boosted reaction on a subsequent TST might be misinterpreted as a newly acquired infection, compared with the false-negative result from the initial TST. Misinterpretation of a boosted reaction as a new infection with M. tuberculosis or TST conversion might prompt unnecessary investigations to find the source case, unnecessary treatment for the person tested, and unnecessary testing of other HCWs. The booster phenomenon can occur in anyone, but it is more likely to occur in older persons, persons with remote infection with M. tuberculosis (i.e., infected years ago), persons infected with NTM, and persons with previous BCG vaccination (39,229,234,344,345).
All newly employed HCWs who will be screened with TST should receive a baseline two-step TST upon hire, unless they have documentation of either a positive TST result or treatment for LTBI or TB disease (39,224). Any setting might have HCWs at risk for boosting, and a rate of boosting even as low as 1% can result in unnecessary investigation of transmission. Therefore, two-step TSTs are needed to establish a baseline for persons who will receive serial TST (e.g., residents or staff of correctional facilities or LTCFs). This procedure is especially important for settings that are classified as low risk, where testing is indicated only upon exposure. A reliable baseline test result is necessary to detect health-care-associated transmission of M. tuberculosis. Guidance for baseline TST for HCWs is included in this report (Box 3). To estimate the frequency of boosting in a particular setting, a four-appointment schedule of TST administration and reading (i.e., appointments for administration and reading of both TST results) is necessary, rather than the three-appointment schedule (i.e., appointments for the administration of both tests, with reading of the second-step TST result only) (196).
Two-step testing should be used only for baseline screening, not in contact investigations. In a contact investigation, for persons with a negative TST, a follow-up test should be administered 8-10 weeks after the end of exposure (rather than 1-3 weeks later, as in a two-step TST). In this instance, a change from a negative to a positive TST result suggests that recent exposure, transmission, and infection occurred and should not be interpreted as a boosted response.
After a known exposure in a health-care setting (i.e., close contact with a patient or HCW with infectious TB disease), TST results of ≥5 mm should be considered positive and interpreted as a new infection in HCWs whose previous TST result is 0 mm. If an HCW has a baseline or follow-up TST result of >0 mm but <10 mm, a health-care-associated exposure to M. tuberculosis, and an increase in TST size of ≥10 mm, the result should be interpreted as a TST conversion because of new infection.
BCG vaccination. In the United States, vaccination with BCG is not recommended routinely for anyone, including HCWs or children (227). Previous BCG vaccination is not a contraindication to having a TST or two-step skin testing administered. HCWs with previous BCG vaccination should receive baseline and serial skin testing in the same manner as those without BCG vaccination (233) (Box 1).
Previous BCG vaccination can lead to boosting in baseline two-step testing in certain persons (74,231,344-346). Distinguishing a boosted TST reaction resulting from BCG vaccination (a false-positive TST result) from a TST reaction caused by previous infection with M. tuberculosis (a true positive TST result) is not possible (39). Infection-control programs should refer HCWs with positive TST results for medical evaluation as soon as possible (Box 3).
Previous BCG vaccination increases the probability of a boosted reaction that will probably be uncovered on initial two-step skin testing. For an HCW with a negative baseline two-step TST result who is a known contact of a patient who has suspected or confirmed infectious TB disease, treatment for LTBI should be considered if the follow-up TST result is ≥5 mm, regardless of BCG vaccination status.
PPD preparations for diagnosing infection with M. tuberculosis. Two PPD preparations are available in the United States: Tubersol® (Aventis Pasteur, Swiftwater, Pennsylvania) (237) and APLISOL® (Parkdale Pharmaceuticals, Rochester, Michigan) (238). Compared with the U.S. reference PPD, no difference exists in TST interpretation between the two preparations (347). However, when Tubersol and Aplisol were compared with each other, Aplisol produced slightly larger reactions than Tubersol, although this difference was not statistically significant (347). The difference in specificity, 98% versus 99%, is small. However, in large institutional settings that annually test thousands of workers who are at low risk for infection with M. tuberculosis, this difference in specificity might affect the rate of positive TST results observed. TB screening programs should use one antigen consistently and should recognize that changes in products might make serial changes in TST results difficult to interpret. In one report, a systematic change in product use resulted in a cluster of pseudoconversions that erroneously suggested a health-care-associated outbreak (348). Persons responsible for making decisions about the choice of pharmacy products should seek advice from the local or state health department's TB infection-control program before switching PPD preparations and should inform program staff of any changes.
# Chest Radiography
Chest radiographic abnormalities can suggest pulmonary TB disease. Abnormalities consistent with pulmonary TB disease include upper-lobe infiltration, cavitation, and effusion. Infiltrates can be patchy or nodular and are typically observed in the apical or subapical posterior upper lobes (i.e., the top part of the lungs) or in the superior segments of the lower lobes. HCWs who have positive test results for M. tuberculosis infection, or symptoms or signs of TB disease regardless of test results, should have a chest radiograph performed to exclude a diagnosis of TB disease. However, a chest radiograph is not a substitute for tests for M. tuberculosis infection in a serial TB screening program for HCWs.
Persons who have LTBI or cured TB disease should not have repeat chest radiographs performed routinely (116). Repeat radiographs are not needed unless symptoms or signs of TB disease develop or a clinician recommends a repeat chest radiograph (39,116).
A chest radiograph to exclude pulmonary TB disease is indicated for all persons being considered for treatment of LTBI. If chest radiographs do not indicate pulmonary TB and if no symptoms or signs of TB disease are present, persons with a positive test result for infection with M. tuberculosis might be candidates for treatment of LTBI. In persons with LTBI, the chest radiograph is usually normal, although it might demonstrate abnormalities consistent with previous healed TB disease or other pulmonary conditions. In patients with symptoms or signs of TB disease, pulmonary infiltrates might only be apparent on a computed tomography (CT) scan. Previous, healed TB disease can produce radiographic findings that might differ from those associated with current TB disease, although a substantial overlap might exist. These findings include nodules, fibrotic scars, calcified granulomas, or basal pleural thickening. Nodules and fibrotic scars might contain slowly multiplying tubercle bacilli and pose a high risk for progression to TB disease. Calcified nodular lesions (calcified granulomas) and apical pleural thickening pose a lower risk for progression to TB disease (31).
# Chest Radiography and Pregnancy
Because TB disease is dangerous to both mother and fetus, pregnant women who have a positive TST result or who are suspected of having TB disease, as indicated by symptoms or other concerns, should receive chest radiographs (with shielding consistent with safety guidelines) as soon as feasible, even during the first trimester of pregnancy (31,39,341).
# Chest Radiography and HIV-Infected Persons
The radiographic presentation of pulmonary TB in persons infected with HIV might be atypical; apical cavitary disease is less common among such patients. More common chest radiograph findings for HIV-infected persons are infiltrates in any lung zone, mediastinal or hilar adenopathy, or, occasionally, a normal chest radiograph. Typical and cavitary lesions are usually observed in patients with higher CD4 counts, and more atypical patterns are observed in patients with lower CD4 counts (31,49,94,142,349-354). In patients with symptoms and signs of TB, a negative chest radiograph result does not exclude TB disease. Such patients might be candidates for airborne precautions during the medical evaluation.
# Evaluation of Sputum Samples
Sputum examination is a critical diagnostic procedure for pulmonary TB disease (30) and is indicated for the following persons:
- anyone suspected of having pulmonary or laryngeal TB disease;
- persons with chest radiograph findings consistent with TB disease (current, previous, or healed TB);
- persons with symptoms of infection in the lung, pleura, or airways, including larynx;
- HIV-infected persons with any respiratory symptoms or signs, regardless of chest radiograph findings; and
- persons suspected of having pulmonary TB disease for whom bronchoscopy is planned.
# Sputum Specimen Collection
Persons requiring sputum collection for smear and culture should have at least three consecutive sputum specimens obtained, each collected at 8-24-hour intervals (124), with at least one being an early morning specimen (355). Specimens should be collected in a sputum induction booth or in an AII room. In resource-limited settings without environmental containment or when an AII room is not available, sputum collection can be performed safely outside of a building, away from other persons, windows, and ventilation intakes. Patients should be instructed on how to produce an adequate sputum specimen (i.e., one containing little saliva) and should be supervised and observed by an HCW during the collection of sputum, if possible (30). If the patient's specimen is determined to be inadequate, it should still be sent for bacteriologic testing, although the inadequate nature of the specimen should be recorded. The HCW should wear an N95 disposable respirator during sputum collection.
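The collection schedule described above (at least three specimens, 8-24 hours apart, at least one early morning) can be validated programmatically. A minimal sketch follows (Python); note that the 5-10 a.m. window used for "early morning" is an assumption for illustration, because the guideline does not define a clock window:

```python
from datetime import datetime

def valid_sputum_series(collection_times: list[datetime]) -> bool:
    """Check that at least three consecutive specimens are spaced
    8-24 hours apart and at least one is an early-morning specimen
    (assumed here to mean 5-10 a.m.)."""
    if len(collection_times) < 3:
        return False
    times = sorted(collection_times)
    hours_apart = [(b - a).total_seconds() / 3600.0
                   for a, b in zip(times, times[1:])]
    gaps_ok = all(8.0 <= gap <= 24.0 for gap in hours_apart)
    early_morning = any(5 <= t.hour < 10 for t in times)
    return gaps_ok and early_morning
```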
# Sputum Induction
For patients who are unable to produce an adequate sputum specimen, expectoration can be induced by inhalation of an aerosol of warm, hypertonic saline. Because sputum induction is a cough-inducing procedure, pre-treatment with a bronchodilator should be considered in patients with a history of asthma or other chronic obstructive airway diseases. Medical assistance and bronchodilator medication should be available during any sputum induction in the event of induced bronchospasm (109,356,357).
The patient should be seated in a small, well-ventilated sputum induction booth or in an AII room (see Environmental Controls; and Supplement, Environmental Controls). For best results, an ultrasonic nebulizer that generates an aerosol of approximately 5 mL/minute should be used. A 3% hypertonic saline is commercially available, and its safety has been demonstrated. At least 30 mL of 3% saline should be administered; administration of smaller volumes will have a lower yield. Higher concentrations can be used with an adjustment in the dose and closer monitoring for adverse effects.
Patients should be instructed to breathe deeply and cough intermittently. Sputum induction should be continued for up to 15 minutes or until an adequate specimen (containing little saliva) is produced. Induced sputum will often be clear and watery. Any expectorated material produced should be labeled as expectorated sputum and sent to the laboratory.
# Laboratory Examination
Detection of AFB in stained smears by microscopy can provide the first bacteriologic indication of TB disease. Laboratories should report any positive smear results within 24 hours of receipt of the specimen (30). A positive result for AFB in a sputum smear is predictive of increased infectiousness. Smears allow presumptive detection of mycobacteria, but definitive identification, strain typing, and drug-susceptibility testing of M. tuberculosis require that a culture be performed (30). Negative AFB sputum smear results do not exclude a diagnosis of TB disease, especially if clinical suspicion of disease is high. In the United States, approximately 63% of patients with reported positive sputum culture results have positive AFB sputum smear results (26).
A culture of sputum or other clinical specimen that contains M. tuberculosis provides a definitive diagnosis of TB disease. In the majority of cases, identification of M. tuberculosis and drug-susceptibility results are available within 28 days (or 4-6 weeks) when recommended rapid methods such as liquid culture and DNA probes are used. Negative culture results are obtained in approximately 14% of patients with confirmed pulmonary TB disease (4,5). Testing sputum with rapid techniques (e.g., NAA) facilitates the rapid detection and identification of M. tuberculosis but should not replace culture and drug-susceptibility testing in patients with suspected TB disease (30,125,358). Mixed mycobacterial infection can obscure the identification of M. tuberculosis during the laboratory evaluation (e.g., because of cross-contamination or dual infections) and can be distinguished by the use of mycobacterial species-specific DNA probes (359). Examination of colony morphology on solid culture media can also be useful.
Drug-susceptibility tests should be performed on initial isolates from all patients to assist in identifying an effective antituberculosis treatment regimen. Drug-susceptibility tests should be repeated if sputum specimens continue to be culture-positive after 3 months of antituberculosis treatment or if culture results become positive for M. tuberculosis after a period of negative culture results (30,31).
# Bronchoscopy
If possible, bronchoscopy should be avoided in patients with a clinical syndrome consistent with pulmonary or laryngeal TB disease, including those with negative AFB sputum smear results, because bronchoscopy substantially increases the risk for transmission either through an airborne route (63,80,81,162,360) or through a contaminated bronchoscope (80,82,163-169). Microscopic examination of three consecutive sputum specimens obtained at 8-24-hour intervals, with at least one obtained in the early morning, is recommended instead of bronchoscopy, if possible. In a patient who is intubated and mechanically ventilated, closed circuitry can reduce the risk for exposure.
If the suspicion for pulmonary TB disease is high or if the patient is seriously ill with a disorder, either pulmonary or extrapulmonary, that is believed to be TB disease, multidrug antituberculosis treatment using one of the recommended regimens should be initiated promptly, frequently before AFB smear results are known (31). Obtaining three sputum samples is safer than performing bronchoscopy. For AFB smear and culture results, three sputum samples have an increased yield compared with a single specimen (110,357), and induced specimens have a better yield than specimens obtained without induction. Sputum induction is well-tolerated (90,109,132,133,357,361,362), even in children (134,356), and sputum specimens (either spontaneous or induced) should be obtained in all cases before bronchoscopy (109,356,363,364).
When a person suspected of having TB disease is not on a standard antituberculosis treatment regimen, the sputum smear results (possibly including induced specimens) are negative, and a reasonably high suspicion for TB disease remains, additional consideration should be given to initiating treatment for TB disease. If the underlying cause of a radiographic abnormality remains unknown, additional evaluation with bronchoscopy might be indicated; however, in cases where TB disease remains a diagnostic possibility, initiation of a standard antituberculosis treatment regimen for a period before bronchoscopy might reduce the risk for transmission. Bronchoscopy might be valuable in establishing the diagnosis; in addition, a positive culture result can be of both clinical and public health importance because it allows drug-susceptibility testing. Bronchoscopy in patients with suspected or confirmed TB disease should not be undertaken until the risks for transmission of M. tuberculosis have been considered (30,63,81,162,360). If bronchoscopy is performed, because it is a cough-inducing procedure, additional sputum samples for AFB smear and culture should be collected after the procedure to increase the diagnostic yield.
# Treatment Procedures for LTBI and TB Disease
# Treatment for LTBI
Treatment for LTBI is essential to control and eliminate TB disease in the United States because it substantially reduces the risk that infection with M. tuberculosis will progress to TB disease (10,28). Certain groups of persons are at substantially high risk for developing TB disease after being infected, and every effort should be made to begin treatment for LTBI and to ensure that those persons complete the entire course of treatment (Table 3).
Before beginning treatment of LTBI, a diagnosis of TB disease should be excluded by history, medical examination, chest radiography, and, when indicated, bacteriologic studies. In addition, before offering treatment of LTBI, ensure that the patient has not experienced adverse reactions with previous isoniazid (INH) treatment (215).
# Candidates for Treatment of LTBI
Persons in the following groups at high risk should be administered treatment for LTBI if their TST result is ≥5 mm or if their BAMT result is positive, regardless of age (31,39):
- persons infected with HIV;
- recent contacts of a person with TB disease;
- persons with fibrotic changes on chest radiograph consistent with previous TB disease;
- organ transplant recipients; and
- other immunosuppressed persons (e.g., persons receiving ≥15 mg/day of prednisone for ≥1 month).

Persons in the following groups at high risk should be considered for treatment of LTBI if their TST result is ≥10 mm or if their BAMT result is positive:
- infants, children, and adolescents exposed to adults at high risk for developing TB disease.

Persons who use tobacco or alcohol (40,41) or illegal drugs, including injection drugs and crack cocaine (43-48), might also be at increased risk for infection and disease; however, because of the multiple other potential risk factors that commonly occur among such persons, use of these substances has been difficult to identify as a separate risk factor.
Persons with no known risk factors for TB disease can be considered for treatment of LTBI if their TST result is ≥15 mm. However, programs to screen HCWs for infection with M. tuberculosis should be conducted only among groups at high risk. All testing activities should be accompanied by a plan for follow-up care for persons with LTBI or, if found, TB disease. A decision to test for infection with M. tuberculosis should be based on a commitment to treat LTBI after a medical examination (39).
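Taken together, the cut points in this subsection form a three-tier rule. The sketch below (Python; the risk-group labels are simplifications introduced here, not guideline terminology) summarizes them; the individualized medical evaluation still governs the actual decision to treat:

```python
# Hypothetical summary of the graded TST cut points for considering
# treatment of LTBI (a positive BAMT result qualifies regardless of mm).
LTBI_TREATMENT_CUT_POINTS_MM = {
    # e.g., HIV infection, recent contact, fibrotic changes on chest
    # radiograph, organ transplant, other immunosuppression
    "highest_risk": 5,
    # other groups at high risk named above
    "other_high_risk": 10,
    # persons with no known risk factors
    "no_known_risk": 15,
}

def consider_ltbi_treatment(tst_mm: int, risk_group: str) -> bool:
    """True if the TST induration meets the cut point for the group."""
    return tst_mm >= LTBI_TREATMENT_CUT_POINTS_MM[risk_group]
```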
Persons who might not be good candidates for treatment of LTBI include those with a previous history of liver injury or a history of excessive alcohol consumption. Active hepatitis and end-stage liver disease (ESLD) are relative contraindications to the use of INH for treatment of LTBI (39,240). If the decision is made to treat such patients, baseline and follow-up monitoring of serum aminotransaminases should be considered.
For persons who have previous positive TST or BAMT results and who completed treatment for LTBI previously, treating them again is not necessary. Documentation of completed therapy for LTBI is critical. Instead of participating in serial skin testing, the HCW should receive a medical evaluation and a symptom screen annually. A symptom screen is a procedure used during a clinical evaluation in which patients are asked if they have experienced any departure from normal in function, appearance, or sensation related to TB disease (e.g., cough).
Screening HCWs for infection with M. tuberculosis is an essential administrative measure for the control of transmission of M. tuberculosis in health-care settings. By conducting TB screening, ongoing transmission of M. tuberculosis can be detected, and future transmission can be prevented by identifying lapses in infection control and by identifying persons who are infected with M. tuberculosis or who have TB disease. The majority of individual HCWs, however, do not have the risk factors for progression to disease that serve as the basis for the current recommendations for targeted testing and treatment of LTBI, and the majority of HCWs in the United States do not provide care in areas in which the prevalence of TB is high. Therefore, HCWs should be tested as determined by the risk classification for the health-care setting and can be categorized as having a positive test result or conversion for M. tuberculosis infection as part of the TB infection-control program for the purposes of surveillance and referral; they might not necessarily be candidates for treatment of LTBI.
HCWs should receive serial screening for infection with M. tuberculosis (either TST or BAMT), as determined by the health-care setting's risk classification (Appendix B). For infection-control purposes, the results of the testing should be recorded and interpreted as part of the TB infection-control program as 1) a negative TST result, 2) a previously documented positive TST or BAMT result, or 3) a TST or BAMT conversion. All recordings of TST results should document the size of the induration in millimeters, not simply a negative or positive reading. BAMT results should be recorded in detail, including the date of the blood draw, the laboratory interpretation (positive, negative, or indeterminate), and the concentration of the cytokine measured (e.g., IFN-γ) in specific units.
To determine whether treatment for LTBI should be indicated, HCWs should be referred for medical and diagnostic evaluation according to the TST result criteria (Box 5). In conjunction with a medical and diagnostic evaluation, HCWs with positive test results for M. tuberculosis should be considered for treatment of LTBI (Box 5) after TB disease has been excluded by further medical evaluation. HCWs cannot be compelled to take treatment for LTBI, but they should be encouraged to do so if they are eligible for treatment.
HCWs' TST or BAMT results might be considered positive for the purposes of surveillance and referral as part of the TB infection-control program (i.e., meet the criterion for a conversion), and this occurrence is important to note. However, not all of these HCWs will be candidates for treatment of LTBI after the individual medical and diagnostic evaluation. After an HCW has been classified as having a positive result or conversion for M. tuberculosis infection, additional testing for M. tuberculosis infection is not necessary.
# Treatment Regimens for LTBI
For persons suspected of having LTBI, treatment of LTBI should not begin until TB disease has been excluded. Persons highly suspected of having TB disease should receive the standard multidrug antituberculosis treatment regimen for TB disease until the diagnosis is excluded. Standard drug regimens for the treatment of LTBI have been presented (Table 3); however, modifications to those regimens should be considered under certain circumstances, including HIV infection, suspected drug resistance, and pregnancy (47,365).
Reports of severe liver injury and death associated with the combination of rifampin and pyrazinamide (RZ) for treatment of LTBI (366-368) prompted the American Thoracic Society and CDC to revise previous recommendations (39,53) to indicate that RZ generally should not be offered for the treatment of LTBI (240). If the potential benefits substantially outweigh the demonstrated risk for severe liver injury and death associated with this regimen and the patient has no contraindications, a physician with experience treating LTBI and TB disease should be consulted before this regimen is used (246). Clinicians should continue the appropriate use of rifampin and pyrazinamide in standard multidrug antituberculosis treatment regimens for the treatment of TB disease (31).
For all regimens for treatment of LTBI, nonadherence to intermittent dosing (i.e., once or twice weekly) results in a larger proportion of total doses missed than daily dosing. DOT should be used for all doses during the course of treatment of LTBI whenever feasible (31). Collaborate with the local or state health department on decisions regarding DOT arrangements.
Contacts of patients with drug-susceptible TB disease. Persons with a previously negative TST or BAMT result who are contacts of patients with drug-susceptible TB disease and who subsequently have a positive TST result (≥5 mm) or positive BAMT result should be evaluated for treatment of LTBI, regardless of age. The majority of persons who are infected with M. tuberculosis will have a positive TST result within 6 weeks of exposure (74,228,369-371). Therefore, contacts of patients with drug-susceptible TB disease who have negative TST (or BAMT) results should be retested 8-10 weeks after the end of exposure to a patient with suspected or confirmed TB disease. Persons infected with M. tuberculosis should be advised that they can be reinfected with M. tuberculosis if reexposed (246,372-375). Persons infected with HIV or receiving immunosuppressive therapy (regardless of TST result) and persons with a previous positive TST or BAMT result who are close contacts of a person with suspected or confirmed TB disease should be considered for treatment of LTBI.
The interpretation of TST results is more complicated in a contact investigation among HCWs who have negative baseline TST results from two-step testing but where the induration was >0 mm on the baseline TST or subsequent serial testing. Differences in the TST results between the contact investigation and previous baseline and serial TST could be a result of 1) inter-test variability in reaction size; 2) intervening exposure to NTM, BCG, or M. tuberculosis; and 3) reversion. In practice, for TST, only inter-test variability and exposure to or infection with NTM or M. tuberculosis are likely.
Treatment of LTBI should not be started until a diagnosis of TB disease has been excluded. If uncertainty exists concerning the presence of TB disease because of an ambiguous chest radiograph, a standard multidrug antituberculosis treatment regimen can be started and adjusted as necessary based on the results of sputum cultures and the patient's clinical response (31). If cultures are obtained without initiating therapy, treatment for LTBI should not be initiated until all culture results are reported as negative.
Contacts of patients with drug-resistant TB disease. Treatment for LTBI caused by drug-resistant or MDR TB disease is complex and should be conducted in consultation with the local or state health department's infection-control program and experts in the medical management of drug-resistant TB. In certain instances, medical decision making for the person with LTBI will benefit from the results of drug susceptibility testing of the isolate of the index TB case. Treatment should be guided by susceptibility test results from the isolate to which the patient was exposed and presumed to be infected (31,376,377).
# Pretreatment Evaluation and Monitoring of Treatment
The pretreatment evaluation of persons who are targeted for treatment of LTBI provides an opportunity for health-care providers to 1) establish rapport with patients; 2) discuss details of the patient's risk for progression from LTBI to TB disease; 3) explain the benefits of treatment and the importance of adhering to the drug regimen; 4) review possible adverse effects of the regimen, including interactions with other medications; and 5) establish an optimal follow-up plan.
Monitoring for adverse effects of antituberculosis medications must be individualized. Persons receiving treatment for LTBI should be specifically instructed to look for symptoms associated with the most common reactions to the medications they are taking (39). Laboratory testing should be performed to evaluate possible adverse effects (31,39). Routine laboratory monitoring during treatment of LTBI is indicated for patients with abnormal baseline test results and for persons with a risk for hepatic disease. Baseline laboratory testing is indicated for persons infected with HIV, pregnant women, women in the immediate postpartum period (usually within 3 months of delivery), persons with a history of liver disease, persons who use alcohol regularly, and those who have or are at risk for chronic liver disease.
All patients being treated for LTBI should be clinically monitored at least monthly, including a brief clinical assessment conducted in the person's primary language for signs of hepatitis (e.g., nausea, vomiting, abdominal pain, jaundice, and yellow or brown urine). Patients receiving treatment for LTBI should be advised about the adverse effects of the drugs and the need for prompt cessation of treatment and clinical evaluation if adverse effects occur.
Because of the risk for serious hepatic toxicity and death, the use of the combination of RZ for the treatment of LTBI generally should not be offered. If RZ is used, a physician with experience treating LTBI and TB disease should be consulted before the use of this regimen. In addition, more extensive biochemical and clinical monitoring is recommended (240).
# Treatment for TB Disease
Suspected or confirmed TB cases must be reported to the local or state health department in accordance with laws and regulations. Case management for TB disease should be coordinated with officials of the local or state health department. Regimens for treatment of TB disease must contain multiple drugs to which the organisms are susceptible. For persons with TB disease, treatment with a single drug can lead to the development of mycobacterial resistance to that drug. Similarly, adding a single drug to a failing antituberculosis treatment regimen can lead to resistance to the added drug (31).
For the majority of patients, the preferred regimen for treating TB disease consists of a 2-month initial phase of four drugs (INH, rifampin, pyrazinamide, and ethambutol) and at least a 4-month continuation phase of INH and rifampin (for a minimum total treatment of 6 months). Ethambutol may be discontinued if supporting drug susceptibility results are available. Completion of therapy is based on the number of doses taken within a maximal period and not simply on 6 months elapsing (31). Persons with cavitary pulmonary TB disease and positive culture results of sputum specimens at the completion of 2 months of therapy should receive a longer (7-month) continuation phase because of the significantly higher rate of relapse (31).
TB treatment regimens might need to be altered for persons infected with HIV who are receiving ART (49). Whenever feasible, the care of persons with both TB disease and HIV infection should be provided by or in consultation with experts in the management of both TB and HIV-related disease (31). To prevent the emergence of rifampin-resistant organisms, persons with TB disease, HIV infection, and CD4 cell counts of <100 cells/mm³ should not be treated with highly intermittent (i.e., once or twice weekly) regimens. These patients should receive daily treatment during the intensive phase by DOT (if feasible) and daily or three-times-weekly treatment by DOT during the continuation phase (378). Detailed information on TB treatment for persons infected with HIV has been published (31,53).
Drug-susceptibility testing should be performed on all initial isolates from patients with TB disease. When results from drug-susceptibility tests become available, the antituberculosis treatment regimen should be reassessed, and the drugs used in combination should be adjusted accordingly (376,377,379-381). If drug resistance is present, clinicians who are not experts in the management of patients with drug-resistant TB disease should seek expert consultation (31) and collaborate with the local or state health department for treatment decisions.
The major determinant of the outcome of treatment is adherence to the drug regimen. Therefore, careful attention should be paid to measures designed to enable and foster adherence (31,319,382). DOT is an adherence-enhancing strategy in which a trained HCW or other specially trained person watches a patient swallow each dose of medication and records the dates that the DOT was observed. DOT is the standard of care for all patients with TB disease and should be used for all doses during the course of therapy for TB disease and for LTBI, whenever feasible. Plans for DOT should be coordinated with the local or state health department (31).
# Reporting Serious Adverse Events
HCWs should report serious adverse events associated with the administration of tuberculin antigen or treatment of LTBI or TB disease to the FDA MedWatch Adverse Event Reporting System (AERS); telephone: 800-FDA-1088; fax: 800-FDA-0178. Reports can be submitted on MedWatch Report Form 3500, which is also available in the Physicians' Desk Reference. Specific instructions for the types of adverse events that should be reported are included in MedWatch report forms.
# Surveillance and Detection of M. tuberculosis Infections in Health-Care Settings
TB disease should be considered for any patient who has symptoms or signs of disease, including coughing for ≥3 weeks, loss of appetite, unexplained weight loss, night sweats, bloody sputum or hemoptysis, hoarseness, fever, fatigue, or chest pain. The index of suspicion for TB disease will vary by individual risk factors, geographic area, and prevalence of TB disease in the population served by the health-care setting. Persons exposed to patients with infectious TB disease might acquire LTBI, depending on host immunity and the degree and duration of exposure. Diagnostic tests for TB disease include chest radiography and laboratory tests of sputum (examination for AFB and culture). Treating persons with TB disease and LTBI is a vital aspect of TB control: treating TB disease stops transmission of M. tuberculosis, and treating LTBI prevents persons with LTBI from developing infectious TB disease (36).
In the majority of the U.S. population, targeted testing for LTBI and TB disease is performed to identify persons with LTBI and TB disease who would benefit from treatment. Therefore, all testing activities should be accompanied by a plan for follow-up care of persons with LTBI or TB disease. A decision to test for infection with M. tuberculosis should be based on a commitment to treat LTBI after a medical examination (39). Health-care agencies or other settings should consult with the local or state health department before starting a program to test HCWs for M. tuberculosis infection. This step ensures that adequate provisions are in place for the evaluation and treatment of persons whose test results are positive, including the medical supervision of the course of treatment for those who are treated for LTBI or TB disease.
Groups that are not at high risk for LTBI or TB disease should not be tested routinely because testing in populations at low risk diverts resources from other priority activities. In addition, testing persons at low risk for M. tuberculosis infection is discouraged because a substantial proportion of persons from populations at low risk who have positive TST results might actually have false-positive TST results and might not represent true infection with M. tuberculosis (39,316). Testing for infection with M. tuberculosis should be performed for well-defined groups at high risk. These groups can be divided into two categories: 1) persons at higher risk for exposure to and infection with M. tuberculosis and 2) persons at higher risk for progression from LTBI to TB disease. Flexibility is needed in defining high-priority groups for TB screening. The changing epidemiology of TB indicates that the risk for TB among groups currently considered high priority might decrease over time, and groups not originally identified as being at high risk might later need to be considered high priority.
# Baseline Testing with BAMT
For the purposes of establishing a baseline, a single negative BAMT result is sufficient evidence that the HCW is probably not infected with M. tuberculosis (Box 2). However, caution is warranted when interpreting negative results in persons with conditions that increase the risk for progression from M. tuberculosis infection to TB disease (Box 4).
If BAMT is used for baseline testing of HCWs, including those in settings that are low risk, one negative BAMT result is sufficient to demonstrate that the HCW is not infected with M. tuberculosis (Box 2). Perform and document the baseline BAMT result preferably within 10 days of starting employment. HCWs with positive baseline results should be referred for a medical and diagnostic evaluation to exclude TB disease and then treatment for LTBI should be considered in accordance with CDC guidelines. Persons with a positive BAMT result do not need to be tested again for surveillance. For HCWs who have indeterminate test results, providers should consult the responsible laboratorian for advice on interpreting the result and making additional decisions (383).
# Serial Testing with BAMT for Infection-Control Surveillance
When using BAMT for serial testing, a conversion for administrative purposes is a change from a negative to a positive result (Box 3). For HCWs who have indeterminate test results, providers should consult the responsible laboratorian for advice on interpreting the result and making additional decisions (383). Persons with indeterminate results should not be counted for administrative calculations of conversion rates.
# Exposure of HCWs and Patients to M. tuberculosis
# Known and Presumed Exposure
For HCWs with known and presumed exposure to M. tuberculosis, administer a symptom screen and obtain the BAMT result. A BAMT conversion probably indicates recent M. tuberculosis infection; therefore, TB disease must be excluded. Experience with BAMT in contact investigations is limited. Specific attention is needed in the management of certain populations (e.g., infants and children aged <4 years and immunocompromised persons, including those who are HIV-infected) (Box 4).
If the symptom screen or the BAMT result is positive, the exposed person should be evaluated for TB disease promptly, which includes a chest radiograph. If TB disease is excluded, additional medical and diagnostic evaluation for LTBI is needed, which includes a judgment regarding the extent of exposure.
# BOX 4. Conditions requiring caution in interpreting negative QuantiFERON ® -TB Gold test results
- Human immunodeficiency virus (HIV) infection or acquired immunodeficiency syndrome (AIDS)
- Immunosuppressive drugs, including those used for managing organ transplantation
- Treatment with tumor necrosis factor-alpha (TNF-α) antagonists
- Diabetes mellitus
- Silicosis
- Chronic renal failure
- Certain hematological disorders (e.g., leukemias and lymphomas)
- Other specific malignancies (e.g., carcinoma of the head, neck, or lung)
# Performing QFT-G
The QFT-G should be performed as described in the product insert provided with the BAMT kit. This insert is also available from the manufacturer's website.
# Interpretation of BAMT Results and Referral for Evaluation
HCWs who meet the criteria for referral should have a medical and diagnostic evaluation (see Supplements, Estimating the Infectiousness of a TB Patient; Diagnostic Procedures for LTBI and TB Disease; and Treatment Procedures for LTBI and TB Disease). The factors affecting treatment decisions during medical and diagnostic evaluation by risk for infection with M. tuberculosis have been presented (Box 5). In addition, because BAMT and other indirect tests for M. tuberculosis infection are diagnostic aids, the test results must be interpreted in the context of epidemiologic, historical, physical, and diagnostic findings. A higher likelihood of infection, as estimated from historical or epidemiologic details (e.g., exposure to M. tuberculosis) or because of the presence of an illness consistent with TB disease, increases the predictive value of a positive result. Setting-based risk factors (e.g., the prevalence of TB disease in the setting) should be considered when making decisions regarding the diagnosis and treatment of LTBI.
Medical conditions that impair or alter immune function (Box 4) decrease the predictive value of a negative result, and additional diagnostic methods (e.g., bacteriology, radiography, and histology) are required as evidence before excluding M. tuberculosis infection when the BAMT result is negative. Medical evaluations can occur in different settings, including an occupational health clinic, local or state health department, hospital, or private medical clinic.
Indeterminate QFT-G results are reported for either of two test conditions.
- The IFN-γ responses to all antigens (ESAT-6, CFP-10, and mitogen) are below a cut-off threshold. The weak response to mitogen could be caused by nonstandard storage or transportation of the blood sample, by laboratory errors, or by lymphocytic insensitivity caused by immune dysfunction. OR,
- The IFN-γ response to the Nil exceeds a specified threshold, and the responses to both ESAT-6 and CFP-10 do not exceed the response to Nil by at least 50%. This response could be caused by nonstandard storage or transportation, laboratory errors, or circulating IFN-γ, which can be increased in ill HCWs or patients. For HCWs who have indeterminate test results, providers should consult the responsible laboratorian for advice on interpreting the result and making further decisions (383).
# Interpreting the BAMT Result for Infection Control and Surveillance
BAMT conversion rates should be determined routinely. The precision of the BAMT conversion rate will depend, in part, on the number of HCWs tested, which should be considered when establishing a regular interval for evaluation and monitoring of HCWs with BAMT. Health-care settings with a substantial number of HCWs might have testing schedules that can accurately determine the BAMT conversion rate each month (i.e., from annual results of an HCW cohort tested within the given month), if testing is staggered throughout the year. BAMT conversion rates are more difficult to calculate in settings with fewer test results.
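The arithmetic behind a conversion rate is simple but easy to get wrong when indeterminate results must be excluded (see Serial Testing with BAMT for Infection-Control Surveillance). The following minimal Python sketch illustrates one way to compute the rate; the function and data layout are hypothetical illustrations, not part of any CDC-specified system.

```python
# Minimal sketch of a BAMT conversion-rate calculation for infection-control
# surveillance. All names are hypothetical; indeterminate results are
# excluded from the calculation, per the guidance above.

def bamt_conversion_rate(results):
    """results: list of (baseline, followup) pairs, each 'neg', 'pos', or 'indet'.
    Returns the conversion rate among HCWs with interpretable paired results."""
    conversions = 0
    evaluable = 0
    for baseline, followup in results:
        if "indet" in (baseline, followup):
            continue  # indeterminate results are not counted in conversion rates
        if baseline == "neg":
            evaluable += 1
            if followup == "pos":
                conversions += 1  # a negative-to-positive change is a conversion
    return conversions / evaluable if evaluable else 0.0

# Example: 200 HCWs tested, 3 conversions, 5 indeterminate pairs excluded.
cohort = [("neg", "pos")] * 3 + [("neg", "neg")] * 192 + [("neg", "indet")] * 5
print(f"{bamt_conversion_rate(cohort):.1%}")  # 1.5% (3 of 195 evaluable)
```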
# QC Program for the BAMT
Multiple processes are necessary to assure quality BAMT results: specimen collection, transport and handling, and conducting the test in the laboratory. BAMT must meet performance parameters for a valid test result to be achieved. QC is an ongoing laboratory issue. The infection-control team should assist the laboratory in assuring that all requisite conditions are present. The laboratory performing the BAMT will be required to validate its performance of the test before processing clinical samples. State and federal laboratory requirements regulate laboratory-testing procedures.
# Additional Considerations
An indeterminate QFT-G result does not mean that the test has failed; it indicates that the specimen has inadequate responsiveness for the test to be performed. This result might reflect the condition of the HCW or patient, who, for example, might be immunosuppressed. Alternatively, the specimen might have been handled incorrectly. For HCWs who have indeterminate test results, providers should consult the responsible laboratorian for advice on interpreting the result and making further decisions (383). Skin testing for cutaneous anergy is not useful in screening for asymptomatic LTBI or for diagnosing TB disease (339).
QFT-G use with HIV-infected persons taking ART. The effects of HIV infection and of ART on the performance of the QFT-G have not been fully evaluated.
Persons aged <17 years or pregnant women. The use of the QFT-G has not been evaluated in persons aged <17 years or pregnant women (35).
Booster phenomenon and BAMT. BAMT does not involve the injection of any substance into the persons being tested and is not affected by the booster phenomenon.
BCG vaccination. In the United States, vaccination with BCG is not routinely recommended (227). However, BCG is the most commonly used vaccine in the world. Foreign-born HCWs might have received BCG vaccination, which can cause a false-positive TST result; BAMT results are not affected by previous BCG vaccination.
# BOX 5. Factors affecting treatment decisions during the medical and diagnostic evaluation, by tuberculin skin test (TST) result
TST result ≥5 mm is positive* | TST result ≥10 mm is positive | TST result ≥15 mm is positive
# Environmental Controls Overview
Environmental controls include the following technologies to remove or inactivate M. tuberculosis: local exhaust ventilation, general ventilation, HEPA filtration, and UVGI. These controls help to prevent the spread and reduce the concentration of airborne infectious droplet nuclei. Environmental controls are the second line of defense in the TB infection-control program, and they work in harmony with administrative controls.
The reduction of exposures to M. tuberculosis can be facilitated through the effective use of environmental controls at the source of exposure (e.g., coughing patient or laboratory specimen) or in the general workplace environment. Source control is amenable to situations where the source has been identified and the generation of the contaminant is localized. Source-control techniques can prevent or reduce the spread of infectious droplet nuclei into the air by collecting infectious particles as they are released. These techniques are especially critical during procedures that will probably generate infectious aerosols (e.g., bronchoscopy, sputum induction, endotracheal intubation, suctioning, irrigating TB abscesses, aerosol treatments, autopsies on cadavers with untreated TB disease, and certain laboratory specimen manipulations) and when patients with infectious TB disease are coughing or sneezing.
Unsuspected and undiagnosed cases of infectious TB disease are believed to represent a substantial proportion of the current risk to HCWs (10,85). In such situations, source control is not a feasible option. Instead, general ventilation and air cleaning must be relied upon for control. General ventilation can be used to dilute the air and remove air contaminants and to control airflow patterns in rooms or in a health-care setting. Air-cleaning technologies include HEPA filtration to reduce the concentration of M. tuberculosis droplet nuclei and UVGI to kill or inactivate the microorganisms so that they no longer pose a risk for infection.
Ventilation systems for health-care settings should be designed, and modified when necessary, by ventilation engineers in collaboration with infection-control practitioners and occupational health staff. Recommendations for designing and operating ventilation systems have been published (117,118,178). The multiple types and conditions for use of ventilation systems in health-care settings and the needs of persons in these settings preclude the provision of extensive guidance in this document.
The information in this section is conceptual and intended to educate HCWs regarding environmental controls and how these controls can be used in the TB infection-control program. This information should not be used in place of consultation with experts who can give advice on ventilation system design, selection, installation, and maintenance. Because environmental controls will fail if they are not properly operated and maintained, routine training and education of staff are key components to a successful TB infection-control program. These guidelines do not specifically address mechanical ventilators in detail (see Intensive Care Units).
# Local Exhaust Ventilation
Local exhaust ventilation captures airborne contaminants at or near their source and removes them without exposing persons in the area to infectious agents. This method is considered the most efficient way to remove airborne contaminants because it captures them before they can disperse. Local exhaust devices typically use hoods. Two types of hoods are 1) enclosing devices, in which the hood either partially or fully encloses the infectious source; and 2) exterior devices, in which the infectious source is near but outside the hood. Fully enclosed hoods, booths, or tents are always preferable to exterior devices because of their superior ability to prevent contaminants from escaping into the HCW's breathing space. Descriptions of both enclosing and exterior devices have been published (178).
# Enclosing Devices
Enclosing devices for local exhaust ventilation include 1) booths for sputum induction or administration of aerosolized medications (Figure 2), 2) tents or hoods for enclosing and isolating a patient, and 3) BSCs (165). These devices are available in various configurations. The simplest device is a tent placed over the patient; the tent has an exhaust connection to the room-discharge exhaust system. The most complex device is an enclosure with a self-contained airflow and recirculation system (Figure 2).
Tents and booths should have sufficient airflow to remove at least 99% of airborne particles during the interval between the departure of one patient and the arrival of the next (Table 1). The time required to remove 99% or 99.9% of airborne particles from an enclosed space depends on 1) the number of ACH, which is a function of the volume (number of cubic feet of air) in the room or booth and the rate at which air is exiting the room or booth at the intake source; 2) the location of the ventilation inlet and outlet; and 3) the configuration of the room or booth. The surfaces of tents and booths should be periodically cleaned in accordance with recommendations and guidance from the manufacturers (see Supplement, Cleaning, Disinfecting, and Sterilizing Patient-Care Equipment and Rooms).
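The removal times referenced above (Table 1) follow from a well-mixed-room dilution model. As an illustrative sketch (assuming perfect air mixing, which real booths and rooms only approximate), the time to remove 99% or 99.9% of airborne particles can be estimated as follows:

```python
import math

def removal_minutes(ach, removal_efficiency):
    """Estimate minutes to reach a given removal efficiency (e.g., 0.99 or
    0.999) in a perfectly mixed space ventilated at `ach` air changes per
    hour: t = (60 / ACH) * ln(C0 / Ct). Imperfectly mixed rooms take longer."""
    return (60.0 / ach) * math.log(1.0 / (1.0 - removal_efficiency))

for ach in (6, 12, 20):
    print(ach, round(removal_minutes(ach, 0.99)), round(removal_minutes(ach, 0.999)))
# 6 ACH -> ~46 and ~69 minutes; 12 ACH -> ~23 and ~35 minutes
```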
# Exterior Devices
Exterior devices for local exhaust ventilation are usually hoods that are near to but not enclosing an infectious patient. The airflow produced by these devices should be sufficient to prevent cross-currents of air near the patient's face from allowing droplet nuclei to escape. Whenever possible, the patient should face directly into the opening of the hood to direct any coughing or sneezing into the hood. The device should maintain an air velocity of 200 feet per minute (fpm) at the patient's breathing zone to ensure the capture of droplet nuclei. Smoke tubes should be used to verify that the control velocity at the typical location of the patient's breathing zone is adequate to provide capture for the condition of highest expected cross-drafts and then the patient's breathing zone should be maintained at this location for the duration of the treatment.
# Discharge of Exhaust from Booths, Tents, and Hoods
Air from booths, tents, and hoods is either discharged into the room in which the device is located or to the outside. If the exhaust air is discharged into the room, a HEPA filter should be incorporated at the discharge duct or vent of the device. The exhaust fan should be located on the discharge side of the HEPA filter to ensure that the air pressure in the filter housing and booth is negative compared with adjacent areas. Uncontaminated air from the room will flow into the booth through all openings, preventing infectious droplet nuclei in the booth from escaping into the room. Additional information on the installation, maintenance, and monitoring of HEPA filters is included in this report (Appendix A).
The majority of commercially available booths, tents, and hoods are fitted with HEPA filters; additional HEPA filtration is not needed with these devices. If a device does not incorporate a HEPA filter, the air from the device should be exhausted directly to the outside and away from air-intake vents, persons, and animals, in accordance with applicable federal, state, and local regulations on environmental discharges.
# General Ventilation
General ventilation is used to 1) dilute and remove contaminated air, 2) control the direction of airflow in a health-care setting, and 3) control airflow patterns in rooms.
# Dilution and Removal of Contaminated Air
General ventilation maintains air quality by both air dilution and removal of airborne contaminants. Uncontaminated supply air mixes with contaminated room air (dilution), and air is subsequently removed from the room by the exhaust system (removal). These processes reduce the concentration of droplet nuclei in the room air.
Ventilation systems for air dilution and removal. Two types of general ventilation systems are used to dilute and remove contaminated air: single-pass air systems and recirculating air systems.
In a single-pass air system, the supply air is either outside air that has been heated or cooled or uncontaminated air from a central system that supplies multiple areas. After air passes through the room or area, 100% of the air is exhausted to the outside. A single-pass system is the preferred choice for an AII room because the system prevents contaminated air from being recirculated to other areas of the health-care setting. In a recirculating air system, a limited portion of the exhaust air is discharged directly to the outside and replaced with fresh outside air, which mixes with the portion of exhaust air that was not discharged. If the resulting air mixture is not treated, it can contain a substantial proportion of contaminated air when it is recirculated to areas serviced by the system. This air mixture can be recirculated into the general ventilation, and infectious particles can be carried from contaminated areas to uncontaminated areas. Alternatively, the air mixture can be recirculated within a specific room or area so that other areas are not affected. The use of air-cleaning technologies for removing or inactivating infectious particles in recirculated air systems has been discussed (Appendix A).
# FIGURE 2. An enclosing booth designed to sweep air past a patient with tuberculosis disease and collect the infectious droplet nuclei on a high efficiency particulate air (HEPA) filter
Delivery of general ventilation. General ventilation is delivered by either constant air volume (CAV) systems or VAV systems. In general, CAV systems are best for AII rooms and other negative-pressure rooms because the negative-pressure differential is easier to maintain. VAV systems are acceptable if provisions are made to maintain the minimum mechanical and outside ACH and a negative pressure ≥0.01 inch of water gauge compared with adjacent areas at all times.
Ventilation rates. Recommended ventilation rates (air change rates) for health-care settings are usually expressed in numbers of ACH, which is the ratio of the volume of air entering the room per hour to the room volume. ACH equals the exhaust airflow (Q, in cubic feet per minute) divided by the room volume (V, in cubic feet), multiplied by 60.

ACH = (Q ÷ V) × 60
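As a worked example of this formula (a minimal sketch; the room dimensions and flow rate are illustrative only):

```python
def air_changes_per_hour(exhaust_cfm, room_volume_cubic_feet):
    """ACH = (Q / V) x 60, with Q in cubic feet per minute and V in cubic feet."""
    return (exhaust_cfm / room_volume_cubic_feet) * 60.0

# Illustrative room: 15 ft x 10 ft x 8 ft = 1,200 cubic feet, exhausted at 240 cfm.
print(air_changes_per_hour(240, 15 * 10 * 8))  # 12.0 ACH
```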
Ventilation recommendations for selected areas in new or renovated health-care settings have been presented (Table 2). These recommendations have been adapted from those published by AIA (118). The feasibility of achieving a specific ventilation rate depends on the construction and operational requirements of the ventilation system and might differ for retrofitted and newly constructed facilities. The expense and effort of achieving a high ventilation rate might be reasonable for new construction but less reasonable when retrofitting an existing setting.
In existing settings, air-cleaning technologies (e.g., fixed or portable room-air recirculation units or UVGI) can be used to increase the equivalent ACH. This equivalent ventilation concept has been used to compare microbial inactivation by UVGI with particle removal by mechanical ventilation (384,385) and to compare particle removal by HEPA filtration of recirculated air with particle removal by mechanical ventilation. The equivalent ventilation approach does not, however, negate the requirement to provide sufficient fresh outside air for occupant comfort (Table 2).
To dilute the concentration of normal room-air contaminants and minimize odors, a portion of the supply air should come from the outdoors (Table 2). Health-care settings should consult the American Society of Heating, Refrigerating, and Air-Conditioning Engineers, Inc. (ASHRAE), Standard 62.1, Ventilation for Acceptable Indoor Air Quality, for outside air recommendations in areas not listed in this report (386).
# Control of Airflow Direction in a Health-Care Setting
Airflow direction is controlled in health-care settings to contain contaminated air and prevent its spread to uncontaminated areas.
Directional airflow. The general ventilation system should be designed and balanced so that air flows from less contaminated (more clean) to more contaminated (less clean) areas (117,118). For example, air should flow from corridors (cleaner areas) into AII rooms (less clean areas) to prevent the spread of contaminants. In certain rooms in which surgical and invasive procedures are performed and in protective environment (PE) rooms, the direction of airflow should be from the room to the hallway. Environmental control recommendations for situations involving the care and treatment of patients with TB disease in ORs and PE rooms have been presented (see Other Selected Settings). Cough-inducing or aerosol-generating procedures should not be performed on patients with suspected or confirmed TB disease in rooms where air flows from the room to the hallway.
Negative pressure for achieving directional airflow. The direction of airflow is controlled by creating a lower (negative) pressure in the area into which the flow of air is desired. Negative pressure is the relative air-pressure difference between two areas in a health-care setting. For air to flow from one area to another, the air pressure in the two areas must be different. Air will flow from a higher-pressure area to a lower-pressure area. A room that is under negative pressure has a lower pressure than adjacent areas, which keeps air flowing from the adjacent rooms or areas into the room. Negative pressure is achieved by exhausting air at a higher volumetric rate than the rate at which air is supplied.
# Control of Airflow Patterns in Rooms
General ventilation systems should be designed to provide controlled patterns of airflow in rooms and to prevent air stagnation or short-circuiting of air from the supply to the exhaust (i.e., passage of air directly from the air supply to the exhaust). To provide controlled airflow patterns, the air supply and exhaust should be located so that clean air flows first to parts of the room where HCWs probably work and then across the infectious source and into the exhaust. Therefore, HCWs are not positioned between the infectious source and the exhaust. This configuration is not always possible but should be used whenever feasible.
One way to achieve a controlled airflow pattern is to supply air at the side of the room opposite the patient and exhaust it from the side where the patient is located (Figure 3). Another method, which is most effective when the supply air is cooler than the room air, is to supply air near the ceiling and exhaust it near the floor (Figure 3). Care must be taken to ensure that furniture or moveable equipment does not block the low exhausts. Airflow patterns are affected by air temperature differentials, location of the supply diffusers and exhaust grilles, location of furniture, movement of HCWs and patients, and the configuration of the space.
If the room ventilation is not designed for a plug-flow type of airflow pattern (Figure 3), then adequate mixing must be maintained to minimize air stagnation. The majority of rooms with properly installed supply diffusers and exhaust grilles will have adequate mixing. A qualitative measure of mixing is the visualization of air movement with smoke tubes at multiple locations in the room. Smoke movement in all areas of the room indicates good mixing. Additional sophisticated studies can be conducted by using a tracer gas to quantify air-mixing and air-exchange rates.
If areas of air stagnation are present, air mixing can be improved by adding a circulating fan or repositioning the supply and exhaust vents. Room-air recirculation units positioned in the room or installed above the ceiling can also improve air mixing. If supply or exhaust vents, circulating fans, or room-air recirculation units are placed incorrectly, HCWs might not be adequately protected.
# Achieving Negative Pressure in Rooms
Negative pressure is needed to control the direction of airflow between selected rooms in a health-care setting and their adjacent spaces to prevent contaminated air from escaping from the room into other areas (118) (Figure 4). Control of a room's differential airflow and total leakage area is critical to achieving and maintaining negative pressure. Differential airflow, differential pressure, and leakage area are interrelated. This relation is illustrated (Figure 4) and is expressed in an empirical equation (387).
A_E = 0.01138 × (∆Q^1.170 ÷ ∆P^0.602)

In the equation, A_E is the leakage area in square inches; ∆Q is the differential airflow rate in cfm; and ∆P is the differential pressure in inches of water gauge. This empirical equation underlies Figure 4 and indicates that changing one parameter will influence one or both of the other parameters. For example, the control of differential pressure can frequently be improved by increasing the air tightness (seal) of a room, maintaining the HVAC system, and ensuring continuous monitoring. In a room that is already substantially tight (e.g., with 10 square inches of leakage), however, a small change in differential airflow will have a substantial effect on differential pressure. Similarly, a room with a more substantial leakage area (e.g., 300 square inches of leakage) requires a higher differential airflow rate to achieve a pressure differential of 0.01 inch of water gauge. Reducing the leakage in a room with 300 square inches of leakage can help achieve a pressure differential of 0.01 inch of water gauge (Figure 4). If the leakage area is reduced to approximately 40 square inches, a pressure differential of 0.01 inch of water gauge can be achieved by exhausting approximately 100 cubic feet per minute (cfm) more air from the room than is supplied to the room.
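As a numerical check of the empirical equation, the example values cited above can be reproduced with the following sketch (the constant and exponents are those of the equation; real rooms deviate from this idealization):

```python
def leakage_area_sq_in(delta_q_cfm, delta_p_in_wg):
    """Empirical leakage area A_E (square inches) from differential airflow
    (cfm) and differential pressure (inches of water gauge)."""
    return 0.01138 * (delta_q_cfm ** 1.170) / (delta_p_in_wg ** 0.602)

def required_delta_q_cfm(leakage_sq_in, delta_p_in_wg):
    """Invert the equation: exhaust-minus-supply airflow needed to hold a
    target pressure differential for a given leakage area."""
    return (leakage_sq_in * (delta_p_in_wg ** 0.602) / 0.01138) ** (1.0 / 1.170)

print(leakage_area_sq_in(100, 0.01))    # ~40 square inches, as cited above
print(required_delta_q_cfm(300, 0.01))  # ~560 cfm needed for a leakier room
```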
Room leakage can occur through cracks or spaces near doors, windows, ceiling, and utility connections. Steps should be taken to minimize these leaks. Changes in the performance of the HVAC system will affect the pressure differential in a room and can potentially cause a negative-pressure room to become positive-pressure. Therefore, each of these parameters requires close monitoring to ensure that minor changes in the performance of the HVAC system do not adversely affect the entire system (388,389).
Pressure differential. To achieve negative pressure in a room that has a normally functioning ventilation system, first measure and balance the supply and exhaust airflows to achieve an exhaust flow higher than the supply flow. Next, measure the pressure differential across the closed door. Although the minimum pressure difference needed for airflow into a room is very small (approximately 0.001 inch of water gauge), a pressure differential of ≥0.01 inch of water gauge is recommended. This higher pressure differential is easier to measure and offers a margin of safety for maintaining negative pressure as the pressure in surrounding areas changes because of the opening and closing of doors, operation of elevators, stack effect (rising of warm air, similar to a chimney), ventilation system fluctuations, and other factors. The higher pressurization value is consistent with the most recent AIA recommendations for airborne precautions in health-care settings (118) and is the generally accepted level of negative pressurization for microbiology and biomedical laboratories (390).
Opening doors and windows can substantially affect the negative pressure in an AII room. Infection-control criteria require AII room windows and doors to remain closed, except when doors must be opened for persons to enter or leave the room. Keeping certain doors in the corridor outside the AII rooms closed might be necessary to maintain the negative-pressure differential between an AII room and the corridor. Pressurization cannot be maintained in rooms or spaces that are not enclosed.
If ≥0.01 inch of water gauge is not achieved and cannot be achieved by increasing the flow differential (within the limits of the ventilation system), the room should be inspected for leakage. The total room leakage can be estimated from the previously measured pressure and airflow differentials (Figure 4). If the room leakage is too substantial (e.g., 300 square inches), maintaining a negative-pressure differential as high as 0.01 inch of water gauge might be difficult. A lower value is acceptable if air-pressure monitoring indicates that negative pressure is always maintained (or airflow indicators consistently demonstrate that air is flowing in the desired direction). If negative pressure cannot be maintained, the leakage area might need to be reduced by sealing cracks around windows or replacing porous suspended ceiling panels with gasketed or sealed solid panels.
Because negative pressure in an AII room can be affected by even minimal changes in the operation of the ventilation system, negative pressure can be difficult to maintain with a VAV ventilation system. To maintain negative pressure, a VAV supply system should be coupled with a compensating exhaust system that increases when the supply flow rate increases. Alternatively, the exhaust can be set at a fixed rate that ensures negative pressure throughout the VAV supply cycle. The VAV minimum flow rate must also be adequate to maintain the recommended minimum mechanical and outdoor ACH (Table 2).
Alternate methods for achieving negative pressure. An anteroom is not a substitute for negative pressure in an AII room. However, an anteroom can reduce the escape of droplet nuclei during the opening and closing of the door to an AII room and can buffer an AII room from pressure fluctuations in the corridor. To function properly, an anteroom must have more air exhausted from the room than supplied to remove M. tuberculosis that can enter from the AII room. An anteroom can also have its own supply diffuser, if needed, to balance the pressure with the corridor. If an anteroom is unventilated or not properly ventilated, it will function only as a lesser contaminated vestibule between the AII room and the corridor and might not prevent the escape of droplet nuclei into the corridor. To adjust airflow and pressure differentials, health-care settings should consult a ventilation engineer who is knowledgeable regarding all applicable regulations, including building fire codes.
If the desired negative pressure cannot be achieved because a room does not have a separate ventilation system or a system that can provide the proper airflow, steps should be taken to provide a method to discharge air from an AII room. One method to achieve negative pressure in a room is to add a supplemental exhaust unit. If an AII room has a window or an outside wall, a small exhaust fan can be used. An engineer should be consulted to evaluate the potential for negative effects on surrounding areas (e.g., disruption of exhaust airflow in adjoining bathrooms) and to ensure the provision of the recommended amounts of outdoor air. The exhaust must not be discharged where it can immediately re-enter the building or pose a hazard to persons outside.
Fixed room-air recirculation systems (i.e., systems that recirculate the air in an entire AII room) can be designed to achieve negative pressure by discharging a portion of the air to the outside. Some portable room-air recirculation units are also designed to discharge air to the outside to achieve negative pressure. These air cleaners must be designed specifically for this purpose.

Monitoring negative pressure. Negative pressure must be monitored to ensure that air is always flowing from the corridor (or surrounding area) into the AII room. Negative pressure can be monitored either continuously or periodically. Monitoring methods include chemical aerosols (e.g., smoke tube), differential pressure-sensing devices (e.g., manometer), and physical indicators (e.g., flutter strips).
A chemical aerosol resembling smoke can be used to observe airflow between a room and the surrounding area, or within a room. Devices called smoke tubes generate the chemical aerosol resembling smoke, which follows the local air currents wherever it is released. To check the negative pressure in a room, hold a smoke tube approximately 2 inches in front of the base of the closed door of the AII room or in front of the air transfer grille, if the door has such a feature. Hold the smoke tube parallel to the door. A small amount of smoke should be generated slowly to ensure that the velocity of smoke emanating from the tube does not overpower the air velocity (Figure 5). If the room is under negative pressure, the smoke will travel into the room (from higher to lower pressure). If the room is not under negative pressure, the smoke will be blown outward or stay in front of the door. Room air cleaners in the room should be operating. Persons using smoke tubes should avoid inhaling the smoke, because direct inhalation of high concentrations of the smoke can be irritating (391) (Figure 5).
Manometers are used to monitor negative pressure. They provide either periodic (noncontinuous) pressure measurements or continuous pressure monitoring. A continuous monitoring indicator can simply be a visible or audible warning signal indicating that air pressure is positive. Both periodic and continuous pressure detectors generate a digital or analog signal that can be recorded for later verification or used to automatically adjust the room's ventilation control system.
Physical indicators (e.g., flutter strips) are occasionally used to provide a continuous visual sign that a room is under negative pressure. These simple and inexpensive devices are placed directly in the door and can be useful in identifying a pressure differential problem.
Pressure-measuring devices should sense the pressure just inside the airflow path into the AII room (e.g., at the base of the door). Unusual airflow patterns can cause pressure variations. For example, the air can be under negative pressure at the middle of a door and under positive pressure at the base of the same door. The ideal location of a pressure-measuring device has been illustrated (Figure 6). If the pressure-sensing ports of the device cannot be located directly across the airflow path, validating that the negative pressure at the sensing point is and remains the same as the negative pressure across the flow path might be necessary.
Pressure-sensing devices should incorporate an audible warning with a time delay to indicate an open door. When a door is open, the negative pressure cannot be maintained, but this situation should not generate an alarm unless the door is left open. Therefore, the time delay should allow adequate time for persons to enter or leave an AII room without activating the alarm.
The pressure differentials used to achieve low negative pressure (<0.005 inch) require the use of substantially sensitive mechanical devices, electronic devices, or pressure gauges to ensure accurate measurements. Pressure-measuring and monitoring devices can give false readings if the calibration has drifted. For example, a sensor might indicate that the room pressure is slightly negative compared with the corridor, but, because of air-current momentum effects or "drift" of the electrical signal, air might actually be flowing out of the AII room through the opening at the base of the door. In one study of 38 AII rooms with electrical or mechanical devices to continuously monitor air pressurization, one half had airflow at the door in the opposite direction of that indicated by the continuous monitors (392). The investigators attributed this problem to instrument limitations and device malfunction. A negative pressure differential of ≥0.01 inch of water gauge (compared with the previously recommended 0.001 inch of water gauge) might help to minimize this problem.
Periodic checks are required to maintain the desired negative pressure and the optimal operation of monitoring devices.
- AII rooms should be checked for negative pressure before occupancy.
- When occupied by a patient, an AII room should be checked daily for negative pressure with smoke tubes or other visual checks.
- If pressure-sensing devices are used in AII rooms occupied by patients with suspected or confirmed TB disease, negative pressure should also be verified daily by using smoke tubes or other visual checks.
- If AII rooms are not being used for patients who have suspected or confirmed TB disease but potentially could be used for such patients, negative pressure should be checked monthly.
- Laboratories should be checked daily for negative pressure.

# FIGURE 5. Smoke tube testing and manometer placement to determine the direction of airflow into and out of a room
# AII Rooms and Other Negative-Pressure Rooms
AII rooms are used to 1) separate patients who probably have infectious TB from other persons, 2) provide an environment in which environmental factors are controlled to reduce the concentration of droplet nuclei, and 3) prevent the escape of droplet nuclei from such rooms into adjacent areas using directional airflow. Other negative-pressure rooms include bronchoscopy suites, sputum induction rooms, selected examination and treatment rooms, autopsy suites, and clinical laboratories.
Preventing the escape of droplet nuclei. AII rooms used for TB isolation should be single-patient rooms with negative pressure, compared with the corridor or other areas connected to the room. Opening doors and windows can substantially affect the negative pressure in an AII room. Infection-control criteria require AII room windows and doors to remain closed, except when doors must be opened for persons to enter or leave the room. It might also be necessary to keep certain doors in the corridor outside the AII rooms closed to maintain the negative-pressure differential between an AII room and the corridor.
The use of self-closing doors is recommended. The openings in the room (e.g., windows, and electrical and plumbing entries) should be sealed as much as possible, with the exception of a small gap (1/8-1/2 inch) at the base of the door to provide a controlled airflow path. Proper use of negative pressure will prevent contaminated air from escaping the room (393,394).
Reducing the concentration of droplet nuclei. AII rooms in existing health-care settings should have an air change rate of ≥6 mechanical ACH. Whenever feasible, this airflow rate should be increased to ≥12 mechanical ACH by adjusting or modifying the ventilation system or should be increased to ≥12 equivalent ACH by supplementing with air-cleaning technologies (e.g., fixed or portable room-air recirculation systems or UVGI systems). New construction or renovation of existing health-care settings should be designed so that AII rooms achieve a total air change rate of ≥12 mechanical ACH. These recommendations are consistent with guidelines by ASHRAE and AIA that recommend ≥12 mechanical ACH for AII rooms (117,118). Ventilation recommendations for other negative-pressure rooms in new or renovated health-care settings have been presented (see Risk Classification Examples).
To dilute the concentration of normal room air contaminants and minimize odors, a portion of the supply air should come from the outdoors. A minimum of 2 ACH of outdoor air should be provided to AII rooms and other negativepressure rooms (117,118).
Exhaust from AII rooms and other negative-pressure rooms. Air from AII rooms and other negative-pressure rooms for patients with suspected or confirmed TB disease should be exhausted directly to the outside and away from air-intake vents, persons, and animals, in accordance with applicable federal, state, and local regulations on environmental discharges. Exhaust ducts should be located away from sidewalks or windows that can be opened. Ventilation system exhaust discharges and inlets should be designed to prevent the re-entry of exhausted air. Wind blowing over a building creates a substantially turbulent recirculation zone that can cause exhausted air to re-enter the building. Exhaust flow should be discharged above this zone. Design guidelines for proper placement of exhaust ducts have been published (395). If recirculation of air from such rooms into the general ventilation system is unavoidable, the air should be passed through a HEPA filter before recirculation.
Alternatives to negative-pressure rooms. AII can also be achieved by the use of negative-pressure enclosures (e.g., tents or booths). These enclosures can provide patient isolation in EDs and medical testing and treatment areas and can supplement AII in designated negative-pressure rooms.
# FIGURE 6. Cross-sectional view of a room indicating the location of negative pressure measurement*
# Other Selected Settings
Operating rooms, autopsy suites, sputum-induction rooms, and aerosolized treatment rooms pose potential hazards from infectious aerosols generated during procedures on patients with TB disease (72,90,396-398). Recommended administrative, environmental, and respiratory-protection controls for these and other selected settings have been summarized (Appendix A). Additional or specialized TB infection controls that are applicable to special circumstances and types of health-care delivery settings have also been described (see Managing Patients Who Have Suspected or Confirmed TB Disease: Considerations for Special Circumstances and Settings). Ventilation recommendations for these settings in new or renovated health-care facilities have been included in this report (Table 2). Existing facilities might need to augment the current ventilation system or use the air-cleaning methods to increase the number of equivalent ACH.
Patients with TB disease who also require a PE room (e.g., severely immunocompromised patients) are special cases. These patients require protection from common airborne infectious microorganisms and must be placed in a room that has HEPA-filtered supply air and is under positive pressure compared with its surroundings (118). If an anteroom is not available, the use of other air-cleaning methods should be considered to increase the equivalent ACH. The air-cleaning systems can be placed in the room and in surrounding areas to minimize contamination of the surroundings. Similar controls can be used in ORs that are used for patients with TB disease because these rooms must be maintained under positive pressure, compared with their surroundings to maintain a sterile field.
# Air-Cleaning Methods
# HEPA Filtration
HEPA filtration can be used to supplement other recommended ventilation measures by providing a minimum removal efficiency of 99.97% of particles equal to 0.3 µm in diameter. This air-cleaning method is considered an adjunct to other ventilation measures. Used alone, this method neither provides outside air for occupant comfort nor satisfies other recommended ventilation measures (e.g., using source control whenever possible and minimizing the spread of contaminants in a setting through control of airflow patterns and pressure differentials).
HEPA filters have been demonstrated to reduce the concentration of Aspergillus spores (range in size: 5-6 µm) to below measurable levels (399-401). Because infective droplet nuclei generated by TB patients are believed to range from 1-5 µm in diameter (300) (comparable in size to Aspergillus spores) (402), HEPA filters will remove M. tuberculosis-containing infectious droplet nuclei from contaminated air. HEPA filters can be used to clean air before it is 1) exhausted to the outside, 2) recirculated to other areas of a health-care setting, or 3) recirculated in an AII room. Because electrostatic filters can degrade over time with exposure to humid environments and ambient aerosols (403), their use in systems that recirculate air back into the general ventilation system from AII rooms and treatment rooms should be avoided. If used, the filter manufacturer should be consulted regarding the performance of the filter to ensure that it maintains the desired filtration efficiency over time and with loading.
Use of HEPA filtration when exhausting air to the outside. HEPA filters can be used as an added safety measure to clean air from AII rooms and local exhaust devices (e.g., booths, tents, and hoods) before exhausting it to the outside. This added measure is not necessary, however, if the exhaust air cannot re-enter the ventilation system supply and does not pose a risk to persons and animals where it is exhausted.
Exhaust air frequently is not discharged directly to the outside; instead, the air is directed through heat-recovery devices (e.g., heat wheels or radiator-like devices). Heat wheels are frequently used to reduce the costs of operating ventilation systems (404). As the wheel rotates, energy is transferred into or removed from the supply inlet air stream. If a heat wheel is used with a system, a HEPA filter should also be used. The HEPA filter should be placed upstream from the heat wheel because of the potential for leakage across the seals separating the inlet and exhaust chambers and the theoretical possibility that droplet nuclei might be impacted on the wheel by the exhaust air and subsequently stripped off into the supply air.
Recirculation of HEPA-filtered air. Air from AII rooms and other negative-pressure rooms should be exhausted directly to the outside. In certain instances, however, recirculation of air into the general ventilation system from such rooms is unavoidable (e.g., settings in which the ventilation system or building configuration makes venting the exhaust to the outside impossible). In such cases, HEPA filters should be installed in the exhaust duct exiting the room to remove infectious organisms from the air before it is returned to the general ventilation system.
Individual room-air recirculation can be used in areas in which no general ventilation system exists, where an existing system is incapable of providing sufficient ACH, or where air cleaning (particulate removal) is desired without affecting the fresh air supply or negative-pressure system. Recirculation of HEPA-filtered air in a room can be achieved by 1) exhausting air from the room into a duct, passing it through a HEPA filter installed in the duct, and returning it to the room (Figure 7); 2) filtering air through HEPA recirculation systems installed on the wall or ceiling of the room (Figure 8); or 3) filtering air through portable HEPA recirculation systems. In this report, the first two approaches are referred to as fixed room-air recirculation systems because the recirculation systems are not easily movable.
Fixed room-air recirculation systems. The preferred method of recirculating HEPA-filtered air is by using a built-in system in which air is exhausted from the room into a duct, filtered through a HEPA filter, and returned to the room (Figure 7). This technique can add equivalent ACH in areas in which the recommended minimum ACH is difficult to meet with general ventilation. This equivalent ventilation concept compares particle removal by HEPA filtration of the recirculated air with particle clearance from exhaust ventilation. Because the air does not have to be conditioned, airflow rates that are higher than those produced by the general ventilation system can usually be achieved. An alternative is to install HEPA filtration units on the wall or ceiling (Figure 8).
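To make the equivalent ventilation arithmetic concrete, the following minimal Python sketch converts a recirculation unit's airflow into equivalent ACH; the room dimensions, unit airflow rate, and mechanical ACH shown are illustrative assumptions, not values prescribed in this report.

```python
def equivalent_ach(airflow_cfm: float, room_volume_ft3: float) -> float:
    """Equivalent air changes per hour contributed by a HEPA
    recirculation unit: (cfm x 60 min/h) / room volume (ft^3).
    Assumes well-mixed room air and ~100% filter efficiency
    for droplet nuclei."""
    return airflow_cfm * 60.0 / room_volume_ft3

# Illustrative single-patient room: 15 ft x 12 ft with a 9-ft ceiling.
room_volume = 15 * 12 * 9                        # 1,620 ft^3
mechanical_ach = 6                               # from the general ventilation system
hepa_ach = equivalent_ach(300.0, room_volume)    # hypothetical 300-cfm unit

print(f"Equivalent ACH from HEPA recirculation: {hepa_ach:.1f}")
print(f"Total (mechanical + equivalent): {mechanical_ach + hepa_ach:.1f}")
```

Under these assumptions the unit adds approximately 11 equivalent ACH, illustrating how a recirculation system can raise a room's total particle-removal rate well above what the general ventilation system alone provides.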
Fixed recirculation systems are preferred to portable (freestanding) units because they can be installed with a higher degree of reliability. In addition, certain fixed systems have a higher airflow capacity than portable systems, and the potential for short-circuiting of air is reduced as the distance between the air intake and exhaust is increased.
Portable room-air recirculation systems. Portable room-air recirculation units with HEPA filters (also called portable air cleaners) can be considered when 1) a room has no general ventilation system, 2) the system cannot provide adequate ACH, or 3) additional air cleaning or airflow is needed. Effectiveness depends on the ability of the portable room-air recirculation unit to circulate as much of the air in the room as possible through the HEPA filter. Effectiveness can vary depending on the room's configuration, the furniture and persons in the room, the placement of the HEPA filtration unit relative to the supply diffusers and exhaust grilles, and the degree of mixing of air within the room.
Portable room-air recirculation units have been demonstrated to be effective in removing bioaerosols and aerosolized particles from room air (405)(406)(407)(408)(409)(410). Findings indicate that various commercially available units are useful in reducing the concentration of airborne particles and are therefore helpful in reducing airborne disease transmission. The performance of 14 units was evaluated for volumetric airflow, airborne particle reduction, noise level, and other parameters (406). The range of volumetric airflow rates was 110 cfm-1,152 cfm, and the equivalent ACH averaged 8-22 in a standard-sized, substantially well-mixed, single-patient room. Recommendations were provided to make subsequent models safer, more effective, quieter, and easier to use and service. Purchasers should be aware that the majority of manufacturer specifications indicate the flow rate of a free-wheeling fan, not of the fan operating under the load of a filter.
Portable HEPA filtration units should be designed to 1) achieve ≥12 equivalent ACH, 2) ensure adequate air mixing in all areas of the rooms, and 3) be compatible with the ventilation system. An estimate of the ability of the unit to circulate the air in a room can be made by visualizing airflow patterns (estimating room air mixing). If the air movement is adequate in all areas of the room, the unit should be effective. If portable devices are used, units with high volumetric airflow rates that provide maximum flow through the HEPA filter are preferred. Placement should be selected to optimize the recirculation of AII room air through the HEPA filter. Careful consideration must be given to obstacles (e.g., furnishings, medical equipment, and walls) that could disrupt airflow and to system specifications (e.g., physical dimensions, airflow capacity, locations of air inlet and exhaust, and noise) to maximize performance of the units, minimize short-circuiting of air, and reduce the probability that the units will be switched off by room occupants.
Installing, maintaining, and monitoring HEPA filters. The performance of HEPA filters depends on proper installation, testing, and meticulous maintenance (411), especially if the system recirculates air to other parts of the health-care setting. Improper design, installation, or maintenance could allow infectious particles to circumvent filtration and escape into the general ventilation system (117). These failures also could impede proper ventilation performance.
HEPA filters should be installed to prevent leakage between filter segments and between the filter bed and its frame. A regularly scheduled maintenance program is required to monitor filters for possible leakage and filter loading. A quantitative filter performance test (e.g., the dioctyl phthalate penetration test) should be performed at the initial installation and each time the filter is changed. Records should be maintained for all filter changes and testing. A leakage test using a particle counter or photometer should be performed every 6-12 months for filters in general-use areas and in areas with systems that will probably be contaminated with M. tuberculosis (e.g., AII rooms).
A manometer or other pressure-sensing device should be installed in the filter system to provide an accurate and objective means of determining the need for filter replacement. Pressure-drop characteristics of the filter are supplied by the manufacturer. Installation of the filter should allow for maintenance that will not contaminate the delivery system or the area served. For general infection-control purposes, special care should be taken to avoid jarring or dropping the filter element during or after removal.
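As an illustration of how manometer readings might be interpreted, the sketch below flags a filter for replacement when the measured pressure drop reaches a manufacturer-specified final resistance; both resistance values are hypothetical placeholders and should be taken from the filter manufacturer's pressure-drop characteristics.

```python
# Hypothetical clean-filter and replacement-point resistances
# (inches of water gauge); substitute the values supplied by
# the filter manufacturer.
CLEAN_RESISTANCE_IN_WG = 1.0
FINAL_RESISTANCE_IN_WG = 2.0

def assess_filter(measured_drop_in_wg: float) -> str:
    if measured_drop_in_wg < CLEAN_RESISTANCE_IN_WG:
        # A reading below the clean-filter resistance can indicate
        # leakage around or through the filter bed; investigate.
        return "below clean resistance; check for leaks"
    if measured_drop_in_wg >= FINAL_RESISTANCE_IN_WG:
        return "replace filter"
    return "in service"

for reading in (0.8, 1.4, 2.1):
    print(f"{reading:.1f} in. w.g. -> {assess_filter(reading)}")
```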
The scheduled maintenance program should include procedures for installation, removal, and disposal of filter elements. HEPA filter maintenance should be performed only by adequately trained personnel and only while the ventilation system or room-air recirculation unit is not being operated.
Laboratory studies indicate that re-aerosolization of viable mycobacteria from filter material (HEPA filters and N95 disposable respirator filter media) is not probable under normal conditions (414)(415)(416). Although these studies indicate that M. tuberculosis is unlikely to become an airborne hazard after it is captured by a HEPA filter (or other high efficiency filter material), the risks associated with handling loaded HEPA filters in ventilation systems under field-use conditions have not been evaluated. Therefore, persons performing maintenance and replacing filters on any ventilation system that is probably contaminated with M. tuberculosis should wear a respirator (see Respiratory Protection) in addition to eye protection and gloves. When feasible, HEPA filters can be disinfected in a 10% bleach solution or other appropriate mycobactericide before removal (417). In addition, filter housing and ducts leading to the housing should be labeled clearly with the words "TB-Contaminated Air" or other similar warnings. Disposal of filters and other potentially contaminated materials should be in accordance with applicable local or state regulations.
One or more lower-efficiency disposable pre-filters installed upstream can extend the life of a HEPA filter by at least 25%. If the disposable filter is replaced by a 90% extended surface filter, the life of the HEPA filter can be extended by approximately 900% (178). Pre-filters should be handled and disposed of in the same manner as the HEPA filter.
# UVGI
Ultraviolet germicidal irradiation (UVGI) is a form of electromagnetic radiation with wavelengths between the blue region of the visible spectrum and the X-ray region. UV-C radiation (short wavelengths; range: 100-280 nm) (418) can be produced by various artificial sources (e.g., arc lamps and metal halide lamps). The majority of commercially available UV lamps used for germicidal purposes are low-pressure mercury vapor lamps that emit radiant energy in the UV-C range, predominantly at a wavelength of 253.7 nm (418).
Research has demonstrated that UVGI is effective in killing or inactivating M. tuberculosis under experimental conditions (292,385,(419)(420)(421)(422)(423) and in reducing transmission of other infectious agents in hospitals (424), military housing (425), and classrooms (426)(427)(428). Because of the results of multiple studies (384,(429)(430)(431)(432) and the experiences of clinicians and mycobacteriologists during the preceding decades, UVGI has been recommended as a supplement or adjunct to other TB infection-control and ventilation measures in settings in which the need to kill or inactivate M. tuberculosis is essential (6,7,196,433,434). UVGI alone does not provide outside air or circulate interior air, both of which are essential in achieving acceptable air quality in occupied spaces.
Applications of UVGI. UVGI is considered a method of air cleaning because it can kill or inactivate microorganisms so that they are no longer able to replicate and form colonies. UVGI is not a substitute for HEPA filtration before exhausting the air from AII rooms back into the general circulation. UVGI lamps can be placed in ducts, fixed or portable room-air recirculation units, or upper-air irradiation systems. The use of this air-cleaning technique has increased, particularly in substantial open areas in which unsuspected or undiagnosed patients with TB disease might be present (e.g., ED waiting rooms, shelters, and correctional facilities) and in which the costs of conditioning substantial volumes of outdoor air are prohibitive.
For each UVGI system, guidelines should be followed to maximize effectiveness. Effectiveness can be expressed in terms of an equivalent air change rate (427,(435)(436)(437), comparing the ability of UVGI to inactivate organisms with removal through general ventilation. Initially, understanding and characterizing the application for which UVGI will be used is vital. Because the effectiveness of UVGI systems will vary, the use of UVGI must be carefully evaluated and the level of efficacy clearly defined and monitored.
The effective use of UVGI is associated with exposure of M. tuberculosis, as contained in an infectious droplet, to a sufficient dose of UV-C radiation at 253.7 nm to ensure inactivation. Because dose is a function of irradiance and time, the effectiveness of any application is determined by its ability to deliver sufficient irradiance for enough time to result in inactivation of the organism within the infectious droplet. Achieving a sufficient dose can be difficult with airborne inactivation because the exposure time can be substantially limited; therefore, attaining sufficient irradiance is essential.
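The dose-versus-time trade-off can be sketched numerically. The following Python example uses a single-stage exponential inactivation model, one common simplification in UVGI analysis; the irradiance level and susceptibility constant are illustrative placeholders, not measured parameters for M. tuberculosis.

```python
import math

def uv_dose_uj_cm2(irradiance_uw_cm2: float, exposure_s: float) -> float:
    """Dose (uJ/cm^2) = irradiance (uW/cm^2) x exposure time (s)."""
    return irradiance_uw_cm2 * exposure_s

def surviving_fraction(dose_uj_cm2: float, z_cm2_per_uj: float) -> float:
    """Exponential inactivation model; z is an organism-specific
    susceptibility constant (the value used below is a placeholder)."""
    return math.exp(-z_cm2_per_uj * dose_uj_cm2)

# A droplet nucleus passing through an upper-air zone of 50 uW/cm^2
# (illustrative) for a short vs. a longer residence time:
for exposure_s in (2, 20):
    dose = uv_dose_uj_cm2(50.0, exposure_s)
    frac = surviving_fraction(dose, z_cm2_per_uj=0.01)
    print(f"{exposure_s:>2} s -> {dose:>5.0f} uJ/cm^2, surviving fraction {frac:.1e}")
```

The tenfold difference in residence time produces a far more than tenfold difference in survival, which is why attaining sufficient irradiance is essential when exposure time is short.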
The number of persons who are properly trained in the design and installation of UVGI systems is limited. One critical recommendation is that health-care facility managers consult a UVGI system designer to address safety and efficacy considerations before such a system is procured and installed. Experts who can be consulted include industrial hygienists, engineers, and health physicists.
Duct irradiation. Duct irradiation is designed to kill or inactivate M. tuberculosis without exposing persons to UVGI. In duct irradiation systems, UVGI lamps are placed inside ducts to disinfect the exhaust air from AII rooms or other areas in which M. tuberculosis might be present before it is recirculated to the same room (desirable) or to other areas served by the system (less desirable). When UVGI duct systems are not properly designed, installed, and maintained, high levels of UVGI can be produced in the duct that can potentially cause high UVGI exposures during maintenance operations.
Duct-irradiation systems depend on the circulation of as much of the room air as possible through the duct. Velocity profiles and mixing are important factors in determining the UVGI dose received by airborne particles. Design velocity for a typical UVGI unit is approximately 400 fpm (438). The particle residence time must be sufficient for inactivation of the microorganisms.
Duct irradiation can be used in three ways.
- In ventilation systems serving AII rooms, to recirculate air from the room, through a duct containing UVGI lamps, and back into the same room. UVGI duct systems should not be used in place of HEPA filters if air from AII rooms must be recirculated to other areas of a setting, nor as a substitute for HEPA filtration of air from booths, tents, or hoods used for cough-inducing or aerosol-generating procedures.
- In return air ducts serving patient rooms, waiting rooms, EDs, and general-use areas in which patients with undiagnosed TB disease could potentially contaminate the recirculated air.
- In recirculating ventilation systems serving rooms or areas in which ceiling heights are too low for the safe and effective use of upper-air UVGI.

Upper-air irradiation. In upper-air irradiation, UVGI lamp fixtures are suspended from the ceiling or installed on walls. The bases of the lamps are shielded to direct the radiation upward and outward, creating an intense zone of UVGI in the upper air while minimizing the levels of UVGI in the lower part of the room, where the occupants are located. The system depends on air mixing to move air from the lower part of the room to the upper part, where microbial-contaminated air can be irradiated.
A major consideration is the placement of UVGI fixtures to achieve sufficient irradiance of the upper-air space. The ceiling should be high enough (≥8 feet) for a substantial volume of upper air to be irradiated without overexposing occupants in the lower part of the room to UVGI. System designers must consider the mechanical ventilation system, room geometry, and emission characteristics of the entire fixture.
Upper-air UVGI can be used in various settings.
- AII rooms and rooms in which aerosol-generating or aerosol-producing procedures (e.g., bronchoscopy, sputum induction, and administration of aerosolized medications) are performed.
- Patient rooms, waiting rooms, EDs, corridors, central areas, and other substantial areas in which patients with undiagnosed TB disease could potentially contaminate the air.
- Operating rooms and adjacent corridors where procedures are performed on patients with TB disease.
- Medical settings in correctional facilities.

Portable room-air recirculation systems. In portable room-air recirculation units containing UVGI, a fan moves a volume of room air across UVGI lamps to disinfect the air before it is recirculated back to the room. Some portable units contain both a HEPA filter (or other high efficiency filter) and UVGI lamps.
In addition to the guidelines described for the use of portable room air-recirculation systems containing HEPA filtration, consideration must be given to the volume of room air that passes through the unit, the UVGI levels, particle residence time, and filtration efficiency (for devices with a filter). One study in which a bioaerosol chamber was used demonstrated that portable room air cleaners with UVGI lamps as the primary air-cleaning mechanism are effective (>99%) in inactivating or killing airborne vegetative bacteria (439). Additional studies need to be performed in rooms with portable air cleaners that rely only on UVGI for air cleaning.
Portable room air cleaners with UVGI can be used in 1) AII rooms as an adjunct method of air cleaning and 2) waiting rooms, EDs, corridors, central areas, or other substantial areas in which patients with undiagnosed TB disease could potentially contaminate the air.
Effectiveness of UVGI. Air mixing, air velocity, relative humidity, UVGI intensity, and lamp configuration affect the efficacy of all UVGI applications. For example, with upper-air systems, airborne microorganisms in the lower, occupied areas of the room must move to the upper part of the room to be killed or inactivated by upper-air UVGI. Air mixing can occur through convection caused by temperature differences, fans, location of supply and exhaust ducts, or movement of persons.
Air-mixing. UVGI has been demonstrated to be effective in killing bacteria in upper-air applications under conditions in which air mixing was accomplished primarily by convection. In a 1976 study, aerosolization of M. bovis BCG (a surrogate for M. tuberculosis) in a room without mechanical ventilation, in which air mixing relied primarily on convection and infiltration, demonstrated that upper-air UVGI produced 10-25 equivalent ACH, depending on the number of UVGI fixtures used (384). Other early studies examined the effect of air mixing on UVGI efficacy (440,441). These studies indicated that the efficacy of UVGI was substantially increased if supply air that was cold relative to the air in the lower portion of the room entered through diffusers in the ceiling. The findings indicated that substantial temperature gradients between the upper and lower portions of the room favored (cold air in the upper portion of the room) or inhibited (hot air in the upper portion of the room) vertical mixing of air between the two zones.
When large-bladed ceiling fans were used to promote mixing in the experimental room, the ability of UVGI to inactivate Serratia marcescens, an organism known to be highly sensitive to UVGI, was doubled (442,443). Similar effects were reported in studies conducted during 2000-2002 in which louvered UVGI fixtures were used. One study documented an increase in UVGI effectiveness of 16% at 2 ACH and 33% at 6 ACH when a mixing fan was used (444). Another study conducted in a simulated health-care room determined that 1) at 0 ACH, a high degree of efficacy of upper-air UVGI was achieved in the absence or presence of mixing fans when no temperature gradient was created; and 2) at 6 ACH, bringing in warm air at the ceiling resulted in a temperature gradient with cooler room air near the floor and a UVGI efficacy of only 9% (422). Turning on box fans under these winter conditions increased UVGI efficacy nearly 10-fold (to 89%) (445).
To reduce variability in upper-air UVGI efficacy caused by temperature gradients in the room, a fan should be routinely used to continually mix the air, unless the room has been determined to be well mixed under various conditions of operation. Use of a fan would also reduce or remove the variable winter versus summer ACH requirements for optimal upper-air UVGI efficacy (446).
Relative humidity. In studies conducted in bioaerosol chambers, the ability of UVGI to kill or inactivate microorganisms declined substantially when the relative humidity exceeded 60% (447)(448)(449)(450). In room studies, declines in the ability of upper-air UVGI to kill or inactivate microorganisms at high relative humidity (65%, 75%, and 100%) (384,422) have also been reported. The exact mechanism responsible for the reduced effectiveness of UVGI at these higher levels of relative humidity is unknown but does not appear to be related to changes in UV irradiance levels. Relative humidity changes from 55% to 90% resulted in no corresponding changes in measured UVGI levels (437). In another study, an increase in relative humidity from 25% to 67% did not reduce UVGI levels (422). Bacteria have been demonstrated to absorb substantial amounts of water from the air as the relative humidity increases. At high humidity, the UV irradiance levels required to inactivate bacteria might approach the higher levels that are needed for liquid suspensions of bacteria (448). The ability of bacteria to repair UVGI damage to their DNA through photoreactivation has also been reported to increase as relative humidity increases (422,448).
For optimal efficacy of upper-air UVGI, relative humidity should be maintained at ≤60%, a level that is consistent with recommendations for providing acceptable indoor air quality and minimizing environmental microbial contamination in indoor environments (386,451).
Ventilation rates. The relation between ventilation and UVGI has also been evaluated. Certain predicted inactivation rates have been calculated and published for varying flow rates, UV intensity, and distances from the lamp, based on radiative heat transfer theory (438). In room studies with substantially well-mixed air, ventilation rates (0 ACH, 3 ACH, and 6 ACH) were combined with various irradiation levels of upper-air UVGI. All experiments were conducted at 50% relative humidity and 70°F (21°C). When M. parafortuitum was used as a surrogate for M. tuberculosis, ventilation rates usually had no adverse effect on the efficiency of upper-air UVGI. The combined effect of both environmental controls was primarily additive in this artificial environment, with possibly a small loss of upper-air UVGI efficiency at 6 ACH (422). Therefore, ventilation rates of up to 6 ACH in a substantially well-mixed room might achieve ≥12 ACH (mechanical ACH plus equivalent ACH) by combining these rates with the appropriate level of upper-air irradiation (422). Higher ventilation rates (>6 ACH) might, however, decrease the time the air is irradiated and, therefore, decrease the killing of bacteria (429,452).
Ventilation rates up to six mechanical ACH do not appear to adversely affect the performance of upper-air UVGI in a substantially well-mixed room. Additional studies are needed to examine the combined effects of mechanical ventilation and UVGI at higher room-air exchange rates.
UVGI intensity. The UVGI intensity field plays a primary role in the performance of upper-air UVGI systems. The UVGI dose received by microorganisms is a function of UVGI irradiance multiplied by the duration of exposure. Intensity is influenced by the lamp wattage, distance from the lamp, surface area, and presence of reflective surfaces. The number of lamps, location, and UVGI level needed in a room depend on the room's geometry, area, and volume, and the location of supply air diffusers (422,436). UVGI fixtures should be spaced to reduce overlap while maintaining an even irradiance zone in the upper air.
The emission profile of a fixture is a vital determinant of UVGI effectiveness. Information regarding total UVGI output for a given fixture (lamp plus housing and louvers) should be requested from the manufacturer and used for comparison when selecting UVGI systems. Information concerning only the UVGI output of the lamp is inadequate; the lamp output will be higher than the output for the fixture because of losses from reflectors and nonreflecting surfaces and the presence of louvers and other obstructions (436,437). In addition, information provided by the manufacturer reflects ideal laboratory conditions; damage to fixtures or improper installation will affect UV radiation output. Because old or dust-covered UVGI lamps are less effective, routine maintenance and cleaning of UVGI lamps and fixtures is essential. UVGI system designers should consider room geometry, fixture output, room ventilation, and the desired level of equivalent ACH in determining the types, numbers, and placement of UVGI fixtures in a room to achieve target irradiance levels in the upper air.
Health and safety issues. Short-term overexposure to UV radiation can cause erythema (i.e., abnormal redness of the skin), photokeratitis (inflammation of the cornea), and conjunctivitis (i.e., inflammation of the conjunctiva) (453). Symptoms of photokeratitis and conjunctivitis include a feeling of sand in the eyes, tearing, and sensitivity to light. Photokeratitis and conjunctivitis are reversible conditions, but they can be debilitating while they run their course. Because the health effects of UVGI are usually not evident until after exposure has ended (typically 6-12 hours later), HCWs might not recognize them as occupational injuries.
In 1992, UV-C (100-280 nm) radiation was classified by the International Agency for Research on Cancer as "probably carcinogenic to humans (Group 2A)" (454). This classification was based on studies indicating that UV-C radiation can induce skin cancers in animals and can cause DNA damage, chromosomal aberrations, sister chromatid exchange, and transformation in human cells in vitro; UV-C radiation can also cause DNA damage in mammalian skin cells in vivo. In the animal studies, a contribution of UV-C radiation to the tumor effects could not be excluded, and the effects were higher than expected for UV-B radiation alone (454). Certain studies have demonstrated that UV radiation can activate HIV gene promoters (i.e., genes in HIV that prompt replication of the virus) in laboratory samples of human cells (455)(456)(457)(458)(459)(460). The potential for UV-C radiation to cause cancer or promote HIV replication in humans is unknown, but skin penetration might be an important factor. According to certain reports, only 20% of incident 250 nm UV radiation penetrates the stratum corneum, compared with approximately 30-60% of 300 nm UV (UV-B) radiation (461).
In upper-air UVGI systems, fixtures must be designed and installed to ensure that UVGI exposures to occupants are below current safe exposure levels. Health-hazard evaluations have identified potential problems at some settings using UVGI systems. These problems include overexposure of HCWs to UVGI and inadequate maintenance, training, labeling, and use of personal protective equipment (PPE) (398,462,463).
An improperly maintained (unshielded) germicidal lamp was believed to be the cause of dermatosis or photokeratitis in five HCWs in an ED (464) and three HCWs who were inadvertently exposed to an unshielded UVGI lamp in a room that had been converted from a sputum induction room to an office (465). These case reports highlight the importance of posting warning signs to identify the presence of UVGI (see Supplement, Labeling and Posting) and are reminders that shielding should be used to minimize UVGI exposures to occupants in the lower room. In the majority of applications, properly designed, installed, and maintained UVGI fixtures provide protection from the majority of, if not all, the direct UVGI in the lower room. However, radiation reflected from glass, polished metal, and high-gloss ceramic paints can be harmful to persons in the room, particularly if more than one UVGI fixture is in use. Surfaces in irradiated rooms that can reflect UVGI into occupied areas of the room should be covered with non-UV-reflecting material.
Although more studies need to be conducted, lightweight clothing made of tightly woven fabric and UV-absorbing sunscreens with solar-protection factors (SPFs) of ≥15 might help protect photosensitive persons. Plastic eyewear containing a UV inhibitor that prevents the transmission of ≥95% of UV radiation in the 210-405 nm range is commercially available. HCWs should be advised that any eye or skin irritation that develops after UVGI exposure should be evaluated by an occupational health professional.
Exposure criteria. In 1972, CDC published a recommended exposure limit (REL) for occupational exposure to UV radiation (453). REL is intended to protect HCWs from the acute effects of UV light exposure. Photosensitive persons and those exposed concomitantly to photoactive chemicals might not be protected by the recommended standard.
The CDC/NIOSH REL for UV radiation is wavelength dependent because different wavelengths have different adverse effects on the skin and eyes (453). At 254 nm, the predominant wavelength for germicidal UV lamps, the CDC/NIOSH REL is 0.006 joules per square centimeter (J/cm²) for a daily 8-hour work shift. ACGIH has a Threshold Limit Value® for UV radiation that is identical to the REL for this spectral region (466). HCWs frequently do not stay in one place in the setting during the course of their work and, therefore, are not exposed to UV irradiance levels for 8 hours. Permissible exposure times (PET) for HCWs with unprotected eyes and skin can be calculated for various irradiance levels as follows:
PET (seconds) = 0.006 J/cm² (CDC/NIOSH REL at 254 nm) ÷ measured irradiance level (at 254 nm) in W/cm²

Exposures exceeding the CDC/NIOSH REL require the use of PPE to protect the skin and eyes.
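As a worked example, the short Python sketch below evaluates this formula at a few irradiance levels; the irradiance values are examples only, not measurements from any particular setting.

```python
REL_254_J_CM2 = 0.006   # CDC/NIOSH REL at 254 nm for an 8-hour shift

def permissible_exposure_time_s(irradiance_w_cm2: float) -> float:
    """PET (seconds) = REL (J/cm^2) / measured irradiance (W/cm^2)."""
    return REL_254_J_CM2 / irradiance_w_cm2

# Example irradiance levels in microwatts/cm^2 (illustrative values):
for uw_cm2 in (0.2, 1.0, 100.0):
    pet_s = permissible_exposure_time_s(uw_cm2 * 1e-6)
    print(f"{uw_cm2:>6.1f} uW/cm^2 -> PET {pet_s:>9,.0f} s ({pet_s / 3600:.2f} h)")
```

Note how an irradiance of 0.2 µW/cm² permits a full 8-hour shift, whereas levels typical of the upper-air zone reduce the PET to minutes or seconds, consistent with the monitoring precautions described below.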
# Labeling, Maintenance, and Monitoring
Labeling and posting. Health-care settings should post warning signs on UV lamps and wherever high-intensity UVGI (i.e., exposure greater than the REL) is present to alert maintenance staff, HCWs, and the general public of the hazard. The warning signs should be written in the languages of the affected persons (Box 6).
Maintenance. Because the UVGI output of the lamps declines with age, a schedule for replacing the lamps should be developed in accordance with manufacturer recommendations. Replacement can be scheduled from a time-use log, on the basis of cumulative lamp time, or at a routine interval (e.g., at least annually). UVGI lamps should be checked monthly for dust build-up, which lessens radiation output. A dirty UVGI lamp should be allowed to cool and then should be cleaned in accordance with the manufacturer recommendations so that no residue remains.
UVGI lamps should be replaced if they stop glowing, if they flicker, or if the measured irradiance (see Supplement, Environmental Controls) drops below the performance criteria or minimum design criterion set forth by the design engineers.
Maintenance personnel must switch off all UVGI lamps before entering the upper part of the room or before accessing ducts for any purpose. Even a few seconds of direct exposure to the intense UVGI in the upper-air space or in ducts can cause dermatosis or photokeratitis. Protective clothing and equipment (e.g., gloves, goggles, face shield, and sunscreen) should be worn if exposure greater than the recommended levels is possible or if UVGI radiation levels are unknown.
Banks of UVGI lamps can be installed in ventilation system ducts. Safety devices and lock-out or tag-out protocols should be used on access doors to eliminate exposures of maintenance personnel. For duct irradiation systems, the access door for servicing the lamps should have an inspection window through which the lamps are checked periodically for dust build-up and to ensure that they are functioning properly. The access door should have a warning sign written in appropriate languages to alert maintenance personnel to the health hazard of looking directly at bare UV lamps. The lock for this door should have an automatic electric switch or other device that turns off the lamps when the door is opened.
Types of fixtures used in upper-air irradiation include wall-mounted, corner-mounted, and ceiling-mounted fixtures that have louvers or baffles to block downward radiation and ceiling-mounted fixtures that have baffles to block radiation below the horizontal plane of the fixtures. If possible, light switches that can be locked should be used to prevent injury to persons who might unintentionally turn the lamps on during maintenance procedures. Because lamps must be discarded after use, consideration should be given to selecting germicidal lamps that are manufactured with relatively low amounts (i.e., ≤5 mg) of mercury. UVGI products should be listed with the Underwriters Laboratories (UL) or Electrical Testing Laboratories (ETL) for their specific application and installed in accordance with the National Electric Code.
Monitoring. UVGI intensity should be measured by an industrial hygienist or other person knowledgeable in the use of UV radiometers with a detector designed to be most sensitive at 254 nm. Equipment used to measure UVGI should be maintained and calibrated on a regular schedule, as recommended by the manufacturer.
UVGI should be measured in the lower room to ensure that exposures to occupants are below levels that could result in acute skin and eye effects. The monitoring should consider typical duties and locations of the HCWs and should be done at eye level. At a minimum, UVGI levels should be measured at the time of initial installation and whenever fixtures are moved or other changes are made to the system that could affect UVGI. Changes to the room include those that might result in higher exposures to occupants (e.g., addition of UV-reflecting materials or painting of walls and ceiling). UVGI monitoring information, lamp maintenance, meter calibration, and lamp and fixture change-outs should be recorded.
UVGI measurements should also be made in the upper air to define the area that is being irradiated and to determine whether target irradiance levels are met (467). Measurements can be made using UVGI radiometers or other techniques (e.g., spherical actinometry) that measure UVGI in an omnidirectional manner to estimate the energy to which microorganisms would be exposed (468). Because high levels of UVGI can be measured in the upper air, persons making the measurements should use adequate skin and eye protection. UVGI radiation levels close to the fixture source can have permissible exposure times on the order of seconds or minutes for HCWs with unprotected eyes and skin. Therefore, overexposures can occur with brief UVGI exposures in the upper air (or in ventilation system ducts where banks of unshielded UV lamps are placed) in HCWs who are not adequately protected.
Upper-air UVGI systems and portable room-air recirculation units. A study in 2002 examined the relation between three portable room-air recirculation units with different capture or inactivation mechanisms and an upper-air UVGI system in a simulated health-care room (409). The study determined that the equivalent ACH produced by the recirculation units and produced by the upper-air UVGI system were approximately additive. For example, one test using aerosolized M. parafortuitum provided an equivalent ACH for UVGI of 17 and an equivalent ACH for the recirculation unit of 11; the total experimentally measured equivalent ACH for the two systems was 27. Therefore, the use of portable room-air recirculation units in conjunction with upper-air UVGI systems might increase the overall removal of M. tuberculosis droplet nuclei from room air.
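A quick numerical check of the additivity reported in that study, using the equivalent-ACH values quoted in the text:

```python
# Equivalent ACH measured separately for each system in study (409):
uvgi_ach = 17        # upper-air UVGI alone
recirc_ach = 11      # portable room-air recirculation unit alone

predicted_combined = uvgi_ach + recirc_ach   # additive assumption -> 28
measured_combined = 27                       # experimentally measured value

print(f"predicted {predicted_combined} vs. measured {measured_combined} equivalent ACH")
# The close agreement supports treating the two air-cleaning methods
# as approximately additive in a substantially well-mixed room.
```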
# Environmental Controls: Program Concerns
To be most effective, environmental controls must be installed, operated, and maintained correctly. Ongoing maintenance is a critical part of infection control that should be addressed in the written TB infection-control plan. The plan should outline the responsibility and authority for maintenance and address staff training needs. At one hospital, improperly functioning ventilation controls were believed to be an important factor in the transmission of MDR TB disease to three patients and a correctional officer, three of whom died (469). In three other multihospital studies evaluating the performance of AII rooms, failure to routinely monitor air-pressure differentials or a failure of the continuous monitoring devices installed in the AII rooms resulted in a substantial percentage of the rooms being under positive pressure (57,392,470,471).
Routine preventive maintenance should be scheduled and should include all components of the ventilation systems (e.g., fans, filters, ducts, supply diffusers, and exhaust grilles) and any air-cleaning devices in use. Performance monitoring should be conducted to verify that environmental controls are operating as designed. Performance monitoring can include 1) directional airflow assessments using smoke tubes and use of pressure monitoring devices that are sensitive to pressures as low as approximately 0.005 inch of water gauge and 2) measurement of supply and exhaust airflows to compare with recommended air change rates for the respective areas of the setting. Records should be kept to document all preventive maintenance and repairs.
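The following is a minimal sketch of how such performance-monitoring results might be recorded and flagged; the negative-pressure pass criterion shown is an assumed example value, and each setting should apply the criterion specified in its own TB infection-control plan.

```python
# Assumed AII-room pass criterion (illustrative): at least 0.01 inch of
# water gauge negative relative to the corridor, with smoke-tube airflow
# directed into the room.
MIN_NEGATIVE_IN_WG = 0.01

def check_aii_room(room_id: str, pressure_in_wg: float, smoke_flows_in: bool) -> None:
    """pressure_in_wg is room pressure relative to the corridor;
    negative values indicate inward (containment) airflow."""
    ok = pressure_in_wg <= -MIN_NEGATIVE_IN_WG and smoke_flows_in
    status = "OK" if ok else "FAIL: notify infection control before use"
    print(f"{room_id}: {pressure_in_wg:+.3f} in. w.g., smoke in={smoke_flows_in}: {status}")

check_aii_room("AII-3", -0.012, smoke_flows_in=True)
check_aii_room("AII-7", +0.002, smoke_flows_in=False)  # positive pressure
```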
Standard procedures should be established to ensure that maintenance staff notifies infection-control personnel before performing maintenance on ventilation systems servicing patient-care areas. Similarly, infection-control staff should request assistance from maintenance personnel in checking the operational status of AII rooms and local exhaust devices (e.g., booths, hoods, and tents) before use. A protocol that is well-written and followed will help to prevent unnecessary exposures of HCWs and patients to infectious aerosols. Proper labeling of ventilation system components (e.g., ducts, fans, and filters) will help identify air-flow paths. Clearly labeling which fan services a given area will help to prevent accidental shutdowns (472).
In addition, provisions should be made for emergency power to avoid interruptions in the performance of essential environmental controls during a power failure.
# Respiratory Protection
# Considerations for Selection of Respirators
The overall effectiveness of respiratory protection is affected by 1) the level of respiratory protection selected (e.g., the assigned protection factor), 2) the fit characteristics of the respirator model, 3) the care in donning the respirator, and 4) the adequacy of the fit-testing program. Although data on the effectiveness of respiratory protection from various hazardous airborne materials have been collected, the precise level of effectiveness in protecting HCWs from M. tuberculosis transmission in health-care settings has not been determined.
Information on the transmission parameters of M. tuberculosis is also incomplete. Neither the smallest infectious dose of M. tuberculosis nor the highest level of exposure to M. tuberculosis at which transmission will not occur has been defined conclusively (159,473,474). In addition, the size distribution of droplet nuclei and the number of particles containing viable M. tuberculosis organisms that are expelled by patients with infectious TB disease have not been adequately defined, and accurate methods of measuring the concentration of infectious droplet nuclei in a room have not been developed. Nonetheless, in certain settings (e.g., AII rooms and ambulances during the transport of persons with suspected or confirmed infectious TB disease), administrative and environmental controls alone might not adequately protect HCWs from infectious airborne droplet nuclei.
On October 17, 1997, OSHA published a proposed standard for occupational exposure to M. tuberculosis (267). On December 31, 2003, OSHA announced the termination of rulemaking for a TB standard (268). Previous OSHA policy permitted the use of any Part 84 particulate filter respirator for protection against infection with M. tuberculosis (269). Respirator usage for TB had been regulated by OSHA under CFR Title 29, Part 1910.139 (29 CFR 1910.139) (270) and compliance policy directive (CPL) 2.106 (Enforcement Procedures and Scheduling for Occupational Exposure to Tuberculosis). Respirator usage for TB is now regulated under the general industry standard for respiratory protection (29 CFR 1910.134) (271). General information on respiratory protection for aerosols, including M. tuberculosis, has been published (272)(273)(274).
# Performance Criteria for Respirators
Performance criteria for respirators are derived from data on 1) effectiveness of respiratory protection against noninfectious hazardous materials in workplaces other than health-care settings and an interpretation of how these data can be applied to respiratory protection against M. tuberculosis, 2) efficiency of respirator filters in filtering biologic aerosols, 3) face-seal leakage, and 4) characteristics of respirators used in conjunction with administrative and environmental controls in outbreak settings to stop transmission of M. tuberculosis to HCWs and patients.
Particulate filter respirators certified by CDC/NIOSH, either nonpowered respirators with N95, N99, N100, R95, R99, R100, P95, P99, and P100 filters (including disposable respirators), or PAPRs with high efficiency filters can be used for protection against airborne M. tuberculosis.
The most essential attribute of a respirator is the ability to fit the different facial sizes and characteristics of HCWs. Studies have demonstrated that fitting characteristics vary substantially among respirator models. The fit of filtering facepiece respirators varies because of different facial types and respirator characteristics (10,(280)(281)(282)(283)(284)(285)(286)(287)(288)(289). Respirators can be selected through consultation with respirator fit-testing experts, CDC, occupational health and infection-control professional organizations, peer-reviewed research, respirator manufacturers, and advanced respirator training courses. Data indicate that fit characteristics cannot be determined solely from the physical appearance of the respirator (282).
# Types of Respiratory Protection for TB
Respirators encompass a range of devices that vary in complexity from flexible masks covering only the nose and mouth, to units that cover the user's head (e.g., loose-fitting or hooded PAPRs), and to those that have independent air supplies (e.g., airline respirators). Respirators must be selected from those approved by CDC/NIOSH under the provisions of 42 CFR, Part 84 (475).
Nonpowered air-purifying respirators. Nine classes of nonpowered, air-purifying, particulate-filter respirators are certified under 42 CFR 84. These include N-, R-, and P-series respirators of 95%, 99%, and 100% (99.97%) filtration efficiency when challenged with 0.3 µm particles (filters are generally least efficient at this particle size) (Table 4). The N, R, and P classifications are based on the capacity of the filter to withstand exposure to oil. All of these respirators meet or exceed CDC's filtration efficiency performance criteria during the service life of the filter (1,272,273).
Nonpowered air-purifying respirators work by drawing ambient air through the filter during inhalation. Inhalation causes negative pressure to develop in the tight-fitting facepiece and allows air to enter while the particles are captured on the filter. Air leaves the facepiece during exhalation because positive pressure develops in the facepiece and forces air out of the mask through the filter (on disposable respirators) or through an exhalation valve (on replaceable-filter respirators and certain disposable models).
The classes of certified nonpowered air-purifying respirators include both filtering facepiece (disposable) respirators and elastomeric (rubber-like) respirators with filter cartridges. The certification test for filtering facepieces and filter cartridges consists only of a filter performance test; it does not address respirator fit. Although all N-, R-, and P-series respirators are recommended for protection against M. tuberculosis infection in health-care settings and other workplaces that are usually free of oil aerosols that could degrade filter efficiency, well-fitting N-series respirators are usually less expensive than R- and P-series respirators (272,273). All respirators should be replaced as needed, based on hygiene considerations, increased breathing resistance, time-use limitations specified in the CDC/NIOSH approval guidelines, respirator damage, and the manufacturer's instructions for use.
PAPRs. PAPRs use a blower that draws air through the filters into the facepiece. PAPRs can be equipped with a tight-fitting or loose-fitting facepiece, a helmet, or a hood. PAPR filters are classified as high efficiency and differ from the N-, R-, and P-series filters presented in this report (Table 4). A PAPR high efficiency filter meets the N100, R100, and P100 criteria at the beginning of its service life. No loading tests using 0.3 µm particles are conducted as part of certification. PAPRs can be useful for persons with facial hair or other conditions that prevent an adequate face-to-facepiece seal (476).
Atmosphere-supplying respirators. Positive-pressure airline (supplied-air) respirators are provided with air from a stationary source (compressor) or an air tank.
# Effectiveness of Respiratory-Protection Devices
Data on the effectiveness of respiratory protection against hazardous airborne materials are based on experience in the industrial setting; data on protection against transmission of M. tuberculosis in health-care settings are not available. The parameters used to determine the effectiveness of a respiratory protective device are face-seal efficacy and filter efficiency.
Face-seal leakage. Face-seal leakage is the weak link that limits a respirator's protective ability. Excessive face-seal leakage compromises the ability of particulate respirators to protect HCWs from airborne materials (477). A proper seal between the respirator's sealing surface and the face of the person wearing the respirator is essential for the effective and reliable performance of any tight-fitting, negative-pressure respirator.
For tight-fitting, negative-pressure respirators (e.g., N95 disposable respirators), the amount of face-seal leakage is determined by 1) the fit characteristics of the respirator, 2) the care in donning the respirator, and 3) the adequacy of the fit-testing program. Studies indicate that a well-fitting respirator combined with a fit test produces better results than a well-fitting respirator without a fit test or a poorly fitting respirator with a fit test. Increased face-seal leakage can result from additional factors, including incorrect facepiece size, failure to follow the manufacturer's instructions at each use, beard growth, perspiration or facial oils that can cause facepiece slippage, improper maintenance, physiological changes of the HCW, and respirator damage.
Face-seal leakage is inherent in tight-fitting negative-pressure respirators. Each time a person wearing a nonpowered particulate respirator inhales, negative pressure (relative to the workplace air) is created inside the facepiece. Because of this negative pressure, air containing contaminants can leak into the respirator through openings at the face-seal interface and avoid the higher-resistance filter material. A half-facepiece respirator, including an N95 disposable respirator, should have <10% leakage. Full-facepiece, nonpowered respirators have the same leakage (<2%) as PAPRs with tight-fitting full facepieces.
The more complex PAPRs and positive-pressure airline respirators reduce or eliminate this negative facepiece pressure and, therefore, reduce leakage into the respirator and enhance protection. A PAPR is equipped with a blower that forcibly draws ambient air through high efficiency filters and then delivers the filtered air to the facepiece. This air is blown into the facepiece at flow rates that generally exceed the expected inhalation flow rates. The pressure inside the facepiece reduces face-seal leakage to low levels, particularly during the relatively low inhalation rates expected in health-care settings. PAPRs with a tight-fitting facepiece have <2% face-seal leakage under routine conditions (278). PAPRs with loose-fitting facepieces, hoods, or helmets have <4% inward leakage under routine conditions (278). Therefore, a PAPR might offer lower levels of face-seal leakage than nonpowered, half-mask respirators.
Filter penetration. Aerosol penetration through respirator filters depends on at least five independent variables: 1) filtration characteristics for each type of filter, 2) size distribution of the droplets in the aerosol, 3) linear velocity through the filtering material, 4) filter loading (i.e., amount of contaminant deposited on the filter), and 5) electrostatic charges on the filter and on the droplets in the aerosol (284). When N95 disposable respirators are used, filter penetration might approach 5% (50% of the allowable leakage of 10% for an N95 disposable respirator). When high efficiency filters are used in PAPRs or half-facepiece respirators, filter efficiency is high (effectively 100%), and filter penetration is less of a consideration. Therefore, for high efficiency or 100-series filter respirators, the majority of inward leakage of droplet nuclei occurs at the respirator's face-seal or exhalation valve.
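The face-seal and filter contributions can be combined in a rough model. The sketch below follows the text in splitting the N95's allowable 10% total leakage into roughly 5% filter penetration and 5% face-seal leakage; treating the two as simply additive is a simplification, not a certified test method.

```python
def total_inward_leakage(faceseal_fraction: float, filter_penetration: float) -> float:
    """Rough additive model of total inward leakage for a
    tight-fitting respirator (a simplification)."""
    return faceseal_fraction + filter_penetration

def nominal_protection_factor(leakage_fraction: float) -> float:
    """Outside-to-inside concentration ratio implied by the leakage."""
    return 1.0 / leakage_fraction

# N95: ~5% filter penetration plus ~5% face-seal leakage -> 10% total.
n95 = total_inward_leakage(0.05, 0.05)
print(f"N95: total leakage ~{n95:.0%}, protection factor ~{nominal_protection_factor(n95):.0f}")

# Tight-fitting PAPR: <2% face-seal leakage, effectively 0% penetration
# through a high efficiency filter.
papr = total_inward_leakage(0.02, 0.0)
print(f"PAPR: total leakage <{papr:.0%}, protection factor >{nominal_protection_factor(papr):.0f}")
```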
# Implementing a Respiratory-Protection Program
If respirators are used in a health-care setting, OSHA requires the development, implementation, administration, and periodic reevaluation of a respiratory-protection program (271,277,278). The most critical elements of a respiratory-protection program include 1) assignment of responsibility, 2) training, and 3) fit testing (1). All HCWs who use respirators for protection against infection with M. tuberculosis should be included in the respiratory-protection program.
Visitors to AII rooms and other areas with patients who have suspected or confirmed infectious TB disease may be offered respirators (e.g., N95 disposable respirators) and should be instructed by an HCW on the use of the respirator before entering an AII room (see Respiratory Protection section for User-Seal Check FAQs). The health-care setting should develop a policy on use of respirators by visitors.
The number of HCWs included in the respiratory-protection program will vary depending on the 1) number of persons who have suspected or confirmed TB disease examined in a setting, 2) number of rooms or areas in which patients with suspected or confirmed infectious TB disease stay or are encountered, and 3) number of HCWs needed to staff these rooms or areas. In settings in which respiratory-protection programs are required, enough HCWs should be included to provide adequate care for patients with suspected or confirmed TB disease. However, administrative measures should be used to limit the number of HCWs exposed to M. tuberculosis (see Prompt Triage).
Information on the development and management of a respiratory-protection program is available in technical training courses that cover the basics of respiratory protection. Such courses are offered by OSHA, the American Industrial Hygiene Association, universities, manufacturers, and private contractors. To be effective and reliable, respiratory-protection programs must include at least the following elements (274,277,278).
# Assignment of Responsibility
One person (the program administrator) must be in charge of the respiratory-protection program and be given the authority and responsibility to manage all aspects of the program. The administrator must have sufficient knowledge (obtained by training or experience) to develop and implement a respiratory-protection program. Preferably, the administrator should have a background in industrial hygiene, safety, health care, or engineering. The administrator should report to the highest official possible (e.g., manager of the safety department, supervisor of nurses, HCWs' health manager, or infection-control manager) and should be allocated sufficient time to administer the respiratory-protection program in addition to other assigned duties.
# Standard Operating Procedures
The effectiveness of a respiratory-protection program requires the development of written standard procedures. These procedures should include information and guidance for the proper selection, use, and care of respirators (274).
# Screening
HCWs should not be assigned a task requiring use of respirators unless they are physically able to perform job duties while wearing the respirator. HCWs who might need to use a respirator should be screened by a physician or other licensed health-care professional for pertinent medical conditions at the time they are hired and then re-screened periodically (274). The screening process should begin with a screening questionnaire for pertinent medical conditions, the results of which should be used to identify HCWs who need further evaluation (Appendix G). Unless prescribed by the screening physician, serial physical examination or testing with chest radiographs or spirometry is neither necessary nor required (287). Trainees should be provided resources as an adjunct to the respiratory-protection program.
# Training
Training for HCWs who use respirators should include the following:

- Opportunities to handle and wear a respirator until they are proficient (see Supplement, Fit Testing).
- Educational material for use as references.
- Instructions to refer all respirator problems immediately to the respirator program administrator.
# Selection
Filtering facepiece respirators used for protection against M. tuberculosis must be selected from those approved by CDC/NIOSH under the provisions of 42 CFR 84 (http://www.cdc.gov/niosh/celintro.html). A listing of CDC/NIOSH-approved disposable particulate respirators (filtering facepieces) is available at http://www.cdc.gov/niosh/npptl/topics/respirators/disp_part. If a health-care setting uses respirators for protection against other regulated hazards (e.g., formaldehyde and ethylene oxide), then these potential exposures should be specifically addressed in the program. Combination product surgical mask/N95 disposable respirators (respirator portion certified by CDC/NIOSH and surgical mask portion listed by FDA) are available that provide both respiratory protection and bloodborne pathogen protection. Respirators can be selected through consultation with respirator fit-testing experts, CDC, occupational health and infection-control professional organizations, peer-reviewed research, respirator manufacturers, and advanced respirator training courses (10,(280)(281)(282)(283)(284)(285)(286)(287)(288)(289).
# Fit Testing
A fit test is used to determine which respirator fits the user adequately and to ensure that the user knows when the respirator fits properly. After a risk assessment is conducted to validate the need for respiratory protection, fit testing should be performed during the initial respiratory-protection program training and periodically thereafter, in accordance with federal, state, and local regulations.
Fit testing provides a method to determine which respirator model and size fits the wearer best and to confirm that the wearer can properly fit the respirator. Periodic fit testing for respirators used in environments where a risk for M. tuberculosis transmission exists can serve as an effective training tool in conjunction with the content included in employee training and retraining. The frequency of periodic fit testing should be determined by the occurrence of 1) a risk for transmission of M. tuberculosis, 2) a change in facial features of the wearer, 3) a medical condition that would affect respiratory function, 4) a change in the physical characteristics of the respirator (despite the same model number), or 5) a change in the model or size of the assigned respirator (281).
# Inspection and Maintenance
Respirator maintenance should be an integral part of an overall respirator program. Maintenance applies both to respirators with replaceable filters and to respirators that are classified as disposable but are reused. Manufacturer instructions for inspecting, cleaning, maintaining, and using (or reuse) respirators should be followed to ensure that the respirator continues to function properly (278).
When respirators are used for protection against noninfectious aerosols (e.g., wood dust) that might be present in the air in heavy concentrations, the filter can become obstructed with airborne material. This obstruction in the filter material can result in increased resistance, causing breathing to be uncomfortable. In health-care settings in which respirators are used for protection against biologic aerosols, the concentration of infectious particles in the air is probably low. Thus, the filter in a respirator is unlikely to become obstructed with airborne material. In addition, no evidence exists to indicate that particles that affect the filter material in a respirator are reaerosolized easily. Therefore, the filter material used in respirators in health-care settings might remain functional for weeks. Because electrostatic filter media can degrade, the manufacturer should be contacted for the product's established service life to confirm filter performance.
Respirators with replaceable filters are reusable, and a respirator classified as disposable can be reused by the same HCW as long as it remains functional and is used in accordance with local infection-control procedures. Respirators with replaceable filters and filtering facepiece respirators can be reused by HCWs as long as they have been inspected before each use and are within the specified service life of the manufacturer. If the filter material is physically damaged or soiled or if the manufacturer's service life criterion has been exceeded, the filter (in respirators with replaceable filters) should be changed or the disposable respirator should be discarded according to local regulations. Infection-control personnel should develop standard procedures for storing, reusing, and disposing of respirators that have been designated for disposal.
# Evaluation
The respirator program must be evaluated periodically to ensure its continued effectiveness.
# Cleaning, Disinfecting, and Sterilizing Patient-Care Equipment and Rooms
# General
Medical instruments and equipment, including medical waste, used on patients who have TB disease are usually not involved in the transmission of M. tuberculosis (478)(479)(480). However, transmission of M. tuberculosis and pseudo-outbreaks (e.g., contamination of clinical specimens) have been linked to inadequately disinfected bronchoscopes contaminated with M. tuberculosis (80,81,160,163,164,166). Guidelines for cleaning, disinfecting, and sterilizing flexible endoscopic instruments have been published (481)(482)(483)(484)(485).
The rationale for cleaning, disinfecting, or sterilizing patient-care instruments and equipment can be understood more readily if medical devices, equipment, and surgical materials are divided into three general categories (486). The categories are critical, semicritical, and noncritical and are based on the potential risk for infection if an item remains contaminated at the time of use.
# Critical Medical Instruments
Instruments that are introduced directly into the bloodstream or other normally sterile areas of the body (e.g., needles, surgical instruments, cardiac catheters, and implants) are critical medical instruments. These items should be sterile at the time of use.
# Semicritical Medical Instruments
Instruments that might come into contact with mucous membranes but do not ordinarily penetrate body surfaces (e.g., noninvasive flexible and rigid fiberoptic endoscopes or bronchoscopes, endotracheal tubes, and anesthesia breathing circuits) are semicritical medical instruments. Although sterilization is preferred for these instruments, high-level disinfection that destroys vegetative microorganisms, the majority of fungal spores, mycobacteria (including tubercle bacilli), and small nonlipid viruses can be used. Meticulous cleaning of such items before sterilization or high-level disinfection is essential (481). When an automated washer is used to clean endoscopes and bronchoscopes, the washer must be compatible with the instruments to be cleaned (481,487). High-level disinfection can be accomplished either with manual procedures alone or with an automated endoscope reprocessor preceded by manual cleaning (80,481). In all cases, manual cleaning is an essential first step in the process to remove debris from the instrument.
# Noncritical Medical Instruments or Devices
Instruments or devices that either do not ordinarily touch the patient or touch only the patient's intact skin (e.g., crutches, bed boards, and blood pressure cuffs) are noncritical medical instruments. These items are not associated with transmission of M. tuberculosis. When noncritical instruments or equipment are contaminated with blood or body substances, they should be cleaned and then disinfected with a hospital-grade, Environmental Protection Agency (EPA)-registered germicide disinfectant with a label claim for tuberculocidal activity (i.e., an intermediate-level disinfectant). Tuberculocidal activity is not necessary for cleaning agents or low-level disinfectants that are used to clean or disinfect minimally soiled noncritical items and environmental surfaces (e.g., floors, walls, tabletops, and surfaces with minimal hand contact).
# Disinfection
The rationale for use of a disinfectant with tuberculocidal activity is to ensure that other potential pathogens with less intrinsic resistance than that of mycobacteria are killed. A common misconception about surface disinfectants in health care concerns the underlying purpose of products labeled as tuberculocidal germicides. Such products will not interrupt or prevent transmission of M. tuberculosis in health-care settings, because TB is not acquired from environmental surfaces. The tuberculocidal claim is instead used as a benchmark of germicidal potency. Because mycobacteria have the highest intrinsic level of resistance among vegetative bacteria, viruses, and fungi, any germicide with a tuberculocidal claim on the label (i.e., an intermediate-level disinfectant) is considered capable of inactivating many pathogens, including much less resistant organisms such as the bloodborne pathogens (e.g., hepatitis B virus, hepatitis C virus, and HIV). Protocols and regulations that specify tuberculocidal chemicals for surface disinfection rest on this broad-spectrum potency rather than on the product's specific potency against mycobacteria.
Policies of health-care settings should specify whether cleaning, disinfecting, or sterilizing an item is necessary to decrease the risk for infection. Decisions regarding decontamination processes should be based on the intended use of the item, not on the diagnosis of the condition of the patient for whom the item is used. Selection of chemical disinfectants depends on the intended use, the level of disinfection required, and the structure and material of the item to be disinfected.
The same cleaning procedures used in other rooms in the health-care setting should be used to clean AII rooms. However, personnel should follow airborne precautions while cleaning these rooms when they are still in use. Personal protective equipment is not necessary during the final cleaning of an AII room after a patient has been discharged if the room has been ventilated for the appropriate amount of time (Table 1).
# Frequently Asked Questions (FAQs)
The following are FAQs regarding TST, QFT-G, BAMT, treatment for LTBI, risk assessment, environmental controls, respiratory protection, and cough-inducing and aerosol-generating procedures.
# TST and QFT-G
- Does having more than one TST placed in 1 year pose any risk? No. No risk exists from having TSTs placed multiple times per year.
- Can repeated TSTs, by themselves, cause the TST result to convert from negative to positive? No, the TST itself does not cause false-positive results. Exposure to other mycobacteria or BCG vaccination can cause false-positive TST results.
- What defines a negative TST result? A TST result of 0 mm, or a measurement below the defined cut point for each criteria category, is considered a negative TST result (Box 3).
- What defines a positive TST result? A TST result at or above the defined cut point for each criteria category is considered a positive TST result (Box 3). The cut point (5 mm, 10 mm, or 15 mm) varies according to the purpose of the test (e.g., infection-control surveillance or medical and diagnostic evaluation, or contact investigation versus baseline testing).
- What defines a false-negative result? A false-negative TST or QFT-G result is one that is interpreted as negative for a particular purpose (i.e., infection-control surveillance versus medical and diagnostic evaluation) in a person who is actually infected with M. tuberculosis. False-negative TST results might be caused by incorrect TST placement (too deep or too shallow), incorrect reading of the TST result, use of an incorrect antigen, or because the person being tested is anergic (i.e., unable to respond to the TST because of an immunocompromising condition) or sick with TB disease.
- What defines a false-positive result? A false-positive TST or QFT-G result is one that is interpreted as positive for a particular purpose (i.e., infection-control surveillance versus medical and diagnostic evaluation) in a person who is actually not infected with M. tuberculosis. False-positive TST results are more likely to occur in persons who have been vaccinated with BCG or who are infected with NTM, also known as mycobacteria other than TB (MOTT). A false-positive TST result might also be caused by incorrect reading of the TST result (reading erythema rather than induration) or use of an incorrect antigen (e.g., tetanus toxoid).
- Is placing a TST on a nursing mother safe? Yes, placing a TST on a nursing mother is safe.
- A pregnant HCW in a setting is reluctant to get a TST. Should she be encouraged to have the test administered? Yes, placing a TST on a pregnant woman is safe. The HCW should be encouraged to have a TST or offered BAMT. The HCW should receive education that 1) pregnancy is not a contraindication to having a TST administered and 2) skin testing does not affect the fetus or the mother. Tens of thousands of pregnant women have received TSTs since the test was developed, and no documented episodes of TST-related fetal harm have been reported. Guidelines issued by ACOG emphasize that postponement of the application of a TST as indicated, and thus postponement of the diagnosis of infection with M. tuberculosis, during pregnancy is unacceptable.
- What is a boosted TST reaction? Boosting occurs when a person has a negative TST (i.e., false-negative) result years after infection with M. tuberculosis and then a positive subsequent TST result. The positive TST result is caused by a boosted immune response to previous sensitivity rather than by a new infection (false-positive TST conversion). Two-step testing reduces the likelihood of mistaking a boosted reaction for a new infection.
- What procedure should be followed for a newly hired HCW who had a documented negative TST result 3 months ago at their previous job? This person should receive one baseline TST upon hire (ideally before the HCW begins assigned duties). The negative TST result from the 3 months preceding new employment (or a documented negative TST result anytime within the previous 12 months) should be considered the first step of the baseline two-step TST. If the HCW does not have documentation of any TST result, the HCW should be tested with a baseline two-step TST (one TST upon hire and one TST placed 1-3 weeks after the first TST result was read).
- Why are two-step TSTs important for the baseline (i.e., at the beginning of an HCW's employment)? If TST is used for TB screening (rather than BAMT), performing two-step TST at baseline minimizes the possibility that boosting will lead to suspicion of transmission of M. tuberculosis in the setting during a later contact investigation or during serial testing (false-positive TST conversions).
- If an HCW has a baseline first-step TST result between 0-9 mm, does a second-step TST need to be placed? Yes, if the baseline first-step TST result is between 0-9 mm (i.e., negative), a second-step TST should be placed 1-3 weeks after the first TST result was read.
- An HCW's TST result increased from baseline, but a ≥10 mm increase did not occur in the TST result to meet the criteria for a TST conversion. How should this reading be interpreted? The TST result needs to be interpreted from two perspectives: 1) administrative and 2) individual medical interpretation. Because an increase of ≥10 mm did not occur, the result would not be classified as a TST conversion for administrative purposes. However, this HCW should be referred for a medical evaluation. The following criteria are used to determine whether a TST result is positive or negative on individual clinical grounds: 1) absolute measured induration (i.e., ≥5, ≥10, or ≥15 mm induration, depending on the level of risk and purpose of testing); 2) the change in the size of the TST result; 3) the time frame of the change; 4) the risk for exposure, if any; and 5) the occurrence of other documented TST conversions in the setting. For HCWs at low risk for LTBI, TST results of 10-14 mm can be considered negative from a clinical standpoint, and these HCWs should not have repeat TSTs, because an additional increase in induration of ≥10 mm will not be useful in determining the likelihood of LTBI.
- Are baseline two-step TSTs needed for HCWs who begin jobs that involve limited contact with patients (e.g., medical records staff)? Yes, all HCWs who might share air space with patients should receive baseline two-step TST (or one-time BAMT) before starting duties. However, in certain settings, a choice might be offered not to perform baseline TST on HCWs who will never be in contact with or share air space with patients who have TB disease, or who will never be in contact with clinical specimens (e.g., telephone operators in a separate building from patients).
- A setting conducts skin testing annually on the anniversary of each HCW's employment. Last year, multiple TST conversions occurred in April; therefore, all HCWs received a TST during that month. In the future, do all HCWs need to be tested annually in April? No. After a contact investigation is performed, the best and preferred schedule for annual TB screening is on the anniversary of the HCW's employment date or on their birthday (rather than testing all HCWs at the same time each year), because it increases the opportunity for early recognition of infection-control problems that can lead to TST conversions.
- When a contact investigation is conducted, offer TB screening to all persons named by the patient as work or social contacts during the infectious period. Determining whether to broaden the investigation will depend on whether evidence of transmission to any of the above contacts exists (positive TST or BAMT results or conversions), the duration of the potential exposure, and the intensity of the exposure (e.g., in a poorly ventilated environment versus outdoors).
If the exposure was to pulmonary TB that was cavitary on chest radiograph or if the patient had positive AFB sputum smear results, the minimum exposure duration for a person to be considered a contact would usually be shorter. Nonetheless, infection with M. tuberculosis requires some degree of prolonged or regular exposure (i.e., days to weeks, not just a few hours).
- If an HCW in a setting has a latex allergy, should this person receive a TST? A person with a latex allergy can receive a TST when latex-free products are used. Latex allergy can be a contraindication to skin testing if the allergy is severe and the products used to perform the test (e.g., syringe plungers, PPD antigen bottle stoppers, and gloves) contain latex. Latex-free products are, however, usually available. If a person with a latex allergy does have a TST performed using products or equipment that contain latex, interpretation of the TST result can be difficult, because the TST reaction might be the result of the latex allergy, a reaction to PPD, or a combination of both. Consider repeating the TST using latex-free products, or use BAMT.
- Should another TST be administered to an HCW who had a documented TST result of 16 mm and, 1 year later, a TST result read as 0 mm? If documentation existed for the 16 mm result, administering another TST to the HCW subsequently was not necessary. One or both of these TST results could be false results. The first result might have been documented as 16 mm, but perhaps 16 mm of erythema was measured and no induration was present.
The second result of 0 mm might have been caused by incorrect administration of the TST (i.e., too deep or too shallow), or the result might have been read and recorded incorrectly (if it was actually positive). In this instance, another TST should be placed, a BAMT should be offered, or, if TB disease is suspected, a chest radiograph should be performed.
- What steps should be taken if the TST is administered intramuscularly instead of intradermally? QC for administering TSTs is critical. If the TST is administered intramuscularly (too deep), repeat the skin test immediately, or offer BAMT.
- How are annual TST conversion rates for HCWs calculated? A TST conversion is a change in the result of a test for M. tuberculosis infection wherein the condition is interpreted as having progressed from uninfected to infected. Annual TST conversion rates are calculated for a given year by dividing the number of test conversions among HCWs in the setting that year (numerator) by the total number of HCWs who received tests in the setting that year (denominator), multiplied by 100, as shown below. By calculating annual TST conversion rates, year-to-year comparisons can be used to identify transmission of M. tuberculosis that was not previously detected.
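As a worked illustration (the counts here are hypothetical):

$$\text{annual TST conversion rate (\%)} = \frac{\text{TST conversions among HCWs during the year}}{\text{HCWs tested during the year}} \times 100$$

For example, 4 conversions among 200 HCWs tested in a given year yield a conversion rate of (4 / 200) × 100 = 2%.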
# Environmental Controls

- Portable HEPA filtration units recirculate room air, and the HEPA filters remove nearly all particles in the size range of droplet nuclei, diluting the concentration of infectious particles in the room.

# Respiratory Protection
- What is the difference between a CDC/NIOSH-certified respirator and a surgical or procedure mask? Respirators are designed to help reduce the wearer's (i.e., the HCW's) exposure to airborne particles. The primary purpose of a surgical or procedure mask is to help prevent biologic particles from being expelled into the air by the wearer (i.e., the patient).
- How important is the fit of the respirator? Fit is critical. If a respirator does not fit tightly on the face, airborne hazards can penetrate or enter underneath the facepiece seal and into the breathing zone. Before each use, the wearer of a respirator should perform a user-seal check to minimize contaminant leakage into the facepiece.
- How do I perform a respirator user-seal check? Performing a user-seal check (formerly called a "fit check") each time the respirator is donned is critical to ensure adequate respiratory protection. The seal checks for respirators are described in the respirator user instructions, which should be consulted before the respirator is used. The two usual types of user-seal checks are positive-pressure and negative-pressure checks.
To check the positive-pressure seal after donning the respirator, the wearer should cover the surface of the respirator with their hands or with a piece of household plastic film and exhale gently. If air is felt escaping around the facepiece, the respirator should be repositioned and the user-seal check performed again. If the wearer does not feel air escaping around the facepiece, the positive-pressure user-seal check was successful.
To check the negative-pressure seal after donning the respirator, the wearer should cover the surface of the respirator and gently inhale, which should create a vacuum, causing the respirator to be drawn in toward the face. If the respirator is not drawn in toward the face or if the wearer feels air leaking around the face seal, the respirator should be removed and examined for defects (e.g., a small hole or poor molding of the respirator to the face). If no holes are found, the respirator should be repositioned and readjusted, and a second negative-pressure user-seal check should be attempted. If the check is still not successful, try a new respirator.
- Is performing a user-seal check (formerly called "fit check") on a respirator before each use always necessary? Yes, performing a user-seal check on respirators before each use is essential to minimize contaminant leakage into the facepiece. Each respirator manufacturer has a recommended user-seal check procedure that should be followed by the user each time the respirator is worn.
# What is a respirator fit test and who does fit testing?

A fit test is used to determine which respirator does or does not fit the user adequately and to ensure that the user knows when the respirator fits properly.

# Glossary

BAMT converter A change from a negative to a positive BAMT result over a 2-year period.
boosting When nonspecific or remote sensitivity to tuberculin purified protein derivative (PPD) in the skin test wanes or disappears over time, subsequent TSTs can restore the sensitivity. This process is called boosting or the booster phenomenon. An initially small TST reaction size is followed by a substantial reaction size on a later test, and this increase in millimeters of induration can be confused with a conversion or a recent M. tuberculosis infection. Two-step testing is used to distinguish new infections from boosted reactions in infection-control surveillance programs.

bronchoscopy A procedure for examining the lower respiratory tract in which the end of the endoscopic instrument is inserted through the mouth or nose (or tracheostomy) and into the respiratory tree. Bronchoscopy can be used to obtain diagnostic specimens. Bronchoscopy also creates a high risk for M. tuberculosis transmission to HCWs if it is performed on an untreated patient who has TB disease (even if the patient has negative AFB smear results), because it is a cough-inducing procedure.
case A particular instance of a disease (e.g., TB). A case is detected, documented, and reported.
# cavity (pulmonary)
A hole in the lung parenchyma, usually not involving the pleural space. Although a lung cavity can develop from multiple causes, and its appearance is similar regardless of its cause, in pulmonary TB disease, cavitation results from the destruction of pulmonary tissue by direct bacterial invasion and an immune interaction triggered by M. tuberculosis. A TB cavity large enough to see on a plain chest radiograph predicts infectiousness.
clinical examination A physical evaluation of the clinical status of a patient by a physician or equivalent practitioner.
close contact (TB) A person who has shared the same air space in a household or other enclosed environment for a prolonged period (days or weeks, not minutes or a couple hours) with a person with suspected or confirmed TB disease. Close contacts have also been referred to as high-priority contacts because they have the highest risk for infection with M. tuberculosis.
# cluster (TB)
A group of patients with LTBI or TB disease who are linked by epidemiologic, location, or genotyping data. Two or more TST conversions within a short period might suggest transmission of M. tuberculosis within the setting. A genotyping cluster is two or more cases with isolates that have an identical genotyping pattern.
# infectious period
The period during which a person with TB disease might have transmitted M. tuberculosis organisms to others. For patients with positive AFB sputum smear results, the infectious period begins 3 months before the collection date of the first positive smear result or the symptom onset date (whichever is earlier) and ends on the date the patient is placed into AII or the date of collection of the first of consistently negative smear results (whichever is earlier). For patients with negative AFB sputum smear results, the infectious period extends from 1 month before the symptom onset date and ends when the patient is placed into AII.
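The date arithmetic in this definition can be sketched as follows. This is a minimal illustration only, approximating 3 months as 90 days and 1 month as 30 days; the function name and example dates are hypothetical, and clinical judgment and health-department guidance take precedence.

```python
from datetime import date, timedelta
from typing import Optional

def infectious_period_start(symptom_onset: date,
                            first_positive_smear: Optional[date]) -> date:
    """Estimate the start of the infectious period per the rule above."""
    if first_positive_smear is not None:
        # Smear-positive: 3 months (~90 days) before the first positive
        # smear collection date or the symptom onset date, whichever is earlier.
        return min(first_positive_smear - timedelta(days=90), symptom_onset)
    # Smear-negative: 1 month (~30 days) before the symptom onset date.
    return symptom_onset - timedelta(days=30)

# Example: symptoms began 2005-03-15; first positive smear collected 2005-04-01.
print(infectious_period_start(date(2005, 3, 15), date(2005, 4, 1)))  # 2005-01-01
```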
interferon-γ release assays (IGRA) A type of ex vivo test that detects cell-mediated immune response to M. tuberculosis by measuring the release of interferon-γ (a cytokine). In the United States, QFT-G is a currently available IGRA.
# isoniazid (INH)
A highly active antituberculosis chemotherapeutic agent that is a cornerstone of treatment for TB disease and the cornerstone of treatment for LTBI.
laryngeal TB A form of TB disease that involves the larynx and can be highly infectious.

prevalence The proportion of persons in a population who have a disease at a specific time.
protection factor A general term for three specific terms: 1) APF, 2) SWPF, and 3) WPF. These terms refer to different methods of defining adequacy of respirator fit. See also APF, SWPF, and WPF.
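All three quantities share the same general form: the ratio of the aerosol concentration outside the respirator to the concentration inside the facepiece (a generic expression for illustration; APF, SWPF, and WPF differ in how and where the concentrations are measured):

$$\text{protection factor} = \frac{C_{\text{outside}}}{C_{\text{inside}}}$$

For example, a respirator that keeps the concentration inside the facepiece at one tenth of the ambient concentration provides a protection factor of 10.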
pulmonary TB TB disease that occurs in the lung parenchyma, usually producing a cough that lasts ≥3 weeks.
purified protein derivative (PPD) tuberculin A material used in diagnostic tests for detecting infection with M. tuberculosis. In the United States, PPD solution is approved for administration as an intradermal injection (5 TU per 0.1 mL), a diagnostic aid for LTBI (see TST). In addition, PPD tuberculin was one of the antigens in the first-generation QFT.
qualitative fit test (QLFT) A pass-fail fit test to assess the adequacy of respirator fit that relies on the response of the person to the test agent.
quality control (QC) A function to ensure that project tools and procedures are reviewed and verified according to project standards.
QFT and QFT-G Types of BAMT; in vitro cytokine assays that detect cell-mediated immune response (see also DTH) to M. tuberculosis in heparinized whole blood obtained by venipuncture. The test requires only a single patient encounter, and the result can be ready within 1 day. In 2005, QuantiFERON®-TB was replaced by QuantiFERON®-TB Gold (QFT-G), which has greater specificity because of antigen selection. QFT-G appears to be capable of distinguishing between the sensitization caused by M. tuberculosis infection and that caused by BCG vaccination.
quantitative fit test (QNFT) An assessment of the adequacy of respirator fit by numerically measuring the amount of leakage into the respirator.
recirculation Ventilation in which all or the majority of the air exhausted from an area is returned to the same area or other areas of the setting.
recommended exposure limit (REL) The occupational exposure limit established by CDC/NIOSH. RELs are intended to suggest levels of exposure to which the majority of HCWs can be exposed without experiencing adverse health effects.

reinfection A second infection that follows from a previous infection by the same causative agent. Frequently used when referring to an episode of TB disease resulting from a subsequent infection with M. tuberculosis of a different genotype.
source control A process for preventing or minimizing emission (e.g., of aerosolized M. tuberculosis) at the place of origin. Examples of source-control methods are booths in which a patient coughs and produces sputum, BSCs in laboratories, and local exhaust ventilation.

spirometry A procedure used to measure the volume and timing of air inspired and expired; from these measurements, calculations can be made of the effectiveness of the lungs.
sputum Mucus containing secretions coughed up from inside the lungs. Tests of sputum (e.g., smear and culture) can confirm pulmonary TB disease. Sputum is different from saliva or nasal secretions, which are unsatisfactory specimens for detecting TB disease. However, specimens suspected to be inadequate should still be processed because positive culture results can still be obtained and might be the only bacteriologic indication of disease.
sputum induction A method used to obtain sputum from a patient who is unable to cough up a specimen spontaneously. The patient inhales a saline mist, which stimulates coughing from deep inside the lungs.
supervised TST administration A procedure in which an expert TST trainer supervises a TST trainee who performs all procedures on the procedural observation checklist for administering TSTs.
supervised TST reading A procedure in which an expert TST trainer supervises a TST trainee who performs all procedures on the procedural observation checklist for reading TST results.
suspected TB A tentative diagnosis of TB that will be confirmed or excluded by subsequent testing. Cases should not remain in this category for longer than 3 months.

symptomatic A term applied to a patient with health-related complaints (symptoms) that might indicate the presence of disease. In certain instances, the term is applied to a medical condition (e.g., symptomatic pulmonary TB).
symptom screen A procedure used during a clinical evaluation in which patients are asked if they have experienced any departure from normal in function, appearance, or sensation related to TB disease (e.g., cough).
targeted testing A strategy to focus testing for infection with M. tuberculosis in persons at high risk for LTBI and for those at high risk for progression to TB disease if infected.
# tuberculosis (TB) disease
Condition caused by infection with a member of the M. tuberculosis complex that has progressed to causing clinical (manifesting symptoms or signs) or subclinical (early stage of disease in which signs or symptoms are not present, but other indications of disease activity are present) illness. The bacteria can attack any part of the body, but disease is most commonly found in the lungs (pulmonary TB). Pulmonary TB disease can be infectious, whereas extrapulmonary disease (occurring at a body site outside the lungs) is not infectious, except in rare circumstances. When the only clinical finding is specific chest radiographic abnormalities, the condition is termed "inactive TB" and can be differentiated from active TB disease, which is accompanied by symptoms or other indications of disease activity (e.g., the ability to culture reproducing TB organisms from respiratory secretions or specific chest radiographic findings).
TB case A particular episode of clinical TB disease. Refers only to the disease, not to the person with the disease. According to local laws and regulations, TB cases and suspected TB cases must be reported to the local or state health department.
# TB contact
A person who has shared the same air space with a person who has TB disease for a sufficient amount of time to allow possible transmission of M. tuberculosis.
tubercle bacilli M. tuberculosis organisms.
tuberculin A precipitate made from a sterile filtrate of M. tuberculosis culture medium.
tumor necrosis factor-alpha (TNF-α) A small molecule (called a cytokine) discovered in the blood of animals (and humans) with tumors but which has subsequently been determined to be an essential host mediator of infection and inflammation. TNF-α is released when humans are exposed to bacterial products (e.g., lipopolysaccharide) or BCG. Drugs (agents) that block human TNF-α have been demonstrated to increase the risk for progression to TB disease in persons who are latently infected.
two-step TST Procedure used for the baseline skin testing of persons who will receive serial TSTs (e.g., HCWs and residents or staff of correctional facilities or long-term-care facilities) to reduce the likelihood of mistaking a boosted reaction for a new infection. If an initial TST result is classified as negative, a second step of a two-step TST should be administered 1-3 weeks after the first TST result was read. If the second TST result is positive, it probably represents a boosted reaction, indicating infection most likely occurred in the past and not recently. If the second TST result is also negative, the person is classified as not infected. Two-step skin testing has no place in contact investigations or in other circumstances in which ongoing transmission of M. tuberculosis is suspected.
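The decision logic of the two-step baseline procedure can be sketched as follows. This is a simplified illustration assuming a single cut point; in practice the cut point (5, 10, or 15 mm) depends on the risk category and purpose of testing, and the function name is hypothetical.

```python
from typing import Optional

def two_step_baseline(first_mm: int,
                      second_mm: Optional[int] = None,
                      cutoff_mm: int = 10) -> str:
    """Classify a baseline two-step TST per the procedure described above."""
    if first_mm >= cutoff_mm:
        return "baseline positive: refer for medical evaluation"
    if second_mm is None:
        return "first step negative: place second TST 1-3 weeks after reading"
    if second_mm >= cutoff_mm:
        # Boosted reaction: infection most likely occurred in the past.
        return "boosted reaction: classify as previously infected, not a conversion"
    return "baseline negative: enroll in serial testing"

print(two_step_baseline(0))      # second step needed
print(two_step_baseline(0, 12))  # boosted reaction
print(two_step_baseline(0, 0))   # baseline negative
```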
# ulceration (TST)
A break in the skin or mucosa with loss of surface tissue.
ultraviolet germicidal irradiation (UVGI) Use of ultraviolet germicidal irradiation to kill or inactivate microorganisms.
# UVGI lamp
An environmental control measure that includes a lamp that kills or inactivates microorganisms by emitting ultraviolet germicidal irradiation, predominantly at a wavelength of 254 nm (intermediate between visible light and x-rays). UVGI lamps can be used in ceiling or wall fixtures or within air ducts of ventilation systems as an adjunct to other environmental control measures.
user-seal check Formerly called a "fit check." A procedure performed each time a respirator is donned to check for a proper seal of the respirator.
variable air volume (VAV) VAV ventilation systems are designed to vary the quantity of air delivered to a space while maintaining a constant supply air temperature to achieve the desired temperature in the occupied space. Minimum levels of mechanical ventilation and outside air are maintained.
vesiculation An abnormal elevation of the outer layer of skin enclosing a watery liquid; a blister.

wheal A small bump that is produced when a TST is administered. The wheal disappears approximately 10 minutes after TST placement.
workplace protection factor (WPF) A measure of the protection provided in the workplace by a properly functioning respirator when correctly worn and used.
- Implement a written infection-control plan in the setting. Update annually.
- Perform an annual risk assessment for the setting.
- In medical offices or ambulatory-care settings where patients with TB disease are treated, at least one room should meet requirements for an AII room to be used for patients with suspected or confirmed infectious TB disease (Table 2).
- Perform dialysis for patients with suspected or confirmed infectious TB disease in a room that meets requirements for an AII room (Table 2).
- Schedule dialysis for patients with TB disease when a minimum number of HCWs and other patients are present and at the end of the day to maximize the time available for removal of airborne contamination (Table 1).
- For dental staff performing procedures on a patient with suspected or confirmed infectious TB disease, at least N95 disposable respirators should be worn.
- For HCWs, visitors, and others entering the AII room of a patient with suspected or confirmed infectious TB disease, at least N95 disposable respirators should be worn.
- If the patient has signs or symptoms of infectious TB disease (positive AFB sputum smear result), consider having the patient wear a surgical or procedure mask, if possible (e.g., if the patient is not using a breathing circuit), during transport, in waiting areas, or when others are present.
"id": "49ef7b1fd1d1326ba57dc5f56537c1234d988970",
"source": "cdc",
"title": "None",
"url": "None"
} |
The transmission of tuberculosis is a recognized risk in health-care settings. Several recent outbreaks of tuberculosis in health-care settings, including outbreaks involving multidrug-resistant strains of Mycobacterium tuberculosis, have heightened concern about nosocomial transmission. In addition, increases in tuberculosis cases in many areas are related to the high risk of tuberculosis among persons infected with the human immunodeficiency virus (HIV). Transmission of tuberculosis to persons with HIV infection is of particular concern because they are at high risk of developing active tuberculosis if infected. Health-care workers should be particularly alert to the need for preventing tuberculosis transmission in settings in which persons with HIV infection receive care, especially settings in which cough-inducing procedures (e.g., sputum induction and aerosolized pentamidine (AP) treatments) are being performed. Transmission is most likely to occur from patients with unrecognized pulmonary or laryngeal tuberculosis who are not on effective antituberculosis therapy and have not been placed in tuberculosis (acid-fast bacilli (AFB)) isolation. Health-care facilities in which persons at high risk for tuberculosis work or receive care should periodically review their tuberculosis policies and procedures, and determine the actions necessary to minimize the risk of tuberculosis transmission in their particular settings. The prevention of tuberculosis transmission in health-care settings requires that all of the following basic approaches be used: a) prevention of the generation of infectious airborne particles (droplet nuclei) by early identification and treatment of persons with tuberculous infection and active tuberculosis, b) prevention of the spread of infectious droplet nuclei into the general air circulation by applying source-control methods, c) reduction of the number of infectious droplet nuclei in air contaminated with them, and d) surveillance of health-care-facility personnel for tuberculosis and tuberculous infection. Experience has shown that when inadequate attention is given to any of these approaches, the probability of tuberculosis transmission is increased.
Specific actions to reduce the risk of tuberculosis transmission should include a) screening patients for active tuberculosis and tuberculous infection, b) providing rapid diagnostic services, c) prescribing appropriate curative and preventive therapy, d) maintaining physical measures to reduce microbial contamination of the air, e) providing isolation rooms for persons with, or suspected of having, infectious tuberculosis, f) screening health-care-facility personnel for tuberculous infection and tuberculosis, and g) promptly investigating and controlling outbreaks.
Although completely eliminating the risk of tuberculosis transmission in all health-care settings may be impossible, adhering to these guidelines should minimize the risk to persons in these settings. This document was prepared in consultation with experts in tuberculosis, acquired immunodeficiency syndrome, infection-control and hospital epidemiology, microbiology, ventilation and industrial hygiene, respiratory therapy, nursing, and emergency medical services.
# I. INTRODUCTION
# A. Purpose of Document

The purpose of this document is to review the mode and risk of tuberculosis transmission in health-care settings and to make recommendations for reducing the risk of transmission to persons in health-care settings--including workers, patients, volunteers, and visitors. The document may also serve as a useful resource for educating health-care workers about tuberculosis. Several outbreaks of tuberculosis in health-care settings, including outbreaks involving multidrug-resistant strains of M. tuberculosis, have been reported to CDC during the past 2 years (1; CDC, unpublished data). In addition, CDC has recently received numerous requests for information about reducing tuberculosis transmission in health-care settings. Much of the increased concern is due to the occurrence of tuberculosis among persons infected with HIV (2), who are at increased risk of contracting tuberculosis both from reactivation of a latent tuberculous infection (3) and from a new infection (4). Therefore, in this document, emphasis is given to the transmission of tuberculosis among persons with HIV infection, although the majority of patients with tuberculosis in most areas of the country do not have HIV infection.
These recommendations consolidate and update previously published CDC recommendations (5)(6)(7)(8)(9)(10). The recommendations are applicable to all settings in which health care is provided. In this document, the term "tuberculosis," in the absence of modifiers, refers to a clinically apparent active disease process caused by M. tuberculosis (or, rarely, M. bovis or M. africanum). The terms "health-care-facility personnel" and "health-care-facility workers" refer to all persons working in a health-care setting--including physicians, nurses, aides, and persons not directly involved in patient care (e.g., dietary, housekeeping, maintenance, clerical, and janitorial staff, and volunteers).

# B. Epidemiology, Transmission, and Pathogenesis of Tuberculosis

Tuberculosis is not evenly distributed throughout all segments of the population of the United States. Groups known to have a high incidence of tuberculosis include blacks, Asians and Pacific Islanders, American Indians and Alaskan Natives, Hispanics, current or past prison inmates, alcoholics, intravenous (IV) drug users, the elderly, foreign-born persons from areas of the world with a high prevalence of tuberculosis (e.g., Asia, Africa, the Caribbean, and Latin America), and persons living in the same household as members of these groups (5).
M. tuberculosis is carried in airborne particles, known as droplet nuclei, that can be generated when persons with pulmonary or laryngeal tuberculosis sneeze, cough, speak, or sing (11). The particles are so small (1-5 microns) that normal air currents keep them airborne and can spread them throughout a room or building (12). Infection occurs when a susceptible person inhales droplet nuclei containing M. tuberculosis, and bacilli become established in the alveoli of the lungs and spread throughout the body. Two to ten weeks after initial human infection with M. tuberculosis, the immune response usually limits further multiplication and spread of the tuberculosis bacilli. For a small proportion of newly infected persons (usually less than 1%), initial infection rapidly progresses to clinical illness. However, for another group (approximately 5%-10%), illness develops after an interval of months, years, or decades, when the bacteria begin to replicate and produce disease (11). The risk of progression to active disease is markedly increased for persons with HIV infection (3).
The probability that a susceptible person will become infected depends upon the concentration of infectious droplet nuclei in the air. Patient factors that enhance transmission are discussed more fully in section II.B.3. Environmental factors that enhance transmission include a) contact between susceptible persons and an infectious patient in relatively small, enclosed spaces, b) inadequate ventilation that results in insufficient dilution or removal of infectious droplet nuclei, and c) recirculation of air containing infectious droplet nuclei.
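One standard way to formalize this relationship is the Wells-Riley model, introduced here for illustration only (it is not part of the original guidance, and all symbols are model parameters rather than values from this document):

$$P = 1 - e^{-\frac{I\,q\,p\,t}{Q}}$$

where P is the probability of infection, I the number of infectious persons present, q the rate at which an infector generates infectious doses ("quanta") per hour, p the breathing rate of the susceptible person, t the exposure time, and Q the rate at which the room is ventilated with clean air. The model captures the factors listed above: probability of infection rises with the number of infectious sources and the exposure time, and falls as ventilation increases.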
Tuberculosis transmission is a recognized risk in health-care settings (13)(14)(15)(16)(17)(18)(19)(20)(21). The magnitude of the risk varies considerably by type of health-care setting, patient population served, job category, and the area of the facility in which a person works. The risk may be higher in areas where patients with tuberculosis are provided care before diagnosis (e.g., clinic waiting areas and emergency rooms) or where diagnostic or treatment procedures that stimulate patient coughing are performed.
# Determining infectiousness of tuberculosis patients
The infectiousness of a person with tuberculosis correlates with the number of organisms that are expelled into the air, which, in turn, correlates with the following factors: a) anatomic site of disease, b) presence of cough or other forceful expirational maneuvers, c) presence of AFB in the sputum smear, d) willingness or ability of the patient to cover his or her mouth when coughing, e) presence of cavitation on chest radiograph, f) length of time the patient has been on adequate chemotherapy, g) duration of symptoms, and h) administration of procedures that can enhance coughing (e.g., sputum induction).

In high-risk settings, certain techniques can be applied to prevent or to reduce the spread of infectious droplet nuclei into the general air circulation. The application of these techniques, which are called source-control methods because they entrap infectious droplet nuclei as they are emitted by the patient, or "source" (36), is especially important during performance of medical procedures likely to generate aerosols containing infectious particles.
1. Local exhaust ventilation

Local exhaust ventilation is a source-control technique that removes airborne contaminants at or near their sources (37). The use of booths for sputum induction or administration of aerosolized medications (e.g., AP) is an example of local exhaust ventilation for preventing the spread of infectious droplet nuclei generated by these procedures into the general air circulation. Booths used for source control should be equipped with exhaust fans that remove nearly 100% of airborne particles during the time interval between the departure of one patient and the arrival of the next. The time required for removing a given percentage of airborne particles from an enclosed space depends upon the number of air exchanges per hour (ACH) (Table 1), which is determined by the capacity of the exhaust fan in cubic feet per minute (cfm), the number of cubic feet of air in the room or booth, and the rate at which air is entering the room or booth at the intake source.
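Under the standard assumption of perfectly mixed room air, these relationships can be written explicitly (a worked illustration; the flow and volume figures below are hypothetical):

$$\text{ACH} = \frac{Q \times 60}{V}, \qquad t = \frac{60}{\text{ACH}} \ln\!\left(\frac{C_0}{C_t}\right)$$

where Q is the exhaust flow (cfm), V the room or booth volume (cubic feet), and t the time in minutes for the airborne concentration to fall from C₀ to Cₜ. For example, a 200-cfm exhaust fan serving a 2,000-cubic-foot room provides 6 ACH, so removing 99% of airborne particles (C₀/Cₜ = 100) takes roughly 10 × ln 100 ≈ 46 minutes, consistent with Table 1.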
The exhaust fan should maintain negative pressure in the booth with respect to adjacent areas, so that air flows into the booth. Maintaining negative pressure in the booth minimizes the possibility that infectious droplet nuclei in the booth will move into adjacent rooms or hallways. Ideally, the air from these booths should be exhausted directly to the outside of the building (away from air-intake vents, people, and animals, in accordance with federal, state, and local regulations concerning environmental discharges). If direct exhaust to the outside is impossible, the air from the booth could be exhausted through a properly designed, installed, and maintained high-efficiency particulate air (HEPA) filter; however, the efficacy of this method has not been demonstrated in clinical settings (see section II.D.2.a.).

2. Other source-control methods

A simple but important source-control technique is for infectious patients to cover all coughs and sneezes with a tissue, thus containing most liquid drops and droplets before evaporation can occur (38). A patient's use of a properly fitted surgical mask or disposable, valveless particulate respirator (PR) (see section II.D.2.c.) also may reduce the spread of infectious particles. However, since the device would need to be worn constantly for the protection of others, it would be practical in only very limited circumstances (e.g., when a patient is being transported within a medical facility or between facilities).
# D. Reducing Microbial Contamination of Air
Once infectious droplet nuclei have been released into room air, they should be eliminated or reduced in number by ventilation, which may be supplemented by additional measures (e.g., trapping organisms by high-efficiency filtration or killing organisms with germicidal ultraviolet (UV) irradiation (100-290 nanometers)). Health-care-facility workers may also reduce the risk of inhaling contaminated air by using PRs.
Although for the past 2-3 decades ventilation and, to a lesser extent, UV lamps and face masks have been used in health-care settings to prevent tuberculosis transmission, few published data exist on which to evaluate their effectiveness and liabilities or to draw conclusions about the role each method should play. From a theoretical standpoint, none of the four methods (ventilation, UV irradiation, high-efficiency filtration, and face masks) appears to be ideal. None of the methods used alone or in combination can completely eliminate the risk of tuberculosis transmission; however, when used with the other infection-control measures outlined in this document, they can substantially reduce the risk.

1. Ventilation

a. Dilution and removal. Dilution reduces the concentration of contaminants in a room by introducing air that does not contain those contaminants into the room. Air is then removed from the room by exhaust directly to the outside or by recirculation into the general ventilation system of the building. Continuously recirculating air in a room or in a building may result in the accumulation or concentration of infectious droplet nuclei. Air that is likely contaminated with infectious droplet nuclei should be exhausted to the outside, away from intake vents, people, and animals, in accordance with federal, state, and local regulations for environmental discharges.
b. Air mixing. Proper ventilation requires that within-room mixing of air (ventilation efficiency) must be adequate (42). Air mixing is enhanced by locating air-supply outlets at ceiling level and exhaust inlets near the floor, thus providing downward movement of clean air through the breathing zone to the floor area for exhaust.
c. Direction of air flow. For control of tuberculosis transmission, the direction of air flow is as important as dilution. The direction of air flow is determined by the differences in air pressure between adjacent areas, with air flowing from higher pressure areas to lower pressure areas.
In an area occupied by a patient with infectious tuberculosis, air should flow into the potentially contaminated area (the patient's room) from adjacent areas. The patient's room is said to be under lower or negative pressure.
Proper air flow and pressure differentials between areas of a health-care facility are difficult to control because of open doors, movement of patients and staff, temperature, and the effect of vertical openings (e.g., stairwells and elevator shafts) (40). Air-pressure differentials can best be maintained in completely closed rooms. An open door between two areas may reduce any existing pressure differential and could reduce or eliminate the desired effect. Therefore, doors should remain closed, and the close fit of all doors and other closures of openings between pressurized areas should be maintained. For critical areas in which the direction of air flow must be maintained while allowing for patient or staff movement between adjacent areas, an appropriately pressurized anteroom may be indicated.
Examples of factors that can change the direction of air flow include the following: a) dust in exhaust fans, filters, or ducts, b) malfunctioning fans, c) adjustments made to the ventilation system elsewhere in the building, or d) automatic shutdown of outside-air introduction during cold weather. In areas where the direction of air flow is important, trained personnel should monitor air flow frequently to ensure that appropriate conditions are maintained.
Each area to which an infectious tuberculosis patient might be admitted should be evaluated for its potential for the spread of tuberculosis bacilli, considering factors such as the procedures that may be performed there and the ability to make the necessary changes.
Too much ventilation in an area can create problems. In addition to incurring additional expense for marginal benefits, occupants bothered by the drafts may elect to shut down the system entirely. Furthermore, if the concentration of infectious droplet nuclei in an area is high, the levels of ventilation that are practical to achieve may be inadequate to completely remove the contaminants (43).

2. Potential supplemental approaches

a. HEPA filtration. For general-use areas (e.g., emergency rooms and waiting areas) of health-care facilities, recirculating the air is an alternative to using large percentages of outside air for general ventilation. If air is recirculated, care must be taken to ensure that infection is not transmitted in the process. Although they can be expensive, HEPA filters, which remove at least 99.97% of particles greater than 0.3 microns in diameter, have been shown to be effective in clearing the air of Aspergillus spores, which are in the size range of 1.5-6 microns (44-46). The ability of HEPA filters to remove tuberculosis bacilli from the air has not been studied, but tuberculosis-containing droplet nuclei are approximately 1-5 microns in diameter, about the same size as Aspergillus spores; therefore, HEPA filters theoretically should remove infectious droplet nuclei. HEPA filters may be used in general-use areas, but should not be used to recirculate air from a tuberculosis isolation room back into the general circulation.
Applications in preventing nosocomial Aspergillus infection have included using HEPA filters in centralized air-handling units and using whole-wall HEPA filtration units with laminar air flow in patient rooms. In addition, portable HEPA filtration units, which filter the air in a room rather than filtering incoming air, have been effective in reducing nosocomial Aspergillus infections (45,46). Such units have been used as an interim solution for retrofitting old areas of hospitals. Although these units should not be substituted for other accepted tuberculosis isolation procedures, they may be useful in general-use areas (e.g., waiting rooms and emergency rooms) where an increased risk of exposure to tuberculosis may exist, but where other methods of air control may be inadequate.
When HEPA filters are to be installed at a facility, qualified personnel must assess and design the air-handling system to assure adequate supply and exhaust capacity. Proper installation, testing, and meticulous maintenance are critical if a HEPA filter system is used (40). Improper design, installation, or maintenance could permit infectious particles to circumvent filtration and escape into the ventilation system (42). The filters should be installed to prevent leakage between filter segments and between the filter bed and its frame. A regular maintenance program is required to monitor HEPA filters for possible leakage and for filter loading. A manometer should be installed in the filter system to provide an accurate means of objectively determining the need for filter replacement. Installation should allow for maintenance without contaminating the delivery system or the area served.

b. UV irradiation. The two most common types of UV installation are wall- or ceiling-mounted room fixtures for disinfecting the air within a room and irradiation units for disinfecting air in supply ducts. Wall- or ceiling-mounted fixtures act by disinfecting upper room air, and their effectiveness depends in part upon the mixing of air in the room. Organisms must be carried by air currents from the lower portion of the room to within the range of the UV radiation from the fixtures. These fixtures are most likely to be effective in locations where ceilings are high, but some protection may be afforded in areas with ceilings as low as 8 feet. To be maximally effective, lamps should be left on day and night (59).
Installing UV lamps in ventilation ducts may be beneficial in facilities that recirculate the air. UV exposure of air in ducts can be direct and more intense than that provided by room fixtures and may be effective in disinfecting exhaust air. Duct installations provide no protection against tuberculosis transmission to any person who is in the room with an infectious patient. As with HEPA filters, UV installations in ducts may be used in general-use areas but should not be used to recirculate air from a tuberculosis isolation room back into the general circulation.
The main concern about UV lamps is safety. Short-term overexposure to UV irradiation can cause keratoconjunctivitis and erythema of the skin (60). However, with proper installation and maintenance, the risk of short-term overexposure is low. Long-term exposure to UV irradiation is associated with increased risk of basal cell carcinoma of the skin and with cataracts (58). To prevent overexposure of health-care-facility personnel and patients, UV lamp configurations should meet applicable safety guidelines (60).
When UV lamps are used in air-supply ducts, a warning sign should be placed on doors that provide access to the ducts. Consultation from a qualified expert should be obtained before and after UV lamps are installed. After installation, the safety and effectiveness of UV irradiation must be checked with a UV meter, and fixtures adjusted as necessary. Bulbs should be periodically checked for dust, cleaned as needed, and replaced at the end of the rated life of the bulb. Maintenance personnel should be cautioned that fixtures should be turned off before inspection or servicing. A timing device that turns on a red light at the end of the rated life of the lamp is available to alert maintenance personnel that the lamp needs to be replaced.

c. Disposable PRs for filtration of inhaled air.

1.) For persons exposed to tuberculosis patients.
Appropriate masks, when worn by health-care providers or other persons who must share air space with a patient who has infectious tuberculosis, may provide additional protection against tuberculosis transmission. Standard surgical masks may not be effective in preventing inhalation of droplet nuclei (61), because some are not designed to provide a tight face seal and to filter out particulates in the droplet nucleus size range (1-5 microns). A better alternative is the disposable PR. PRs were originally developed for industrial use to protect workers. Although the appearance and comfort of PRs may be similar to that of cup-shaped surgical masks, they provide a better facial fit and better filtration capability. However, the efficacy of PRs in protecting susceptible persons from infection with tuberculosis has not been demonstrated.
PRs may be most beneficial in the following situations: a) when appropriate ventilation is not available and the patient's signs and symptoms suggest a high potential for infectiousness, b) when the patient is potentially infectious and is undergoing a procedure that is likely to produce bursts of aerosolized infectious particles or to result in copious coughing or sputum production, regardless of whether appropriate ventilation is in place, and c) when the patient is potentially infectious, has a productive cough, and is unable or unwilling to cover coughs.
Comfort influences the acceptability of PRs. Generally, the more efficient the PRs, the greater the work of breathing through them and the greater the perceived discomfort. A proper fit is vital to protect against inhaling droplet nuclei. When gaps are present, air will preferentially flow through the gaps, allowing the PR to function more like a funnel than a filter, thus providing virtually no protection (61).

2.) For tuberculosis patients.
Masks or PRs worn by patients with suspected or confirmed tuberculosis may be useful in selected circumstances (see section II.C.2.). PRs used by patients should be valveless. Some PRs have valves to release expired air, and these would not be appropriate for patients to use.

# E. Decontamination: Cleaning, Disinfecting, and Sterilizing

Guidelines for cleaning, disinfecting, and sterilizing equipment have been published (10,62,63).
The rationale for cleaning, disinfecting, or sterilizing patient-care equipment can be understood more readily if medical devices, equipment, and surgical materials are divided into three general categories (critical items, semi-critical items, and noncritical items) based on the potential risk of infection involved in their use.
Critical items are instruments such as needles, surgical instruments, cardiac catheters, or implants that are introduced directly into the bloodstream or into other normally sterile areas of the body. These items should be sterile at the time of use.
Semi-critical items are items such as noninvasive flexible and rigid fiberoptic endoscopes or bronchoscopes, endotracheal tubes, or anesthesia breathing circuits that may come in contact with mucous membranes but do not ordinarily penetrate body surfaces. Although sterilization is preferred for these instruments, a high-level disinfection procedure that destroys vegetative microorganisms, most fungal spores, tubercle bacilli, and small, nonlipid viruses may be used. Meticulous physical cleaning before sterilization or high-level disinfection is essential.
Noncritical items are those that either do not ordinarily touch the patient or touch only intact skin. Such items include crutches, bedboards, blood pressure cuffs, and various other medical accessories. These items do not transmit tuberculous infection. Consequently, washing with a detergent is usually sufficient.
Facility policies should identify whether cleaning, disinfecting, or sterilizing an item is indicated to decrease the risk of infection. Procedures for each item depend on its intended use. Generally, critical items should be sterilized, semi-critical items should be sterilized or cleaned with high-level disinfectants, and noncritical items need only be cleaned with detergents or low-level disinfectants. Decisions about decontamination processes should be based on the intended use of the item and not on the diagnosis of the patient for whom the item was used. Selection of chemical disinfectants depends on the intended use, the level of disinfection required, and the structure and material of the item to be disinfected.
Although microorganisms are normally found on walls, floors, and other surfaces, these environmental surfaces are rarely associated with transmission of infections to patients or health-care-facility personnel. This is particularly true with organisms such as tubercle bacilli, which generally require inhalation by the host for infection to occur. Therefore, extraordinary attempts to disinfect or sterilize environmental surfaces are rarely indicated.

Health-care facilities at risk for tuberculosis should maintain active surveillance for tuberculosis among patients and health-care-facility personnel and for skin-test conversions among health-care-facility personnel. When tuberculosis is suspected or diagnosed, public health authorities should be notified so that an appropriate contact investigation can be performed. Data on the occurrence of tuberculosis and skin-test conversions among patients and health-care-facility personnel should be collected and analyzed to estimate the risk of tuberculosis transmission in the facility and to evaluate the effectiveness of infection-control and screening practices.

At the time of employment, all health-care-facility personnel, including those with a history of Bacillus of Calmette and Guerin (BCG) vaccination, should receive a Mantoux tuberculin skin test unless a previously positive reaction, completion of adequate preventive therapy, or completion of adequate therapy for active disease can be documented. Initial and follow-up tuberculin skin tests should be administered and interpreted according to current guidelines (5,11). Health-care-facility personnel with a documented history of a positive tuberculin test, adequate treatment for disease, or adequate preventive therapy for infection should be exempt from further screening unless they develop symptoms suggestive of tuberculosis.

Periodic retesting of PPD-negative health-care workers should be conducted to identify persons whose skin tests convert to positive (11). In general, the frequency of repeat testing should be based on the risk of developing new infection. Health-care-facility workers who may be frequently exposed to patients with tuberculosis or who are involved with potentially high-risk procedures (e.g., bronchoscopy, sputum induction, or aerosol treatments given to patients who may have tuberculosis) should be retested at least every 6 months. Health-care-facility personnel in other areas should be retested annually. Data on skin-test conversions should be reviewed periodically so that the risk of acquiring new infection may be estimated for each area of the facility. On the basis of this analysis, the frequency of retesting may be altered accordingly.

j. Evaluation of health-care-facility personnel after unprotected exposure to tuberculosis

In addition to periodic screening, health-care-facility personnel and patients should be evaluated if they have been exposed to a potentially infectious tuberculosis patient for whom the infection-control procedures outlined in this document have not been taken. Unless a negative skin test has been documented within the preceding 3 months, each exposed health-care-facility worker (except those already known to be positive reactors) should receive a Mantoux tuberculin skin test as soon as possible after exposure and should be managed in the same way as other contacts (5). If the initial skin test is negative, the test should be repeated 12 weeks after the exposure ended.
Exposed persons with skin-test reactions greater than or equal to 5 mm or with symptoms suggestive of tuberculosis should receive appropriate evaluation and follow-up.

4. Home-health services

--For persons visiting the home of patients with suspected or confirmed infectious tuberculosis, precautions may be necessary to prevent exposure to air containing droplet nuclei until infectiousness has been eliminated by chemotherapy. These precautions include instructing patients to cover coughs and sneezes. The worker should wear a PR when entering the home or the patient's room.

--Respiratory precautions in the home may be discontinued when the patient is improving clinically, cough has decreased, and the number of organisms in the sputum smear is decreasing. Usually this occurs within 2-3 weeks after tuberculosis medications are begun. Failure to take medications as prescribed and the presence of drug-resistant disease are the two most common reasons for a patient's failure to improve clinically. Home health-care personnel can assist in preventing tuberculosis transmission by educating the patient about the importance of taking medications as prescribed (unless adverse effects are seen). Home health-care personnel and patients who are at risk for contracting active tuberculosis should be reminded periodically of the importance of having pulmonary symptoms evaluated.

--Close contacts of any patient with active infectious tuberculosis should be evaluated for tuberculous infection and managed according to CDC and American Thoracic Society guidelines (5,65).
# IV. Research Needs
Additional research is needed regarding the airborne transmission of tuberculosis, including the following: a) better quantifying the risk of tuberculosis transmission in a variety of health-care settings, b) assessing the acceptability, efficacy, adverse impact, and cost-effectiveness of currently available methods for preventing transmission, and c) developing better methods for preventing transmission. These needs also extend to other infections transmitted by the airborne route. Currently, large numbers of immunosuppressed persons, including patients infected with HIV, are being brought together in health-care settings in which procedures are used that induce the generation of droplet nuclei. Research is needed to fill many of the gaps in current knowledge and to lead to new and better guidelines for protecting patients and personnel in these settings.
# V. Glossary of Abbreviations

AFB Acid-fast bacilli--organisms that retain certain stains, even after being washed with acid alcohol. Most are mycobacteria. When acid-fast bacilli are seen on a stained smear of sputum or another clinical specimen, a diagnosis of tuberculosis should be considered.
AIDS Acquired immunodeficiency syndrome--an advanced stage of disease caused by infection with the human immunodeficiency virus (HIV). A patient with AIDS is especially susceptible to other infections.
AP Aerosolized pentamidine--drug treatment given to patients with HIV infection to treat or to prevent Pneumocystis carinii pneumonia. The drug is put into solution, the solution is aerosolized, and the patient inhales the aerosol.
"id": "44ddc78dae5b5cde3b936167085a641b24f326ee",
"source": "cdc",
"title": "None",
"url": "None"
} |
This document is in the public domain and may be freely copied or reprinted. Disclaimer: Mention of any company, product, policy, or the inclusion of any reference does not constitute endorsement by NIOSH.

# Foreword
The Occupational Safety and Health Act of 1970 assures so far as possible every working man and woman in the Nation safe and healthful working conditions. The Act charges the National Institute for Occupational Safety and Health (NIOSH) with conducting research and making science-based recommendations to prevent work-related illness, injury, disability, and death.
On October 8, 2001, the President of the United States established by executive order the Office of Homeland Security (OHS), which is mandated "to develop and coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks." In January 2002, the OHS formed the Interagency Workgroup on Building Air Protection under the Medical and Public Health Preparedness Policy Coordinating Committee of the OHS. The workgroup included representatives from agencies throughout the Federal Government, including NIOSH, which is part of the Department of Health and Human Services, Centers for Disease Control and Prevention. In May 2002, NIOSH, in cooperation with this workgroup, published Guidance for Protecting Building Environments from Airborne Chemical, Biological, and Radiological Attacks. This document provided building owners, managers, and maintenance personnel with recommendations to protect public, private, and government buildings from chemical, biological, or radiological attacks.
With U.S. workers and workplaces facing potential hazards associated with chemical, biological, or radiological terrorism, the occupational health and safety dimension of homeland security is increasingly evident. As with most workplace hazards, preventive steps can reduce the likelihood and mitigate the impact of terrorist threats. This publication is the second NIOSH Guidance document aimed at protecting workplaces from these new threats. It provides detailed, comprehensive information on selecting and using filtration and air-cleaning systems in an efficient and cost-effective manner. Filtration systems can play a major role in protecting both buildings and their occupants.
Prevention is the cornerstone of public and occupational health. This document provides preventive measures that building owners and managers can implement to protect building air environments from a terrorist release of chemical, biological, or radiological contaminants. These recommendations, focusing on filtration and air cleaning, are part of the process to develop more comprehensive guidance. Working with partners in the public and private sectors, NIOSH will continue to build on this effort.

# Glossary

airborne contaminants: Gases, vapors, or aerosols.
arrestance: Ability of a filter to capture a mass fraction of coarse test dust.
bioaerosol: A suspension of particles of biological origin.
breakthrough concentration: Downstream contaminant concentration above which a sorbent bed is no longer providing the intended protection against gases and vapors.
breakthrough time: Elapsed time between the initial contact of the toxic agent at a reported challenge concentration on the upstream surface of the sorbent bed and the breakthrough concentration on the downstream side.
challenge concentration: Airborne concentration of the hazardous agent entering the sorbent.
channeling: Air passing through portions of the sorbent bed that offer low airflow resistance due to non-uniform packing, irregular particle sizes, etc.
chemisorption: Sorbent capture mechanism dependent on chemically active medium (involves electron transfer).
collection efficiency: Fraction of entering particles that are retained by the filter (based on particle count or mass).
composite efficiency value: Descriptive rating value that combines a filter's efficiencies for different particle sizes, measured when clean and when incrementally loaded.
critical bed depth: See: mass transfer zone.
diffusion: Particle colliding with a fiber due to random (Brownian) motion.
dust spot efficiency: Measurement of a filter's ability to remove large particles (the staining portion of atmospheric dust).
dust holding capacity: Measurement of the total amount of dust a filter is able to hold during a dust-loading test.
electrostatic attraction: Small particles attracted to fibers and, after contact, retained there by a weak electrostatic force.

electrostatic filter: A filter that uses electrostatically enhanced fibers to attract and retain particles.
filter bypass: Airflow around a filter or through an unintended path.
filter face velocity: Air stream velocity just prior to entering the filter.
filter performance: A description of a filter's collection efficiency, pressure drop, and dust-holding capacity over time.
gas: Formless fluid that tends to occupy an entire space uniformly at ordinary temperatures.
gas-phase filter: Composed of sorbent medium, e.g., natural zeolite, alumina-activated carbon, specialty carbons, synthetic zeolite, polymers.
impaction: Particle colliding with a fiber due to particle inertia.
interception: Particle colliding with a fiber due to particle size.
large particle: Particles greater than 1 micrometer in diameter.
life-cycle cost: Sum of all filter costs from initial investment to disposal and replacement, including energy and maintenance costs.

This guidance is intended primarily for public, commercial, and institutional buildings (e.g., malls, coliseums). While many aspects of this document may apply to residential buildings, it is not intended to address filtration questions pertinent to housing because of their different function, design, construction, and operational characteristics. Likewise, certain types of higher risk or special use facilities-such as industrial facilities, military facilities, selected laboratories, and hospital isolation areas-require special considerations that are beyond the scope of this guide.

The likelihood of a specific building being targeted for terrorist activity is difficult to predict. As such, there is no specific formula that will determine a certain building's level of risk. If you own or manage a building, you should seek appropriate assistance as described in this document to decide how to reduce your building's risk from a CBR attack and how to mitigate the effects if such an attack should occur. References on conducting a threat assessment can be found at the end of the NIOSH document Guidance for Protecting Building Environments from Airborne Chemical, Biological, or Radiological Attacks.
After assessing your building's risk, you may wish to consider ways to enhance your filtration system. This document will help you make informed decisions about selecting, installing, and maintaining enhanced air-filtration and air-cleaning systems-important options in providing building protection from a CBR attack. The given recommendations are not intended to be minimum requirements that should be implemented for every building. Rather, they will guide your decision-making effort about the appropriate protective measures to implement in your building. The decision to enhance filtration in a specific building should be based on the perceived risk associated with that building and its tenants, its engineering and architectural applicability and feasibility, and the cost.
While no building can be fully protected from a determined group or individual intent on releasing a CBR agent, effective air filtration and air cleaning can help to limit the number and extent of injuries or fatalities and make subsequent decontamination efforts easier. Measures outlined in the current document also provide the side benefits of improved HVAC efficiency: increased building cleanliness, limited effects from accidental releases, and generally improved indoor-air quality. These measures may also prevent cases of respiratory infection and reduce exacerbations of asthma and allergies among building occupants. Together, these accrued benefits may improve your workforce productivity.
# INTRODUCTION
Air-filtration and air-cleaning systems can remove a variety of contaminants from a building's airborne environment. The effectiveness of a particular filter design or air-cleaning media will depend upon the nature of the contaminant. In this document, air filtration refers to removal of aerosol contaminants from the air, and air cleaning refers to the removal of gases or vapors from the air. Airborne contaminants are gases, vapors, or aerosols (small solid and liquid particles). It is important to realize that sorbents collect gases and vapors, but not aerosols; conversely, particulate filters remove aerosols, but not gases and vapors. The ability of a given sorbent to remove a contaminant depends upon the characteristics of the specific gas or vapor and other related factors. The efficiency of a particulate filter to remove aerosols depends upon the size of the particles, in combination with the type of filter used and HVAC operating conditions. Larger-sized aerosols can be collected on lower-efficiency filters, but the effective removal of a small-sized aerosol requires a higher-efficiency filter. Discussions in later sections of this document provide guidance on selecting the proper filters and/or air-cleaning media for specific types of air contaminants.
In addition to proper filter or sorbent selection, several issues must be considered before installing or upgrading filtration systems:
- Filter bypass is a common problem found in many HVAC filtration systems. Filter bypass occurs when air-rather than moving through the filter-goes around it, decreasing collection efficiency and defeating the intended purpose of the filtration system. Filter bypass is often caused by poorly fitting filters, poor sealing of filters in their framing systems, missing filter panels, or leaks and openings in the air-handling unit between the filter bank and blower. By simply improving filter efficiency without addressing filter bypass, you provide little if any benefit.
- Cost is another issue affected by HVAC filtration systems. Lifecycle cost should be considered (initial installation, replacement, operating, maintenance, etc.). Not only are higher-efficiency filters and sorbent filters more expensive than the commonly used HVAC system filters but also fan units may need to be changed to handle the increased pressure drop associated with the upgraded filtration systems. Although improved filtration will normally come at a higher cost, you can partially offset many of these costs by the accrued benefits, such as cleaner and more efficient HVAC components and improved indoor environmental quality.
- The envelope of your building matters. Filtration and air cleaning affect only the air that passes through the filtration and air-cleaning device, whether it is outdoor air, re-circulated air, or a mixture of the two. Outside building walls in residential, commercial, and institutional buildings are quite leaky, and the effect from negative indoor air pressures (relative to the outdoors) allows significant quantities of unfiltered air to infiltrate the building envelope.
Field studies have shown that, unless specific measures are taken to reduce infiltration, as much air may enter a building through infiltration (unfiltered) as through the mechanical ventilation (filtered) system. Therefore, you cannot expect filtration alone to protect your building from an outdoor CBR release. This is particularly so for systems in which no make-up air or inadequate overpressure is present. Instead, you must consider air filtration in combination with other steps, such as building pressurization and envelope air tightness, to increase the likelihood that the air entering the building actually passes through the filtration and air-cleaning systems.
CBR agents may travel in the air as a gas or an aerosol. Chemical warfare agents with relatively high vapor pressure are gaseous, while many other chemical warfare agents could potentially exist in either state. Biological and radiological agents are largely aerosols. A diagram of the relative sizes of common air contaminants (e.g., tobacco smoke, pollen, dust) is shown in Figure 1. CBR agents could potentially enter a building through either an internal or external release. Some health consequences from CBR agents are immediate, while others may take much longer to appear. CBR agents (e.g., arsine, nitrogen mustard gas, anthrax, radiation from a dirty bomb) can enter the body through a number of routes including inhalation, skin absorption, contact with eyes or mucous membranes, and ingestion. The amount of a CBR agent required to cause specific symptoms varies among agents; however, these agents are generally much more toxic than common indoor air pollutants. In many cases, exposure to extremely small quantities may be lethal. Symptoms are markedly different for the different classes of agents (chemical, biological, and radiological).
# FILTRATION AND AIR-CLEANING PRINCIPLES
# Particulate Air Filtration
Particulate air filters are classified as either mechanical filters or electrostatic filters (electrostatically enhanced filters). Although there are many important performance differences between the two types of filters, both are fibrous media and used extensively in HVAC systems to remove particles, including biological materials, from the air. A fibrous filter is an assembly of fibers that are randomly laid perpendicular to the airflow (Figure 2). The fibers may range in size from less than 1 µm to greater than 50 µm in diameter. Filter packing density may range from 1% to 30%. Fibers are made from cotton, fiberglass, polyester, polypropylene, or numerous other materials.
Fibrous filters of different designs are used for various applications. Flat-panel filters contain all of the media in the same plane. This design keeps the filter face velocity and the media velocity roughly the same. When pleated filters are used, additional filter media are added to reduce the air velocity through the filter media. This enables the filter to increase collection efficiency for a given pressure drop. Pleated filters can run the range of efficiencies from a minimum efficiency reporting value (MERV) of 6 up to and including high-efficiency particulate air (HEPA) filters. With pocket filters, air flows through small pockets or bags constructed of the filter media. These filters can consist of a single bag or have multiple pockets, and an increased number of pockets increases the filter media surface area. As in pleated filters, the increased surface area of the pocket filter reduces the velocity of the airflow through the filter media, allowing increased collection efficiency for a given pressure drop.
Renewable filters are typically low-efficiency media that are held on rollers. As the filter loads, the media are advanced or indexed, providing the HVAC system with a new filter.

Four different collection mechanisms govern particulate air filter performance: inertial impaction, interception, diffusion, and electrostatic attraction (Figure 3). The first three of these mechanisms apply mainly to mechanical filters and are influenced by particle size.
- Impaction occurs when a particle traveling in the air stream and passing around a fiber, deviates from the air stream (due to particle inertia) and collides with a fiber.
- Interception occurs when a large particle, because of its size, collides with a fiber in the filter that the air stream is passing through.
- Diffusion occurs when the random (Brownian) motion of a particle causes that particle to contact a fiber.

- Electrostatic attraction, the fourth mechanism, plays a very minor role in mechanical filtration. After fiber contact is made, smaller particles are retained on the fibers by a weak electrostatic force.
Impaction and interception are the dominant collection mechanisms for particles greater than 0.2 µm, and diffusion is dominant for particles less than 0.2 µm. The combined effect of these three collection mechanisms results in the classic collection efficiency curve, shown in Figure 4. Electrostatic filters contain electrostatically enhanced fibers, which actually attract the particles to the fibers, in addition to retaining them. Electrostatic filters rely on charged fibers to dramatically increase collection efficiency for a given pressure drop across the filter.
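To make the interplay of these mechanisms concrete, the minimal Python sketch below combines a diffusion term that weakens with particle size and an inertial term that strengthens with it. The functional forms and constants are invented for illustration only (they come from neither this document nor any test standard), but the resulting dip near 0.2-0.3 µm mirrors the most-penetrating-particle-size behavior described above.

```python
# Illustrative sketch of the classic efficiency curve: diffusion dominates for
# small particles, impaction/interception for large ones, and their combination
# produces a dip (the most penetrating particle size) near 0.2-0.3 um.
# The functional forms and constants here are invented for illustration only.

def collection_efficiency(d_um: float) -> float:
    """Combined single-pass collection efficiency for diameter d_um (um)."""
    e_diffusion = min(1.0, 0.9 * (0.05 / d_um) ** 0.6)   # falls as size grows
    e_inertial = min(1.0, 0.9 * (d_um / 2.0) ** 0.8)     # rises with size
    # A particle penetrates only if it evades every mechanism.
    return 1.0 - (1.0 - e_diffusion) * (1.0 - e_inertial)

for d in (0.01, 0.05, 0.1, 0.3, 1.0, 5.0):
    print(f"{d:5.2f} um -> {collection_efficiency(d):6.1%}")
```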
As mechanical filters load with particles over time, their collection efficiency and pressure drop typically increase. Eventually, the increased pressure drop significantly inhibits airflow, and the filters must be replaced. For this reason, pressure drop across mechanical filters is often monitored because it indicates when to replace filters.
Conversely, electrostatic filters, which are composed of polarized fibers, may lose their collection efficiency over time or when exposed to certain chemicals, aerosols, or high relative humidities. Pressure drop in an electrostatic filter generally increases at a slower rate than it does in a mechanical filter of similar efficiency. Thus, unlike the mechanical filter, pressure drop for the electrostatic filter is a poor indicator of the need to change filters. When selecting an HVAC filter, you should keep these differences between mechanical and electrostatic filters in mind because they will have an impact on your filter's performance (collection efficiency over time), as well as on maintenance requirements (change-out schedules).
Electrostatically enhanced filters are different from electrostatic precipitators, also known as electronic air cleaners. Electrostatic precipitators require power and charged plates to attract and capture particles.
Air filters are commonly described and rated based upon their collection efficiency, pressure drop (or airflow resistance), and particulate-holding capacity. Two filter test methods are currently used in the United States:
- American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 52.1-1992
- ASHRAE Standard 52.2-1999
Standard 52.1-1992 measures arrestance, dust spot efficiency, and dust holding capacity. Arrestance means a filter's ability to capture a mass fraction of coarse test dust and is suited for describing low- and medium-efficiency filters. Be aware that arrestance values may be high, even for low-efficiency filters, and do not adequately indicate the effectiveness of certain filters for CBR protection. Dust spot efficiency measures a filter's ability to remove large particles, those that tend to soil building interiors. Dust holding capacity is a measure of the total amount of dust a filter is able to hold during a dust-loading test.
ASHRAE Standard 52.2-1999 measures particle size efficiency (PSE). This newer standard is a more descriptive test, which quantifies filtration efficiency in different particle size ranges for a clean and incrementally loaded filter to provide a composite efficiency value. It gives a better determination of a filter's effectiveness to capture solid particulate as opposed to liquid aerosols. The 1999 standard rates particle-size efficiency results as a MERV between 1 and 20. A higher MERV indicates a more efficient filter. In addition, Standard 52.2 provides a table (see Table 1) showing minimum PSE in three size ranges for each of the MERV numbers, 1 through 16. Thus, if you know the size of your contaminant, you can identify an appropriate filter that has the desired PSE for that particular particle size. Figure 5 shows actual test results for a MERV 9 filter and the corresponding filter collection efficiency increase due to loading.
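As a reading aid for Table 1, the hypothetical helper below (a convenience of ours, not part of Standard 52.2) maps a contaminant's particle size to the standard's E1/E2/E3 particle-size-efficiency bins, identifying which minimum PSE column to compare across candidate MERV ratings.

```python
# Hypothetical helper for reading Table 1: given the particle size of concern,
# name the ASHRAE 52.2 particle-size-efficiency bin (E1/E2/E3) whose minimum
# PSE should be compared across candidate MERV ratings.

def pse_bin(d_um: float) -> str:
    """Return the ASHRAE 52.2 size bin containing diameter d_um (micrometers)."""
    if 0.3 <= d_um < 1.0:
        return "E1: 0.3-1.0 um"
    if 1.0 <= d_um < 3.0:
        return "E2: 1.0-3.0 um"
    if 3.0 <= d_um <= 10.0:
        return "E3: 3.0-10.0 um"
    return "outside the 52.2 test range (consider HEPA-rated filtration)"

# Many CBR aerosols fall in the 1-10 um range, i.e., bins E2 and E3:
print(pse_bin(2.0))   # E2: 1.0-3.0 um
print(pse_bin(0.1))   # outside the 52.2 test range ...
```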
# Gas-Phase Air Cleaning
Some HVAC systems may be equipped with sorbent filters, designed to remove pollutant gases and vapors from the building environment.
Sorbents use one of two mechanisms for capturing and controlling gas-phase air contaminants-physical adsorption and chemisorption. Both capture mechanisms remove specific types of gas-phase contaminants from indoor air. Unlike particulate filters, sorbents cover a wide range of highly porous materials (Figure 6), varying from simple clays and carbons to complexly engineered polymers. Many sorbents-not including those that are chemically active-can be regenerated by application of heat or other processes. Understanding the precise removal mechanism for gases and vapors is often difficult due to the nature of the adsorbent and the processes involved. While knowledge of adsorption equilibrium helps in understanding vapor protection, sorbent performance depends on such properties as mass transfer, chemical reaction rates, and chemical reaction capacity. A more thorough discussion of gas-phase aircleaning principles is provided in Appendix C of this document. Some of the most important parameters of gas-phase air cleaning include the following:
- BREAKTHROUGH CONCENTRATION: the downstream contaminant concentration, above which the sorbent is considered to be performing inadequately. Breakthrough concentration indicates the agent has broken through the sorbent, which is no longer giving the intended protection. This parameter is a function of loading history, relative humidity, and other factors.
- BREAKTHROUGH TIME: the elapsed time between the initial contact of the toxic agent at a reported challenge concentration on the upstream surface of the sorbent bed, and the breakthrough concentration on the downstream side of the sorbent bed.
- CHALLENGE CONCENTRATION: the airborne concentration of the hazardous agent entering the sorbent.
- RESIDENCE TIME: the length of time that the hazardous agent spends in contact with the sorbent. This term is generally used in the context of superficial residence time, which is calculated on the basis of the adsorbent bed volume and the volumetric flow rate.
- MASS TRANSFER ZONE OR CRITICAL BED DEPTH: interchangeably used terms, which refer to the adsorbent bed depth required to reduce the chemical vapor challenge to the breakthrough concentration. When applied to the challenge chemicals that are removed by chemical reaction, mass transfer is not a precise descriptor, but is often used in that context. The portion of the adsorbent bed not included in the mass transfer zone is often termed the capacity zone.
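The superficial residence time defined above is simple arithmetic: bed volume divided by volumetric flow rate. The short sketch below shows the calculation with hypothetical numbers.

```python
# Superficial residence time, per the definition above: adsorbent bed volume
# divided by volumetric flow rate. All numbers are hypothetical examples.

def superficial_residence_time_s(face_area_m2: float, bed_depth_m: float,
                                 flow_m3_per_s: float) -> float:
    """Residence time (s) = bed volume (m^3) / volumetric flow rate (m^3/s)."""
    return (face_area_m2 * bed_depth_m) / flow_m3_per_s

# A 0.37 m^2 bed face, 5 cm deep, passing 0.5 m^3/s of air:
print(f"{superficial_residence_time_s(0.37, 0.05, 0.5) * 1000:.0f} ms")  # ~37 ms
```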
# RECOMMENDATIONS REGARDING FILTER AND SORBENT SELECTION, OPERATIONS, UPGRADE, AND MAINTENANCE
Before selecting a filtration and air-cleaning strategy that includes a potential upgrade in response to perceived types of threats, develop an understanding of your building and its HVAC system. A vital part of this effort will be to evaluate your total HVAC system thoroughly. Assess how your HVAC system is designed and intended to operate and compare that to how it currently operates. In large buildings, this evaluation is likely to involve many different air-handling units and system components.
Initially, you will need to answer several questions. Many of these questions may be difficult to answer without the assistance of qualified professionals (security specialists, HVAC engineers, industrial hygienists, etc.) to help you with threat assessments, ventilation/filtration, and indoor air quality. The answers to these questions, however, will guide you in making your decisions about what types of filters and/or sorbents should be installed in your HVAC system, how efficient those filters and/or sorbents must be, and what procedures you should develop to maintain the system. Because of the wide range of building and HVAC system designs, no single, off-the-shelf system can be installed in all buildings to protect against all CBR agents. Some system components could possibly be used in a large number of buildings; however, these systems should be designed on a case-by-case basis for each building and application. Some of the important questions to ask include:
- How are the filters in each system held in place and how are they sealed? Are the filters simply held in place by the negative pressure generated from downstream fans? Do the filter frames (the part of the filter that holds the filter media) provide for an airtight, leak-proof seal with the filter rack system (the part of the HVAC system that holds the filters in place)?
- What types of air contaminants are of concern? Are the air contaminants particulate, gaseous, or both? Are they TICs, toxic industrial materials (TIMs), or military agents? How toxic are they? Consider checking with your local emergency or disaster planning body to determine if there are large quantities of TICs or TIMs near your location or if there are specific concerns about military, chemical, or biologic agents.
- How might the agents enter your building? Are they likely to be released internally or externally to the building envelope, and how can various release scenarios best be addressed? The Environmental Protection Agency (EPA) and the Defense Advanced Research Projects Agency (DARPA) are currently working in this area, and several recent texts discuss various release scenarios.
- What is needed? Are filters or sorbents needed to improve current indoor air quality, provide protection from an accidental or intentional release at a nearby chemical processing plant, or provide protection from a potential terrorist attack using CBR agents?
- How clean does the air need to be for the occupants, and how much can be spent to achieve that desired level of air cleanliness? What are the total costs and benefits associated with the various levels of filtration?
- What are the current system capacities (fans, space for filters, etc.) and what is desired? What are the minimum airflow needs for the building?
- Who will maintain these systems and what are their capabilities?
It is important to recognize that improving building protection is not an all or nothing proposition. Because many CBR agents are extremely toxic, high contaminant removal efficiencies are needed; however, many complex factors can influence the human impact of a CBR release (e.g., agent toxicity, physical and chemical properties, concentration, wind conditions, means of delivery, release location). Incremental improvements to the removal efficiency of a filtration or air-cleaning system are likely to lessen the impact of a CBR attack to a building environment and its occupants while generally improving indoor air quality.
# Particulate Filter Selection, Installation, Use, and Upgrades
Consider system performance, filter efficiencies, and particle size of interest.
HVAC filters are critical system components. During the selection process, you should keep their importance in mind when thinking about filtration efficiency, flow rate, and pressure drop. Base your particulate filter selection on air contaminant sizes, ASHRAE filter efficiency, and performance of the entire filtration system (Table 1 and Figure 7). Filter banks often consist of two or more sets of filters; therefore, you should consider how the entire filtration system will perform-not just a single filter. The outermost filters are coarse, low-efficiency filters (pre-filters), which remove large particles and debris while protecting the blowers and other mechanical components of the ventilation system. These relatively inexpensive pre-filters are not effective for removing submicrometer particles. Therefore, the performance of the additional downstream filters is critical. These may consist of a single or multiple filters to remove submicrometer particles. As shown in Figure 4, particles in the 0.1 to 0.3 µm size range are the most difficult to remove from the air stream and require high-efficiency filters.
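Because stage penetrations multiply, whole-bank performance can be estimated from the stage efficiencies at the particle size of interest. The sketch below is a minimal illustration with hypothetical efficiencies; it assumes no filter bypass, and real efficiencies vary with particle size (Figure 4).

```python
# Why the whole filter bank matters: penetrations of filters in series
# multiply, so overall efficiency is 1 - product of (1 - efficiency).
# Assumes no filter bypass; the stage efficiencies are hypothetical.

def bank_efficiency(stage_efficiencies) -> float:
    penetration = 1.0
    for e in stage_efficiencies:
        penetration *= (1.0 - e)
    return 1.0 - penetration

# A 30% pre-filter followed by a 95% final filter at the size of interest:
print(f"{bank_efficiency([0.30, 0.95]):.4f}")  # 0.9650
```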
Chemical and biological aerosol dispersions (particulates) are frequently in the 1- to 10-µm range, and HEPA filters provide efficiencies greater than 99.9999% in that particle size range, assuming there is no leakage around the filter and no damage to the fragile pleated media. This high level of filtration efficiency provides protection against most aerosol threats. Chemical aerosols removed by particulate filters include tear gases and low volatility nerve agents, such as VX;* however, a vapor component of these agents could still exist. Biological agents and radioactive particulates are efficiently removed by HEPA filters.
*Military designation.
Figure 7. Comparison of collection efficiency and particle size for different filters.
# Understand performance differences between filter types.
When selecting particulate air filters, you must choose between mechanical or electrostatic filters. Keep in mind the already mentioned differences between the two main filter classifications, such as collection mechanism and pressure drop differences. Liquid aerosols are known to cause great reductions in the collection efficiencies of many electrostatic filters, and some studies have shown that ambient aerosols may also degrade performance. The degradation is partially related to the stability of the electrostatic charge. Pressure drop in an electrostatic filter (having less packing density) generally increases at a much slower rate than that of a similar efficiency mechanical filter. Pressure drop is frequently used in mechanical filters to determine filter change out, but is an unreliable indicator for change-out of electrostatic filters. Other measures, such as collection efficiency or time of use, are more suited for determining electrostatic filter change-out schedules.
Electrostatic filters may be an acceptable choice for some building protection applications, but you should recognize that there are limitations and compromises associated with these filters. The filter efficiency rating given by the manufacturer is likely to be substantially higher than what the filter will actually achieve when used. Require your filter supplier to state the type of media used in the filters of interest and provide data showing how these filters perform over time. This will help you to determine whether these lower cost filters will meet your building's air filtration needs.
# Consider total life-cycle costs.
- Filter cost, always a consideration, is directly related to efficiency, duration of effectiveness, and collection mechanism. Mechanical filters (pleated glass fiber) are quite likely to be more expensive than electrostatic (polymeric media) filters, but both may have the same initial fractional collection efficiency. However, over time the two types of filters will perform differently.
- Total life-cycle cost (i.e., energy costs, maintenance, disposal, replacement, etc.) is another consideration, which includes more than just the initial purchase price. You will minimize total cost by selecting the optimum change-out schedule, based on filter life and power requirements (Figure 8). Multiple filters can extend the life of the more expensive, high-efficiency filters. For example, one or more low-efficiency, disposable pre-filters, installed upstream of a HEPA filter, can extend the HEPA filter life by at least 25%. If the disposable filter is followed by a 90% extended surface filter, the life of the HEPA filter can be extended by almost 900%. However, you should not assume that the best way to proceed is to use a pre-filter. First, you should weigh the cost of pre-filter replacement and pressure drop against the extended life of the primary filter. You may find that for the same overall efficiency, it is more cost-effective to avoid pre-filters and, instead, to change the primary filters more frequently. Make this decision by weighing the operating cost analysis against the capture efficiencies provided by different systems, as in the sketch below.
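As a minimal illustration of that operating-cost analysis, the sketch below compares annual cost for two change-out schedules using the standard fan-power relation P = Q × ΔP / η. Every price, flow, and pressure value is hypothetical; substitute data for your own system.

```python
# Sketch of the Figure 8 trade-off: frequent change-outs raise replacement
# cost but lower the average pressure drop, and thus fan energy. All prices,
# flows, and pressure values below are hypothetical.

def annual_cost_usd(filter_price, changes_per_year, avg_dp_pa,
                    flow_m3_s=5.0, fan_eff=0.6, hours=8760, usd_per_kwh=0.10):
    fan_kw = flow_m3_s * avg_dp_pa / fan_eff / 1000.0  # P = Q * dP / eta
    return filter_price * changes_per_year + fan_kw * hours * usd_per_kwh

print(annual_cost_usd(80.0, changes_per_year=4, avg_dp_pa=180))  # ~1634 USD
print(annual_cost_usd(80.0, changes_per_year=1, avg_dp_pa=260))  # ~1978 USD
```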
# Consider all of the elements affected by filter upgrades.
Upgrading your filtration system may require significant changes in the mechanical components of your HVAC system, depending upon the component capacities. You should consider both the direct and indirect impact of upgrading your filtration system. With lower-efficiency filters, the final (loaded or dirty) pressure drop is often in the range of 125 to 250 Pascals (Pa) (0.5 to 1.0 in. water gauge).
Higher quality filters may have an initial pressure drop higher than 125 Pa (0.5 in. water gauge) and a final pressure drop of as high as 325 Pa (1.5 in. water gauge). You should consider the capacity of your existing HVAC system. Many systems (e.g., light-commercial, rooftop package units) do not have the fan capacity to handle the higher pressure drop associated with higher-efficiency filters. If the pressure drop of the filters installed in the system is too high, the HVAC system may be unable to deliver the designed volume of air to the occupied spaces. Higher capacity fans may be needed to overcome the increased pressure drop.

To be most effective, your filters should be used at their rated pressure drop and face velocity. Filter face velocity refers to the air stream velocity entering the filter. The rated pressure drop for each filter is given for a specific face velocity (typically 1.3 to 2.5 m/s or 250 to 500 fpm), and the pressure drop increases with airflow velocity. If you upgrade to higher-efficiency filters, the size and shape of your filter rack may need to be changed, in part, to assure appropriate face velocities. High-efficiency filters may experience a significant drop in collection efficiency if they are operated at too high a face velocity (Figure 9).
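A face-velocity check is a one-line division of airflow by filter face area. The hypothetical sketch below flags a filter bank that is undersized for its airflow against the rated range quoted above.

```python
# Quick face-velocity check against the rated range quoted above (typically
# 1.3-2.5 m/s). The helper and the example filter bank are hypothetical.

RATED_RANGE_M_S = (1.3, 2.5)

def face_velocity(flow_m3_s: float, face_area_m2: float) -> float:
    return flow_m3_s / face_area_m2

# 2.0 m^3/s through two 610 x 610 mm filters (about 0.744 m^2 of face area):
v = face_velocity(2.0, 0.744)
status = ("OK" if RATED_RANGE_M_S[0] <= v <= RATED_RANGE_M_S[1]
          else "too fast: enlarge the filter rack or add filter area")
print(f"{v:.2f} m/s -> {status}")   # 2.69 m/s -> too fast ...
```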
# Conduct periodic quantitative performance evaluations.
You should use a quantitative evaluation to determine the total system efficiency. You should perform the evaluation for various particle sizes and at the appropriate system flow rate. You can use your evaluation of the results to implement further modifications (e.g., improved filter seals).

Building owners and managers who cannot feasibly upgrade to traditional high-efficiency mechanical filters may consider extended surface or electrostatic filter systems as an attractive low-cost alternative. Energy costs are minimized by the relatively low pressure drop across these filters, and costly HVAC upgrades (modifications that may be required for higher-efficiency mechanical filters) are frequently avoided. Used properly, both types of filters can provide increased protection to a building and its occupants. However, you should closely monitor the filtration efficiency of electrostatic filters, which may substantially degrade with time.
Figure 9. Effect of face velocity on the collection efficiency and the most penetrating particle size (MPPS).
# Sorbent Selection, Installation, and Use
Choosing the appropriate sorbent or sorbents for an airborne contaminant is a complex decision, and you, in consultation with a qualified professional, should consider many factors. Before proceeding, seriously consider the issues associated with the installation of sorbent filters for the removal of gaseous contaminants from your building's air, as this is a less common practice than the installation of particulate filtration. Sorbent filters should be located downstream of the particulate filters. This arrangement will allow the sorbent to collect vapors, generated from liquid aerosols that collect on the particulate filter, and reduce the amount of particulate reaching the sorbent. Gas-phase contaminant removal can potentially be a challenging and costly undertaking; therefore, different factors must be addressed.
# Understand sorbent properties and their limitations.
Sorbents have different affinities, removal efficiencies, and saturation points for different chemical agents, which you should consider when selecting a sorbent. The U.S. Environmental Protection Agency states that a well-designed adsorption system should have removal efficiencies ranging from 95% to 98% for industrial contaminant concentrations, in the range of 500 to 2,000 ppm; higher collection efficiencies are needed for high toxicity CBR agents.
Sorbent physicochemical properties-such as pore size and shape, surface area, pore volume, and chemical inertness-all influence the ability of a sorbent to collect gases and vapors. Sorbent manufacturers have published information on the proper use of gas-phase sorbents, based upon contaminants and conditions. Air contaminant concentration, molecular weight, molecule size, and temperature are all important. The activated carbon, zeolites, alumina, and polymer sorbents you select should have pore sizes larger than the gas molecules being adsorbed. This point is particularly important for zeolites because of their uniform pore sizes. With certain adsorbents, compounds having higher molecular weights are often more strongly adsorbed than those with lower molecular weights. Copper-silver-zinc-molybdenum-triethylenediamine (ASZM-TEDA) carbon is the current military sorbent recommended for collecting classical chemical warfare agents. You should ask your sorbent supplier for data concerning what specific CBR agents the equipment has been tested against, the test conditions, and the level of protection. The U.S. Army's Edgewood Chemical Biological Center, Aberdeen Proving Ground, Maryland, also has technical expertise on these subjects.
# Understand performance parameters and prevent breakthrough.
Sorbents are rated in terms of adsorption capacity (i.e., the amount of the chemical that can be captured) for many chemicals. This capacity rises as concentration increases and temperature decreases. The rate of adsorption (i.e., the efficiency) falls as the amount of contaminant captured grows. Information about adsorption capacity-available from manufacturers-will allow you to predict the service life of a sorbent bed. Sorbent beds are sized on the basis of challenge agent and concentration, air velocity and temperature, and the maximum allowable downstream concentration.
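A first-cut service-life estimate follows from a simple mass balance: the bed is spent when the captured contaminant mass equals capacity times sorbent mass. The sketch below is an upper bound of ours that ignores the mass transfer zone, humidity, and competing gases discussed below; all inputs are hypothetical.

```python
# Upper-bound service-life estimate from adsorption capacity: the bed is
# spent when captured mass equals capacity x sorbent mass. Ignores the mass
# transfer zone, humidity, and competing gases; all inputs are hypothetical.

def service_life_h(capacity_g_per_g: float, sorbent_mass_g: float,
                   flow_m3_per_h: float, challenge_g_per_m3: float) -> float:
    captured_g_per_h = flow_m3_per_h * challenge_g_per_m3
    return (capacity_g_per_g * sorbent_mass_g) / captured_g_per_h

# 25 kg of carbon at 0.20 g/g capacity, 1,700 m^3/h of air at 5 mg/m^3:
print(f"{service_life_h(0.20, 25_000, 1_700, 0.005):.0f} h")  # ~588 h
```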
Gases are removed in the sorbent bed's mass transfer zone. As the sorbent bed removes gases and vapors, the leading edge of this zone is saturated with the contaminant, while the trailing edge is clean, as dictated by the adsorption capacity, bed depth, exposure history, and filtration dynamics. Significant quantities of an air contaminant may pass through the sorbent bed if breakthrough occurs. However, you can avoid breakthrough by selecting the appropriate quantity of sorbent and performing regular maintenance.
A phenomenon known as channeling may occur in sorbent beds and should be avoided. Channeling occurs when a greater flow of air passes through the portions of the bed that have lower resistance. It is caused by non-uniform packing, irregular particle sizes and shapes, wall effects, and gas pockets. If channeling occurs within a sorbent bed, it can adversely affect system performance.
# Establish effective maintenance schedules based on predicted service life.
When determining sorbent bed maintenance schedules and costs, you should consider service life of the sorbent. All sorbents have limited adsorption capacities and require scheduled maintenance. The effective residual capacity of an activated carbon sorbent bed is not easily determined while in use, and saturated sorbents can reemit collected contaminants. Sorbent life depends upon bed volume or mass and its geometric shape, which influences airflow through the sorbent bed. Chemical agent concentrations and other gases (including humidity) affect the bed capacity. Because of differences in affinities, it is possible that one chemical may displace another chemical, which can be re-adsorbed downstream or forced out of the bed. Most sorbents come in pellet form, which makes it possible to mix them. Mixed- and/or layered-sorbent beds permit effective removal of a broader range of contaminants than possible with a single sorbent. Many sorbents can be regenerated, but it is important to follow the manufacturer's guidance closely to ensure that you replace or regenerate sorbents in a safe and effective manner.
# Don't reuse chemically active sorbents.
Some chemically active sorbents are impregnated with strong oxidizers, such as potassium permanganate. The adsorbent part of the bed captures the target gas and gives the oxidizer time to react and destroy other agents. You should not reuse chemically active sorbents because the oxidizer is consumed over time. If the adsorbent bed is exposed to very high concentrations of vapors, exothermic adsorption could lead to a large temperature rise and filter bed ignition. This risk can be exacerbated by the nature of impregnation materials. It is well known that lead and other metals can significantly lower the spontaneous ignition temperature of a carbon filter bed. The risk of sorbent bed fires is generally low and can be further minimized by ensuring that air-cleaning systems are located away from heat sources and that automatic shut-off and warning capabilities are included in the system.
# A Word about Filter or Sorbent Bypass and Air Infiltration
Ideally, all airflow should pass through the installed filters of the HVAC system. However, filter bypass, a common problem, occurs when air flows around a filter or through some other unintended path. Preventing filter bypass becomes more important as filter collection efficiency and pressure drop increase. Airflow around the filters results from various imperfections (e.g., poorly sealed filters), which permit particles to bypass the filters rather than passing directly into the filter media. Filters can be held in place with a clamping mechanism, but this method may not provide an airtight seal. The best high-efficiency filtration systems have gaskets and clamps that provide an airtight seal. Any deteriorating or distorted gaskets should be replaced and checked for leaks. You can visually inspect filters for major leakage around the edges by placing a light source behind the filter; however, the best method of checking for leaks involves a particle counter or aerosol photometer. Finally, no faults or other imperfections should exist within the filter media, and you should evaluate performance using a quantitative test, as described in the literature.
Another issue to consider is infiltration of outdoor air into the building. Air infiltration may occur through openings in the building envelope-such as doors, windows, ventilation openings, and cracks. Initially, you must decide which portions of your building to include in the protective envelope. Areas requiring high air exchange, such as some mechanical rooms, may be excluded. To maximize building protection, reduce the infiltration of unfiltered outdoor air by increasing the air tightness of the building envelope (eliminating cracks and pores) and introducing enough filtered air to place the building under positive pressure with respect to the outdoors. It is much easier and more cost efficient to maintain positive pressure in a building if the envelope is tight, so use these measures in combination. The U.S. Army Corps of Engineers recommends that for external terrorist threats, buildings should be designed to provide positive pressure at wind speeds up to 12 km/hr (7 mph). Designing for higher wind speeds will give even greater building protection.
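For a rough sense of scale, the wind stagnation pressure 0.5ρv² approximates the indoor-outdoor differential needed to stay positive on the windward face. The back-of-the-envelope sketch below is our physics estimate, not a Corps of Engineers formula, evaluated at the 12 km/hr design point.

```python
# Wind stagnation pressure as a floor for the pressurization goal above:
# to hold positive pressure at a given wind speed, the indoor-outdoor
# differential must exceed roughly 0.5 * rho * v^2 on the windward face.
# This is our own physics estimate, not a Corps of Engineers formula.

def stagnation_pressure_pa(wind_km_h: float, air_density: float = 1.2) -> float:
    v = wind_km_h / 3.6  # convert km/h to m/s
    return 0.5 * air_density * v * v

print(f"{stagnation_pressure_pa(12):.1f} Pa")  # ~6.7 Pa at 12 km/hr (7 mph)
```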
In buildings that have a leaky envelope, maintaining positive indoor pressure may be difficult or even impossible. Interior/exterior differential air pressures are in constant flux due to wind speed and direction, barometric pressure, indoor/outdoor temperature differences (stack effect), and building operations, such as elevator movement or HVAC system operation. HVAC system operating mode is also important in maintaining positive indoor pressure. For example, many HVAC systems use an energy savings mode on the weekends and at night to reduce outside air supply and, hence, lower building pressurization.
In cold climates, you should ensure that an adequate and properly positioned vapor barrier exists before you pressurize your building, to minimize condensation, which may, in turn, cause mold and other problems. All of these factors (leaky envelope, negative indoor air pressure, energy savings mode) influence building air infiltration and must be considered when you tighten your building. You can use building pressurization or tracer gas testing to evaluate the air tightness of your building envelope. Information on evaluating building envelope tightness, air infiltration, and water vapor management is described in the ASHRAE Fundamentals Handbook.
# Recommendations Regarding Operations and Maintenance
Filter performance depends on proper selection, installation, operation, testing, and maintenance. The scheduled maintenance program should include procedures for installation, removal, and disposal of filter media and sorbents. Only adequately trained personnel should perform filter maintenance and only while the HVAC system is not operating (locked out/tagged out) to prevent contaminants from being entrained into the moving air stream.
# Do not attempt HVAC system maintenance following a CBR release without first consulting appropriate emergency response and/or health and safety professionals.
If a CBR release occurs in or near your building, significant hazards may be present, particularly within the building's HVAC system. If the HVAC and filtration systems have protected the building from the CBR release, contaminants will have collected on HVAC system components, on the particulate filters, or within the sorbent bed. These accumulated materials present a hazard to personnel servicing the various systems. Therefore, before servicing these systems following a release, consult with the appropriate emergency response and/or health and safety professionals to develop a plan for returning the HVAC systems and your building to service. Because of the wide variety of buildings, contaminants, and scenarios, it is not possible to provide a generic plan here. However, such a plan should include requirements for personnel training and appropriate personal protective equipment.
# Understand how filter type affects change-out schedules.
Proper maintenance, including your monitoring of filter efficiency and system integrity, is critical to ensuring HVAC systems operate as intended. The change-out schedule for various filter types may be significantly different. One reason for differences is that little change in pressure drop occurs during the loading of an electrostatic filter, as opposed to mechanical filters. Ideally, you should determine the change-out schedule for electrostatic filters by using optical particle counters or other quantitative measures of collection efficiency.
Collecting objective data (experimental measurements) will allow you to optimize electrostatic filter life and filtration performance. The data should be particle-size selective so that you can determine filtration efficiencies that are based on particle size (e.g., micrometer, sub-micrometer, and most penetrating size). On the other hand, mechanical filters show larger pressure drop increases during loading, and hence, pressure drop can be used to determine their appropriate change-out schedules. If using mechanical filters, a manometer or other pressure-sensing device should be installed in the mechanical filtration system to provide an accurate and objective means of determining the need for filter replacement. Pressure drop characteristics of both mechanical and electrostatic filters are supplied by the filter manufacturer.
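These two change-out criteria can be captured in a simple decision rule. The sketch below is illustrative only: the thresholds are hypothetical stand-ins for your manufacturer's rated final pressure drop and your own measured-efficiency floor.

```python
# Decision sketch reflecting the guidance above: pressure drop triggers
# mechanical-filter replacement, while electrostatic filters need measured
# efficiency or elapsed service time. All thresholds are hypothetical.

def needs_changeout(filter_type: str, dp_pa: float = 0.0,
                    measured_eff: float = 1.0, days_in_service: int = 0) -> bool:
    if filter_type == "mechanical":
        final_dp_pa = 250.0            # manufacturer's rated final pressure drop
        return dp_pa >= final_dp_pa
    if filter_type == "electrostatic":
        min_eff, max_days = 0.80, 90   # efficiency floor, service-time cap
        return measured_eff < min_eff or days_in_service > max_days
    raise ValueError(f"unknown filter type: {filter_type}")

print(needs_changeout("mechanical", dp_pa=265))                        # True
print(needs_changeout("electrostatic", measured_eff=0.84,
                      days_in_service=45))                             # False
```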
# Ensure maintenance personnel are well trained.
Qualified individuals should be responsible for the operation of the HVAC system. As maintenance personnel, you must have a general working knowledge of the HVAC system and its function. You are responsible for monitoring and maintaining the system, including filter change-out schedules, documentation, and record keeping; therefore, you should also be involved in the selection of the appropriate filter media for a given application. Because of the sensitive nature of these systems, appropriate background checks should be completed and assessed for any personnel who have access to the HVAC equipment.
Handle filters with care and inspect for damage.
Mechanical filters, often made of glass fibers, are relatively delicate and should be handled carefully to avoid damage. Filters enclosed in metal frames are heavy and may cause problems because of the additional weight they place on the filter racks. The increased weight may require a new filter support system that has vertical stiffeners and better sealing properties to ensure total system integrity. Polymeric electrostatic filters are more durable and less prone to damage than mechanical filters.
To prevent installation of a filter that has been damaged in storage or one that has a manufacturing defect, you should check all filters before installing them and visually inspect the seams for total integrity. You should hold the filters in front of a light source and look for voids, tears, or gaps in the filter media and filter frames. Take special care to avoid jarring or dropping the filter element during inspection, installation, or removal.
# Wear appropriate personal protective equipment when performing change-out.
Recent laboratory studies have indicated that re-aerosolization of bioaerosols from HEPA and N95 respirator filter material is unlikely under normal conditions. These studies concluded that biological aerosols are not likely to become an airborne infectious problem once removed by a HEPA filter (or other high-efficiency filter material); however, the risks associated with handling loaded filters in ventilation systems under field conditions have not been fully characterized. Maintenance and filter change-out should therefore be performed only when a system is shut down, to avoid re-entrainment and exposure. You should place old filters in sealed plastic bags upon removal.
Where feasible, particulate filters may be disinfected in a 10% bleach solution or other appropriate biocide before removal. When using disinfecting compounds, you should both shut down the HVAC system and ensure that the compounds are compatible with the HVAC system components they may contact. Decontaminating filters exposed to CBR agents requires knowledge of the type of agent, safety-related information concerning the decontaminating compounds, and proper hazardous waste disposal procedures. Your local hazardous materials (HAZMAT) teams and contractors should have expertise in these areas.
# Note on Emerging Technologies
Recently, a number of new technologies have been developed to enhance or augment HVAC filtration systems. Many of these technologies have taken novel approaches to removing contaminants from the building air stream. While some of these new systems may be highly effective, many are unproven. Before you commit to one of these new technologies for the protection of your building and its occupants, require the vendor to provide evidence that demonstrates the effectiveness for your application. Some of the things you should do include:
- Identify data showing the effectiveness and efficiency of the system. This data should be relevant to the application proposed for your building (flow rate, contaminant concentration, etc.).
- Know the source of the data. Did independent researchers collect the data, or was the research done by a vendor? While vendor-collected data can be useful, data collected by an independent organization can reduce or eliminate biases. Where applicable, ask for data collected using consensus protocols (e.g., those of ASHRAE, the Institute of Environmental Sciences and Technology, the American Society for Testing and Materials, and the Air-Conditioning and Refrigeration Institute).
- Be concerned about long-term maintenance, possible hazards, or generated pollutants resulting from an experimental system.
- Be wary of anecdotal data or testimonials, particularly those extolling the new technology. While this information can be interesting and thought-provoking, it may not be relevant to how well the system will work in your building.
- Talk with the vendor's customers who have implemented the systems of interest. Are they satisfied with the system, equipment, installation, and vendor? What problems did they encounter and how were these resolved? If they had it to do over, what would they do differently?
New technologies can and will have a place in protecting a building's airborne environment. However, you should ensure that resources are spent on proven systems and technologies that will continue to be effective when needed.
# ECONOMIC CONSIDERATIONS
Costs associated with air filtration and air-cleaning systems can be divided into three general categories: initial costs, operating costs, and replacement costs. Although some users might consider only the initial costs when selecting an appropriate filtration system, it is important to weigh carefully all of the life-cycle costs. The HVAC design engineer should assist you in understanding the costs and benefits of various air-filtration options.
# Initial Costs
Initial costs include those for original equipment (the filter rack system, individual filters, and auxiliary equipment) and the usual direct and indirect electrical, ducting, and plumbing costs associated with installing a new system. The total purchase cost of the filtration system is the sum of the costs for the filter rack system, filters, and auxiliary equipment; instruments and controls; taxes; and freight. For particulate filters, expenses generally increase as filter efficiency and quality increase. For some applications, a lower-efficiency filter (e.g., MERV 12) may be adequate and can be used instead of a HEPA filter (MERV 17) to control costs while achieving adequate performance. For gas-phase filters, the cost differences among sorbents can be dramatic. For example, natural zeolite, alumina, and activated carbon are generally the least expensive sorbents. Specialty carbon (such as ASZM-TEDA), synthetic zeolite, and polymers are typically much more expensive (as much as 20 times more expensive). A trade-off to consider is that carbon needs to be replaced frequently (every 6 months to 5 years), while zeolite and polymer replacement can occur less frequently.
Other factors that influence the initial costs of a system include the volumetric flow rate, contaminant concentrations, and, in the case of adsorption systems, bed size, sorbent capacity, and humidity.
Volumetric flow and pressure drop may be the most important factors because they determine the size of the ductwork and filter rack, as well as the blower and motor. Effective sorbent filters typically have a resistance of at least 125 Pa (0.5 in. water gauge) for thin beds and 500 Pa (2.0 in. water gauge) or more for deep beds.
# Operating Costs
Annual operating costs include operating labor and materials, replacement filters, maintenance (labor and materials), utilities, waste disposal, and equipment depreciation. These costs vary, based upon the specific filtration system. Many of these costs should be considered in terms of the present value of money. Operating and maintenance labor costs depend on the filter type, size, and operating difficulty of a particular unit. Electrical costs to operate the blowers are directly related to airflow through and pressure drop across the filters.
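A rough sketch of that relationship (not from this guidance), assuming fan electrical power equals airflow times pressure drop divided by the combined fan and motor efficiency; all numbers below are hypothetical:

```python
# Illustrative estimate of the annual blower electricity cost attributable
# to a filter bank. eta is the combined fan/motor efficiency (assumed).

def annual_fan_cost(flow_m3_s: float, pressure_drop_pa: float,
                    eta: float = 0.6, hours_per_year: float = 8760,
                    dollars_per_kwh: float = 0.10) -> float:
    power_w = flow_m3_s * pressure_drop_pa / eta   # W = (m^3/s * Pa) / efficiency
    return power_w / 1000 * hours_per_year * dollars_per_kwh

# A 10 m^3/s system whose filters add 250 Pa of pressure drop:
print(f"${annual_fan_cost(10, 250):,.0f} per year")  # about $3,650
```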
# Replacement Costs
An important part of replacement costs relates to the estimated life of the filtration system. As filter life increases, the cost per operating hour falls. However, when mechanical filters are exposed to contaminated air, the pressure drop across them increases, and this can increase electrical costs. Costs can be minimized by evaluating the system and selecting the final pressure drop at which filters are replaced, balancing extended filter life against increased power requirements.
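One way to picture this selection is to sweep candidate replacement intervals and compare the amortized filter cost against the growing energy cost. This sketch is ours, not part of the guidance; the linear pressure-drop growth and every number below are illustrative assumptions.

```python
# Sketch of the replacement-interval trade-off described above. Assumes the
# bank's pressure drop grows linearly while in service; numbers are hypothetical.

BANK_COST = 4_200.0        # $ to replace the whole filter bank (e.g., 60 x $70)
P0, GROWTH = 125.0, 1.5    # initial pressure drop (Pa) and Pa gained per day
FLOW, ETA = 10.0, 0.6      # system airflow (m^3/s), fan/motor efficiency
KWH_RATE = 0.10            # $/kWh

def cost_per_day(days_in_service: int) -> float:
    mean_dp = P0 + GROWTH * days_in_service / 2           # average Pa over filter life
    fan_kwh_per_day = FLOW * mean_dp / ETA / 1000 * 24    # filter bank's share of fan energy
    return BANK_COST / days_in_service + fan_kwh_per_day * KWH_RATE

best = min(range(30, 721, 30), key=cost_per_day)
print(best, f"${cost_per_day(best):.2f}/day")  # interval with the lowest total cost
```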
Factors affecting particulate filter life include contaminant concentration, particle size distributions, airflow rates, and filter efficiency and quality. Particulate filters are frequently used in multiple stages to extend the life of more expensive final stage filters. Factors affecting gas-phase filter life include removal capacity and sorbent weight, sorbent collection efficiency, airflow rates, and molecular weight and concentration of the contaminant. Filter replacement labor costs depend on the number, size, and type of filters, their accessibility, how they are held in the filter rack, and other factors affecting labor.
# Cost Data
The cost of air-filtration and pressurization systems in new construction is about $6/ft² of floor area for basic, continuous HEPA and gas-phase V-bed filtration, using activated carbon. Operating costs are on the order of $5.40/m²/yr ($0.50/ft²/yr). Adding sensors and on-demand military-style radial HEPA or carbon filters can cost up to $430/m² ($40/ft²), and operating costs can increase to over $16/m²/yr ($1.50/ft²/yr). The cost of renovating an existing system may be up to three times more than the cost of new construction, depending on the amount of demolition, new ductwork, and enlargement of mechanical spaces required.
In most filter applications, the size of the filter bank is determined by the size of the heat transfer coils. The filter is placed upstream of the coils to reduce soiling. The filter bank is sized to the coil because the coil area is the point in the ducted portion of the air distribution system having the lowest velocity. The lower velocity of air through an air filter will result in a lower pressure drop across the filter. A lower pressure drop across the filter leads to a lower system pressure drop, resulting in lower fan horsepower and operating energy.
In most cases, sizing a moderately efficient air-filtration system to be larger than the coil area will result in high filter rack costs, which are not offset by a significantly reduced filter pressure drop. However, as the cost of energy increases, the benefit of lower pressure drop filters and larger filter racks becomes apparent.
Required fan horsepower is related to the total system pressure drop. For example, improving filtration such that the filter pressure drop rises from 250 to 500 Pa (1.0 to 2.0 in. water gauge) will boost the total system pressure drop from 1000 to 1250 Pa (4.0 to 5.0 in. water gauge). In this example, the higher pressure drop will increase the required fan horsepower by roughly 25%.
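A quick check of the arithmetic in this example: at a fixed airflow, fan power scales with total system pressure drop.

```python
# Fan power at fixed airflow is proportional to total system pressure drop,
# so the example above implies:
increase = (1250 - 1000) / 1000
print(f"{increase:.0%}")  # 25%
```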
The costs and benefits of the filters should be considered. A 25% ASHRAE filter (0.61 by 0.61 m) will cost approximately $10 to $20, while an 80% or 90% ASHRAE filter will cost in the range of $40 to $75. For example, if a system uses 60 filters at a cost of $70 each and they are replaced annually, the present value of the enhanced filters over 25 years is approximately $14,000. The benefits of higher-efficiency filters may include less need for coil cleaning and a reduced pressure drop due to cleaner coils. If these two factors save $1,000 annually, the present value of the savings is $17,500, which compensates for the increased filter cost.
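The present-value figures above can be reproduced with the standard annuity formula. The 3% discount rate used here is our assumption, chosen because it approximately reproduces the document's $17,500 savings figure; it is not stated in the source.

```python
# Present value of a constant annual amount (ordinary annuity).
# The 3% discount rate is an assumption, not a value from this document.

def present_value(annual_amount: float, rate: float, years: int) -> float:
    return annual_amount * (1 - (1 + rate) ** -years) / rate

print(f"${present_value(1_000, 0.03, 25):,.0f}")  # ~ $17,413, close to $17,500
```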
A standard HEPA filter (0.61 by 0.61 m) costs approximately $100 to $250. Initial HEPA filter pressure drops are around 250 to 325 Pa (1.0-1.5 in. water gauge), depending on the design flow rate, fan performance curve, and related issues. Peak pressure drops can be as high as 750 Pa (3.0 in. water gauge). Analysis has compared the cost efficiency (particle removal rate divided by life-cycle costs) of HEPA filters to ASHRAE 25%, 80%, and 90% filters. This analysis showed that ASHRAE 80% and 90% filters are substantially more cost efficient than HEPA filters.
Filter replacement timing must be traded off against the energy cost of driving air through a high-pressure-drop filter. The higher the cost of energy, the more frequently the building operator should change out higher-pressure-drop filters.
The number of filters that should be used in the design is limited by the available space and energy savings from reducing the system pressure drop. If energy is inexpensive, then fewer filters may be used. However, this does not take into account the environmental impact of wasted energy. If energy costs are high or are expected to increase over the life of the system, then selecting the maximum number of filters for the available space should be considered, along with filter rack costs.
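A sketch of this trade-off (ours, not from the guidance), assuming filter pressure drop falls in proportion to added filter area, so it scales as 1/N for N parallel filters; all costs below are hypothetical.

```python
# Sketch of the filter-count trade-off. Assumes pressure drop scales as 1/N
# for N parallel filters; rack and energy costs are hypothetical.

FLOW, ETA = 10.0, 0.6              # m^3/s, fan/motor efficiency
DP_BASELINE = 250.0                # Pa across a baseline bank of 20 filters
RACK_COST_PER_FILTER = 150.0       # $ amortized per filter position per year
KWH_RATE, HOURS = 0.10, 8760

def annual_cost(n_filters: int, baseline: int = 20) -> float:
    dp = DP_BASELINE * baseline / n_filters             # Pa, scales as 1/N
    energy = FLOW * dp / ETA / 1000 * HOURS * KWH_RATE  # $/yr for this bank
    return energy + RACK_COST_PER_FILTER * n_filters

for n in (20, 30, 40):
    print(n, f"${annual_cost(n):,.0f}/yr")
# At $0.10/kWh the added rack cost outweighs the energy savings, which is
# consistent with the text: inexpensive energy favors fewer filters.
```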
The cost of a standard-size (0.61 by 0.61 m), individual, high-efficiency gas-phase filter is about $2,000 to $4,000. These high filter costs drive the design to use as few filters as possible.
High energy costs (> $0.40 per kilowatt hour) are required before it is cost-effective to increase the number of filters and thereby reduce system pressure drop (energy) costs. Lower-efficiency and lower-cost gas-phase filters are available for indoor air quality applications. Less expensive gas-phase filters should be designed using the cost trade-off techniques described for particulate filters. However, you should recognize that these lower-cost options may not have the adsorption capacity needed to provide protection during a CBR event.
# CONCLUSIONS
Filtration and air-cleaning systems may protect a building and its occupants from the effects of a CBR attack. Although it is impossible to completely eliminate the risk from an attack, filtration and air-cleaning systems are important components of a comprehensive plan to reduce the consequences. CBR agents can effectively be removed by properly designed, installed, and well-maintained filtration and air-cleaning systems. These systems have other benefits besides reducing clean-up costs and delays, should a CBR event occur. These benefits include improving building cleanliness, improving HVAC system efficiency, potentially preventing cases of respiratory infection, reducing exacerbations of asthma and allergies, and generally improving building indoor air quality. Poor indoor air quality has also been associated with eye, nose, and throat irritation, headaches, dizziness, difficulty concentrating, and fatigue.
Initially, you must fully understand the design and operation of your existing building and HVAC system. Armed with that knowledge, along with an assessment of the current threat and the level of protection you want from your system, you can make an informed decision regarding your building's filtration and air-cleaning needs. In some situations, the existing system may be adequate, while in others major changes or improvements may be merited.
In most buildings, mechanical filtration systems for aerosol removal are more common than sorbents for gas and vapor removal.
# Toxic Industrial Chemicals and Materials
Toxic Industrial Chemicals (TICs) and Toxic Industrial Materials (TIMs) are commonly categorized by their hazardous properties, such as reactivity, stability, combustibility, corrosiveness, ability to oxidize other materials, and radioactivity. For the purposes of collection on a sorbent, gaseous agents can be divided into the following categories: organic vapors (e.g., cyclohexane), acid gases (e.g., hydrogen sulfide), base gases (e.g., ammonia), and specialty chemicals (e.g., formaldehyde or phosgene). TICs that have a combination of high toxicity and ready availability are of principal concern. Those having a volatility of less than 10 torr at room temperature are effectively removed by physical adsorption. However, a number of high-toxicity TICs, produced industrially on a large scale, have volatilities higher than 10 torr at 20°C and are more difficult to collect. Potential approaches to addressing these performance shortfalls include (1) development of structured filter beds to deal with specific chemicals and (2) impregnation treatments developed to address several high-priority TICs. Building owners and managers should take into account the potential threat posed by large quantities of TICs and TIMs that may be found in the vicinity of their building.
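The 10-torr rule of thumb above can be written as a simple screen. This sketch is ours, not part of the guidance, and the example vapor pressures are illustrative placeholders, not measured values.

```python
# The passage's rule of thumb as code: vapors below ~10 torr at room
# temperature are candidates for physical adsorption alone; more volatile
# agents generally need impregnated (reactive) sorbents.
# The example vapor pressures are placeholders only.

def physical_adsorption_adequate(vapor_pressure_torr: float,
                                 threshold_torr: float = 10.0) -> bool:
    return vapor_pressure_torr < threshold_torr

for name, vp in [("low-volatility agent", 0.7), ("high-volatility TIC", 120.0)]:
    route = ("physical adsorption" if physical_adsorption_adequate(vp)
             else "impregnated sorbent")
    print(f"{name}: {route}")
```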
# Biological Agents
Biological agents such as Bacillus anthracis (anthrax), Variola major (smallpox), Yersinia pestis (bubonic plague), Brucella suis (brucellosis), Francisella tularensis (tularemia), Coxiella burnetii (Q fever), Clostridium botulinum (botulism toxin), viral hemorrhagic fever agents, and others have the potential for use in a terrorist attack and may present the greatest hazard. Each of these biological agents may travel through the air as an aerosol. Generally, viruses are the smallest, while bacteria and spores are larger. Figure 1 shows the relative sizes of viruses, bacteria, spores, and other common air contaminants. In nature, biological agents and other aerosols often collide to form larger particles; however, terrorists or other groups may modify these agents in ways that reduce the occurrence of this phenomenon, thus increasing the number of biological agents that may potentially be inhaled. There are significant differences from one agent to another in their adverse public health impact and the mass casualties they can inflict. An agent's infectivity, toxicity, stability as an aerosol, ability to be dispersed, and concentration all influence the extent of the hazard. Other important factors include person-to-person agent communicability and treatment difficulty. Biological agents have many entry routes and physiological effects. They generally are nonvolatile and can normally be removed by appropriately selected particulate filters, as described in the Recommendations section of this document.
# Toxins
Toxin categories include bacterial toxins (exotoxins and endotoxins), algal toxins (from blue-green algae and dinoflagellates), mycotoxins (trichothecenes and aflatoxins), botulinum toxin, and plant- and animal-derived toxins.
Toxins form an extremely diverse category of materials and are typically most effectively introduced into the body by inhalation of an aerosol. Many are far more toxic, weight for weight, than chemical agents. Their persistency is determined by their stability in water and exposure to heat or direct solar radiation. Under normal circumstances, toxins can be collected using appropriately selected particulate filters, as described in the Recommendations section of this document.
# Radiological Hazards
Radiological hazards can be divided into three general forms: alpha, beta, and gamma radiation. These three forms of radiation are emitted by radioisotopes that may occur as an aerosol, be carried on particulate matter, or occur in a gaseous state. Alpha particles, consisting of two neutrons and two protons, are the least penetrating and the most ionizing form. Alpha particles are emitted from the nucleus of radioactive atoms and transfer their energy at very short distances. Alpha particles are readily shielded by paper or skin and are most dangerous when inhaled and deposited in the respiratory tract. Beta particles are negatively charged particles emitted from the nucleus of radioactive atoms. Beta particles are more penetrating than alpha particles, presenting an internal exposure hazard. They can penetrate the skin and cause burns. If they contact a high-density material, they may generate X-rays, also known as Bremsstrahlung radiation. Gamma rays are emitted from the nucleus of an atom during radioactive decay. Gamma radiation can cause ionization in materials and biological damage to human tissues, presenting an external radiation hazard.
There are three primary scenarios in which radioactive materials could potentially be dispersed by a terrorist: (1) conventional explosives or other means used to spread radioactive materials (a "dirty bomb"), (2) an attack on a fixed nuclear facility, and (3) a nuclear weapon. In any of these events, filtration and air-cleaning devices would be ineffective at stopping the blast and radiation itself; however, they would be useful in collecting the material from which the radiation is being emitted. Micrometer-sized aerosols from a radiological event are effectively removed from air streams by HEPA filters. This collection could prevent distribution throughout a building; however, subsequent decontamination of the HVAC system would be required.
Some chemical agents, most notably the blood agents hydrogen cyanide (AC) and cyanogen chloride (CK), are not effectively removed by physical adsorption because of their high volatility. The traditional approach to providing protection against such materials is to impregnate the adsorbent material with a reactive component that decomposes the vapor. Usually, the vapor is converted to an acid gas byproduct, which must also be removed by reaction with the impregnant. A typical breakthrough curve for CK at various filter bed depths, using military carbon ASZM-TEDA, is depicted in Figure 10.
Table 3 provides a list of chemical agent categories and the mechanism believed to remove the respective toxic vapors.
# Types of sorbent materials
There are many different sorbents available for various applications. These materials include both adsorbent and chemisorbent materials. Some of the more commonly used materials are described below.
Activated carbon is the most common sorbent used in HVAC systems, and it is excellent for most organic chemicals. Activated carbon is prepared from carbonaceous materials, such as wood, coal, bark, or coconut shells. Activation partially oxidizes the carbon to produce sub-micrometer pores and channels, which give the high surface area-to-volume ratio needed for a good sorbent (Figure 6).
Activated carbon often has surface areas in the range of 1,000 m² per gram (m²/g), but higher-porosity materials (e.g., super-activated carbon) are well known. Because activated carbon is nonpolar (it does not favorably adsorb water vapor), organic vapors can be captured at relatively high humidity. Activated carbon does not efficiently adsorb volatile, low-molecular-weight gases, such as formaldehyde and ammonia. However, activated carbon is relatively inexpensive and can retain a significant fraction (50%) of its weight in adsorbed material.
The surface of activated carbon is highly irregular, and pore sizes range from 0.5 to 50 nm, enabling adsorption of many substances. Carbons with smaller pore sizes have a greater affinity for smaller high-volatility vapors. Typically, activated carbon prepared from coconut shells has smaller pore sizes, while carbon produced from bituminous coal has larger pores. When the activated carbon has been spent, it may be regenerated thermally or by using solvent extraction. The American Society for Testing and Materials (ASTM) has established standards for determining the quality of activated carbon and addressed issues such as apparent density, particle size distribution, total ash content, moisture, activated carbon activity, and resistance to attrition.
You can enhance the range of vapors that activated carbon will adsorb by using chemical impregnates, which supplement physical adsorption with an added chemical reaction. Impregnated activated carbon has been used since World War I to protect soldiers from chemical warfare agents, such as mustard gas and phosgene. Chemical impregnates aid activated carbon in removing high-volatility vapors and polar contaminants. Low-vapor-pressure chemicals, such as isopropyl methylphosphonofluoridate (GB, the nerve agent sarin) and bis-(2-chloroethyl) sulfide (HD, a vesicant), are effectively removed by physical adsorption. Reactive chemicals have been successfully impregnated into activated carbon to chemically decompose high-vapor-pressure agents, such as the blood agents CK and AC.

[Figure 10. Breakthrough curves for CK on ASZM-TEDA carbon at various filter bed depths. Broken line indicates breakthrough concentration. CK feed concentration is 2,000 mg/m³; filter face velocity is 6 cm/sec; relative humidity is 80%.]
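The dependence of breakthrough time on bed depth shown in Figure 10 is often estimated with the Wheeler-Jonas equation. That model and every parameter value below are illustrative assumptions, not taken from this document; only the 2,000 mg/m³ feed concentration mirrors the figure caption.

```python
import math

# Wheeler-Jonas estimate of gas-phase filter breakthrough time. A standard
# engineering approximation, not from this document; values are illustrative.

def breakthrough_time_min(W_g, Q_cm3_min, c0_g_cm3, We_g_g,
                          rho_b_g_cm3, kv_per_min, cx_g_cm3):
    capacity_term = We_g_g * W_g / (Q_cm3_min * c0_g_cm3)
    kinetic_term = (We_g_g * rho_b_g_cm3 / (kv_per_min * c0_g_cm3)
                    * math.log((c0_g_cm3 - cx_g_cm3) / cx_g_cm3))
    return capacity_term - kinetic_term

# Deeper beds hold more carbon (W), so breakthrough time grows roughly
# linearly with bed depth once past the kinetic offset:
for W in (100.0, 200.0, 400.0):  # grams of carbon
    t = breakthrough_time_min(W, Q_cm3_min=50_000, c0_g_cm3=2e-6,
                              We_g_g=0.10, rho_b_g_cm3=0.45,
                              kv_per_min=4_000, cx_g_cm3=2e-8)
    print(f"{W:>5.0f} g: {t:,.0f} min")
```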
One type of impregnated activated carbon, ASZM-TEDA carbon, has been used in U.S. military nuclear, biological, and chemical (NBC) filters since 1993. This material is a coal-based activated carbon that has been impregnated with copper, zinc, silver, and molybdenum compounds, in addition to triethylenediamine. ASZM-TEDA carbon provides a high level of protection against a wide range of chemical warfare agents. Table 3 provides a list of chemical impregnates and the air contaminants against which they are effective.
Silica gel and alumina are common inorganic sorbents that are used to trap polar compounds. Sorption takes place when the polar functional group of a contaminant molecule is attracted by hydrogen bonding or electron cloud interaction with oxygen atoms in the silica or alumina. Silica gels are inorganic polymers having a variety of pore sizes and surface areas. Silica Gel 100 has a pore size of 10 nm and a surface area of 300 m²/g; Silica Gel 40 has a pore size of 4 nm and a surface area of 750 m²/g. Silica gel adsorbs water in preference to hydrocarbons, and wet silica gels do not effectively adsorb hydrocarbons. This property makes silica gel a poor sorbent for humid atmospheres; however, amines and other inorganic compounds can be collected on silica gel. Alumina has pore sizes of approximately 5.8 nm and surface areas as high as 155 m²/g. By changing the surface pH from acidic to basic, alumina can be modified to sorb a wider polarity range than silica gel.
Zeolites are a large group of naturally occurring aluminosilicate minerals, which form crystalline structures having uniform pore sizes. Zeolites occur in fibrous and non-fibrous forms and can undergo reversible, selective adsorption. Different molecular structures of zeolites result in pore sizes ranging from 3 to 30 angstroms. Zeolites are hydrophilic and may be chemically impregnated to improve their performance. They are used for organic solvents and for volatile, low-molecular-weight halides, such as chlorinated fluorocarbons (CFCs). A primary issue related to the effective use of zeolites is the molecular size of the vapor compared to the pore size. Zeolites will not adsorb molecules larger than their pore sizes, nor will they capture compounds for which they have no affinity.
Synthetic zeolites are made as crystals from 1 µm to 1 mm in size and are bonded into larger granules, reducing airflow resistance. They can be manufactured to have large pore sizes and to be hydrophobic for use at high relative humidity. Synthetic zeolites can be designed to adsorb specific contaminants by modification of pore sizes. Alumina-rich zeolites have a high affinity for water and other polar molecules, while silica-rich zeolites have an affinity for non-polar molecules.
Synthetic polymeric sorbents are designed to collect specific chemical classes based upon their backbone structure and functional groups. Depending on the chemistry, some polymeric sorbents reversibly sorb compounds, while others capture and destroy contaminants. Some commercially available synthetic polymeric sorbents include the following: Ambersorb®, Amberlite®, Carboxen®, Chromosorb®, Hayesep®, and Tenax®. Chemically impregnated fibers (CIF) are a recently developed technology, using smaller, more active sorbent particles of carbon, permanganate/alumina, or zeolite incorporated into a fabric mat. This design provides a combination of particulate and gas-phase filtration. The smaller sorbent particles are more efficient adsorbers than the larger ones found in typical packed beds. This technology provides the advantages of gas-phase filtration without the associated costs. CIF filters are held in media that range from 1/8 to 2 in. thick. Fibers range in size from 2 to 50 µm in diameter. CIF filters contain less sorbent (as much as 20 times less) than typical packed beds, resulting in much shorter service life.
"id": "4eba4e29f4d0570fd3bf2a454917f5bf3fb1dec4",
"source": "cdc",
"title": "None",
"url": "None"
} |
I am Mr. Edward J. Baier, Deputy Director of the National Institute for Occupational Safety and Health (NIOSH), administered by the Center for Disease Control within the Department of Health, Education, and Welfare. With me today are: Dr. Joseph K. Wagoner, Dr. Peter F. Infante, Dr. Robert N. Ligo, and Dr. Victor E. Archer, Division of Surveillance, Hazard Evaluations and Field Studies; Dr. David H. Groth, Division of Biomedical and Behavioral Sciences; Dr. Janet C. Haartz and Mr. John Sheehy, Division of Physical Sciences and Engineering; Dr. Douglas L. Smith, Division of Criteria Documentation and Standards Development; and Mr. Robert H. Schütz, Testing and Certification Branch.
We welcome this opportunity to appear here today to discuss the effects of occupational exposure to beryllium upon human health, including the results of recent studies conducted by NIOSH.
Since the early 1940s, evidence increasingly has demonstrated the presence of beryllium-induced non-neoplastic respiratory diseases (berylliosis) and their sequelae among workers employed in industries producing and using beryllium and its compounds. Cases of berylliosis also have been identified among individuals living near these industrial facilities. This accumulation of evidence led to a NIOSH recommendation, transmitted as a criteria document to OSHA in 1972, that occupational exposure to beryllium be limited to 2 micrograms of total airborne particulate beryllium per cubic meter of air (2 µg Be/m³) based on an 8-hour time-weighted average (TWA). In addition, NIOSH recommended at that time that no worker be exposed to peak concentrations of beryllium in excess of 25 µg Be/m³ based on thirty-minute sampling periods. That standard was recommended with the belief that it would "prevent the development of acute and chronic non-neoplastic respiratory disease in workers exposed to beryllium." That standard was not recommended with the stated belief that it would prevent beryllium-induced cancer, as the human studies available at that time were judged to be contradictory, and thus NIOSH concluded that the human evidence did not support animal studies demonstrating beryllium to be a carcinogen.
In 1975, OSHA requested NIOSH to re-evaluate the information available on the adverse health effects of occupational exposure to beryllium and to advise OSHA of the results of that re-evaluation. After a thorough review and evaluation of the most pertinent studies considered in the 1972 document and papers published during the three years since that document, NIOSH on December 10, 1975, transmitted its recommendation that beryllium posed a carcinogenic risk to man and that occupational exposures, therefore, should be reduced to a minimum.
Since 1975, additional data have been generated which further document the carcinogenicity of beryllium among humans. As a result of this collective evidence, NIOSH now recommends that occupational exposure to beryllium be controlled so that no worker will be exposed in excess of 0.5 µg Be/m³.
We believe that a review of the adverse non-neoplastic health effects of beryllium has been sufficiently presented by OSHA, so that repetition at this point is unnecessary. In this testimony, NIOSH will address issues which we believe to be important for consideration by OSHA for the permanent standard on beryllium. OSHA has not recommended a specific method of measurement for assessing compliance with permissible limits of airborne concentrations or employee exposure to beryllium. A slight modification of the sampling and analytical method recommended in the NIOSH criteria document is indicated. NIOSH recommends that the method of measurement of employee exposure be collection of personal samples on cellulose ester membrane filters, followed by flameless atomic absorption determination of the total beryllium in the sample. This method has been evaluated by NIOSH over the range of 2.68-11.84 µg Be/m³ using a 40-liter air sample. The precision and accuracy of the method were determined to be within ±25% of the "true" value at the 95% confidence level. The data indicate that this method would be satisfactory for the measurement of air concentrations of 0.5 µg Be/m³, providing that a 220-liter (130 minutes at 1. ...) air sample is collected. In a NIOSH survey (27), personal gross air samples of 39 workers at a beryllium production plant showed average beryllium concentrations greater than 2 µg Be/m³ for 28 of the workers. In a second plant that produces and fabricates beryllium metal, personal gross air samples of 17 workers were taken (27); of these 17, all but one showed a beryllium concentration greater than 2.0 µg/m³. In a third plant, which produces beryllium copper, the beryllium concentrations by the gross personal sample method averaged greater than 2.0 µg/m³ for five of the 12 workers. A more recent study (28) of one of the above beryllium production facilities showed that the average beryllium concentration in four of five areas evaluated exceeded 2 µg/m³, with one area averaging 13 µg/m³.
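For context, the concentration reported from such a personal sample is simply the collected mass divided by the sampled air volume. This sketch is ours, not NIOSH's; because the source's flow-rate figure is truncated, the 1.7 L/min used below is an assumption chosen only to be consistent with 220 liters collected over 130 minutes.

```python
# Airborne concentration from a personal filter sample: C = mass / volume.
# The 1.7 L/min flow rate is an assumption (the document's figure is
# truncated); a 220-liter sample at 0.5 ug/m^3 collects about 0.11 ug.

def concentration_ug_m3(mass_ug: float, flow_l_min: float, minutes: float) -> float:
    volume_m3 = flow_l_min * minutes / 1000.0
    return mass_ug / volume_m3

print(concentration_ug_m3(mass_ug=0.11, flow_l_min=1.7, minutes=130))  # ~0.5
```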
These
"id": "598aafced74213ae8336a6a5c9043256fd29f162",
"source": "cdc",
"title": "None",
"url": "None"
} |
CDC and state and local public health authorities have been investigating cases of bioterrorism-related anthrax. This report updates findings as of October 31 and includes interim guidelines for the clinical evaluation of persons with possible anthrax. A total of 21 cases (16 confirmed and five suspected) of bioterrorism-related anthrax have been reported among persons who worked in the District of Columbia, Florida, New Jersey, and New York City (Figure 1).
Until the source of these intentional exposures is eliminated, clinicians and laboratorians should be alert for clinical evidence of Bacillus anthracis infection. Epidemiologic investigation of these cases and surveillance to detect new cases of bioterrorism-associated anthrax continues.
# New York
To date, the investigations in New York City have identified one confirmed inhalational case and six (three confirmed and three suspected) cutaneous anthrax cases; the confirmed inhalational and one suspected cutaneous case have been identified since the last report. 1 The six cutaneous cases were associated with four media companies (A-D); the most recent suspected cutaneous case is associated with company D. The most recent confirmed inhalational case is not directly associated with any media company or with mail handling. No cases among postal workers have been identified.
The most recent suspected cutaneous case occurred in a 34-year-old man who worked in the mail room of company D and who, during October 12-15, might have handled a letter postmarked September 18 that subsequently was found to contain B. anthracis. 1 On October 19, the patient noted a small, erythematous pruritic papule on his left forearm that later developed a small vesicle. On October 21, he started ciprofloxacin. By October 22, an eschar had developed, increased in size, and over the next several days was surrounded by erythema, edema, and induration. A biopsy was positive for B. anthracis by immunohistochemical (IHC) staining.
The inhalational anthrax case occurred in a 61-year-old woman who worked in the stockroom of a hospital in Manhattan. The patient became ill on October 25 with malaise and myalgias. During the next several days, she had shortness of breath, chest discomfort, and a productive cough with blood-tinged sputum. She reported no fever, chills, or night sweats. She presented to an emergency department on October 28 in respiratory distress. Her temperature was 102°F (39°C), and she was admitted to the intensive care unit and required mechanical ventilation. Initial chest radiograph revealed pulmonary venous congestion and bilateral pleural effusions; a chest computerized tomography (CT) scan revealed a widened mediastinum and bilateral pleural effusions. An echocardiogram indicated a small pericardial effusion. She was empirically treated with levofloxacin, rifampin, and clindamycin. Blood cultures grew B. anthracis less than 24 hours after admission. Her pleural effusion revealed hemorrhagic fluid and B. anthracis. The patient died on October 31.
# New Jersey
To date, investigations in New Jersey and Pennsylvania have identified seven (five confirmed and two suspected) anthrax cases. Since the last report, 1 cutaneous disease was confirmed in two patients, and inhalational anthrax was confirmed in two patients, one of whom was previously classified as a suspected case-patient. Five patients worked in New Jersey at one of two postal facilities. Although no specific contaminated letter was implicated in these cases, contaminated letters destined for both New York City and the District of Columbia passed through at least one of the postal facilities in New Jersey.
Inhalational anthrax was confirmed in a 56yearold female postal worker who initially was classified as a suspected case-patient. 1 Her pleural fluid was positive for B. anthracis by polymerase chain reaction (PCR) and a pleural biopsy was positive for B. anthracis by IHC staining.
On October 13, a 54-year-old Delaware resident who worked as a mail sorter at a New Jersey postal processing and distributing center developed a painless lesion on the dorsum of his left hand. The lesion began as an erythematous "knot" several millimeters in size that developed a crusted scale during the next few days. No associated edema, eschar, or lymphadenopathy was observed. The patient had elevated levels of serum antibody (IgG) to the protective antigen component of the anthrax toxin by enzyme-linked immunosorbent assay.
On October 15, a 43-year-old female postal worker who worked at a facility in which anthrax cases have been documented developed fever, headache, chills, and shortness of breath. She was treated with levofloxacin, but her symptoms progressed and she was admitted to a hospital on October 18. A chest radiograph indicated a right perihilar infiltrate and a small pleural effusion. She was started on multidrug therapy, including ciprofloxacin, which was changed to azithromycin after 24 hours. On admission, she was febrile and tachycardic. She had an elevated white blood cell (WBC) count of 11,000 with 14% bands. A CT scan on October 19 showed a right pleural effusion, perihilar consolidation, and mediastinal adenopathy. She subsequently had two thoracenteses that produced serosanguinous pleural fluid and a bronchoscopy that showed grossly edematous bronchi. Both pleural fluid and bronchial biopsy were positive for B. anthracis by IHC stain.
On October 17, a 51-year-old woman developed a large pimple on her forehead with erythema and swelling. On October 18, the lesion enlarged, was slightly painful, nonpruritic, and drained a small amount of yellowish fluid. She sought medical care, cervical and preauricular lymphadenopathy was noted on physical examination, and she was treated with ciprofloxacin. The lesion progressed and ulcerated. On October 22, she presented to an emergency department and was admitted with a diagnosis of cellulitis. On admission, she was afebrile with normal vital signs and had a swollen right face and eyelid and enlarged right anterior cervical nodes. Intravenous ciprofloxacin for cutaneous anthrax was started. On October 24, the ulcer was biopsied and debrided. Biopsy specimens were positive for B. anthracis by PCR and IHC. The patient improved and was discharged on October 27 on oral ciprofloxacin. The patient worked as a bookkeeper and reported receiving no unusual or powdercontaining mail at home or work. She had made no visits to any post offices in several months.
# District of Columbia
To date, investigations in the District of Columbia, Maryland, and Virginia have confirmed inhalational anthrax in four persons who worked at one postal facility in the District of Columbia. An additional case of inhalational anthrax has been confirmed in a 59-year-old postal worker in a U.S. State Department mail sorting facility that receives mail from the District of Columbia postal facility associated with the previous four cases. The patient presented to an emergency department on October 24 with a temperature of 100.8°F (38°C), sweats, myalgia, chest discomfort, mild cough, nausea, vomiting, diarrhea, and abdominal pain. A chest radiograph initially was interpreted as normal but on further review indicated mediastinal widening. A CT scan showed mediastinal lymphadenopathy, hemorrhagic mediastinitis, small bilateral pleural effusions, and a small pericardial effusion. Blood cultures grew B. anthracis. The patient is receiving ciprofloxacin, rifampin, and penicillin.
# Florida
To date, the investigation in Florida has identified two confirmed inhalational cases. No new cases have been identified since the last report. 1
# Clinical Presentation of Inhalational and Cutaneous Cases
# Inhalational anthrax
To date, CDC has identified 10 patients with confirmed or suspected inhalational anthrax associated with bioterrorism. All but the most recent patient were postal workers (six), mail handlers or sorters (two), or a journalist, all of whom were known or believed to have processed, handled, or received letters containing B. anthracis spores. The hospital employee with inhalational anthrax did not process mail but might have carried mail to other parts of the facility. Preliminary environmental testing of the patient's work area and home was negative for B. anthracis. The investigation is ongoing.
The median age of the 10 patients with inhalational anthrax was 56 years (range: 43-73 years); seven were men. The median incubation period from the time of exposure to onset of symptoms, when known (seven patients), was 7 days (range: 5-11 days).
The initial illness in these patients was characterized by fever (nine) and/or sweats/chills (six) (Figure 2). Severe fatigue or malaise was present in eight and minimal or nonproductive cough in nine, including one with blood-tinged sputum. Eight patients reported chest discomfort or pleuritic pain. Abdominal pain or nausea or vomiting occurred in five, and five reported chest heaviness. Other symptoms included shortness of breath (seven), headache (five), myalgias (four), and sore throat (two).
On initial presentation, total WBC count was normal or slightly elevated (7.5-13.3 × 10³/mm³); however, elevation in the percentage of neutrophils or band forms was frequently noted. None of the patients had a low WBC count or lymphocytosis when initially evaluated. Chest radiograph was abnormal in all patients, but in two an initial reading was interpreted as within normal limits. Mediastinal changes including mediastinal widening, paratracheal fullness, hilar fullness, and mediastinal lymphadenopathy were noted in all eight patients who had CT scans. Mediastinal widening may be subtle, and careful review of the chest radiograph by a radiologist may be necessary. Pleural effusions were present in seven patients and were a feature of the two patients who did not have mediastinal changes on chest radiograph or did not have a CT scan. Pleural effusions often were large and hemorrhagic, reaccumulated, and required repeated thoracentesis or chest tubes. Pulmonary infiltrates were observed in four patients and were multilobar in three. Blood cultures grew B. anthracis in seven patients and in all who had not received antimicrobials. Diagnosis in the patients with negative cultures was confirmed by bronchial or pleural biopsy and specific IHC staining, by PCR of material from a sterile site, or by a fourfold rise in IgG to the protective antigen.
To date, six of 10 patients with inhalational anthrax have survived. Among those whose condition was recognized early, all remain alive and two have been discharged from the hospital. Prompt recognition of the early features of inhalational anthrax is important in settings of known or suspected exposure.
# Cutaneous anthrax
Eleven patients with cutaneous anthrax have been identified in the current outbreak. Patients with cutaneous anthrax were mail handlers or sorters (four), employees of or visitors to media companies (six), and one bookkeeper. The mean incubation period for cutaneous anthrax was 5 days (range: 1-10 days) based on estimates from the postmark of letters and assumptions of dates of exposures with known positive letters or suspect letters (Figure 3).
Lesions occurred on the forearm, neck, chest, and fingers (two). Lesions were painless but were accompanied by a tingling sensation or pruritus. Diagnosis was established by biopsy or culture.

[Figure: algorithm for evaluating suspected cutaneous anthrax. If cultures are negative and the papule does not progress to an eschar, cutaneous anthrax is unlikely; if cultures are positive or the lesion progresses to an eschar, antimicrobial therapy should be continued.]

Ongoing investigations include identifying factors that promote the aerosolization of B. anthracis in mail-handling facilities and assessing the risk for anthrax in environments contaminated with B. anthracis spores. Safe levels of B. anthracis spore contamination in occupational settings must be defined to determine the need for clean-up of contaminated facilities. The current antimicrobial prophylaxis recommendations address the prevention of inhalational anthrax, but CDC also is evaluating measures to prevent cutaneous anthrax.
Postexposure prophylaxis with a recommended antimicrobial agent for the prescribed period of time can prevent inhalational anthrax. In the case of a known contaminated letter sent to the office of a U.S. Senator, antimicrobial prophylaxis was administered to persons from the area of exposure and first responders to the incident. 1 To date, there have been no cases of anthrax, even among those who had the greatest exposure. Antimicrobial prophylaxis had been recommended for the U.S. State Department mail handler with anthrax, but the worker had not started treatment before the onset of illness. Public health response must include prompt initiation of prophylaxis for exposed persons and systems to promote adherence to a full 60-day regimen.
Previous guidelines recommended ciprofloxacin for antimicrobial prophylaxis until antimicrobial susceptibility test data were available. 3 Isolates involved in the current bioterrorism attacks have been susceptible to ciprofloxacin, doxycycline, and several other antimicrobial agents. Considerations for choosing an antimicrobial agent include effectiveness, resistance, side effects, and cost. No evidence demonstrates that ciprofloxacin is more or less effective than doxycycline for antimicrobial prophylaxis against B. anthracis. Widespread use of any antimicrobial will promote resistance. Many common pathogens are already resistant to tetracyclines such as doxycycline. However, fluoroquinolone resistance is not yet common in these same organisms. To preserve the effectiveness of fluoroquinolones against other infections, use of doxycycline for prevention of B. anthracis infection among populations at risk may be preferable. However, the selection of the antimicrobial agent for an individual patient should be based on side-effect profiles, history of reactions, and the clinical setting.
CDC and state and local public health agencies continue to mobilize epidemiologic, laboratory, and other staff to identify and investigate acts of bioterrorism. Cases of bioterrorism-associated anthrax continue to occur, and new risk populations may be identified. Until the causes of these acts are removed, public health authorities and clinicians should remain alert for cases of anthrax.

If the B. anthracis strain has been shown to be penicillin-sensitive, prophylactic therapy with amoxicillin, 500 mg three times a day for 60 days, may be considered. Isolates of B. anthracis implicated in the current bioterrorist attacks are susceptible to penicillin in laboratory tests but may contain penicillinase activity. 2 Penicillins are not recommended for treatment of anthrax, where such penicillinase activity may decrease their effectiveness. However, penicillins are likely to be effective for preventing anthrax, a setting where relatively few organisms are present. Doxycycline should be used with caution in asymptomatic pregnant women and only when there are contraindications to the use of other appropriate antimicrobial drugs.
Pregnant women are likely to be among the increasing number of persons receiving antimicrobial prophylaxis for exposure to B. anthracis. Clinicians, public health officials, and women who are candidates for treatment should weigh the possible risks and benefits to the mother and fetus when choosing an antimicrobial for postexposure anthrax prophylaxis. Women who become pregnant while taking antimicrobial prophylaxis should continue the medication and consult a health-care provider or public health official to discuss these issues.
No formal clinical studies of ciprofloxacin have been performed during pregnancy. Based on limited human information, ciprofloxacin use during pregnancy is unlikely to be associated with a high risk for structural malformations in fetal development. Data on ciprofloxacin use during pregnancy from the Teratogen Information System indicate that therapeutic doses during pregnancy are unlikely to pose a substantial teratogenic risk, but data are insufficient to determine that there is no risk. 1 Doxycycline is a tetracycline antimicrobial. Potential dangers of tetracyclines to fetal development include risk for dental staining of the primary teeth and concern about possible depressed bone growth and defective dental enamel. Rarely, hepatic necrosis has been reported in pregnant women using tetracyclines. Penicillins generally are considered safe for use during pregnancy and are not associated with an increased risk for fetal malformation. Pregnant women should be advised that congenital malformations occur in approximately 2%-3% of births, even in the absence of known teratogenic exposure.
Additional information about the treatment of anthrax infection is available at /preview/mmwrhtml/mm5042a1.htm.
# Interim Recommendations for Protecting Workers From Exposure to Bacillus anthracis in Work Sites in Which Mail Is Handled or Processed

MMWR. 2001;50:961

CDC has developed interim recommendations to assist personnel responsible for occupational health and safety in developing a comprehensive program to reduce potential cutaneous or inhalational exposures to Bacillus anthracis spores among workers in work sites in which mail is handled or processed. Such work sites include post offices, mail distribution/handling centers, bulk mail centers, air mail facilities, priority mail processing centers, public and private mail rooms, and other settings in which workers are responsible for handling and processing mail. The recommendations are based on the limited information available on methods to avoid infection and on the effectiveness of various prevention strategies. These recommendations will be updated as new information becomes available.
The recommendations are divided into the following hierarchical categories describing measures that should be implemented in distribution/handling centers to prevent potential exposures to B. anthracis spores:
- Engineering controls to prevent or capture aerosolized spores
- Administrative controls to limit the number of persons potentially exposed to spores
- Housekeeping controls to further reduce the spread of spores
- Personal protective equipment for workers to prevent cutaneous and inhalational exposure to spores

These control measures should be selected on the basis of an initial work site evaluation that focuses on determining which processes, operations, jobs, or tasks would be most likely to result in an exposure if a contaminated envelope or package enters the work site. The complete interim recommenda-
"id": "bac02ffa8ce9af0258015315f8d678f4e6aed543",
"source": "cdc",
"title": "None",
"url": "None"
} |
# I. Introduction
Multidrug-resistant organisms (MDROs), including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant enterococci (VRE) and certain gram-negative bacilli (GNB) have important infection control implications that either have not been addressed or received only limited consideration in previous isolation guidelines. Increasing experience with these organisms is improving understanding of the routes of transmission and effective preventive measures. Although transmission of MDROs is most frequently documented in acute care facilities, all healthcare settings are affected by the emergence and transmission of antimicrobial-resistant microbes. The severity and extent of disease caused by these pathogens varies by the population(s) affected and by the institution(s) in which they are found. Institutions, in turn, vary widely in physical and functional characteristics, ranging from long-term care facilities (LTCF) to specialty units (e.g., intensive care units [ICUs], burn units, neonatal ICUs [NICUs]) in tertiary care facilities. Because of this, the approaches to prevention and control of these pathogens need to be tailored to the specific needs of each population and individual institution. The prevention and control of MDROs is a national priority -one that requires that all healthcare facilities and agencies assume responsibility (1,2). The following discussion and recommendations are provided to guide the implementation of strategies and practices to prevent the transmission of MRSA, VRE, and other MDROs. The administration of healthcare organizations and institutions should ensure that appropriate strategies are fully implemented, regularly evaluated for effectiveness, and adjusted such that there is a consistent decrease in the incidence of targeted MDROs.
Successful prevention and control of MDROs requires administrative and scientific leadership and a financial and human resource commitment (3)(4)(5). Resources must be made available for infection prevention and control, including expert consultation, laboratory support, adherence monitoring, and data analysis. Infection prevention and control professionals have found that healthcare personnel (HCP) are more receptive and adherent to the recommended control measures when organizational leaders participate in efforts to reduce MDRO transmission (3).
# II. Background
MDRO definition. For epidemiologic purposes, MDROs are defined as microorganisms, predominantly bacteria, that are resistant to one or more classes of antimicrobial agents (1). Although the names of certain MDROs describe resistance to only one agent (e.g., MRSA, VRE), these pathogens are frequently resistant to most available antimicrobial agents.
These highly resistant organisms deserve special attention in healthcare facilities (2). In addition to MRSA and VRE, certain GNB, including those producing extended spectrum beta-lactamases (ESBLs) and others that are resistant to multiple classes of antimicrobial agents, are of particular concern. 1 In addition to Escherichia coli and Klebsiella pneumoniae, these include strains of Acinetobacter baumannii resistant to all antimicrobial agents, or all except imipenem (6)(7)(8)(9)(10)(11)(12), and organisms such as Stenotrophomonas maltophilia (12)(13)(14), Burkholderia cepacia (15,16), and Ralstonia pickettii (17) that are intrinsically resistant to the broadest-spectrum antimicrobial agents. In some residential settings (e.g., LTCFs), it is important to control multidrug-resistant S. pneumoniae (MDRSP) that are resistant to penicillin and other broad-spectrum agents such as macrolides and fluoroquinolones (18,19).

Clinical importance of MDROs. In most instances, MDRO infections have clinical manifestations that are similar to infections caused by susceptible pathogens. However, options for treating patients with these infections are often extremely limited. For example, until recently, only vancomycin provided effective therapy for potentially life-threatening MRSA infections, and during the 1990s there were virtually no antimicrobial agents to treat infections caused by VRE. Although antimicrobials are now available for treatment of MRSA and VRE infections, resistance to each new agent has already emerged in clinical isolates (31)(32)(33)(34)(35)(36)(37). Similarly, therapeutic options are limited for ESBL-producing isolates of gram-negative bacilli, strains of A. baumannii resistant to all antimicrobial agents except imipenem (8)(9)(10)(11)(38), and intrinsically resistant Stenotrophomonas sp. (12)(13)(14)(39). These limitations may influence antibiotic usage patterns in ways that suppress normal flora and create a favorable environment for development of colonization when exposed to potential MDR pathogens (i.e., selective advantage) (40).

1 Multidrug-resistant strains of M. tuberculosis are not addressed in this document because of the markedly different patterns of transmission and spread of the pathogen and the very different control interventions that are needed for prevention of M. tuberculosis infection. Current recommendations for prevention and control of tuberculosis can be found in Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Settings, 2005.
Increased lengths of stay, costs, and mortality also have been associated with MDROs (41-46). Two studies documented increased mortality, hospital lengths of stay, and hospital charges associated with multidrug-resistant gram-negative bacilli (MDR-GNBs), including an NICU outbreak of ESBL-producing Klebsiella pneumoniae (47) and the emergence of third-generation cephalosporin resistance in Enterobacter spp. in hospitalized adults (48).
Vancomycin resistance has been reported to be an independent predictor of death from enterococcal bacteremia (44,49-53). Furthermore, VRE was associated with increased mortality, length of hospital stay, admission to the ICU, surgical procedures, and costs when VRE patients were compared with a matched hospital population (54).
However, MRSA may behave differently from other MDROs. When patients with MRSA have been compared to patients with methicillin-susceptible S. aureus (MSSA), MRSA-colonized patients more frequently develop symptomatic infections (55,56). Furthermore, higher case fatality rates have been observed for certain MRSA infections, including bacteremia (57-62), poststernotomy mediastinitis (63), and surgical site infections (64). These outcomes may be a result of delays in the administration of vancomycin, the relative decrease in the bactericidal activity of vancomycin (65), or persistent bacteremia associated with intrinsic characteristics of certain MRSA strains (66). Mortality may be increased further by S. aureus with reduced vancomycin susceptibility (VISA) (26,67). Also, some studies have reported an association between MRSA infections and increased length of stay and healthcare costs (46,61,62), while others have not (64). Finally, some hospitals have observed an increase in the overall occurrence of staphylococcal infections following the introduction of MRSA into a hospital or special-care unit (68,69).
# III. Epidemiology of MDROs
Trends. Prevalence of MDROs varies temporally, geographically, and by healthcare setting (70,71). For example, VRE emerged in the eastern United States in the early 1990s but did not appear in the western United States until several years later, and MDRSP prevalence varies by state (72). The type and level of care also influence the prevalence of MDROs. ICUs, especially those at tertiary care facilities, may have a higher prevalence of MDRO infections than non-ICU settings (73,74). Antimicrobial resistance rates are also strongly correlated with hospital size, tertiary-level care, and facility type (e.g., LTCF) (75,76). The frequency of clinical infection caused by these pathogens is low in LTCFs (77,78). Nonetheless, MDRO infections in LTCFs can cause serious disease and mortality, and colonized or infected LTCF residents may serve as reservoirs and vehicles for MDRO introduction into acute care facilities (78-88). Another example of population differences in MDRO prevalence is the pediatric population. Point-prevalence surveys conducted by the Pediatric Prevention Network (PPN) in eight U.S. PICUs and seven U.S. NICUs in 2000 found that ≤4% of patients were colonized with MRSA or VRE, compared with 10%-24% colonized with ceftazidime- or aminoglycoside-resistant gram-negative bacilli; <3% were colonized with ESBL-producing gram-negative bacilli. Despite some evidence that MDRO burden is greatest in adult hospital patients, MDROs require similar control efforts in pediatric populations as well (89).
During the last several decades, the prevalence of MDROs in U.S. hospitals and medical centers has increased steadily (90,91). MRSA was first isolated in the United States in 1968. By the early 1990s, MRSA accounted for 20%-25% of Staphylococcus aureus isolates from hospitalized patients (92). In 1999, MRSA accounted for >50% of S. aureus isolates from patients in ICUs in the National Nosocomial Infection Surveillance (NNIS) system; in 2003, 59.5% of S. aureus isolates in NNIS ICUs were MRSA (93). A similar rise in prevalence has occurred with VRE (94). From 1990 to 1997, the prevalence of VRE in enterococcal isolates from hospitalized patients increased from <1% to approximately 15% (95). VRE accounted for almost 25% of enterococcus isolates in NNIS ICUs in 1999 (94), and 28.5% in 2003 (93). GNB producing ESBLs or resistant to fluoroquinolones, carbapenems, and aminoglycosides also have increased in prevalence. For example, in 1997, the SENTRY Antimicrobial Surveillance Program found that among K. pneumoniae strains isolated in the United States, resistance rates to ceftazidime and other third-generation cephalosporins were 6.6%, 9.7%, 5.4%, and 3.6% for bloodstream, pneumonia, wound, and urinary tract infections, respectively (95). In 2003, 20.6% of all K. pneumoniae isolates from NNIS ICUs were resistant to these drugs (93). Similarly, between 1999 and 2003, Pseudomonas aeruginosa resistance to fluoroquinolone antibiotics increased from 23% to 29.5% in NNIS ICUs (74). Also, a 3-month survey of 15 Brooklyn hospitals in 1999 found that 53% of A. baumannii strains exhibited resistance to carbapenems and 24% of P. aeruginosa strains were resistant to imipenem (10). During 1994-2000, a national review of ICU patients in 43 states found that overall susceptibility to ciprofloxacin decreased from 86% to 76%, a trend temporally associated with increased use of fluoroquinolones in the United States (96).
Lastly, an analysis of temporal trends of antimicrobial resistance in non-ICU patients in 23 U.S. hospitals during 1996-1997 and 1998-1999 (97) found significant increases in the prevalence of resistant isolates, including MRSA, ciprofloxacin-resistant P. aeruginosa, and ciprofloxacin- or ofloxacin-resistant E. coli. Several factors may have contributed to these increases: selective pressure exerted by exposure to antimicrobial agents, particularly fluoroquinolones, outside of the ICU and/or in the community (7,96,98); increasing rates of community-associated MRSA colonization and infection (99,100); inadequate adherence to infection control practices; or a combination of these factors.
# Important concepts in transmission.
Once MDROs are introduced into a healthcare setting, transmission and persistence of the resistant strain are determined by the availability of vulnerable patients, selective pressure exerted by antimicrobial use, increased potential for transmission from larger numbers of colonized or infected patients ("colonization pressure") (101,102), and the impact of implementation of and adherence to prevention efforts. Patients vulnerable to colonization and infection include those with severe disease, especially those with compromised host defenses from underlying medical conditions; recent surgery; or indwelling medical devices (e.g., urinary catheters or endotracheal tubes) (103,104). Hospitalized patients, especially ICU patients, tend to have more risk factors than non-hospitalized patients and have the highest infection rates. For example, the risk that an ICU patient will acquire VRE increases significantly once the proportion of ICU patients colonized with VRE exceeds 50% (101) or the number of days of exposure to a VRE-positive patient exceeds 15 (105). A similar effect of colonization pressure has been demonstrated for MRSA in a medical ICU (102). Increasing numbers of infections with MDROs also have been reported in non-ICU areas of hospitals (97).
There is ample epidemiologic evidence to suggest that MDROs are carried from one person to another via the hands of HCP (106-109). Hands are easily contaminated during the process of care-giving or from contact with environmental surfaces in close proximity to the patient (110-113). The latter is especially important when patients have diarrhea and the reservoir of the MDRO is the gastrointestinal tract (114-117). Without adherence to published recommendations for hand hygiene and glove use (111), HCP are more likely to transmit MDROs to patients. Thus, strategies to increase and monitor adherence are important components of MDRO control programs (106,118).
Opportunities for transmission of MDROs beyond the acute care hospital result from patients receiving care at multiple healthcare facilities and moving between acute-care, ambulatory and/or chronic care, and LTC environments. System-wide surveillance at LDS Hospital in Salt Lake City, Utah, monitored patients identified as being infected or colonized with MRSA or VRE and found that those patients subsequently received inpatient or outpatient care at as many as 62 different healthcare facilities in that system during a 5-year span (119).
# Role of colonized HCP in MDRO transmission.

Rarely, HCP may introduce an MDRO into a patient care unit (120-123). Occasionally, HCP can become persistently colonized with an MDRO, but these HCP have a limited role in transmission unless other factors are present. Additional factors that can facilitate transmission include chronic sinusitis (120), upper respiratory infection (123), and dermatitis (124).

# Implications of community-associated MRSA (CA-MRSA).

The emergence of MRSA infections among persons in the community who lack prior healthcare exposure has important implications for MRSA control in healthcare settings (125-128).
Historically, genetic analyses of MRSA isolated from patients in hospitals worldwide revealed that a relatively small number of MRSA strains have unique qualities that facilitate their transmission from patient to patient within healthcare facilities over wide geographic areas, explaining the dramatic increases in HAIs caused by MRSA in the 1980s and early 1990s (129). To date, most MRSA strains isolated from patients with CA-MRSA infections have been microbiologically distinct from those endemic in healthcare settings, suggesting that some of these strains may have arisen de novo in the community via acquisition of methicillin resistance genes by established methicillin-susceptible S. aureus (MSSA) strains (130-132). Two pulsed-field types, termed USA300 and USA400 according to a typing scheme established at CDC, have accounted for the majority of CA-MRSA infections characterized in the United States, whereas pulsed-field types USA100 and USA200 are the predominant genotypes endemic in healthcare settings (133). USA300 and USA400 genotypes almost always carry type IV of the staphylococcal chromosomal cassette (SCC) mec, the mobile genetic element that carries the mecA methicillin-resistance gene (133,134). This genetic cassette is smaller than types I through III, the types typically found in healthcare-associated MRSA strains, and is hypothesized to be more easily transferable between S. aureus strains.
CA-MRSA infection presents most commonly as relatively minor skin and soft tissue infection, but severe invasive disease, including necrotizing pneumonia, necrotizing fasciitis, severe osteomyelitis, and a sepsis syndrome with increased mortality, has also been described in children and adults (134-136).
Transmission within hospitals of MRSA strains first described in the community (e.g., USA300 and USA400) is being reported with increasing frequency (90,137-140). Infections with these strains have most commonly presented as skin disease in community settings. However, intrinsic virulence characteristics of the organisms can result in clinical manifestations similar to, or potentially more severe than, traditional healthcare-associated MRSA infections among hospitalized patients. The prevalence of MRSA colonization and infection in the surrounding community may therefore affect the selection of strategies for MRSA control in healthcare settings.
# IV. MDRO Prevention and Control
# Prevention of infections.

Preventing infections will reduce the burden of MDROs in healthcare settings. Prevention of antimicrobial resistance depends on appropriate clinical practices that should be incorporated into all routine patient care.
These include optimal management of vascular and urinary catheters, prevention of lower respiratory tract infection in intubated patients, accurate diagnosis of infectious etiologies, and judicious antimicrobial selection and utilization. Guidance for these preventive practices includes the Campaign to Reduce Antimicrobial Resistance in Healthcare Settings, a multifaceted, evidence-based approach with four parallel strategies: infection prevention; accurate and prompt diagnosis and treatment; prudent use of antimicrobials; and prevention of transmission. Campaign materials are available for acute care hospitals, surgical settings, dialysis units, LTCFs, and pediatric acute care units.
To reduce rates of central-venous-line-associated bloodstream infections (CVL-BSIs) and ventilator-associated pneumonia (VAP), bundled evidence-based clinical practices have been implemented in many U.S. healthcare facilities (118,141-144). One report demonstrated a sustained reduction in CVL-BSI rates with this approach (145). Although the specific effect on MDRO infection and colonization rates has not been reported, it is logical that decreasing these and other healthcare-associated infections will in turn reduce antimicrobial use and decrease opportunities for emergence and transmission of MDROs. Published examples of successful MDRO control include the following:

3. Eradication of endemic MRSA infections from two NICUs. The first NICU combined implementation of ASC, Contact Precautions, use of triple dye on the umbilical cord, and systems changes to improve surveillance and adherence to recommended practices and to reduce overcrowding (152). The second NICU used ASC and Contact Precautions; surgical masks were included in the barriers used for Contact Precautions (153).

5. Control of an outbreak of VRE in a NICU over a 3-year period with implementation of ASC, other infection control measures such as use of a waterless hand disinfectant, and mandatory in-service education (155).

6. Eradication of MDR strains of A. baumannii from a burn unit over a 16-month period with implementation of strategies to improve adherence to hand hygiene, isolation, environmental cleaning, and temporary unit closure (38).

7. In addition, more than 100 reports published during 1982-2005 support the efficacy of combinations of various control interventions to reduce the burden of MRSA, VRE, and MDR-GNBs (Tables 1 and 2). Case-rate reduction or pathogen eradication was reported in a majority of studies.
8. VRE was eradicated in seven special-care units (154,156-160), two hospitals (161,162), and one LTCF (163).

9. MRSA was eradicated from nine special-care units (89,152,153,164-169), two hospitals (170), one LTCF (167), and one Finnish district (171). Furthermore, four MRSA reports described continuing success in sustaining low endemic MDRO rates for over 5 years (68,166,172,173).
10. An MDR-GNB was eradicated from 13 special-care units (8,9,38,174-180) and two hospitals (11,181).

These success stories testify to the importance of having dedicated and knowledgeable teams of healthcare professionals who are willing to persist for years, if necessary, to control MDROs. Eradication and control of MDROs, such as those reported, frequently required periodic reassessment and the addition of new and more stringent interventions over time (tiered strategy). For example, interventions were added in a stepwise fashion during a 3-year effort that eventually eradicated MRSA from an NICU (152). A series of interventions was adopted throughout the course of a year to eradicate VRE from a burn unit (154). Similarly, eradication of carbapenem-resistant strains of A. baumannii from a hospital required multiple and progressively more intense interventions over several years (11).
The studies reporting successful MDRO control employed a median of 7 to 8 different interventions concurrently or sequentially (Table 1). These figures may underestimate the actual number of control measures used, because the authors may have considered their earliest efforts routine (e.g., added emphasis on handwashing) and did not count them as interventions, and because some "single measures" are, in fact, a complex combination of several interventions. The use of multiple concurrent control measures in these reports underscores the need for a comprehensive approach to controlling MDROs.
Several factors affect the ability to generalize the results of the various studies reviewed, including differences in definition, study design, endpoints and variables measured, and period of follow-up. Two-thirds of the reports cited in Tables 1 and 2 involved perceived outbreaks, and one-third described efforts to reduce endemic transmission. Few reports described preemptive efforts or prospective studies to control MDROs before they had reached high levels within a unit or facility.
Because of these and other factors, it has not been possible to determine the effectiveness of individual interventions, or of a specific combination of interventions, that would be appropriate for all healthcare facilities to implement in order to control their target MDROs.

1. Administrative support. In several reports, administrative support and involvement were important for the successful control of the target MDRO (3,152,182-185), and authorities in infection control have strongly recommended such support (2,106,107,186). There are several examples of MDRO control interventions that require administrative commitment of fiscal and human resources. One is the use of ASC (8,38,68,107,114,151,152,167,168,183,184,187-192). Other interventions that require administrative support include:
1. implementing system changes to ensure prompt and effective communication, e.g., computer alerts to identify patients previously known to be colonized/infected with MDROs (184,189,193,194) (see the sketch following this list);
2. providing the necessary number and appropriate placement of hand washing sinks and alcohol-containing hand rub dispensers in the facility (106,195);

3. maintaining staffing levels appropriate to the intensity of care required (152,196-202); and

4. enforcing adherence to recommended infection control practices (e.g., hand hygiene, Standard and Contact Precautions) for MDRO control.

Other measures that have been associated with a positive impact on prevention efforts and that require administrative support are direct observation with feedback to HCP on adherence to recommended precautions and keeping HCP informed about changes in transmission rates (3,152,182,203-205). A "How-to guide" for implementing change in ICUs, including analysis of structure, process, and outcomes when designing interventions, can assist in identification of needed administrative interventions (195).
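As an illustration of the computer alerts mentioned in item 1 of the list above, the following is a minimal sketch in Python. The registry layout and the helper names (mdro_registry, alert_on_admission) are hypothetical, invented for this example; a production system would query the laboratory or infection-control database rather than an in-memory dictionary.

```python
# Minimal sketch of an admission-time MDRO alert (illustrative only).
from datetime import date

# Hypothetical registry: medical record number -> prior MDRO findings.
mdro_registry = {
    "MRN001234": [{"organism": "MRSA", "status": "colonized", "reported": date(2005, 11, 2)}],
    "MRN005678": [{"organism": "VRE", "status": "infected", "reported": date(2006, 3, 17)}],
}

def check_admission(mrn):
    """Return any prior MDRO records for the admitted patient."""
    return mdro_registry.get(mrn, [])

def alert_on_admission(mrn, unit):
    """Notify infection-control staff if the patient is a known MDRO carrier."""
    for record in check_admission(mrn):
        print(
            f"ALERT [{unit}]: patient {mrn} previously {record['status']} "
            f"with {record['organism']} (reported {record['reported']}). "
            "Consider Contact Precautions pending assessment."
        )

alert_on_admission("MRN001234", "ICU-2")  # fires an alert
alert_on_admission("MRN999999", "ICU-2")  # no prior record; prints nothing
```

The point of the sketch is timing: the lookup fires at admission or transfer, before routine culture results would otherwise identify the patient, which is what gives such alerts their preventive value.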
Lastly, participation in existing, or the creation of new, city-wide, state-wide, regional or national coalitions, to combat emerging or growing MDRO problems is an effective strategy that requires administrative support (146,151,167,188,206,207).
2. Education. Facility-wide, unit-targeted, and informal educational interventions were included in several successful studies (3,189,193,208-211). The focus of the interventions was to encourage a behavior change through improved understanding of the problem MDRO that the facility was trying to control. Whether the desired change involved hand hygiene, antimicrobial prescribing patterns, or other outcomes, enhancing understanding and creating a culture that supported and promoted the desired behavior were viewed as essential to the success of the intervention. Educational campaigns to enhance adherence to hand hygiene practices, in conjunction with other control measures, have been associated temporally with decreases in MDRO transmission in various healthcare settings (3,106,163).
# Judicious use of antimicrobial agents.
While a comprehensive review of antimicrobial stewardship is beyond the scope of this guideline, recommendations for control of MDROs must include attention to judicious antimicrobial use. A temporal association between formulary changes and decreased occurrence of a target MDRO was found in several studies, especially those that focused on MDR-GNBs (98,177,209,212-218). Occurrence of C. difficile-associated disease has also been associated with changes in antimicrobial use (219). Although some MRSA and VRE control efforts have attempted to limit antimicrobial use, the relative importance of this measure for controlling these MDROs remains unclear (193,220). Limiting antimicrobial use alone may fail to control resistance because of a combination of factors, including:

1. the relative effect of antimicrobials on providing initial selective pressure, compared to perpetuating resistance once it has emerged;

2. inadequate limits on usage; or

3. insufficient time to observe the impact of this intervention.

With the intent of addressing the second and third factors in the study design, one study demonstrated a decrease in the prevalence of VRE associated with a formulary switch from ticarcillin-clavulanate to piperacillin-tazobactam (221).
The CDC Campaign to Prevent Antimicrobial Resistance, launched in 2002, provides evidence-based principles for judicious use of antimicrobials and tools for implementation (222). This effort targets all healthcare settings and focuses on effective antimicrobial treatment of infections, use of narrow-spectrum agents, treating infections rather than contaminants, avoiding excessive duration of therapy, and restricting use of broad-spectrum or more potent antimicrobials to treatment of serious infections when the pathogen is not known or when other effective agents are unavailable.
Achieving these objectives would likely diminish the selective pressure that favors proliferation of MDROs. Strategies for influencing antimicrobial prescribing patterns within healthcare facilities include education, formulary restriction, and prior-approval programs, including expert consultation (223-226); computer-assisted management programs (227-229); and active efforts to remove redundant antimicrobial combinations (230). A systematic review of controlled studies identified several successful practices, including social marketing (i.e., consumer education), practice guidelines, authorization systems, formulary restriction, mandatory consultation, and peer review and feedback. It further suggested that online systems that provide clinical information, structured order entry, and decision support are promising strategies (231). These changes are best accomplished through an organizational, multidisciplinary antimicrobial management program (232).
# Surveillance for MDROs isolated from routine clinical cultures.
Antibiograms. The simplest form of MDRO surveillance is monitoring of clinical microbiology isolates resulting from tests ordered as part of routine clinical care. This method is particularly useful for detecting the emergence of new MDROs not previously detected, either within an individual healthcare facility or community-wide. In addition, this information can be used to prepare facility- or unit-specific summary antimicrobial susceptibility reports that describe pathogen-specific prevalence of resistance among clinical isolates. Such reports may be useful for monitoring changes in known resistance patterns that might signal the emergence or transmission of MDROs, and also for providing clinicians with information to guide antimicrobial prescribing practices (233-235).

# MDRO Incidence Based on Clinical Culture Results.

Results of clinical cultures can also be used to calculate measures of MDRO incidence (205,236,237). Such measures may be useful for monitoring MDRO trends and assessing the impact of prevention programs, although they have limitations. Because they are based solely on positive culture results without accompanying clinical information, they do not distinguish colonization from infection and may not fully demonstrate the burden of MDRO-associated disease. Furthermore, these measures do not precisely measure acquisition of MDRO colonization in a given population or location. Isolating an MDRO from a clinical culture obtained from a patient several days after admission to a given unit or facility does not establish that the patient acquired colonization in that unit. On the other hand, patients who acquire MDRO colonization may remain undetected by clinical cultures (107). Despite these limitations, incidence measures based on clinical culture results may be highly correlated with actual MDRO transmission rates derived from information using ASC, as demonstrated in a recent multicenter study (237). These results suggest that incidence measures based on clinical cultures alone might be useful surrogates for monitoring changes in MDRO transmission rates.
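To make the two surveillance measures just described concrete, the following is a minimal Python sketch that computes a unit-specific antibiogram (percent of isolates susceptible, by organism and agent) and a crude incidence measure (new MDRO isolates per 1,000 patient-days). The record layout and all counts are invented for the example and do not come from the guideline.

```python
# Illustrative antibiogram and incidence calculations from isolate-level records.
from collections import defaultdict

# Hypothetical isolate records: (unit, organism, agent, susceptible?).
isolates = [
    ("MICU", "S. aureus", "oxacillin", False),
    ("MICU", "S. aureus", "oxacillin", True),
    ("MICU", "S. aureus", "oxacillin", False),
    ("MICU", "K. pneumoniae", "ceftazidime", True),
    ("MICU", "K. pneumoniae", "ceftazidime", False),
]

def antibiogram(records):
    """Percent susceptible by (unit, organism, agent)."""
    tested = defaultdict(int)
    susceptible = defaultdict(int)
    for unit, organism, agent, is_susceptible in records:
        key = (unit, organism, agent)
        tested[key] += 1
        susceptible[key] += is_susceptible
    return {key: 100.0 * susceptible[key] / tested[key] for key in tested}

def incidence_per_1000_patient_days(new_isolates, patient_days):
    """Crude incidence: new MDRO isolates per 1,000 patient-days."""
    return 1000.0 * new_isolates / patient_days

for key, pct in antibiogram(isolates).items():
    print(key, f"{pct:.0f}% susceptible")

# e.g., 4 new MRSA isolates in a month with 1,250 patient-days:
print(f"{incidence_per_1000_patient_days(4, 1250):.2f} per 1,000 patient-days")
```

Exactly as the text cautions, nothing in such a calculation distinguishes colonization from infection or establishes where acquisition occurred; the numbers are surrogates, not direct measures of transmission.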
# MDRO Infection Rates.
Clinical cultures can also be used to identify targeted MDRO infections in certain patient populations or units (238,239). This strategy requires investigation of clinical circumstances surrounding a positive culture to distinguish colonization from infection, but it can be particularly helpful in defining the clinical impact of MDROs within a facility.
# Molecular typing of MDRO isolates.

Many investigators have used molecular typing of selected isolates to confirm clonal transmission and to enhance understanding of MDRO transmission and the effect of interventions within their facility (38,68,89,92,138,152,190,193,236,240).
# Surveillance for MDROs by detecting asymptomatic colonization
Another form of MDRO surveillance is the use of active surveillance cultures (ASC) to identify patients who are colonized with a targeted MDRO (38,107,241). This approach is based upon the observation that, for some MDROs, detection of colonization may be delayed or missed completely if culture results obtained in the course of routine clinical care are the primary means of identifying colonized patients (8,38,107,114,151,153,167,168,183,184,187,189,191-193,242-244). Several authors report having used ASC when new pathogens emerge in order to define the epidemiology of the particular agent (22,23,107,190). In addition, the authors of several reports have concluded that ASC, in combination with use of Contact Precautions for colonized patients, contributed directly to the decline or eradication of the target MDRO (38,68,107,151,153,184,217,242).
However, not all studies have reached the same conclusion. Poor control of MRSA despite use of ASC has been described (245). A recent study failed to identify cross-transmission of MRSA or MSSA in a medical ICU during a 10-week period when ASC were obtained, despite the fact that culture results were not reported to the staff (246).
The investigators suggest that the degree of cohorting and adherence to Standard Precautions might have been the important determinants of transmission prevention, rather than the use of ASC and Contact Precautions for MRSA-colonized patients.
The authors of a systematic review of the literature on the use of isolation measures to control healthcare-associated MRSA concluded that there is evidence that concerted efforts that include ASC and isolation can reduce MRSA even in endemic settings. However, the authors also noted that methodological weaknesses and inadequate reporting in published research make it difficult to rule out plausible alternative explanations for reductions in MRSA acquisition associated with these interventions, and therefore concluded that the precise contribution of active surveillance and isolation alone is difficult to assess (247).
Mathematical modeling studies have been used to estimate the impact of ASC use in the control of MDROs. One such study, evaluating interventions to decrease VRE transmission, indicated that use of ASC (versus no cultures) could potentially decrease transmission by 39%, and that with pre-emptive isolation plus ASC, transmission could be decreased by 65% (248). Another mathematical model examining the use of ASC and isolation for control of MRSA predicted that isolating colonized or infected patients on the basis of clinical culture results is unlikely to be successful at controlling MRSA, whereas use of active surveillance and isolation can lead to successful control, even in settings where MRSA is highly endemic (249). There is less literature on the use of ASC in controlling MDR-GNBs. Active surveillance cultures have been used as part of successful efforts to control MDR-GNBs in outbreak settings. The experience with ASC as part of successful control efforts in endemic settings is mixed. One study reported successful reduction of extended-spectrum beta-lactamase-producing Enterobacteriaceae over a six-year period using a multifaceted control program that included use of ASC (245). Other reports suggest that use of ASC is not necessary to control endemic MDR-GNBs (250,251).
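The published models cited above are considerably more elaborate; the sketch below is only a minimal deterministic illustration, under arbitrary assumed parameters, of the mechanism they formalize: when ASC detect colonized patients who are then isolated with reduced transmissibility, the force of infection, and with it the number of acquisitions, falls.

```python
# Minimal deterministic ward-transmission sketch (not one of the cited models;
# all parameter values are arbitrary illustrations).

def acquisitions(beta=0.15, detect=0.0, iso_effect=0.8, days=180,
                 n_patients=20, initially_colonized=2.0, discharge=0.10):
    """Cumulative acquisitions in a ward of fixed size.

    beta       - transmission rate per colonized patient per day
    detect     - fraction of colonized patients found by ASC and isolated
    iso_effect - proportional reduction in transmission from isolated patients
    discharge  - daily fraction of colonized patients discharged/replaced
    """
    colonized = initially_colonized
    total = 0.0
    for _ in range(days):
        # Detected patients transmit less, lowering the force of infection.
        effective = colonized * (1 - detect * iso_effect)
        new = beta * effective * (n_patients - colonized) / n_patients
        colonized = max(0.0, colonized + new - discharge * colonized)
        total += new
    return total

base = acquisitions(detect=0.0)       # no surveillance
with_asc = acquisitions(detect=0.9)   # 90% of carriers detected and isolated
print(f"relative reduction: {100 * (1 - with_asc / base):.0f}%")
```

Even this toy version reproduces the qualitative point of the cited models: isolation triggered by surveillance, rather than by clinical cultures alone, removes most of the otherwise undetected infectious pressure.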
More research is needed to determine the circumstances under which ASC are most beneficial (252), but their use should be considered in some settings, especially if other control measures have been ineffective. When use of ASC is incorporated into MDRO prevention programs, the following should be considered:
- The decision to use ASC as part of an infection prevention and control program requires additional support for successful implementation, including:
1. personnel to obtain the appropriate cultures,
2. microbiology laboratory personnel to process the cultures,
3. mechanism for communicating results to caregivers,
4. concurrent decisions about use of additional isolation measures triggered by a positive culture (e.g., Contact Precautions), and
5. mechanism for assuring adherence to the additional isolation measures.
- The populations targeted for ASC are not well defined and vary among published reports. Some investigators have chosen to target specific patient populations considered at high risk for MDRO colonization based on factors such as location (e.g., ICU with high MDRO rates), antibiotic exposure history, presence of underlying diseases, prolonged duration of stay, exposure to other MDRO-colonized patients, transfer from other facilities known to have a high prevalence of MDRO carriage, or a history of recent hospital or nursing home stays (107,151,253). Others have obtained cultures from all patients admitted to units experiencing high rates of colonization/infection with the MDROs of interest, unless those patients are already known to be MDRO carriers (153,184,242,254). In an effort to better define target populations for active surveillance, investigators have attempted to create prediction rules to identify subpopulations of patients at high risk for colonization on hospital admission (255,256). Decisions about which populations should be targeted for active surveillance should be made in the context of local determinations of the incidence and prevalence of MDRO colonization within the intervention facility as well as other facilities with which patients are frequently exchanged (257).
- Optimal timing and interval of ASC are not well defined. In many reports, cultures were obtained at the time of admission to the hospital or intervention unit or at the time of transfer to or from designated units (e.g., ICU) (107). In addition, the presence of other colonizing bacilli can make isolating a specific MDR-GNB a relatively labor-intensive process (38,190,241,250). Rapid methods that shorten the time to detection of MDRO colonization are becoming available (268). The impact of rapid testing on the effectiveness of active surveillance as a prevention strategy, however, has not been fully determined. Rapid identification of MRSA in one study was associated with a significant reduction in MRSA infections acquired in the medical ICU, but not the surgical ICU (265). A mathematical model characterizing MRSA transmission dynamics predicted that, in comparison to conventional culture methods, the use of rapid detection tests may decrease isolation needs in settings of low endemicity and result in more rapid reduction in prevalence in highly endemic settings (249).
- Some MDRO control reports described surveillance cultures of healthcare personnel during outbreaks, but colonized or infected healthcare personnel are rarely the source of ongoing transmission, and this strategy should be reserved for settings in which specific healthcare personnel have been epidemiologically implicated in the transmission of MDROs (38,92,152-154,188).

MDRO control efforts frequently involved changes in isolation practices, especially during outbreaks. In the majority of reports, Contact Precautions were implemented for all patients found to be colonized or infected with the target MDRO (Table 2). Some facilities also preemptively used Contact Precautions, in conjunction with ASC, for all new admissions or for all patients admitted to a specific unit, until a negative screening culture for the target MDRO was reported (30,184,273).
Contact Precautions are intended to prevent transmission of infectious agents, including epidemiologically important microorganisms, that are transmitted by direct or indirect contact with the patient or the patient's environment. A single-patient room is preferred for patients who require Contact Precautions. When a single-patient room is not available, consultation with infection control is necessary to assess the various risks associated with other patient placement options (e.g., cohorting, keeping the patient with an existing roommate). HCP caring for patients on Contact Precautions should wear a gown and gloves for all interactions that may involve contact with the patient or potentially contaminated areas in the patient's environment. Donning gown and gloves upon room entry and discarding them before exiting the patient room is done to contain pathogens, especially those that have been implicated in transmission through environmental contamination (e.g., VRE, C. difficile, noroviruses and other intestinal tract agents; RSV) (109,111,274-277).
Cohorting and other MDRO control strategies. In several reports, cohorting of patients (152,153,167,183,184,188,189,217,242), cohorting of staff (184,217,242,278), use of designated beds or units (183,184), and even unit closure (38,146,159,161,279,280) were necessary to control transmission. Some authors indicated that implementation of the latter two strategies was the turning point in their control efforts; however, these measures usually followed many other actions to prevent transmission. In one two-center study, moving MRSA-positive patients into single rooms or cohorting these patients in designated bays failed to reduce transmission in ICUs; however, adherence to recommendations for hand hygiene between patient contacts in this study was only 21% (281). Another report examined the effect of single-patient rooms on the risk of acquiring MDROs (282). Additional studies are needed to define the specific contribution of using single-patient rooms and/or cohorting to preventing transmission of MDROs.
# Duration of Contact Precautions.

The necessary duration of Contact Precautions for patients treated for infection with an MDRO, but who may continue to be colonized with the organism at one or more body sites, remains an unresolved issue. Patients may remain colonized with MDROs for prolonged periods; shedding of these organisms may be intermittent, and surveillance cultures may fail to detect their presence (84,250,283). The 1995 HICPAC guideline for preventing the transmission of VRE suggested three negative stool/perianal cultures obtained at weekly intervals as a criterion for discontinuation of Contact Precautions (274). One study found these criteria generally reliable (284). However, this and other studies have noted recurrence of VRE-positive cultures in persons who subsequently received antimicrobial therapy, and persistent or intermittent carriage of VRE for more than 1 year has been reported (284-286). Similarly, colonization with MRSA can be prolonged (287,288). Studies demonstrating initial clearance of MRSA following decolonization therapy have reported a high frequency of subsequent carriage (289,290). There is a paucity of information in the literature on when to discontinue Contact Precautions for patients colonized with an MDR-GNB, possibly because infection and colonization with these MDROs are often associated with outbreaks.
Despite the uncertainty about when to discontinue Contact Precautions, the studies offer some guidance. In the context of an outbreak, prudence would dictate that Contact Precautions be used indefinitely for all previously infected and known colonized patients. Likewise, if ASC are used to detect and isolate patients colonized with MRSA or VRE, and there is no decolonization of these patients, it is logical to assume that Contact Precautions would be used for the duration of stay in the setting where they were first implemented. In general, it seems reasonable to discontinue Contact Precautions when three or more surveillance cultures for the target MDRO are repeatedly negative over the course of a week or two in a patient who has not received antimicrobial therapy for several weeks, especially in the absence of a draining wound, profuse respiratory secretions, or evidence implicating the specific patient in ongoing transmission of the MDRO within the facility.
# Barriers used for contact with patients infected or colonized with MDROs.

Three studies evaluated the use of gloves with or without gowns for all patient contacts to prevent VRE acquisition in ICU settings (30,105,273). Two of the studies showed that use of both gloves and gowns reduced VRE transmission (30,105), while the third showed no difference in transmission based on the barriers used (273). One study in a LTCF compared the use of gloves only with gloves plus contact isolation for patients with four MDROs, including VRE and MRSA, and found no difference (86). However, patients on contact isolation were more likely to acquire MDR K. pneumoniae strains that were prevalent in the facility; the reasons for this were not specifically known. In addition to differences in outcome, differing methodologies make comparisons difficult. Specifically, HCP adherence to the recommended protocol, the influence of added precautions on the number of HCP-patient interactions, and colonization pressure were not consistently assessed.
# Impact of Contact Precautions on patient care and well-being.
There are limited data regarding the impact of Contact Precautions on patients. Two studies found that HCP, including attending physicians, were half as likely to enter the rooms of (291), or examine (292), patients on Contact Precautions. Other investigators have reported similar observations on surgical wards (293). Two studies reported that patients in private rooms and on barrier precautions for an MDRO had increased anxiety and depression scores (294,295). Another study found that patients placed on Contact Precautions for MRSA had significantly more preventable adverse events, expressed greater dissatisfaction with their treatment, and had less documented care than control patients who were not in isolation (296). Therefore, when patients are placed on Contact Precautions, efforts must be made by the healthcare team to counteract these potential adverse effects.

6. Environmental measures. The potential role of environmental reservoirs in the transmission of MDROs has been the subject of several reports (109-111, 297, 298). While environmental cultures are not routinely recommended (299), environmental cultures were used in several studies to document contamination, and they led to interventions that included the use of dedicated noncritical medical equipment (217,300), assignment of dedicated cleaning personnel to the affected patient care unit (154), and increased cleaning and disinfection of frequently touched surfaces (e.g., bedrails, charts, bedside commodes, doorknobs). A common reason given for finding environmental contamination with an MDRO was lack of adherence to facility procedures for cleaning and disinfection.
In an educational and observational intervention, which targeted a defined group of housekeeping personnel, there was a persistent decrease in the acquisition of VRE in a medical ICU (301). Therefore, monitoring for adherence to recommended environmental cleaning practices is an important determinant for success in controlling transmission of MDROs and other pathogens in the environment (274,302).
In the MDRO reports reviewed, enhanced environmental cleaning was frequently undertaken when there was evidence of environmental contamination and ongoing transmission. Rarely, control of the target MDRO required vacating a patient care unit for complete environmental cleaning and assessment (175,279).

7. Decolonization. Decolonization entails treatment of persons colonized with a specific MDRO, usually MRSA, to eradicate carriage of that organism. Although some investigators have attempted to decolonize patients harboring VRE (220), few have achieved success. However, decolonization of persons carrying MRSA in their nares has proved possible with several regimens that include topical mupirocin alone or in combination with orally administered antibiotics (e.g., rifampin in combination with trimethoprim-sulfamethoxazole or ciprofloxacin) plus the use of an antimicrobial soap for bathing (303). In one report, a 3-day regimen of baths with povidone-iodine and nasal therapy with mupirocin resulted in eradication of nasal MRSA colonization (304). These and other methods of MRSA decolonization have been thoroughly reviewed (303,305-307).
Decolonization regimens are not sufficiently effective to warrant routine use. Therefore, most healthcare facilities have limited the use of decolonization to MRSA outbreaks, or other high prevalence situations, especially those affecting special-care units. Several factors limit the utility of this control measure on a widespread basis:
1. identification of candidates for decolonization requires surveillance cultures;
2. candidates receiving decolonization treatment must receive follow-up cultures to ensure eradication; and 3. recolonization with the same strain, initial colonization with a mupirocin-resistant strain, and emergence of resistance to mupirocin during treatment can occur (289,303,(308)(309)(310).
HCP implicated in transmission of MRSA are candidates for decolonization and should be treated and culture negative before returning to direct patient care. In contrast, HCP who are colonized with MRSA, but are asymptomatic, and have not been linked epidemiologically to transmission, do not require decolonization.
# Discussion
This review demonstrates the depth of published science on the prevention and control of MDROs. Using a combination of interventions, MDROs in endemic, outbreak, and non-endemic settings have been brought under control. However, despite the volume of literature, an appropriate set of evidence-based control measures that can be universally applied in all healthcare settings has not been definitively established. This is due in part to differences in study methodology and outcome measures, including an absence of randomized, controlled trials comparing one MDRO control measure or strategy with another. Additionally, the data are largely descriptive and quasi-experimental in design (311). Few reports described preemptive efforts or prospective studies to control MDROs before they had reached high levels within a unit or facility. Furthermore, small hospitals and LTCFs are infrequently represented in the literature.
A number of questions remain and are discussed below.
# Impact on other MDROs from interventions targeted to one MDRO.

Only one report described control efforts directed at more than one MDRO, i.e., MDR-GNB and MRSA (312). Several reports have shown either decreases or increases in other pathogens with efforts to control one MDRO. For example, two reports on VRE control efforts demonstrated an increase in MRSA following the prioritization of VRE patients to private rooms and cohort beds (161). Similarly, an outbreak of Serratia marcescens was temporally associated with a concurrent, but unrelated, outbreak of MRSA in an NICU (313). In contrast, Wright and colleagues reported a decrease in MRSA and VRE acquisition in an ICU during and after their successful effort to eradicate an MDR strain of A. baumannii from the unit (210).
Colonization with multiple MDROs appears to be common (314,315). One study found that nearly 50% of residents in a skilled-care unit in a LTCF were colonized with a target MDRO and that 26% were co-colonized with >1 MDRO; a detailed analysis showed that risk factors for colonization varied by pathogen (316). One review of the literature (317) reported that the patient risk factors associated with colonization with MRSA, VRE, MDR-GNB, C. difficile, and Candida sp. were the same. This review concluded that control programs focusing on only one organism or one antimicrobial drug are unlikely to succeed, because vulnerable patients will continue to serve as a magnet for other MDROs.
Costs. Several authors have provided evidence for the cost-effectiveness of approaches that use ASC (153,191,253,318,319). However, the supportive evidence often relied on assumptions, projections, and estimated attributable costs of MDRO infections. Similar limitations apply to a study suggesting that gown use yields a cost benefit in controlling transmission of VRE in ICUs (320). To date, no studies have directly compared the benefits and costs associated with different MDRO control strategies.
# Feasibility.
The subject of feasibility, as it applies to the extrapolation of results to other healthcare settings, has not been addressed. For example, smaller hospitals and LTCFs may lack the on-site laboratory services needed to obtain ASC in a timely manner. This factor could limit the applicability of an aggressive program based on obtaining ASC and preemptively placing patients on Contact Precautions in these settings. However, with the growing problem of antimicrobial resistance, and the recognized role of all healthcare settings in the control of this problem, it is imperative that appropriate human and fiscal resources be invested to increase the feasibility of recommended control strategies in every setting.
# Factors that influence selection of MDRO control measures.
Although some common principles apply, the preceding literature review indicates that no single approach to the control of MDROs is appropriate for all healthcare facilities. Many factors influence the choice of interventions to be applied within an institution, including:
- Type and significance of problem MDROs within the institution. Many facilities have an MRSA problem while others have ESBL-producing K. pneumoniae. Some facilities have no VRE colonization or disease; others have high rates of VRE colonization without disease; and still others have ongoing VRE outbreaks. The magnitude of the problem also varies. Healthcare facilities may have very low numbers of cases (e.g., with a newly introduced strain) or may have prolonged, extensive outbreaks or colonization in the population. Between these extremes, facilities may have low or high levels of endemic colonization and variable levels of infection.
- Population and healthcare settings. The presence of high-risk patients (e.g., transplant and hematopoietic stem-cell transplant recipients) and special-care units (e.g., adult, pediatric, and neonatal ICUs; burn units; hemodialysis) will influence surveillance needs and could limit the areas of a facility targeted for MDRO control interventions. Although it appears that MDRO transmission seldom occurs in ambulatory and outpatient settings, some patient populations (e.g., hemodialysis, cystic fibrosis) and patients receiving chemotherapeutic agents are at risk for colonization and infection with MDROs. Furthermore, the emergence of VRSA within the outpatient setting (22,23,25) demonstrates that even these settings need to make MDRO prevention a priority.

- Differences of opinion on the optimal strategy to control MDROs. Published guidance on the control of MDROs reflects areas of ongoing debate on optimal control strategies. A key issue is the use of ASC in control efforts and the preemptive use of Contact Precautions pending negative surveillance culture results (107,321,322). The various guidelines currently available exhibit a spectrum of approaches, which their authors deem to be evidence-based. One guideline for control of MRSA and VRE was issued by the Society for Healthcare Epidemiology of America. Facilities should seek appropriate guidance and adopt effective measures that fit their circumstances and needs. Most studies have been in acute care settings; for non-acute care settings (e.g., LTCFs, small rural hospitals), the optimal approach is not well defined.

# Two-tiered approach for control of MDROs.

Reports describing successful control of MDRO transmission in healthcare facilities have included seven categories of interventions (Table 3). As a rule, these reports indicate that facilities confronted with an MDRO problem selected a combination of control measures, implemented them, and reassessed their impact. In some cases, new measures were added serially to further enhance control efforts. This evidence indicates that the control of MDROs is a dynamic process that requires a systematic approach tailored to the problem and healthcare setting.
The nature of this evidence gave rise to the two-tiered approach to MDRO control recommended in this guideline. This approach provides the flexibility needed to prevent and control MDRO transmission in every kind of facility addressed by this guideline. Detailed recommendations for MDRO control in all healthcare settings follow and are summarized in Table 3. Table 3, which applies to all healthcare settings, contains two tiers of activities. The first tier is the baseline level of MDRO control activities designed to ensure recognition of MDROs as a problem, involvement of healthcare administrators, and provision of safeguards for managing unidentified carriers of MDROs.
With the emergence of an MDRO problem that cannot be controlled with the basic set of infection control measures, additional control measures should be selected from the second tier of interventions presented in Table 3 (Tier 2). Decisions to intensify MDRO control activity arise from surveillance observations and assessments of the risk to patients in various settings.
Circumstances that may trigger these decisions include:
- Identification of an MDRO from even one patient in a facility or special unit with a highly vulnerable patient population (e.g., an ICU, NICU, burn unit) that had previously not encountered that MDRO.
- Failure to decrease the prevalence or incidence of a specific MDRO (e.g., incidence of resistant clinical isolates) despite infection control efforts to stop its transmission.
(Statistical process control charts or other validated methods that account for normal variation can be used to track rates of targeted MDROs) (205,325,326).
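As a minimal sketch of the control-chart approach just mentioned, the following Python computes a u-chart (counts of new MDRO isolates per 1,000 patient-days, with 3-sigma limits that account for normal month-to-month variation) and flags months above the upper control limit; the monthly data are invented for illustration.

```python
# Illustrative u-chart for monthly MDRO incidence (counts per 1,000 patient-days).
from math import sqrt

# Hypothetical monthly data: (new MDRO isolates, patient-days).
months = [(4, 1210), (6, 1185), (3, 1302), (9, 1250), (5, 1190), (16, 1275)]

counts = [c for c, _ in months]
exposure = [pd / 1000.0 for _, pd in months]  # units of 1,000 patient-days

u_bar = sum(counts) / sum(exposure)  # center line

for i, (c, n) in enumerate(zip(counts, exposure), start=1):
    u = c / n                # observed rate for the month
    sigma = sqrt(u_bar / n)  # Poisson-based standard error
    ucl = u_bar + 3 * sigma
    lcl = max(0.0, u_bar - 3 * sigma)
    flag = "  <-- above UCL; investigate" if u > ucl else ""
    print(f"month {i}: u={u:.2f} (CL={u_bar:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}){flag}")
```

A point above the upper limit is the kind of signal that, per the text, would prompt assessment of current measures and possible escalation to the second tier of interventions.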
The combination of new or increased frequency of MDRO isolates and patients at risk necessitates escalation of efforts to achieve or re-establish control, i.e., to reduce rates of transmission to the lowest possible level. Intensification of MDRO control activities should begin with an assessment of the problem and evaluation of the effectiveness of measures in current use. Once the problem is defined, appropriate additional control measures should be selected from the second tier of Table 3. A knowledgeable infection prevention and control professional or healthcare epidemiologist should make this determination. This approach requires support from the governing body and medical staff of the facility. Once interventions are implemented, ongoing surveillance should be used to determine whether the selected control measures are effective and whether further interventions are needed.
# V. Prevention of Transmission of Multidrug-Resistant Organisms
The CDC/HICPAC system for categorizing recommendations is as follows:
Category IA Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies.
Category IB Strongly recommended for implementation and supported by some experimental, clinical, or epidemiologic studies and a strong theoretical rationale.
Category IC Required for implementation, as mandated by federal and/or state regulation or standard.
Category II Suggested for implementation and supported by suggestive clinical or epidemiologic studies or a theoretical rationale.
No recommendation Unresolved issue. Practices for which insufficient evidence or no consensus regarding efficacy exists.
# V.A. General recommendations for all healthcare settings independent of the prevalence of multidrug-resistant organism (MDRO) infections or the population served. (See Table 3, Tier 1.)
V.A.1.a. Make MDRO prevention and control an organizational patient safety priority. (3,146,151,154,182,185,194,205,208,210,242,327,328) Category IB

V.A.1.b. Provide administrative support, and both fiscal and human resources, to prevent and control MDRO transmission within the healthcare organization. (3,105,182,184,189,242,273,312,330) Category IB
V.A.1.f. Implement systems to designate patients known to be colonized or infected with a targeted MDRO and to notify receiving healthcare facilities and personnel prior to transfer of such patients within or between facilities. (87,151) Category IB

V.A.1.g. Support participation of the facility or healthcare system in local, regional, and national coalitions to combat emerging or growing MDRO problems. (41,146,151,167,188,206,207,211,331) Category IB

V.A.1.h. Provide updated feedback at least annually to healthcare providers and administrators on facility and patient-care-unit trends in MDRO infections.
Include information on changes in prevalence or incidence of infection, results of assessments for system failures, and action plans to improve adherence to and effectiveness of recommended infection control practices to prevent MDRO transmission. (152,154,159,184,204,205,242,312,332) Category IB

V.A.2. Provide education and training on risks and prevention of MDRO transmission during orientation and periodic educational updates for HCP; include information on organizational experience with MDROs and prevention strategies. (38,152,154,173,176,189,190,203,204,217,242,330,333,334) Category IB

V.A.4.a. In microbiology laboratories, use standardized laboratory methods and follow published guidelines for determining antimicrobial susceptibility of targeted and emerging (e.g., MDR-Acinetobacter baumannii) MDROs. (8,154,177,190,193,209,254,347,350-353) Category IB

V.A.4.b. In all healthcare organizations, establish systems to ensure that clinical microbiology laboratories (in-house and out-sourced) promptly notify infection control staff or a medical director/designee when a novel resistance pattern for that facility is detected. (9,22,154,162,169) Category IB

V.A.4.c. In hospitals and LTCFs, develop and implement laboratory protocols for storing isolates of selected MDROs for molecular typing when needed to confirm transmission or delineate the epidemiology of the MDRO within the healthcare setting. (7,8,38,140,153,154,187,190,208,217,354,355) Category IB

V.A.4.e. Monitor trends in the incidence of targeted MDROs in the facility over time to determine whether MDRO rates are decreasing and whether additional interventions are needed. (152,154,183,193,205,209,217,242,300,325,326,364,365) Category IA

V.A.4.e.i. Specify isolate origin (i.e., location and clinical service) in MDRO monitoring protocols in hospitals and other large multiunit facilities with high-risk patients. (8,38,152-154,217,358,361) Category IB

V.A.4.e.ii. Establish a baseline (e.g., incidence) for targeted MDRO isolates by reviewing results of clinical cultures; if more timely or localized information is needed, perform baseline point prevalence studies of colonization in high-risk units. When possible, distinguish colonization from infection in analysis of these data. (152,153,183,184,189,190,193,205,242,365) Category IB

(8,22,151,152,154,189,190,193,208,240,366)

V.A.5.c.i. In acute-care hospitals, implement Contact Precautions routinely for all patients infected with target MDROs and for patients who have been previously identified as being colonized with target MDROs (e.g., patients transferred from other units or facilities who are known to be colonized). (11,38,68,114,151,183,188,204,217,242,304) Category IB

(8,11,38,68,114,152-154,183-185,189,190,193,194,209,217,242,312,364,365) Category IB

V.B.1.a.i. When incidence or prevalence of MDROs is not decreasing despite implementation of and correct adherence to the routine control measures described above, intensify MDRO control efforts by adopting one or more of the interventions described below. (92,152,183,184,193,365) Category IB

(3,68,146,151-154,167,184,190,193,242,328,377) Category IB

V.B.2.b. Provide necessary leadership, funding, and day-to-day oversight to implement interventions selected. Involve the governing body and leadership of the healthcare facility or system that have organizational responsibility for this and other infection control efforts. (8,38,152,154,184,189,190,208)

V.B.2.c. Evaluate healthcare system factors for their role in creating or perpetuating MDRO transmission, including staffing levels, education and training, availability of consumable and durable resources, communication processes, policies and procedures, and adherence to recommended infection control measures (e.g., hand hygiene and Standard or Contact Precautions). Develop, implement, and monitor action plans to correct system failures. (3,8,38,152,154,172,173,175,188,196,198,199,208,217,280,323,379,380) Category IB

Provide individual or unit-specific feedback when available. (3,38,152,154,159,170,182,183,189,190,193,194,204,205,209,215,218,312) Category IB
# V.A.5.c. Use of Contact Precautions
# V.B.4. Judicious use of antimicrobial agents
Review the role of antimicrobial use in perpetuating the MDRO problem targeted for intensified intervention. Control and improve antimicrobial use as indicated.
Antimicrobial agents that may be targeted include vancomycin, third-generation cephalosporins, and anti-anaerobic agents for VRE (217); third-generation cephalosporins for ESBLs (212,214,215); and quinolones and carbapenems (80,156,166,174,175,209,218,242,254,329,334,335,337,341). Category IB

# V.B.5. Surveillance

V.B.5.a. Calculate and analyze prevalence and incidence rates of targeted MDRO infection and colonization in populations at risk; when possible, distinguish colonization from infection. (152,153,183,184,189,190,193,205,215,242,365) Category IB

V.B.5.b. Obtain active surveillance cultures from patients in populations at risk, including patients known to have been previously infected or colonized with an MDRO. (8,38,68,114,151-154,167,168,183,184,187-190,192,193,217,242) Category IB

V.B.5.c. Obtain surveillance cultures on admission to high-risk units, such as ICUs, and at periodic intervals as needed to assess MDRO transmission. (8,151,154,159,184,208,215,242,387) Category IB

V.B.7.a.i. Place MDRO patients in single-patient rooms. (6,151,158,160,166,170,187,208,240,282,393-395) Category IB

V.B.7.a.ii. Cohort patients with the same MDRO in designated areas (e.g., rooms, bays, patient care areas). (8,151,152,159,161,176,181,183,184,188,208,217,242,280,339,344) Category IB

Stop new admissions to the unit or facility if transmission continues despite the implementation of the enhanced control measures described above. (Refer to state or local regulations that may apply upon closure of hospital units or services.) (9,38,146,159,161,168,175,205,279,280,332,339,396) Category IB

V.B.8.c. Monitor (i.e., supervise and inspect) cleaning performance to ensure consistent cleaning and disinfection of surfaces in close proximity to the patient and those likely to be touched by the patient and HCP (e.g., bedrails, carts, bedside commodes, doorknobs, faucet handles). (8,38,109,111,154,169,180,208,217,301,333,398) Category IB

V.B.9.b. When decolonization for MRSA is used, perform susceptibility testing for the decolonizing agent against the target organism in the individual being treated or the MDRO strain that is epidemiologically implicated in transmission.

# Cohorting.
In the context of this guideline, this term applies to the practice of grouping patients infected or colonized with the same infectious agent together to confine their care to one area and prevent contact with susceptible patients (cohorting patients). During outbreaks, healthcare personnel may be assigned to a cohort of patients to further limit opportunities for transmission (cohorting staff).

# Contact Precautions.
Contact Precautions are a set of practices used to prevent transmission of infectious agents that are spread by direct or indirect contact with the patient or the patient's environment. Contact Precautions also apply where the presence of excessive wound drainage, fecal incontinence, or other discharges from the body suggests an increased transmission risk. A single-patient room is preferred for patients who require Contact Precautions. When a single-patient room is not available, consultation with infection control is helpful to assess the various risks associated with other patient placement options (e.g., cohorting, keeping the patient with an existing roommate). In multi-patient rooms, ≥3 feet of spatial separation between beds is advised to reduce the opportunities for inadvertent sharing of items between the infected/colonized patient and other patients.

Healthcare personnel caring for patients on Contact Precautions wear a gown and gloves for all interactions that may involve contact with the patient or potentially contaminated areas in the patient's environment. Donning gown and gloves upon room entry, removing them before exiting the patient room, and performing hand hygiene immediately upon exiting help contain pathogens.
# Antimicrobial resistance implications:
- Resistance to first-line therapies (e.g., MRSA, VRE, VISA, VRSA, ESBL-producing organisms).
- Unusual or usual agents with unusual patterns of resistance within a facility (e.g., the first isolate of Burkholderia cepacia complex or Ralstonia spp. in non-CF patients, or a quinolone-resistant strain of Pseudomonas in a facility).
- Difficult to treat because of innate or acquired resistance to multiple classes of antimicrobial agents (e.g., Stenotrophomonas maltophilia, Acinetobacter spp.).

# Hand hygiene.
A general term that applies to: 1. handwashing; 2. antiseptic handwash; 3. antiseptic hand rub (applying an antiseptic hand-rub product to all surfaces of hands); or 4. surgical hand antisepsis (antiseptic hand wash or antiseptic hand rub performed preoperatively by surgical personnel to eliminate transient hand flora and reduce resident hand flora).
# Healthcare-associated infection (HAI).
An infection that develops in a patient who is cared for in any setting where healthcare is delivered (e.g., acute care hospital, chronic care facility, ambulatory clinic, dialysis center, surgicenter, home) and is related to receiving health care (i.e., was not incubating or present at the time healthcare was provided). In ambulatory and home settings, HAI would apply to any infection that is associated with a medical or surgical intervention performed in those settings.

# Home care.
A wide range of medical, nursing, rehabilitation, hospice, and social services delivered to patients in their place of residence (e.g., private residence, senior living center, assisted living facility). Home health-care services include care provided by home health aides and skilled nurses, respiratory therapists, dieticians, physicians, chaplains, and volunteers; provision of durable medical equipment; home infusion therapy; and physical, speech, and occupational therapy.

# Long-term care facilities (LTCFs).
These include institutions for the developmentally disabled, residential care facilities, assisted living facilities, retirement homes, adult day health care facilities, rehabilitation centers, and long-term psychiatric hospitals.
# Infection prevention and control professional (ICP).
# Mask.
A term that applies collectively to items used to cover the nose and mouth and includes both procedure masks and surgical masks.
# Multidrug-resistant organisms (MDROs).
In general, bacteria (excluding M. tuberculosis) that are resistant to one or more classes of antimicrobial agents and usually are resistant to all but one or two commercially available antimicrobial agents (e.g., MRSA, VRE, extended-spectrum beta-lactamase-producing or intrinsically resistant gram-negative bacilli).
# Nosocomial infection.
Derived from two Greek words, "nosos" (disease) and "komeion" (to take care of). Refers to any infection that develops during or as a result of an admission to an acute care facility (hospital) and was not incubating at the time of admission.
# Standard Precautions.
A group of infection prevention practices that apply to all patients, regardless of suspected or confirmed diagnosis or presumed infection status. Standard Precautions are a combination and expansion of Universal Precautions and Body Substance Isolation. Standard Precautions are based on the principle that all blood, body fluids, secretions, excretions except sweat, nonintact skin, and mucous membranes may contain transmissible infectious agents. Standard Precautions include hand hygiene and, depending on the anticipated exposure, use of gloves, gown, mask, eye protection, or face shield. In addition, equipment or items in the patient environment likely to have been contaminated with infectious fluids must be handled in a manner that prevents transmission of infectious agents (e.g., wear gloves for handling, contain heavily soiled equipment, and properly clean and disinfect or sterilize reusable equipment before use on another patient).

Institute one or more of the interventions described below when 1. the incidence or prevalence of MDROs is not decreasing despite the use of routine control measures; or 2. the first case or outbreak of an epidemiologically important MDRO (e.g., VRE, MRSA, VISA, VRSA, MDR-GNB) is identified within a healthcare facility or unit. (IB) Continue to monitor the incidence of target MDRO infection and colonization; if rates do not decrease, implement additional interventions as needed to reduce MDRO transmission.
"id": "86c28ecefcbbd7b07ddb7a42d51b386161dd80c3",
"source": "cdc",
"title": "None",
"url": "None"
} |
This report is being published as a courtesy to both the National Association of State Public Health Veterinarians, Inc., and to the MMWR readership. Its publication does not imply endorsement by CDC.
The purpose of this compendium is to provide rabies information to veterinarians, public health officials, and others concerned with rabies prevention and control. These recommendations serve as the basis for animal rabies-control programs throughout the United States and facilitate standardization of procedures among jurisdictions, thereby contributing to an effective national rabies-control program. This document is reviewed annually and revised as necessary. Vaccination procedure recommendations are contained in Part I; all animal rabies vaccines licensed by the United States Department of Agriculture (USDA) and marketed in the United States are listed in Part II; Part III details the principles of rabies control.
# Part I: Recommendations for Parenteral Vaccination Procedures
# A. Vaccine Administration
All animal rabies vaccines should be restricted to use by, or under the direct supervision of, a veterinarian.
# B. Vaccine Selection
Part II lists all vaccines licensed by USDA and marketed in the United States at the time of publication. New vaccine approvals or changes in label specifications made subsequent to publication should be considered as part of this list. Vaccines used in state and local rabies-control programs should have a 3-year duration of immunity. This constitutes the most effective method of increasing the proportion of immunized dogs and cats in any population.
# C. Route of Inoculation
All vaccines must be administered in accordance with the specifications of the product label or package insert. Adverse reactions and vaccine failures should be reported to USDA, Animal and Plant Health Inspection Service, Center for Veterinary Biologics at (800) 752-6255 or by e-mail at [email protected].
# D. Wildlife and Hybrid Animal Vaccination
The efficacy of parenteral rabies vaccination of wildlife and hybrids (the offspring of wild animals crossbred to domestic dogs and cats) has not been established, and no such vaccine is licensed for these animals. Zoos or research institutions may establish vaccination programs that attempt to protect valuable animals, but these programs should not replace appropriate public health activities that protect humans.
# E. Accidental Human Exposure to Vaccine
Human exposure to parenteral animal rabies vaccines listed in Part II does not constitute a risk for rabies infection. However, human exposure to vaccinia-vectored oral rabies vaccines should be reported to state health officials.
# F. Identification of Vaccinated Animals
a. Rabies Tags. Agencies and veterinarians may adopt the standard tag system, with tag shapes and colors designated by calendar year, to aid in the administration of animal rabies control procedures and to allow for verification of rabies vaccination status.

b. Licensure. Registration or licensure of all dogs, cats, and ferrets may be used to aid in rabies control. A fee is frequently charged for such licensure, and revenues collected are used to maintain rabies- or animal-control programs. Vaccination is an essential prerequisite to licensure.

c. Canvassing of Area. House-to-house canvassing by animal-control personnel facilitates enforcement of vaccination and licensure requirements.

d. Citations. Citations are legal summonses issued to owners for violations, including the failure to vaccinate or license their animals. The authority for officers to issue citations should be an integral part of each animal-control program.

e. Animal Control. All communities should incorporate stray animal control, leash laws, and training of personnel in their programs.

5. Postexposure Management. Any animal potentially exposed to rabies virus (see Part IIIA1, Rabies Exposure) by a wild, carnivorous mammal or a bat that is not available for testing should be regarded as having been exposed to rabies. (The species-specific rules below are restated in a brief sketch following section 6.)

a. Dogs, Cats, and Ferrets. Unvaccinated dogs, cats, and ferrets exposed to a rabid animal should be euthanized immediately. If the owner is unwilling to have this done, the animal should be placed in strict isolation for 6 months and vaccinated 1 month before being released. Animals with expired vaccinations need to be evaluated on a case-by-case basis. Dogs, cats, and ferrets that are currently vaccinated should be revaccinated immediately, kept under the owner's control, and observed for 45 days.

b. Livestock. All species of livestock are susceptible to rabies; cattle and horses are among the most frequently infected. Livestock exposed to a rabid animal and currently vaccinated with a vaccine approved by USDA for that species should be revaccinated immediately and observed for 45 days. Unvaccinated livestock should be slaughtered immediately. If the owner is unwilling to have this done, the animal should be kept under close observation for 6 months. The following are recommendations for owners of unvaccinated livestock exposed to rabid animals: 1) If the animal is slaughtered within 7 days of being bitten, its tissues may be eaten without risk for infection, provided that liberal portions of the exposed area are discarded. Federal meat inspectors must reject for slaughter any animal known to have been exposed to rabies within 8 months. 2) Neither tissues nor milk from a rabid animal should be used for human or animal consumption. Pasteurization temperatures will inactivate rabies virus; therefore, drinking pasteurized milk or eating cooked meat does not constitute a rabies exposure. 3) Having more than one rabid animal in a herd or having herbivore-to-herbivore transmission is rare; therefore, restricting the rest of the herd if a single animal has been exposed to or infected by rabies might not be necessary.

c. Other Animals. Other mammals bitten by a rabid animal should be euthanized immediately. Animals maintained in USDA-licensed research facilities or accredited zoological parks should be evaluated on a case-by-case basis.

6. Management of Animals That Bite Humans.

a. A healthy dog, cat, or ferret that bites a person should be confined and observed daily for 10 days; administration of rabies vaccine is not recommended during the observation period. Such animals should be evaluated by a veterinarian at the first sign of illness during confinement.
Any illness in the animal should be reported immediately to the local health department. If signs suggestive of rabies develop, the animal should be euthanized, its head removed, and the head shipped under refrigeration (not frozen) for examination of the brain by a qualified laboratory designated by the local or state health department. Any stray or unwanted dog, cat, or ferret that bites a person may be euthanized immediately and the head submitted as described for rabies examination. b. Other biting animals that might have exposed a person to rabies should be reported immediately to the local health department. Prior vaccination of an animal might not preclude the necessity for euthanasia and testing if the period of virus shedding is unknown for that species. Management of animals other than dogs, cats, and ferrets depends on the species, the circumstances of the bite, the epidemiology of rabies in the area, and the biting animal's history, current health status, and potential for exposure to rabies.
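The postexposure and bite-management rules in sections 5 and 6 above amount to a small decision table keyed to species and vaccination status. The sketch below restates the dog/cat/ferret branches in code purely for clarity; the function names and outcome strings are illustrative, not part of the compendium, and actual management decisions rest with local and state public health officials.

```python
def manage_exposed_dog_cat_ferret(currently_vaccinated: bool,
                                  owner_accepts_euthanasia: bool) -> str:
    """Postexposure management for a dog, cat, or ferret exposed to a rabid animal
    (section 5.a above). Animals with expired vaccinations are handled case by case."""
    if currently_vaccinated:
        return "revaccinate immediately; keep under owner's control; observe 45 days"
    if owner_accepts_euthanasia:
        return "euthanize immediately"
    return "strict isolation for 6 months; vaccinate 1 month before release"

def manage_biting_dog_cat_ferret(healthy: bool) -> str:
    """Management of a dog, cat, or ferret that bites a human (section 6.a above)."""
    if healthy:
        return "confine and observe daily for 10 days; no rabies vaccine during observation"
    return ("report to the local health department; if signs suggest rabies, euthanize and "
            "ship the refrigerated head to a designated laboratory for examination")

print(manage_exposed_dog_cat_ferret(currently_vaccinated=False, owner_accepts_euthanasia=False))
print(manage_biting_dog_cat_ferret(healthy=True))
```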
# C. Control Methods Among Wildlife
The public should be warned not to handle wildlife. Wild mammals and hybrids that bite or otherwise expose persons, pets, or livestock should be considered for euthanasia and rabies examination. A person bitten by any wild mammal should immediately report the incident to a physician who can evaluate the need for antirabies treatment (see current rabies prophylaxis recommendations of the ACIP*). State-regulated wildlife rehabilitators may play a role in a comprehensive rabies-control program. Minimum standards for persons who rehabilitate wild mammals should include receipt of rabies vaccination, appropriate training, and continuing education. Translocation of infected wildlife has contributed to the spread of rabies; therefore, the translocation of known terrestrial rabies reservoir species should be prohibited. *CDC. Human rabies prevention-United States, 1999: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 1999;48(No. RR-1).
1. Terrestrial Mammals. The use of licensed oral vaccines for the mass vaccination of free-ranging wildlife should be considered in selected situations, with the approval of the state agency responsible for animal rabies control. Continuous and persistent government-funded programs for trapping or poisoning wildlife are not cost-effective in reducing wildlife rabies reservoirs on a statewide basis. However, limited control in high-contact areas (e.g., picnic grounds, camps, or suburban areas) may be indicated for the removal of selected high-risk species of wildlife. State agriculture, public health, and wildlife agencies should be consulted for planning, coordination, and evaluation of vaccination or population-reduction programs.

2. Bats. Indigenous rabid bats have been reported from every state except Hawaii and have caused rabies in at least 33 humans in the United States. Bats should be excluded from houses and adjacent structures to prevent direct association with humans. Such structures should then be made bat-proof by sealing entrances used by bats. Controlling rabies among bats by implementing programs designed to reduce bat populations is neither feasible nor desirable.
"id": "32d7e5fb7823092279320b21606fd7cd5e68f9fc",
"source": "cdc",
"title": "None",
"url": "None"
} |
# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR SODIUM HYDROXIDE
(1) For the purpose of determining the type of respirator to be used, the employer shall measure, when possible, the airborne concentration of sodium hydroxide in the workplace initially and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the airborne concentration of sodium hydroxide.
# Less than 10X
(1) Single-use respirator with dust filter, quarter or half mask, with valves.
(2) Quarter or half mask respirator with replaceable dust, mist, or high-efficiency particulate filter.
(3) Type C supplied-air respirator, demand (negative pressure) mode, with quarter or half mask facepiece.

# Less than or equal to 100X
(1) Full facepiece respirator with chin-style or front- or back-mounted canister with high-efficiency particulate filter.
(2) Powered air-purifying respirator with half or full facepiece, hood, or helmet, with high-efficiency particulate filter.
(3) Supplied-air respirator with full facepiece, hood, or helmet in continuous-flow mode, or with full facepiece in demand (negative pressure) or pressure-demand (positive pressure) mode.
(4) Self-contained breathing apparatus with full facepiece in demand (negative pressure) mode.

# Greater than 100X (immediately dangerous to life or health)
(1) A combination respirator which includes a Type C supplied-air respirator with a full facepiece operated in continuous-flow or pressure-demand (positive pressure) mode and an auxiliary self-contained breathing apparatus.

(5) In the preparation of solutions from sodium hydroxide, the following practice shall apply:

It should be noted that even 1 minute of exposure to 0.5 N sodium hydroxide was sufficient to produce descemetoceles or perforations of corneas in 65-91% of the eyes which were inadequately washed.
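The respirator schedule above is keyed to the measured airborne concentration expressed as a multiple ("X") of the exposure limit. A minimal sketch of that lookup logic follows; the 2 mg/m3 ceiling value, the function name, and the abbreviated category descriptions are assumptions for illustration, not text from this criteria document.

```python
# Illustrative selection of a respirator category from a measured airborne
# sodium hydroxide concentration, following the multiples-of-limit schedule above.
CEILING_LIMIT_MG_M3 = 2.0  # assumed ceiling exposure limit for NaOH

def respirator_category(measured_mg_m3: float) -> str:
    multiple = measured_mg_m3 / CEILING_LIMIT_MG_M3
    if multiple <= 1.0:
        return "at or below the limit: no respirator required"
    if multiple < 10.0:
        return "less than 10X: e.g., quarter/half mask with particulate filter"
    if multiple <= 100.0:
        return ("less than or equal to 100X: e.g., full facepiece, powered "
                "air-purifying, supplied-air, or SCBA in demand mode")
    return ("greater than 100X (IDLH): combination Type C supplied-air "
            "respirator with auxiliary self-contained breathing apparatus")

for concentration in (1.0, 12.0, 150.0, 500.0):
    print(concentration, "mg/m3 ->", respirator_category(concentration))
```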
The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt" to avoid disclosure of trade secrets. | 526 | {
"id": "c0eee719c880a25cbb6e21df6310c4c4799f40fa",
"source": "cdc",
"title": "None",
"url": "None"
} |
CDC and state and local public health authorities have been investigating cases of bioterrorism-related anthrax. This report updates findings as of October 31 and includes interim guidelines for the clinical evaluation of persons with possible anthrax. A total of 21 cases (16 confirmed and five suspected) of bioterrorism-related anthrax have been reported among persons who worked in the District of Columbia, Florida, New Jersey, and New York City (Figure 1). Until the source of these intentional exposures is eliminated, clinicians and laboratorians should be alert for clinical evidence of Bacillus anthracis infection. Epidemiologic investigation of these cases and surveillance to detect new cases of bioterrorism-associated anthrax continue. To date, the investigations in New York City have identified one confirmed inhalational case and six (three confirmed and three suspected) cutaneous anthrax cases; the confirmed inhalational and one suspected cutaneous case have been identified since the last report (1 ). The six cutaneous cases were associated with four media companies (A-D); the most recent suspected cutaneous case is associated with company D.
Chest imaging showed an infiltrate and a small pleural effusion. She was started on multidrug therapy, including ciprofloxacin, which was changed to azithromycin after 24 hours. On admission, she was febrile and tachycardic. She had an elevated white blood cell (WBC) count of 11,000 with 14% bands. A CT scan on October 19 showed a right pleural effusion, perihilar consolidation, and mediastinal adenopathy. She subsequently had two thoracenteses that produced serosanguinous pleural fluid and a bronchoscopy that showed grossly edematous bronchi. Both pleural fluid and bronchial biopsy were positive for B. anthracis by IHC stain.
On October 17, a 51-year-old woman developed a large pimple on her forehead with erythema and swelling. On October 18, the lesion enlarged, was slightly painful, nonpruritic, and drained a small amount of yellowish fluid. She sought medical care, cervical and preauricular lymphadenopathy was noted on physical examination, and she was treated with ciprofloxacin. The lesion progressed and ulcerated. On October 22, she presented to an emergency department and was admitted with a diagnosis of cellulitis. On admission, she was afebrile with normal vital signs and had a swollen right face and eyelid and enlarged right anterior cervical nodes. Intravenous ciprofloxacin for cutaneous anthrax was started. On October 24, the ulcer was biopsied and debrided. Biopsy specimens were positive for B. anthracis by PCR and IHC. The patient improved and was discharged on October 27 on oral ciprofloxacin. The patient worked as a bookkeeper and reported receiving no unusual or powder-containing mail at home or work. She had made no visits to any post offices in several months.
# District of Columbia
To date, investigations in the District of Columbia, Maryland, and Virginia have confirmed inhalational anthrax in four persons who worked at one postal facility in the District of Columbia. An additional case of inhalational anthrax has been confirmed in a 59-year-old postal worker in a U.S. State Department mail sorting facility that receives mail from the District of Columbia postal facility associated with the previous four cases. The patient presented to an emergency department on October 24 with temperature of 100.8 F (38 C), sweats, myalgia, chest discomfort, mild cough, nausea, vomiting, diarrhea, and abdominal pain. A chest radiograph initially was interpreted as normal but on further review indicated mediastinal widening. A CT scan showed mediastinal lymphadenopathy, hemorrhagic mediastinitis, small bilateral pleural effusions, and a small pericardial effusion. Blood cultures grew B. anthracis. The patient is receiving ciprofloxacin, rifampin, and penicillin.
# Florida
To date, the investigation in Florida has identified two confirmed inhalational cases. No new cases have been identified since the last report (1 ).
# Clinical Presentation of Inhalational and Cutaneous Cases
# Inhalational anthrax
To date, CDC has identified 10 patients with confirmed or suspected inhalational anthrax associated with bioterrorism. All but the most recent patient were postal workers (six), mail handlers or sorters (two), or a journalist (one), who were known or believed to have processed, handled, or received letters containing B. anthracis spores. The hospital employee with inhalational anthrax did not process mail but might have carried mail to other parts of the facility. Preliminary environmental testing of the patient's work area and home was negative for B. anthracis. The investigation is ongoing.
The median age of the 10 patients with inhalational anthrax was 56 years (range: 43-73 years); seven were men. The incubation period from the time of exposure to onset of symptoms when known (seven) was 7 days (range: 5-11 days).
The initial illness in these patients was characterized by fever (nine) and/or sweats/ chills (six) (Figure 2). Severe fatigue or malaise was present in eight and minimal or nonproductive cough in nine, including one with blood-tinged sputum. Eight patients reported chest discomfort or pleuritic pain. Abdominal pain or nausea or vomiting occurred in five, and five reported chest heaviness. Other symptoms included shortness of breath (seven), headache (five), myalgias (four), and sore throat (two).
On initial presentation, total WBC count was normal or slightly elevated (7.5-13.3 x 10^3/cu mm); however, elevation in the percentage of neutrophils or band forms was frequently noted. None of the patients had a low WBC count or lymphocytosis when initially evaluated. Chest radiograph was abnormal in all patients, but in two an initial reading was interpreted as within normal limits. Mediastinal changes including mediastinal widening, paratracheal fullness, hilar fullness, and mediastinal lymphadenopathy were noted in all eight patients who had CT scans. Mediastinal widening may be subtle, and careful review of the chest radiograph by a radiologist may be necessary. Pleural effusions were present in seven patients and were a feature of the two patients who did not have mediastinal changes on chest radiograph or did not have a CT scan. Pleural effusions often were large and hemorrhagic, reaccumulated, and required repeated thoracentesis or chest tubes. Pulmonary infiltrates were observed in four patients and were multilobar in three. Blood cultures grew B. anthracis in seven patients and in all who had not received antimicrobials. Diagnosis in the patients with negative cultures was confirmed by bronchial or pleural biopsy and specific IHC staining, by PCR of material from a sterile site, or by a fourfold rise in IgG to the protective antigen.
To date, six of 10 patients with inhalational anthrax have survived. Among those whose condition was recognized early, all remain alive and two have been discharged from the hospital. Prompt recognition of the early features of inhalational anthrax is important in settings of known or suspected exposure.
# Cutaneous anthrax
Eleven patients with cutaneous anthrax have been identified in the current outbreak. Patients with cutaneous anthrax were mail handlers or sorters (four), employees of or visitors to media companies (six), and one bookkeeper. The mean incubation period for cutaneous anthrax was 5 days (range: 1-10 days) based on estimates from the postmark of letters and assumptions of dates of exposures with known positive letters or suspect letters (Figure 3).
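Because exact exposure dates were rarely observed, the incubation periods above are estimated as the interval from a letter's postmark date to the patient's symptom onset. A toy version of that date arithmetic and its summary statistics is sketched below; the dates are invented for illustration only.

```python
from datetime import date
from statistics import mean

# Hypothetical (postmark date, symptom onset date) pairs for cutaneous cases.
cases = [
    (date(2001, 9, 18), date(2001, 9, 22)),
    (date(2001, 9, 18), date(2001, 9, 25)),
    (date(2001, 10, 9), date(2001, 10, 14)),
]

incubation_days = [(onset - postmark).days for postmark, onset in cases]
print(incubation_days)                              # [4, 7, 5]
print(mean(incubation_days))                        # about 5.3 days
print(min(incubation_days), max(incubation_days))   # range
```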
Lesions occurred on the forearm, neck, chest, and fingers (two). Lesions were painless but accompanied by a tingling sensation or pruritus. Diagnosis was established by biopsy or culture (1 ).

A fourth case occurred in a mail handler at a facility not previously linked to cases but that receives mail from a facility at which cases have occurred previously. Two new cases have no discernible epidemiologic link with anthrax cases previously reported or sites that are associated with known cases. These new cases suggest that anthrax exposure has occurred or is continuing to occur through means that cannot be ascribed to known contaminated letters or the paths these letters took through the mail service.
The public health response to these new anthrax cases will evolve based on ongoing epidemiologic and criminal investigations.
Because exposures are being intentionally perpetrated, public health authorities must be vigilant for the appearance of new cases in previously unaffected populations. Prompt data sharing between law enforcement and public health authorities is essential.
Since September 11, 2001, state and local health departments have been responding to many reports of potential bioterrorist threats including letters containing powder, suspicious packages, and potential dispersal devices. During September 11-October 17, 40 state and territorial health officials who responded to a CDC telephone survey estimated that 7,000 reports had been received at their health departments, approximately 4,800 required phone follow-up, and 1,050 reports led to testing of suspicious materials at a public health laboratory (CDC, unpublished data, 2001). In comparison, the number of anthrax threats reported to federal authorities during 1996-2000 did not exceed 180 reported threats per year (Federal Bureau of Investigation, unpublished data, 2001). Therefore, although only four areas have identified cases of bioterrorism-associated anthrax, health departments throughout the nation are responding to public concerns, bioterrorism hoaxes, and threats.
CDC is working with state and local health departments and the U.S. Postal Service to develop standardized guidelines for identifying populations that should receive antimicrobial prophylaxis for prevention of inhalational anthrax. Current challenges include identifying factors that promote the aerosolization of B. anthracis in mail-handling facilities and assessing the risk for anthrax in environments contaminated with B. anthracis spores. Safe levels of B. anthracis spore contamination in occupational settings must be defined to determine the need for clean-up of contaminated facilities. The current antimicrobial prophylaxis recommendations address the prevention of inhalational anthrax, but CDC also is evaluating measures to prevent cutaneous anthrax.
Postexposure prophylaxis with a recommended antimicrobial agent for the prescribed period of time can prevent inhalational anthrax. In the case of a known contaminated letter sent to the office of a U.S. Senator, antimicrobial prophylaxis was administered to persons from the area of exposure and first-responders to the incident (1 ). To date, there have been no cases of anthrax, even among those who had the greatest exposure. Antimicrobial prophylaxis had been recommended for the U.S. State Department mail handler with anthrax, but the worker had not started treatment before the onset of illness. Public health response must include prompt initiation of prophylaxis for exposed persons and systems to promote adherence to a full 60-day regimen.
Previous guidelines recommended ciprofloxacin for antimicrobial prophylaxis until antimicrobial susceptibility test data were available (3 ). Isolates involved in the current bioterrorism attacks have been susceptible to ciprofloxacin, doxycycline, and several other antimicrobial agents. Considerations for choosing an antimicrobial agent include effectiveness, resistance, side effects, and cost. No evidence demonstrates that ciprofloxacin is more or less effective than doxycycline for antimicrobial prophylaxis against B. anthracis. Widespread use of any antimicrobial will promote resistance. Many common pathogens are already resistant to tetracyclines such as doxycycline. However, fluoroquinolone resistance is not yet common in these same organisms. To preserve the effectiveness of fluoroquinolones against other infections, use of doxycycline for prevention of B. anthracis infection among populations at risk may be preferable. However, the selection of the antimicrobial agent for an individual patient should be based on side-effect profiles, history of reactions, and the clinical setting.
CDC and state and local public health agencies continue to mobilize epidemiologic, laboratory, and other staff to identify and investigate acts of bioterrorism. Cases of bioterrorism-associated anthrax continue to occur, and new risk populations may be identified. Until the causes of these acts are removed, public health authorities and clinicians should remain alert for cases of anthrax.
# Major Cardiovascular Disease (CVD) During 1997-1999 and Major CVD Hospital Discharge Rates in 1997 Among Women with Diabetes -United States
Cardiovascular disease (CVD) is the leading cause of death among all women (1 ) and the risk for death from CVD among women with diabetes is two to four times higher than that for women without diabetes (2 ). The excess risk for death as the result of CVD among persons with diabetes is better understood than the excess risk for CVD morbidity (2 ). To estimate national CVD prevalence and CVD hospital use among women with diabetes, CDC and the Agency for Health Care Research and Quality (AHRQ) analyzed data from the 1997-1999 National Health Interview Survey (NHIS) and the 1997 Nationwide Inpatient Sample (NIS). Findings indicate that the age-adjusted prevalence of major CVD for women with diabetes is twice that for women without diabetes and that the ageadjusted major CVD hospital discharge rate for women with diabetes is almost four times the rate for women without diabetes. These findings underscore the need to reduce risk factors associated with CVD among all women with diabetes through focused public health and clinical efforts.
The prevalence of CVD among women aged >18 years by diabetes status was obtained from a 3-year average of the estimates from the 1997-1999 NHIS, an ongoing nationally representative survey providing information on the health of the noninstitutionalized U.S. civilian population. Respondents were asked whether they had ever been told by a doctor or other health-care provider that they had hypertension, coronary heart disease, angina, heart attack, other kinds of heart conditions or heart disease, stroke, or diabetes. Major CVD was defined as one or more positive responses to the six CVD condition questions, and diabetes was defined as a positive response to the diabetes question.
The number of major CVD hospital discharges was estimated from the 1997 NIS, a stratified probability sample of hospitals in 22 states. Discharges from the 22 states represented approximately 60% of all discharges in the United States. Sample data were weighted using the American Hospital Association Annual Survey of Hospitals to approximate discharges from all U.S. acute-care community hospitals. Analysis was restricted to major CVD discharges (e.g., ischemic heart disease, hypertensive disease, rheumatic heart disease, and stroke) having an International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) code of 390-448 as the first-listed diagnosis. Diabetes-related discharges were identified as those with an ICD-9 code of 250 as a secondary diagnosis. Major CVD discharge rates were calculated for the U.S. female population aged >18 years with and without diabetes by dividing the number of major CVD discharges estimated from NIS by the estimated number of women with and without diabetes obtained using 1997 NHIS data. Rate ratios and rate differences were calculated for both major CVD prevalence and hospital discharge rates by comparing rates for women with diabetes with those for women without diabetes. SUDAAN was used to calculate all estimates and their standard errors because of the complex sample designs of NIS and NHIS.
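The rate, rate-ratio, and rate-difference calculations described above reduce to simple arithmetic once the NIS discharge counts and NHIS population denominators are in hand. A minimal sketch with invented counts follows; it deliberately ignores the survey weighting and variance estimation that SUDAAN performs for these complex sample designs.

```python
def rate_per_1000(events: int, population: int) -> float:
    """Discharges per 1,000 persons."""
    return 1000.0 * events / population

# Hypothetical counts: NIS-style discharge totals over NHIS-style denominators.
rate_diabetes = rate_per_1000(events=480, population=8_000)         # 60.0 per 1,000
rate_no_diabetes = rate_per_1000(events=1_500, population=100_000)  # 15.0 per 1,000

rate_ratio = rate_diabetes / rate_no_diabetes        # unitless: 4.0
rate_difference = rate_diabetes - rate_no_diabetes   # per 1,000 persons: 45.0
print(rate_ratio, rate_difference)
```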
Major CVD rates were age-adjusted to the 2000 U.S. standard population (3 ). Trends were assessed by age for major CVD prevalence and for hospital discharge rates, rate ratios, and rate differences. The difference between age-adjusted major CVD prevalence and hospital discharge rates by diabetes status across all racial/ethnic categories was assessed using z-tests; a t-test in SUDAAN was used to assess the difference in age-adjusted major CVD prevalence and hospital discharge rates by race/ethnicity, comparing non-Hispanic whites, non-Hispanic blacks, and Hispanics. Other racial/ethnic groups were not included because sample size was too small for meaningful analysis.
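Age adjustment to the 2000 U.S. standard population is direct standardization: each age-specific rate is weighted by the standard population's proportion in that age group, and the weighted values are summed. The sketch below shows the computation; the rates and weights are invented for illustration, while the actual standard weights come from the 2000 U.S. standard population cited above (3 ).

```python
# Direct age standardization: weight age-specific rates by the standard
# population's age distribution. All numbers below are illustrative.
age_specific_rates = {"18-44": 5.0, "45-64": 40.0, "65-74": 120.0, "75+": 250.0}  # per 1,000
standard_weights   = {"18-44": 0.52, "45-64": 0.30, "65-74": 0.10, "75+": 0.08}   # sum to 1

age_adjusted_rate = sum(age_specific_rates[group] * standard_weights[group]
                        for group in age_specific_rates)
print(age_adjusted_rate)  # 46.6 per 1,000
```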
During 1997-1999, 72% (95% confidence interval [CI]=±1.8%) of all women with diabetes self-reported having major CVD (Figure 1). The most common CVD condition was hypertension (64%; 95% CI=±1.8%) followed by other heart disease or conditions (19%; 95% CI=±1.5%), coronary heart disease (12%; 95% CI=±1.2%), heart attack (11%; 95% CI=±1.3%), angina (10%; 95% CI=±1.1%), and stroke (8%; 95% CI=±1.0%) (Figure 1).
The prevalence of major CVD increased with age for women with diabetes, from 40.5% (95% CI=±4.9%) for women aged 18-44 years to 85.1% (95% CI=±3.0%) for women aged >75 years. Prevalence ratios comparing women with and without diabetes decreased with age and were lowest among women aged >75 years (p=0.09). Although rate differences were greatest among women aged 45-64 years and lowest among women aged >75 years, no significant trend by age was found (p=0.27) (Table 1).

During 1997, 772,346 of all major CVD hospital discharges (28%) had diabetes as a secondary diagnosis (Table 2). Hospital discharge rates for major CVD among women with diabetes increased from 22.9 per 1,000 (95% CI=±4.5) for the youngest age group to 332.7 per 1,000 (95% CI=±54.3) for the oldest age group (p=0.0004) (Table 2). The age-adjusted major CVD hospital discharge rate for women with diabetes was 3.8 times that of women without diabetes (Table 2). No significant difference was found among racial/ethnic groups for major CVD hospital discharge rates among women with diabetes. The rate ratio comparing major CVD hospital discharges in women with diabetes with those without diabetes decreased with age, from 11.8 (95% CI=±2.4) in the youngest age group to 2.4 (95% CI=±0.4) in the oldest (p=0.04). Rate differences increased with age, from 20.9 per 1,000 (95% CI=±4.4) in the youngest to 196.5 per 1,000 (95% CI=±55.2) in the oldest (p=0.02) (Table 2).

Editorial Note: This report indicates that 72% of U.S. women with diabetes self-reported having major CVD. Major CVD prevalence is twice as common and major CVD hospitalizations are nearly four times as common among women with diabetes compared with women without diabetes. These findings are consistent with mortality studies documenting that women with diabetes are at much higher risk for death as a result of major CVD than women without diabetes (2 ).
Clinical trials indicate that antihypertensive treatment (4 ), aspirin use (5 ), and lipid-lowering therapies (6 ) prevent or delay cardiovascular events. Epidemiologic evidence suggests that the risk for major CVD might be reduced through glycemic control (7 ) and the promotion of healthy lifestyles, including weight reduction/obesity prevention,
increased physical activity, smoking cessation/prevention, and improved diet (7 ). Primary prevention of diabetes, a risk factor for major CVD, also can prevent major CVD. The Diabetes Prevention Program, a clinical trial examining the effect of intensive lifestyle intervention on the occurrence of type 2 diabetes in high-risk populations, concluded that improved diet, weight loss, and increased physical activity prevented or delayed the onset of diabetes among adults with impaired glucose tolerance (8 ). Despite the efficacy of prevention strategies for major CVD, a large proportion of persons with diabetes have uncontrolled blood pressure (9 ), dyslipidemia (9 ), and hyperglycemia (9 ) and do not take aspirin (5 ). Additional research is needed to learn how to improve the process and outcomes of care among persons with diabetes. A concerted effort among health-care providers, public health officials, members of community-based organizations, and patients and their families will be necessary to reduce major CVD among persons with diabetes.
The high rate of major CVD among women with diabetes of all ages indicates that strategies for CVD risk reduction should be offered to all women with diabetes. Rate differences in hospital discharges increased with age, indicating that the effects of successful CVD prevention efforts should increase with age for women with diabetes.
The findings in this study are subject to at least six limitations. First, NHIS data on history of diabetes and major CVD are self-reported; however, studies comparing self-reported with physician-reported medical history data have found no difference in the prevalence of diabetes, and self-reported prevalence rates for CVD and hypertension were only slightly higher than physician-reported rates (10 ). Second, because NHIS excludes the institutionalized population, the number of persons with major CVD and diabetes is underestimated. Third, because NIS data represent hospital discharges and not individual persons, patients with multiple CVD hospitalizations within 1 year were counted multiple times; this might have resulted in an overestimation of hospital discharge rates. Fourth, by not including data from long-term and federal hospitals, NIS underestimates major CVD hospitalizations. Fifth, race/ethnicity is missing for approximately 20% of hospital discharges in NIS; four states that contribute to NIS provided no information on race/ethnicity, and one state provided race/ethnicity for approximately 25% of discharges. Therefore, race/ethnicity rates might be underreported and might be biased if disease patterns vary differentially across the reporting and nonreporting states. Finally, because the NIS sample comprised only 22 states, these data might be biased and might differ from estimates of the National Hospital Discharge Sample (NHDS). However, in 1997, both data sources produced similar estimates of discharges with diabetes as the primary diagnosis (AHRQ, unpublished data, 2000).
CDC has published Diabetes and Women's Health Across the Life Stages: A Public Health Perspective that addresses the need for more research to gain a better understanding of the excess risk for major CVD among women with diabetes and to identify modifiable behavior and other determinants that can be used to develop effective interventions. The National Institutes of Health and CDC also have implemented the National Diabetes Education Program that includes public and private partners in the treatment and outcome of persons with diabetes; this program promotes early diagnosis to reduce morbidity and mortality associated with diabetes. CDC supports diabetes control programs in every state and, in 1998, initiated support for cardiovascular health promotion, disease prevention, and control programs. Since 1999, CDC has supported REACH 2010 (Racial and Ethnic Approaches to Community Health) to eliminate racial/ethnic disparities in numerous health areas, including diabetes and CVD. Additional information on diabetes is available at .
# Hospital Discharge Rates for Nontraumatic Lower Extremity Amputation by Diabetes Status -United States, 1997
Lower extremity amputation (LEA) is a costly and disabling procedure that disproportionately affects persons with diabetes (1,2 ). One of the national health objectives for 2000 was to reduce the LEA rate from a 1991 baseline of approximately eight per 1,000 persons with diabetes to a target of approximately five per 1,000 persons with diabetes. Review of 1996 data indicated an LEA rate of approximately 11 per 1,000 persons with diabetes. To estimate the national rates of hospital discharges for LEA among persons with and without diabetes and to assess the excess risk for LEA among persons with diabetes, CDC and the Agency for Healthcare Research and Quality (AHRQ) analyzed data from the 1997 Nationwide Inpatient Sample (NIS) and the 1997 National Health Interview Survey (NHIS). This report summarizes the findings of the analysis, which indicated that the age-adjusted LEA hospital discharge rate among persons with diabetes was 28 times that of persons without diabetes. This higher rate underscores the need to increase efforts to prevent risk factors (e.g., peripheral vascular disease, neuropathy, and infection) that result in LEA among persons with diabetes.
Hospital discharges were estimated from NIS, a stratified probability sample of hospitals in 22 states. Discharges from these states represented approximately 60% of all discharges in the United States. Sample data were weighted using the American Hospital Association Annual Survey of Hospitals to approximate discharges from all U.S. acute-care community hospitals. LEA discharges were defined using International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM ) codes 84.10-84.19 (traumatic LEA codes 895-897 were excluded). Diabetes-related LEA discharges were identified as discharges that included ICD-9-CM code 250 as one of the listed discharge diagnoses. LEA hospital discharge rates were calculated for populations with and without diabetes. Estimates of the number of persons with and without diabetes were obtained from the 1997 NHIS, an ongoing, nationally representative survey providing information about the health of the noninstitutionalized U.S. civilian population (3 ). SUDAAN was used to calculate estimates and 95% confidence intervals (CIs) of NIS and NHIS data. The 2000 U.S. standard population was used to adjust LEA rates by age. The rate ratio was calculated by dividing the LEA rate for persons with diabetes by the rate for persons without diabetes. The rate difference was defined as the difference in LEA rates between the two populations. The significance of trends by age was assessed for LEA rates, rate ratios, and rate differences, and t-tests in SUDAAN were used to determine the significance of the difference in mean age by diabetes status and the ageadjusted rate differences by sex and race. Z-tests were used to assess the difference between age-adjusted rates by diabetes status among all sex and race groups.
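The rate ratio reported below (e.g., 28, 95% CI=24-31) compares two rates, and one common way to attach a confidence interval is on the log scale, as sketched here with invented counts. This is a generic epidemiologic calculation assuming Poisson counts, not the survey-design-based variance estimation that SUDAAN performs for NIS/NHIS data.

```python
import math

# Hypothetical LEA discharge counts and person denominators for two groups.
events_dm, pop_dm = 550, 100_000        # with diabetes
events_no, pop_no = 200, 1_000_000      # without diabetes

rate_dm = events_dm / pop_dm
rate_no = events_no / pop_no
rr = rate_dm / rate_no                  # rate ratio (unitless)

# Approximate 95% CI on the log scale, assuming Poisson event counts.
se_log_rr = math.sqrt(1 / events_dm + 1 / events_no)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(round(rr, 1), (round(lo, 1), round(hi, 1)))  # 27.5 (23.4, 32.3)
```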
In 1997, 131,218 hospital discharges had an LEA discharge diagnosis code; 87,720 (67%) of these were related to diabetes (Table 1). Among persons with diabetes, 66.7% of LEA hospitalizations were paid by Medicare and an additional 8.1% were paid by Medicaid. Among persons with diabetes, approximately 52% of amputations occurred at or below the foot, and among persons without diabetes, approximately 70% occurred between the ankle and the knee or higher. Patients with diabetes-related LEA hospital discharges had a mean age of 66 years (95% CI=±0.3 years), and the mean age of LEA discharges not related to diabetes was 71 years (95% CI=±0.7 years). Rate differences ranged from 3.4 to 13.8 per 1,000 persons in those aged <45 years and >75 years, respectively.
The age-adjusted LEA rate for persons with diabetes (5.5 per 1,000 persons with diabetes) was 28 (95% CI=24-31) times that of persons without diabetes (0.2 per 1,000 persons without diabetes). Regardless of diabetes status, these rates were higher for men than women (p<0.0001) and higher for non-Hispanic blacks than Hispanics or non-Hispanic whites (p<0.05) (Figure 1). Age-adjusted LEA rates were much higher for persons with diabetes for both sexes and all racial/ethnic populations (p<0.0001).

Editorial Note: The findings in this report indicate that LEAs occur at a much higher rate among persons with diabetes and that diabetes causes approximately 67% of LEAs. Among persons with diabetes, LEA rates were highest among men, non-Hispanic blacks, and the elderly. These findings indicate that LEAs might increase as the U.S. population ages and as the prevalence of diabetes increases. Because approximately 75% of LEA hospitalizations are paid by Medicare or Medicaid, the increase in prevalence will place a large financial burden on these public health systems. Among persons with diabetes, LEAs result from the single or combined effects of peripheral vascular disease, peripheral neuropathy, and infection (1,4 ). Foot deformities and ulcers occurring as a consequence of neuropathy and/or peripheral vascular disease, minor trauma, and poor foot care also might contribute to LEAs (1,5 ).
The findings in this study are subject to at least five limitations. First, because NIS data represent hospital discharges and not individual persons, patients with multiple amputations within 1 year were counted multiple times; this might have resulted in an overestimation of hospital discharge rates. Second, because NIS data do not include LEAs that occurred in federal hospitals and outpatient settings, the analysis underestimates the total number of LEA discharges that occurred nationally. Third, because NHIS is representative of the noninstitutionalized civilian population, the total population with or without diabetes was underestimated. Fourth, race/ethnicity data are missing for approximately 20% of the hospital discharges in NIS data; four states contributing to NIS provided no race/ethnicity data and one state provided race/ethnicity information for approximately 25% of discharges. Therefore, race/ethnicity-specific rates are underreported and may be biased if race/ethnicity disease patterns vary across reporting and nonreporting states. Finally, because the NIS sample was constructed from only 22 states, these data might be biased and might differ from estimates of the National Hospital Discharge Sample (NHDS). However, in 1997, both data sources produced similar estimates of discharges with diabetes as the primary diagnosis (AHRQ, unpublished data, 2000). Numbers for other racial/ethnic groups were too small for meaningful analysis.
Serious foot conditions or LEA can be decreased by 44%-85% in persons with diabetes (5 ). Proper footwear can lower abnormal pressure and protect the foot from calluses and ulcers, precursors of LEA (6 ). Education intervention, multidisciplinary care, and insurance coverage for therapeutic shoes are effective in reducing diabetes-related LEA (2 ). Interventions also include early detection of feet at risk through regular foot examination, knowledge of foot hygiene, nonweight-bearing exercise, and provider education on screening examinations for high-risk foot conditions (6,7 ). Good glycemic control can reduce the development of neuropathy, a high-risk condition for LEA (8,9 ).
Because no nationally representative data on lower extremity disease and its risk factors exist, in 1999, CDC and the National Heart, Lung, and Blood Institute of the National Institutes of Health added to the National Health and Nutrition Examination Survey a lower extremity disease examination component for peripheral vascular disease, peripheral neuropathy, and foot deformities, ulcers, and amputations. This component will allow national estimates of the extent of lower extremity disease and identification of its risk factors. It also will increase an understanding of racial/ethnic differences in lower extremity disease and provide information to clinicians and public health providers to develop preventive care and community-based interventions. Materials designed to make good foot care an essential part of diabetes care among health-care providers and persons with diabetes are available at .
# Weekly Update: West Nile Virus Activity -United States, October 24-30, 2001
The following report summarizes West Nile virus (WNV) surveillance data reported to CDC through ArboNET and verified by states and other jurisdictions as of October 30, 2001.
During the week of October 24-30, no human cases of WNV encephalitis or meningitis were reported. During the same period, WNV infections were reported in 200 crows, 43 other birds, and eight horses. A total of 11 WNV-positive mosquito pools were reported in five states (Georgia, Kentucky, New Jersey, Ohio, and Virginia).
During 2001, a total of 37 human cases of WNV encephalitis or meningitis have been reported in Florida (ten), Maryland (six), New Jersey (six), New York (six), Connecticut (five), Pennsylvania (three), and Georgia (one); one death occurred in Georgia. Among these 37 cases, 20 (54%) were in men; the median age was 69 years (range: 36-81 years); and dates of illness onset ranged from July 13 to October 7. A total of 3,996 crows and 1,437 other birds with WNV infection were reported from 25 states and the District of Columbia (Figure 1); 159 WNV infections in other animals (all horses) were reported from 13 states (Alabama, Connecticut, Florida, Georgia, Kentucky, Louisiana, Massachusetts, Mississippi, New York, North Carolina, Pennsylvania, Tennessee, and Virginia); and 736 WNV-positive mosquito pools were reported from 15 states (Connecticut, Florida, Georgia, Illinois, Kentucky, Maryland, Massachusetts, Michigan, New Hampshire, New Jersey, New York, Ohio, Pennsylvania, Rhode Island, and Virginia).
Additional information about WNV activity is available at and .
# FIGURE 1. Areas reporting West Nile virus (WNV) activity - United States, 2001
# Notice to Readers
# Updated Recommendations for Antimicrobial Prophylaxis Among Asymptomatic Pregnant Women After Exposure to Bacillus anthracis
The antimicrobial of choice for initial prophylactic therapy among asymptomatic pregnant women exposed to Bacillus anthracis is ciprofloxacin, 500 mg twice a day for 60 days. In instances in which the specific B. anthracis strain has been shown to be penicillin-sensitive, prophylactic therapy with amoxicillin, 500 mg three times a day for 60 days, may be considered. Isolates of B. anthracis implicated in the current bioterrorist attacks are susceptible to penicillin in laboratory tests, but may contain penicillinase activity (2 ). Penicillins are not recommended for treatment of anthrax, where such penicillinase activity may decrease their effectiveness. However, penicillins are likely to be effective for preventing anthrax, a setting where relatively few organisms are present. Doxycycline should be used with caution in asymptomatic pregnant women and only when there are contraindications to the use of other appropriate antimicrobial drugs.
Pregnant women are likely to be among the increasing number of persons receiving antimicrobial prophylaxis for exposure to B. anthracis. Clinicians, public health officials, and women who are candidates for treatment should weigh the possible risks and benefits to the mother and fetus when choosing an antimicrobial for postexposure anthrax prophylaxis. Women who become pregnant while taking antimicrobial prophylaxis should continue the medication and consult a health-care provider or public health official to discuss these issues.
No formal clinical studies of ciprofloxacin have been performed during pregnancy. Based on limited human information, ciprofloxacin use during pregnancy is unlikely to be associated with a high risk for structural malformations in fetal development. Data on ciprofloxacin use during pregnancy from the Teratogen Information System indicate that therapeutic doses during pregnancy are unlikely to pose a substantial teratogenic risk, but data are insufficient to determine that there is no risk (1 ). Doxycycline is a tetracycline antimicrobial. Potential dangers of tetracyclines to fetal development include risk for dental staining of the primary teeth and concern about possible depressed bone growth and defective dental enamel. Rarely, hepatic necrosis has been reported in pregnant women using tetracyclines. Penicillins generally are considered safe for use during pregnancy and are not associated with an increased risk for fetal malformation. Pregnant women should be advised that congenital malformations occur in approximately 2%-3% of births, even in the absence of known teratogenic exposure.
Additional information about the treatment of anthrax infection is available at .
# Notice to Readers
# Interim Recommendations for Protecting Workers from Exposure to Bacillus anthracis in Work Sites in which Mail is Handled or Processed
CDC has developed interim recommendations to assist personnel responsible for occupational health and safety in developing a comprehensive program to reduce potential cutaneous or inhalational exposures to Bacillus anthracis spores among workers in work sites in which mail is handled or processed. Such work sites include post offices, mail distribution/handling centers, bulk mail centers, air mail facilities, priority mail processing centers, public and private mail rooms, and other settings in which workers are responsible for handling and processing mail. The recommendations are based on the limited information available on methods to avoid infection and on the effectiveness of various prevention strategies. These recommendations will be updated as new information becomes available.
The recommendations are divided into the following hierarchical categories describing measures that should be implemented in distribution/handling centers to prevent potential exposures to B. anthracis spores:
- Engineering controls to prevent or capture aerosolized spores
- Administrative controls to limit the number of persons potentially exposed to spores
- Housekeeping controls to further reduce the spread of spores
- Personal protective equipment for workers to prevent cutaneous and inhalational exposure to spores

These control measures should be selected on the basis of an initial work site evaluation that focuses on determining which processes, operations, jobs, or tasks would be most likely to result in an exposure if a contaminated envelope or package enters the work site. The complete interim recommendations are available online.
# Notice to Readers
# National Diabetes Awareness Month -November 2001
November is National Diabetes Awareness Month. During 1998 in the United States, an estimated 15.7 million persons had diabetes (1). From 1990 to 2000, an increase of 49% occurred in the prevalence of diagnosed diabetes and gestational diabetes in U.S. adults (2); however, lifestyle changes, including weight control and regular physical activity, can prevent or delay the onset of type 2 diabetes, even in high-risk persons (3).
During November, 59 state and territorial diabetes control programs, other partners, and CDC will highlight activities that increase awareness of the Initiative on Diabetes and Women's Health and of the need for persons with diabetes to receive influenza and pneumococcal vaccines. Persons with diabetes should receive pneumococcal and annual influenza vaccinations because they are more likely than persons without diabetes to die from complications of influenza and pneumonia (4). In 1997, only half of adults with diabetes received an annual influenza vaccination, and one third received a pneumococcal vaccine (5). The Initiative's report, Diabetes and Women's Health Across the Life Stages: A Public Health Perspective, is the first major publication to address the unique and serious impact diabetes has on women throughout life and to address the public health implications of these issues (6). The publication presents 1) trends in risk factors for diabetes and its complications during adolescence; 2) the increased risk for offspring to develop diabetes associated with intrauterine exposure to hyperglycemia; 3) the effect of menopause on health status; and 4) the increase in poverty and disability for older women.
Additional information about diabetes is available from CDC, telephone (877) 232-3422, or e-mail [email protected].
The recommended immunization schedules for persons aged 0-18 years and the catch-up immunization schedule for 2007 have been approved by the Advisory Committee on Immunization Practices, the American Academy of Pediatrics, and the American Academy of Family Physicians. The standard MMWR footnote format has been modified for publication of this schedule.
The Advisory Committee on Immunization Practices (ACIP) periodically reviews the recommended immunization schedule for persons aged 0-18 years to ensure that the schedule is current with changes in vaccine formulations and reflects revised recommendations for the use of licensed vaccines, including those newly licensed.
The changes to the previous childhood and adolescent immunization schedule, published January 2006 (1), are as follows:
- The new rotavirus vaccine (Rota) is recommended in a 3-dose schedule at ages 2, 4, and 6 months. The first dose should be administered at ages 6 weeks through 12 weeks, with subsequent doses administered at 4-10-week intervals. Rotavirus vaccination should not be initiated for infants aged >12 weeks and should not be administered after age 32 weeks (2). (A schedule-checking sketch follows this list.)
- The influenza vaccine is now recommended for all children aged 6-59 months (3).
- Varicella vaccine recommendations are updated. The first dose should be administered at age 12-15 months, and a newly recommended second dose should be administered at age 4-6 years (4).
- The new human papillomavirus vaccine (HPV) is recommended in a 3-dose schedule with the second and third doses administered 2 and 6 months after the first dose. Routine vaccination with HPV is recommended for females aged 11-12 years; the vaccination series can be started in females as young as age 9 years; and catch-up vaccination is recommended for females aged 13-26 years who have not been vaccinated previously or who have not completed the full vaccine series (5).
- The main change to the format of the schedule is the division of the recommendation into two schedules: one schedule for persons aged 0-6 years (Figure 1) and another for persons aged 7-18 years (Figure 2). Special populations are represented with purple bars; the 11-12 years assessment is emphasized with the bold, capitalized fonts in the title of that column. Rota, HPV, and varicella vaccines are incorporated in the catch-up immunization schedule (Table).
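The rotavirus age windows noted above lend themselves to a simple programmatic check. The following sketch is illustrative only, not CDC software; the function name and warning strings are hypothetical, while the thresholds (first dose at 6-12 weeks, 4-10-week intervals, no dose after 32 weeks) come from the footnote text.

```python
from datetime import date, timedelta

# Age windows from the 2007 Rota footnote (illustrative helper, not CDC software).
FIRST_DOSE_MIN = timedelta(weeks=6)
FIRST_DOSE_MAX = timedelta(weeks=12)
INTERVAL_MIN = timedelta(weeks=4)
INTERVAL_MAX = timedelta(weeks=10)
SERIES_END = timedelta(weeks=32)

def check_rota_schedule(birth: date, doses: list[date]) -> list[str]:
    """Return a list of warnings for a proposed 3-dose rotavirus series."""
    warnings = []
    if not doses:
        return ["no doses recorded"]
    age_at_first = doses[0] - birth
    if not (FIRST_DOSE_MIN <= age_at_first <= FIRST_DOSE_MAX):
        warnings.append("first dose outside the 6-12-week window; do not initiate")
    for i, (prev, cur) in enumerate(zip(doses, doses[1:]), start=2):
        gap = cur - prev
        if not (INTERVAL_MIN <= gap <= INTERVAL_MAX):
            warnings.append(f"dose {i} not 4-10 weeks after dose {i-1}")
    if doses[-1] - birth > SERIES_END:
        warnings.append("a dose falls after 32 weeks of age; do not administer")
    return warnings

# Example: infant with doses at 8, 16, and 24 weeks of age -- all in window.
birth = date(2007, 3, 1)
print(check_rota_schedule(birth, [birth + timedelta(weeks=w) for w in (8, 16, 24)]))
```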
# Vaccine Information Statements
The National Childhood Vaccine Injury Act requires that health-care providers provide parents or patients with copies of Vaccine Information Statements before administering each dose of the vaccines listed in the schedule. Additional information is available from state health departments and from CDC.
Detailed recommendations for using vaccines are available from package inserts, ACIP statements on specific vaccines, and the 2003 Red Book (6). ACIP statements for each recommended childhood vaccine are available from CDC. In addition, guidance for obtaining and completing a Vaccine Adverse Event Reporting System form is available at http://www.vaers.hhs.gov or by telephone, 800-822-7967.
# Hepatitis B vaccine (HepB). (Minimum age: birth)
At birth:
- Administer monovalent HepB to all newborns before hospital discharge.
- If mother is hepatitis B surface antigen (HBsAg)-positive, administer HepB and 0.5 mL of hepatitis B immune globulin (HBIG) within 12 hours of birth.
- If mother's HBsAg status is unknown, administer HepB within 12 hours of birth. Determine the HBsAg status as soon as possible and, if HBsAg-positive, administer HBIG (no later than age 1 week).
- If mother is HBsAg-negative, the birth dose can be delayed only with a physician's order and the mother's negative HBsAg laboratory report documented in the infant's medical record. (The branching is sketched in code after this list.)
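The birth-dose rules above amount to a small decision table keyed on maternal HBsAg status. A minimal sketch follows; the function name, status encoding, and returned strings are hypothetical conveniences, not part of the schedule itself.

```python
def hepb_birth_dose_plan(maternal_hbsag: str) -> list[str]:
    """Map maternal HBsAg status ("positive", "negative", or "unknown",
    an assumed encoding) to the birth-dose actions described above."""
    if maternal_hbsag == "positive":
        return ["administer monovalent HepB within 12 hours of birth",
                "administer 0.5 mL HBIG within 12 hours of birth"]
    if maternal_hbsag == "unknown":
        return ["administer monovalent HepB within 12 hours of birth",
                "determine mother's HBsAg status as soon as possible",
                "if mother is HBsAg-positive, give HBIG no later than age 1 week"]
    # HBsAg-negative mother: the birth dose may be delayed only with a
    # physician's order and the documented negative laboratory report.
    return ["administer monovalent HepB before hospital discharge",
            "delay only with physician's order and documented negative report"]

print(hepb_birth_dose_plan("unknown"))
```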
# After the birth dose:
- The HepB series should be completed with either monovalent HepB or a combination vaccine containing HepB. The second dose should be administered at age 1-2 months. The final dose should be administered at age ≥24 weeks. Infants born to HBsAg-positive mothers should be tested for HBsAg and antibody to HBsAg after completion of ≥3 doses of a licensed HepB series, at age 9-18 months (generally at the next well-child visit).
# 4-month dose:
- It is permissible to administer 4 doses of HepB when combination vaccines are administered after the birth dose. If monovalent HepB is used for doses after the birth dose, a dose at age 4 months is not needed.
# Rotavirus vaccine (Rota). (Minimum age: 6 weeks)
- Administer the first dose at age 6-12 weeks. Do not start the series later than age 12 weeks.
- Administer the final dose in the series by age 32 weeks. Do not administer a dose later than age 32 weeks.
- Data on safety and efficacy outside of these age ranges are insufficient.
# Diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP).
(Minimum age: 6 weeks)
- The fourth dose of DTaP may be administered as early as age 12 months, provided 6 months have elapsed since the third dose.
- Administer the final dose in the series at age 4-6 years.
# Haemophilus influenzae type b conjugate vaccine (Hib). (Minimum age: 6 weeks)
# Human papillomavirus vaccine (HPV). (Minimum age: 9 years)
- Administer the first dose of the HPV vaccine series to females at age 11-12 years.
- Administer the second dose 2 months after the first dose and the third dose 6 months after the first dose.
- Administer the HPV vaccine series to females at age 13-18 years if not previously vaccinated.
# Hepatitis B vaccine (HepB). (Minimum age: birth)
- Administer the 3-dose series to those who were not previously vaccinated.
- A 2-dose series of Recombivax HB® is licensed for children aged 11-15 years.
# Inactivated poliovirus vaccine (IPV). (Minimum age: 6 weeks)
- For children who received an all-IPV or all-oral poliovirus (OPV) series, a fourth dose is not necessary if the third dose was administered at age ≥4 years.
- If both OPV and IPV were administered as part of a series, a total of 4 doses should be administered, regardless of the child's current age.
# Measles, mumps, and rubella vaccine (MMR). (Minimum age: 12 months)
- If not previously vaccinated, administer 2 doses of MMR during any visit, with ≥4 weeks between the doses.

# Varicella vaccine. (Minimum age: 12 months)
- Administer 2 doses of varicella vaccine to persons without evidence of immunity.
- Administer 2 doses of varicella vaccine, at least 3 months apart, to persons aged <13 years; do not repeat the second dose if it was administered ≥28 days after the first dose.
- Administer 2 doses of varicella vaccine to persons aged ≥13 years at least 4 weeks apart.
# Rotavirus vaccine (Rota). (Minimum age: 6 weeks)
- Do not start the series later than age 12 weeks.
- Administer the final dose in the series by age 32 weeks. Do not administer a dose later than age 32 weeks.
- Data on safety and efficacy outside of these age ranges are insufficient.
# Diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP).
(Minimum age: 6 weeks)
- The fifth dose is not necessary if the fourth dose was administered at age ≥4 years.
- DTaP is not indicated for persons aged ≥7 years.
# Haemophilus influenzae type b conjugate vaccine (Hib). (Minimum age: 6 weeks)
- Vaccine is not generally recommended for children aged ≥5 years.
- If current age is <12 months and the first 2 doses were PRP-OMP (PedvaxHIB® or ComVax®), the third (and final) dose should be administered at age 12-15 months and at least 8 weeks after the second dose.
- If the first dose was administered at age 7-11 months, administer 2 doses separated by 4 weeks plus a booster at age 12-15 months.
# Pneumococcal conjugate vaccine (PCV). (Minimum age: 6 weeks)
- Vaccine is not generally recommended for children aged ≥5 years.
# Inactivated poliovirus vaccine (IPV). (Minimum age: 6 weeks)
- For children who received an all-IPV or all-oral poliovirus (OPV) series, a fourth dose is not necessary if the third dose was administered at age ≥4 years.
- If both OPV and IPV were administered as part of a series, a total of 4 doses should be administered, regardless of the child's current age.
# Measles, mumps, and rubella vaccine (MMR). (Minimum age: 12 months)
# Human papillomavirus vaccine (HPV). (Minimum age: 9 years)
- Administer the HPV vaccine series to females at age 13-18 years if not previously vaccinated.
The table below provides catch-up schedules and minimum intervals between doses for children whose vaccinations have been delayed. A vaccine series does not need to be restarted, regardless of the time that has elapsed between doses. Use the section appropriate for the child's age.
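Because a series is never restarted, a catch-up date depends only on the most recent dose given and the minimum interval to the next dose. A minimal sketch of that rule follows; the function name is hypothetical and the 8-week interval in the example is an assumed placeholder, since the actual minimum intervals are vaccine- and dose-specific and come from the table itself.

```python
from datetime import date, timedelta

def next_dose_due(last_dose: date, min_interval_weeks: int) -> date:
    """Earliest valid date for the next dose: the series is never restarted,
    so only the most recent dose and the minimum interval matter."""
    return last_dose + timedelta(weeks=min_interval_weeks)

# Example: a dose given years ago with an assumed 8-week minimum interval;
# the next dose is simply due now, and the series is not restarted.
print(next_dose_due(date(2005, 1, 15), 8))  # 2005-03-12 (already past)
```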
Information about reporting reactions after immunization is available online or by telephone via the 24-hour national toll-free information line, 800-822-7967. Suspected cases of vaccine-preventable diseases should be reported to the state or local health department. Additional information, including precautions and contraindications for immunization, is available from the National Center for Immunization and Respiratory Diseases by telephone, 800-CDC-INFO (800-232-4636).
"id": "33440970e93a38b5817b65975158bc730d3092c9",
"source": "cdc",
"title": "None",
"url": "None"
} |
† National Nosocomial Infection Surveillance definition: a nonhuman-derived implantable foreign body (e.g., prosthetic heart valve, nonhuman vascular graft, mechanical heart, or hip prosthesis) that is permanently placed in a patient during surgery.
‡ If the area around a stab wound becomes infected, it is not an SSI. It is considered a skin or soft tissue infection, depending on its depth.

# Table of Contents
Executive Summary
Table 1. Criteria for Defining a Surgical Site Infection (SSI)
Table 2. Site-Specific Classifications of Organ/Space Surgical Site Infection
Table 3. Distribution of Pathogens Isolated From Surgical Site Infections, National Nosocomial Infections Surveillance System, 1986 to 1996
Table 4. Operations, Likely Surgical Site Infection (SSI) Pathogens, and References on Use of Antimicrobial Prophylaxis
Table 5. Patient and Operation Characteristics That May Influence the Risk of Surgical Site Infection Development
Table 6. Mechanism and Spectrum of Activity of Antiseptic Agents Commonly Used for Preoperative Skin Preparation and Surgical Scrubs
Table 7. Surgical Wound Classification
Table 8. Parameters for Operating Room Ventilation, American Institute of Architects, 1996
Table 9. Parameters for Flash Sterilization Cycles, Association for the Advancement of Medical Instrumentation
Table 10. Physical Status Classification, American Society of Anesthesiologists
# EXECUTIVE SUMMARY
The "Guideline for Prevention of Surgical Site Infection, 1999" presents the Centers for Disease Control and Prevention (CDC)'s recommendations for the prevention of surgical site infections (SSIs), formerly called surgical wound infections. This two-part guideline updates and replaces previous guidelines. 1,2 Part I, "Surgical Site Infection: An Over view," describes the epidemiology, definitions, microbiology, pathogenesis, and surveillance of SSIs. Included is a detailed discussion of the pre-, intra-, and postoperative issues relevant to SSI genesis.
Part II, "Recommendations for Prevention of Surgical Site Infection," represents the consensus of the Hospital Infection Control Practices Advisory Committee (HICPAC) regarding strategies for the prevention of SSIs. 3 Whenever possible, the recommendations in Part II are based on data from well-designed scientific studies. However, there are a limited number of studies that clearly validate risk factors and prevention measures for SSI. By necessity, available studies have often been conducted in narrowly defined patient populations or for specific kinds of operations, making generalization of their findings to all specialties and types of operations potentially problematic. This is especially true regarding the implementation of SSI prevention measures. Finally, some of the infection control practices routinely used by surgical teams cannot be rigorously studied for ethical or logistical reasons (e.g., wearing vs not wearing gloves). Thus, some of the recommendations in Part II are based on a strong theoretical rationale and suggestive evidence in the absence of confirmatory scientific knowledge.
It has been estimated that approximately 75% of all operations in the United States will be performed in "ambulatory," "same-day," or "outpatient" operating rooms by the turn of the century. 4 In recommending various SSI prevention methods, this document makes no distinction between surgical care delivered in such settings and that provided in conventional inpatient operating rooms. This document is primarily intended for use by surgeons, operating room nurses, postoperative inpatient and clinic nurses, infection control professionals, anesthesiologists, healthcare epidemiologists, and other personnel directly responsible for the prevention of nosocomial infections. This document does not:
- Specifically address issues unique to burns, trauma, transplant procedures, or transmission of bloodborne pathogens from healthcare worker to patient, nor does it specifically address details of SSI prevention in pediatric surgical practice. It has been recently shown in a multicenter study of pediatric surgical patients that characteristics related to the operations are more important than those related to the physiologic status of the patients. 5 In general, all SSI prevention measures effective in adult surgical care are indicated in pediatric surgical care.
- Specifically address procedures performed outside of the operating room (e.g., endoscopic procedures), nor does it provide guidance for infection prevention for invasive procedures such as cardiac catheterization or interventional radiology. Nonetheless, it is likely that many SSI prevention strategies also could be applied or adapted to reduce infectious complications associated with these procedures.
- Specifically recommend SSI prevention methods unique to minimally invasive operations (i.e., laparoscopic surgery). Available SSI surveillance data indicate that laparoscopic operations generally have a lower or comparable SSI risk when contrasted to open operations. SSI prevention measures applicable in open operations (e.g., open cholecystectomy) are indicated for their laparoscopic counterparts (e.g., laparoscopic cholecystectomy).
- Recommend specific antiseptic agents for patient preoperative skin preparations or for healthcare worker hand/forearm antisepsis. Hospitals should choose from products recommended for these activities in the latest Food and Drug Administration (FDA) monograph. 12
# I. SURGICAL SITE INFECTION (SSI): AN OVERVIEW
# A. INTRODUCTION
Before the mid-19th century, surgical patients commonly developed postoperative "irritative fever," followed by purulent drainage from their incisions, overwhelming sepsis, and often death. It was not until the late 1860s, after Joseph Lister introduced the principles of antisepsis, that postoperative infectious morbidity decreased substantially. Lister's work radically changed surgery from an activity associated with infection and death to a discipline that could eliminate suffering and prolong life.
Currently, in the United States alone, an estimated 27 million surgical procedures are performed each year. 13 The CDC's National Nosocomial Infections Surveillance (NNIS) system, established in 1970, monitors reported trends in nosocomial infections in U.S. acute-care hospitals. Based on NNIS system reports, SSIs are the third most frequently reported nosocomial infection, accounting for 14% to 16% of all nosocomial infections among hospitalized patients. 14 During 1986 to 1996, hospitals conducting SSI surveillance in the NNIS system reported 15,523 SSIs following 593,344 operations (CDC, unpublished data). Among surgical patients, SSIs were the most common nosocomial infection, accounting for 38% of all such infections. Of these SSIs, two thirds were confined to the incision, and one third involved organs or spaces accessed during the operation. When surgical patients with nosocomial SSI died, 77% of the deaths were reported to be related to the infection, and the majority (93%) were serious infections involving organs or spaces accessed during the operation.
In 1980, Cruse estimated that an SSI increased a patient's hospital stay by approximately 10 days and cost an additional $2,000. 15,16 A 1992 analysis showed that each SSI resulted in 7.3 additional postoperative hospital days, adding $3,152 in extra charges. 17 Other studies corroborate that increased length of hospital stay and cost are associated with SSIs. 18,19 Deep SSIs involving organs or spaces, as compared to SSIs confined to the incision, are associated with even greater increases in hospital stays and costs. 20,21 Advances in infection control practices include improved operating room ventilation, sterilization methods, barriers, surgical technique, and availability of antimicrobial prophylaxis. Despite these activities, SSIs remain a substantial cause of morbidity and mortality among hospitalized patients. This may be partially explained by the emergence of antimicrobial-resistant pathogens and the increased numbers of surgical patients who are elderly and/or have a wide variety of chronic, debilitating, or immunocompromising underlying diseases. There also are increased numbers of prosthetic implant and organ transplant operations performed. Thus, to reduce the risk of SSI, a systematic but realistic approach must be applied with the awareness that this risk is influenced by characteristics of the patient, operation, personnel, and hospital.
# B. KEY TERMS USED IN THE GUIDELINE
# Criteria for Defining SSIs
The identification of SSI involves interpretation of clinical and laboratory findings, and it is crucial that a surveillance program use definitions that are consistent and standardized; otherwise, inaccurate or uninterpretable SSI rates will be computed and reported. The CDC's NNIS system has developed standardized surveillance criteria for defining SSIs (Table 1). 22 By these criteria, SSIs are classified as being either incisional or organ/space. Incisional SSIs are further divided into those involving only skin and subcutaneous tissue (superficial incisional SSI) and those involving deeper soft tissues of the incision (deep incisional SSI). Organ/space SSIs involve any part of the anatomy (e.g., organ or space), other than incised body wall layers, that was opened or manipulated during an operation (Figure). Table 2 lists site-specific classifications used to differentiate organ/space SSIs. For example, in a patient who had an appendectomy and subsequently developed an intra-abdominal abscess not draining through the incision, the infection would be reported as an organ/space SSI at the intra-abdominal site. Failure to use objective criteria to define SSIs has been shown to substantially affect reported SSI rates. 23,24 The CDC NNIS definitions of SSIs have been applied consistently by surveillance and surgical personnel in many settings and currently are a de facto national standard. 22,25

FIGURE. Cross-section of abdominal wall depicting CDC classifications of surgical site infection. 22
# Operating Suite
A physically separate area that comprises operating rooms and their interconnecting hallways and ancillary work areas such as scrub sink rooms. No distinction is made between operating suites located in conventional inpatient hospitals and those used for "same-day" surgical care, whether in a hospital or a free-standing facility.
# Operating Room
A room in an operating suite where operations are performed.

# Superficial Incisional SSI

Infection occurs within 30 days after the operation and infection involves only skin or subcutaneous tissue of the incision and at least one of the following:

1. Purulent drainage, with or without laboratory confirmation, from the superficial incision.
2. Organisms isolated from an aseptically obtained culture of fluid or tissue from the superficial incision.
3. At least one of the following signs or symptoms of infection: pain or tenderness, localized swelling, redness, or heat and superficial incision is deliberately opened by surgeon, unless incision is culture-negative.
4. Diagnosis of superficial incisional SSI by the surgeon or attending physician.

Do not report the following conditions as SSI:
1. Stitch abscess (minimal inflammation and discharge confined to the points of suture penetration).
2. Infection of an episiotomy or newborn circumcision site.
3. Infected burn wound.
4. Incisional SSI that extends into the fascial and muscle layers (see deep incisional SSI).

Note: Specific criteria are used for identifying infected episiotomy and circumcision sites and burn wounds. 433
# Deep Incisional SSI
Infection occurs within 30 days after the operation if no implant † is left in place or within 1 year if implant is in place and the infection appears to be related to the operation and infection involves deep soft tissues (e.g., fascial and muscle layers) of the incision and at least one of the following:
1. Purulent drainage from the deep incision but not from the organ/space component of the surgical site.
2. A deep incision spontaneously dehisces or is deliberately opened by a surgeon when the patient has at least one of the following signs or symptoms: fever (>38ºC), localized pain, or tenderness, unless site is culture-negative.
3. An abscess or other evidence of infection involving the deep incision is found on direct examination, during reoperation, or by histopathologic or radiologic examination.
4. Diagnosis of a deep incisional SSI by a surgeon or attending physician.
# Notes:
1. Report infection that involves both superficial and deep incision sites as deep incisional SSI.
2. Report an organ/space SSI that drains through the incision as a deep incisional SSI.
# Organ/Space SSI

Infection occurs within 30 days after the operation if no implant† is left in place or within 1 year if implant is in place and the infection appears to be related to the operation and infection involves any part of the anatomy (e.g., organs or spaces), other than the incision, which was opened or manipulated during an operation and at least one of the following:

1. Purulent drainage from a drain that is placed through a stab wound‡ into the organ/space.
2. Organisms isolated from an aseptically obtained culture of fluid or tissue in the organ/space.
3. An abscess or other evidence of infection involving the organ/space that is found on direct examination, during reoperation, or by histopathologic or radiologic examination.
4. Diagnosis of an organ/space SSI by a surgeon or attending physician.
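At the screening level, the NNIS criteria above reduce to a classification on two axes: the deepest tissue level involved and the time since operation (30 days without an implant, 1 year with one). The sketch below captures only that timing-and-depth skeleton; the full criteria in Table 1 still govern, and the function and key names are illustrative, not CDC definitions.

```python
from datetime import date

def ssi_surveillance_window(operation: date, onset: date, implant: bool) -> bool:
    """True if infection onset falls inside the NNIS surveillance window:
    30 days after the operation, or 1 year if an implant is in place."""
    days = (onset - operation).days
    return 0 <= days <= (365 if implant else 30)

def classify_ssi(depth: str) -> str:
    """Map the deepest tissue level involved to the NNIS SSI category."""
    return {
        "skin_subcutaneous": "superficial incisional SSI",
        "fascia_muscle": "deep incisional SSI",
        "organ_space": "organ/space SSI",
    }.get(depth, "not classifiable from depth alone")

op = date(1999, 4, 1)
if ssi_surveillance_window(op, date(1999, 4, 20), implant=False):
    print(classify_ssi("fascia_muscle"))  # deep incisional SSI
```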
# Surgical Personnel
Any healthcare worker who provides care to surgical patients during the pre-, intra-, or postoperative periods.
# Surgical Team Member
Any healthcare worker in an operating room during the operation who has a surgical care role. Members of the surgical team may be "scrubbed" or not; scrubbed members have direct contact with the sterile operating field or sterile instruments or supplies used in the field (refer to "Preoperative Hand/Forearm Antisepsis" section).
# C. MICROBIOLOGY
According to data from the NNIS system, the distribution of pathogens isolated from SSIs has not changed markedly during the last decade (Table 3). 26,27 Staphylococcus aureus, coagulase-negative staphylococci, Enterococcus spp., and Escherichia coli remain the most frequently isolated pathogens. An increasing proportion of SSIs are caused by antimicrobial-resistant pathogens, such as methicillin-resistant S. aureus (MRSA), 28,29 or by Candida albicans. 30 From 1991 to 1995, the incidence of fungal SSIs among patients at NNIS hospitals increased from 0.1 to 0.3 per 1,000 discharges. 30 The increased proportion of SSIs caused by resistant pathogens and Candida spp. may reflect increasing numbers of severely ill and immunocompromised surgical patients and the impact of widespread use of broad-spectrum antimicrobial agents.
Outbreaks or clusters of SSIs have also been caused by unusual pathogens, such as Rhizopus oryzae, Clostridium perfringens, Rhodococcus bronchialis, Nocardia farcinica, Legionella pneumophila and Legionella dumoffii, and Pseudomonas multivorans. These rare outbreaks have been traced to contaminated adhesive dressings, 31 elastic bandages, 32 colonized surgical personnel, 33,34 tap water, 35 or contaminated disinfectant solutions. 36 When a cluster of SSIs involves an unusual organism, a formal epidemiologic investigation should be conducted.
# D. PATHOGENESIS
Microbial contamination of the surgical site is a necessary precursor of SSI. The risk of SSI can be conceptualized according to the following relationship 37,38:

Risk of SSI = (dose of bacterial contamination × virulence) / (resistance of the host patient)

Quantitatively, it has been shown that if a surgical site is contaminated with >10^5 microorganisms per gram of tissue, the risk of SSI is markedly increased. 39 However, the dose of contaminating microorganisms required to produce infection may be much lower when foreign material is present at the site (i.e., 100 staphylococci per gram of tissue introduced on silk sutures). Microorganisms may contain or produce toxins and other substances that increase their ability to invade a host, produce damage within the host, or survive on or in host tissue. For example, many gram-negative bacteria produce endotoxin, which stimulates cytokine production. In turn, cytokines can trigger the systemic inflammatory response syndrome that sometimes leads to multiple system organ failure. One of the most common causes of multiple system organ failure in modern surgical care is intraabdominal infection. 46,47 Some bacterial surface components, notably polysaccharide capsules, inhibit phagocytosis, 48 a critical and early host defense response to microbial contamination. Certain strains of clostridia and streptococci produce potent exotoxins that disrupt cell membranes or alter cellular metabolism. 49 A variety of microorganisms, including gram-positive bacteria such as coagulase-negative staphylococci, produce glycocalyx and an associated component called "slime," which physically shields bacteria from phagocytes or inhibits the binding or penetration of antimicrobial agents. 56 Although these and other virulence factors are well defined, their mechanistic relationship to SSI development has not been fully determined.
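For readers who prefer the relationship set as a formula, a LaTeX rendering of the same proportionality follows; it is a schematic summary of the text above, not a fitted or validated risk model.

```latex
% Conceptual SSI risk relationship, per the text above (schematic only).
\[
  \text{Risk of SSI} \;=\;
  \frac{\text{dose of bacterial contamination} \times \text{virulence}}
       {\text{resistance of the host patient}},
  \qquad \text{risk rises markedly when dose} > 10^{5}\ \text{organisms/g tissue}
\]
```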
For most SSIs, the source of pathogens is the endogenous flora of the patient's skin, mucous membranes, or hollow viscera. 57 When mucous membranes or skin is incised, the exposed tissues are at risk for contamination with endogenous flora. 58 These organisms are usually aerobic gram-positive cocci (e.g., staphylococci), but may include fecal flora (e.g., anaerobic bacteria and gram-negative aerobes) when incisions are made near the perineum or groin. When a gastrointestinal organ is opened during an operation and is the source of pathogens, gram-negative bacilli (e.g., E. coli), gram-positive organisms (e.g., enterococci), and sometimes anaerobes (e.g., Bacteroides fragilis) are the typical SSI isolates. Table 4 lists operations and the likely SSI pathogens associated with them. Seeding of the operative site from a distant focus of infection can be another source of SSI pathogens, particularly in patients who have a prosthesis or other implant placed during the operation. Such devices provide a nidus for attachment of the organism. 50 Exogenous sources of SSI pathogens include surgical personnel (especially members of the surgical team), the operating room environment (including air), and all tools, instruments, and materials brought to the sterile field during an operation (refer to "Intraoperative Issues" section). Exogenous flora are primarily aerobes, especially gram-positive organisms (e.g., staphylococci and streptococci). Fungi from endogenous and exogenous sources rarely cause SSIs, and their pathogenesis is not well understood. 79
# E. RISK AND PREVENTION
The term risk factor has a particular meaning in epidemiology and, in the context of SSI pathophysiology and prevention, strictly refers to a variable that has a significant, independent association with the development of SSI after a specific operation. Risk factors are identified by multivariate analyses in epidemiologic studies. Unfortunately, the term risk factor often is used in the surgical literature in a broad sense to include patient or operation features which, although associated with SSI development in univariate analysis, are not necessarily independent predictors. 80 The literature cited in the sections that follow includes risk factors identified by both univariate and multivariate analyses.
Table 5 lists patient and operation characteristics that may influence the risk of SSI development. These characteristics are useful in two ways: (1) they allow stratification of operations, making surveillance data more comprehensible; and, (2) knowledge of risk factors before certain operations may allow for targeted prevention measures. For example, if it is known that a patient has a remote site infection, the surgical team may reduce SSI risk by scheduling an operation after the infection has resolved.
An SSI prevention measure can be defined as an action or set of actions intentionally taken to reduce the risk of an SSI. Many such techniques are directed at reducing opportunities for microbial contamination of the patient's tissues or sterile surgical instruments; others are adjunctive, such as using antimicrobial prophylaxis or avoiding unnecessary traumatic tissue dissection. Optimum application of SSI prevention measures requires that a variety of patient and operation characteristics be carefully considered.
# Patient Characteristics
In certain kinds of operations, patient characteristics possibly associated with an increased risk of an SSI include coincident remote site infections or colonization, diabetes, cigarette smoking, 85, systemic steroid use, 84,87,93 obesity (>20% ideal body weight), extremes of age, 92, poor nutritional status, 85,94,98, and perioperative transfusion of certain blood products.

# a. Diabetes

The contribution of diabetes to SSI risk is controversial, 98,110 because the independent contribution of diabetes to SSI risk has not typically been assessed after controlling for potential confounding factors. Recent preliminary findings from a study of patients who underwent coronary artery bypass graft showed a significant relationship between increasing levels of HgA1c and SSI rates. 111 Also, increased glucose levels (>200 mg/dL) in the immediate postoperative period (≤48 hours) were associated with increased SSI risk. 112,113 More studies are needed to assess the efficacy of perioperative blood glucose control as a prevention measure.
# b. Nicotine use

Nicotine use delays primary wound healing and may increase the risk of SSI. 85 In a large prospective study, current cigarette smoking was an independent risk factor for sternal and/or mediastinal SSI following cardiac surgery. 85 Other studies have corroborated cigarette smoking as an important SSI risk factor. The limitation of these studies, however, is that terms like current cigarette smoking and active smokers are not always defined. To appropriately determine the contribution of tobacco use to SSI risk, standardized definitions of smoking history must be adopted and used in studies designed to control for confounding variables.
# c. Steroid use

Patients who are receiving steroids or other immunosuppressive drugs preoperatively may be predisposed to developing SSI, 84,87 but the data supporting this relationship are contradictory. In a study of long-term steroid use in patients with Crohn's disease, SSI developed significantly more often in patients receiving preoperative steroids (12.5%) than in patients without steroid use (6.7%). 93 In contrast, other investigations have not found a relationship between steroid use and SSI risk. 98,114,115
# d. Malnutrition
For some types of operations, severe protein-calorie malnutrition is crudely associated with postoperative nosocomial infections, impaired wound healing dynamics, or death. The National Academy of Sciences/National Research Council (NAS/NRC), 94 Study on the Efficacy of Infection Control (SENIC), 125 and NNIS 126 schemes for SSI risk stratification do not explicitly incorporate nutritional status as a predictor variable, although it may be represented indirectly in the latter two. In a widely quoted 1987 study of 404 high-risk general surgery operations, Christou and coworkers derived an SSI probability index in which final predictor variables were patient age, operation duration, serum albumin level, delayed hypersensitivity test score, and intrinsic wound contamination level. 117 Although this index predicted SSI risk satisfactorily for 404 subsequent patients and was generally received as a significant advance in SSI risk stratification, it is not widely used in SSI surveillance data analysis, surgical infection research, or analytic epidemiology.
Theoretical arguments can be made for a belief that severe preoperative malnutrition should increase the risk of both incisional and organ/space SSI. However, an epidemiologic association between incisional SSI and malnutrition is difficult to demonstrate consistently for all surgical subspecialties. 124 Multivariate logistic regression modeling has shown that preoperative protein-calorie malnutrition is not an independent predictor of mediastinitis after cardiac bypass operations. 85,132 In the modern era, total parenteral nutrition (TPN) and total enteral alimentation (TEA) have enthusiastic acceptance by surgeons and critical care specialists. 118 However, the benefits of preoperative nutritional repletion of malnourished patients in reducing SSI risk are unproven. In two randomized clinical trials, preoperative "nutritional therapy" did not reduce incisional and organ/space SSI risk. In a recent study of high-risk pancreatectomy patients with cancer, the provision of TPN preoperatively had no beneficial effect on SSI risk. 142 A randomized prospective trial involving 395 general and thoracic surgery patients compared outcomes for malnourished patients preoperatively receiving either a 7- to 15-day TPN regimen or a regular preoperative hospital diet. All patients were followed for 90 days postoperatively. There was no detectable benefit of TPN administration on the incidence of incisional or organ/space SSI. 143 Administering TPN or TEA may be indicated in a number of circumstances, but such repletion cannot be viewed narrowly as a prevention measure for organ/space or incisional SSI risk. When a major elective operation is necessary in a severely malnourished patient, experienced surgeons often use both pre- and postoperative nutritional support in consideration of the major morbidity associated with numerous potential complications, only one of which is organ/space SSI. 118,124,130,133,137,138 In addition, postoperative nutritional support is important for certain major oncologic operations, 135,136 after many operations on major trauma victims, 134 or in patients suffering a variety of catastrophic surgical complications that preclude eating or that trigger a hypermetabolic state. Randomized clinical trials will be necessary to determine if nutritional support alters SSI risk in specific patient-operation combinations.
# e. Prolonged preoperative hospital stay

Prolonged preoperative hospital stay is frequently suggested as a patient characteristic associated with increased SSI risk. However, length of preoperative stay is likely a surrogate for severity of illness and co-morbid conditions requiring inpatient work-up and/or therapy before the operation. 16,26,65,85,94,100,150,151

# f. Preoperative nares colonization with Staphylococcus aureus

S. aureus is a frequent SSI isolate. This pathogen is carried in the nares of 20% to 30% of healthy humans. 81 It has been known for years that the development of SSI involving S. aureus is definitely associated with preoperative nares carriage of the organism in surgical patients. 81 A recent multivariate analysis demonstrated that such carriage was the most powerful independent risk factor for SSI following cardiothoracic operations. 82 Mupirocin ointment is effective as a topical agent for eradicating S. aureus from the nares of colonized patients or healthcare workers. A recent report by Kluytmans and coworkers suggested that SSI risk was reduced in patients who had cardiothoracic operations when mupirocin was applied preoperatively to their nares, regardless of carrier status. 152 In this study, SSI rates for 752 mupirocin-treated patients were compared with those previously observed for an untreated group of 928 historical control patients, and the significant SSI rate reduction was attributed to the mupirocin treatment. Concerns have been raised regarding the comparability of the two patient groups. 153 Additionally, there is concern that mupirocin resistance may emerge, although this seems unlikely when treatment courses are brief. 81 A prospective, randomized clinical trial will be necessary to establish definitively that eradication of nasal carriage of S. aureus is an effective SSI prevention method in cardiac surgery. Such a trial has recently been completed on 3,909 patients in Iowa. 83 Five types of operations in two facilities were observed. Preliminary analysis showed a significant association between nasal carriage of S. aureus and subsequent SSI development. The effect of mupirocin on reducing SSI risk is yet to be determined.
# g. Perioperative transfusion
It has been reported that perioperative transfusion of leukocyte-containing allogeneic blood components is an apparent risk factor for the development of postoperative bacterial infections, including SSI. 106 In three of five randomized trials conducted in patients undergoing elective colon resection for cancer, the risk of SSI was at least doubled in patients receiving blood transfusions. However, on the basis of detailed epidemiologic reconsiderations, as many as 12 confounding variables may have influenced the reported association, and any effect of transfusion on SSI risk may be either small or nonexistent. 106 Because of methodologic problems, including the timing of transfusion, and use of nonstandardized SSI definitions, interpretation of the available data is limited. A meta-analysis of published trials will probably be required for resolution of the controversy. 154 There is currently no scientific basis for withholding necessary blood products from surgical patients as a means of either incisional or organ/space SSI risk reduction.
# Operative Characteristics: Preoperative Issues

# a. Preoperative antiseptic showering
A preoperative antiseptic shower or bath decreases skin microbial colony counts. In a study of >700 patients who received two preoperative antiseptic showers, chlorhexidine reduced bacterial colony counts ninefold (2.8 × 10^2 to 0.3 × 10^2), while povidone-iodine or triclocarban-medicated soap reduced colony counts by 1.3- and 1.9-fold, respectively. 155 Other studies corroborate these findings. 156,157 Chlorhexidine gluconate-containing products require several applications to attain maximum antimicrobial benefit, so repeated antiseptic showers are usually indicated. 158 Even though preoperative showers reduce the skin's microbial colony counts, they have not definitively been shown to reduce SSI rates.

# b. Preoperative hair removal

Preoperative shaving of the surgical site the night before an operation is associated with a significantly higher SSI risk than either the use of depilatory agents or no hair removal. 16,100 In one study, SSI rates were 5.6% in patients who had hair removed by razor shave compared to a 0.6% rate among those who had hair removed by depilatory or who had no hair removed. 166 The increased SSI risk associated with shaving has been attributed to microscopic cuts in the skin that later serve as foci for bacterial multiplication. Shaving immediately before the operation compared to shaving within 24 hours preoperatively was associated with decreased SSI rates (3.1% vs 7.1%); if shaving was performed >24 hours prior to operation, the SSI rate exceeded 20%. 166 Clipping hair immediately before an operation also has been associated with a lower risk of SSI than shaving or clipping the night before an operation (SSI rates immediately before = 1.8% vs night before = 4.0%). Although the use of depilatories has been associated with a lower SSI risk than shaving or clipping, 166,167 depilatories sometimes produce hypersensitivity reactions. 166 Other studies showed that preoperative hair removal by any means was associated with increased SSI rates and suggested that no hair be removed. 100,174,175

# c. Patient skin preparation in the operating room

Several antiseptic agents are available for preoperative preparation of skin at the incision site (Table 6). The iodophors (e.g., povidone-iodine), alcohol-containing products, and chlorhexidine gluconate are the most commonly used agents. No studies have adequately assessed the comparative effects of these preoperative skin antiseptics on SSI risk in well-controlled, operation-specific studies.
Alcohol is defined by the FDA as having one of the following active ingredients: ethyl alcohol, 60% to 95% by volume in an aqueous solution, or isopropyl alcohol, 50% to 91.3% by volume in an aqueous solution. 12 Alcohol is readily available, inexpensive, and remains the most effective and rapid-acting skin antiseptic. 176 Aqueous 70% to 92% alcohol solutions have germicidal activity against bacteria, fungi, and viruses, but spores can be resistant. 176,177 One potential disadvantage of the use of alcohol in the operating room is its flammability. Both chlorhexidine gluconate and iodophors have broad spectra of antimicrobial activity. 177 In some comparisons of the two antiseptics when used as preoperative hand scrubs, chlorhexidine gluconate achieved greater reductions in skin microflora than did povidone-iodine and also had greater residual activity after a single application. Further, chlorhexidine gluconate is not inactivated by blood or serum proteins. 176,179,185,186 Iodophors may be inactivated by blood or serum proteins, but exert a bacteriostatic effect as long as they are present on the skin. 178,179 Before the skin preparation of a patient is initiated, the skin should be free of gross contamination (i.e., dirt, soil, or any other debris). 187 The patient's skin is prepared by applying an antiseptic in concentric circles, beginning in the area of the proposed incision. The prepared area should be large enough to extend the incision or create new incisions or drain sites, if necessary. 1,177,187 The application of the skin preparation may need to be modified, depending on the condition of the skin (e.g., burns) or location of the incision site (e.g., face).
There are reports of modifications to the procedure for preoperative skin preparation which include: (1) removing or wiping off the skin preparation antiseptic agent after application, (2) using an antiseptic-impregnated adhesive drape, (3) merely painting the skin with an antiseptic in lieu of the skin preparation procedure described above, or (4) using a "clean" versus a "sterile" surgical skin preparation kit. However, none of these modifications has been shown to represent an advantage.
# d. Preoperative hand/forearm antisepsis

Members of the surgical team who have direct contact with the sterile operating field or sterile instruments or supplies used in the field wash their hands and forearms by performing a traditional procedure known as scrubbing (or the surgical scrub) immediately before donning sterile gowns and gloves. Ideally, the optimum antiseptic used for the scrub should have a broad spectrum of activity, be fast-acting, and have a persistent effect. 1,192,193 Antiseptic agents commercially available in the United States for this purpose contain alcohol, chlorhexidine, iodine/iodophors, parachloro-meta-xylenol, or triclosan (Table 6). 176,177,179,194,195

Alcohol is considered the gold standard for surgical hand preparation in several European countries. Alcohol-containing products are used less frequently in the United States than in Europe, possibly because of concerns about flammability and skin irritation. Povidone-iodine and chlorhexidine gluconate are the current agents of choice for most U.S. surgical team members. 177 However, when 7.5% povidone-iodine or 4% chlorhexidine gluconate was compared to alcoholic chlorhexidine (60% isopropanol and 0.5% chlorhexidine gluconate in 70% isopropanol), alcoholic chlorhexidine was found to have greater residual antimicrobial activity. 200,201 No agent is ideal for every situation, and a major factor, aside from the efficacy of any product, is its acceptability by operating room personnel after repeated use. Unfortunately, most studies evaluating surgical scrub antiseptics have focused on measuring hand bacterial colony counts. No clinical trials have evaluated the impact of scrub agent choice on SSI risk. 195

Factors other than the choice of antiseptic agent influence the effectiveness of the surgical scrub. Scrubbing technique, the duration of the scrub, the condition of the hands, or the techniques used for drying and gloving are examples of such factors. Recent studies suggest that scrubbing for at least 2 minutes is as effective as the traditional 10-minute scrub in reducing hand bacterial colony counts, but the optimum duration of scrubbing is not known. The first scrub of the day should include a thorough cleaning underneath fingernails (usually with a brush). 180,194,212 It is not clear that such cleaning is a necessary part of subsequent scrubs during the day. After performing the surgical scrub, hands should be kept up and away from the body (elbows in flexed position) so that water runs from the tips of the fingers toward the elbows. Sterile towels should be used for drying the hands and forearms before the donning of a sterile gown and gloves. 212

A surgical team member who wears artificial nails may have increased bacterial and fungal colonization of the hands despite performing an adequate hand scrub. 212,213 Hand carriage of gram-negative organisms has been shown to be greater among wearers of artificial nails than among non-wearers. 213 An outbreak of Serratia marcescens SSIs in cardiovascular surgery patients was found to be associated with a surgical nurse who wore artificial nails. 214 While the relationship between nail length and SSI risk is unknown, long nails, artificial or natural, may be associated with tears in surgical gloves. 177,180,212 The relationship between the wearing of nail polish or jewelry by surgical team members and SSI risk has not been adequately studied. 194,212
# e. Management of infected or colonized surgical personnel

Surgical personnel who have active infections or are colonized with certain microorganisms have been linked to outbreaks or clusters of SSIs. 33,34,76 Thus, it is important that healthcare organizations implement policies to prevent transmission of microorganisms from personnel to patients. These policies should address management of job-related illnesses, provision of postexposure prophylaxis after job-related exposures and, when necessary, exclusion of ill personnel from work or patient contact. While work exclusion policies should be enforceable and include a statement of authority to exclude ill personnel, they should also be designed to encourage personnel to report their illnesses and exposures and not penalize personnel with loss of wages, benefits, or job status. 238

# f. Antimicrobial prophylaxis

Surgical antimicrobial prophylaxis (AMP) refers to a very brief course of an antimicrobial agent initiated just before an operation begins. AMP is not an attempt to sterilize tissues, but a critically timed adjunct used to reduce the microbial burden of intraoperative contamination to a level that cannot overwhelm host defenses. AMP does not pertain to prevention of SSI caused by postoperative contamination. 265 Intravenous infusion is the mode of AMP delivery used most often in modern surgical practice. 20,26,242 Essentially all confirmed AMP indications pertain to elective operations in which skin incisions are closed in the operating room. Four principles must be followed to maximize the benefits of AMP:
- Use an AMP agent for all operations or classes of operations in which its use has been shown to reduce SSI rates based on evidence from clinical trials or for those operations after which incisional or organ/space SSI would represent a catastrophe. 266,268,269
- Use an AMP agent that is safe, inexpensive, and bactericidal with an in vitro spectrum that covers the most probable intraoperative contaminants for the operation.
- Time the infusion of the initial dose of antimicrobial agent so that a bactericidal concentration of the drug is established in serum and tissues by the time the skin is incised. 285
- Maintain therapeutic levels of the antimicrobial agent in both serum and tissues throughout the operation and until, at most, a few hours after the incision is closed in the operating room. 179,282,284,286

Because clotted blood is present in all surgical wounds, therapeutic serum levels of AMP agents are logically important in addition to therapeutic tissue levels. Fibrin-enmeshed bacteria may be resistant to phagocytosis or to contact with antimicrobial agents that diffuse from the wound space.
Table 4 summarizes typical SSI pathogens according to operation type and cites studies that establish AMP efficacy for these operations. A simple way to organize AMP indications is based on using the surgical wound classification scheme shown in Table 7, which employs descriptive case features to postoperatively grade the degree of intraoperative microbial contamination. A surgeon makes the decision to use AMP by anticipating preoperatively the surgical wound class for a given operation.
AMP is indicated for all operations that entail entry into a hollow viscus under controlled conditions. The most frequent SSI pathogens for such clean-contaminated operations are listed in Table 4. Certain clean-contaminated operations, such as elective colon resection, low anterior resection of the rectum, and abdominoperineal resection of the rectum, also require an additional preoperative protective maneuver called "preparation of the colon," to empty the bowel of its contents and to reduce the levels of live microorganisms. 200,239,256,268,284,287 This maneuver includes the administration of enemas and cathartic agents followed by the oral administration of nonabsorbable antimicrobial agents in divided doses the day before the operation. 200,288,289 AMP is sometimes indicated for operations that entail incisions through normal tissue and in which no viscus is entered and no inflammation or infection is encountered. Two well-recognized AMP indications for such clean operations are: (1) when any intravascular prosthetic material or a prosthetic joint will be inserted, and (2) for any operation in which an incisional or organ/space SSI would pose catastrophic risk. Examples are all cardiac operations, including cardiac pacemaker placement, 290 vascular operations involving prosthetic arterial graft placement at any site or the revascularization of the lower extremity, and most neurosurgical operations (Table 4). Some have advocated use of AMP during all operations on the breast. 80,242,264 By definition, AMP is not indicated for an operation classified in Table 7 as contaminated or dirty. In such operations, patients are frequently receiving therapeutic antimicrobial agents perioperatively for established infections.
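The indication logic just described, graded by the surgical wound classes defined in Table 7 below, can be summarized in a small lookup. The sketch is a deliberate simplification of the text, not a clinical decision tool; the function and parameter names are hypothetical.

```python
def amp_indicated(wound_class: str, prosthetic_implant: bool = False,
                  catastrophic_ssi_risk: bool = False) -> str:
    """Simplified AMP decision keyed on the surgical wound classes (Table 7)."""
    if wound_class == "clean":
        # Two recognized clean-operation indications per the text above.
        if prosthetic_implant or catastrophic_ssi_risk:
            return "AMP indicated (prosthesis or catastrophic-risk operation)"
        return "AMP generally not indicated"
    if wound_class == "clean-contaminated":
        return "AMP indicated (hollow viscus entered under controlled conditions)"
    # Contaminated or dirty: by definition AMP does not apply; such patients
    # frequently receive therapeutic antimicrobial agents perioperatively.
    return "therapeutic antimicrobials, not AMP"

print(amp_indicated("clean", prosthetic_implant=True))
```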
Cephalosporins are the most thoroughly studied AMP agents. 284 These drugs are effective against many gram-positive and gram-negative microorganisms. They also share the features of demonstrated safety, acceptable pharmacokinetics, and a reasonable cost per dose. 242 In particular, cefazolin is widely used and generally viewed as the AMP agent of first choice for clean operations. 266 If a patient is unable to receive a cephalosporin because of penicillin allergy, an alternative for gram-positive bacterial coverage is either clindamycin or vancomycin.
Cefazolin provides adequate coverage for many clean-contaminated operations, 268,291 but AMP for operations on the distal intestinal tract mandates use of an agent such as cefoxitin (or some other second-generation cephalosporin) that provides anaerobic coverage. If a patient cannot safely receive a cephalosporin because of allergy, a reasonable alternative for gram-negative coverage is aztreonam. However, an agent such as clindamycin or metronidazole should also be included to ensure anaerobic coverage.
# TABLE 7 SURGICAL WOUND CLASSIFICATION
Class I/Clean: An uninfected operative wound in which no inflammation is encountered and the respiratory, alimentary, genital, or uninfected urinary tract is not entered. In addition, clean wounds are primarily closed and, if necessary, drained with closed drainage. Operative incisional wounds that follow nonpenetrating (blunt) trauma should be included in this category if they meet the criteria.
Class II/Clean-Contaminated: An operative wound in which the respiratory, alimentary, genital, or urinary tracts are entered under controlled conditions and without unusual contamination. Specifically, operations involving the biliary tract, appendix, vagina, and oropharynx are included in this category, provided no evidence of infection or major break in technique is encountered.
Class III/Contaminated: Open, fresh, accidental wounds. In addition, operations with major breaks in sterile technique (e.g., open cardiac massage) or gross spillage from the gastrointestinal tract, and incisions in which acute, nonpurulent inflammation is encountered are included in this category.
Class IV/Dirty-Infected: Old traumatic wounds with retained devitalized tissue and those that involve existing clinical infection or perforated viscera. This definition suggests that the organisms causing postoperative infection were present in the operative field before the operation.
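For surveillance systems that store these classes electronically, the scheme in Table 7 reduces to a small decision rule. The following Python sketch is illustrative only; the boolean feature names are our assumptions, and actual class assignment remains a clinical judgment made by a surgical team member at the completion of the operation.

```python
def wound_class(tract_entered: bool,
                controlled_entry: bool,
                gross_spillage: bool,
                acute_nonpurulent_inflammation: bool,
                major_break_in_technique: bool,
                fresh_traumatic_wound: bool,
                old_trauma_devitalized_or_infected: bool) -> str:
    """Map case features to the Table 7 surgical wound classes (sketch)."""
    # Class IV: old traumatic wounds with retained devitalized tissue,
    # existing clinical infection, or perforated viscera.
    if old_trauma_devitalized_or_infected:
        return "Class IV/Dirty-Infected"
    # Class III: fresh accidental wounds, major breaks in sterile technique,
    # gross GI spillage, or acute nonpurulent inflammation.
    if (fresh_traumatic_wound or major_break_in_technique
            or gross_spillage or acute_nonpurulent_inflammation):
        return "Class III/Contaminated"
    # Class II: respiratory, alimentary, genital, or urinary tract entered
    # under controlled conditions without unusual contamination.
    if tract_entered and controlled_entry:
        return "Class II/Clean-Contaminated"
    # Class I: uninfected, no inflammation, no tract entered.
    return "Class I/Clean"

# Example: an elective colon resection without spillage or inflammation.
assert wound_class(True, True, False, False, False, False,
                   False) == "Class II/Clean-Contaminated"
```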
The aminoglycosides are seldom recommended as first choices for AMP, either as single drugs or as components of combination regimens. 242,264 References cited in Table 6 provide many details regarding AMP choices and dosages, antimicrobial spectra and properties, and other practical clinical information.
The routine use of vancomycin in AMP is not recommended for any kind of operation. 242,266,283,292 However, vancomycin may be the AMP agent of choice in certain clinical circumstances, such as when a cluster of MRSA mediastinitis or incisional SSI due to methicillin-resistant coagulase-negative staphylococci has been detected. No threshold that can support the decision to use vancomycin in AMP has been scientifically defined. The decision should involve consideration of local frequencies of MRSA isolates, SSI rates for particular operations, review of infection prevention practices for compliance, and consultation between surgeons and infectious disease experts. An effective SSI surveillance program must be operational, with careful and timely culturing of SSI isolates to determine species and AMP agent susceptibilities. 80

Agents most commonly used for AMP (i.e., cephalosporins) exhibit time-dependent bactericidal action. The therapeutic effects of such agents are probably maximized when their levels continuously exceed a threshold value best approximated by the minimal bactericidal concentration value observed for the target pathogens in vitro. When the duration of an operation is expected to exceed the time in which therapeutic levels of the AMP agent can be maintained, an additional dose of the AMP agent should be infused. That time point for cefazolin is estimated as 3 to 4 hours. In general, the timing of a second (or third, etc.) dose of any AMP drug is estimated from three parameters: tissue levels achieved in normal patients by a standard therapeutic dose, the approximate serum half-life of the drug, and awareness of approximate MIC90 values for anticipated SSI pathogens. References in Table 6 should be consulted for these details and important properties of antimicrobial agents used for AMP in various specialties.
Basic "rules of thumb" guide decisions about AMP dose sizes and timing. For example, it is believed that a full therapeutic dose of cefazolin (1-2 g) should be given to adult patients no more than 30 minutes before the skin is incised. 242,285 There are a few exceptions to this basic guide. With respect to dosing, it has been demonstrated that larger doses of AMP agents are necessary to achieve optimum effect in morbidly obese patients. 293 With respect to timing, an exception occurs for patients undergoing cesarean section in whom AMP is indicated: the initial dose is administered immediately after the umbilical cord is clamped. 266,272,273 If vancomycin is used, an infusion period of approximately 1 hour is required for a typical dose. Clearly, the concept of "on-call" infusion of AMP is flawed simply because delays in transport or schedule changes can mean that suboptimal tissue and serum levels may be present when the operation starts. 242,294 Simple protocols of AMP timing and oversight responsibility should be locally designed to be practical and effective.
# Operative characteristics: Intraoperative issues
a. Operating room environment
(1) Ventilation
Operating room air may contain microbial-laden dust, lint, skin squames, or respiratory droplets. The microbial level in operating room air is directly proportional to the number of people moving about in the room. 295 Therefore, efforts should be made to minimize personnel traffic during operations. Outbreaks of SSIs caused by group A beta-hemolytic streptococci have been traced to airborne transmission of the organism from colonized operating room personnel to patients. 233,237,296,297 In these outbreaks, the strain causing the outbreak was recovered from the air in the operating room. 237,296 It has been demonstrated that exercising and changing of clothing can lead to airborne dissemination of group A streptococci from vaginal or rectal carriage. 233,234,237,297

Operating rooms should be maintained at positive pressure with respect to corridors and adjacent areas. 298 Positive pressure prevents airflow from less clean areas into more clean areas. All ventilation or air conditioning systems in hospitals, including those in operating rooms, should have two filter beds in series, with the efficiency of the first filter bed being ≥30% and that of the second filter bed being ≥90%. 299 Conventional operating room ventilation systems produce a minimum of about 15 air changes of filtered air per hour, three (20%) of which must be fresh air. 299,300 Air should be introduced at the ceiling and exhausted near the floor. 300,301 Detailed ventilation parameters for operating rooms have been published by the American Institute of Architects in collaboration with the U.S. Department of Health and Human Services (Table 8). 299

Laminar airflow and use of UV radiation have been suggested as additional measures to reduce SSI risk for certain operations. Laminar airflow is designed to move particle-free air (called "ultraclean air") over the aseptic operating field at a uniform velocity (0.3 to 0.5 m/sec), sweeping away particles in its path. Laminar airflow can be directed vertically or horizontally, and recirculated air is usually passed through a high efficiency particulate air (HEPA) filter. 302,303 HEPA filters remove particles ≥0.3 µm in diameter with an efficiency of 99.97%. 64,300,302,304 Most of the studies examining the efficacy of ultraclean air involve only orthopedic operations. 298 Charnley and Eftekhar studied vertical laminar airflow systems and exhaust-ventilated clothing and found that their use decreased the SSI rate from 9% to 1%. 305 However, other variables (i.e., surgeon experience and surgical technique) changed at the same time as the type of ventilation, which may have confounded the associations. In a multicenter study examining 8,000 total hip and knee replacements, Lidwell et al. compared the effects of ultraclean air alone, antimicrobial prophylaxis alone, and ultraclean air in combination with antimicrobial prophylaxis on the rate of deep SSIs. 307 The SSI rate following operations in which ultraclean air alone was used decreased from 3.4% to 1.6%, whereas the rate for those who received only antimicrobial prophylaxis decreased from 3.4% to 0.8%. When both interventions were used in combination, the SSI rate decreased from 3.4% to 0.7%. These findings suggest that both ultraclean air and antimicrobial prophylaxis can reduce the incidence of SSI following orthopedic implant operations, but that antimicrobial prophylaxis is more beneficial than ultraclean air. Intraoperative UV radiation has not been shown to decrease overall SSI risk. 94,312
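The air-change requirement above is simple arithmetic on supply airflow and room volume. A brief illustration follows; the room dimensions and airflow figure are assumed values chosen for the example, not part of the guideline.

```python
def air_changes_per_hour(supply_cfm: float, room_volume_ft3: float) -> float:
    """Air changes per hour from supply airflow (ft^3/min) and room volume."""
    return supply_cfm * 60.0 / room_volume_ft3

# An assumed 20 x 20 x 10 ft operating room (4,000 ft^3) supplied with
# 1,000 cfm of filtered air yields 15 air changes per hour, the conventional
# minimum cited above; at least 3 of those changes (20%) should be fresh air.
assert air_changes_per_hour(1000, 20 * 20 * 10) == 15.0
```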
(2) Environmental surfaces

Environmental surfaces in U.S. operating rooms (e.g., tables, floors, walls, ceilings, lights) are rarely implicated as the sources of pathogens important in the development of SSIs. Nevertheless, it is important to perform routine cleaning of these surfaces to reestablish a clean environment after each operation. 180,212,300,302 There are no data to support routine disinfecting of environmental surfaces or equipment between operations in the absence of contamination or visible soiling. When visible soiling of surfaces or equipment occurs during an operation, an Environmental Protection Agency (EPA)-approved hospital disinfectant should be used to decontaminate the affected areas before the next operation. 180,212 This is in keeping with the Occupational Safety and Health Administration (OSHA) requirement that all equipment and environmental surfaces be cleaned and decontaminated after contact with blood or other potentially infectious materials. 315 Wet-vacuuming of the floor with an EPA-approved hospital disinfectant is performed routinely after the last operation of the day or night. Care should be taken to ensure that medical equipment left in the operating room be covered so that solutions used during cleaning and disinfecting do not contact sterile devices or equipment. 316 There are no data to support special cleaning procedures or closing of an operating room after a contaminated or dirty operation has been performed. 300,301 Tacky mats placed outside the entrance to an operating room/suite have not been shown to reduce the number of organisms on shoes or stretcher wheels, nor do they reduce the risk of SSI. 1,179,295,301

(3) Microbiologic sampling

Because there are no standardized parameters by which to compare microbial levels obtained from cultures of ambient air or environmental surfaces in the operating room, routine microbiologic sampling cannot be justified. Such environmental sampling should only be performed as part of an epidemiologic investigation.
(4) Conventional sterilization of surgical instruments

Inadequate sterilization of surgical instruments has resulted in SSI outbreaks. 302,317,318 Surgical instruments can be sterilized by steam under pressure, dry heat, ethylene oxide, or other approved methods. The importance of routinely monitoring the quality of sterilization procedures has been established. 1,180,212,299 Microbial monitoring of steam autoclave performance is necessary and can be accomplished by use of a biological indicator. 212,314,319 Detailed recommendations for sterilization of surgical instruments have been published. 212,314,320,321

(5) Flash sterilization of surgical instruments

The Association for the Advancement of Medical Instrumentation defines flash sterilization as "the process designated for the steam sterilization of patient care items for immediate use." 321 During any operation, the need for emergency sterilization of equipment may arise (e.g., to reprocess an inadvertently dropped instrument). However, flash sterilization is not intended to be used either for reasons of convenience or as an alternative to purchasing additional instrument sets or to save time. Also, flash sterilization is not recommended for implantable devices(*) because of the potential for serious infections. 314,320,321 Flash sterilization is not recommended as a routine sterilization method because of the lack of timely biologic indicators to monitor performance, absence of protective packaging following sterilization, possibility for contamination of processed items during transportation to operating rooms, and use of minimal sterilization cycle parameters (i.e., time, temperature, pressure). 319 To address some of these concerns, many hospitals have placed equipment for flash sterilization in close proximity to operating rooms, and new biologic indicators that provide results in 1 to 3 hours are now available for flash-sterilized items. Nevertheless, flash sterilization should be restricted to its intended purpose until studies are performed that demonstrate comparability with conventional sterilization methods regarding risk of SSI. Sterilization cycle parameters for flash sterilization are shown in Table 9.

(*) According to the FDA, an implantable device is a "device that is placed into a surgically or naturally formed cavity of the human body if it is intended to remain there for a period of 30 days or more." 321
# b. Surgical attire and drapes
In this section the term surgical attire refers to scrub suits, caps/hoods, shoe covers, masks, gloves, and gowns. Although experimental data show that live microorganisms are shed from hair, exposed skin, and mucous membranes of operating room personnel, 75,181 few controlled clinical studies have evaluated the relationship between the use of surgical attire and SSI risk. Nevertheless, the use of barriers seems prudent to minimize a patient's exposure to the skin, mucous membranes, or hair of surgical team members, as well as to protect surgical team members from exposure to blood and bloodborne pathogens (e.g., human immunodeficiency virus and hepatitis viruses).
(1) Scrub suits

Surgical team members often wear a uniform called a "scrub suit" that consists of pants and a shirt. Policies for laundering, wearing, covering, and changing scrub suits vary greatly. Some policies restrict the laundering of scrub suits to the facility, while other facilities have policies that allow laundering by employees. There are no well-controlled studies evaluating scrub suit laundering as an SSI risk factor. 331 Some facilities have policies that restrict the wearing of scrub suits to the operating suite, while other facilities allow the wearing of cover gowns over scrub suits when personnel leave the suite. The Association of Operating Room Nurses recommends that scrub suits be changed after they become visibly soiled and that they be laundered only in an approved and monitored laundry facility. 212 Additionally, OSHA regulations require that "if a garment(s) is penetrated by blood or other potentially infectious materials, the garment(s) shall be removed immediately or as soon as feasible." 315
(2) Masks
The wearing of surgical masks during operations to prevent potential microbial contamination of incisions is a longstanding surgical tradition. However, some studies have raised questions about the efficacy and cost-benefit of surgical masks in reducing SSI risk. 328 Nevertheless, wearing a mask can be beneficial since it protects the wearer's nose and mouth from inadvertent exposures (i.e., splashes) to blood and other body fluids. OSHA regulations require that masks in combination with protective eyewear, such as goggles or glasses with solid side shields, or chin-length face shields be worn whenever splashes, spray, spatter, or droplets of blood or other potentially infectious material may be generated and eye, nose, or mouth contamination can be reasonably anticipated. 315 In addition, a respirator certified by the National Institute for Occupational Safety and Health with protection factor N95 or higher is required when the patient has or is suspected of having infectious tuberculosis. 339

(3) Surgical caps/hoods and shoe covers

Surgical caps/hoods are inexpensive and reduce contamination of the surgical field by organisms shed from the hair and scalp. SSI outbreaks have occasionally been traced to organisms isolated from the hair or scalp (S. aureus and group A Streptococcus), 75,76 even when caps were worn by personnel during the operation and in the operating suites.
The use of shoe covers has never been shown to decrease SSI risk or to decrease bacteria counts on the operating room floor. 340,341 Shoe covers may, however, protect surgical team members from exposure to blood and other body fluids during an operation. OSHA regulations require that surgical caps or hoods and shoe covers or boots be worn in situations when gross contamination can reasonably be anticipated (e.g., orthopedic operations, penetrating trauma cases). 315

(4) Sterile gloves

Sterile gloves are put on after donning sterile gowns. A strong theoretical rationale supports the wearing of sterile gloves by all scrubbed members of the surgical team. Sterile gloves are worn to minimize transmission of microorganisms from the hands of team members to patients and to prevent contamination of team members' hands with patients' blood and body fluids. If the integrity of a glove is compromised (e.g., punctured), it should be changed as promptly as safety permits. 315,342,343 Wearing two pairs of gloves (double-gloving) has been shown to reduce hand contact with patients' blood and body fluids when compared to wearing only a single pair. 344,345

(5) Gowns and drapes

Sterile surgical gowns and drapes are used to create a barrier between the surgical field and potential sources of bacteria. Gowns are worn by all scrubbed surgical team members, and drapes are placed over the patient. There are limited data that can be used to understand the relationship of gown or drape characteristics to SSI risk, and the wide variation in products and study designs makes interpretation of the literature difficult. 329 Gowns and drapes are classified as disposable (single use) or reusable (multiple use). Regardless of the material used to manufacture gowns and drapes, these items should be impermeable to liquids and viruses. 351,352 In general, only gowns reinforced with films, coatings, or membranes appear to meet standards developed by the American Society for Testing and Materials. However, such "liquid-proof" gowns may be uncomfortable because they also inhibit heat loss and the evaporation of sweat from the wearer's body. These factors should be considered when selecting gowns. 353,354 A discussion of the role of gowns and drapes in preventing the transmission of bloodborne pathogens is beyond the scope of this document. 355
# c. Asepsis and surgical technique
(1) Asepsis
Rigorous adherence to the principles of asepsis by all scrubbed personnel is the foundation of surgical site infection prevention. Others who work in close proximity to the sterile surgical field, such as anesthesia personnel who are separated from the field only by a drape barrier, also must abide by these principles. SSIs have occurred in which anesthesia personnel were implicated as the source of the pathogen. 34,231,234 Anesthesiologists and nurse anesthetists perform a variety of invasive procedures, such as placement of intravascular devices and endotracheal tubes and administration of intravenous drugs and solutions. Lack of adherence to the principles of asepsis during such procedures, 359 including use of common syringes, 360,361 contaminated infusion pumps, 359 and the assembly of equipment and solutions in advance of procedures, 316,360 has been associated with outbreaks of postoperative infections, including SSI. Recommendations for infection control practices in anesthesiology have been published. 212

(2) Surgical technique

Excellent surgical technique is widely believed to reduce the risk of SSI. 26,49,179,180,368,369 Such techniques include maintaining effective hemostasis while preserving adequate blood supply, preventing hypothermia, gently handling tissues, avoiding inadvertent entries into a hollow viscus, removing devitalized (e.g., necrotic or charred) tissues, using drains and suture material appropriately, eradicating dead space, and appropriately managing the postoperative incision.
Any foreign body, including suture material, a prosthesis, or drain, may promote inflammation at the surgical site 94 and may increase the probability of SSI after otherwise benign levels of tissue contamination. Extensive research compares different types of suture material and their presumed relationships to SSI risk. In general, monofilament sutures appear to have the lowest infection-promoting effects. 3,94,179,180

A discussion of appropriate surgical drain use and details of drain placement exceed the scope of this document, but general points should be briefly noted. Drains placed through an operative incision increase incisional SSI risk. 380 Many authorities suggest placing drains through a separate incision distant from the operative incision. 283,381 It appears that SSI risk also decreases when closed suction drains are used rather than open drains. 174 Closed suction drains can effectively evacuate postoperative hematomas or seromas, but timing of drain removal is important: bacterial colonization of initially sterile drain tracts increases with the duration of time the drain is left in place. 382

Hypothermia in surgical patients, defined as a core body temperature below 36°C, may result from general anesthesia, exposure to cold, or intentional cooling such as is done to protect the myocardium and central nervous system during cardiac operations. 302,383,384 In one study of patients undergoing colorectal operations, hypothermia was associated with an increased SSI risk. 385 Mild hypothermia appears to increase incisional SSI risk by causing vasoconstriction, decreased delivery of oxygen to the wound space, and subsequent impairment of function of phagocytic leukocytes (i.e., neutrophils). In animal models, supplemental oxygen administration has been shown to reverse the dysfunction of phagocytes in fresh incisions. 391 In recent human experiments, controlled local heating of incisions with an electrically powered bandage has been shown to improve tissue oxygenation. 392 Randomized clinical trials are needed to establish that measures which improve wound space oxygenation can reduce SSI risk.
# Operative Characteristics: Postoperative Issues
a. Incision care
The type of postoperative incision care is determined by whether the incision is closed primarily (i.e., the skin edges are re-approximated at the end of the operation), left open to be closed later, or left open to heal by second intention. When a surgical incision is closed primarily, as most are, the incision is usually covered with a sterile dressing for 24 to 48 hours. 393,394 Beyond 48 hours, it is unclear whether an incision must be covered by a dressing or whether showering or bathing is detrimental to healing. When a surgical incision is left open at the skin level for a few days before it is closed (delayed primary closure), a surgeon has determined that it is likely to be contaminated or that the patient's condition prevents primary closure (e.g., edema at the site). When such is the case, the incision is packed with a sterile dressing. When a surgical incision is left open to heal by second intention, it is also packed with sterile moist gauze and covered with a sterile dressing. The American College of Surgeons, CDC, and others have recommended using sterile gloves and equipment (sterile technique) when changing dressings on any type of surgical incision. 180

b. Discharge planning

In current practice, many patients are discharged very soon after their operation, before surgical incisions have fully healed. 398 The lack of optimum protocols for home incision care dictates that much of what is done at home by the patient, family, or home care agency practitioners must be individualized. The intent of discharge planning is to maintain integrity of the healing incision, educate the patient about the signs and symptoms of infection, and advise the patient about whom to contact to report any problems.
# F. SSI SURVEILLANCE
Surveillance of SSI with feedback of appropriate data to surgeons has been shown to be an important component of strategies to reduce SSI risk. 16,399,400 A successful surveillance program includes the use of epidemiologically sound infection definitions (Tables 1 and 2) and effective surveillance methods, stratification of SSI rates according to risk factors associated with SSI development, and data feedback. 25
# SSI Risk Stratification
a. Concepts
Three categories of variables have proven to be reliable predictors of SSI risk: (1) those that estimate the intrinsic degree of microbial contamination of the surgical site, (2) those that measure the duration of an operation, and (3) those that serve as markers for host susceptibility. 25 A widely accepted scheme for classifying the degree of intrinsic microbial contamination of a surgical site was developed by the 1964 NAS/NRC Cooperative Research Study and modified in 1982 by CDC for use in SSI surveillance (Table 7). 2,94 In this scheme, a member of the surgical team classifies the patient's wound at the completion of the operation. Because of its ease of use and wide availability, the surgical wound classification has been used to predict SSI risk. 16,94,126 Some researchers have suggested that surgeons compare their clean wound SSI rates with those of other surgeons. 16,399 However, two CDC efforts, the SENIC Project and the NNIS system, incorporated other predictor variables into SSI risk indices. These showed that even within the category of clean wounds, the SSI risk varied by risk category from 1.1% to 15.8% (SENIC) and from 1.0% to 5.4% (NNIS). 125,126 In addition, an incision is sometimes incorrectly classified by a surgical team member, or not classified at all, calling into question the reliability of the classification. Therefore, reporting SSI rates stratified by wound class alone is not recommended.
Data on 10 variables collected in the SENIC Project were analyzed by using logistic regression modeling to develop a simple additive SSI risk index. 125 Four of these were found to be independently associated with SSI risk: (1) an abdominal operation, (2) an operation lasting >2 hours, (3) a surgical site with a wound classification of either contaminated or dirty/infected, and (4) an operation performed on a patient having ≥3 discharge diagnoses. Each of these equally weighted factors contributes a point when present, such that the risk index values range from 0 to 4. By using these factors, the SENIC index predicted SSI risk twice as well as the traditional wound classification scheme alone.
The NNIS risk index is operation-specific and applied to prospectively collected surveillance data. The index values range from 0 to 3 points and are defined by three independent and equally weighted variables. One point is scored for each of the following when present: (1) American Society of Anesthesiologists (ASA) Physical Status Classification of >2 (Table 10), (2) either contaminated or dirty/infected wound classification (Table 7), and (3) length of operation >T hours, where T is the approximate 75th percentile of the duration of the specific operation being performed. 126 The ASA class replaced discharge diagnoses of the SENIC risk index as a surrogate for the patient's underlying severity of illness (host susceptibility) 406,407 and has the advantage of being readily available in the chart during the patient's hospital stay. Unlike SENIC's constant 2-hour cut-point for duration of operation, the operation-specific cut-points used in the NNIS risk index increase its discriminatory power compared to the SENIC index. 126

b. Issues

Adjustment for variables known to confound rate estimates is critical if valid comparisons of SSI rates are to be made between surgeons or hospitals. 408 Risk stratification, as described above, has proven useful for this purpose, but relies on the ability of surveillance personnel to find and record data consistently and correctly. For the three variables used in the NNIS risk index, only one study has focused on how accurately any of them are recorded. Cardo et al. found that surgical team members' accuracy in assessing wound classification for general and trauma surgery was 88% (95% CI: 82%-94%). 409 However, there are sufficient ambiguities in the wound class definitions themselves to warrant concern about the reproducibility of Cardo's results. The accuracy of recording the duration of operation (i.e., time from skin incision to skin closure) and the ASA class has not been studied. In an unpublished report from the NNIS system, there was evidence that overreporting of high ASA class existed in some hospitals. Further validation of the reliability of the recorded risk index variables is needed.
Additionally, the NNIS risk index does not adequately discriminate the SSI risk for all types of operations. 27,410 It seems likely that a combination of risk factors specific to patients undergoing an operation will be more predictive. A few studies have been performed to develop procedure-specific risk indices, 218 and research in this area continues within CDC's NNIS system.

# TABLE 10 ASA PHYSICAL STATUS CLASSIFICATION*

1 Normally healthy patient
2 Patient with mild systemic disease
3 Patient with severe systemic disease that is not incapacitating
4 Patient with an incapacitating systemic disease that is a constant threat to life
5 Moribund patient who is not expected to survive for 24 hours with or without operation

* Reference 406. Note: The above is the version of the ASA Physical Status Classification System that was current at the time of development of, and still is used in, the NNIS Risk Index. The American Society of Anesthesiologists has since revised its classification system; the most recent version is available from the ASA.
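Both indices described under "Concepts" above are simple additive scores, and restating them in code makes the scoring explicit. A minimal Python sketch follows; the parameter names are ours, not NNIS data field names.

```python
def senic_index(abdominal_operation: bool, duration_hours: float,
                contaminated_or_dirty: bool, discharge_diagnoses: int) -> int:
    """SENIC additive SSI risk index (0-4): one point per factor present."""
    return (int(abdominal_operation)
            + int(duration_hours > 2)
            + int(contaminated_or_dirty)
            + int(discharge_diagnoses >= 3))

def nnis_index(asa_class: int, contaminated_or_dirty: bool,
               duration_hours: float, t_hours: float) -> int:
    """NNIS operation-specific SSI risk index (0-3).

    t_hours is T, the approximate 75th percentile of the duration of
    the specific operation being performed.
    """
    return (int(asa_class > 2)
            + int(contaminated_or_dirty)
            + int(duration_hours > t_hours))

# Example: ASA class 3, clean wound, 4.5-hour operation with T = 3 hours
# scores 2 points on the NNIS index.
assert nnis_index(3, False, 4.5, 3.0) == 2
```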
# SSI Surveillance Methods
SSI surveillance methods used in both the SENIC Project and the NNIS system were designed for monitoring inpatients at acute-care hospitals. Over the past decade, the shift from inpatient to outpatient surgical care (also called ambulatory or day surgery) has been dramatic. It has been estimated that 75% of all operations in the United States will be performed in outpatient settings by the year 2000. 4 While it may be appropriate to use common definitions of SSI for inpatients and outpatients, 415 the types of operations monitored, the risk factors assessed, and the case-finding methods used may differ. New predictor variables may emerge from analyses of SSIs among outpatient surgery patients, which may lead to different ways of estimating SSI risk in this population.
The choice of which operations to monitor should be made jointly by surgeons and infection control personnel. Most hospitals do not have the resources to monitor all surgical patients all the time, nor is it likely that the same intensity of surveillance is necessary for certain low-risk procedures. Instead, hospitals should target surveillance efforts toward high-risk procedures. 416

a. Inpatient SSI surveillance

Two methods, alone or together, have been used to identify inpatients with SSIs: (1) direct observation of the surgical site by the surgeon, a trained nurse surveyor, or infection control personnel 16,97,399,402,409 and (2) indirect detection by infection control personnel through review of laboratory reports, patient records, and discussions with primary care providers. 15,84,399,402,404,409,418 The surgical literature suggests that direct observation of surgical sites is the most accurate method to detect SSIs, although sensitivity data are lacking. 16,399,402,417,418 Much of the SSI data reported in the infection control literature has been generated by indirect case-finding methods, 125,126,422,425,426 but some studies of direct methods also have been conducted. 97,409 Some studies use both methods of detection. 84,409,424,427,431 A study that focused solely on the sensitivity and specificity of SSIs detected by indirect methods found a sensitivity of 83.8% (95% CI: 75.7%-91.9%) and a specificity of 99.8% (95% CI: 99%-100%). 409 Another study showed that chart review triggered by a computer-generated report of antibiotic orders for post-cesarean section patients had a sensitivity of 89% for detecting endometritis. 432

Indirect SSI detection can readily be performed by infection control personnel during surveillance rounds. The work includes gathering demographic, infection, surgical, and laboratory data on patients who have undergone operations of interest. 433 These data can be obtained from patients' medical records, including microbiology, histopathology, laboratory, and pharmacy data; radiology reports; and records from the operating room. Additionally, inpatient admissions, emergency room, and clinic visit records are sources of data for those postdischarge surgical patients who are readmitted or seek follow-up care.
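The sensitivity and specificity figures cited above determine, together with the local SSI rate, how believable an indirectly detected case is. A short worked computation follows; the 3% prevalence is an assumed figure chosen for illustration, not a value from the studies.

```python
def predictive_values(sensitivity: float, specificity: float,
                      prevalence: float) -> tuple:
    """Positive and negative predictive values of a case-finding method."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    false_neg = (1.0 - sensitivity) * prevalence
    true_neg = specificity * (1.0 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Indirect case-finding (sensitivity 83.8%, specificity 99.8%) applied where
# an assumed 3% of operations result in SSI gives a PPV of roughly 0.93 and
# an NPV of roughly 0.995.
ppv, npv = predictive_values(0.838, 0.998, 0.03)
```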
The optimum frequency of SSI case-finding by either method is unknown and varies from daily to ≤3 times per week, continuing until the patient is discharged from the hospital. Because duration of hospitalization is often very short, postdischarge SSI surveillance has become increasingly important to obtain accurate SSI rates (refer to "Postdischarge SSI Surveillance" section).
To calculate meaningful SSI rates, data must be collected on all patients undergoing the operations of interest (i.e., the population at risk). Because one of its purposes is to develop strategies for risk stratification, the NNIS system collects the following data on all surgical patients surveyed: operation date; NNIS operative procedure category; 434 surgeon identifier; patient identifier; age and sex; duration of operation; wound class; use of general anesthesia; ASA class; emergency; trauma; multiple procedures; endoscopic approach; and discharge date. 433 With the exception of discharge date, these data can be obtained manually from operating room logs or be electronically downloaded into surveillance software, thereby substantially reducing manual transcription and data entry errors. 433 Depending on the needs for risk-stratified SSI rates by personnel in infection control, surgery, and quality assurance, not all data elements may be pertinent for every type of operation. At minimum, however, variables found to be predictive of increased SSI risk should be collected (refer to "SSI Risk Stratification" section).
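For sites that do download these variables into software, computing risk-stratified rates is straightforward. The sketch below uses a simplified record layout of our own devising; it is not the NNIS file format.

```python
from collections import defaultdict

def stratified_ssi_rates(records):
    """SSI rate per 100 operations by (procedure category, risk index).

    `records` is an iterable of (procedure_category, risk_index, had_ssi)
    tuples; the layout is a simplified stand-in for the denominator data
    described above.
    """
    tallies = defaultdict(lambda: [0, 0])        # stratum -> [ssi_count, ops]
    for procedure, risk_index, had_ssi in records:
        stratum = (procedure, risk_index)
        tallies[stratum][0] += int(had_ssi)
        tallies[stratum][1] += 1
    return {stratum: 100.0 * ssi / ops
            for stratum, (ssi, ops) in tallies.items()}

# Example: 2 herniorrhaphies at risk index 1, one with an SSI, yields a
# rate of 50.0 SSIs per 100 operations in that stratum.
rates = stratified_ssi_rates([("HER", 1, True), ("HER", 1, False)])
assert rates[("HER", 1)] == 50.0
```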
b. Postdischarge SSI surveillance

Between 12% and 84% of SSIs are detected after patients are discharged from the hospital. 98,337,402,428 At least two studies have shown that most SSIs become evident within 21 days after operation. 446,447 Since the length of postoperative hospitalization continues to decrease, many SSIs may not be detected for several weeks after discharge and may not require readmission to the operating hospital. Dependence solely on inpatient case-finding will result in underestimates of SSI rates for some operations (e.g., coronary artery bypass graft) (CDC/NNIS system, unpublished data, 1998). Any comparison of SSI rates must take into account whether case-finding included SSIs detected after discharge. For comparisons to be valid, even in the same institution over time, the postdischarge surveillance methods must be the same.
Postdischarge surveillance methods have been used with varying degrees of success for different procedures and among hospitals and include (1) direct examination of patients' wounds during follow-up visits to either surgery clinics or physicians' offices, 150,399,402,404,430,436,440,441,447,452,455 (2) review of medical records of surgery clinic patients, 404,430,439 (3) patient surveys by mail or telephone, 435,437,438,441,442,444,445,448,449 or (4) surgeon surveys by mail or telephone. 98,428,430,443,444,446,448,450,451,455 One study found that patients have difficulty assessing their own wounds for infection (52% specificity, 26% positive predictive value), 458 suggesting that data obtained by patient questionnaire may inaccurately represent actual SSI rates.
Recently, Sands et al. performed a computerized search of three databases to determine which best identified SSIs: ambulatory encounter records for diagnostic, testing, and treatment codes; pharmacy records for specific antimicrobial prescriptions; and administrative records for rehospitalizations and emergency room visits. 446 This study found that pharmacy records indicating a patient had received antimicrobial agents commonly used to treat soft tissue infections had the highest sensitivity (50%) and positive predictive value (19%), although even this approach alone was not very effective.
As integrated health information systems expand, tracking surgical patients through the entire course of care may become more feasible, practical, and effective. At this time, no consensus exists on which postdischarge surveillance methods are the most sensitive, specific, and practical. Methods chosen will necessarily reflect the hospital's unique mix of operations, personnel resources, and data needs.
c. Outpatient SSI surveillance

Both direct and indirect methods have been used to detect SSIs that complicate outpatient operations. One 8-year study of operations for hernia and varicose veins used home visits by district health nurses, combined with a survey completed by the surgeon at the patient's 2-week postoperative clinic visit, to identify SSIs. 459 While ascertainment was essentially 100%, this method is impractical for widespread implementation. High response rates have been obtained from questionnaires mailed to surgeons (72% to >90%). 443,444,446,455 Response rates from telephone questionnaires administered to patients were more variable (38%, 444 81%, 457 and 85% 455), and response rates from questionnaires mailed to patients were quite low (15% 455 and 33% 446). At this time, no single detection method can be recommended. Available resources and data needs determine which method(s) should be used and which operations should be monitored. Regardless of which detection method is used, it is recommended that the CDC NNIS definitions of SSI (Tables 1 and 2) be used without modification in the outpatient setting.
# G. GUIDELINE EVALUATION PROCESS
The value of the HICPAC guidelines is determined by those who use them. To help assess that value, HICPAC is developing an evaluation tool to learn how guidelines meet user expectations, and how and when these guidelines are disseminated and implemented.
# II. RECOMMENDATIONS FOR PREVENTION OF SURGICAL SITE INFECTION
# A. RATIONALE
The Guideline for Prevention of Surgical Site Infection, 1999, provides recommendations concerning reduction of surgical site infection risk. Each recommendation is categorized on the basis of existing scientific data, theoretical rationale, and applicability. However, the previous CDC system for categorizing recommendations has been modified slightly.
Category I recommendations, including IA and IB, are those recommendations that are viewed as effective by HICPAC and experts in the fields of surgery, infectious diseases, and infection control. Both Category IA and IB recommendations are applicable for, and should be adopted by, all healthcare facilities; IA and IB recommendations differ only in the strength of the supporting scientific evidence.
Category II recommendations are supported by less scientific data than Category I recommendations; such recommendations may be appropriate for addressing specific nosocomial problems or specific patient populations.
No recommendation is offered for some practices, either because there is a lack of consensus regarding their efficacy or because the available scientific evidence is insufficient to support their adoption. For such unresolved issues, practitioners should use judgment to determine a policy regarding these practices within their organization. Recommendations that are based on federal regulation are denoted with an asterisk.
# B. RANKINGS
Category IA. Strongly recommended for implementation and supported by well-designed experimental, clinical, or epidemiological studies.

Category IB. Strongly recommended for implementation and supported by some experimental, clinical, or epidemiological studies and a strong theoretical rationale.

Category II. Suggested for implementation and supported by suggestive clinical or epidemiological studies or theoretical rationale.
No recommendation; unresolved issue. Practices for which insufficient evidence or no consensus regarding efficacy exists.
Practices required by federal regulation are denoted with an asterisk (*).

2. Perform a preoperative surgical scrub for at least 2 to 5 minutes using an appropriate antiseptic (Table 6). Scrub the hands and forearms up to the elbows. Category IB
3. After performing the surgical scrub, keep hands up and away from the body (elbows in flexed position) so that water runs from the tips of the fingers toward the elbows. Dry hands with a sterile towel and don a sterile gown and gloves. Category IB
4. Clean underneath each fingernail prior to performing the first surgical scrub of the day. Category II
5. Do not wear hand or arm jewelry. Category II
6. No recommendation on wearing nail polish. Unresolved issue

c. Management of infected or colonized surgical personnel
1. Educate and encourage surgical personnel who have signs and symptoms of a transmissible infectious illness to report conditions promptly to their supervisory and occupational health service personnel. Category IB
2. Develop well-defined policies concerning patient-care responsibilities when personnel have potentially transmissible infectious conditions. These policies should govern (a) personnel responsibility in using the health service and reporting illness, (b) work restrictions, and (c) clearance to resume work after an illness that required work restriction. The policies also should identify persons who have the authority to remove personnel from duty. Category IB
3. Obtain appropriate cultures from, and exclude from duty, surgical personnel who have draining skin lesions until infection has been ruled out or personnel have received adequate therapy and infection has resolved. Category IB
4. Do not routinely exclude surgical personnel who are colonized with organisms such as S. aureus (nose, hands, or other body site) or group A Streptococcus, unless such personnel have been linked epidemiologically to dissemination of the organism in the healthcare setting. Category IB

d. Antimicrobial prophylaxis
1. Administer a prophylactic antimicrobial agent only when indicated, and select it based on its efficacy against the most common pathogens causing SSI for a specific operation (Table 4) and published recommendations. 266,268,269 Category IA
2. Administer by the intravenous route the initial dose of prophylactic antimicrobial agent, timed such that a bactericidal concentration of the drug is established in serum and tissues when the incision is made. Maintain therapeutic levels of the agent in serum and tissues throughout the operation and until, at most, a few hours after the incision is closed in the operating room. Category IA
3. Before elective colorectal operations, in addition to d2 above, mechanically prepare the colon by use of enemas and cathartic agents. Administer nonabsorbable oral antimicrobial agents in divided doses on the day before the operation. Category IA
4. For high-risk cesarean section, administer the prophylactic antimicrobial agent immediately after the umbilical cord is clamped. Category IA
5. Do not routinely use vancomycin for antimicrobial prophylaxis. Category IB

Sterilization of surgical instruments
1. Sterilize all surgical instruments according to published guidelines. 212,299,314,321 Category IB
2. Perform flash sterilization only for patient care items that will be used immediately (e.g., to reprocess an inadvertently dropped instrument). Do not use flash sterilization for reasons of convenience, as an alternative to purchasing additional instrument sets, or to save time. Category IB

e. Surgical attire and drapes
1. Wear a surgical mask that fully covers the mouth and nose when entering the operating room if an operation is about to begin or already under way, or if sterile instruments are exposed. Wear the mask throughout the operation. Category IB*
2. Wear a cap or hood to fully cover hair on the head and face when entering the operating room. Category IB*
3. Do not wear shoe covers for the prevention of SSI. Category IB*
4. Wear sterile gloves if a scrubbed surgical team member. Put on gloves after donning a sterile gown. Category IB*
5. Use surgical gowns and drapes that are effective barriers when wet (i.e., materials that resist liquid penetration). Category IB
6. Change scrub suits that are visibly soiled, contaminated, and/or penetrated by blood or other potentially infectious materials. Category IB*
7. No recommendations on how or where to launder scrub suits, on restricting use of scrub suits to the operating suite, or for covering scrub suits when out of the operating suite. Unresolved issue

f. Asepsis and surgical technique
1. Adhere to principles of asepsis when placing intravascular devices (e.g., central venous catheters), spinal or epidural anesthesia catheters, or when dispensing and administering intravenous drugs. Category IA
2. Assemble sterile equipment and solutions immediately prior to use. Category II
3. Handle tissue gently, maintain effective hemostasis, minimize devitalized tissue and foreign bodies (i.e., sutures, charred tissues, necrotic debris), and eradicate dead space at the surgical site. Category IB
4. Use delayed primary skin closure or leave an incision open to heal by second intention if the surgeon considers the surgical site to be heavily contaminated (e.g., Class III and Class IV). Category IB
5. If drainage is necessary, use a closed suction drain. Place a drain through a separate incision distant from the operative incision. Remove the drain as soon as possible. Category IB
# Postoperative incision care
a. Protect with a sterile dressing for 24 to 48 hours postoperatively an incision that has been closed primarily. Category IB
b. Wash hands before and after dressing changes and any contact with the surgical site. Category IB
c. When an incision dressing must be changed, use sterile technique. Category II
d. Educate the patient and family regarding proper incision care, symptoms of SSI, and the need to report such symptoms. Category II
e. No recommendation to cover an incision closed primarily beyond 48 hours, nor on the appropriate time to shower or bathe with an uncovered incision. Unresolved issue

# Surveillance

a. Use CDC definitions of SSI (Table 1) without modification for identifying SSI among surgical inpatients and outpatients. Category IB
b. For inpatient case-finding (including readmissions), use direct prospective observation, indirect prospective detection, or a combination of both direct and indirect methods for the duration of the patient's hospitalization. Category IB
c. When postdischarge surveillance is performed for detecting SSI following certain operations (e.g., coronary artery bypass graft), use a method that accommodates available resources and data needs. Category II
d. For outpatient case-finding, use a method that accommodates available resources and data needs. Category IB
e. Assign the surgical wound classification upon completion of an operation. A surgical team member should make the assignment. Category II
f. For each patient undergoing an operation chosen for surveillance, record those variables shown to be associated with increased SSI risk (e.g., surgical wound class, ASA class, and duration of operation). Category IB
g. Periodically calculate operation-specific SSI rates stratified by variables shown to be associated with increased SSI risk (e.g., NNIS risk index). Category IB
h. Report appropriately stratified, operation-specific SSI rates to surgical team members. The optimum frequency and format for such rate computations will be determined by stratified case-load sizes (denominators) and the objectives of local, continuous quality improvement initiatives. Category IB
i. No recommendation to make available to the infection control committee coded surgeon-specific data. Unresolved issue

* Federal regulation: OSHA.
"id": "4207fa4c52bc7fe54b9a3b191f5935ba653d08c5",
"source": "cdc",
"title": "None",
"url": "None"
} |
These guidelines for the treatment of patients who have sexually transmitted diseases (STDs) were developed by CDC staff members after consultation with a group of invited experts who met in Atlanta on February 10-12, 1997. The information in this report updates the "1993 Sexually Transmitted Diseases Treatment Guidelines" (MMWR 1993;42). Included are new recommendations for treatment of primary and recurrent genital herpes and management of pelvic inflammatory disease; a new patient-applied medication for treatment of genital warts; and a revised approach to the management of victims of sexual assault. Revised sections describe the evaluation of urethritis and the diagnostic evaluation of congenital syphilis. These guidelines also include expanded sections concerning STDs among infants, children, and pregnant women and the management of patients who have asymptomatic human immunodeficiency virus infection, genital warts, and genital herpes. Guidelines are provided for vaccine-preventable STDs, including recommendations for the use of hepatitis A and hepatitis B vaccines.
# INTRODUCTION
Physicians and other health-care providers have a critical role in preventing and treating sexually transmitted diseases (STDs). These recommendations for the treatment of STDs, which were developed by CDC staff members in consultation with a group of invited experts, are intended to assist with that effort.
This report was produced through a multi-stage process. Beginning in the spring of 1996, CDC personnel and invited experts systematically reviewed literature concerning each of the major STDs, focusing on information that had become available since the "1993 Sexually Transmitted Diseases Treatment Guidelines" (MMWR 1993;42) were published. Background papers were written and tables of evidence constructed summarizing the type of study (e.g., randomized controlled trial or case series), study population and setting, treatments or other interventions, outcome measures assessed, reported findings, and weaknesses and biases in study design and analysis. For these reviews, published abstracts and peerreviewed journal articles were considered. A draft document was developed on the basis of the reviews.
In February 1997, invited consultants assembled in Atlanta for a 3-day meeting. CDC personnel and invited experts presented the key questions on STD treatment suggested from the literature reviews and presented the information available to answer those questions. Where relevant, the questions focused on four principal outcomes of STD therapy: a) microbiologic cure, b) alleviation of signs and symptoms, c) prevention of sequelae, and d) prevention of transmission. Costeffectiveness and other advantages (e.g., single-dose formulations and directly observed therapy) of specific regimens also were considered. The consultants then assessed whether the questions identified were appropriate, ranked them in order of priority, and attempted to arrive at answers using the available evidence. In addition, the consultants evaluated the quality of evidence supporting the answers on the basis of the number, type, and quality of the studies.
In several areas, the process diverged from that described previously. The sections concerning adolescents, congenital syphilis, and partner notification were reviewed by other CDC experts on prevention of STDs and human immunodeficiency virus (HIV) infection. The recommendations for STD screening during pregnancy were developed after CDC staff reviewed the published recommendations of other expert groups. The sections concerning early HIV infection are a compilation of recommendations developed by CDC experts in HIV infection. The sections on hepatitis B virus (HBV) (1 ) and hepatitis A virus (HAV) (2 ) infections are based on previously published recommendations of the Advisory Committee on Immunization Practices (ACIP).
Throughout this report, the evidence used as the basis for specific recommendations is discussed briefly. More comprehensive, annotated discussions of such evidence will appear in background papers that will be published in 1998. When more than one therapeutic regimen is recommended, the sequence is alphabetized unless there is priority of choice (i.e., based on efficacy, convenience, and cost). Almost all recommended regimens have similar efficacy and similar rates of intolerance or toxicity unless otherwise specified.
These recommendations were developed in consultation with experts whose experience is primarily with the treatment of patients in public STD clinics. Nevertheless, these recommendations also should be applicable to other patient-care settings, including family planning clinics, private physicians' offices, managed care organizations, and other primary-care facilities. When using these guidelines, the disease prevalence and other characteristics of the medical practice setting should be considered. These recommendations should be regarded as a source of clinical guidance and not as standards or inflexible rules.
These recommendations focus on the treatment and counseling of individual patients and do not address other community services and interventions that are important in STD/HIV prevention. Clinical and laboratory diagnoses are described when such information is related to therapy. For a more comprehensive discussion of diagnosis, refer to CDC's Sexually Transmitted Diseases Clinical Practice Guidelines, 1991 (3 ).
# CLINICAL PREVENTION GUIDELINES
The prevention and control of STDs is based on five major concepts: first, education of those at risk on ways to reduce the risk for STDs; second, detection of asymptomatically infected persons and of symptomatic persons unlikely to seek diagnostic and treatment services; third, effective diagnosis and treatment of infected persons; fourth, evaluation, treatment, and counseling of sex partners of persons who are infected with an STD; and fifth, preexposure vaccination of persons at risk for vaccine-preventable STDs. Although this report focuses primarily on the clinical aspects of STD control, prevention of STDs is based on changing the sexual behaviors that place persons at risk for infection. Moreover, because STD control activities reduce the likelihood of transmission to sex partners, prevention for individuals constitutes prevention for the community.
Clinicians have the opportunity to provide client education and counseling and to participate in identifying and treating infected sex partners in addition to interrupting transmission by treating persons who have the curable bacterial and parasitic STDs. The ability of the health-care provider to obtain an accurate sexual history is crucial in prevention and control efforts. Guidance in obtaining a sexual history is available in the chapter "Sexuality and Reproductive Health" in Contraceptive Technology, 16th edition (4 ). The accurate diagnosis and timely reporting of STDs by the clinician is the basis for effective public health surveillance.
# Prevention Messages
Preventing the spread of STDs requires that persons at risk for transmitting or acquiring infections change their behaviors. The essential first step is for the health-care provider to proactively include questions regarding the patient's sexual history as part of the clinical interview. When risk factors have been identified, the provider has an opportunity to deliver prevention messages. Counseling skills (i.e., respect, compassion, and a nonjudgmental attitude) are essential to the effective delivery of prevention messages. Techniques that can be effective in facilitating a rapport with the patient include using open-ended questions, using understandable language, and reassuring the patient that treatment will be provided regardless of considerations such as ability to pay, citizenship or immigration status, language spoken, or lifestyle.
Prevention messages should be tailored to the patient, with consideration given to the patient's specific risk factors for STDs. Messages should include a description of specific actions that the patient can take to avoid acquiring or transmitting STDs (e.g., abstinence from sexual activity if STD-related symptoms develop).
# Sexual Transmission
The most effective way to prevent sexual transmission of HIV infection and other STDs is to avoid sexual intercourse with an infected partner. Counseling that provides information concerning abstinence from penetrative sexual intercourse is crucial for a) persons who are being treated for an STD or whose partners are undergoing treatment and b) persons who wish to avoid the possible consequences of sexual intercourse (e.g., STD/HIV and pregnancy). A more comprehensive discussion of abstinence is available in Contraceptive Technology, 16th edition (4 ).
- Both partners should get tested for STDs, including HIV, before initiating sexual intercourse.
- If a person chooses to have sexual intercourse with a partner whose infection status is unknown or who is infected with HIV or another STD, a new condom should be used for each act of intercourse.
# Injecting-Drug Users
The following prevention messages are appropriate for injecting-drug users:
- Enroll or continue in a drug-treatment program.
- Do not, under any circumstances, use injection equipment (e.g., needles and syringes) that has been used by another person.
- If needles can be obtained legally in the community, obtain clean needles.
- Persons who continue to use injection equipment that has been used by other persons should first clean the equipment with bleach and water. (Disinfecting with bleach does not sterilize the equipment and does not guarantee that HIV is inactivated. However, for injecting-drug users, thoroughly and consistently cleaning injection equipment with bleach should reduce the rate of HIV transmission when equipment is shared.)
# Preexposure Vaccination
Preexposure vaccination is one of the most effective methods used to prevent transmission of certain STDs. HBV infection frequently is sexually transmitted, and hepatitis B vaccination is recommended for all unvaccinated patients being evaluated for an STD. In the United States, hepatitis A vaccines from two manufacturers were licensed recently. Hepatitis A vaccination is recommended for several groups of patients who might seek treatment in STD clinics; such patients include homosexual or bisexual men and persons who use illegal drugs. Vaccine trials for other STDs are being conducted, and vaccines for these STDs may become available within the next several years.
# Prevention Methods
# Male Condoms
When used consistently and correctly, condoms are effective in preventing many STDs, including HIV infection. Multiple cohort studies, including those of serodiscordant sex partners, have demonstrated a strong protective effect of condom use against HIV infection. Because condoms do not cover all exposed areas, they may be more effective in preventing infections transmitted between mucosal surfaces than those transmitted by skin-to-skin contact. Condoms are regulated as medical devices and are subject to random sampling and testing by the Food and Drug Administration (FDA). Each latex condom manufactured in the United States is tested electronically for holes before packaging. Rates of condom breakage during sexual intercourse and withdrawal are low in the United States (i.e., usually two broken condoms per 100 condoms used). Condom failure usually results from inconsistent or incorrect use rather than condom breakage.
Patients should be advised that condoms must be used consistently and correctly to be highly effective in preventing STDs. Patients also should be instructed in the correct use of condoms. The following recommendations ensure the proper use of male condoms:
- Use a new condom with each act of sexual intercourse.
- Carefully handle the condom to avoid damaging it with fingernails, teeth, or other sharp objects.
- Put the condom on after the penis is erect and before genital contact with the partner.
- Ensure that no air is trapped in the tip of the condom.
- Ensure that adequate lubrication exists during intercourse, possibly requiring the use of exogenous lubricants.
- Use only water-based lubricants (e.g., K-Y Jelly™, Astroglide™, AquaLube™, and glycerin) with latex condoms. Oil-based lubricants (e.g., petroleum jelly, shortening, mineral oil, massage oils, body lotions, and cooking oil) can weaken latex.
- Hold the condom firmly against the base of the penis during withdrawal, and withdraw while the penis is still erect to prevent slippage.
# Female Condoms
Laboratory studies indicate that the female condom (Reality™), a lubricated polyurethane sheath with a ring on each end that is inserted into the vagina, is an effective mechanical barrier to viruses, including HIV. Other than one investigation of recurrent trichomoniasis, no clinical studies have been completed to evaluate the efficacy of female condoms in providing protection from STDs, including HIV. If used consistently and correctly, the female condom should substantially reduce the risk for STDs. When a male condom cannot be used appropriately, sex partners should consider using a female condom.
# Condoms and Spermicides
Whether condoms lubricated with spermicides are more effective than other lubricated condoms in protecting against the transmission of HIV and other STDs has not been determined. Furthermore, spermicide-coated condoms have been associated with Escherichia coli urinary tract infection in young women. Whether condoms used with vaginal application of spermicide are more effective than condoms used without vaginal spermicides also has not been determined. Therefore, the consistent use of condoms, with or without spermicidal lubricant or vaginal application of spermicide, is recommended.
# Vaginal Spermicides, Sponges, and Diaphragms
As demonstrated in several randomized controlled trials, vaginal spermicides used alone without condoms reduce the risk for cervical gonorrhea and chlamydia. However, vaginal spermicides offer no protection against HIV infection, and spermicides are not recommended for HIV prevention. The vaginal contraceptive sponge, which is not available in the United States, protects against cervical gonorrhea and chlamydia, but its use increases the risk for candidiasis. In case-control and cross-sectional studies, diaphragm use has been demonstrated to protect against cervical gonorrhea, chlamydia, and trichomoniasis; however, no cohort studies have been conducted. Vaginal sponges or diaphragms should not be assumed to protect women against HIV infection. The role of spermicides, sponges, and diaphragms for preventing STDs in men has not been evaluated.
# Nonbarrier Contraception, Surgical Sterilization, and Hysterectomy
Women who are not at risk for pregnancy might incorrectly perceive themselves to be at no risk for STDs, including HIV infection. Nonbarrier contraceptive methods offer no protection against HIV or other STDs. Hormonal contraception (e.g., oral contraceptives, Norplant™, and Depo-Provera™) has been associated in some cohort studies with cervical STDs and increased acquisition of HIV; however, data concerning this latter finding are inconsistent. Women who use hormonal contraception, have been surgically sterilized, or have had hysterectomies should be counseled regarding the use of condoms and the risk for STDs, including HIV infection.
# HIV Prevention Counseling
Knowledge of HIV status and appropriate counseling are important components in initiating behavior change. Therefore, HIV counseling is an important HIV prevention strategy, although its efficacy in reducing risk behaviors is still being evaluated. By ensuring that counseling is empathic and client-centered, clinicians can develop a realistic appraisal of the patient's risk and help the patient develop a specific and realistic HIV prevention plan (5 ).
Counseling associated with HIV testing has two main components: pretest and posttest counseling. During pretest counseling, the clinician should conduct a personalized risk assessment, explain the meaning of positive and negative test results, ask for informed consent for the HIV test, and help the patient develop a realistic, personalized risk-reduction plan. During posttest counseling, the clinician should inform the patient of the results, review the meaning of the results, and reinforce prevention messages. If the patient has a confirmed positive HIV test result, posttest counseling should include referral for follow-up medical services and, if needed, social and psychological services. HIV-negative patients at continuing risk for HIV infection also may benefit from referral for additional counseling and prevention services.
# Partner Notification
For most STDs, partners of patients should be examined. When exposure to a treatable STD is considered likely, appropriate antimicrobials should be administered even though no clinical signs of infection are evident and laboratory test results are not yet available. In many states, the local or state health department can assist in notifying the partners of patients who have selected STDs (e.g., HIV infection, syphilis, gonorrhea, hepatitis B, and chlamydia).
Health-care providers should advise patients who have an STD to notify sex partners, including those without symptoms, of their exposure and encourage these partners to seek clinical evaluation. This type of partner notification is known as patient referral. In situations in which patient referral may not be effective or possible, health departments should be prepared to assist the patient either through contract referral or provider referral. Contract referral is the process by which patients agree to self-refer their partners within a defined time period. If the partners do not obtain medical evaluation and treatment within that period, then provider referral is implemented. Provider referral is the process by which partners named by infected patients are notified and counseled by health department staff.
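The referral hierarchy described above (patient referral, then contract referral, then provider referral) can be expressed as simple fallback logic. The following Python sketch is illustrative only; the function and parameter names are hypothetical and do not correspond to any health department system.

```python
from datetime import date, timedelta
from typing import Optional

def next_notification_step(patient_agreed_to_notify: bool,
                           contract_deadline: Optional[date],
                           partners_evaluated: bool) -> str:
    """Sketch of the escalation described above: patient referral first;
    under contract referral the patient agrees to self-refer partners by a
    deadline; provider referral (health department staff) follows if the
    partners are not evaluated in time."""
    if partners_evaluated:
        return "done: partners already evaluated and treated"
    if contract_deadline is not None:
        if date.today() <= contract_deadline:
            return "contract referral: await patient self-referral of partners"
        return "provider referral: health department staff notify and counsel partners"
    if patient_agreed_to_notify:
        return "patient referral: patient informs partners of exposure directly"
    return "provider referral: health department staff notify and counsel partners"

# Example: a contract deadline that passed yesterday triggers provider referral.
print(next_notification_step(True, date.today() - timedelta(days=1), False))
```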
Interrupting the transmission of infection is crucial to STD control. For treatable and vaccine-preventable STDs, further transmission and reinfection can be prevented by referral of sex partners for diagnosis, treatment, vaccination (if applicable), and counseling. When health-care providers refer infected patients to local or state health departments for provider-referral partner notification, the patients may be interviewed by trained professionals to obtain the names of their sex partners and information regarding the location of these partners for notification purposes. Every health department protects the privacy of patients in partner-notification activities. Because of the advantage of confidentiality, many patients prefer that public health officials notify partners. However, the ability of public health officials to provide appropriate prophylaxis to contacts of all patients who have STDs may be limited. In situations where the number of anonymous partners is substantial (e.g., situations among persons who exchange sex for drugs), targeted screening of persons at risk may be more effective at stopping the transmission of disease than provider-referral partner notification. Guidelines for management of sex partners and recommendations for partner notification for specific STDs are included for each STD addressed in this report.
# Reporting and Confidentiality
The accurate identification and timely reporting of STDs are integral components of successful disease control efforts. Timely reporting is important for assessing morbidity trends, targeting limited resources, and assisting local health authorities in identifying sex partners who may be infected. STD/HIV and acquired immunodeficiency syndrome (AIDS) cases should be reported in accordance with local statutory requirements.
Syphilis, gonorrhea, and AIDS are reportable diseases in every state. Chlamydial infection is reportable in most states. The requirements for reporting other STDs differ by state, and clinicians should be familiar with local STD reporting requirements. Reporting may be provider- and/or laboratory-based. Clinicians who are unsure of local reporting requirements should seek advice from local health departments or state STD programs.
STD and HIV reports are maintained in strictest confidence; in most jurisdictions, such reports are protected by statute from subpoena. Before public health representatives conduct follow-up of a positive STD-test result, these persons should consult the patient's health-care provider to verify the diagnosis and treatment.
# SPECIAL POPULATIONS
# Pregnant Women
Intrauterine or perinatally transmitted STDs can have fatal or severely debilitating effects on a fetus. Pregnant women and their sex partners should be questioned about STDs and should be counseled about the possibility of perinatal infections.
# Recommended Screening Tests
- A serologic test for syphilis should be performed on all pregnant women at the first prenatal visit. In populations in which utilization of prenatal care is not optimal, rapid plasma reagin (RPR) card test screening and treatment, if that test is reactive, should be performed at the time a pregnancy is diagnosed. For patients at high risk, screening should be repeated in the third trimester and again at delivery. Some states also mandate screening all women at delivery. No infant should be discharged from the hospital without the syphilis serologic status of its mother having been determined at least one time during pregnancy and, preferably, again at delivery. Any woman who delivers a stillborn infant should be tested for syphilis.
- A serologic test for hepatitis B surface antigen (HBsAg) should be performed for all pregnant women at the first prenatal visit. HBsAg testing should be repeated late in the pregnancy for women who are HBsAg negative but who are at high risk for HBV infection (e.g., injecting-drug users and women who have concomitant STDs).
- A test for Neisseria gonorrhoeae should be performed at the first prenatal visit for women at risk or for women living in an area in which the prevalence of N. gonorrhoeae is high. A repeat test should be performed during the third trimester for those at continued risk.
- A test for Chlamydia trachomatis should be performed in the third trimester for women at increased risk (i.e., women aged <25 years and women who have a new or more than one sex partner or whose partner has other partners) to prevent maternal postnatal complications and chlamydial infection in the infant. Screening during the first trimester might enable prevention of adverse effects of chlamydia during pregnancy. However, evidence for adverse effects during pregnancy is minimal. If screening is performed only during the first trimester, a longer period exists for acquiring infection before delivery.
- A test for HIV infection should be offered to all pregnant women at the first prenatal visit.
- A test for bacterial vaginosis (BV) may be conducted early in the second trimester for asymptomatic patients who are at high risk for preterm labor (e.g., those who have a history of a previous preterm delivery). Current evidence does not support universal testing for BV.
- A Papanicolaou (Pap) smear should be obtained at the first prenatal visit if none has been documented during the preceding year.
# Other Concerns
Other STD-related concerns are to be considered as follows:
- Pregnant women who have primary genital herpes infection, HBV infection, primary cytomegalovirus (CMV) infection, or group B streptococcal infection, as well as women who have syphilis and are allergic to penicillin, may need to be referred to an expert for management.
- HBsAg-positive pregnant women should be reported to the local and/or state health department to ensure that they are entered into a case-management system and appropriate prophylaxis is provided for their infants. In addition, household and sexual contacts of HBsAg-positive women should be vaccinated.
- In the absence of lesions during the third trimester, routine serial cultures for herpes simplex virus (HSV) are not indicated for women who have a history of recurrent genital herpes. However, obtaining cultures from such women at the time of delivery may be useful in guiding neonatal management. Prophylactic cesarean section is not indicated for women who do not have active genital lesions at the time of delivery.
- The presence of genital warts is not an indication for cesarean section.
For a more detailed discussion of these guidelines, as well as for infections not transmitted sexually, refer to Guidelines for Perinatal Care (6 ).
# NOTE:
The sources for these guidelines for screening of pregnant women include the Guide to Clinical Preventive Services (7 ), Guidelines for Perinatal Care (6 ), American College of Obstetricians and Gynecologists (ACOG) Technical Bulletin: Gonorrhea and Chlamydial Infections (8 ), "Recommendations for the Prevention and Management of Chlamydia trachomatis Infections" (9 ), and "Hepatitis B Virus: A Comprehensive Strategy for Eliminating Transmission in the United States through Universal Childhood Vaccination-Recommendations of the Immunization Practices Advisory Committee (ACIP)" (1 ). These sources are not entirely compatible in their recommendations. The Guide to Clinical Preventive Services recommends screening of patients at high risk for chlamydia, but indicates that the optimal timing for screening is uncertain. The Guidelines for Perinatal Care recommend that pregnant women at high risk for chlamydia be screened for the infection during the first prenatal-care visit and during the third trimester. Recommendations to screen pregnant women for STDs are based on disease severity and sequelae, prevalence in the population, costs, medicolegal considerations (e.g., state laws), and other factors. The screening recommendations in this report are more extensive (i.e., if followed, more women will be screened for more STDs than would be screened by following other recommendations) and are compatible with other CDC guidelines. Physicians should select a screening strategy that is compatible with the population and setting of their medical practices and that meets their goals for STD case detection and treatment.
# Adolescents
Health-care providers who provide care for adolescents should be aware of several issues that relate specifically to these persons. The rates of many STDs are highest among adolescents (e.g., the rate of gonorrhea is highest among females aged 15-19 years). Clinic-based studies have demonstrated that the prevalence of chlamydial infections, and possibly of human papillomavirus (HPV) infections, also is highest among adolescents. In addition, surveillance data indicate that 9% of adolescents who have acute HBV infection either a) have had sexual contact with a chronically infected person or with multiple sex partners or b) reported their sexual preference as homosexual. As part of a comprehensive strategy to eliminate HBV transmission in the United States, ACIP has recommended that all children be administered hepatitis B vaccine.
Adolescents who are at high risk for STDs include male homosexuals, sexually active heterosexuals, clients in STD clinics, and injecting-drug users. Younger adolescents (i.e., persons aged <15 years) who are sexually active are at particular risk for infection. Adolescents are at greatest risk for STDs because they frequently have unprotected intercourse, are biologically more susceptible to infection, and face multiple obstacles to utilization of health care.
Several of these issues can be addressed by clinicians who provide services to adolescents. Clinicians can address the general lack of knowledge and awareness about the risks and consequences of STDs and offer guidance, constituting true primary prevention, to help adolescents develop healthy sexual behaviors and prevent the establishment of patterns of behavior that can undermine sexual health. With limited exceptions, all adolescents in the United States can consent to the confidential diagnosis and treatment of STDs. Medical care for STDs can be provided to adolescents without parental consent or knowledge. Furthermore, in many states adolescents can consent to HIV counseling and testing. Consent laws for vaccination of adolescents differ by state. Several states consider provision of vaccine similar to treatment of STDs and provide vaccination services without parental consent. Providers should appreciate how important confidentiality is to adolescents and should strive to follow policies that comply with state laws to ensure the confidentiality of STD-related services provided to adolescents.
The style and content of counseling and health education should be adapted for adolescents. Discussions should be appropriate for the patient's developmental level and should identify risky behaviors, such as sex and drug-use behaviors. Careful counseling and thorough discussions are especially important for adolescents who may not acknowledge engaging in high-risk behaviors. Care and counseling should be direct and nonjudgmental.
# Children
Management of children who have STDs requires close cooperation between the clinician, laboratorians, and child-protection authorities. Investigations, when indicated, should be initiated promptly. Some diseases (e.g., gonorrhea, syphilis, and chlamydia), if acquired after the neonatal period, are almost 100% indicative of sexual contact. For other diseases, such as HPV infection and vaginitis, the association with sexual contact is not as clear (see Sexual Assault and STDs).
# HIV INFECTION: DETECTION, INITIAL MANAGEMENT, AND REFERRAL
Infection with HIV produces a spectrum of disease that progresses from a clinically latent or asymptomatic state to AIDS as a late manifestation. The pace of disease progression is variable. The time between infection with HIV and the development of AIDS ranges from a few months to as long as 17 years (median: 10 years). Most adults and adolescents infected with HIV remain symptom-free for long periods, but viral replication is active during all stages of infection, increasing substantially as the immune system deteriorates. AIDS eventually develops in almost all HIV-infected persons; in one study of HIV-infected adults, AIDS developed in 87% (95% confidence interval =83%-90%) within 17 years after infection. Additional cases are expected to occur among those who have remained AIDS-free for longer periods.
Greater awareness among both patients and health-care providers of the risk factors associated with HIV transmission has led to increased testing for HIV and earlier diagnosis of the infection, often before symptoms develop. The early diagnosis of HIV infection is important for several reasons. Treatments are available to slow the decline of immune system function. HIV-infected persons who have altered immune function are at increased risk for infections for which preventive measures are available (e.g., Pneumocystis carinii pneumonia (PCP), toxoplasmic encephalitis (TE), disseminated Mycobacterium avium complex (MAC) disease, tuberculosis (TB), and bacterial pneumonia). Because of its effect on the immune system, HIV affects the diagnosis, evaluation, treatment, and follow-up of many other diseases and may affect the efficacy of antimicrobial therapy for some STDs. Finally, the early diagnosis of HIV enables the health-care provider to counsel such patients and to assist in preventing HIV transmission to others.
Proper management of HIV infection involves a complex array of behavioral, psychosocial, and medical services. Although some of these services may be available in the STD treatment facility, other services, particularly medical services, are usually unavailable in this setting. Therefore, referral to a health-care provider or facility experienced in caring for HIV-infected patients is advised. Staff in STD treatment facilities should be knowledgeable about the options for referral available in their communities. While in the STD treatment facility, the HIV-infected patient should be educated about HIV infection and the various options for HIV care that are available.
Because of the complexity of services required for management of HIV infection, detailed information, particularly regarding medical care, is beyond the scope of this report and may be found elsewhere (3,5,10,11 ). Rather, this section provides information on diagnostic testing for HIV-1 and HIV-2, counseling patients who have HIV infection, and preparing the HIV-infected patient for what to expect when medical care is necessary. Information also is provided on management of sex partners, because such services can and should be provided in the STD treatment facility before referral. Finally, the topics of HIV infection during pregnancy and in infants and children are addressed.
# Diagnostic Testing for HIV-1 and HIV-2
Testing for HIV should be offered to all persons whose behavior puts them at risk for infection, including persons who seek evaluation and treatment for STDs. Counseling before and after testing (i.e., pretest and posttest counseling) is an integral part of the testing procedure (see HIV Prevention Counseling). Informed consent must be obtained before an HIV test is performed. Some states require written consent.
HIV infection usually is diagnosed by using HIV-1 antibody tests. Antibody testing begins with a sensitive screening test such as the enzyme immunoassay (EIA). Reactive screening tests must be confirmed by a supplemental test, such as the Western blot (WB) or an immunofluorescence assay (IFA). If confirmed by a supplemental test, a positive antibody test result indicates that a person is infected with HIV and is capable of transmitting the virus to others. HIV antibody is detectable in at least 95% of patients within 6 months after infection. Although a negative antibody test result usually indicates that a person is not infected, antibody tests cannot exclude infection that occurred <6 months before the test.
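The screening-plus-confirmation sequence described above amounts to a small decision procedure. The following Python sketch is a minimal illustration of that logic, with hypothetical function and parameter names; it is not a laboratory protocol.

```python
from typing import Optional

def interpret_hiv_antibody_testing(eia_reactive: bool,
                                   supplemental_positive: Optional[bool]) -> str:
    """Two-step logic described above: a sensitive screening EIA followed,
    if reactive, by a supplemental test (Western blot or IFA)."""
    if not eia_reactive:
        # A negative screen usually indicates no infection, but cannot exclude
        # infection acquired <6 months before the test (antibody window).
        return "nonreactive screen: infection unlikely; consider the <6-month window"
    if supplemental_positive is None:
        return "reactive screen: supplemental test (WB or IFA) required before diagnosis"
    if supplemental_positive:
        return "confirmed positive: the person is infected and can transmit HIV"
    return "reactive screen not confirmed: not diagnostic; follow-up testing advised"
```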
The prevalence of HIV-2 in the United States is extremely low, and CDC does not recommend routine testing for HIV-2 in settings other than blood centers, unless demographic or behavioral information indicates that HIV-2 infection might be present. Those at risk for HIV-2 infection include persons from a country in which HIV-2 is endemic or the sex partners of such persons. HIV-2 is endemic in parts of West Africa, and an increased prevalence of HIV-2 has been reported in Angola, France, Mozambique, and Portugal. In addition, testing for HIV-2 should be conducted when there is clinical evidence or suspicion of HIV disease in the absence of a positive test for antibodies to HIV-1 (12 ).
Because HIV antibody crosses the placenta, its presence in a child aged <18 months is not diagnostic of HIV infection (see Special Considerations, HIV Infection in Infants and Children).
The following are specific recommendations for diagnostic testing for HIV infection:
- Informed consent must be obtained before an HIV test is performed. Some states require written consent. (See HIV Prevention Counseling for a discussion of pretest and posttest counseling.)
- Positive screening tests for HIV antibody must be confirmed by a more specific confirmatory test (either WB or IFA) before being considered diagnostic of HIV infection.
- Patients who have positive HIV test results must either receive behavioral, psychosocial, and medical evaluation and monitoring services or be referred for these services.
# Acute Retroviral Syndrome
Health-care providers should be alert for the symptoms and signs of acute retroviral syndrome, which is characterized by fever, malaise, lymphadenopathy, and skin rash. This syndrome frequently occurs in the first few weeks after HIV infection, before antibody test results become positive. Suspicion of acute retroviral syndrome should prompt nucleic acid testing to detect the presence of HIV. Recent data indicate that initiation of antiretroviral therapy during this period can delay the onset of HIV-related complications and might influence prognosis. If testing reveals acute HIV infection, health-care providers should either counsel the patient about immediate initiation of antiretroviral therapy or refer the patient for emergency expert consultation. The optimal antiretroviral regimen at this time is unknown. Treatment with zidovudine can delay the onset of HIV-related complications; however, most experts recommend treatment with two nucleoside reverse transcriptase inhibitors and a protease inhibitor.
# Counseling for HIV-Infected Patients
Behavioral and psychosocial services are an integral part of health care for HIV-infected patients; such services should be available on-site or through referral when HIV infection is diagnosed. Patients often are distressed when first informed of a positive HIV test result. Such patients face several major adaptive challenges: a) accepting the possibility of a shortened life span, b) coping with others' reactions to a stigmatizing illness, c) developing and adopting strategies for maintaining physical and emotional health, and d) initiating changes in behavior to prevent HIV transmission to others. Many patients also require assistance with making reproductive choices, gaining access to health services, and confronting employment or housing discrimination.
Interrupting HIV transmission depends on behavioral changes made by those persons at risk for transmitting or acquiring infection. Infected persons, as potential sources of new infections, must receive additional counseling and assistance to support partner notification and counseling to prevent infection of others. Targeting behavior change programs toward HIV-infected persons and their sex partners, or those with whom they share injecting-drug equipment, is an important adjunct to AIDS prevention efforts.
The following are specific recommendations for counseling HIV-infected patients:
- Persons who test positive for HIV antibody should be counseled by a person or persons, either on-site or through referral, who can discuss the behavioral, psychosocial, and medical implications of HIV infection.
- Appropriate social support and psychological resources should be available, either on-site or through referral, to assist patients in coping with emotional distress.
- Persons who continue to be at risk for transmitting HIV should receive assistance in changing or avoiding behaviors that can transmit infection to others.
# Planning for Medical Care and for Continuation of Psychosocial Services
Practice settings for offering HIV care differ depending on local resources and needs. Primary-care providers and outpatient facilities must ensure that appropriate resources are available for each patient and must avoid fragmentation of care as much as possible. A single source that is able to provide comprehensive care for all stages of HIV infection is preferred; however, the limited availability of such resources often results in the need to coordinate care among outpatient, inpatient, and specialist providers in different locations. Providers should do everything possible to avoid fragmentation of care and long delays between diagnosis of HIV infection and access to medical and psychosocial services.
Recently identified HIV infection may not have been recently acquired. Persons newly diagnosed with HIV may be at any of the different stages of infection. Therefore, the health-care provider should be alert for symptoms or signs that suggest advanced HIV infection (e.g., fever, weight loss, diarrhea, cough, shortness of breath, and oral candidiasis). The presence of any of these symptoms should prompt urgent referral for medical care. Similarly, the provider should be alert for signs of severe psychologic distress and be prepared to refer the client accordingly.
HIV-infected patients in the STD treatment setting should be educated about what to expect when medical care is necessary (11 ). In the nonemergent situation, the initial evaluation of the HIV-positive patient usually includes the following components:
- A detailed medical history, including sexual and substance-abuse history, previous STDs, and specific HIV-related symptoms or diagnoses.
- A physical examination; for women, this should include a gynecologic examination.
- For women, testing for N. gonorrhoeae and C. trachomatis, a Pap smear, and wet mount examination of vaginal secretions.
- Complete blood and platelet counts and blood chemistry profile.
- Toxoplasma antibody test, tests for hepatitis B viral markers, and syphilis serology.
- A CD4+ T-lymphocyte analysis and determination of HIV plasma ribonucleic acid (i.e., HIV viral load).
- A tuberculin skin test (TST) (sometimes referred to as a purified protein derivative skin test) administered by the Mantoux method. The test result should be evaluated at 48-72 hours; in HIV-infected persons, an induration ≥5 mm is considered positive (see the sketch after this list). The usefulness of anergy testing is controversial (13-15).
- A chest radiograph.
- A thorough psychosocial evaluation, including ascertainment of behavioral factors indicating risk for transmitting HIV and elucidation of information concerning any partners who should be notified about possible exposure to HIV.
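As a minimal sketch of the HIV-specific TST cutoff noted in the list above: the 10 mm comparison cutoff used here for persons without HIV infection is an assumption for contrast only and is not drawn from these guidelines.

```python
def tst_positive(induration_mm: float, hiv_infected: bool) -> bool:
    """Mantoux TST read at 48-72 hours. In HIV-infected persons an
    induration >=5 mm is considered positive; the 10 mm cutoff for other
    persons is an illustrative assumption, not from these guidelines."""
    cutoff_mm = 5.0 if hiv_infected else 10.0
    return induration_mm >= cutoff_mm

assert tst_positive(6, hiv_infected=True)        # positive at >=5 mm with HIV
assert not tst_positive(6, hiv_infected=False)   # below the assumed 10 mm cutoff
```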
In subsequent visits, once the results of laboratory and skin tests are available, the patient may be offered antiretroviral therapy (16 ), as well as specific medications to reduce the incidence of opportunistic infections (e.g., PCP, TE, disseminated MAC infection, and TB) (10,14,17-19). Hepatitis B vaccination should be offered to patients who do not have hepatitis B markers, influenza vaccination should be offered annually, and pneumococcal vaccination should be administered. For additional information concerning vaccination of HIV-infected patients, refer to "Recommendations of the Advisory Committee on Immunization Practices (ACIP): Use of Vaccines and Immune Globulins in Persons with Altered Immunocompetence" (20 ).
Specific recommendations for planning medical care and continuation of psychosocial services include the following:
- HIV-infected persons should be referred for appropriate follow-up to facilities in which health-care personnel are experienced in providing care for HIV-infected patients.
- Health-care providers should be alert for medical or psychosocial conditions that require immediate attention.
- Patients should be educated about what to expect in follow-up medical care.
# Management of Sex Partners and Injecting-Drug Partners
When referring to persons who are infected with HIV, the term "partner" includes not only sex partners but also injecting-drug users who share syringes or other injection equipment. The rationale for partner notification is that the early diagnosis and treatment of HIV infection possibly reduces morbidity and provides the opportunity to encourage risk-reducing behaviors. Partner notification for HIV infection must be confidential and will depend on voluntary cooperation of the patient.
Two complementary notification processes, patient referral and provider referral, can be used to identify partners. With patient referral, patients directly inform their partners of their exposure to HIV infection. With provider referral, trained health department personnel locate partners on the basis of the names, descriptions, and addresses provided by the patient. During the notification process, the anonymity of patients is protected; their names are not revealed to partners who are notified. Many state health departments provide assistance, if requested, with provider-referral partner notification.
The results of one randomized trial suggested that provider referral is more effective in notifying partners than patient referral. In that study, 50% of partners in the provider-referral group were notified, compared with 7% of partners notified by persons in the patient-referral group. However, whether behavioral change takes place as a result of partner notification has not been determined, and many patients are reluctant to disclose the names of partners because of concern about discrimination, disruption of relationships, loss of confidentiality for the partners, and possible violence.
The following are specific recommendations for implementing partner-notification procedures:
- HIV-infected patients should be encouraged to notify their partners and to refer them for counseling and testing. If requested by the patient, health-care providers should assist in this process, either directly or by referral to health department partner-notification programs.
- If patients are unwilling to notify their partners, or if they cannot ensure that their partners will seek counseling, physicians or health department personnel should use confidential procedures to notify the partners.
# Special Considerations
# Pregnancy
All pregnant women should be offered HIV testing as early in pregnancy as possible (21 ). This recommendation is particularly important because of the available treatments for reducing the likelihood of perinatal transmission and maintaining the health of the woman. HIV-infected women should be informed specifically about the risk for perinatal infection. Current evidence indicates that 15%-25% of infants born to untreated HIV-infected mothers are infected with HIV; the virus also can be transmitted from an infected mother by breastfeeding. Zidovudine (ZDV) reduces the risk for HIV transmission to the infant from approximately 25% to 8% if administered to women during the later stage of pregnancy and during labor and to infants for the first 6 weeks of life (22 ). Therefore, ZDV treatment should be offered to all HIV-infected pregnant women. In the United States, HIV-infected women should be advised not to breastfeed their infants.
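The magnitude of the ZDV effect quoted above can be made concrete with simple arithmetic. The sketch below assumes the approximate 25% (untreated) and 8% (treated) transmission risks; the derived quantities are illustrative computations, not study results.

```python
# Worked arithmetic for the approximate risks quoted above (25% -> 8%).
untreated_risk = 0.25  # approximate perinatal transmission risk without ZDV
treated_risk = 0.08    # approximate risk with the full ZDV regimen

arr = untreated_risk - treated_risk   # absolute risk reduction: 0.17
rrr = arr / untreated_risk            # relative risk reduction: 0.68
nnt = 1 / arr                         # ~6 women treated per infection averted

print(f"ARR = {arr:.0%}, RRR = {rrr:.0%}, NNT ~ {nnt:.0f}")
```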
Insufficient information is available regarding the safety of ZDV or other antiretroviral drugs during early pregnancy; however, on the basis of the ACTG-076 protocol, ZDV is indicated for the prevention of maternal-fetal HIV transmission as part of a regimen that includes oral ZDV at 14-34 weeks of gestation, intravenous (IV) ZDV during labor, and ZDV syrup to the neonate after birth (22 ). Glaxo Wellcome, Inc., Hoffmann-LaRoche, Inc., Bristol-Myers Squibb, Co., and Merck & Co., Inc., in cooperation with CDC, maintain a registry to assess the safety of ZDV, didanosine (ddI), lamivudine (3TC), saquinavir (SAQ), stavudine (d4T), and dideoxycytidine (ddC) during pregnancy. Women who receive any of these drugs during pregnancy should be reported to this registry; telephone (800) 722-9292, extension 38465. The number of cases reported through February 1997 represented a sample of insufficient size for reliably estimating the risk for birth defects after administration of ddI, 3TC, SAQ, d4T, ddC, or ZDV, or their combination, to pregnant women and their fetuses. However, the registry findings did not indicate an increase in the number of birth defects after receipt of only ZDV in comparison with the number expected in the U.S. population. Furthermore, no consistent pattern of birth defects has been observed that would suggest a common cause.
Women should be counseled about their options regarding pregnancy. The objective of counseling is to provide HIV-infected women with information for making reproductive decisions, analogous to the model used in genetic counseling. In addition, contraceptive counseling should be offered to HIV-infected women who do not desire pregnancy. Prenatal and abortion services should be available on-site or by referral. Pregnancy among HIV-infected women does not appear to increase maternal morbidity or mortality.
# HIV Infection in Infants and Children
HIV-infected infants and young children differ from adults and adolescents with respect to the diagnosis, clinical presentation, and management of HIV disease. For example, because of transplacental passage of maternal HIV antibody, both infected and uninfected infants born to HIV-infected mothers are expected to have positive HIV-antibody test results. A definitive determination of HIV infection in a child <18 months of age should be based on laboratory evidence of HIV in blood or tissues by culture, nucleic acid, or antigen detection. In addition, CD4+ lymphocyte counts are higher in infants and children aged <5 years than in healthy adults and must be interpreted accordingly. All infants born to HIV-infected mothers should begin PCP prophylaxis at age 4-6 weeks; such prophylaxis should be continued until HIV infection has been excluded (18 ). Other modifications must be made in health services that are recommended for infants and children, such as avoiding vaccination with live oral polio vaccine when a child (or household contact) is infected with HIV. Management of infants, children, and adolescents who are known or suspected to be infected with HIV requires referral to physicians familiar with the manifestations and treatment of pediatric HIV infection.
# DISEASES CHARACTERIZED BY GENITAL ULCERS
# Management of Patients Who Have Genital Ulcers
In the United States, most young, sexually active patients who have genital ulcers have either genital herpes, syphilis, or chancroid. The relative frequency of each differs by geographic area and patient population; however, in most areas of the United States, genital herpes is the most prevalent of these diseases. More than one of these diseases could be present in a patient who has genital ulcers. Each disease has been associated with an increased risk for HIV infection.
A diagnosis based only on the patient's medical history and physical examination often is inaccurate. Therefore, evaluation of all patients who have genital ulcers should include a serologic test for syphilis and diagnostic evaluation for herpes. Although, ideally, all of these tests should be conducted for each patient who has a genital ulcer, use of such tests (other than a serologic test for syphilis) may be based on test availability and clinical or epidemiologic suspicion. Specific tests for the evaluation of genital ulcers include the following:
- Darkfield examination or direct immunofluorescence test for Treponema pallidum,
- Culture or antigen test for HSV, and
- Culture for Haemophilus ducreyi.
Polymerase chain reaction (PCR) tests for these organisms might become available commercially.
HIV testing should be a) performed in the management of patients who have genital ulcers caused by T. pallidum or H. ducreyi and b) considered for those who have ulcers caused by HSV (see sections on Syphilis, Chancroid, and Genital Herpes).
A health-care provider often must treat a patient before test results are available. In such a circumstance, the clinician should treat for the diagnosis considered most likely. If the diagnosis is unclear, many experts recommend treatment for syphilis, or for both syphilis and chancroid if the patient resides in a community in which H. ducreyi is a significant cause of genital ulcers, especially when diagnostic capabilities for chancroid or syphilis are not ideal. However, even after complete diagnostic evaluation, at least 25% of patients who have genital ulcers have no laboratory-confirmed diagnosis.
# Chancroid
Chancroid is endemic in some areas of the United States, and the disease also occurs in discrete outbreaks. Chancroid is a cofactor for HIV transmission, and high rates of HIV infection among patients who have chancroid have been reported in the United States and other countries. An estimated 10% of patients who have chancroid could be coinfected with T. pallidum or HSV.
A definitive diagnosis of chancroid requires identification of H. ducreyi on special culture media that are not widely available from commercial sources; even using these media, sensitivity is ≤80%. A probable diagnosis, for both clinical and surveillance purposes, may be made if the following criteria are met: a) the patient has one or more painful genital ulcers; b) the patient has no evidence of T. pallidum infection by darkfield examination of ulcer exudate or by a serologic test for syphilis performed at least 7 days after onset of ulcers; and c) the clinical presentation, appearance of genital ulcers, and regional lymphadenopathy, if present, are typical for chancroid and a test for HSV is negative. The combination of a painful ulcer and tender inguinal adenopathy, which occurs among one third of patients, suggests a diagnosis of chancroid; when accompanied by suppurative inguinal adenopathy, these signs are almost pathognomonic. PCR testing for H. ducreyi might become available soon.
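The three probable-diagnosis criteria above combine as a simple conjunction. The following Python sketch restates them; the parameter names are illustrative shorthand, not standardized terms.

```python
def probable_chancroid(painful_genital_ulcers: bool,
                       t_pallidum_excluded: bool,
                       presentation_typical: bool,
                       hsv_test_negative: bool) -> bool:
    """Probable diagnosis per the criteria above: a) one or more painful
    genital ulcers; b) no evidence of T. pallidum by darkfield examination
    or by serology performed >=7 days after ulcer onset; c) a clinical
    presentation typical for chancroid together with a negative HSV test."""
    return (painful_genital_ulcers
            and t_pallidum_excluded
            and presentation_typical
            and hsv_test_negative)

# Example: meeting all criteria supports a probable (not definitive) diagnosis.
print(probable_chancroid(True, True, True, True))  # True
```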
# Treatment
Successful treatment for chancroid cures the infection, resolves the clinical symptoms, and prevents transmission to others. In extensive cases, scarring can result despite successful therapy.
# Recommended Regimens
Azithromycin 1 g orally in a single dose,
OR Ceftriaxone 250 mg intramuscularly (IM) in a single dose,
OR Ciprofloxacin 500 mg orally twice a day for 3 days,
OR Erythromycin base 500 mg orally four times a day for 7 days.
# NOTE:
Ciprofloxacin is contraindicated for pregnant and lactating women and for persons aged <18 years.
All four regimens are effective for treatment of chancroid in HIV-infected patients. Azithromycin and ceftriaxone offer the advantage of single-dose therapy. Worldwide, several isolates with intermediate resistance to either ciprofloxacin or erythromycin have been reported.
# Other Management Considerations
Patients who are uncircumcised and HIV-infected patients might not respond as well to treatment as those who are circumcised or HIV-negative. Patients should be tested for HIV infection at the time chancroid is diagnosed. Patients should be retested 3 months after the diagnosis of chancroid if the initial test results for syphilis and HIV were negative.
# Follow-Up
Patients should be reexamined 3-7 days after initiation of therapy. If treatment is successful, ulcers improve symptomatically within 3 days and objectively within 7 days after therapy. If no clinical improvement is evident, the clinician must consider whether a) the diagnosis is correct, b) the patient is coinfected with another STD, c) the patient is infected with HIV, d) the treatment was not taken as instructed, or e) the H. ducreyi strain causing the infection is resistant to the prescribed antimicrobial. The time required for complete healing depends on the size of the ulcer; large ulcers may require >2 weeks. In addition, healing is slower for some uncircumcised men who have ulcers under the foreskin. Clinical resolution of fluctuant lymphadenopathy is slower than that of ulcers and may require drainage, even during otherwise successful therapy. Although needle aspiration of buboes is a simpler procedure, incision and drainage of buboes may be preferred because of less need for subsequent drainage procedures.
# Management of Sex Partners
Sex partners of patients who have chancroid should be examined and treated, regardless of whether symptoms of the disease are present, if they had sexual contact with the patient during the 10 days preceding onset of symptoms in the patient.
# Special Considerations
# Pregnancy
The safety of azithromycin for pregnant and lactating women has not been established. Ciprofloxacin is contraindicated during pregnancy. No adverse effects of chancroid on pregnancy outcome or on the fetus have been reported.
# HIV Infection
HIV-infected patients who have chancroid should be monitored closely. Such patients may require longer courses of therapy than those recommended for HIV-negative patients. Healing may be slower among HIV-infected patients, and treatment failures occur with any regimen. Because data are limited concerning the therapeutic efficacy of the recommended ceftriaxone and azithromycin regimens in HIV-infected patients, these regimens should be used for such patients only if follow-up can be ensured. Some experts suggest using the erythromycin 7-day regimen for treating HIV-infected persons.
# Genital Herpes Simplex Virus (HSV) Infection
Genital herpes is a recurrent, incurable viral disease. Two serotypes of HSV have been identified: HSV-1 and HSV-2. Most cases of recurrent genital herpes are caused by HSV-2. On the basis of serologic studies, genital HSV-2 infection has been diagnosed in at least 45 million persons in the United States.
Most HSV-2-infected persons have not received a diagnosis of genital herpes. Such persons have mild or unrecognized infections that shed virus intermittently in the genital tract. Some cases of first-episode genital herpes are manifested by severe disease that might require hospitalization. Many cases of genital herpes are transmitted by persons who are unaware that they have the infection or are asymptomatic when transmission occurs.
Systemic antiviral drugs partially control the symptoms and signs of herpes episodes when used to treat first clinical episodes or recurrent episodes or when used as daily suppressive therapy. However, these drugs neither eradicate latent virus nor affect the risk, frequency, or severity of recurrences after the drug is discontinued. Randomized trials indicate that three antiviral medications provide clinical benefit for genital herpes: acyclovir, valacyclovir, and famciclovir. Valacyclovir is a valine ester of acyclovir with enhanced absorption after oral administration. Famciclovir, a prodrug of penciclovir, also has high oral bioavailability. Topical therapy with acyclovir is substantially less effective than the systemic drug, and its use is discouraged. The recommended acyclovir dosing regimens for both initial and recurrent episodes reflect substantial clinical experience, expert opinion, and FDA-approved dosages.
# First Clinical Episode of Genital Herpes
Management of patients with first clinical episode of genital herpes includes antiviral therapy and counseling regarding the natural history of genital herpes, sexual and perinatal transmission, and methods to reduce such transmission. Five percent to 30% of first-episode cases of genital herpes are caused by HSV-1, but clinical recurrences are much less frequent for HSV-1 than HSV-2 genital infection. Therefore, identification of the type of the infecting strain has prognostic importance and may be useful for counseling purposes.
# Recommended Regimens
Acyclovir 400 mg orally three times a day for 7-10 days,
OR Acyclovir 200 mg orally five times a day for 7-10 days,
OR Famciclovir 250 mg orally three times a day for 7-10 days,
OR Valacyclovir 1 g orally twice a day for 7-10 days.
# NOTE:
Treatment may be extended if healing is incomplete after 10 days of therapy. Higher dosages of acyclovir (i.e., 400 mg orally five times a day) were used in treatment studies of first-episode herpes proctitis and first-episode oral infection, including stomatitis or pharyngitis. It is unclear whether these forms of mucosal infection require higher doses of acyclovir than used for genital herpes. Valacyclovir and famciclovir probably are also effective for acute HSV proctitis or oral infection, but clinical experience is lacking.
Counseling is an important aspect of managing patients who have genital herpes. Although initial counseling can be provided at the first visit, many patients benefit from learning about the chronic aspects of the disease after the acute illness subsides. Counseling of these patients should include the following:
- Patients who have genital herpes should be told about the natural history of the disease, with emphasis on the potential for recurrent episodes, asymptomatic viral shedding, and sexual transmission.
- Patients should be advised to abstain from sexual activity when lesions or prodromal symptoms are present and encouraged to inform their sex partners that they have genital herpes. The use of condoms during all sexual exposures with new or uninfected sex partners should be encouraged.
- Sexual transmission of HSV can occur during asymptomatic periods. Asymptomatic viral shedding occurs more frequently in patients who have genital HSV-2 infection than HSV-1 infection and in patients who have had genital herpes for <12 months. Such patients should be counseled to prevent spread of the infection.
- The risk for neonatal infection should be explained to all patients, including men. Childbearing-aged women who have genital herpes should be advised to inform health-care providers who care for them during pregnancy about the HSV infection.
- Patients having a first episode of genital herpes should be advised that a) episodic antiviral therapy during recurrent episodes might shorten the duration of lesions and b) suppressive antiviral therapy can ameliorate or prevent recurrent outbreaks.
# Recurrent Episodes of HSV Disease
Most patients with first-episode genital HSV-2 infection will have recurrent episodes of genital lesions. Episodic or suppressive antiviral therapy might shorten the duration of lesions or ameliorate recurrences. Because many patients benefit from antiviral therapy, options for treatment should be discussed with all patients.
When treatment is started during the prodrome or within 1 day after onset of lesions, many patients who have recurrent disease benefit from episodic therapy. If episodic treatment of recurrences is chosen, the patient should be provided with antiviral therapy, or a prescription for the medication, so that treatment can be initiated at the first sign of prodrome or genital lesions.
Daily suppressive therapy reduces the frequency of genital herpes recurrences by ≥75% among patients who have frequent recurrences (i.e., six or more recurrences per year). Safety and efficacy have been documented among patients receiving daily therapy with acyclovir for as long as 6 years, and with valacyclovir and famciclovir for 1 year. Suppressive therapy has not been associated with emergence of clinically significant acyclovir resistance among immunocompetent patients. After 1 year of continuous suppressive therapy, discontinuation of therapy should be discussed with the patient to assess the patient's psychological adjustment to genital herpes and rate of recurrent episodes, as the frequency of recurrences decreases over time in many patients. Insufficient experience with famciclovir and valacyclovir prevents recommendation of these drugs for >1 year.
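As a quick worked example of the effect size quoted above, assuming a patient at the six-recurrences-per-year threshold (the figures here are illustrative only):

```python
# Worked example of the >=75% reduction quoted above.
baseline_per_year = 6          # threshold for "frequent recurrences"
reduction = 0.75               # lower bound of the reported effect
expected = baseline_per_year * (1 - reduction)
print(expected)                # at most ~1.5 expected recurrences per year
```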
Suppressive treatment with acyclovir reduces but does not eliminate asymptomatic viral shedding. Therefore, the extent to which suppressive therapy may prevent HSV transmission is unknown.
# Recommended Regimens for Episodic Recurrent Infection
Acyclovir 400 mg orally three times a day for 5 days.

Valacyclovir 500 mg once a day appears less effective than other valacyclovir dosing regimens in patients who have very frequent recurrences (i.e., ≥10 episodes per year). Few comparative studies of valacyclovir and famciclovir with acyclovir have been conducted. The results of these studies suggest that valacyclovir and famciclovir are comparable to acyclovir in clinical outcome. However, valacyclovir and famciclovir may provide increased ease in administration, which is an important consideration for prolonged treatment.
# Severe Disease
IV therapy should be provided for patients who have severe disease or complications necessitating hospitalization, such as disseminated infection, pneumonitis, hepatitis, or complications of the central nervous system (e.g., meningitis or encephalitis).
# Recommended Regimen
Acyclovir 5-10 mg/kg body weight IV every 8 hours for 5-7 days or until clinical resolution is attained.
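The weight-based regimen above implies straightforward per-dose arithmetic. This Python sketch is illustrative only and is not a substitute for clinical judgment or pharmacy dosing references; the function name is hypothetical.

```python
def acyclovir_iv_dose_mg(weight_kg: float, mg_per_kg: float = 5.0) -> float:
    """Per-dose amount under the regimen above (5-10 mg/kg IV every 8 hours).
    Illustrative arithmetic only; not clinical software."""
    if not 5.0 <= mg_per_kg <= 10.0:
        raise ValueError("the regimen above specifies 5-10 mg/kg per dose")
    return weight_kg * mg_per_kg

# Example: a 70-kg patient at 5 mg/kg receives 350 mg per dose,
# i.e., 1,050 mg over 24 hours (three doses at 8-hour intervals).
print(acyclovir_iv_dose_mg(70.0))
```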
# Management of Sex Partners
The sex partners of patients who have genital herpes are likely to benefit from evaluation and counseling. Symptomatic sex partners should be evaluated and treated in the same manner as patients who have genital lesions. However, most persons who have genital HSV infection do not have a history of typical genital lesions. These persons and their future sex partners may benefit from evaluation and counseling. Thus, even asymptomatic sex partners of patients who have newly diagnosed genital herpes should be questioned concerning histories of typical and atypical genital lesions, and they should be encouraged to examine themselves for lesions in the future and seek medical attention promptly if lesions appear.
Most of the available HSV antibody tests do not accurately discriminate between HSV-1 and HSV-2 antibodies, and their use is not currently recommended. Sensitive and type-specific serum antibody assays may become commercially available and contribute to future intervention strategies.
# Special Considerations
# Allergy, Intolerance, or Adverse Reactions
Allergic and other adverse reactions to acyclovir, valacyclovir, and famciclovir are infrequent. Desensitization to acyclovir has been described previously (23 ).
# HIV Infection
Immunocompromised patients might have prolonged and/or severe episodes of genital or perianal herpes. Lesions caused by HSV are relatively common among HIV-infected patients and may be severe, painful, and atypical. Intermittent or suppressive therapy with oral antiviral agents is often beneficial.
The dosage of antiviral drugs for HIV-infected patients is controversial, but clinical experience strongly suggests that immunocompromised patients benefit from increased doses of antiviral drugs. Regimens such as acyclovir 400 mg orally three to five times a day, as used for other immunocompromised patients, have been useful. Therapy should be continued until clinical resolution is attained. Famciclovir 500 mg twice a day has been effective in decreasing both the rate of recurrences and the rate of subclinical shedding among HIV-infected patients. In immunocompromised patients, valacyclovir in doses of 8 g per day has been associated with a syndrome resembling either hemolytic uremic syndrome or thrombotic thrombocytopenic purpura. However, in the doses recommended for treatment of genital herpes, valacyclovir, acyclovir, and famciclovir probably are safe for use in immunocompromised patients. For severe cases, acyclovir 5 mg/kg IV every 8 hours may be required.
If lesions persist in a patient receiving acyclovir treatment, resistance of the HSV strain to acyclovir should be suspected. Such patients should be managed in consultation with an expert. For severe cases caused by proven or suspected acyclovir-resistant strains, alternative therapy should be administered. All acyclovir-resistant strains are resistant to valacyclovir, and most are resistant to famciclovir. Foscarnet, 40 mg/kg body weight IV every 8 hours until clinical resolution is attained, is often effective for treatment of acyclovir-resistant genital herpes. Topical cidofovir gel 1% applied to the lesions once daily for 5 consecutive days also might be effective.
# Pregnancy
The safety of systemic acyclovir and valacyclovir therapy in pregnant women has not been established. Glaxo-Wellcome, Inc., in cooperation with CDC, maintains a registry to assess the use and effects of acyclovir and valacyclovir during pregnancy. Women who receive acyclovir or valacyclovir during pregnancy should be reported to this registry; telephone (800) 722-9292, extension 38465.
Current registry findings do not indicate an increased risk for major birth defects after acyclovir treatment (i.e., in comparison with the general population). These findings provide some assurance in counseling women who have had prenatal exposure to acyclovir. However, the accumulated case histories represent an insufficient sample for reaching reliable and definitive conclusions regarding the risks associated with acyclovir treatment during pregnancy. Prenatal exposure to valacyclovir and famciclovir is too limited to provide useful information on pregnancy outcomes.
The first clinical episode of genital herpes during pregnancy may be treated with oral acyclovir. In the presence of life-threatening maternal HSV infection (e.g., disseminated infection, encephalitis, pneumonitis, or hepatitis), acyclovir administered IV is indicated. Investigations of acyclovir use among pregnant women suggest that acyclovir treatment near term might reduce the rate of abdominal deliveries among women who have frequently recurring or newly acquired genital herpes by decreasing the incidence of active lesions. However, routine administration of acyclovir to pregnant women who have a history of recurrent genital herpes is not recommended at this time.
# Perinatal Infection
Most mothers of infants who acquire neonatal herpes lack histories of clinically evident genital herpes. The risk for transmission to the neonate from an infected mother is high among women who acquire genital herpes near the time of delivery (30%-50%) and is low among women who have a history of recurrent herpes at term and women who acquire genital HSV during the first half of pregnancy (3%). Therefore, prevention of neonatal herpes should emphasize prevention of acquisition of genital HSV infection during late pregnancy. Susceptible women whose partners have oral or genital HSV infection, or those whose sex partners' infection status is unknown, should be counseled to avoid unprotected genital and oral sexual contact during late pregnancy. The results of viral cultures during pregnancy do not predict viral shedding at the time of delivery, and such cultures are not indicated routinely.
At the onset of labor, all women should be examined and carefully questioned regarding whether they have symptoms of genital herpes. Infants of women who do not have symptoms or signs of genital herpes infection or its prodrome may be delivered vaginally. Abdominal delivery does not completely eliminate the risk for HSV infection in the neonate.
Infants exposed to HSV during birth, as proven by virus isolation or presumed by observation of lesions, should be followed carefully. Some authorities recommend that such infants undergo surveillance cultures of mucosal surfaces to detect HSV infection before development of clinical signs. Available data do not support the routine use of acyclovir for asymptomatic infants exposed during birth through an infected birth canal, because the risk for infection in most infants is low. However, infants born to women who acquired genital herpes near term are at high risk for neonatal herpes, and some experts recommend acyclovir therapy for these infants. Such pregnancies and newborns should be managed in consultation with an expert. All infants who have evidence of neonatal herpes should be promptly evaluated and treated with systemic acyclovir (19). Acyclovir 30-60 mg/kg/day for 10-21 days is the regimen of choice.
# Granuloma Inguinale (Donovanosis)
Granuloma inguinale, a rare disease in the United States, is caused by the intracellular Gram-negative bacterium Calymmatobacterium granulomatis. The disease is endemic in certain tropical and developing areas, including India, Papua New Guinea, central Australia, and southern Africa. The disease presents clinically as painless, progressive, ulcerative lesions without regional lymphadenopathy. The lesions are highly vascular (i.e., a beefy red appearance) and bleed easily on contact. The causative organism cannot be cultured on standard microbiologic media, and diagnosis requires visualization of dark-staining Donovan bodies on tissue crush preparation or biopsy. A secondary bacterial infection might develop in the lesions, or the lesions might be coinfected with another sexually transmitted pathogen.
# Treatment
Treatment appears to halt progressive destruction of tissue, although prolonged duration of therapy often is required to enable granulation and re-epithelialization of the ulcers. Relapse can occur 6-18 months later despite effective initial therapy.
# Recommended Regimens
Trimethoprim-sulfamethoxazole one double-strength tablet orally twice a day for a minimum of 3 weeks, OR Doxycycline 100 mg orally twice a day for a minimum of 3 weeks.
Therapy should be continued until all lesions have healed completely.
# Alternative Regimens
Ciprofloxacin 750 mg orally twice a day for a minimum of 3 weeks, OR Erythromycin base 500 mg orally four times a day for a minimum of 3 weeks.
For any of the above regimens, the addition of an aminoglycoside (gentamicin 1 mg/kg IV every 8 hours) should be considered if lesions do not respond within the first few days of therapy.
# Follow-Up
Patients should be followed clinically until signs and symptoms have resolved.
# Management of Sex Partners
Sex partners of patients who have granuloma inguinale should be examined and treated if they a) had sexual contact with the patient during the 60 days preceding the onset of symptoms in the patient and b) have clinical signs and symptoms of the disease.
# Special Considerations
# Pregnancy
Pregnancy is a relative contraindication to the use of sulfonamides. Both pregnant and lactating women should be treated with the erythromycin regimen. The addition of a parenteral aminoglycoside (e.g., gentamicin) should be strongly considered.
# HIV Infection
HIV-infected persons who have granuloma inguinale should be treated following the regimens cited previously. The addition of a parenteral aminoglycoside (e.g., gentamicin) should be strongly considered.
# Lymphogranuloma Venereum
Lymphogranuloma venereum (LGV), a rare disease in the United States, is caused by the invasive serovars L1, L2, or L3 of C. trachomatis. The most frequent clinical manifestation of LGV among heterosexual men is tender inguinal and/or femoral lymphadenopathy that is usually unilateral. Women and homosexually active men might have proctocolitis or inflammatory involvement of perirectal or perianal lymphatic tissues that can result in fistulas and strictures. When most patients seek medical care, they no longer have the self-limited genital ulcer that sometimes occurs at the inoculation site. The diagnosis usually is made serologically and by exclusion of other causes of inguinal lymphadenopathy or genital ulcers.
# Treatment
Treatment cures infection and prevents ongoing tissue damage, although tissue reaction can result in scarring. Buboes may require aspiration through intact skin or incision and drainage to prevent the formation of inguinal/femoral ulcerations. Doxycycline is the preferred treatment.
# Recommended Regimen
Doxycycline 100 mg orally twice a day for 21 days.
# Alternative Regimen
Erythromycin base 500 mg orally four times a day for 21 days.
The activity of azithromycin against C. trachomatis suggests that it may be effective in multiple doses over 2-3 weeks, but clinical data regarding its use are lacking.
# Follow-Up
Patients should be followed clinically until signs and symptoms have resolved.
# Management of Sex Partners
Sex partners of patients who have LGV should be examined, tested for urethral or cervical chlamydial infection, and treated if they had sexual contact with the patient during the 30 days preceding onset of symptoms in the patient.
# Special Considerations
# Pregnancy
Pregnant women should be treated with the erythromycin regimen.
# HIV Infection
HIV-infected persons who have LGV should be treated according to the regimens cited previously. Anecdotal evidence suggests that LGV infection in HIV-positive patients may require prolonged therapy and that resolution might be delayed.
# Syphilis
# General Principles
# Background
Syphilis is a systemic disease caused by T. pallidum. Patients who have syphilis may seek treatment for signs or symptoms of primary infection (i.e., ulcer or chancre at the infection site), secondary infection (i.e., manifestations that include rash, mucocutaneous lesions, and adenopathy), or tertiary infection (i.e., cardiac, neurologic, ophthalmic, auditory, or gummatous lesions). Infections also may be detected by serologic testing during the latent stage. Latent syphilis acquired within the preceding year is referred to as early latent syphilis; all other cases of latent syphilis are either late latent syphilis or syphilis of unknown duration. Treatment for late latent syphilis, as well as tertiary syphilis, theoretically may require a longer duration of therapy because organisms are dividing more slowly; however, the validity and importance of this concept have not been determined.
# Diagnostic Considerations and Use of Serologic Tests
Darkfield examinations and direct fluorescent antibody tests of lesion exudate or tissue are the definitive methods for diagnosing early syphilis. A presumptive diagnosis is possible with the use of two types of serologic tests for syphilis: a) nontreponemal (e.g., the Venereal Disease Research Laboratory [VDRL] and rapid plasma reagin [RPR] tests) and b) treponemal (e.g., the fluorescent treponemal antibody absorbed [FTA-ABS] test and the microhemagglutination assay for antibody to T. pallidum [MHA-TP]). The use of only one type of test is insufficient for diagnosis because false-positive nontreponemal test results occasionally occur secondary to various medical conditions. Nontreponemal test antibody titers usually correlate with disease activity, and results should be reported quantitatively. A fourfold change in titer, equivalent to a change of two dilutions (e.g., from 1:16 to 1:4 or from 1:8 to 1:32), usually is considered necessary to demonstrate a clinically significant difference between two nontreponemal test results that were obtained by using the same serologic test. It is expected that the nontreponemal test will eventually become nonreactive after treatment; however, in some patients, nontreponemal antibodies can persist at a low titer for a long period, sometimes for the remainder of their lives. This response is referred to as the serofast reaction. Most patients who have reactive treponemal tests will have reactive tests for the remainder of their lives, regardless of treatment or disease activity. However, 15%-25% of patients treated during the primary stage might revert to being serologically nonreactive after 2-3 years. Treponemal test antibody titers correlate poorly with disease activity and should not be used to assess treatment response.
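Because nontreponemal titers are reported as doubling dilutions, the fourfold criterion corresponds to a difference of at least two dilution steps. The following is a minimal sketch of that arithmetic (the function names, and the convention of entering a 1:16 titer as the integer 16, are illustrative assumptions, not part of the guidelines):

```python
import math

def dilution_steps(titer_a: int, titer_b: int) -> float:
    """Doubling dilutions between two reciprocal titers (a 1:16 titer is entered as 16)."""
    return abs(math.log2(titer_b) - math.log2(titer_a))

def clinically_significant_change(titer_a: int, titer_b: int) -> bool:
    """A fourfold change in titer equals at least two doubling dilutions."""
    return dilution_steps(titer_a, titer_b) >= 2

print(clinically_significant_change(16, 4))   # True: 1:16 -> 1:4 is a fourfold decline
print(clinically_significant_change(8, 32))   # True: 1:8 -> 1:32 is a fourfold rise
print(clinically_significant_change(16, 8))   # False: a single dilution step
```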
Sequential serologic tests should be performed by using the same testing method (e.g., VDRL or RPR), preferably by the same laboratory. The VDRL and RPR are equally valid, but quantitative results from the two tests cannot be compared directly because RPR titers often are slightly higher than VDRL titers.
HIV-infected patients can have abnormal serologic test results (i.e., unusually high, unusually low, and fluctuating titers). For such patients with clinical syndromes suggestive of early syphilis, use of other tests (e.g., biopsy and direct microscopy) should be considered. However, for most HIV-infected patients, serologic tests appear to be accurate and reliable for the diagnosis of syphilis and for evaluation of treatment response.
No single test can be used to diagnose all cases of neurosyphilis. The diagnosis of neurosyphilis can be made based on various combinations of reactive serologic test results, abnormalities of cerebrospinal fluid (CSF) cell count or protein, or a reactive VDRL-CSF with or without clinical manifestations. The CSF leukocyte count usually is elevated (>5 WBCs/mm³) when neurosyphilis is present, and it also is a sensitive measure of the effectiveness of therapy. The VDRL-CSF is the standard serologic test for CSF; when reactive in the absence of substantial contamination of CSF with blood, it is considered diagnostic of neurosyphilis. However, the VDRL-CSF may be nonreactive when neurosyphilis is present. Some experts recommend performing an FTA-ABS test on CSF. The CSF FTA-ABS is less specific (i.e., yields more false-positive results) for neurosyphilis than the VDRL-CSF. However, the test is believed to be highly sensitive, and some experts believe that a negative CSF FTA-ABS test excludes neurosyphilis.
# Treatment
Parenteral penicillin G is the preferred drug for treatment of all stages of syphilis. The preparation(s) used (i.e., benzathine, aqueous procaine, or aqueous crystalline), the dosage, and the length of treatment depend on the stage and clinical manifestations of disease.
The efficacy of penicillin for the treatment of syphilis was well established through clinical experience before the value of randomized controlled clinical trials was recognized. Therefore, almost all the recommendations for the treatment of syphilis are based on expert opinion reinforced by case series, clinical trials, and 50 years of clinical experience.
Parenteral penicillin G is the only therapy with documented efficacy for neurosyphilis or for syphilis during pregnancy. Patients who report a penicillin allergy, including pregnant women with syphilis in any stage and patients with neurosyphilis, should be desensitized and treated with penicillin. Skin testing for penicillin allergy may be useful in some settings (see Management of Patients Who Have a History of Penicillin Allergy), although the minor determinants needed for complete penicillin skin testing are unavailable commercially.
The Jarisch-Herxheimer reaction is an acute febrile reaction, often accompanied by headache, myalgia, and other symptoms, that might occur within the first 24 hours after any therapy for syphilis; patients should be advised of this possible adverse reaction. The Jarisch-Herxheimer reaction often occurs among patients who have early syphilis. Antipyretics may be recommended, but no proven methods prevent this reaction. The Jarisch-Herxheimer reaction may induce early labor or cause fetal distress among pregnant women. This concern should not prevent or delay therapy (see Syphilis During Pregnancy).
# Management of Sex Partners
Sexual transmission of T. pallidum occurs only when mucocutaneous syphilitic lesions are present; such manifestations are uncommon after the first year of infection. However, persons exposed sexually to a patient who has syphilis in any stage should be evaluated clinically and serologically according to the following recommendations:
- Persons who were exposed within the 90 days preceding the diagnosis of primary, secondary, or early latent syphilis in a sex partner might be infected even if seronegative; therefore, such persons should be treated presumptively.
- Persons who were exposed >90 days before the diagnosis of primary, secondary, or early latent syphilis in a sex partner should be treated presumptively if serologic test results are not available immediately and the opportunity for follow-up is uncertain.
- For purposes of partner notification and presumptive treatment of exposed sex partners, patients with syphilis of unknown duration who have high nontreponemal serologic test titers (i.e., ≥1:32) may be considered as having early syphilis. However, serologic titers should not be used to differentiate early from late latent syphilis for the purpose of determining treatment (see section regarding treatment of latent syphilis).
- Long-term sex partners of patients who have late syphilis should be evaluated clinically and serologically for syphilis and treated on the basis of the findings of the evaluation.
The time periods before treatment used for identifying at-risk sex partners are a) 3 months plus duration of symptoms for primary syphilis, b) 6 months plus duration of symptoms for secondary syphilis, and c) 1 year for early latent syphilis.
# Primary and Secondary Syphilis
# Treatment
Parenteral penicillin G has been used effectively for four decades to achieve a local cure (i.e., healing of lesions and prevention of sexual transmission) and to prevent late sequelae. However, no adequately conducted comparative trials have been performed to guide the selection of an optimal penicillin regimen (i.e., the dose, duration, and preparation). Substantially fewer data are available concerning nonpenicillin regimens.
# Recommended Regimen for Adults
Patients who have primary or secondary syphilis should be treated with the following regimen:
Benzathine penicillin G 2.4 million units IM in a single dose.
# NOTE:
Recommendations for treating pregnant women and HIV-infected patients for syphilis are discussed in separate sections.
# Recommended Regimen for Children
After the newborn period, children in whom syphilis is diagnosed should have a CSF examination to detect asymptomatic neurosyphilis, and birth and maternal medical records should be reviewed to assess whether the child has congenital or acquired syphilis (see Congenital Syphilis). Children with acquired primary or secondary syphilis should be evaluated (including consultation with child-protection services) and treated by using the following pediatric regimen (see Sexual Assault or Abuse of Children).
Benzathine penicillin G 50,000 units/kg IM, up to the adult dose of 2.4 million units in a single dose.
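The pediatric dose scales with body weight but never exceeds the adult dose. The following is a minimal sketch of that calculation (the example weights are hypothetical, not from the guidelines):

```python
UNITS_PER_KG = 50_000
ADULT_DOSE_UNITS = 2_400_000  # 2.4 million units

def pediatric_benzathine_dose_units(weight_kg: float) -> int:
    """Single IM benzathine penicillin G dose in units, capped at the adult dose."""
    return min(round(UNITS_PER_KG * weight_kg), ADULT_DOSE_UNITS)

print(pediatric_benzathine_dose_units(30))  # 1,500,000 units for a 30-kg child
print(pediatric_benzathine_dose_units(60))  # 2,400,000 units (adult cap applies)
```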
# Other Management Considerations
All patients who have syphilis should be tested for HIV infection. In geographic areas in which the prevalence of HIV is high, patients who have primary syphilis should be retested for HIV after 3 months if the first HIV test result was negative. This recommendation will become particularly important if it can be demonstrated that intensive antiviral therapy administered soon after HIV seroconversion is beneficial.
Patients who have syphilis and who also have symptoms or signs suggesting neurologic disease (e.g., meningitis) or ophthalmic disease (e.g., uveitis) should be evaluated fully for neurosyphilis and syphilitic eye disease; this evaluation should include CSF analysis and ocular slit-lamp examination. Such patients should be treated appropriately according to the results of this evaluation.
Invasion of CSF by T. pallidum accompanied by CSF abnormalities is common among adults who have primary or secondary syphilis. However, neurosyphilis develops in only a few patients after treatment with the regimens described in this report. Therefore, unless clinical signs or symptoms of neurologic or ophthalmic involvement are present, lumbar puncture is not recommended for routine evaluation of patients who have primary or secondary syphilis.
# Follow-Up
Treatment failures can occur with any regimen. However, assessing response to treatment often is difficult, and no definitive criteria for cure or failure have been established. Serologic test titers may decline more slowly for patients who previously had syphilis. Patients should be reexamined clinically and serologically at both 6 months and 12 months; more frequent evaluation may be prudent if follow-up is uncertain.
Patients who have signs or symptoms that persist or recur or who have a sustained fourfold increase in nontreponemal test titer (i.e., in comparison with either the baseline titer or a subsequent result) probably failed treatment or were reinfected. These patients should be re-treated after reevaluation for HIV infection. Unless reinfection with T. pallidum is certain, a lumbar puncture also should be performed.
Failure of nontreponemal test titers to decline fourfold within 6 months after therapy for primary or secondary syphilis identifies persons at risk for treatment failure. Such persons should be reevaluated for HIV infection. Optimal management of such patients is unclear. At a minimum, these patients should have additional clinical and serologic follow-up. HIV-infected patients should be evaluated more frequently (i.e., at 3-month intervals instead of 6-month intervals). If additional follow-up cannot be ensured, re-treatment is recommended. Some experts recommend CSF examination in such situations.
When patients are re-treated, most experts recommend re-treatment with three weekly injections of benzathine penicillin G 2.4 million units IM, unless CSF examination indicates that neurosyphilis is present.
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
Nonpregnant penicillin-allergic patients who have primary or secondary syphilis should be treated with one of the following regimens. Close follow-up of such patients is essential.
# Recommended Regimens
Doxycycline 100 mg orally twice a day for 2 weeks, OR Tetracycline 500 mg orally four times a day for 2 weeks.
There is less clinical experience with doxycycline than with tetracycline, but compliance is likely to be better with doxycycline. Therapy for a patient who cannot tolerate either doxycycline or tetracycline should depend on whether the patient's compliance with the therapy regimen and with follow-up examinations can be ensured.
Pharmacologic and bacteriologic considerations suggest that ceftriaxone should be effective, but data concerning ceftriaxone are limited and clinical experience is insufficient to enable identification of late failures. The optimal dose and duration have not been established for ceftriaxone, but a suggested daily regimen of 1 g may be considered if treponemacidal levels in the blood can be maintained for 8-10 days. Single-dose ceftriaxone therapy is not effective for treating syphilis.
For nonpregnant patients whose compliance with therapy and follow-up can be ensured, an alternative regimen is erythromycin 500 mg orally four times a day for 2 weeks. However, erythromycin is less effective than the other recommended regimens.
Patients whose compliance with therapy or follow-up cannot be ensured should be desensitized and treated with penicillin. Skin testing for penicillin allergy may be useful in some circumstances in which the reagents and expertise to perform the test adequately are available (see Management of Patients Who Have a History of Penicillin Allergy).
# Pregnancy
Pregnant patients who are allergic to penicillin should be desensitized, if necessary, and treated with penicillin (see Management of Patients Who Have a History of Penicillin Allergy and Syphilis During Pregnancy).
# HIV Infection
Refer to Syphilis in HIV-Infected Persons.
# Latent Syphilis
Latent syphilis is defined as those periods after infection with T. pallidum when patients are seroreactive, but demonstrate no other evidence of disease. Patients who have latent syphilis and who acquired syphilis within the preceding year are classified as having early latent syphilis. Patients can be demonstrated as having early latent syphilis if, within the year preceding the evaluation, they had a) a documented seroconversion, b) unequivocal symptoms of primary or secondary syphilis, or c) a sex partner who had primary, secondary, or early latent syphilis. Almost all other patients have latent syphilis of unknown duration and should be managed as if they had late latent syphilis. Nontreponemal serologic titers usually are higher during early latent syphilis than late latent syphilis. However, early latent syphilis cannot be reliably distinguished from late latent syphilis solely on the basis of nontreponemal titers. Regardless of the level of the nontreponemal titers, patients in whom the illness does not meet the definition of early syphilis should be treated as if they have late latent infection. All sexually active women with reactive nontreponemal serologic tests should have a pelvic examination before syphilis staging is completed to evaluate for internal mucosal lesions. All patients who have syphilis should be tested for HIV infection.
# Treatment
Treatment of latent syphilis is intended to prevent occurrence or progression of late complications. Although clinical experience supports the effectiveness of penicillin in achieving these goals, limited evidence is available for guidance in choosing specific regimens. There is minimal evidence to support the use of nonpenicillin regimens.
# Recommended Regimens for Adults
The following regimens are recommended for nonallergic patients who have normal CSF examinations (if performed):
# Early Latent Syphilis:
Benzathine penicillin G 2.4 million units IM in a single dose.
# Late Latent Syphilis or Latent Syphilis of Unknown Duration:
Benzathine penicillin G 7.2 million units total, administered as three doses of 2.4 million units IM each at 1-week intervals.
# Recommended Regimens for Children
After the newborn period, children in whom syphilis is diagnosed should have a CSF examination to exclude neurosyphilis, and birth and maternal medical records should be reviewed to assess whether the child has congenital or acquired syphilis (see Congenital Syphilis). Older children with acquired latent syphilis should be evaluated as described for adults and treated using the following pediatric regimens (see Sexual Assault or Abuse of Children). These regimens are for nonallergic children who have acquired syphilis and whose results of the CSF examination were normal.
# Early Latent Syphilis:
Benzathine penicillin G 50,000 units/kg IM, up to the adult dose of 2.4 million units in a single dose.
# Late Latent Syphilis or Latent Syphilis of Unknown Duration:
Benzathine penicillin G 50,000 units/kg IM, up to the adult dose of 2.4 million units, administered as three doses at 1-week intervals (total 150,000 units/kg up to the adult total dose of 7.2 million units).
# Other Management Considerations
All patients who have latent syphilis should be evaluated clinically for evidence of tertiary disease (e.g., aortitis, neurosyphilis, gumma, and iritis). Patients who have syphilis and who demonstrate any of the following criteria should have a prompt CSF examination:
- Neurologic or ophthalmic signs or symptoms;
- Evidence of active tertiary syphilis (e.g., aortitis, gumma, and iritis);
- Treatment failure; and
- HIV infection with late latent syphilis or syphilis of unknown duration.
If dictated by circumstances and patient preferences, a CSF examination may be performed for patients who do not meet these criteria. If a CSF examination is performed and the results indicate abnormalities consistent with neurosyphilis, the patient should be treated for neurosyphilis (see Neurosyphilis).
# Follow-Up
Quantitative nontreponemal serologic tests should be repeated at 6, 12, and 24 months. Limited data are available to guide evaluation of the treatment response for patients who have latent syphilis. Patients should be evaluated for neurosyphilis and re-treated appropriately if a) titers increase fourfold, b) an initially high titer (≥1:32) fails to decline at least fourfold (i.e., two dilutions) within 12-24 months, or c) signs or symptoms attributable to syphilis develop in the patient.
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
Nonpregnant patients who have latent syphilis and who are allergic to penicillin should be treated with one of the following regimens.
# Recommended Regimens
Doxycycline 100 mg orally twice a day, OR Tetracycline 500 mg orally four times a day. Both drugs should be administered for 2 weeks if the duration of infection is known to have been <1 year; otherwise, they should be administered for 4 weeks.
# Pregnancy
Pregnant patients who are allergic to penicillin should be desensitized and treated with penicillin (see Management of Patients Who Have a History of Penicillin Allergy and Syphilis During Pregnancy).
# HIV Infection
Refer to Syphilis in HIV-Infected Persons.
# Tertiary Syphilis
Tertiary syphilis refers to gumma and cardiovascular syphilis, but not to neurosyphilis. Nonallergic patients without evidence of neurosyphilis should be treated with the following regimen.
# Recommended Regimen
Benzathine penicillin G 7.2 million units total, administered as three doses of 2.4 million units IM at 1-week intervals.
# Other Management Considerations
Patients who have symptomatic late syphilis should have a CSF examination before therapy is initiated. Some experts treat all patients who have cardiovascular syphilis with a neurosyphilis regimen. The complete management of patients who have cardiovascular or gummatous syphilis is beyond the scope of these guidelines. These patients should be managed in consultation with an expert.
# Follow-Up
Information is lacking with regard to follow-up of patients who have late syphilis. The clinical response depends partially on the nature of the lesions.
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
Patients allergic to penicillin should be treated according to the recommended regimens for late latent syphilis.
# Pregnancy
Pregnant patients who are allergic to penicillin should be desensitized, if necessary, and treated with penicillin (see Management of Patients Who Have a History of Penicillin Allergy and Syphilis During Pregnancy).
# HIV Infection
Refer to Syphilis in HIV-Infected Persons.
# Neurosyphilis
# Treatment
Central nervous system disease can occur during any stage of syphilis. A patient who has clinical evidence of neurologic involvement with syphilis (e.g., ophthalmic or auditory symptoms, cranial nerve palsies, and symptoms or signs of meningitis) should have a CSF examination.
Syphilitic uveitis or other ocular manifestations frequently are associated with neurosyphilis; patients with these symptoms should be treated according to the recommendations for neurosyphilis. A CSF examination should be performed for all such patients to identify those with abnormalities who should have follow-up CSF examinations to assess treatment response.
Patients who have neurosyphilis or syphilitic eye disease (e.g., uveitis, neuroretinitis, or optic neuritis) and who are not allergic to penicillin should be treated with the following regimen:
# Recommended Regimen
Aqueous crystalline penicillin G 18-24 million units a day, administered as 3-4 million units IV every 4 hours for 10-14 days.
If compliance with therapy can be ensured, patients may be treated with the following alternative regimen:
# Alternative Regimen
Procaine penicillin 2.4 million units IM a day, PLUS Probenecid 500 mg orally four times a day, both for 10-14 days.
The durations of the recommended and alternative regimens for neurosyphilis are shorter than that of the regimen used for late syphilis in the absence of neurosyphilis. Therefore, some experts administer benzathine penicillin, 2.4 million units IM, after completion of these neurosyphilis treatment regimens to provide a comparable total duration of therapy.
# Other Management Considerations
Other considerations in the management of patients who have neurosyphilis are as follows:
- All patients who have syphilis should be tested for HIV.
- Many experts recommend treating patients who have evidence of auditory disease caused by syphilis in the same manner as for neurosyphilis, regardless of the findings on CSF examination. Although systemic steroids are used frequently as adjunctive therapy for otologic syphilis, such drugs have not been proven beneficial.
# Follow-Up
If CSF pleocytosis was present initially, a CSF examination should be repeated every 6 months until the cell count is normal. Follow-up CSF examinations also can be used to evaluate changes in the VDRL-CSF or CSF protein after therapy; however, changes in these two parameters are slower, and persistent abnormalities are of less importance. If the cell count has not decreased after 6 months, or if the CSF is not entirely normal after 2 years, re-treatment should be considered.
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
Data have not been collected systematically for evaluation of therapeutic alternatives to penicillin for treatment of neurosyphilis. Patients who report being allergic to penicillin should either be desensitized to penicillin or be managed in consultation with an expert. In some situations, skin testing to confirm penicillin allergy may be useful (see Management of Patients Who Have a History of Penicillin Allergy).
# Pregnancy
Pregnant patients who are allergic to penicillin should be desensitized, if necessary, and treated with penicillin (see Syphilis During Pregnancy).
# HIV Infection
Refer to Syphilis in HIV-Infected Persons.
# Syphilis in HIV-Infected Persons
# Diagnostic Considerations
Unusual serologic responses have been observed among HIV-infected persons who have syphilis. Most reports involved serologic titers that were higher than expected, but false-negative serologic test results or delayed appearance of seroreactivity also have been reported. Nevertheless, both treponemal and nontreponemal serologic tests for syphilis can be interpreted in the usual manner for most patients who are coinfected with T. pallidum and HIV.
When clinical findings suggest that syphilis is present, but serologic tests are nonreactive or unclear, alternative tests (e.g., biopsy of a lesion, darkfield examination, or direct fluorescent antibody staining of lesion material) may be useful.
Neurosyphilis should be considered in the differential diagnosis of neurologic disease in HIV-infected persons.
# Treatment
In comparison with HIV-negative patients, HIV-infected patients who have early syphilis may be at increased risk for neurologic complications and may have higher rates of treatment failure with currently recommended regimens. The magnitude of these risks, although not defined precisely, is probably minimal. No treatment regimens for syphilis are demonstrably more effective in preventing neurosyphilis in HIV-infected patients than the syphilis regimens recommended for HIV-negative patients. Careful follow-up after therapy is essential.
# Primary and Secondary Syphilis in HIV-Infected Persons
# Treatment
Treatment with benzathine penicillin G, 2.4 million units IM, as for HIV-negative patients, is recommended. Some experts recommend additional treatments (e.g., three weekly doses of benzathine penicillin G as suggested for late syphilis) or other supplemental antibiotics in addition to benzathine penicillin G 2.4 million units IM.
# Other Management Considerations
CSF abnormalities often occur among both asymptomatic HIV-infected patients in the absence of syphilis and HIV-negative patients who have primary or secondary syphilis. Such abnormalities in HIV-infected patients who have primary or secondary syphilis are of unknown prognostic significance. Most HIV-infected patients respond appropriately to the currently recommended penicillin therapy; however, some experts recommend CSF examination before therapy and modification of treatment accordingly.
# Follow-Up
It is important that HIV-infected patients be evaluated clinically and serologically for treatment failure at 3, 6, 9, 12, and 24 months after therapy. Although of unproven benefit, some experts recommend a CSF examination after therapy (i.e., at 6 months).
HIV-infected patients who meet the criteria for treatment failure should be managed the same as HIV-negative patients (i.e., a CSF examination and re-treatment). CSF examination and re-treatment also should be strongly considered for patients whose nontreponemal test titer does not decrease fourfold within 6-12 months. Most experts would re-treat patients with 7.2 million units of benzathine penicillin G (administered as three weekly doses of 2.4 million units each) if CSF examinations are normal.
# Special Considerations
# Penicillin Allergy
Penicillin-allergic patients who have primary or secondary syphilis and HIV infection should be managed according to the recommendations for penicillin-allergic HIV-negative patients.
# Latent Syphilis in HIV-Infected Persons
# Diagnostic Considerations
HIV-infected patients who have early latent syphilis should be managed and treated according to the recommendations for HIV-negative patients who have primary and secondary syphilis.
HIV-infected patients who have either late latent syphilis or syphilis of unknown duration should have a CSF examination before treatment.
# Treatment
A patient with late latent syphilis or syphilis of unknown duration and a normal CSF examination can be treated with 7.2 million units of benzathine penicillin G (as three weekly doses of 2.4 million units each). Patients who have CSF consistent with neurosyphilis should be treated and managed as described for neurosyphilis (see Neurosyphilis).
# Follow-Up
Patients should be evaluated clinically and serologically at 6, 12, 18, and 24 months after therapy. If, at any time, clinical symptoms develop or nontreponemal titers rise fourfold, a repeat CSF examination should be performed and treatment administered accordingly. If between 12 and 24 months the nontreponemal titer fails to decline fourfold, the CSF examination should be repeated, and treatment administered accordingly.
# Special Considerations
# Penicillin Allergy
Penicillin regimens should be used to treat all stages of syphilis in HIV-infected patients. Skin testing to confirm penicillin allergy may be used (see Management of Patients Who Have a History of Penicillin Allergy). Patients may be desensitized, then treated with penicillin.
# Syphilis During Pregnancy
All women should be screened serologically for syphilis during the early stages of pregnancy. In populations in which utilization of prenatal care is not optimal, RPR-card test screening and treatment (i.e., if the RPR-card test is reactive) should be performed at the time a pregnancy is diagnosed. For communities and populations in which the prevalence of syphilis is high or for patients at high risk, serologic testing should be performed twice during the third trimester, at 28 weeks of gestation and at delivery. (Some states mandate screening at delivery for all women.) Any woman who delivers a stillborn infant after 20 weeks of gestation should be tested for syphilis. No infant should leave the hospital without the maternal serologic status having been determined at least once during pregnancy.
# Diagnostic Considerations
Seropositive pregnant women should be considered infected unless an adequate treatment history is documented clearly in the medical records and sequential serologic antibody titers have declined.
# Treatment
Penicillin is effective for preventing maternal transmission to the fetus and for treating established infection in the fetus. Evidence is insufficient to determine whether the specific, recommended penicillin regimens are optimal.
# Recommended Regimens
Treatment during pregnancy should be the penicillin regimen appropriate for the stage of syphilis.
# Other Management Considerations
Some experts recommend additional therapy in some settings. A second dose of benzathine penicillin 2.4 million units IM may be administered 1 week after the initial dose for women who have primary, secondary, or early latent syphilis. Ultrasonographic signs of fetal syphilis (i.e., hepatomegaly and hydrops) indicate a greater risk for fetal treatment failure; such cases should be managed in consultation with obstetric specialists.
Women treated for syphilis during the second half of pregnancy are at risk for premature labor and/or fetal distress if the treatment precipitates the Jarisch-Herxheimer reaction. These women should be advised to seek obstetric attention after treatment if they notice any contractions or decrease in fetal movements. Stillbirth is a rare complication of treatment, but concern for this complication should not delay necessary treatment. All patients who have syphilis should be offered testing for HIV infection.
# Follow-Up
Coordinated prenatal care and treatment follow-up are important, and syphilis case management may help facilitate prenatal enrollment. Serologic titers should be repeated in the third trimester and at delivery. Serologic titers may be checked monthly in women at high risk for reinfection or in geographic areas in which the prevalence of syphilis is high. The clinical and antibody response should be appropriate for the stage of disease. Most women will deliver before their serologic response to treatment can be assessed definitively.
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
There are no proven alternatives to penicillin for treatment of syphilis during pregnancy. Pregnant women who have a history of penicillin allergy should be desensitized and treated with penicillin. Skin testing may be helpful (see Management of Patients Who Have a History of Penicillin Allergy).
Tetracycline and doxycycline usually are not used during pregnancy. Erythromycin should not be used, because it does not reliably cure an infected fetus. Data are insufficient to recommend azithromycin or ceftriaxone.
# HIV Infection
Refer to Syphilis in HIV-Infected Persons.
# CONGENITAL SYPHILIS
Effective prevention and detection of congenital syphilis depend on the identification of syphilis in pregnant women and, therefore, on the routine serologic screening of pregnant women at the time of the first prenatal visit. Serologic testing and a sexual history also should be obtained at 28 weeks of gestation and at delivery in communities and populations in which the risk for congenital syphilis is high. Moreover, as part of the management of pregnant women who have syphilis, information concerning treatment of sex partners should be obtained in order to assess possible maternal reinfection. All pregnant women who have syphilis should be tested for HIV infection.
Routine screening of newborn sera or umbilical cord blood is not recommended. Serologic testing of the mother's serum is preferred to testing infant serum, because the serologic tests performed on infant serum can be nonreactive if the mother's serologic test result is of low titer or if the mother was infected late in pregnancy. No infant should leave the hospital without the maternal serologic status having been documented at least once during pregnancy.
# Evaluation and Treatment of Infants During the First Month of Life
# Diagnostic Considerations
The diagnosis of congenital syphilis is complicated by the transplacental transfer of maternal nontreponemal and treponemal IgG antibodies to the fetus. This transfer of antibodies makes the interpretation of reactive serologic tests for syphilis in infants difficult. Treatment decisions often must be made based on a) identification of syphilis in the mother; b) adequacy of maternal treatment; c) presence of clinical, laboratory, or radiographic evidence of syphilis in the infant; and d) comparison of the infant's nontreponemal serologic test results with those of the mother.
# Who Should Be Evaluated
All infants born to seroreactive mothers should be evaluated with a quantitative nontreponemal serologic test (RPR or VDRL) performed on infant serum, because umbilical cord blood might be contaminated with maternal blood and might yield a false-positive result. A treponemal test (i.e., MHA-TP or FTA-ABS) of a newborn's serum is not necessary.
# Evaluation
All infants born to women who have reactive serologic tests for syphilis should be examined thoroughly for evidence of congenital syphilis (e.g., nonimmune hydrops, jaundice, hepatosplenomegaly, rhinitis, skin rash, and/or pseudoparalysis of an extremity). Pathologic examination of the placenta or umbilical cord using specific fluorescent antitreponemal antibody staining is suggested. Darkfield microscopic examination or direct fluorescent antibody staining of suspicious lesions or body fluids (e.g., nasal discharge) also should be performed.
Further evaluation of the infant is dependent on a) whether any abnormalities are present on physical examination, b) maternal treatment history, c) stage of infection at the time of treatment, and d) comparison of maternal (at delivery) and infant nontreponemal titers utilizing the same test and preferably the same laboratory.
# Treatment
Infants should be treated for presumed congenital syphilis if they were born to mothers who met any of the following criteria:
- Had untreated syphilis at delivery;*
- Had serologic evidence of relapse or reinfection after treatment (i.e., a fourfold or greater increase in nontreponemal antibody titer);
- Was treated with erythromycin or other nonpenicillin regimen for syphilis during pregnancy; †
- Was treated for syphilis ≤1 month before delivery;
- Did not have a well-documented history of treatment for syphilis;
- Was treated for early syphilis during pregnancy with the appropriate penicillin regimen, but nontreponemal antibody titers did not decrease at least fourfold; or
- Was treated appropriately before pregnancy but had insufficient serologic follow-up to ensure an adequate treatment response and lack of current infection (i.e., an appropriate response includes a] at least a fourfold decrease in nontreponemal antibody titers for patients treated for early syphilis and b] stable or declining nontreponemal titers of ≤1:4 for other patients).
Regardless of a maternal history of infection with T. pallidum or treatment for syphilis, the evaluation should include the following tests if the infant has a) an abnormal physical examination that is consistent with congenital syphilis, b) a serum quantitative nontreponemal serologic titer that is fourfold greater than the mother's titer, or c) a positive darkfield or fluorescent antibody test of body fluid(s):
- CSF analysis for VDRL, cell count, and protein;
- Complete blood count (CBC) and differential CBC and platelet count;
- Other tests as clinically indicated (e.g., long-bone radiographs, chest radiograph, liver-function tests, cranial ultrasound, ophthalmologic examination, and auditory brainstem response).
# Recommended Regimens
Aqueous crystalline penicillin G 100,000-150,000 units/kg/day, administered as 50,000 units/kg/dose IV every 12 hours during the first 7 days of life and every 8 hours thereafter, for a total of 10 days; OR Procaine penicillin G 50,000 units/kg/dose IM a day in a single dose for 10 days.

* A woman treated with a regimen other than those recommended in these guidelines for treatment of syphilis should be considered untreated.
† The absence of a fourfold greater titer for an infant does not exclude congenital syphilis.
If >1 day of therapy is missed, the entire course should be restarted. Data are insufficient regarding the use of other antimicrobial agents (e.g., ampicillin). When possible, a full 10-day course of penicillin is preferred. The use of agents other than penicillin requires close serologic follow-up to assess adequacy of therapy.
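The 100,000-150,000 units/kg/day range follows from the fixed 50,000 units/kg dose and the interval change after the first week of life. The following is a minimal sketch of that arithmetic (illustrative only; the function name is an assumption):

```python
DOSE_UNITS_PER_KG = 50_000

def neonatal_daily_total_units_per_kg(day_of_life: int) -> int:
    """Daily total (units/kg) of aqueous crystalline penicillin G for a neonate."""
    doses_per_day = 2 if day_of_life <= 7 else 3  # every 12 h, then every 8 h
    return DOSE_UNITS_PER_KG * doses_per_day

print(neonatal_daily_total_units_per_kg(3))  # 100,000 units/kg/day (first 7 days)
print(neonatal_daily_total_units_per_kg(9))  # 150,000 units/kg/day (thereafter)
```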
In all other situations, the maternal history of infection with T. pallidum and treatment for syphilis must be considered when evaluating and treating the infant. For infants who have a normal physical examination and a serum quantitative nontreponemal serologic titer the same or less than fourfold the maternal titer, the evaluation depends on the maternal treatment history and stage of infection.
- The infant should receive the following treatment if a) the maternal treatment was not given, was undocumented, was a nonpenicillin regimen, or was administered ≤4 weeks before delivery; b) the adequacy of maternal treatment for early syphilis cannot be evaluated because the nontreponemal serologic titer has not decreased fourfold; or c) relapse or reinfection is suspected because of a fourfold increase in maternal nontreponemal serologic titer.
a. Aqueous penicillin G or procaine penicillin G for 10 days. Some experts prefer this therapy if the mother has untreated early syphilis at delivery. A complete evaluation is unnecessary if 10 days of parenteral therapy is given. However, such evaluation may be useful; a lumbar puncture may document CSF abnormalities that would prompt close follow-up.* Other tests (e.g., CBC and platelet count and bone radiographs) may be performed to further support a diagnosis of congenital syphilis; or
b. Benzathine penicillin G 50,000 units/kg (single dose IM) if the infant's evaluation (i.e., CSF examination, long-bone radiographs, and CBC with platelets) is normal and follow-up is certain. If any part of the infant's evaluation is abnormal or not done, or the CSF analysis is uninterpretable secondary to contamination with blood, then a 10-day course of penicillin (see preceding paragraph) is required.†
- Evaluation is unnecessary if the maternal treatment a) was during pregnancy, appropriate for the stage of infection, and >4 weeks before delivery; b) was for early syphilis and the nontreponemal serologic titers decreased fourfold after appropriate therapy; or c) was for late latent infection, the nontreponemal titers remained stable and low, and there is no evidence of maternal reinfection or relapse. A single dose of benzathine penicillin G 50,000 units/kg IM should be administered. (Note: Some experts would not treat the infant but would provide close serologic follow-up.) Furthermore, in these situations, if the infant's nontreponemal test is nonreactive, no treatment is necessary.
- Evaluation and treatment are unnecessary if the maternal treatment was before pregnancy, after which the mother was evaluated multiple times, and the nontreponemal serologic titer remained low and stable before and during pregnancy and at delivery (VDRL ≤1:2; RPR ≤1:4). Some experts would treat with benzathine penicillin G 50,000 units/kg as a single IM injection, particularly if follow-up is uncertain.

* CSF test results obtained during the neonatal period can be difficult to interpret; normal values differ by gestational age and are higher in preterm infants. Values as high as 25 white blood cells (WBCs)/mm³ and/or protein of 150 mg/dL might occur among normal neonates; some experts, however, recommend that lower values (i.e., 5 WBCs/mm³ and protein of 40 mg/dL) be considered the upper limits of normal. Other causes of elevated values also should be considered when an infant is being evaluated for congenital syphilis.
† If the infant's nontreponemal test is nonreactive and the likelihood of the infant being infected is low, some experts recommend no evaluation but treatment of the infant with a single IM dose of benzathine penicillin G 50,000 units/kg for possible incubating syphilis, after which the infant should have close serologic follow-up.
# Evaluation and Treatment of Older Infants and Children Who Have Congenital Syphilis
Children who are identified as having reactive serologic tests for syphilis after the neonatal period (i.e., at >1 month of age) should have maternal serology and records reviewed to assess whether the child has congenital or acquired syphilis (for acquired syphilis, see Primary and Secondary Syphilis and Latent Syphilis). If the child possibly has congenital syphilis, the child should be evaluated fully (i.e., a CSF examination for cell count, protein, and VDRL; an eye examination; and other tests such as long-bone radiographs, CBC, platelet count, and auditory brainstem response as indicated clinically). Any child who possibly has congenital syphilis or who has neurologic involvement should be treated with aqueous crystalline penicillin G, 200,000-300,000 units/kg/day IV (administered as 50,000 units/kg every 4-6 hours) for 10 days.
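Here, too, the 200,000-300,000 units/kg/day range follows from the fixed 50,000 units/kg dose and the chosen interval. A minimal sketch (illustrative only):

```python
DOSE_UNITS_PER_KG = 50_000

for interval_hours in (6, 4):  # the regimen allows dosing every 4-6 hours
    doses_per_day = 24 // interval_hours
    daily_total = DOSE_UNITS_PER_KG * doses_per_day
    print(f"every {interval_hours} h -> {daily_total:,} units/kg/day")
# every 6 h -> 200,000 units/kg/day
# every 4 h -> 300,000 units/kg/day
```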
# Follow-Up
All seroreactive infants (or an infant whose mother was seroreactive at delivery) should receive careful follow-up examinations and serologic testing (i.e., a nontreponemal test) every 2-3 months until the test becomes nonreactive or the titer has decreased fourfold. Nontreponemal antibody titers should decline by 3 months of age and should be nonreactive by 6 months of age if the infant was not infected (i.e., if the reactive test result was caused by passive transfer of maternal IgG antibody) or was infected but adequately treated. The serologic response after therapy may be slower for infants treated after the neonatal period. If these titers are stable or increasing after 6-12 months of age, the child should be evaluated, including a CSF examination, and treated with a 10-day course of parenteral penicillin G.
Treponemal tests should not be used to evaluate treatment response because the results for an infected child can remain positive despite effective therapy. Passively transferred maternal treponemal antibodies could be present in an infant until age 15 months. A reactive treponemal test after age 18 months is diagnostic of congenital syphilis. If the nontreponemal test is nonreactive at this time, no further evaluation or treatment is necessary. If the nontreponemal test is reactive at age 18 months, the infant should be fully (re)evaluated and treated for congenital syphilis.
Infants whose initial CSF evaluation is abnormal should undergo a repeat lumbar puncture approximately every 6 months until the results are normal. A reactive CSF VDRL test or abnormal CSF indices that cannot be attributed to other ongoing illness requires re-treatment for possible neurosyphilis.
Follow-up of children treated for congenital syphilis after the newborn period should be the same as that prescribed for congenital syphilis among neonates.
# Special Considerations
# Penicillin Allergy
Infants and children who require treatment for syphilis but who have a history of penicillin allergy or develop an allergic reaction presumed secondary to penicillin should be desensitized, if necessary, and treated with penicillin. Skin testing may be helpful in some patients and settings (see Management of Patients Who Have a History of Penicillin Allergy). Data are insufficient regarding the use of other antimicrobial agents (e.g., ceftriaxone); if a nonpenicillin agent is used, close serologic and CSF follow-up is indicated.
# HIV Infection
Data are insufficient regarding whether infants who have congenital syphilis and whose mothers are coinfected with HIV require different evaluation, therapy, or follow-up for syphilis than is recommended for all infants.
# MANAGEMENT OF PATIENTS WHO HAVE A HISTORY OF PENICILLIN ALLERGY
No proven alternatives to penicillin are available for treating neurosyphilis, congenital syphilis, or syphilis in pregnant women. Penicillin also is recommended for use, whenever possible, in HIV-infected patients. Of the adult U.S. population, 3%-10% have experienced urticaria, angioedema, or anaphylaxis (i.e., upper airway obstruction, bronchospasm, or hypotension) after penicillin therapy. Readministration of penicillin to these patients can cause severe, immediate reactions. Because anaphylactic reactions to penicillin can be fatal, every effort should be made to avoid administering penicillin to penicillin-allergic patients, unless the anaphylactic sensitivity has been removed by acute desensitization.
An estimated 10% of persons who report a history of severe allergic reactions to penicillin are still allergic. With the passage of time after an allergic reaction to penicillin, most persons who have had a severe reaction stop expressing penicillin-specific IgE. These persons can be treated safely with penicillin. The results of many investigations indicate that skin testing with the major and minor determinants can reliably identify persons at high risk for penicillin reactions. Although these reagents are easily generated and have been available in academic centers for >30 years, only benzylpenicilloyl poly-L-lysine (Pre-Pen, the major determinant) and penicillin G are available commercially. Experts estimate that testing with only the major determinant and penicillin G identifies 90%-97% of the currently allergic patients. However, because skin testing without the minor determinants would still miss 3%-10% of allergic patients, and serious or fatal reactions can occur among these minor-determinant-positive patients, experts suggest caution when the full battery of skin-test reagents is not available (Table 1).
# Recommendations
If the full battery of skin-test reagents is available, including the major and minor determinants (see Penicillin Allergy Skin Testing), patients who report a history of penicillin reaction and are skin-test negative can receive conventional penicillin therapy. Skin-test-positive patients should be desensitized.
If the full battery of skin-test reagents, including the minor determinants, is not available, the patient should be skin tested using benzylpenicilloyl poly-L-lysine (i.e., the major determinant, Pre-Pen) and penicillin G. Patients who have positive test results should be desensitized. Some experts believe that persons who have negative test results should be regarded as probably allergic and should be desensitized. Others suggest that those with negative skin-test results can be test-dosed gradually with oral penicillin in a monitored setting in which treatment for anaphylactic reaction is possible.
# Penicillin Allergy Skin Testing
Patients at high risk for anaphylaxis (i.e., those who have a history of penicillin-related anaphylaxis, asthma, or other diseases that would make anaphylaxis more dangerous or who are being treated with beta-adrenergic blocking agents) should be tested with 100-fold dilutions of the full-strength skin-test reagents before being tested with full-strength reagents. In these situations, patients should be tested in a monitored setting in which treatment for an anaphylactic reaction is available. If possible, the patient should not have taken antihistamines recently (e.g., chlorpheniramine maleate or terfenadine during the preceding 24 hours, diphenhydramine HCl or hydroxyzine during the preceding 4 days, or astemizole during the preceding 3 weeks).
# Reagents (Adapted from Beall*)
# Minor Determinant Precursors †
- Benzylpenicillin G (10^-2 M, 3.3 mg/mL, 6,000 units/mL),
- Benzylpenicilloate (10^-2 M, 3.3 mg/mL),
- Benzylpenilloate (or penicilloyl propylamine) (10^-2 M, 3.3 mg/mL).
# Positive Control
- Commercial histamine for epicutaneous skin testing (1 mg/mL).
# Negative Control
- Diluent used to dissolve other reagents, usually phenol saline.
# Procedures
Dilute the antigens a) 100-fold for preliminary testing if the patient has had a life-threatening reaction to penicillin or b) 10-fold if the patient has had another type of immediate, generalized reaction to penicillin within the preceding year.
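A 100-fold dilution of the full-strength benzylpenicillin G reagent (10^-2 M) therefore yields 10^-4 M, and a 10-fold dilution yields 10^-3 M. A minimal sketch of this arithmetic (illustrative only; the function name is ours):

```python
# Dilution arithmetic for the reagents above; full-strength benzylpenicillin
# G is 10**-2 M per the reagent list.

def diluted_concentration(full_strength_molar: float, fold: int) -> float:
    """Concentration after an n-fold dilution of a full-strength reagent."""
    return full_strength_molar / fold

# 100-fold dilution (history of a life-threatening reaction): 1e-4 M
print(diluted_concentration(1e-2, 100))
# 10-fold dilution (other immediate, generalized reaction within a year): 1e-3 M
print(diluted_concentration(1e-2, 10))
```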
# Epicutaneous (prick) tests.
Duplicate drops of skin-test reagent are placed on the volar surface of the forearm. The underlying epidermis is pierced with a 26-gauge needle without drawing blood.
An epicutaneous test is positive if the average wheal diameter after 15 minutes is 4 mm larger than that of negative controls; otherwise, the test is negative. The histamine controls should be positive to ensure that results are not falsely negative because of the effect of antihistaminic drugs.
*Reprinted with permission from G.N. Beall in Annals of Internal Medicine (25).
† Aged penicillin is not an adequate source of minor determinants. Penicillin G should be freshly prepared or should come from a fresh-frozen source.
Intradermal tests. If epicutaneous tests are negative, duplicate 0.02-mL intradermal injections of negative control and antigen solutions are made into the volar surface of the forearm using a 26- or 27-gauge needle on a syringe. The crossed diameters of the wheals induced by the injections should be recorded.
An intradermal test is positive if the average wheal diameter 15 minutes after injection is ≥2 mm larger than the initial wheal size and also is ≥2 mm larger than the negative controls. Otherwise, the tests are negative.
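The positivity rules for both tests reduce to simple comparisons of wheal diameters. A minimal sketch follows (the function names are ours, and the epicutaneous rule reads "4 mm larger" as a minimum difference; measurement and clinical interpretation remain tasks for trained personnel, not software):

```python
# Hedged sketch of the positivity rules stated above.

def epicutaneous_positive(avg_wheal_mm: float, neg_control_mm: float) -> bool:
    """Positive if the average wheal at 15 minutes is at least 4 mm larger
    than that of the negative control."""
    return avg_wheal_mm - neg_control_mm >= 4

def intradermal_positive(avg_wheal_mm: float, initial_wheal_mm: float,
                         neg_control_mm: float) -> bool:
    """Positive if the average wheal at 15 minutes is >=2 mm larger than
    both the initial wheal and the negative control."""
    return (avg_wheal_mm - initial_wheal_mm >= 2
            and avg_wheal_mm - neg_control_mm >= 2)

print(epicutaneous_positive(8, 3))    # True: 5 mm larger than the control
print(intradermal_positive(7, 4, 5))  # True: >=2 mm over both references
```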
# Desensitization
Patients who have a positive skin test to one of the penicillin determinants can be desensitized. This is a straightforward, relatively safe procedure that can be done orally or IV. Although the two approaches have not been compared, oral desensitization is regarded as safer to use and easier to perform. Patients should be desensitized in a hospital setting because serious IgE-mediated allergic reactions, although unlikely, can occur. Desensitization usually can be completed in approximately 4 hours, after which the first dose of penicillin is given (Table 1). STD programs should have a referral center where patients who have positive skin test results can be desensitized. After desensitization, patients must be maintained on penicillin continuously for the duration of the course of therapy.
# DISEASES CHARACTERIZED BY URETHRITIS AND CERVICITIS
# Management of Male Patients Who Have Urethritis
Urethritis, or inflammation of the urethra, is caused by an infection characterized by the discharge of mucopurulent or purulent material and by burning during urination. Asymptomatic infections are common. The only bacterial pathogens of proven clinical importance in men who have urethritis are N. gonorrhoeae and C. trachomatis. Testing to determine the specific disease is recommended because both of these infections are reportable to state health departments, and a specific diagnosis may improve compliance and partner notification. If diagnostic tools (e.g., a Gram stain and microscope) are unavailable, patients should be treated for both infections. The extra expense of treating a person who has nongonococcal urethritis (NGU) for both infections also should encourage the health-care provider to make a specific diagnosis. New nucleic acid amplification tests enable detection of N. gonorrhoeae and C. trachomatis on first-void urine; in some settings, these tests are more sensitive than traditional culture techniques.
# Etiology
NGU is diagnosed if Gram-negative intracellular organisms cannot be identified on Gram stains. C. trachomatis is the most frequent cause (i.e., in 23%-55% of cases); however, the prevalence differs by age group, with lower prevalence among older men. The proportion of NGU cases caused by chlamydia has been declining gradually. Complications of NGU among men infected with C. trachomatis include epididymitis and Reiter's syndrome. Documentation of chlamydia infection is important because partner referral for evaluation and treatment would be indicated.
The etiology of most cases of nonchlamydial NGU is unknown. Ureaplasma urealyticum and possibly Mycoplasma genitalium are implicated in as many as one third of cases. Specific diagnostic tests for these organisms are not indicated.
Trichomonas vaginalis and HSV sometimes cause NGU. Diagnostic and treatment procedures for these organisms are reserved for situations in which NGU is nonresponsive to therapy.
# Confirmed Urethritis
Clinicians should document that urethritis is present. Urethritis can be documented by the presence of any of the following signs:
a. Mucopurulent or purulent discharge.
b. Gram stain of urethral secretions demonstrating ≥5 WBCs per oil immersion field. The Gram stain is the preferred rapid diagnostic test for evaluating urethritis. It is highly sensitive and specific for documenting both urethritis and the presence or absence of gonococcal infection. Gonococcal infection is established by documenting the presence of WBCs containing intracellular Gram-negative diplococci.
c. Positive leukocyte esterase test on first-void urine, or microscopic examination of first-void urine demonstrating ≥10 WBCs per high power field.
If none of these criteria is present, then treatment should be deferred, and the patient should be tested for N. gonorrhoeae and C. trachomatis and followed closely in the event of a positive test result. If the results demonstrate infection with either N. gonorrhoeae or C. trachomatis, the appropriate treatment should be given and sex partners referred for evaluation and treatment.
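Because any one of the three signs suffices to document urethritis, the decision rule can be stated compactly. A minimal sketch (the function and parameter names are ours; the cutoffs are those listed above):

```python
# Sketch of the "any of the following signs" rule for documenting urethritis.

def urethritis_documented(discharge_present: bool,
                          gram_stain_wbc_per_oif: int,
                          urine_wbc_per_hpf: int,
                          leukocyte_esterase_positive: bool) -> bool:
    """Urethritis is documented if any criterion is met: visible
    mucopurulent or purulent discharge; >=5 WBCs per oil immersion field on
    urethral Gram stain; or, on first-void urine, a positive leukocyte
    esterase test or >=10 WBCs per high power field."""
    return (discharge_present
            or gram_stain_wbc_per_oif >= 5
            or leukocyte_esterase_positive
            or urine_wbc_per_hpf >= 10)

# No discharge and a negative Gram stain, but 12 WBCs per high power field
# on first-void urine -> urethritis is documented.
print(urethritis_documented(False, 3, 12, False))  # True
```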
Empiric treatment of symptoms without documentation of urethritis is recommended only for patients at high risk for infection who are unlikely to return for a follow-up evaluation (e.g., adolescents who have multiple partners). Such patients should be treated for gonorrhea and chlamydia. Partners of patients treated empirically should be referred for evaluation and treatment.
# Management of Patients Who Have Nongonococcal Urethritis
# Diagnosis
All patients who have urethritis should be evaluated for the presence of gonococcal and chlamydial infection. Testing for chlamydia is strongly recommended because of the increased utility and availability of highly sensitive and specific testing methods and because a specific diagnosis might improve compliance and partner notification.
# Treatment
Treatment should be initiated as soon as possible after diagnosis. Single-dose regimens have the important advantage of improved compliance and of directly observed therapy. If multiple-dose regimens are used, the medication should be provided in the clinic or health-care provider's office. Treatment with the recommended regimen can result in alleviation of symptoms and microbiologic cure of infection.
# Recommended Regimens
Azithromycin 1 g orally in a single dose, OR Doxycycline 100 mg orally twice a day for 7 days.
# Alternative Regimens
Erythromycin base 500 mg orally four times a day for 7 days, OR Erythromycin ethylsuccinate 800 mg orally four times a day for 7 days, OR Ofloxacin 300 mg orally twice a day for 7 days.
If only erythromycin can be used and a patient cannot tolerate high-dose erythromycin schedules, one of the following regimens can be used:
Erythromycin base 250 mg orally four times a day for 14 days, OR Erythromycin ethylsuccinate 400 mg orally four times a day for 14 days.
# Follow-Up for Patients Who Have Urethritis
Patients should be instructed to return for evaluation if symptoms persist or recur after completion of therapy. Symptoms alone, without documentation of signs or laboratory evidence of urethral inflammation, are not a sufficient basis for retreatment. Patients should be instructed to abstain from sexual intercourse until therapy is completed.
# Partner Referral
Patients should refer for evaluation and treatment all sex partners within the preceding 60 days. A specific diagnosis may facilitate partner referral; therefore, testing for gonorrhea and chlamydia is encouraged.
# Recurrent and Persistent Urethritis
Objective signs of urethritis should be present before initiation of antimicrobial therapy. Effective regimens have not been identified for treating patients who have persistent symptoms or frequent recurrences after treatment. Patients who have persistent or recurrent urethritis should be re-treated with the initial regimen if they did not comply with the treatment regimen or if they were reexposed to an untreated sex partner. Otherwise, a wet mount examination and culture of an intraurethral swab specimen for T. vaginalis should be performed. Urologic examinations usually do not reveal a specific etiology. If the patient was compliant with the initial regimen and reexposure can be excluded, the following regimen is recommended:
# Recommended Treatment for Recurrent/Persistent Urethritis
Metronidazole 2 g orally in a single dose,
# PLUS
Erythromycin base 500 mg orally four times a day for 7 days, OR Erythromycin ethylsuccinate 800 mg orally four times a day for 7 days.
# Special Considerations
# HIV Infection
Gonococcal urethritis, chlamydial urethritis, and nongonococcal, nonchlamydial urethritis may facilitate HIV transmission. Patients who have NGU and also are infected with HIV should receive the same treatment regimen as those who are HIV-negative.
# Management of Patients Who Have Mucopurulent Cervicitis (MPC)
MPC is characterized by a purulent or mucopurulent endocervical exudate visible in the endocervical canal or in an endocervical swab specimen. Some experts also make the diagnosis on the basis of easily induced cervical bleeding. Although some experts consider an increased number of polymorphonuclear leukocytes on endocervical Gram stain as being useful in the diagnosis of MPC, this criterion has not been standardized, has a low positive predictive value (PPV), and is not available in some settings. MPC often is asymptomatic, but some women have an abnormal vaginal discharge and vaginal bleeding (e.g., after sexual intercourse). MPC can be caused by C. trachomatis or N. gonorrhoeae; however, in most cases neither organism can be isolated. MPC can persist despite repeated courses of antimicrobial therapy. Because relapse or reinfection with C. trachomatis or N. gonorrhoeae usually does not explain persistent cases of MPC, other nonmicrobiologic determinants (e.g., inflammation in an ectropion) could be involved.
Patients who have MPC should be tested for C. trachomatis and for N. gonorrhoeae by using the most sensitive and specific test for the population served. However, MPC is not a sensitive predictor of infection with these organisms, because most women who have C. trachomatis or N. gonorrhoeae do not have MPC.
# Treatment
The results of sensitive tests for C. trachomatis or N. gonorrhoeae (e.g., culture or nucleic acid amplification tests) should determine the need for treatment, unless the likelihood of infection with either organism is high or the patient is unlikely to return for treatment. Empiric treatment should be considered for a patient who has a suspected case of gonorrhea and/or chlamydia if a) the prevalence of these diseases differs substantially (i.e., >15%) between clinics in the geographic area and b) the patient might be difficult to locate for treatment. After the possibilities of relapse and reinfection have been excluded, management of persistent MPC is unclear. For such cases, additional antimicrobial therapy may be of little benefit.
# Follow-Up
Follow-up should be as recommended for the infections for which the woman is being treated. If symptoms persist, women should be instructed to return for reevaluation and to abstain from sexual intercourse even if they have completed the prescribed therapy.
# Management of Sex Partners
Management of sex partners of women treated for MPC should be appropriate for the identified or suspected STD. Partners should be notified, examined, and treated for the STD identified or suspected in the index patient.
Patients should be instructed to abstain from sexual intercourse until they and their sex partners are cured. Because a microbiologic test of cure usually is not recommended, patients should abstain from sexual intercourse until therapy is completed (i.e., 7 days after a single-dose regimen or after completion of a 7-day regimen).
# Special Considerations
# HIV Infection
Patients who have MPC and also are infected with HIV should receive the same treatment regimen as those who are HIV-negative.
# Chlamydial Infection
In the United States, chlamydial genital infection occurs frequently among sexually active adolescents and young adults. Asymptomatic infection is common among both men and women. Screening sexually active adolescents for chlamydial infection should be routine during annual examinations, even if symptoms are not present. Screening women aged 20-24 years also is suggested, particularly for those who have new or multiple sex partners and who do not consistently use barrier contraceptives.
# Chlamydial Infection in Adolescents and Adults
Several important sequelae can result from C. trachomatis infection in women; the most serious of these include PID, ectopic pregnancy, and infertility. Some women who have apparently uncomplicated cervical infection already have subclinical upper reproductive tract infection. A recent investigation of patients in a health maintenance organization demonstrated that screening and treatment of cervical infection can reduce the likelihood of PID.
# Treatment
Treatment of infected patients prevents transmission to sex partners; and, for infected pregnant women, treatment might prevent transmission of C. trachomatis to infants during birth. Treatment of sex partners helps to prevent reinfection of the index patient and infection of other partners.
Coinfection with C. trachomatis often occurs among patients who have gonococcal infection; therefore, presumptive treatment of such patients for chlamydia is appropriate (see Gonococcal Infection, Dual Therapy for Gonococcal and Chlamydial Infections). The following recommended treatment regimens and the alternative regimens cure infection and usually relieve symptoms.
# Recommended Regimens
Azithromycin 1 g orally in a single dose, OR Doxycycline 100 mg orally twice a day for 7 days.
# Alternative Regimens
Erythromycin base 500 mg orally four times a day for 7 days, OR Erythromycin ethylsuccinate 800 mg orally four times a day for 7 days, OR Ofloxacin 300 mg orally twice a day for 7 days.
The results of clinical trials indicate that azithromycin and doxycycline are equally efficacious. These investigations were conducted primarily in populations in which follow-up was encouraged and adherence to a 7-day regimen was good. Azithromycin should always be available to health-care providers to treat at least those patients for whom compliance is in question.
In populations with erratic health-care-seeking behavior, poor compliance with treatment, or minimal follow-up, azithromycin may be more cost-effective because it provides single-dose, directly observed therapy. Doxycycline costs less than azithromycin, and it has been used extensively for a longer period. Erythromycin is less efficacious than either azithromycin or doxycycline, and gastrointestinal side effects frequently discourage patients from complying with this regimen. Ofloxacin is similar in efficacy to doxycycline and azithromycin, but it is more expensive to use and offers no advantage with regard to the dosage regimen. Other quinolones either are not reliably effective against chlamydial infection or have not been adequately evaluated.
To maximize compliance with recommended therapies, medications for chlamydial infections should be dispensed on site, and the first dose should be directly observed. To minimize further transmission of infection, patients treated for chlamydia should be instructed to abstain from sexual intercourse for 7 days after single-dose therapy or until completion of a 7-day regimen. Patients also should be instructed to abstain from sexual intercourse until all of their sex partners are cured to minimize the risk for reinfection.
# Follow-Up
Patients do not need to be retested for chlamydia after completing treatment with doxycycline or azithromycin unless symptoms persist or reinfection is suspected, because these therapies are highly efficacious. A test of cure may be considered 3 weeks after completion of treatment with erythromycin. The validity of chlamydial culture testing at <3 weeks after completion of therapy to identify patients who did not respond to therapy has not been established. False-negative results can occur because of small numbers of chlamydial organisms. In addition, nonculture tests conducted at <3 weeks after completion of therapy for patients who were treated successfully could be false-positive because of continued excretion of dead organisms.
Some studies have demonstrated high rates of infection among women retested several months after treatment, presumably because of reinfection. In some populations (e.g., adolescents), rescreening women several months after treatment might be effective for detecting further morbidity.
# Management of Sex Partners
Patients should be instructed to refer their sex partners for evaluation, testing, and treatment. Because exposure intervals have received limited evaluation, the following recommendations are somewhat arbitrary. Sex partners should be evaluated, tested, and treated if they had sexual contact with the patient during the 60 days preceding onset of symptoms in the patient or diagnosis of chlamydia. Health-care providers should treat the most recent sex partner even if the time of the last sexual contact was >60 days before onset or diagnosis.
Patients should be instructed to abstain from sexual intercourse until they and their sex partners have completed treatment. Because a microbiologic test of cure usually is not recommended, abstinence should be continued until therapy is completed (i.e., 7 days after a single-dose regimen or after completion of a 7-day regimen). Timely treatment of sex partners is essential for decreasing the risk for reinfecting the index patient.
# Special Considerations
# Pregnancy
Doxycycline and ofloxacin are contraindicated for pregnant women. The safety and efficacy of azithromycin use in pregnant and lactating women have not been established. Repeat testing, preferably by culture, 3 weeks after completion of therapy with the following regimens is recommended, because a) none of these regimens are highly efficacious and b) the frequent side effects of erythromycin might discourage patient compliance with this regimen.
# Recommended Regimens for Pregnant Women
Erythromycin base 500 mg orally four times a day for 7 days, OR Amoxicillin 500 mg orally three times a day for 7 days.
# Alternative Regimens for Pregnant Women
Erythromycin base 250 mg orally four times a day for 14 days, OR Erythromycin ethylsuccinate 800 mg orally four times a day for 7 days, OR Erythromycin ethylsuccinate 400 mg orally four times a day for 14 days, OR Azithromycin 1 g orally in a single dose.
Note: Erythromycin estolate is contraindicated during pregnancy because of drug-related hepatotoxicity. Preliminary data indicate that azithromycin may be safe and effective. However, data are insufficient to recommend the routine use of azithromycin in pregnant women.
# HIV Infection
Patients who have chlamydial infection and also are infected with HIV should receive the same treatment regimen as those who are HIV-negative.
# Chlamydial Infection in Infants
Prenatal screening of pregnant women can prevent chlamydial infection among neonates. Pregnant women who are <25 years of age or who have new or multiple sex partners particularly should be targeted for screening. Periodic prevalence surveys of chlamydial infection can be conducted to confirm the validity of using these recommendations in specific clinical settings.
C. trachomatis infection of neonates results from perinatal exposure to the mother's infected cervix. The prevalence of C. trachomatis infection among pregnant women usually is >5%, regardless of race/ethnicity or socioeconomic status. Neonatal ocular prophylaxis with silver nitrate solution or antibiotic ointments does not prevent perinatal transmission of C. trachomatis from mother to infant. However, ocular prophylaxis with those agents does prevent gonococcal ophthalmia and should be continued for that reason (see Prevention of Ophthalmia Neonatorum).
Initial C. trachomatis perinatal infection involves mucous membranes of the eye, oropharynx, urogenital tract, and rectum. C. trachomatis infection in neonates is most often recognized by conjunctivitis that develops 5-12 days after birth. Chlamydia is the most frequent identifiable infectious cause of ophthalmia neonatorum. C. trachomatis also is a common cause of subacute, afebrile pneumonia with onset from 1 to 3 months of age. Asymptomatic infections also can occur in the oropharynx, genital tract, and rectum of neonates.
# Ophthalmia Neonatorum Caused by C. trachomatis
A chlamydial etiology should be considered for all infants aged ≤30 days who have conjunctivitis.
# Diagnostic Considerations
Sensitive and specific methods used to diagnose chlamydial ophthalmia in the neonate include both tissue culture and nonculture tests (e.g., direct fluorescent antibody tests and immunoassays). Giemsa-stained smears are specific for C. trachomatis, but such tests are not sensitive. Specimens must contain conjunctival cells, not exudate alone. Specimens for culture isolation and nonculture tests should be obtained from the everted eyelid using a dacron-tipped swab or the swab specified by the manufacturer's test kit. A specific diagnosis of C. trachomatis infection confirms the need for treatment not only for the neonate, but also for the mother and her sex partner(s). Ocular exudate from infants being evaluated for chlamydial conjunctivitis also should be tested for N. gonorrhoeae.
# Recommended Regimen
Erythromycin 50 mg/kg/day orally divided into four doses daily for 10-14 days.
Topical antibiotic therapy alone is inadequate for treatment of chlamydial infection and is unnecessary when systemic treatment is administered.
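For a weight-based divided regimen such as this, the per-dose amount is simply the daily total divided by the number of doses. A minimal sketch (illustrative arithmetic only, not clinical software; the function name is ours):

```python
# Illustrative arithmetic for the regimen above (erythromycin 50 mg/kg/day
# orally, divided into four doses daily, for 10-14 days).

def erythromycin_divided_dose(weight_kg: float) -> tuple:
    """Return (daily total mg, per-dose mg) for the regimen above."""
    daily_mg = 50 * weight_kg
    return daily_mg, daily_mg / 4

# A 4-kg infant: 200 mg/day, i.e., 50 mg per dose four times daily.
print(erythromycin_divided_dose(4.0))  # (200.0, 50.0)
```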
# Follow-Up
The efficacy of erythromycin treatment is approximately 80%; a second course of therapy may be required. Follow-up of infants to determine resolution is recommended. The possibility of concomitant chlamydial pneumonia should be considered.
# Management of Mothers and Their Sex Partners
The mothers of infants who have chlamydial infection and the sex partners of these women should be evaluated and treated (see Chlamydial Infection in Adolescents and Adults).
# Infant Pneumonia Caused by C. trachomatis
Characteristic signs of chlamydial pneumonia in infants include a) a repetitive staccato cough with tachypnea and b) hyperinflation and bilateral diffuse infiltrates on a chest radiograph. Wheezing is rare, and infants are typically afebrile. Peripheral eosinophilia sometimes occurs in infants who have chlamydial pneumonia. Because clinical presentations differ, initial treatment and diagnostic tests should encompass C. trachomatis for all infants aged 1-3 months who possibly have pneumonia.
# Diagnostic Considerations
Specimens for chlamydial testing should be collected from the nasopharynx. Tissue culture is the definitive standard for chlamydial pneumonia; nonculture tests can be used with the knowledge that nonculture tests of nasopharyngeal specimens produce lower sensitivity and specificity than nonculture tests of ocular specimens. Tracheal aspirates and lung biopsy specimens, if collected, should be tested for C. trachomatis.
The microimmunofluorescence test for C. trachomatis antibody is useful but not widely available. An acute IgM antibody titer ≥1:32 is strongly suggestive of C. trachomatis pneumonia.
Because of the delay in obtaining test results for chlamydia, the decision to include an agent in the antibiotic regimen that is active against C. trachomatis must frequently be based on the clinical and radiologic findings. The results of tests for chlamydial infection assist in the management of an infant's illness and determine the need for treating the mother and her sex partner(s).
# Recommended Regimen
Erythromycin base 50 mg/kg/day orally divided into four doses daily for 10-14 days.
# Follow-Up
The effectiveness of erythromycin treatment is approximately 80%; a second course of therapy may be required. Follow-up of infants is recommended to determine whether the pneumonia has resolved. Some infants with chlamydial pneumonia have had abnormal pulmonary function tests later in childhood.
# Management of Mothers and Their Sex Partners
Mothers of infants who have chlamydial infection and the sex partners of these women should be evaluated and treated according to the recommended treatment of adults for chlamydial infections (see Chlamydial Infection in Adolescents and Adults).
# Infants Born to Mothers Who Have Chlamydial Infection
Infants born to mothers who have untreated chlamydia are at high risk for infection; however, prophylactic antibiotic treatment is not indicated, and the efficacy of such treatment is unknown. Infants should be monitored to ensure appropriate treatment if infection develops.
# Chlamydial Infection in Children
Sexual abuse must be considered a cause of chlamydial infection in preadolescent children, although perinatally transmitted C. trachomatis infection of the nasopharynx, urogenital tract, and rectum may persist for >1 year (see Sexual Assault or Abuse of Children). Because of the potential for a criminal investigation and legal proceedings for sexual abuse, a diagnosis of C. trachomatis in a preadolescent child requires the high specificity provided by isolation in cell culture. The cultures should be confirmed by microscopic identification of the characteristic intracytoplasmic inclusions, preferably by fluorescein-conjugated monoclonal antibodies specific for C. trachomatis.
# Diagnostic Considerations
Nonculture tests for chlamydia should not be used because of the possibility of false-positive test results. With respiratory tract specimens, false-positive results can occur because of cross-reaction of test reagents with Chlamydia pneumoniae; with genital and anal specimens, false-positive results occur because of cross-reaction with fecal flora.
# Recommended Regimens
Children who weigh <45 kg:
Erythromycin base 50 mg/kg/day orally divided into four doses daily for 10-14 days.
# NOTE:
The effectiveness of treatment with erythromycin is approximately 80%; a second course of therapy may be required.
Children who weigh ≥45 kg but are <8 years of age: Azithromycin 1 g orally in a single dose.
Children ≥8 years of age: Azithromycin 1 g orally in a single dose, OR Doxycycline 100 mg orally twice a day for 7 days.
# Other Management Considerations
See Sexual Assault or Abuse of Children.
# Follow-Up
Follow-up cultures are necessary to ensure that treatment has been effective.
# Gonococcal Infection
# Gonococcal Infection in Adolescents and Adults
In the United States, an estimated 600,000 new infections with N. gonorrhoeae occur each year. Most infections among men produce symptoms that cause them to seek curative treatment soon enough to prevent serious sequelae, but this may not be soon enough to prevent transmission to others. Many infections among women do not produce recognizable symptoms until complications (e.g., pelvic inflammatory disease [PID]) have occurred. Both symptomatic and asymptomatic cases of PID can result in tubal scarring that leads to infertility or ectopic pregnancy. Because gonococcal infections among women often are asymptomatic, an important component of gonorrhea control in the United States continues to be the screening of women at high risk for STDs.
# Dual Therapy for Gonococcal and Chlamydial Infections
Patients infected with N. gonorrhoeae often are coinfected with C. trachomatis; this finding led to the recommendation that patients treated for gonococcal infection also be treated routinely with a regimen effective against uncomplicated genital C. trachomatis infection. Routine dual therapy without testing for chlamydia can be cost-effective for populations in which chlamydial infection accompanies 20%-40% of gonococcal infections, because the cost of therapy for chlamydia (e.g., $0.50-$1.50 for doxycycline) is less than the cost of testing. Some experts believe that the routine use of dual therapy has resulted in substantial decreases in the prevalence of chlamydial infection. Because most gonococci in the United States are susceptible to doxycycline and azithromycin, routine cotreatment might hinder the development of antimicrobial-resistant N. gonorrhoeae.
Since the introduction of dual therapy, the prevalence of chlamydial infection has decreased in some populations, and simultaneous testing for chlamydial infection has become quicker, more sensitive, and more widely available. In geographic areas in which the rates of coinfection are low, some clinicians might prefer to test for chlamydia rather than treat presumptively. However, presumptive treatment is indicated for patients who may not return for test results.
# Quinolone-Resistant N. gonorrhoeae (QRNG)
Cases of gonorrhea caused by N. gonorrhoeae resistant to fluoroquinolones have been reported sporadically from many parts of the world, including North America, and are becoming widespread in parts of Asia. As of February 1997, however, QRNG occurred rarely in the United States: <0.05% of 4,639 isolates collected by CDC's Gonococcal Isolate Surveillance Project (GISP) during 1996 had minimum inhibitory concentrations (MICs) ≥1.0 µg/mL to ciprofloxacin. The GISP sample is collected from 26 cities and includes approximately 1.3% of all reported gonococcal infections among men in the United States. As long as QRNG strains comprise <1% of all N. gonorrhoeae strains isolated at each of the 26 cities, the fluoroquinolone regimens can be used with confidence. However, importation of QRNG will probably continue, and the prevalence of QRNG in the United States could increase to the point that fluoroquinolones no longer reliably eradicate gonococcal infections.
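As a back-of-envelope check of the surveillance figure cited above, <0.05% of 4,639 isolates corresponds to at most two QRNG isolates in the 1996 GISP sample:

```python
# Back-of-envelope check only: 0.05% of the 4,639 GISP isolates.
isolates = 4639
max_qrng = int(isolates * 0.0005)  # 0.05% expressed as a fraction
print(max_qrng)  # 2 -> "<0.05%" means no more than 2 resistant isolates
```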
# Uncomplicated Gonococcal Infections of the Cervix, Urethra, and Rectum
# Recommended Regimens
Cefixime 400 mg orally in a single dose, OR Ceftriaxone 125 mg IM in a single dose, OR Ciprofloxacin 500 mg orally in a single dose, OR Ofloxacin 400 mg orally in a single dose, PLUS a regimen effective against possible coinfection with C. trachomatis, such as azithromycin 1 g orally in a single dose or doxycycline 100 mg orally twice a day for 7 days (see Dual Therapy for Gonococcal and Chlamydial Infections).

Cefixime has an antimicrobial spectrum similar to that of ceftriaxone, but the 400-mg oral dose does not provide as high or as sustained a bactericidal level as the 125-mg dose of ceftriaxone. In published clinical trials, the 400-mg dose cured 97.1% of uncomplicated urogenital and anorectal gonococcal infections. The advantage of cefixime is that it can be administered orally.
Ceftriaxone in a single injection of 125 mg provides sustained, high bactericidal levels in the blood. Extensive clinical experience indicates that ceftriaxone is safe and effective for the treatment of uncomplicated gonorrhea at all sites, curing 99.1% of uncomplicated urogenital and anorectal infections in published clinical trials.
Ciprofloxacin is effective against most strains of N. gonorrhoeae. At a dose of 500 mg, ciprofloxacin provides sustained bactericidal levels in the blood; in published clinical trials, it has cured 99.8% of uncomplicated urogenital and anorectal infections. Ciprofloxacin is safe, relatively inexpensive, and can be administered orally.
Ofloxacin also is effective against most strains of N. gonorrhoeae, and it has favorable pharmacokinetics. The 400-mg oral dose has been effective for treatment of uncomplicated urogenital and anorectal infections, curing 98.4% of infections in published clinical trials.
# Alternative Regimens
Spectinomycin 2 g IM in a single dose. Spectinomycin is expensive and must be injected; however, it has been effective in published clinical trials, curing 98.2% of uncomplicated urogenital and anorectal gonococcal infections. Spectinomycin is useful for treatment of patients who cannot tolerate cephalosporins and quinolones.
Single-dose cephalosporin regimens other than ceftriaxone 125 mg IM and cefixime 400 mg orally that are safe and highly effective against uncomplicated urogenital and anorectal gonococcal infections include a) ceftizoxime 500 mg IM, b) cefotaxime 500 mg IM, c) cefotetan 1 g IM, and d) cefoxitin 2 g IM with probenecid 1 g orally. None of these injectable cephalosporins offers any advantage in comparison with ceftriaxone, and clinical experience with these regimens for treatment of uncomplicated gonorrhea is limited.
Single-dose quinolone regimens include enoxacin 400 mg orally, lomefloxacin 400 mg orally, and norfloxacin 800 mg orally. These regimens appear to be safe and effective for the treatment of uncomplicated gonorrhea, but data regarding their use are limited. None of the regimens appears to offer any advantage over ciprofloxacin at a dose of 500 mg or ofloxacin at 400 mg.
Many other antimicrobials are active against N. gonorrhoeae; however, these guidelines are not intended to be a comprehensive list of all effective treatment regimens. Azithromycin 2 g orally is effective against uncomplicated gonococcal infection, but it is expensive and causes gastrointestinal distress too often to be recommended for treatment of gonorrhea. At an oral dose of 1 g, azithromycin is insufficiently effective, curing only 93% of patients in published studies.
# Uncomplicated Gonococcal Infection of the Pharynx
Gonococcal infections of the pharynx are more difficult to eradicate than infections at urogenital and anorectal sites. Few antigonococcal regimens can reliably cure such infections >90% of the time.
Although chlamydial coinfection of the pharynx is unusual, coinfection at genital sites sometimes occurs. Therefore, treatment for both gonorrhea and chlamydia is suggested.
# Recommended Regimens
Ceftriaxone 125 mg IM in a single dose, PLUS a regimen effective against possible coinfection with C. trachomatis, such as azithromycin 1 g orally in a single dose or doxycycline 100 mg orally twice a day for 7 days.
# Follow-Up
Patients who have uncomplicated gonorrhea and who are treated with any of the recommended regimens need not return for a test of cure. Patients who have symptoms that persist after treatment should be evaluated by culture for N. gonorrhoeae, and any gonococci isolated should be tested for antimicrobial susceptibility. Infections identified after treatment with one of the recommended regimens usually result from reinfection rather than treatment failure, indicating a need for improved patient education and referral of sex partners. Persistent urethritis, cervicitis, or proctitis also may be caused by C. trachomatis and other organisms.
# Management of Sex Partners
Patients should be instructed to refer sex partners for evaluation and treatment. All sex partners of patients who have N. gonorrhoeae infection should be evaluated and treated for N. gonorrhoeae and C. trachomatis infections if their last sexual contact with the patient was within 60 days before onset of symptoms or diagnosis of infection in the patient. If a patient's last sexual intercourse was >60 days before onset of symptoms or diagnosis, the patient's most recent sex partner should be treated. Patients should be instructed to avoid sexual intercourse until therapy is completed and they and their sex partners no longer have symptoms.
# Special Considerations
# Allergy, Intolerance, or Adverse Reactions
Persons who cannot tolerate cephalosporins or quinolones should be treated with spectinomycin. Because spectinomycin is unreliable (i.e., only 52% effective) against pharyngeal infections, patients who have suspected or known pharyngeal infection should have a pharyngeal culture evaluated 3-5 days after treatment to verify eradication of infection.
# Pregnancy
Pregnant women should not be treated with quinolones or tetracyclines. Those infected with N. gonorrhoeae should be treated with a recommended or alternate cephalosporin. Women who cannot tolerate a cephalosporin should be administered a single 2-g dose of spectinomycin IM. Either erythromycin or amoxicillin is recommended for treatment of presumptive or diagnosed C. trachomatis infection during pregnancy (see Chlamydial Infection).
# HIV Infection
Patients who have gonococcal infection and also are infected with HIV should receive the same treatment regimen as those who are HIV-negative.
# Gonococcal Conjunctivitis
Only one study of the treatment of gonococcal conjunctivitis among adults in North America has been published recently. In that study, 12 of 12 patients responded favorably to a single 1-g IM injection of ceftriaxone. The following recommendations reflect the opinions of expert consultants.
# Treatment
# Recommended Regimen
Ceftriaxone 1 g IM in a single dose, and lavage the infected eye with saline solution once.
# Management of Sex Partners
Patients should be instructed to refer their sex partners for evaluation and treatment (see Gonococcal Infection, Management of Sex Partners).
# Disseminated Gonococcal Infection (DGI)
DGI results from gonococcal bacteremia. DGI often results in petechial or pustular acral skin lesions, asymmetrical arthralgia, tenosynovitis, or septic arthritis. The infection is complicated occasionally by perihepatitis, and rarely by endocarditis or meningitis. Strains of N. gonorrhoeae that cause DGI tend to cause minimal genital inflammation. In the United States, these strains have occurred infrequently during the past decade.
No studies of the treatment of DGI among persons in North America have been published recently. The following recommendations reflect the opinions of experts. No treatment failures have been reported.
# Treatment
Hospitalization is recommended for initial therapy, especially for patients who cannot be relied on to comply with treatment, for those in whom the diagnosis is uncertain, and for those who have purulent synovial effusions or other complications. Patients should be examined for clinical evidence of endocarditis and meningitis. Patients treated for DGI should be treated presumptively for concurrent C. trachomatis infection unless appropriate testing excludes this infection.
# Recommended Initial Regimen
Ceftriaxone 1 g IM or IV every 24 hours. All regimens should be continued for 24-48 hours after improvement begins, at which time therapy may be switched to one of the following regimens to complete a full week of antimicrobial therapy: Cefixime 400 mg orally twice a day, OR Ciprofloxacin 500 mg orally twice a day, OR Ofloxacin 400 mg orally twice a day.
# Alternative Initial Regimens
# Management of Sex Partners
Gonococcal infection often is asymptomatic in sex partners of patients who have DGI. As with uncomplicated gonococcal infections, patients should be instructed to refer their sex partners for evaluation and treatment (see Gonococcal Infection, Management of Sex Partners).
# Gonococcal Meningitis and Endocarditis
# Recommended Initial Regimen
Ceftriaxone 1-2 g IV every 12 hours.
Therapy for meningitis should be continued for 10-14 days; therapy for endocarditis should be continued for at least 4 weeks. Treatment of complicated DGI should be undertaken in consultation with an expert.
# Management of Sex Partners
Patients should be instructed to refer their sex partners for evaluation and treatment (see Gonococcal Infection, Management of Sex Partners).
# Gonococcal Infection in Infants
Gonococcal infection usually results from exposure to infected cervical exudate at birth. It is usually an acute illness that becomes manifest 2-5 days after birth. The prevalence of infection among infants depends on the prevalence of infection among pregnant women, on whether pregnant women are screened for gonorrhea, and on whether newborns receive ophthalmia prophylaxis.
The most serious manifestations of N. gonorrhoeae infection in newborns are ophthalmia neonatorum and sepsis, including arthritis and meningitis. Less serious manifestations include rhinitis, vaginitis, urethritis, and inflammation at sites of fetal monitoring.
# Ophthalmia Neonatorum Caused by N. gonorrhoeae
Although N. gonorrhoeae is a less frequent cause of ophthalmia neonatorum in the United States than C. trachomatis and nonsexually transmitted agents, it is especially important because it may result in perforation of the globe of the eye and in blindness.
# Diagnostic Considerations
Infants at increased risk for gonococcal ophthalmia are those who do not receive ophthalmia prophylaxis and those whose mothers have had no prenatal care or whose mothers have a history of STDs or substance abuse. Gonococcal ophthalmia is strongly suggested when typical Gram-negative diplococci are identified in conjunctival exudate, justifying presumptive treatment for gonorrhea after appropriate cultures for N. gonorrhoeae are obtained. Appropriate chlamydial testing should be done simultaneously. Presumptive treatment for N. gonorrhoeae may be indicated for newborns who are at increased risk for gonococcal ophthalmia and who have conjunctivitis but do not have gonococci in a Gram-stained smear of conjunctival exudate.
In all cases of neonatal conjunctivitis, conjunctival exudate should be cultured for N. gonorrhoeae and tested for antibiotic susceptibility before a definitive diagnosis is made. A definitive diagnosis is important because of the public health and social consequences of a diagnosis of gonorrhea. Nongonococcal causes of neonatal ophthalmia include Moraxella catarrhalis and other Neisseria species that are indistinguishable from N. gonorrhoeae on Gram-stained smear but can be differentiated in the microbiology laboratory.
# Recommended Regimen
Ceftriaxone 25-50 mg/kg IV or IM in a single dose, not to exceed 125 mg.
# NOTE:
Topical antibiotic therapy alone is inadequate and is unnecessary if systemic treatment is administered.
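The weight-based dose is capped at 125 mg regardless of weight. A minimal sketch of that calculation (illustrative arithmetic only, not clinical software; the function name is ours):

```python
# Illustrative arithmetic for the single-dose regimen above; the 125-mg
# ceiling applies regardless of weight.

def ophthalmia_ceftriaxone_mg(weight_kg: float, mg_per_kg: float = 50) -> float:
    """Single ceftriaxone dose at 25-50 mg/kg, not to exceed 125 mg."""
    if not 25 <= mg_per_kg <= 50:
        raise ValueError("guideline range is 25-50 mg/kg")
    return min(mg_per_kg * weight_kg, 125)

print(ophthalmia_ceftriaxone_mg(2.0))  # 100.0 mg (below the cap)
print(ophthalmia_ceftriaxone_mg(4.0))  # 125.0 mg (200 mg computed, capped)
```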
# Other Management Considerations
Simultaneous infection with C. trachomatis should be considered when a patient does not respond satisfactorily to treatment. Both mother and infant should be tested for chlamydial infection at the same time that gonorrhea testing is done (see Ophthalmia Neonatorum Caused by C. trachomatis). Ceftriaxone should be administered cautiously to hyperbilirubinemic infants, especially those born prematurely.
# Follow-Up
Infants who have gonococcal ophthalmia should be hospitalized and evaluated for signs of disseminated infection (e.g., sepsis, arthritis, and meningitis). One dose of ceftriaxone is adequate therapy for gonococcal conjunctivitis, but many pediatricians prefer to continue antibiotics until cultures are negative at 48-72 hours. The duration of therapy should be decided in consultation with experienced physicians.
# Management of Mothers and Their Sex Partners
The mothers of infants who have gonococcal infection and the mothers' sex partners should be evaluated and treated according to the recommendations for treating gonococcal infections in adults (see Gonococcal Infection in Adolescents and Adults).
# Disseminated Gonococcal Infection and Gonococcal Scalp Abscess in Newborns
Sepsis, arthritis, meningitis, or any combination of these are rare complications of neonatal gonococcal infection. Localized gonococcal infection of the scalp might result from fetal monitoring through scalp electrodes. Detection of gonococcal infection in neonates who have sepsis, arthritis, meningitis, or scalp abscesses requires cultures of blood, CSF, and joint aspirate on chocolate agar. Specimens obtained from the conjunctiva, vagina, oropharynx, and rectum that are cultured on gonococcal selective medium are useful for identifying the primary site(s) of infection, especially if inflammation is present. Positive Gram-stained smears of exudate, CSF, or joint aspirate provide a presumptive basis for initiating treatment for N. gonorrhoeae. Diagnoses based on Gram-stained smears or presumptive identification of cultures should be confirmed with definitive tests on culture isolates.
# Recommended Regimens
Ceftriaxone 25-50 mg/kg/day IV or IM in a single daily dose for 7 days, with a duration of 10-14 days if meningitis is documented; OR Cefotaxime 25 mg/kg IV or IM every 12 hours for 7 days, with a duration of 10-14 days if meningitis is documented.
# Prophylactic Treatment for Infants Whose Mothers Have Gonococcal Infection
Infants born to mothers who have untreated gonorrhea are at high risk for infection.
# Recommended Regimen in the Absence of Signs of Gonococcal Infection
Ceftriaxone 25-50 mg/kg IV or IM, not to exceed 125 mg, in a single dose.
# Other Management Considerations
Mother and infant should be tested for chlamydial infection.
# Follow-Up
A follow-up examination is not required.
# Management of Mothers and Their Sex Partners
The mothers of infants who have gonococcal infection and the mothers' sex partners should be evaluated and treated according to the recommendations for treatment of gonococcal infections in adults (see Gonococcal Infection).
# Gonococcal Infection in Children
After the neonatal period, sexual abuse is the most frequent cause of gonococcal infection in preadolescent children (see Sexual Assault or Abuse of Children). Vaginitis is the most common manifestation of gonococcal infection in preadolescent children. PID following vaginal infection is probably less common than among adults. Among sexually abused children, anorectal and pharyngeal infections with N. gonorrhoeae are common and frequently asymptomatic.
# Diagnostic Considerations
Because of the legal implications of a diagnosis of N. gonorrhoeae infection in a child, only standard culture procedures for the isolation of N. gonorrhoeae should be used for children. Nonculture tests for gonococci (e.g., Gram-stained smear, DNA probes, and EIA tests) should not be used alone; none of these tests have been approved by FDA for use with specimens obtained from the oropharynx, rectum, or genital tract of children. Specimens from the vagina, urethra, pharynx, or rectum should be streaked onto selective media for isolation of N. gonorrhoeae, and all presumptive isolates of N. gonorrhoeae should be identified definitively by at least two tests that involve different principles (e.g., biochemical, enzyme substrate, or serologic). Isolates should be preserved to enable additional or repeated testing.
# Recommended Regimens for Children Who Weigh ≥45 kg
Children who weigh ≥45 kg should be treated with one of the regimens recommended for adults (see Gonococcal Infection).
# NOTE:
Quinolones are not approved for use in children because of concerns about toxicity based on animal studies. However, investigations of ciprofloxacin treatment in children who have cystic fibrosis demonstrated no adverse effects.
# Recommended Regimen for Children Who Weigh <45 kg and Who Have Uncomplicated Gonococcal Vulvovaginitis, Cervicitis, Urethritis, Pharyngitis, or Proctitis
Ceftriaxone 125 mg IM in a single dose.
# Alternative Regimen
Spectinomycin 40 mg/kg (maximum dose: 2 g) IM in a single dose may be used, but this therapy is unreliable for treatment of pharyngeal infections. Some experts use cefixime to treat gonococcal infections in children because it can be administered orally; however, no reports have been published concerning the safety or effectiveness of cefixime used for this purpose.
# Recommended Regimen for Children Who Weigh <45 kg and Who Have Bacteremia or Arthritis
Ceftriaxone 50 mg/kg (maximum dose: 1 g) IM or IV in a single dose daily for 7 days.
# Recommended Regimen for Children Who Weigh ≥45 kg and Who Have Bacteremia or Arthritis
Ceftriaxone 50 mg/kg (maximum dose: 2 g) IM or IV in a single dose daily for 10-14 days.
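Both pediatric bacteremia/arthritis regimens apply the same 50 mg/kg daily dose with a weight-dependent ceiling (1 g below 45 kg, 2 g at or above it). A minimal sketch (illustrative arithmetic only, not clinical software; the function name is ours):

```python
# Illustrative arithmetic for the two regimens above: ceftriaxone
# 50 mg/kg/day with a cap of 1 g (<45 kg) or 2 g (>=45 kg).

def bacteremia_ceftriaxone_mg(weight_kg: float) -> float:
    """Daily ceftriaxone dose at 50 mg/kg with the applicable maximum."""
    cap_mg = 1000 if weight_kg < 45 else 2000
    return min(50 * weight_kg, cap_mg)

print(bacteremia_ceftriaxone_mg(15))  # 750.0 mg/day (below the 1-g cap)
print(bacteremia_ceftriaxone_mg(30))  # 1000.0 mg/day (capped at 1 g)
print(bacteremia_ceftriaxone_mg(50))  # 2000.0 mg/day (capped at 2 g)
```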
# DISEASES CHARACTERIZED BY VAGINAL DISCHARGE
# Management of Patients Who Have Vaginal Infections
Vaginitis is usually characterized by a vaginal discharge or vulvar itching and irritation; a vaginal odor may be present. The three diseases most frequently associated with vaginal discharge are trichomoniasis (caused by T. vaginalis), BV (caused by a replacement of the normal vaginal flora by an overgrowth of anaerobic microorganisms and Gardnerella vaginalis), and candidiasis (usually caused by Candida albicans). MPC caused by C. trachomatis or N. gonorrhoeae can sometimes cause vaginal discharge. Although vulvovaginal candidiasis usually is not transmitted sexually, it is included in this section because it is often diagnosed in women being evaluated for STDs.
Vaginitis is diagnosed by pH and microscopic examination of fresh samples of the discharge. The pH of the vaginal secretions can be determined by narrow-range pH paper for the elevated pH typical of BV or trichomoniasis (i.e., a pH of >4.5). One way to examine the discharge is to dilute a sample in one to two drops of 0.9% normal saline solution on one slide and 10% potassium hydroxide (KOH) solution on a second slide. An amine odor detected immediately after applying KOH suggests BV. A cover slip is placed on each slide, and they are examined under a microscope at low- and high-dry power. The motile T. vaginalis or the clue cells of BV usually are identified easily in the saline specimen. The yeast or pseudohyphae of Candida species are more easily identified in the KOH specimen. The presence of objective signs of vulvar inflammation in the absence of vaginal pathogens, along with a minimal amount of discharge, suggests the possibility of mechanical, chemical, allergic, or other noninfectious irritation of the vulva. Culture for T. vaginalis is more sensitive than microscopic examination. Laboratory testing fails to identify the cause of vaginitis among a substantial minority of women.
# Bacterial Vaginosis
BV is a clinical syndrome resulting from replacement of the normal H2O2-producing Lactobacillus sp. in the vagina with high concentrations of anaerobic bacteria (e.g., Prevotella sp. and Mobiluncus sp.), G. vaginalis, and Mycoplasma hominis. BV is the most prevalent cause of vaginal discharge or malodor; however, half of the women whose illnesses meet the clinical criteria for BV are asymptomatic. The cause of the microbial alteration is not fully understood. Although BV is associated with having multiple sex partners, it is unclear whether BV results from acquisition of a sexually transmitted pathogen. Women who have never been sexually active are rarely affected. Treatment of the male sex partner has not been beneficial in preventing the recurrence of BV.
# Diagnostic Considerations
BV can be diagnosed by the use of clinical or Gram stain criteria. Clinical criteria require three of the following symptoms or signs:
a. A homogeneous, white, noninflammatory discharge that smoothly coats the vaginal walls;
b. The presence of clue cells on microscopic examination;
c. A pH of vaginal fluid >4.5;
d. A fishy odor of vaginal discharge before or after addition of 10% KOH (i.e., the whiff test).
When a Gram stain is used, determining the relative concentration of the bacterial morphotypes characteristic of the altered flora of BV is an acceptable laboratory method for diagnosing BV. Culture of G. vaginalis is not recommended as a diagnostic tool because it is not specific.
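The clinical rule above is a simple three-of-four count. A minimal sketch (illustrative only; the clinical findings themselves are judgments a program cannot verify, and the function name is ours):

```python
# Sketch of the three-of-four clinical criteria for BV listed above.

def bv_clinical_criteria(homogeneous_discharge: bool, clue_cells: bool,
                         vaginal_ph: float, positive_whiff_test: bool) -> bool:
    """BV is diagnosed clinically when at least three of the four criteria
    are present: homogeneous coating discharge, clue cells on microscopy,
    vaginal fluid pH >4.5, and a positive whiff test."""
    criteria = [homogeneous_discharge, clue_cells,
                vaginal_ph > 4.5, positive_whiff_test]
    return sum(criteria) >= 3

# Discharge, clue cells, and pH 5.0 but a negative whiff test: three of
# four criteria are met, so the clinical diagnosis is BV.
print(bv_clinical_criteria(True, True, 5.0, False))  # True
```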
# Treatment
The principal goal of therapy for BV is to relieve vaginal symptoms and signs of infection. All women who have symptomatic disease require treatment, regardless of pregnancy status.
BV during pregnancy is associated with adverse pregnancy outcomes. The results of several investigations indicate that treatment of pregnant women who have BV and who are at high risk for preterm delivery (i.e., those who previously delivered a premature infant) might reduce the risk for prematurity. Therefore, high-risk pregnant women who do not have symptoms of BV may be evaluated for treatment.
Although some experts recommend treatment for high-risk pregnant women who have asymptomatic BV, others believe more information is needed before such a recommendation is made. A large, randomized clinical trial is underway to assess treatment for asymptomatic BV in pregnant women; the results of this investigation should clarify the benefits of therapy for BV in women at both low and high risk for preterm delivery.
The bacterial flora that characterizes BV has been recovered from the endometria and salpinges of women who have PID. BV has been associated with endometritis, PID, and vaginal cuff cellulitis after invasive procedures such as endometrial biopsy, hysterectomy, hysterosalpingography, placement of an intrauterine device, cesarean section, and uterine curettage. The results of one randomized controlled trial indicated that treatment of BV with metronidazole substantially reduced postabortion PID. On the basis of these data, consideration should be given to treatment of women who have symptomatic or asymptomatic BV before surgical abortion procedures are performed. However, more information is needed before recommending whether patients who have asymptomatic BV should be treated before other invasive procedures are performed.
# Recommended Regimens for Nonpregnant Women
For treatment of pregnant women, see Bacterial Vaginosis, Special Considerations, Pregnancy.
Metronidazole 500 mg orally twice a day for 7 days,
# OR
Clindamycin cream 2%, one full applicator (5 g) intravaginally at bedtime for 7 days, OR Metronidazole gel 0.75%, one full applicator (5 g) intravaginally twice a day for 5 days.
# NOTE:
Patients should be advised to avoid consuming alcohol during treatment with metronidazole and for 24 hours thereafter. Clindamycin cream is oil-based and might weaken latex condoms and diaphragms. Refer to condom product labeling for additional information.
# Alternative Regimens
Metronidazole 2 g orally in a single dose,
# OR
Clindamycin 300 mg orally twice a day for 7 days.
Metronidazole 2-g single-dose therapy is an alternative regimen because of its lower efficacy for BV. Oral metronidazole (500 mg twice a day) is efficacious for the treatment of BV, resulting in relief of symptoms and improvement in clinical course and flora disturbances. Based on efficacy data from four randomized controlled trials, overall cure rates 4 weeks after completion of treatment did not differ significantly between the 7-day regimen of oral metronidazole and the clindamycin vaginal cream (78% vs. 82%, respectively). Similarly, the results of another randomized controlled trial indicated that cure rates 7-10 days after completion of treatment did not differ significantly between the 7-day regimen of oral metronidazole and the metronidazole vaginal gel (84% vs. 75%, respectively). FDA has approved Flagyl ER™ (750 mg) once daily for 7 days for treatment of BV; however, data concerning its clinical equivalency with other regimens have not been published.

Some health-care providers remain concerned about the possible teratogenicity of metronidazole, which has been suggested by experiments using extremely high and prolonged doses in animals; however, a recent meta-analysis does not indicate teratogenicity in humans. Some health-care providers prefer the intravaginal route because of the lack of systemic side effects (e.g., mild-to-moderate gastrointestinal disturbance and unpleasant taste). Mean peak serum concentrations of metronidazole after intravaginal administration are <2% of the levels achieved with standard 500-mg oral doses, and the mean bioavailability of clindamycin cream is approximately 4%.
# Follow-Up
Follow-up visits are unnecessary if symptoms resolve. Recurrence of BV is not unusual. Because treatment of BV in high-risk pregnant women who are asymptomatic might prevent adverse pregnancy outcomes, a follow-up evaluation, at 1 month after completion of treatment, should be considered to evaluate whether therapy was successful. The alternative BV treatment regimens may be used to treat recurrent disease. No long-term maintenance regimen with any therapeutic agent is recommended.
# Management of Sex Partners
The results of clinical trials indicate that a woman's response to therapy and the likelihood of relapse or recurrence are not affected by treatment of her sex partner(s). Therefore, routine treatment of sex partners is not recommended.
# Special Considerations
# Allergy or Intolerance to the Recommended Therapy
Clindamycin cream is preferred in case of allergy or intolerance to metronidazole. Metronidazole gel can be considered for patients who do not tolerate systemic metronidazole, but patients allergic to oral metronidazole should not be administered metronidazole vaginally.
# Pregnancy
BV has been associated with adverse pregnancy outcomes (e.g., premature rupture of the membranes, preterm labor, and preterm birth), and the organisms found in increased concentration in BV also are frequently present in postpartum or postcesarean endometritis. Because treatment of BV in high-risk pregnant women (i.e., those who have previously delivered a premature infant) who are asymptomatic might reduce preterm delivery, such women may be screened, and those with BV can be treated. The screening and treatment should be conducted at the earliest part of the second trimester of pregnancy. The recommended regimen is metronidazole 250 mg orally three times a day for 7 days. The alternative regimens are a) metronidazole 2 g orally in a single dose or b) clindamycin 300 mg orally twice a day for 7 days.
Low-risk pregnant women (i.e., women who previously have not had a premature delivery) who have symptomatic BV should be treated to relieve symptoms. The recommended regimen is metronidazole 250 mg orally three times a day for 7 days. The alternative regimens are a) metronidazole 2 g orally in a single dose; b) clindamycin 300 mg orally twice a day for 7 days; or c) metronidazole gel, 0.75%, one full applicator (5 g) intravaginally, twice a day for 5 days. Some experts prefer the use of systemic therapy for low-risk pregnant women to treat possible subclinical upper genital tract infections.
Lower doses of medication are recommended for pregnant women to minimize exposure to the fetus. Data are limited concerning the use of metronidazole vaginal gel during pregnancy. The use of clindamycin vaginal cream during pregnancy is not recommended, because two randomized trials indicated an increase in the number of preterm deliveries among pregnant women who were treated with this medication.
# HIV Infection
Patients who have BV and also are infected with HIV should receive the same treatment regimen as those who are HIV-negative.
# Trichomoniasis
Trichomoniasis is caused by the protozoan T. vaginalis. Most men who are infected with T. vaginalis do not have symptoms of infection, although a minority of men have NGU. Many infected women have symptoms: T. vaginalis characteristically causes a diffuse, malodorous, yellow-green discharge with vulvar irritation, although some women have minimal or no symptoms. Vaginal trichomoniasis might be associated with adverse pregnancy outcomes, particularly premature rupture of the membranes and preterm delivery.
# Recommended Regimen
Metronidazole 2 g orally in a single dose.
# Alternative Regimen*
Metronidazole 500 mg twice a day for 7 days.
Metronidazole is the only oral medication available in the United States for the treatment of trichomoniasis. In randomized clinical trials, the recommended metronidazole regimens have resulted in cure rates of approximately 90%-95%; ensuring treatment of sex partners might increase the cure rate. Treatment of patients and sex partners results in relief of symptoms, microbiologic cure, and reduction of transmission. Metronidazole gel is approved for treatment of BV, but, like other topically applied antimicrobials that are unlikely to achieve therapeutic levels in the urethra or perivaginal glands, it is considerably less efficacious for treatment of trichomoniasis than oral preparations of metronidazole and is not recommended for use. Several other topically applied antimicrobials have been used for treatment of trichomoniasis, but it is unlikely that these preparations will have greater efficacy than metronidazole gel.
# Follow-Up
Follow-up is unnecessary for men and women who become asymptomatic after treatment or who are initially asymptomatic. Infections with strains of T. vaginalis that have diminished susceptibility to metronidazole can occur; however, most of these organisms respond to higher doses of metronidazole. If treatment failure occurs with either regimen, the patient should be re-treated with metronidazole 500 mg twice a day for 7 days. If treatment failure occurs repeatedly, the patient should be treated with a single 2-g dose of metronidazole once a day for 3-5 days.

*FDA has approved Flagyl 375™ (metronidazole 375 mg) twice a day for 7 days for treatment of trichomoniasis on the basis of the pharmacokinetic equivalency of this regimen with metronidazole 250 mg three times a day for 7 days. No clinical data are available, however, to demonstrate clinical equivalency of the two regimens.
Patients with culture-documented infection who do not respond to the regimens described in this report and in whom reinfection has been excluded should be managed in consultation with an expert; consultation is available from CDC. Evaluation of such cases should include determination of the susceptibility of T. vaginalis to metronidazole.
# Management of Sex Partners
Sex partners should be treated. Patients should be instructed to avoid sex until they and their sex partners are cured. In the absence of a microbiologic test of cure, this means when therapy has been completed and patient and partner(s) are asymptomatic.
# Special Considerations
# Allergy, Intolerance, or Adverse Reactions
Effective alternatives to therapy with metronidazole are not available. Patients who are allergic to metronidazole can be managed by desensitization (26).
# Pregnancy
Patients can be treated with 2 g of metronidazole in a single dose.
# HIV Infection
Patients who have trichomoniasis and also are infected with HIV should receive the same treatment regimen as those who are HIV-negative.
# Vulvovaginal Candidiasis
Vulvovaginal candidiasis (VVC) is caused by C. albicans or, occasionally, by other Candida sp., Torulopsis sp., or other yeasts. An estimated 75% of women will have at least one episode of VVC, and 40%-45% will have two or more episodes. A small percentage of women (i.e., probably <5%) experience recurrent VVC (RVVC). Typical symptoms of VVC include pruritus and vaginal discharge. Other symptoms may include vaginal soreness, vulvar burning, dyspareunia, and external dysuria. None of these symptoms is specific for VVC.
# Diagnostic Considerations
A diagnosis of Candida vaginitis is suggested clinically by pruritus and erythema in the vulvovaginal area; a white discharge may occur. The diagnosis can be made in a woman who has signs and symptoms of vaginitis when either a) a wet preparation or Gram stain of vaginal discharge demonstrates yeasts or pseudohyphae or b) a culture or other test yields a positive result for a yeast species. Candida vaginitis is associated with a normal vaginal pH (≤4.5). Use of 10% KOH in wet preparations improves the visualization of yeast and mycelia by disrupting cellular material that might obscure the yeast or pseudohyphae. Identifying Candida by culture in the absence of symptoms should not lead to treatment, because approximately 10%-20% of women usually harbor Candida sp. and other yeasts in the vagina. VVC can occur concomitantly with STDs, and it frequently follows vaginal or systemic antibacterial therapy.
# Treatment
Topical formulations effectively treat VVC. The topically applied azole drugs are more effective than nystatin. Treatment with azoles results in relief of symptoms and negative cultures among 80%-90% of patients who complete therapy. Preparations for intravaginal administration of butoconazole, clotrimazole, miconazole, and tioconazole are available OTC, and women with VVC can choose one of those preparations. The duration for treatment with these preparations may be 1, 3, or 7 days. Self-medication with OTC preparations should be advised only for women who have been diagnosed previously with VVC and who have a recurrence of the same symptoms. Any woman whose symptoms persist after using an OTC preparation or who has a recurrence of symptoms within 2 months should seek medical care.
# Recommended Regimens
A new classification of VVC may facilitate antifungal selection as well as duration of therapy. Uncomplicated VVC (i.e., mild-to-moderate, sporadic, nonrecurrent disease in a normal host with normally susceptible C. albicans) responds to all the aforementioned azoles, even those that are short-term (<7 days) and single-dose therapies. In contrast, complicated VVC (i.e., severe local or recurrent VVC in an abnormal host) requires a longer duration of therapy (i.e., 10-14 days) with either topical or oral azoles. Additional studies confirming this approach are in progress.
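As a rough illustration of how this classification maps to duration of therapy, the sketch below restates the rule; the function name, parameters, and result strings are invented, and the categories are only those defined in the paragraph above.

```python
# Sketch of the VVC classification-to-duration mapping described above;
# names and return strings are illustrative, not from the guidelines.
def vvc_therapy_duration(severe_or_recurrent: bool, abnormal_host: bool) -> str:
    """Complicated VVC = severe local or recurrent disease, or an abnormal host;
    everything else is treated as uncomplicated."""
    if severe_or_recurrent or abnormal_host:
        return "complicated VVC: 10-14 days of topical or oral azole therapy"
    return "uncomplicated VVC: short-term (<7 days) or single-dose azole therapy"

# Example: sporadic, mild disease in a normal host maps to short-course therapy.
print(vvc_therapy_duration(severe_or_recurrent=False, abnormal_host=False))
```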
# Alternative Regimens
Several trials have demonstrated that oral azole agents (e.g., ketoconazole and itraconazole) might be as effective as topical agents. The ease of administering oral agents is an advantage over topical therapies. However, the potential for toxicity associated with using a systemic drug, particularly ketoconazole, must be considered.
# Follow-Up
Patients should be instructed to return for follow-up visits only if symptoms persist or recur.
# Management of Sex Partners
VVC usually is not acquired through sexual intercourse; treatment of sex partners is not recommended but may be considered for women who have recurrent infection. A minority of male sex partners may have balanitis, which is characterized by erythematous areas on the glans in conjunction with pruritus or irritation. These sex partners might benefit from treatment with topical antifungal agents to relieve symptoms.
# Special Considerations
# Allergy or Intolerance to the Recommended Therapy
Topical agents usually are free of systemic side effects, although local burning or irritation may occur. Oral agents occasionally cause nausea, abdominal pain, and headaches. Therapy with the oral azoles has been associated rarely with abnormal elevations of liver enzymes. Hepatotoxicity secondary to ketoconazole therapy occurs in an estimated one of every 10,000-15,000 exposed persons. Clinically important interactions might occur when these oral agents are administered with other drugs, including astemizole, calcium channel antagonists, cisapride, coumadin, cyclosporin A, oral hypoglycemic agents, phenytoin, protease inhibitors, tacrolimus, terfenadine, theophylline, trimetrexate, and rifampin.
# Pregnancy
VVC often occurs during pregnancy. Only topical azole therapies should be used to treat pregnant women. Of those treatments that have been investigated for use during pregnancy, the most effective are butoconazole, clotrimazole, miconazole, and terconazole. Many experts recommend 7 days of therapy during pregnancy.
# HIV Infection
Prospective controlled studies are in progress to confirm an alleged increase in incidence of VVC in HIV-infected women. No confirmed evidence has indicated a differential response to conventional antifungal therapy among HIV-positive women who have VVC. As such, women who have acute VVC and also are infected with HIV should receive the same treatment regimens as those who are HIV-negative.
# Recurrent Vulvovaginal Candidiasis
RVVC, which usually is defined as four or more episodes of symptomatic VVC annually, affects a small percentage of women (i.e., probably <5%). The pathogenesis of RVVC is poorly understood. Risk factors for RVVC include uncontrolled diabetes mellitus, immunosuppression, and corticosteroid use. In some women who have RVVC, episodes occur after repeated courses of topical or systemic antibacterials. However, this association is not apparent in the majority of women. Most women who have RVVC have no apparent predisposing conditions. Clinical trials addressing the management of RVVC have involved continuing therapy between episodes.
# Treatment
The optimal treatment for RVVC has not been established; however, an initial intensive regimen continued for approximately 10-14 days, followed immediately by a maintenance regimen for at least 6 months, is recommended. Maintenance ketoconazole 100 mg orally, once a day for ≤6 months, reduces the frequency of RVVC episodes. Investigations are evaluating a weekly fluconazole regimen, the results of which will be compared with once-monthly oral and topical antimycotic regimens that have only moderate protective efficacy. All cases of RVVC should be confirmed by culture before maintenance therapy is initiated.
Although patients with RVVC should be evaluated for predisposing conditions, routinely performing HIV testing for women with RVVC who do not have HIV risk factors is unnecessary.
# Follow-Up
Patients who are receiving treatment for RVVC should receive regular follow-up evaluations to monitor the effectiveness of therapy and the occurrence of side effects.
# Management of Sex Partners
Treatment of sex partners may be considered for partners who have symptomatic balanitis or penile dermatitis. However, routine treatment of sex partners usually is unnecessary.
# Special Considerations
# HIV Infection
Information is insufficient to determine the optimal management of RVVC among HIV-infected women. Until such information becomes available, management should be the same as for HIV-negative women who have RVVC.
# PELVIC INFLAMMATORY DISEASE (PID)
PID comprises a spectrum of inflammatory disorders of the upper female genital tract, including any combination of endometritis, salpingitis, tubo-ovarian abscess, and pelvic peritonitis. Sexually transmitted organisms, especially N. gonorrhoeae and C. trachomatis, are implicated in most cases; however, microorganisms that can be part of the vaginal flora (e.g., anaerobes, G. vaginalis, H. influenzae, enteric Gram-negative rods, and Streptococcus agalactiae) also can cause PID. In addition, M. hominis and U. urealyticum might be etiologic agents of PID.
# Diagnostic Considerations
Acute PID is difficult to diagnose because of the wide variation in the symptoms and signs. Many women with PID have subtle or mild symptoms that do not readily indicate PID. Consequently, delay in diagnosis and effective treatment probably contributes to inflammatory sequelae in the upper reproductive tract. Laparoscopy can be used to obtain a more accurate diagnosis of salpingitis and a more complete bacteriologic diagnosis. However, this diagnostic tool often is not readily available for acute cases, and its use is not easy to justify when symptoms are mild or vague. Moreover, laparoscopy will not detect endometritis and may not detect subtle inflammation of the fallopian tubes. Consequently, a diagnosis of PID usually is based on clinical findings.
The clinical diagnosis of acute PID also is imprecise. Data indicate that a clinical diagnosis of symptomatic PID has a PPV for salpingitis of 65%-90% in comparison with laparoscopy. The PPV of a clinical diagnosis of acute PID differs depending on epidemiologic characteristics and the clinical setting, with higher PPV among sexually active young (especially teenaged) women and among patients attending STD clinics or from settings in which rates of gonorrhea or chlamydia are high. In all settings, however, no single historical, physical, or laboratory finding is both sensitive and specific for the diagnosis of acute PID (i.e., can be used both to detect all cases of PID and to exclude all women without PID). Combinations of diagnostic findings that improve either sensitivity (i.e., detect more women who have PID) or specificity (i.e., exclude more women who do not have PID) do so only at the expense of the other. For example, requiring two or more findings excludes more women who do not have PID but also reduces the number of women with PID who are identified.
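This tradeoff can be illustrated with assumed numbers. The sketch below uses two hypothetical independent findings, with invented probabilities of 0.80 (presence in women with PID) and 0.30 (presence in women without PID), to show how requiring both findings rather than either one raises specificity while lowering sensitivity; none of these figures comes from the studies discussed above.

```python
# Toy numbers (assumed, not from the guidelines) illustrating the tradeoff:
# requiring more findings improves specificity at the cost of sensitivity.
from math import comb

P_FINDING_IF_PID = 0.80     # assumed chance each finding is present in women with PID
P_FINDING_IF_NO_PID = 0.30  # assumed chance each finding is present in women without PID
N_FINDINGS = 2

def prob_at_least(k_min: int, p: float, n: int = N_FINDINGS) -> float:
    """Probability that at least k_min of n independent findings are present."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

for k_min in (1, 2):
    sensitivity = prob_at_least(k_min, P_FINDING_IF_PID)
    specificity = 1 - prob_at_least(k_min, P_FINDING_IF_NO_PID)
    print(f"require >= {k_min} findings: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
# require >= 1 findings: sensitivity 0.96, specificity 0.49
# require >= 2 findings: sensitivity 0.64, specificity 0.91
```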
Many episodes of PID go unrecognized. Although some cases are asymptomatic, others are undiagnosed because the patient or the health-care provider fails to recognize the implications of mild or nonspecific symptoms or signs (e.g., abnormal bleeding, dyspareunia, or vaginal discharge). Because of the difficulty of diagnosis and the potential for damage to the reproductive health of women even by apparently mild or atypical PID, health-care providers should maintain a low threshold for the diagnosis of PID. Even so, the long-term outcome of early treatment of women with asymptomatic or atypical PID is unknown. The following recommendations for diagnosing PID are intended to help health-care providers recognize when PID should be suspected and when they need to obtain additional information to increase diagnostic certainty. These recommendations are based partially on the fact that diagnosis and management of other common causes of lower abdominal pain (e.g., ectopic pregnancy, acute appendicitis, and functional pain) are unlikely to be impaired by initiating empiric antimicrobial therapy for PID.
Empiric treatment of PID should be initiated in sexually active young women and others at risk for STDs if all the following minimum criteria are present and no other cause(s) for the illness can be identified:
- Lower abdominal tenderness,
- Adnexal tenderness, and
- Cervical motion tenderness.
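Expressed as a decision rule, the minimum criteria combine as follows; this is a minimal sketch with invented names, and the judgment about other identified causes cannot, of course, be reduced to a boolean.

```python
# Minimal sketch of the empiric-treatment rule above; names are illustrative only.
def start_empiric_pid_treatment(lower_abdominal_tenderness: bool,
                                adnexal_tenderness: bool,
                                cervical_motion_tenderness: bool,
                                other_cause_identified: bool) -> bool:
    # All three minimum criteria must be present, with no other identified cause.
    return (lower_abdominal_tenderness
            and adnexal_tenderness
            and cervical_motion_tenderness
            and not other_cause_identified)

# Example: all three minimum criteria present, no alternative diagnosis identified.
print(start_empiric_pid_treatment(True, True, True, False))  # True
```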
More elaborate diagnostic evaluation often is needed, because incorrect diagnosis and management might cause unnecessary morbidity. These additional criteria may be used to enhance the specificity of the minimum criteria listed previously. Additional criteria that support a diagnosis of PID include the following:
- Oral temperature >101° F (>38.3° C),
- Abnormal cervical or vaginal discharge,
- Elevated erythrocyte sedimentation rate,
- Elevated C-reactive protein, and
- Laboratory documentation of cervical infection with N. gonorrhoeae or C. trachomatis.
The definitive criteria for diagnosing PID, which are warranted in selected cases, include the following:
- Histopathologic evidence of endometritis on endometrial biopsy,
- Transvaginal sonography or other imaging techniques showing thickened, fluid-filled tubes with or without free pelvic fluid or tubo-ovarian complex, and
- Laparoscopic abnormalities consistent with PID.
Although treatment can be initiated before bacteriologic diagnosis of C. trachomatis or N. gonorrhoeae infection, such a diagnosis emphasizes the need to treat sex partners.
# Treatment
PID treatment regimens must provide empiric, broad-spectrum coverage of likely pathogens. Antimicrobial coverage should include N. gonorrhoeae, C. trachomatis, anaerobes, Gram-negative facultative bacteria, and streptococci. Although several antimicrobial regimens have been effective in achieving a clinical and microbiologic cure in randomized clinical trials with short-term follow-up, few investigations have a) assessed and compared these regimens with regard to elimination of infection in the endometrium and fallopian tubes or b) determined the incidence of long-term complications (e.g., tubal infertility and ectopic pregnancy).
All regimens should be effective against N. gonorrhoeae and C. trachomatis, because negative endocervical screening does not preclude upper-reproductive tract infection. Although the need to eradicate anaerobes from women who have PID has not been determined definitively, the evidence suggests that this may be important. Anaerobic bacteria have been isolated from the upper-reproductive tract of women who have PID, and data from in vitro studies have revealed that anaerobes such as Bacteroides fragilis can cause tubal and epithelial destruction. In addition, BV also is diagnosed in many women who have PID. Until regimens that do not cover these microbes have been shown to prevent sequelae as effectively as regimens that do, the recommended regimens should include anaerobic coverage. Treatment should be initiated as soon as the presumptive diagnosis has been made, because prevention of long-term sequelae has been linked directly with immediate administration of appropriate antibiotics. When selecting a treatment regimen, health-care providers should consider availability, cost, patient acceptance, and antimicrobial susceptibility.
In the past, many experts recommended that all patients who had PID be hospitalized so that bed rest and supervised treatment with parenteral antibiotics could be initiated. However, hospitalization is no longer synonymous with parenteral therapy. No currently available data compare the efficacy of parenteral with oral therapy or inpatient with outpatient treatment settings. Until the results from ongoing trials comparing parenteral inpatient therapy with oral outpatient therapy for women who have mild PID are available, such decisions must be based on observational data and consensus opinion. The decision of whether hospitalization is necessary should be based on the discretion of the health-care provider.
The following criteria for HOSPITALIZATION are based on observational data and theoretical concerns:
- Surgical emergencies such as appendicitis cannot be excluded;
- The patient is pregnant;
- The patient does not respond clinically to oral antimicrobial therapy;
- The patient is unable to follow or tolerate an outpatient oral regimen;
- The patient has severe illness, nausea and vomiting, or high fever;
- The patient has a tubo-ovarian abscess; or
- The patient is immunodeficient (i.e., has HIV infection with low CD4 counts, is taking immunosuppressive therapy, or has another disease).
Most clinicians favor at least 24 hours of direct inpatient observation for patients who have tubo-ovarian abscesses, after which time home parenteral therapy should be adequate.
There are no efficacy data comparing parenteral with oral regimens. Experts have extensive experience with both of the following regimens. Also, there are multiple randomized trials demonstrating the efficacy of each regimen. Although most trials have used parenteral treatment for at least 48 hours after the patient demonstrates substantial clinical improvement, this is an arbitrary designation. Clinical experience should guide decisions regarding transition to oral therapy, which may be accomplished within 24 hours of clinical improvement.
# Parenteral Regimen A
Cefotetan 2 g IV every 12 hours,
# OR
Cefoxitin 2 g IV every 6 hours,
# PLUS
Doxycycline 100 mg IV or orally every 12 hours.
# NOTE:
Because of pain associated with infusion, doxycycline should be administered orally when possible, even when the patient is hospitalized. Both oral and IV administration of doxycycline provide similar bioavailability. In the event that IV administration is necessary, use of lidocaine or other short-acting local anesthetic, heparin, or steroids with a steel needle or extension of the infusion time may reduce infusion complications. Parenteral therapy may be discontinued 24 hours after a patient improves clinically, and oral therapy with doxycycline (100 mg twice a day) should continue for a total of 14 days. When tubo-ovarian abscess is present, many health-care providers use clindamycin or metronidazole with doxycycline for continued therapy rather than doxycycline alone, because it provides more effective anaerobic coverage.
Clinical data are limited regarding the use of other second- or third-generation cephalosporins (e.g., ceftizoxime, cefotaxime, and ceftriaxone), which also might be effective therapy for PID and might replace cefotetan or cefoxitin. However, they are less active than cefotetan or cefoxitin against anaerobic bacteria.
# Parenteral Regimen B
Clindamycin 900 mg IV every 8 hours, PLUS Gentamicin loading dose IV or IM (2 mg/kg of body weight), followed by a maintenance dose (1.5 mg/kg) every 8 hours. Single daily dosing may be substituted.
# NOTE:
Although use of a single daily dose of gentamicin has not been evaluated for the treatment of PID, it is efficacious in other analogous situations. Parenteral therapy may be discontinued 24 hours after a patient improves clinically, and continuing oral therapy should consist of doxycycline 100 mg orally twice a day or clindamycin 450 mg orally four times a day to complete a total of 14 days of therapy. When tubo-ovarian abscess is present, many health-care providers use clindamycin for continued therapy rather than doxycycline, because clindamycin provides more effective anaerobic coverage.
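The weight-based arithmetic for the gentamicin regimen above is straightforward; the sketch below simply restates it with an invented function, and it deliberately omits renal dose adjustment and serum level monitoring, which remain matters of clinical judgment.

```python
# Minimal dosing-arithmetic sketch for the gentamicin regimen above; illustrative
# only, with no renal adjustment or therapeutic drug monitoring.
def gentamicin_regimen(weight_kg: float) -> dict:
    return {
        "loading_dose_mg": round(2.0 * weight_kg, 1),      # 2 mg/kg IV or IM loading dose
        "maintenance_dose_mg": round(1.5 * weight_kg, 1),  # 1.5 mg/kg per maintenance dose
        "dosing_interval_hours": 8,                        # maintenance dose every 8 hours
    }

# Example: a 70-kg patient receives a 140-mg loading dose, then 105 mg every 8 hours.
print(gentamicin_regimen(70.0))
```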
# Alternative Parenteral Regimens
Limited data support the use of other parenteral regimens, but the following three regimens have been investigated in at least one clinical trial, and they have broad-spectrum coverage. Ampicillin/sulbactam plus doxycycline has good coverage against C. trachomatis, N. gonorrhoeae, and anaerobes and appears to be effective for patients who have tubo-ovarian abscess. Both IV ofloxacin and ciprofloxacin have been investigated as single agents. Because ciprofloxacin has poor coverage against C. trachomatis, it is recommended that doxycycline be added routinely. Because of concerns regarding the anaerobic coverage of both quinolones, metronidazole should be included with each regimen.
# Oral Treatment
As with parenteral regimens, clinical trials of outpatient regimens have provided minimal information regarding intermediate and long-term outcomes. The following regimens provide coverage against the frequent etiologic agents of PID, but evidence from clinical trials supporting their use is limited. Patients who do not respond to oral therapy within 72 hours should be reevaluated to confirm the diagnosis and be administered parenteral therapy on either an outpatient or inpatient basis.
# Regimen A
Ofloxacin 400 mg orally twice a day for 14 days,
# PLUS
Metronidazole 500 mg orally twice a day for 14 days.
Oral ofloxacin has been investigated as a single agent in two well-designed clinical trials, and it is effective against both N. gonorrhoeae and C. trachomatis. Despite the results of these trials, ofloxacin's lack of anaerobic coverage is a concern; the addition of metronidazole provides this coverage.
# Regimen B
Ceftriaxone 250 mg IM once,
# OR
Cefoxitin 2 g IM plus probenecid 1 g orally, administered concurrently in a single dose,
# PLUS
Doxycycline 100 mg orally twice a day for 14 days.

The optimal choice of a cephalosporin for Regimen B is unclear; although cefoxitin has better anaerobic coverage, ceftriaxone has better coverage against N. gonorrhoeae. Clinical trials have demonstrated that a single dose of cefoxitin is effective in obtaining short-term clinical response in women who have PID; however, the theoretical limitations in its coverage of anaerobes may require the addition of metronidazole. The metronidazole also will effectively treat BV, which is frequently associated with PID. No data have been published regarding the use of oral cephalosporins for the treatment of PID.
# Alternative Oral Regimens
Information regarding other outpatient regimens is limited, but one other regimen has undergone at least one clinical trial and has broad-spectrum coverage. Amoxicillin/clavulanic acid plus doxycycline was effective in obtaining short-term clinical response in a single clinical trial; however, gastrointestinal symptoms might limit the overall success of this regimen. Several recent investigations have evaluated the use of azithromycin in the treatment of upper-reproductive tract infections; however, the data are insufficient to recommend this agent as a component of any of the treatment regimens for PID.
# Follow-Up
Patients receiving oral or parenteral therapy should demonstrate substantial clinical improvement (e.g., defervescence; reduction in direct or rebound abdominal tenderness; and reduction in uterine, adnexal, and cervical motion tenderness) within 3 days after initiation of therapy. Patients who do not demonstrate improvement within this time period usually require additional diagnostic tests, surgical intervention, or both.
If the health-care provider prescribes outpatient oral or parenteral therapy, a follow-up examination should be performed within 72 hours, using the criteria for clinical improvement described previously. Some experts also recommend rescreening for C. trachomatis and N. gonorrhoeae 4-6 weeks after therapy is completed. If PCR or LCR is used to document a test of cure, rescreening should be delayed for 1 month after completion of therapy.
# Management of Sex Partners
Sex partners of patients who have PID should be examined and treated if they had sexual contact with the patient during the 60 days preceding onset of symptoms in the patient. The evaluation and treatment are imperative because of the risk for reinfection and the strong likelihood of urethral gonococcal or chlamydial infection in the sex partner. Male partners of women who have PID caused by C. trachomatis and/or N. gonorrhoeae often are asymptomatic.
Sex partners should be treated empirically with regimens effective against both of these infections, regardless of the apparent etiology of PID or pathogens isolated from the infected woman.
Even in clinical settings in which only women are treated, special arrangements should be made to provide care for male sex partners of women who have PID. When this is not feasible, health-care providers should ensure that sex partners are referred for appropriate treatment.
# Special Considerations
# Pregnancy
Because of the high risk for maternal morbidity, fetal wastage, and preterm delivery, pregnant women who have suspected PID should be hospitalized and treated with parenteral antibiotics.
# HIV Infection
Differences in the clinical manifestations of PID between HIV-infected women and HIV-negative women have not been well delineated. In early observational studies, HIV-infected women with PID were more likely to require surgical intervention. In a subsequent and more comprehensive observational study, HIV-infected women who had PID had more severe symptoms than HIV-negative women but responded equally well to standard parenteral antibiotic regimens. In another study, the microbiologic findings for HIV-infected and HIV-negative women were similar, except for higher rates of concomitant Candida and HPV infections and HPV-related cytologic abnormalities among HIV-infected women. Immunosuppressed HIV-infected women who have PID should be managed aggressively using one of the parenteral antimicrobial regimens recommended in this report.
# EPIDIDYMITIS
Among sexually active men aged <35 years, epididymitis is most frequently caused by C. trachomatis or N. gonorrhoeae. Epididymitis caused by Gram-negative enteric organisms occurs more frequently among men aged >35 years, men who have recently undergone urinary tract instrumentation or surgery, and men who have anatomical abnormalities.
Although most patients can be treated on an outpatient basis, hospitalization should be considered when severe pain suggests other diagnoses (e.g., torsion, testicular infarction, and abscess) or when patients are febrile or might be noncompliant with an antimicrobial regimen.
# Diagnostic Considerations
Men who have epididymitis typically have unilateral testicular pain and tenderness; hydrocele and palpable swelling of the epididymis usually are present. Testicular torsion, a surgical emergency, should be considered in all cases but is more frequent among adolescents. Torsion occurs more frequently in patients who do not have evidence of inflammation or infection. Emergency testing for torsion may be indicated when the onset of pain is sudden, pain is severe, or the test results available during the initial examination do not enable a diagnosis of urethritis or urinary tract infection to be made. If the diagnosis is questionable, an expert should be consulted immediately, because testicular viability may be compromised. The evaluation of men for epididymitis should include the following procedures:
- A Gram-stained smear of urethral exudate or intraurethral swab specimen for diagnosis of urethritis (i.e., ≥5 polymorphonuclear leukocytes per oil immersion field) and for presumptive diagnosis of gonococcal infection.
- A culture of urethral exudate or intraurethral swab specimen, or nucleic acid amplification test (either on intraurethral swab or first-void urine) for N. gonorrhoeae and C. trachomatis.
- Examination of first-void urine for leukocytes if the urethral Gram stain is negative. Culture and Gram-stained smear of uncentrifuged urine should be obtained.
- Syphilis serology and HIV counseling and testing.
# Treatment
Empiric therapy is indicated before culture results are available. Treatment of epididymitis caused by C. trachomatis or N. gonorrhoeae will result in a) a microbiologic cure of infection, b) improvement of the signs and symptoms, c) prevention of transmission to others, and d) a decrease in the potential complications (e.g., infertility or chronic pain).
# Recommended Regimens
For epididymitis most likely caused by gonococcal or chlamydial infection:
Ceftriaxone 250 mg IM in a single dose, PLUS Doxycycline 100 mg orally twice a day for 10 days.
For epididymitis most likely caused by enteric organisms, or for patients allergic to cephalosporins and/or tetracyclines:
Ofloxacin 300 mg orally twice a day for 10 days.
As an adjunct to therapy, bed rest, scrotal elevation, and analgesics are recommended until fever and local inflammation have subsided.
# Follow-Up
Failure to improve within 3 days requires reevaluation of both the diagnosis and therapy. Swelling and tenderness that persist after completion of antimicrobial therapy should be evaluated comprehensively. The differential diagnosis includes tumor, abscess, infarction, testicular cancer, and tuberculous or fungal epididymitis.
# Management of Sex Partners
Patients who have epididymitis that is known or suspected to be caused by N. gonorrhoeae or C. trachomatis should be instructed to refer sex partners for evaluation and treatment. Sex partners of these patients should be referred if their contact with the index patient was within the 60 days preceding onset of symptoms in the patient.
Patients should be instructed to avoid sexual intercourse until they and their sex partners are cured. In the absence of a microbiologic test of cure, this means until therapy is completed and patient and partner(s) no longer have symptoms.
# Special Considerations
# HIV Infection
Patients who have uncomplicated epididymitis and also are infected with HIV should receive the same treatment regimen as those who are HIV-negative. Fungi and mycobacteria, however, are more likely to cause epididymitis in immunosuppressed patients than in immunocompetent patients.
# HUMAN PAPILLOMAVIRUS INFECTION
# Genital Warts
More than 20 types of HPV can infect the genital tract. Most HPV infections are asymptomatic, subclinical, or unrecognized. Visible genital warts usually are caused by HPV types 6 or 11. Other HPV types in the anogenital region (i.e., types 16, 18, 31, 33, and 35) have been strongly associated with cervical dysplasia. Diagnosis of genital warts can be confirmed by biopsy, although biopsy is needed only in certain circumstances (e.g., if the diagnosis is uncertain; the lesions do not respond to standard therapy; the disease worsens during therapy; the patient is immunocompromised; or warts are pigmented, indurated, fixed, and ulcerated). No data support the use of type-specific HPV nucleic acid tests in the routine diagnosis or management of visible genital warts.
HPV types 6 and 11 also can cause warts on the uterine cervix and in the vagina, urethra, and anus; these warts are sometimes symptomatic. Intra-anal warts are seen predominantly in patients who have had receptive anal intercourse; these warts are distinct from perianal warts, which can occur in men and women who do not have a history of anal sex. Outside the genital area, these HPV types have been associated with conjunctival, nasal, oral, and laryngeal warts. HPV types 6 and 11 are associated rarely with invasive squamous cell carcinoma of the external genitalia. Depending on their size and anatomic location, genital warts can be painful, friable, and/or pruritic.
HPV types 16, 18, 31, 33, and 35 are found occasionally in visible genital warts and have been associated with external genital (i.e., vulvar, penile, and anal) squamous intraepithelial neoplasia (i.e., squamous cell carcinoma in situ, bowenoid papulosis, Erythroplasia of Queyrat, or Bowen's disease of the genitalia). These HPV types have been associated with vaginal, anal, and cervical intraepithelial dysplasia and squamous cell carcinoma. Patients who have visible genital warts can be infected simultaneously with multiple HPV types.
# Treatment
The primary goal of treating visible genital warts is the removal of symptomatic warts. Treatment can induce wart-free periods in most patients. Genital warts often are asymptomatic. No evidence indicates that currently available treatments eradicate or affect the natural history of HPV infection. The removal of warts may or may not decrease infectivity. If left untreated, visible genital warts may resolve on their own, remain unchanged, or increase in size or number. No evidence indicates that treatment of visible warts affects the development of cervical cancer.

# Recommended Regimens
# Patient-Applied:
Podofilox 0.5% solution or gel. Patients should apply podofilox solution with a cotton swab, or podofilox gel with a finger, to visible genital warts twice a day for 3 days, followed by 4 days of no therapy. This cycle may be repeated, as necessary, for a total of four cycles. The total wart area treated should not exceed 10 cm², and the total volume of podofilox should not exceed 0.5 mL per day. The safety of podofilox during pregnancy has not been established.
# OR
Imiquimod 5% cream. Patients should apply imiquimod cream once daily at bedtime, three times a week for as long as 16 weeks. The treatment area should be washed with mild soap and water 6-10 hours after the application. Many patients may be clear of warts by 8-10 weeks or sooner. The safety of imiquimod during pregnancy has not been established.
# Provider-Administered:
Cryotherapy with liquid nitrogen or cryoprobe. Repeat applications every 1 to 2 weeks.
# OR
Podophyllin resin 10%-25% in compound tincture of benzoin. A small amount should be applied to each wart and allowed to air dry. To avoid the possibility of complications associated with systemic absorption and toxicity, some experts recommend that application be limited to ≤0.5 mL of podophyllin or ≤10 cm² of warts per session. Some experts suggest that the preparation should be thoroughly washed off 1-4 hours after application to reduce local irritation. Repeat weekly if necessary. The safety of podophyllin during pregnancy has not been established.
# OR
TCA or BCA 80%-90%. Apply a small amount only to warts and allow to dry, at which time a white "frosting" develops; powder with talc or sodium bicarbonate (i.e., baking soda) to remove unreacted acid if an excess amount is applied. Repeat weekly if necessary.

For patient-applied treatments, patients must be able to identify and reach warts to be treated. Podofilox 0.5% solution or gel is relatively inexpensive, easy to use, safe, and self-applied by patients. Podofilox is an antimitotic drug that results in destruction of warts. Most patients experience mild-to-moderate pain or local irritation after treatment. Imiquimod is a topically active immune enhancer that stimulates production of interferon and other cytokines. Before wart resolution, local inflammatory reactions are common; these reactions usually are mild to moderate.
Cryotherapy, which requires the use of basic equipment, destroys warts by thermal-induced cytolysis. Its major drawback is that proper use requires substantial training, without which warts are frequently overtreated or undertreated, resulting in poor efficacy or increased likelihood of complications. Pain after application of the liquid nitrogen, followed by necrosis and sometimes blistering, are not unusual. Although local anesthesia (topical or injected) is not used routinely, its use facilitates treatment if there are many warts or if the area of warts is large.
Podophyllin resin contains a number of compounds, including the podophyllin lignans that are antimitotic. The resin is most frequently compounded at 10%-25% in tincture of benzoin. However, podophyllin resin preparations differ in the concentration of active components and contaminants. The shelf life and stability of podophyllin preparations are unknown. It is important to apply a thin layer of podophyllin resin to the warts and allow it to air dry before the treated area comes into contact with clothing. Overapplication or failure to air dry can result in local irritation caused by spread of the compound to adjacent areas.
Both TCA and BCA are caustic agents that destroy warts by chemical coagulation of the proteins. Although these preparations are widely used, they have not been investigated thoroughly. TCA solutions have a low viscosity comparable to water and can spread rapidly if applied excessively, thus damaging adjacent normal tissue. Both TCA and BCA should be applied sparingly and allowed to dry before the patient sits or stands. If pain is intense, the acid can be neutralized with soap or sodium bicarbonate (i.e., baking soda).
Surgical removal of warts has an advantage over other treatment modalities in that it renders the patient wart-free, usually with a single visit. However, substantial clinical training, additional equipment, and a longer office visit are required. Once local anesthesia is achieved, the visible genital warts can be physically destroyed by electrosurgery, in which case no additional hemostasis is required. Alternatively, the warts can be removed either by tangential excision with a pair of fine scissors or a scalpel or by curettage. Because most warts are exophytic, this can be accomplished with a resulting wound that only extends into the upper dermis. Hemostasis can be achieved with an electrosurgical unit or a chemical styptic (e.g., an aluminum chloride solution). Suturing is neither required nor indicated in most cases when surgical removal is done properly. Surgery is most beneficial for patients who have a large number or area of genital warts. Carbon dioxide laser and surgery may be useful in the management of extensive warts or intraurethral warts, particularly for those patients who have not responded to other treatments.
Interferons, either natural or recombinant, used for the treatment of genital warts have been administered systemically (i.e., subcutaneously at a distant site or IM) and intralesionally (i.e., injected into the warts). Systemic interferon is not effective. The efficacy and recurrence rates of intralesional interferon are comparable to other treatment modalities. Interferon is believed to be effective because of antiviral and/or immunostimulating effects. However, interferon therapy is not recommended for routine use because of inconvenient routes of administration, frequent office visits, and the association between its use and a high frequency of systemic adverse effects.
Because of the shortcomings of available treatments, some clinics employ combination therapy (i.e., the simultaneous use of two or more modalities on the same wart at the same time). Most experts believe that combining modalities does not increase efficacy but may increase complications.
# Cervical Warts
For women who have exophytic cervical warts, high-grade squamous intraepithelial lesions (SIL) must be excluded before treatment is begun. Management of exophytic cervical warts should include consultation with an expert.
# Vaginal Warts
Cryotherapy with liquid nitrogen. The use of a cryoprobe in the vagina is not recommended because of the risk for vaginal perforation and fistula formation.
# OR
TCA or BCA 80%-90% applied only to warts. Apply a small amount only to warts and allow to dry, at which time a white "frosting" develops; powder with talc or sodium bicarbonate (i.e., baking soda) to remove unreacted acid if an excess amount is applied. Repeat weekly if necessary.
# OR
Podophyllin 10%-25% in compound tincture of benzoin applied to a treated area that must be dry before the speculum is removed. Treat with ≤2 cm² per session. Repeat application at weekly intervals. Because of concern about potential systemic absorption, some experts caution against vaginal application of podophyllin. The safety of podophyllin during pregnancy has not been established.
# Urethral Meatus Warts
Cryotherapy with liquid nitrogen,
# OR
Podophyllin 10%-25% in compound tincture of benzoin. The treatment area must be dry before contact with normal mucosa. Podophyllin may be applied weekly if necessary. The safety of podophyllin during pregnancy has not been established.
# Anal Warts
Cryotherapy with liquid nitrogen.
# OR
TCA or BCA 80%-90% applied to warts. Apply a small amount only to warts and allow to dry, at which time a white "frosting" develops; powder with talc or sodium bicarbonate (i.e., baking soda) to remove unreacted acid if an excess amount is applied. Repeat weekly if necessary.
# OR
# Surgical removal.
# Special Considerations
# Pregnancy
Because the safety of imiquimod, podofilox, and podophyllin during pregnancy has not been established, these agents should not be used during pregnancy. In some instances, cesarean delivery may be indicated for women with genital warts if the pelvic outlet is obstructed or if vaginal delivery would result in excessive bleeding.
# Immunosuppressed Patients
Persons who are immunosuppressed because of HIV or other reasons may not respond as well as immunocompetent persons to therapy for genital warts, and they may have more frequent recurrences after treatment. Squamous cell carcinomas arising in or resembling genital warts might occur more frequently among immunosuppressed persons, requiring more frequent biopsy for confirmation of diagnosis.
# Squamous Cell Carcinoma in situ
Patients in whom squamous cell carcinoma in situ of the genitalia is diagnosed should be referred to an expert for treatment. Ablative modalities usually are effective, but careful follow-up is important. The risk for these lesions leading to invasive squamous cell carcinoma of the external genitalia in immunocompetent patients is unknown but is probably low. Female partners of patients who have squamous cell carcinoma in situ are at high risk for cervical abnormalities.
# Subclinical Genital HPV Infection (Without Exophytic Warts)
Subclinical genital HPV infection occurs more frequently than visible genital warts among both men and women. Infection often is indirectly diagnosed on the cervix by Pap smear, colposcopy, or biopsy and on the penis, vulva, and other genital skin by the appearance of white areas after application of acetic acid. However, the routine use of acetic acid soaks and examination with light and magnification, as a screening test, to detect "subclinical" or "acetowhite" genital warts is not recommended. Acetowhitening is not a specific test for HPV infection. Thus, in populations at low risk for this infection, many false-positives may be detected when this test is used for screening. The specificity and sensitivity of this procedure have not been defined. In special situations, experienced clinicians find this test useful for identification of flat genital warts.
A definitive diagnosis of HPV infection depends on detection of viral nucleic acid (DNA or RNA) or capsid protein. Pap smear diagnosis of HPV does not always correlate with detection of HPV DNA in cervical cells. Cell changes attributed to HPV in the cervix are similar to those of mild dysplasia and often regress spontaneously without treatment. Tests that detect several types of HPV DNA or RNA in cells scraped from the cervix are available, but the clinical utility of these tests for managing patients is unclear. Management decisions should not be made on the basis of HPV tests. Screening for subclinical genital HPV infection using DNA or RNA tests or acetic acid is not recommended.
# Treatment
In the absence of coexistent dysplasia, treatment is not recommended for subclinical genital HPV infection diagnosed by Pap smear, colposcopy, biopsy, acetic acid soaking of genital skin or mucous membranes, or the detection of HPV (DNA or RNA). The diagnosis of subclinical genital HPV infection is often questionable, and no therapy has been identified to eradicate infection. HPV has been demonstrated in adjacent tissue after laser treatment of HPV-associated dysplasia and after attempts to eliminate subclinical HPV by extensive laser vaporization of the anogenital area. In the presence of coexistent dysplasia, management should be based on the grade of dysplasia.
# Management of Sex Partners
Examination of sex partners is unnecessary. Most sex partners of infected patients probably are already infected subclinically with HPV. No practical screening tests for subclinical infection are available. The use of condoms may reduce transmission to sex partners who are likely to be uninfected (e.g., new partners); however, the period of communicability is unknown. Whether patients who have subclinical HPV infection are as contagious as patients who have exophytic warts is unknown.
# CERVICAL CANCER SCREENING FOR WOMEN WHO ATTEND STD CLINICS OR HAVE A HISTORY OF STDs
Women who have a history of STD are at increased risk for cervical cancer, and women attending STD clinics may have other risk factors that place them at even greater risk. Prevalence studies have determined that precursor lesions for cervical cancer occur about five times more frequently among women attending STD clinics than among women attending family planning clinics.
The Pap smear (i.e., cervical smear) is an effective and relatively low-cost screening test for invasive cervical cancer and SIL, the precursors of cervical cancer. Both ACOG and the American Cancer Society (ACS) recommend annual Pap smears for all sexually active women. Although these guidelines take the position that Pap smears can be obtained less frequently in some situations, women with a history of STDs may need more frequent screening because of their increased risk for cervical cancer. Moreover, surveys of women attending STD clinics indicate that many women do not understand the purpose or importance of Pap smears, and almost half of the women who have had a pelvic examination erroneously believe they have had a Pap smear when they actually have not.
# Recommendations
At the time of a pelvic examination for STD screening, the health-care provider should inquire about the result of the patient's last Pap smear and discuss the purpose and importance of the Pap smear with the patient. If a woman has not had a Pap smear during the previous 12 months, a Pap smear should be obtained as part of the routine pelvic examination. Health-care providers should be aware that, after a pelvic examination, many women believe they have had a Pap smear when they actually have not, and thus may report having had a recent Pap smear. Therefore, in STD clinics, a Pap smear should be strongly considered during the routine clinical evaluation of women who have not had a normal Pap smear within the preceding 12 months that is documented within the clinic record or linked-system record.
A woman may benefit from receiving printed information about Pap smears and a report containing a statement that a Pap smear was obtained during her clinic visit. If possible, a copy of the Pap smear result should be provided to the patient for her records.
# Follow-Up
Clinics and health-care providers who provide Pap smear screening services are encouraged to use cytopathology laboratories that report results using the Bethesda System of classification. If the results of the Pap smear are abnormal, care should be provided according to the Interim Guidelines for Management of Abnormal Cervical Cytology published by the National Cancer Institute Consensus Panel and briefly summarized below (27). Appropriate follow-up of Pap smears showing a high-grade SIL always includes referral to a clinician who has the capacity to provide a colposcopic examination of the lower genital tract and, if indicated, colposcopically directed biopsies.

For a Pap smear showing low-grade SIL or atypical squamous cells of undetermined significance (ASCUS), follow-up without colposcopy may be acceptable in circumstances when the diagnosis is not qualified further or the cytopathologist favors a reactive process. In general, this would involve repeated Pap smears every 4-6 months for 2 years until the results of three consecutive smears have been negative. If repeated smears show persistent abnormalities, colposcopy and directed biopsy are indicated for low-grade SIL and should be considered for ASCUS. Women with a diagnosis of unqualified ASCUS associated with severe inflammation should at least be reevaluated with a repeat Pap smear after 2-3 months, then repeated Pap smears every 4-6 months for 2 years until the results of three consecutive smears have been negative. If specific infections are identified, the patient should be reevaluated after appropriate treatment for those infections. In all follow-up strategies using repeated Pap smears, the tests not only must be negative but also must be interpreted by the laboratory as "satisfactory for evaluation."
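The repeat-smear strategy can be summarized as a simple decision sketch; the function name and result strings are invented, and the persistent-abnormality branch is deliberately simplified (it omits the distinction between low-grade SIL, where colposcopy is indicated, and ASCUS, where it should be considered).

```python
# Simplified sketch of the repeat-Pap follow-up strategy described above for
# low-grade SIL or ASCUS without colposcopy; names and strings are illustrative.
def next_pap_followup_step(serial_results: list) -> str:
    """serial_results: Pap results in chronological order, each 'negative'
    (and read as satisfactory for evaluation) or 'abnormal'."""
    if len(serial_results) >= 3 and all(r == "negative" for r in serial_results[-3:]):
        return "return to routine screening (three consecutive negative smears)"
    if serial_results and serial_results[-1] == "abnormal":
        return "persistent abnormality: colposcopy and directed biopsy"
    return "repeat Pap smear in 4-6 months"

# Example: index abnormality followed by three negative repeat smears.
print(next_pap_followup_step(["abnormal", "negative", "negative", "negative"]))
# return to routine screening (three consecutive negative smears)
```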
Because many public health clinics, including most STD clinics, cannot provide clinical follow-up of abnormal Pap smears with colposcopy and biopsy, women with Pap smears demonstrating high grade SIL or persistent low-grade SIL or ASCUS usually will need a referral to other local health-care providers or clinics for colposcopy and biopsy. Clinics and health-care providers who offer Pap smear screening services but cannot provide appropriate colposcopic follow-up of abnormal Pap smears should arrange referral services that a) can promptly evaluate and treat patients and b) will report the results of the evaluation to the referring clinician or health-care provider. Clinics and health-care providers should develop protocols that identify women who miss initial appointments (i.e., so that these women can be scheduled for repeat Pap smears), and they should reevaluate such protocols routinely. Pap smear results, type and location of follow-up appointments, and results of follow-up should be clearly documented in the clinic record. The development of colposcopy and biopsy services in local health departments, especially in circumstances where referrals are difficult and follow-up is unlikely, should be considered.
# Other Management Considerations
Other considerations in performing Pap smears are as follows:
- The Pap smear is not an effective screening test for STDs.
- If a woman is menstruating, a Pap smear should be postponed, and the woman should be advised to have a Pap smear at the earliest opportunity.
- The presence of a mucopurulent discharge might compromise interpretation of the Pap smear. However, if the woman is unlikely to return for follow-up, a Pap smear can be obtained after careful removal of the discharge with a saline-soaked cotton swab.
- A woman who has external genital warts does not need to have Pap smears more frequently than a woman who does not have warts, unless otherwise indicated.
- In an STD clinic setting or when other cultures or specimens are collected for STD diagnoses, the Pap smear may be obtained last.
- Women who have had a hysterectomy do not require an annual Pap smear unless the hysterectomy was related to cervical cancer or its precursor lesions. In this situation, women should be advised to continue follow-up with the physician(s) who provided health care at the time of the hysterectomy.
- Both health-care providers who receive basic retraining on Pap smear collection and clinics that use simple quality assurance measures obtain fewer unsatisfactory smears.
- Although type-specific HPV testing to identify women at high and low risk for cervical cancer may become clinically relevant in the future, its utility in clinical practice is unclear, and such testing is not recommended.
# Special Considerations Pregnancy
Women who are pregnant should have a Pap smear as part of routine prenatal care. A cytobrush may be used for obtaining Pap smears in pregnant women, although care should be taken not to disrupt the mucous plug.
# HIV Infection
Several studies have documented an increased prevalence of SIL in HIV-infected women, and HIV is believed by many experts to hasten the progression of precursor lesions to invasive cervical cancer. The following recommendations for Pap smear screening among HIV-infected women are consistent with other guidelines published by the U.S. Department of Health and Human Services (10,11,27,28 ) and are based partially on the opinions of experts in the care and management of cervical cancer and HIV infection in women.
- After obtaining a complete history of previous cervical disease, HIV-infected women should have a comprehensive gynecologic examination, including a pelvic examination and Pap smear as part of their initial evaluation. A Pap smear should be obtained twice in the first year after diagnosis of HIV infection and, if the results are normal, annually thereafter. If the results of the Pap smear are abnormal, care should be provided according to the Interim Guidelines for Management of Abnormal Cervical Cytology (28 ). Women who have a cytological diagnosis of high-grade SIL or squamous cell carcinoma should undergo colposcopy and directed biopsy. HIV infection is not an indication for colposcopy in women who have normal Pap smears.
# VACCINE-PREVENTABLE STDs
One of the most effective means of preventing the transmission of STDs is preexposure immunization. Currently licensed vaccines for the prevention of STDs include those for hepatitis A and hepatitis B. Clinical development and trials are underway for vaccines against a number of other STDs, including HIV and HSV. As more vaccines become available, immunization possibly will become one of the most widespread methods used to prevent STDs.
Five different viruses (i.e., hepatitis A-E) account for almost all cases of viral hepatitis in humans. Serologic testing is necessary to confirm the diagnosis. For example, a health-care provider might assume that an injecting-drug user with jaundice has hepatitis B when, in fact, outbreaks of hepatitis A among injecting-drug users often occur. The correct diagnosis is essential for the delivery of appropriate preventive services. To ensure accurate reporting of viral hepatitis and appropriate prophylaxis of household contacts and sex partners, all case reports of viral hepatitis should be investigated and the etiology established through serologic testing.
# Hepatitis A
Hepatitis A is caused by infection with the hepatitis A virus (HAV). HAV replicates in the liver and is shed in the feces. Virus in the stool is found in the highest concentrations from 2 weeks before to 1 week after the onset of clinical illness. Virus also is present in serum and saliva during this period, although in much lower concentrations than in feces. The most common mode of HAV transmission is fecal-oral, either by person-to-person transmission between household contacts or sex partners or by contaminated food or water. Because viremia occurs in acute infection, bloodborne HAV transmission can occur; however, such cases have been reported infrequently. Although HAV is present in low concentrations in the saliva of infected persons, no evidence indicates that saliva is involved in transmission.
Of patients who have acute hepatitis A, ≤20% require hospitalization; fulminant liver failure develops in 0.1% of patients. The overall mortality rate for acute hepatitis A is 0.3%, but it is higher (1.8%) for adults aged >49 years. HAV infection is not associated with chronic liver disease.
In the United States during 1995, 31,582 cases of hepatitis A were reported. The most frequently reported source of infection was household or sexual contact with a person who had hepatitis A, followed by attendance or employment at a day care center; recent international travel; homosexual activity; injecting-drug use; and a suspected food or waterborne outbreak. Many persons who have hepatitis A do not identify risk factors; their source of infection may be other infected persons who are asymptomatic. The prevalence of previous HAV infection among the U.S. population is 33% (CDC, unpublished data).
Outbreaks of hepatitis A among homosexual men have been reported in urban areas, both in the United States and in foreign countries. In one investigation, the prevalence of HAV infection among homosexual men was significantly higher (30%) than that among heterosexual men (12%). In New York City, a case-control study of homosexual men who had acute hepatitis A determined that case-patients were more likely to have had more anonymous sex partners and to have engaged in group sex than were the control subjects; oral-anal intercourse (i.e., the oral role) and digital-rectal intercourse (i.e., the digital role) also were associated with illness.
# Treatment
Because HAV infection is self-limited and does not result in chronic infection or chronic liver disease, treatment is usually supportive. Hospitalization may be necessary for patients who are dehydrated because of nausea and vomiting or who have fulminant hepatitis A. Medications that might cause liver damage or that are metabolized by the liver should be used with caution. No specific diet or activity restrictions are necessary.
# Prevention
General measures for hepatitis A prevention (e.g., maintenance of good personal hygiene) have not been successful in interrupting outbreaks of hepatitis A when the mode of transmission is from person to person, including sexual contact. To help control hepatitis A outbreaks among homosexual and bisexual men, health education messages should stress the modes of HAV transmission and the measures that can be taken to reduce the risk for transmission of any STD, including enterically transmitted agents such as HAV. However, vaccination is the most effective means of preventing HAV infection.
Two types of products are available for the prevention of hepatitis A: immune globulin (IG) and hepatitis A vaccine. IG is a solution of antibodies prepared from human plasma that is made with a serial ethanol precipitation procedure that inactivates HBV and HIV. When administered intramuscularly before exposure to HAV, or within 2 weeks after exposure, IG is >85% effective in preventing hepatitis A. IG administration is recommended for a variety of exposure situations (e.g., for persons who have sexual or household contact with patients who have hepatitis A). The duration of protection is relatively short (i.e., 3-6 months) and dose dependent.
Inactivated hepatitis A vaccines have been available in the United States since 1995. These vaccines, administered as a two-dose series, are safe, highly immunogenic, and efficacious. Immunogenicity studies indicate that 99%-100% of persons respond to one dose of hepatitis A vaccine; the second dose provides long-term protection. Efficacy studies indicate that inactivated hepatitis A vaccines are 94%-100% effective in preventing HAV infection (2 ).
# Preexposure Prophylaxis
Vaccination with hepatitis A vaccine for preexposure protection against HAV infection is indicated for persons who have the following risk factors and who are likely to seek treatment in settings where STDs are being treated.
- Men who have sex with men. Sexually active men who have sex with men (both adolescents and adults) should be vaccinated.
- Illegal drug users. Vaccination is recommended for users of illegal injecting and noninjecting drugs if local epidemiologic evidence indicates previous or current outbreaks among persons with such risk behaviors.
# Postexposure Prophylaxis
Persons who were exposed recently to HAV (i.e., household or sexual contact with a person who has hepatitis A) and who had not been vaccinated before the exposure should be administered a single IM dose of IG (0.02 mL/kg) as soon as possible, but not >2 weeks after exposure. Persons who received at least one dose of hepatitis A vaccine ≥1 month before exposure to HAV do not need IG.
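As a worked example of the dosing arithmetic above (illustrative only; the helper function is hypothetical and does not substitute for clinical judgment):

```python
def ig_postexposure_dose_ml(weight_kg):
    """Single IM immune globulin dose for hepatitis A postexposure
    prophylaxis, at 0.02 mL per kilogram of body weight."""
    return 0.02 * weight_kg

print(ig_postexposure_dose_ml(70))  # a 70-kg adult would receive 1.4 mL
```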
# Hepatitis B
Hepatitis B is a common STD. During the past 10 years, sexual transmission accounted for approximately 30%-60% of the estimated 240,000 new HBV infections that occurred annually in the United States. Chronic HBV infection develops in 1%-6% of persons infected as adults. These persons are capable of transmitting HBV to others, and they are at risk for chronic liver disease. In the United States, HBV infection leads to an estimated 6,000 deaths annually; these deaths result from cirrhosis of the liver and primary hepatocellular carcinoma.
The risk for perinatal HBV infection among infants born to HBV-infected mothers is 10%-85%, depending on the mother's hepatitis B e antigen (HBeAg) status. Chronic HBV infection develops in approximately 90% of infected newborns; these children are at high risk for chronic liver disease. Even when not infected during the perinatal period, children of HBV-infected mothers are at high risk for acquiring chronic HBV infection by person-to-person transmission during the first 5 years of life.
# Treatment
No specific treatment is available for persons who have acute HBV infection. Supportive and symptomatic care usually are the mainstays of therapy. During the past decade, numerous antiviral agents have been investigated for treatment of chronic HBV infection. Alpha-2b interferon has been 40% effective in eliminating chronic HBV infection; persons who became infected during adulthood were most likely to respond to this treatment. Antiretroviral agents (e.g., lamivudine) have been effective in eliminating HBV infection, and a number of other compounds are being evaluated. The goal of antiviral treatment is to stop HBV replication. Response to treatment can be demonstrated by normalization of liver function tests, improvement in liver histology, and seroreversion from HBeAg-positive to HBeAg-negative. Long-term follow-up of treated patients suggests that the remission of chronic hepatitis induced by alpha interferon is of long duration. Patient characteristics associated with a positive response to interferon therapy include low pretherapy HBV DNA levels, high pretherapy alanine aminotransferase levels, short duration of infection, acquisition of disease in adulthood, active histology, and female sex.
# Prevention
Although methods used to prevent other STDs should prevent HBV infection, hepatitis B vaccination is the most effective means of preventing infection. The epidemiology of HBV infection in the United States indicates that multiple age groups must be targeted to provide widespread immunity and effectively prevent HBV transmission and HBV-related chronic liver disease (1 ). Vaccination of persons who have a history of STDs is part of a comprehensive strategy to eliminate HBV transmission in the United States. This comprehensive strategy also includes prevention of perinatal HBV infection by a) routine screening of all pregnant women, b) routine vaccination of all newborns, c) vaccination of older children at high risk for HBV infection (e.g., Alaskan Natives, Pacific Islanders, and residents in households of first-generation immigrants from countries in which HBV is of high or intermediate endemicity), d) vaccination of children aged 11-12 years who have not previously received hepatitis B vaccine, and e) vaccination of adolescents and adults at high risk for infection.
# Preexposure Prophylaxis
With the implementation of routine infant hepatitis B vaccination and the wide-scale implementation of vaccination programs for adolescents, vaccination of adults at high risk for HBV has become a priority in the strategy to eliminate HBV transmission in the United States. All persons attending STD clinics and persons known to be at high risk for HBV infection (e.g., persons with multiple sex partners, sex partners of persons with chronic HBV infection, and injecting-drug users) should be offered hepatitis B vaccine and advised of their risk for HBV infection (as well as their risk for HIV infection) and the means to reduce their risk (i.e., exclusivity in sexual relationships, use of condoms, and avoidance of nonsterile drug-injection equipment).
Persons who should receive hepatitis B vaccine include the following:
- Sexually active homosexual and bisexual men;
- Sexually active heterosexual men and women, including those a) in whom another STD was recently diagnosed, b) who had more than one sex partner in the preceding 6 months, c) who received treatment in an STD clinic, and d) who are prostitutes;
- Illegal drug users, including injecting-drug users and users of illegal noninjecting drugs;
- Health-care workers;
- Recipients of certain blood products;
- Household and sexual contacts of persons who have chronic HBV infection;
- Adoptees from countries in which HBV infection is endemic;
- Certain international travelers;
- Clients and employees of facilities for the developmentally disabled;
- Infants and children; and
- Hemodialysis patients.
# Screening for Antibody Versus Vaccination Without Screening
The prevalence of previous HBV infection among sexually active homosexual men and among injecting-drug users is high. Serologic screening for evidence of previous infection before vaccinating adult members of these groups may be cost-effective, depending on the costs of laboratory testing and vaccine. At the current cost of vaccine, prevaccination testing of adolescents is not cost-effective. For adults attending STD clinics, the prevalence of HBV infection and the vaccine cost may justify prevaccination testing. However, because prevaccination testing may lower compliance with vaccination, the first dose of vaccine should be administered at the time of testing. The additional doses of hepatitis B vaccine should be administered on the basis of the prevaccination test results. The preferred serologic test for prevaccination testing is the total antibody to hepatitis B core antigen (anti-HBc), because it will detect persons who have either resolved or chronic infection. Because anti-HBc testing will not identify persons immune to HBV infection as a result of vaccination, a history of hepatitis B vaccination should be obtained, and fully vaccinated persons should not be revaccinated.
# Vaccination Schedules
Hepatitis B vaccine is highly immunogenic. Protective levels of antibody are present in approximately 50% of young adults after one dose of vaccine; in 85%, after two doses; and >90%, after three doses. The third dose is required to provide long-term immunity. The most often used schedule is vaccination at 0, 1-2, and 4-6 months. The first and second doses of vaccine must be administered at least 1 month apart, and the first and third doses at least 4 months apart. If the vaccination series is interrupted after the first or second dose of vaccine, the missing dose should be administered as soon as possible. The series should not be restarted if a dose has been missed. The vaccine should be administered IM in the deltoid, not in the buttock.
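The interval rules in this schedule can be checked mechanically. This is a minimal sketch under the assumption that doses are recorded as whole months elapsed since the first dose; the function name is ours, not the guidelines':

```python
def valid_hepb_schedule(dose2_month, dose3_month):
    """Check a proposed schedule against the minimum intervals stated
    above: dose 2 at least 1 month after dose 1, and dose 3 at least
    4 months after dose 1 (dose 1 is at month 0)."""
    return dose2_month >= 1 and dose3_month >= 4

assert valid_hepb_schedule(1, 6)      # the common 0, 1-2, 4-6 month schedule
assert not valid_hepb_schedule(1, 3)  # third dose given too early
```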
# Postexposure Prophylaxis
# Exposure to Persons Who Have Acute Hepatitis B Sexual Contacts
Patients who have acute HBV infection are potentially infectious to persons with whom they have sexual contact. Passive immunization with hepatitis B immune globulin (HBIG) prevents 75% of these infections. Hepatitis B vaccination alone is less effective in preventing infection than the combination of HBIG and vaccination. Sexual contacts of patients who have acute hepatitis B should receive HBIG and begin the hepatitis B vaccine series within 14 days after the most recent sexual contact. Testing of sex partners for susceptibility to HBV infection (anti-HBc) can be considered if it does not delay treatment >14 days.
# Nonsexual Household Contacts
Nonsexual household contacts of patients who have acute hepatitis B are not at high risk for infection unless they are exposed to the patient's blood (e.g., by sharing a toothbrush or razor blade). However, vaccination of household contacts is encouraged, especially for children and adolescents. If the patient remains HBsAg-positive after 6 months (i.e., becomes chronically infected), all household contacts should be vaccinated.
# Exposure to Persons Who Have Chronic HBV Infection
Hepatitis B vaccination without the use of HBIG is highly effective in preventing HBV infection in household and sexual contacts of persons who have chronic HBV infection, and all such contacts should be vaccinated. Postvaccination serologic testing is indicated for sex partners of persons who have chronic hepatitis B infections and for infants born to HBsAg-positive women.
# Special Considerations Pregnancy
Pregnancy is not a contraindication to hepatitis B vaccine or HBIG administration.
# HIV Infection
HBV infection in HIV-infected persons is more likely to lead to chronic HBV infection. HIV infection also can impair the response to hepatitis B vaccine. Therefore, HIV-infected persons who are vaccinated should be tested for hepatitis B surface antibody 1-2 months after the third vaccine dose. Revaccination with three more doses should be considered for those who do not respond initially to vaccination. Those who do not respond to additional doses should be advised that they might remain susceptible to HBV infection.
# PROCTITIS, PROCTOCOLITIS, AND ENTERITIS
Sexually transmitted gastrointestinal syndromes include proctitis, proctocolitis, and enteritis. Proctitis occurs predominantly among persons who participate in anal intercourse, and enteritis occurs among those whose sexual practices include oral-fecal contact. Proctocolitis can be acquired by either route, depending on the pathogen. Evaluation should include appropriate diagnostic procedures (e.g., anoscopy or sigmoidoscopy, stool examination, and culture).
Proctitis is an inflammation limited to the rectum (the distal 10-12 cm) that is associated with anorectal pain, tenesmus, and rectal discharge. N. gonorrhoeae, C. trachomatis (including LGV serovars), T. pallidum, and HSV usually are the sexually transmitted pathogens involved. In patients coinfected with HIV, herpes proctitis may be especially severe.
Proctocolitis is associated with symptoms of proctitis plus diarrhea and/or abdominal cramps and inflammation of the colonic mucosa extending to 12 cm. Fecal leukocytes may be detected on stool examination depending on the pathogen. Pathogenic organisms include Campylobacter sp., Shigella sp., Entamoeba histolytica, and, rarely, C. trachomatis (LGV serovars). CMV or other opportunistic agents may be involved in immunosuppressed HIV-infected patients.
Enteritis usually results in diarrhea and abdominal cramping without signs of proctitis or proctocolitis. In otherwise healthy patients, Giardia lamblia is most frequently implicated. Among HIV-infected patients, other infections that usually are not sexually transmitted may occur, including CMV, Mycobacterium avium-intracellulare, Salmonella sp., Cryptosporidium, Microsporidium, and Isospora. Multiple stool examinations may be necessary to detect Giardia, and special stool preparations are required to diagnose cryptosporidiosis and microsporidiosis. Additionally, enteritis may be a primary effect of HIV infection. When laboratory diagnostic capabilities are available, treatment should be based on the specific diagnosis. Diagnostic and treatment recommendations for all enteric infections are beyond the scope of these guidelines.

# Treatment

Acute proctitis of recent onset among persons who have recently practiced receptive anal intercourse is most often sexually transmitted. Such patients should be examined by anoscopy and should be evaluated for infection with HSV, N. gonorrhoeae, C. trachomatis, and T. pallidum. If anorectal pus is found on examination, or if polymorphonuclear leukocytes are found on a Gram-stained smear of anorectal secretions, the following therapy may be prescribed pending results of additional laboratory tests.

# Recommended Regimen

Ceftriaxone 125 mg IM (or another agent effective against anal and genital gonorrhea), PLUS Doxycycline 100 mg orally twice a day for 7 days.

# NOTE:

For patients who have herpes proctitis, refer to Genital Herpes Simplex Virus (HSV) Infection.

# Follow-Up

Follow-up should be based on the specific etiology and severity of clinical symptoms. Reinfection may be difficult to distinguish from treatment failure.

# Management of Sex Partners

Sex partners of patients who have sexually transmitted enteric infections should be evaluated for any diseases diagnosed in the patient.

# ECTOPARASITIC INFECTIONS Pediculosis Pubis

Patients who have pediculosis pubis (i.e., pubic lice) usually seek medical attention because of pruritus. Such patients also usually notice lice or nits on their pubic hair.

# Recommended Regimens

Permethrin 1% creme rinse applied to affected areas and washed off after 10 minutes.

# OR

Lindane 1% shampoo applied for 4 minutes to the affected area, and then thoroughly washed off. This regimen is not recommended for pregnant or lactating women or for children aged <2 years.
# OR
Pyrethrins with piperonyl butoxide applied to the affected area and washed off after 10 minutes.
The lindane regimen is the least expensive therapy; toxicity, as indicated by seizures and aplastic anemia, has not been reported when treatment was limited to the recommended 4-minute period. Permethrin has less potential for toxicity than lindane.
# Other Management Considerations
The recommended regimens should not be applied to the eyes. Pediculosis of the eyelashes should be treated by applying occlusive ophthalmic ointment to the eyelid margins twice a day for 10 days.
Bedding and clothing should be decontaminated (i.e., either machine-washed or machine-dried using the heat cycle or dry-cleaned) or removed from body contact for at least 72 hours. Fumigation of living areas is not necessary.
# Follow-Up
Patients should be evaluated after 1 week if symptoms persist. Re-treatment may be necessary if lice are found or if eggs are observed at the hair-skin junction. Patients who do not respond to one of the recommended regimens should be retreated with an alternative regimen.
# Management of Sex Partners
Sex partners within the preceding month should be treated.
# Special Considerations Pregnancy
Pregnant and lactating women should be treated with either permethrin or pyrethrins with piperonyl butoxide.
# HIV Infection
Patients who have pediculosis pubis and also are infected with HIV should receive the same treatment regimen as those who are HIV-negative.
# Scabies
The predominant symptom of scabies is pruritus. Sensitization to Sarcoptes scabiei must occur before pruritus begins. The first time a person is infected with S. scabiei, sensitization takes several weeks to develop. Pruritus might occur within 24 hours after a subsequent reinfestation. Scabies in adults may be sexually transmitted, although scabies in children usually is not.
# Recommended Regimen
Permethrin cream (5%) applied to all areas of the body from the neck down and washed off after 8-14 hours.
# Alternative Regimens
Lindane (1%) 1 oz. of lotion or 30 g of cream applied thinly to all areas of the body from the neck down and thoroughly washed off after 8 hours.
# OR

Sulfur (6%) precipitated in ointment, applied thinly to all areas nightly for 3 nights. Previous applications should be washed off before new applications are applied, and the ointment should be thoroughly washed off 24 hours after the last application.
Permethrin is effective and safe but costs more than lindane. Lindane is effective in most areas of the country, but lindane resistance has been reported in some areas of the world, including parts of the United States. Seizures have occurred when lindane was applied after a bath or used by patients who had extensive dermatitis. Aplastic anemia following lindane use also has been reported. NOTE: Lindane should not be used after a bath, and it should not be used by a) persons who have extensive dermatitis, b) pregnant or lactating women, and c) children aged <2 years.
Ivermectin (single oral dose of 200 µg/kg or 0.8% topical solution) is a potential new therapeutic modality. However, no controlled clinical trials have been conducted to compare ivermectin with the currently recommended therapies.
# Other Management Considerations
Bedding and clothing should be decontaminated (i.e., either machine-washed or machine-dried using the hot cycle or dry-cleaned) or removed from body contact for at least 72 hours. Fumigation of living areas is unnecessary.
# Follow-Up
Pruritus may persist for several weeks. Some experts recommend re-treatment after 1 week for patients who are still symptomatic; other experts recommend retreatment only if live mites are observed. Patients who do not respond to the recommended treatment should be retreated with an alternative regimen.
# Management of Sex Partners and Household Contacts
Both sexual and close personal or household contacts within the preceding month should be examined and treated.

# Management of Outbreaks in Communities, Nursing Homes, and Other Institutional Settings

Scabies epidemics often occur in nursing homes, acute- and chronic-care hospitals, residential facilities, and communities. Control of an epidemic can be achieved only by treating the entire population at risk. Epidemics should be managed in consultation with an expert.

# Special Considerations Infants, Young Children, and Pregnant or Lactating Women

Infants, young children, and pregnant or lactating women should not be treated with lindane. They may be treated with permethrin.

# HIV Infection

Patients who have uncomplicated scabies and also are infected with HIV should receive the same treatment regimen as those who are HIV-negative. HIV-infected patients and others who are immunosuppressed are at increased risk for Norwegian scabies, a disseminated dermatologic infection. Such patients should be managed in consultation with an expert.
# Sexual Assault or Abuse of Children
Recommendations in this report are limited to the identification and treatment of STDs. Management of the psychosocial aspects of the sexual assault or abuse of children is important but is not included in these recommendations.
The identification of sexually transmissible agents in children beyond the neonatal period suggests sexual abuse. However, there are exceptions; for example, rectal or genital infection with C. trachomatis among young children may be the result of perinatally acquired infection and may persist for as long as 3 years. In addition, genital warts, BV, and genital mycoplasmas have been diagnosed in children who have been abused and in those not abused. There are several modes by which HBV is transmitted to children; the most common of these is household exposure to persons who have chronic HBV infection.
The possibility of sexual abuse should be considered if no obvious risk factor for infection can be identified. When the only evidence of sexual abuse is the isolation of an organism or the detection of antibodies to a sexually transmissible agent, findings should be confirmed and the implications considered carefully. The evaluation for determining whether sexual abuse has occurred among children who have infections that can be sexually transmitted should be conducted in compliance with expert recommendations by practitioners who have experience and training in the evaluation of abused or assaulted children (29 ).
# Initial and 2-Week Follow-Up Examinations
During the initial examination and 2-week follow-up examination (if indicated), the following should be performed:
- Visual inspection of the genital, perianal, and oral areas for genital warts and ulcerative lesions.
- Cultures for N. gonorrhoeae from specimens collected from the pharynx and anus in both boys and girls, the vagina in girls, and the urethra in boys. Cervical specimens are not recommended for prepubertal girls. For boys, a meatal specimen of urethral discharge is an adequate substitute for an intraurethral swab specimen when discharge is present. Only standard culture systems for the isolation of N. gonorrhoeae should be used. All presumptive isolates of N. gonorrhoeae should be confirmed by at least two tests that involve different principles (e.g., biochemical, enzyme substrate, or serologic methods). Isolates should be preserved in case additional or repeated testing is needed.
- Cultures for C. trachomatis from specimens collected from the anus in both boys and girls and from the vagina in girls. Limited information suggests that the likelihood of recovering Chlamydia from the urethra of prepubertal boys is too low to justify the trauma involved in obtaining an intraurethral specimen. A urethral specimen should be obtained if urethral discharge is present. Pharyngeal specimens for C. trachomatis also are not recommended for either sex because the yield is low, perinatally acquired infection may persist beyond infancy, and culture systems in some laboratories do not distinguish between C. trachomatis and C. pneumoniae.
Only standard culture systems for the isolation of C. trachomatis should be used. The isolation of C. trachomatis should be confirmed by microscopic identification of inclusions by staining with fluorescein-conjugated monoclonal antibody specific for C. trachomatis. Isolates should be preserved. Nonculture tests for chlamydia are not sufficiently specific for use in circumstances involving possible child abuse or assault. Data are insufficient to adequately assess the utility of nucleic acid amplification tests in the evaluation of children who might have been sexually abused, but expert opinion suggests that these tests may be an alternative when culture systems for C. trachomatis are unavailable, provided that positive results can be confirmed.
- Culture and wet mount of a vaginal swab specimen for T. vaginalis infection. The presence of clue cells in the wet mount or other signs, such as a positive whiff test, suggests BV in girls who have vaginal discharge. The significance of clue cells or other indicators of BV as an indicator of sexual exposure is unclear. The clinical significance of clue cells or other indicators of BV in the absence of vaginal discharge also is unclear.
- Collection of a serum sample to be evaluated immediately, preserved for subsequent analysis, and used as a baseline for comparison with follow-up serologic tests. Sera should be tested immediately for antibodies to sexually transmitted agents. Agents for which suitable tests are available include T. pallidum, HIV, and HBsAg. The choice of agents for serologic tests should be made on a case-by-case basis (see Examination 12 Weeks After Assault). HIV antibodies have been reported in children whose only known risk factor was sexual abuse. Serologic testing for HIV infection should be considered for abused children; the decision to test should be made on a case-by-case basis, depending on the likelihood of infection among the assailant(s). Data are insufficient concerning the efficacy and safety of postexposure prophylaxis among children. Vaccination for HBV should be recommended if the medical history or serologic testing suggests that it has not been received (see Hepatitis B).
# Examination 12 Weeks After Assault
An examination approximately 12 weeks after the last suspected sexual exposure is recommended to allow time for antibodies to infectious agents to develop if baseline tests are negative. Serologic tests for T. pallidum, HIV, and HBsAg should be considered. The prevalence of these infections differs substantially by community, and serologic testing depends on whether risk factors are known to be present in the abuser or assailant. In addition, results of HBsAg testing must be interpreted carefully, because HBV also can be transmitted nonsexually. The choice of tests must be made on an individual basis.
# Follow-Up
Follow-up cultures are unnecessary if ceftriaxone is used. If spectinomycin is used to treat pharyngitis, a follow-up culture is necessary to ensure that treatment was effective.
# Other Management Considerations
Only parenteral cephalosporins are recommended for use in children. Ceftriaxone is approved for all gonococcal infections in children; cefotaxime is approved for gonococcal ophthalmia only. Oral cephalosporins used for treatment of gonococcal infections in children have not been evaluated adequately.
All children who have gonococcal infections should be evaluated for coinfection with syphilis and C. trachomatis. For a discussion of concerns regarding sexual assault, refer to Sexual Assault or Abuse of Children.
# Ophthalmia Neonatorum Prophylaxis
Instillation of a prophylactic agent into the eyes of all newborn infants is recommended to prevent gonococcal ophthalmia neonatorum; this procedure is required by law in most states. All the recommended prophylactic regimens in this section prevent gonococcal ophthalmia. However, the efficacy of these preparations in preventing chlamydial ophthalmia is less clear, and they do not eliminate nasopharyngeal colonization by C. trachomatis. The diagnosis and treatment of gonococcal and chlamydial infections in pregnant women is the best method for preventing neonatal gonococcal and chlamydial disease. Not all women, however, receive prenatal care; ocular prophylaxis is therefore warranted because it can prevent sight-threatening gonococcal ophthalmia and because it is safe, easy to administer, and inexpensive.
# Prophylaxis Recommended Regimens
Silver nitrate (1%) aqueous solution in a single application, OR Erythromycin (0.5%) ophthalmic ointment in a single application, OR Tetracycline ophthalmic ointment (1%) in a single application.
One of these recommended preparations should be instilled into both eyes of every neonate as soon as possible after delivery. If prophylaxis is delayed (i.e., not administered in the delivery room), a monitoring system should be established to ensure that all infants receive prophylaxis. All infants should be administered ocular prophylaxis, regardless of whether delivery is vaginal or cesarean. Single-use tubes or ampules are preferable to multiple-use tubes. Bacitracin is not effective. Povidone iodine has not been studied adequately.
# Regimens
Treatment of genital warts should be guided by the preference of the patient, the available resources, and the experience of the health-care provider. None of the available treatments is superior to other treatments, and no single treatment is ideal for all patients or all warts.
The available treatments for visible genital warts are patient-applied therapies (i.e., podofilox and imiquimod) and provider-administered therapies (i.e., cryotherapy, podophyllin resin, trichloroacetic acid (TCA), bichloroacetic acid, interferon, and surgery). Most patients have from one to 10 genital warts, with a total wart area of 0.5-1.0 cm², that are responsive to most treatment modalities. Factors that might influence selection of treatment include wart size, wart number, anatomic site of the wart, wart morphology, patient preference, cost of treatment, convenience, adverse effects, and provider experience. Having a treatment plan or protocol is important, because many patients will require a course of therapy rather than a single treatment. In general, warts located on moist surfaces and/or in intertriginous areas respond better to topical treatment (e.g., TCA, podophyllin, podofilox, and imiquimod) than do warts on drier surfaces.
The treatment modality should be changed if a patient has not improved substantially after three provider-administered treatments or if warts have not completely cleared after six treatments. The risk-benefit ratio of treatment should be evaluated throughout the course of therapy to avoid overtreatment. Providers should be knowledgeable about, and have available to them, at least one patient-applied and one provider-administered treatment.
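Stated as a rule, the switching criterion above can be sketched as follows; the function and argument names are illustrative assumptions that mirror the two thresholds in the text:

```python
def should_change_modality(treatments, improved_substantially, cleared):
    """Mirror the switching rule above: change treatment after three
    provider-administered treatments without substantial improvement,
    or after six treatments without complete clearance."""
    return ((treatments >= 3 and not improved_substantially)
            or (treatments >= 6 and not cleared))

assert should_change_modality(3, improved_substantially=False, cleared=False)
assert not should_change_modality(4, improved_substantially=True, cleared=False)
assert should_change_modality(6, improved_substantially=True, cleared=False)
```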
Complications rarely occur if treatments for warts are employed properly. Patients should be warned that scarring in the form of persistent hypopigmentation or hyperpigmentation is common with ablative modalities. Depressed or hypertrophic scars are rare but can occur, especially if the patient has had insufficient time to heal between treatments. Treatment can result rarely in disabling chronic pain syndromes (e.g., vulvodynia or hyperesthesia of the treatment site).
# External Genital Warts, Recommended Treatments
Patient-Applied: Podofilox 0.5% solution or gel. Patients may apply podofilox solution with a cotton swab, or podofilox gel with a finger, to visible genital warts twice a day for 3 days, followed by 4 days of no therapy. This cycle may be repeated as necessary for a total of four cycles. The total wart area treated should not exceed 10 cm², and the total volume of podofilox should not exceed 0.5 mL per day. If possible, the health-care provider should apply the initial treatment to demonstrate the proper application technique and identify which warts should be treated. The safety of podofilox during pregnancy has not been established.
# OR
Imiquimod 5% cream. Patients should apply imiquimod cream with a finger at bedtime, three times a week for as long as 16 weeks. The treatment area should be washed with mild soap and water 6-10 hours after the application.

NOTE: Management of warts on rectal mucosa should be referred to an expert.
# Oral Warts
Cryotherapy with liquid nitrogen, OR Surgical removal.
# Follow-Up
After visible genital warts have cleared, a follow-up evaluation is not mandatory. Patients should be cautioned to watch for recurrences, which occur most frequently during the first 3 months. Because the sensitivity and specificity of self-diagnosis of genital warts is unknown, patients concerned about recurrences should be offered a follow-up evaluation 3 months after treatment. Earlier follow-up visits also may be useful a) to document a wart-free state, b) to monitor for or treat complications of therapy, and c) to provide the opportunity for patient education and counseling. Women should be counseled regarding the need for regular cytologic screening as recommended for women without genital warts. The presence of genital warts is not an indication for cervical colposcopy.
# Management of Sex Partners
Examination of sex partners is not necessary for the management of genital warts because the role of reinfection is probably minimal and, in the absence of curative therapy, treatment to reduce transmission is not realistic. However, because self- or partner-examination has not been evaluated as a diagnostic method for genital warts, sex partners of patients who have genital warts may benefit from examination to assess the presence of genital warts and other STDs. Sex partners also might benefit from counseling about the implications of having a partner who has genital warts. Because treatment of genital warts probably does not eliminate the HPV infection, patients and sex partners should be cautioned that the patient might remain infectious even though the warts are gone. The use of condoms may reduce, but does not eliminate, the risk for transmission to uninfected partners. Female sex partners of patients who have genital warts should be reminded that cytologic screening for cervical cancer is recommended for all sexually active women.
# Special Considerations Pregnancy
Imiquimod, podophyllin, and podofilox should not be used during pregnancy. Because genital warts can proliferate and become friable during pregnancy, many experts advocate their removal during pregnancy. HPV types 6 and 11 can cause laryngeal papillomatosis in infants and children. The route of transmission (i.e., transplacental, perinatal, or postnatal) is not completely understood. The preventive value of cesarean section is unknown; thus, cesarean delivery should not be performed solely to prevent transmission of HPV infection to the newborn. In rare instances, cesarean delivery may be indicated for women who have genital warts if the pelvic outlet is obstructed or if vaginal delivery would result in excessive bleeding.
# SEXUAL ASSAULT AND STDs
# Adults and Adolescents
The recommendations in this report are limited to the identification and treatment of sexually transmitted infections and conditions commonly identified in the management of such infections. The documentation of findings and collection of nonmicrobiologic specimens for forensic purposes and the management of potential pregnancy or physical and psychological trauma are not included. Among sexually active adults, the identification of sexually transmitted infections after an assault is usually more important for the psychological and medical management of the patient than for legal purposes, because the infection could have been acquired before the assault.
Trichomoniasis, BV, chlamydia, and gonorrhea are the most frequently diagnosed infections among women who have been sexually assaulted. Because the prevalence of these STDs is substantial among sexually active women, the presence of these infections after an assault does not necessarily signify acquisition during the assault. Chlamydial and gonococcal infections in women are of special concern because of the possibility of ascending infection. In addition, HBV infection, if transmitted to a woman during an assault, can be prevented by postexposure administration of hepatitis B vaccine.
# Evaluation for Sexually Transmitted Infections Initial Examination
An initial examination should include the following procedures:
- Cultures for N. gonorrhoeae and C. trachomatis from specimens collected from any sites of penetration or attempted penetration.
- If chlamydial culture is not available, nonculture tests, particularly nucleic acid amplification tests, are an acceptable substitute; nucleic acid amplification tests offer the advantage of increased sensitivity when confirmatory testing is available. If a nonculture test is used, a positive test result should be verified with a second test based on a different diagnostic principle. EIA and direct fluorescent antibody tests are not acceptable alternatives, because false-negative test results occur more often with these nonculture tests, and false-positive test results may occur.
- Wet mount and culture of a vaginal swab specimen for T. vaginalis infection. If vaginal discharge or malodor is evident, the wet mount also should be examined for evidence of BV and yeast infection.
- Collection of a serum sample for immediate evaluation for HIV, hepatitis B, and syphilis (see Prophylaxis, Risk for Acquiring HIV Infection and Follow-Up Examination 12 Weeks After Assault).
# Follow-Up Examinations
Although it is often difficult for persons to comply with follow-up examinations weeks after an assault, such examinations are essential a) to detect new infections acquired during or after the assault; b) to complete hepatitis B immunization, if indicated; and c) to complete counseling and treatment for other STDs. For these reasons, it is recommended that assault victims be reevaluated at follow-up examinations.
# Follow-Up Examination After Assault
Examination for STDs should be repeated 2 weeks after the assault. Because infectious agents acquired through assault may not have produced sufficient concentrations of organisms to result in positive test results at the initial examination, a culture (or cultures), a wet mount, and other tests should be repeated at the 2-week follow-up visit unless prophylactic treatment has already been provided.
Serologic tests for syphilis and HIV infection should be repeated 6, 12, and 24 weeks after the assault if initial test results were negative.
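Clinics that calendar these visits can compute the dates directly. A minimal sketch using Python's standard datetime module; the function name is hypothetical:

```python
from datetime import date, timedelta

def serology_followup_dates(assault_date):
    """Dates for repeat syphilis and HIV serology at 6, 12, and 24 weeks
    after the assault, per the schedule above."""
    return [assault_date + timedelta(weeks=w) for w in (6, 12, 24)]

for visit in serology_followup_dates(date(1998, 1, 5)):
    print(visit.isoformat())
```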
# Prophylaxis
Many experts recommend routine preventive therapy after a sexual assault. Most patients probably benefit from prophylaxis because follow-up of patients who have been sexually assaulted can be difficult, and they may be reassured if offered treatment or prophylaxis for possible infection. The following prophylactic regimen is suggested as preventive therapy:

- Postexposure hepatitis B vaccination (without HBIG) should adequately protect against HBV. Hepatitis B vaccine should be administered to victims of sexual assault at the time of the initial examination. Follow-up doses of vaccine should be administered 1-2 and 4-6 months after the first dose.
- An empiric antimicrobial regimen for chlamydia, gonorrhea, trichomonas, and BV should be administered.

# Recommended Regimen

Ceftriaxone 125 mg IM in a single dose, PLUS Metronidazole 2 g orally in a single dose, PLUS Azithromycin 1 g orally in a single dose or Doxycycline 100 mg orally twice a day for 7 days.

# NOTE:

For patients requiring alternative treatments, see the sections in this report that specifically address those agents.

The efficacy of these regimens in preventing gonorrhea, BV, or C. trachomatis genitourinary infections after sexual assault has not been evaluated. The clinician might counsel the patient regarding both the possible benefits of these regimens and their possible toxicity, including gastrointestinal side effects with this combination.

# Other Management Considerations

At the initial examination and, if indicated, at follow-up examinations, patients should be counseled regarding the following:

- Symptoms of STDs and the need for immediate examination if symptoms occur, and
- Abstinence from sexual intercourse until STD prophylactic treatment is completed.

# Risk for Acquiring HIV Infection

Although HIV-antibody seroconversion has been reported among persons whose only known risk factor was sexual assault or sexual abuse, the risk for acquiring HIV infection through sexual assault is low. The overall probability of HIV transmission from an HIV-infected person during a single act of intercourse depends on many factors, which may include the type of sexual intercourse (i.e., oral, vaginal, or anal); the presence of oral, vaginal, or anal trauma; the site of exposure to ejaculate; the viral load in ejaculate; and the presence of an STD.

In certain circumstances, the likelihood of HIV transmission also may be affected by postexposure therapy for HIV with antiretroviral agents. Postexposure therapy with zidovudine has been associated with a reduced risk for HIV infection in a study of health-care workers who had percutaneous exposures to HIV-infected blood. On the basis of these results and the biologic plausibility of the effectiveness of antiretroviral agents in preventing infection, postexposure therapy has been recommended for health-care workers who have percutaneous exposures to HIV. However, whether these findings can be extrapolated to other HIV-exposure situations, including sexual assault, is unknown. A recommendation cannot be made, on the basis of available information, regarding the appropriateness of postexposure antiretroviral therapy after sexual exposure to HIV.

Health-care providers who consider offering postexposure therapy should take into account the likelihood of exposure to HIV, the potential benefits and risks of such therapy, and the interval between the exposure and the initiation of therapy. Because timely determination of the HIV-infection status of the assailant is not possible in many sexual assaults, the health-care provider should assess the nature of the assault, any available information about HIV-risk behaviors exhibited by assailants (e.g., high-risk sexual practices and injecting-drug or crack cocaine use), and the local epidemiology of HIV/AIDS. If antiretroviral postexposure prophylaxis is offered, the following information should be discussed with the patient: a) the unknown efficacy and known toxicities of antiretrovirals, b) the critical need for frequent dosing of medications, c) the close follow-up that is necessary, d) the importance of strict compliance with the recommended therapy, and e) the necessity of immediate initiation of treatment for maximal likelihood of effectiveness. If the patient decides to take postexposure therapy, clinical management of the patient should be implemented according to the guidelines for occupational mucous membrane exposure.
# Evaluation for Sexually Transmitted Infections
Examinations of children for sexual assault or abuse should be conducted so as to minimize pain and trauma to the child. The decision to evaluate the child for STDs must be made on an individual basis. Situations involving a high risk for STDs and a strong indication for testing include the following:
- A suspected offender is known to have an STD or to be at high risk for STDs (e.g., has multiple sex partners or a history of STD).
- The child has symptoms or signs of an STD or of an infection that can be sexually transmitted.
- The prevalence of STDs in the community is high.

Other indications recommended by experts include a) evidence of genital or oral penetration or ejaculation, or b) STDs in siblings or other children or adults in the household. If a child has symptoms, signs, or evidence of an infection that might be sexually transmitted, the child should be tested for other common STDs. Obtaining the indicated specimens requires skill to avoid psychological and physical trauma to the child. The clinical manifestations of some STDs differ in children as compared with adults. Examinations and specimen collections should be conducted by practitioners who have experience and training in the evaluation of abused or assaulted children.
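The three situations in this list amount to a simple any-of rule; as a sketch (the function and argument names are our shorthand, not terms from the report):

```python
def strong_indication_for_testing(offender_std_or_high_risk,
                                  child_signs_or_symptoms,
                                  community_prevalence_high):
    """Any one of the three situations listed above is a strong
    indication for STD testing of the child."""
    return (offender_std_or_high_risk
            or child_signs_or_symptoms
            or community_prevalence_high)

assert strong_indication_for_testing(False, True, False)
```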
A principal purpose of the examination is to obtain evidence of an infection that is likely to have been sexually transmitted. However, because of the legal and psychosocial consequences of a false-positive diagnosis, only tests with high specificities should be used. The additional cost of such tests and the time required to conduct them are justified.
The scheduling of examinations should depend on the history of assault or abuse. If the initial exposure was recent, the infectious agents acquired through the exposure may not have produced sufficient concentrations of organisms to result in positive test results. A follow-up visit approximately 2 weeks after the most recent sexual exposure should include a repeat physical examination and collection of additional specimens. To allow sufficient time for antibodies to develop, another follow-up visit approximately 12 weeks after the most recent sexual exposure may be necessary to collect sera. A single examination may be sufficient if the child was abused for an extended time period or if the last suspected episode of abuse occurred well before the child received the medical evaluation.
The following recommendation for scheduling examinations is a general guide. The exact timing and nature of follow-up contacts should be determined on an individual basis and should be considerate of the child's psychological and social needs. Compliance with follow-up appointments may be improved when law enforcement personnel or child protective services are involved.
# Presumptive Treatment
The risk for a child's acquiring an STD as a result of sexual abuse has not been determined. The risk is believed to be low in most circumstances, although documentation to support this position is inadequate.
Presumptive treatment for children who have been sexually assaulted or abused is not widely recommended because girls appear to be at lower risk for ascending infection than adolescent or adult women, and regular follow-up usually can be ensured. However, some children, or their parent(s) or guardian(s), may be concerned about the possibility of infection with an STD, even if the risk is perceived by the health-care provider to be low. Patient or parental/guardian concerns may be an appropriate indication for presumptive treatment in some settings (i.e., after all specimens relevant to the investigation have been collected).
# Reporting
Every state, the District of Columbia, Puerto Rico, Guam, the U.S. Virgin Islands, and American Samoa have laws that require the reporting of child abuse. The exact requirements differ by state, but, generally, if there is reasonable cause to suspect child abuse, it must be reported. Health-care providers should contact their state or local child-protection service agency about child abuse reporting requirements in their areas.
"id": "f6a6cf0f04be76ad996d299639af661b7374d716",
"source": "cdc",
"title": "None",
"url": "None"
} |
# I. RECOMMENDATIONS FOR AN ACRYLAMIDE STANDARD

Sufficient technology exists to permit compliance with the recommended standard. Although the workplace environmental limit is considered to be a safe level based on current information, it should be regarded as the upper boundary of exposure, and every effort should be made to maintain exposure at levels as low as is technically feasible. The criteria and standard will be subject to review and revision as necessary.
Synonyms for acrylamide include propenamide, acrylic amide, and akrylamid. The terms "acrylamide" or "acrylamide monomer" are used in this document interchangeably. "Action level" is defined as a time-weighted average (TWA) concentration of one-half the environmental limit.
"Occupational exposure to acrylamide," because of systemic effects and dermal irritation produced by contact of acrylamide with the skin, is defined as work in an area where acrylamide is stored, produced, processed, or otherwise used, except as an unintentional contaminant in other materials at a concentration of less than 1% by weight.
If an employee is occupationally exposed to airborne concentrations of acrylamide in excess of the action level, then all sections of the recommended standard shall be complied with; if the employee is occupationally exposed at or below the action level, then all sections of the recommended standard shall be complied with except Section 8.
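Taken together, the action-level definition and this applicability rule reduce to a little arithmetic. The sketch below is illustrative only: the function names, the assumption that the standard comprises Sections 1-8, and the example limit value are ours, not the standard's:

```python
def action_level(twa_limit):
    """The action level is one-half the TWA environmental limit."""
    return 0.5 * twa_limit

def applicable_sections(twa_exposure, twa_limit):
    """Exposure above the action level triggers every section of the
    recommended standard; at or below it, Section 8 is excluded."""
    sections = [f"Section {i}" for i in range(1, 9)]
    if twa_exposure > action_level(twa_limit):
        return sections
    return [s for s in sections if s != "Section 8"]

# Placeholder limit value for illustration only; the standard's numeric
# limit is specified in Section 1.
print(applicable_sections(twa_exposure=0.1, twa_limit=0.3))
```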
# Section 1 - Environmental (Workplace Air)

(a) Concentration

The employer shall control workplace concentrations of acrylamide so that no employee is exposed at a concentration greater than 0.

(b) Sampling and Analysis

Procedures for the collection and analysis of environmental samples shall be as provided in Appendices I and II, or by any method shown to be at least equivalent in accuracy, precision, and sensitivity to the methods specified.
# Section 2 -Medical
Medical surveillance shall be made available to all persons subject to occupational exposure to acrylamide as described below.
(a) Preplacement medical examinations shall include:
(1) Comprehensive medical and work histories, with special emphasis on such areas as weight loss and neurologic disturbances.
(2) Complete physical examination giving particular attention to the skin, eyes, and nervous system.
(3) A judgment of the worker's ability to use positive- or negative-pressure respirators.
(b) Periodic examinations shall be made available on an annual basis, or as otherwise determined by the responsible physician. These examinations shall include at least:
(1) Interim medical and work histories.
(2) Weekly examination by trained personnel of the fingertips of hands and other portions of the body exposed to acrylamide for evidence of skin peeling.
(3) Physical examination as outlined in paragraph (a)(2) of this section.
(c) In an emergency involving exposure to acrylamide, all affected personnel shall be provided immediate first-aid assistance and prompt medical attention, especially with respect to the skin and eyes. Medical attendants shall be informed of the need for observation and follow-up for any delayed neurologic effects.
(d) In the event of skin contact with acrylamide, grossly contaminated clothing and shoes shall be removed. Any exposed body area shall be immediately and thoroughly washed with soap and water. In the case of eye contact with acrylamide, the eyes shall be flushed with copious amounts of water and a physician shall be consulted promptly.

# Section 4 - Personal Protective Equipment and Clothing

(a) Protective Clothing

(1) Appropriate protective clothing, including gloves, aprons, long-sleeved overalls, footwear, and face shields (8-inch minimum), shall be worn where needed to limit skin contact with acrylamide.
Impervious clothing may be needed in specialized operations. Appropriate eye protection (chemical safety goggles or face shields and safety glasses with side shields) shall be worn in any operation in which acrylamide (solid, liquid, or spray) may come in contact with eyes.
(2) The employer shall provide the employee with the appropriate equipment specified in paragraph (a)(1) of this section.
(b) Respiratory Protection

(1) Engineering controls shall be used if needed to keep acrylamide concentrations at or below the TWA environmental limit.
Respiratory protective equipment may be used:
(A) During the time necessary to install or test the required engineering controls.
(B) During emergencies or during the performance of nonroutine maintenance or repair activities which may cause exposures at concentrations in excess of the TWA environmental limit.
(2) When a respirator is permitted by paragraph (b)(1) of this section, it shall be selected and used pursuant to the following requirements:
(A) The employer shall establish and enforce a respiratory protective program meeting the requirements of 29 CFR 1910.134.
(B) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employee uses the respirator provided when necessary.
The respiratory protective devices provided in conformance with Table 1-1 shall comply with the standards jointly approved by NIOSH and the Mining Enforcement and Safety Administration (formerly Bureau of Mines) as specified under the provisions of 30 CFR 11.
(C) Respirators specified for use in higher concentrations of acrylamide may be used in atmospheres of lower concentrations.
(D) The employer shall ensure that respirators are adequately cleaned and maintained, and that employees are instructed in the proper use and testing for leakage of respirators assigned to them.
(E) Respirators shall be easily accessible, and employees shall be informed of their location.

Table 1-1. Respiratory protection for acrylamide

Lower concentrations:
(1) Supplied-air respirator, demand mode, with full facepiece
(2) Self-contained breathing apparatus, demand mode, with full facepiece

Less than or equal to 100 ppm (300 mg/cu m):
(1) Supplied-air respirator, continuous-flow type or pressure-demand (positive-pressure) mode, with half-mask or full facepiece
(2) Supplied-air respirator, continuous-flow type, with hood, helmet, or suit

Greater than 100 ppm (300 mg/cu m):
(1) Self-contained breathing apparatus with full facepiece, operated in pressure-demand or other positive-pressure mode
(2) Combination Type C supplied-air respirator with full facepiece, operated in pressure-demand mode, with an auxiliary self-contained air supply

Emergency entry (into an area of unknown concentration):
(1) Self-contained breathing apparatus with full facepiece, operated in pressure-demand or other positive-pressure mode
(2) Combination Type C supplied-air respirator with full facepiece, operated in pressure-demand mode, with an auxiliary self-contained air supply

Escape (from an area of unknown concentration):
(1) Gas mask, full facepiece, equipped with a combination organic vapor canister and a high-efficiency filter
(2) Self-contained breathing apparatus, operated in either demand or pressure-demand mode
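Read as a decision rule, the table maps the expected concentration (or lack of knowledge of it) to permitted devices. The following sketch paraphrases Table 1-1; the function shape and names are mine, and the lowest concentration band is omitted because its header did not survive conversion:

```python
def select_respirators(concentration_ppm=None, escape=False):
    """Permissible respirators per Table 1-1 (illustrative paraphrase).

    concentration_ppm: expected acrylamide concentration, or None if unknown.
    escape: True when escaping from an area of unknown concentration.
    """
    if escape:
        return ["gas mask, full facepiece, with combination organic vapor "
                "canister and high-efficiency filter",
                "self-contained breathing apparatus, demand or "
                "pressure-demand mode"]
    if concentration_ppm is None or concentration_ppm > 100:
        # Emergency entry into unknown concentrations and work above
        # 100 ppm (300 mg/cu m) call for the same positive-pressure devices.
        return ["SCBA, full facepiece, pressure-demand or other "
                "positive-pressure mode",
                "combination Type C supplied-air respirator, full facepiece, "
                "pressure-demand mode, with auxiliary self-contained supply"]
    return ["supplied-air respirator, continuous-flow or pressure-demand "
            "mode, with half-mask or full facepiece",
            "supplied-air respirator, continuous-flow type, with hood, "
            "helmet, or suit"]
```

Consistent with paragraph (C) above, devices specified for higher concentrations remain acceptable at lower ones.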
(F) In case of an accident that could result in employee exposure to acrylamide in excess of the environmental limit, the employer shall provide respiratory protection as listed in Table 1-1.
# Section 5 - Informing Employees of Hazards from Acrylamide

(a) The employer shall ensure that each employee occupationally exposed to acrylamide is informed, at the beginning of employment or on assignment to an acrylamide area, of the hazards, relevant symptoms (such as skin peeling, numbness or "pins and needles" in the fingers, sleepiness, loss of weight, and weakness), appropriate emergency procedures, and proper conditions and precautions for the safe use of acrylamide. Personnel engaged in maintenance and repair shall be included in these training programs.
The employee shall be reinformed at least once a year. Each employee shall be advised of the availability of relevant information kept on file, including the material safety data sheet.

# Section 6 - Work Practices

(a) (1) Ventilation systems, if used, shall be designed to prevent the accumulation or recirculation of acrylamide in the workplace, to maintain acrylamide concentrations at or below the recommended environmental limit, and to effectively remove acrylamide from the breathing zones of employees.
Ventilation systems shall be subject to regular preventive maintenance and cleaning to ensure effectiveness, which shall be verified by periodic performance measurements.
(2) A partially enclosed, ventilated, and automated system should be used to empty and transfer bags of solid acrylamide into a bin, so that dust is effectively removed. The bag should be cut open automatically, and any dust should be removed by local exhaust ventilation.
(3) Concrete floors in operations areas shall be sealed in a manner that minimizes permeation of acrylamide into the concrete.

(2) Acrylamide contact with the skin and eyes of workers shall be prevented. Equipment, walls, and floors should be kept clean to limit worker exposure.
(3) Prior to maintenance work, sources of acrylamide and its vapor shall be eliminated to the extent feasible. If concentrations at or below the recommended workplace environmental limit cannot be ensured, respiratory protective equipment as specified in Table 1-1 shall be used during such maintenance work.

(d) Confined Spaces

(2) Individuals entering confined spaces where they may be exposed to acrylamide shall wear respirators as outlined in Section 4.
(3) Confined spaces shall be ventilated while work is in progress to keep the concentration of acrylamide at or below the workplace environmental limit.
(4) When a person enters a confined space, another properly protected worker shall be on standby outside.
# (e) Emergency Procedures
For all work areas where there is a reasonable potential for accidents involving acrylamide, the employer shall take all necessary steps to ensure that employees are instructed in and follow the procedures specified below and any others appropriate for a specific operation or process.
(1) Procedures shall include prearranged plans for obtaining emergency medical care and for the necessary transportation of injured workers.
Employees shall also be trained in administering immediate first aid and shall be prepared to render such assistance when necessary.
(2) Approved eye, skin, and respiratory protection as specified in Section 4 shall be used by persons involved in cleanup of the accident site.
(3) All persons who may be required to shut off sources of acrylamide, clean up spills, and repair leaks shall be properly trained in emergency procedures and shall be adequately protected against attendant hazards from exposure to acrylamide.
(4) Employees not essential to clean-up operations shall be evacuated from exposure areas during emergencies. Perimeters of hazardous exposure areas shall be delineated, posted, and secured.
(5) Eyewash fountains and showers shall be provided in accordance with 29 CFR 1910.151.
If an employee is found to be exposed to acrylamide in excess of the recommended TWA environmental limit, the exposure of that employee shall be measured at least once a week, control measures shall be initiated, and the employee shall be notified of the exposure and of the control measures being implemented. Such monitoring shall continue until two consecutive determinations, at least 1 week apart, indicate that the employee's exposure no longer exceeds the recommended environmental limit; routine monitoring may then be resumed.
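The rule above amounts to a small state machine: weekly measurement continues until two consecutive weekly determinations fall at or below the limit. A sketch under that reading (names are illustrative):

```python
ENV_LIMIT_MG_M3 = 0.3  # recommended TWA environmental limit

def monitoring_mode(weekly_twa_results):
    """Return 'routine' once two consecutive weekly results are at or
    below the limit; otherwise remain in 'weekly' monitoring."""
    consecutive_ok = 0
    for result in weekly_twa_results:
        consecutive_ok = consecutive_ok + 1 if result <= ENV_LIMIT_MG_M3 else 0
        if consecutive_ok >= 2:
            return "routine"
    return "weekly"

# Example: one exceedance followed by two compliant weeks -> "routine"
print(monitoring_mode([0.45, 0.28, 0.21]))
```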
# (b) Recordkeeping
Records of environmental monitoring shall be kept by the employer for at least 20 years. These records shall include the dates of measurements.

Polymerization of acrylamide, the most popular and commercially useful method of converting the monomer, is carried out by the use of free-radical initiators or redox catalytic systems. Molten acrylamide polymerizes vigorously with the evolution of heat.
Acrylonitrile is the major starting material used in all industrial methods for the manufacture of acrylamide. The starting material is mixed with sulfuric acid in an exothermic reaction and then diluted with water. Acrylamide is then prepared from the acrylamide sulfate by the lime process, ammonia process, carbonate process, or ion-exchange process.
Acrylamide is difficult to recover at the aqueous stage since it may polymerize or undergo hydrolysis. Various processes have been devised by manufacturers to facilitate the recovery and to control the amount of heat generated in the procedure. Recently, a few manufacturers have developed pollution- and byproduct-free processes for direct production of acrylamide monomer via hydration of acrylonitrile over a catalyst.
Acrylamide monomer production has been estimated to have amounted to about 15-20 million pounds in 1966, 30 million in 1969, 32 million in 1972, 40 million in 1973, and approximately 70 million in 1974 . During the past 20 years, the use of acrylamide polymers has also increased very rapidly with the increased production of acrylamide monomer . In 1972, about 35 million pounds of acrylamide polymers were used in the United States alone. These are the latest years for which data are available.
The major use of acrylamide monomer is in the production of polymers . Aqueous solutions of the monomers and a redox catalyst are used for soil stabilization.
Polyacrylamides are very effective flocculants for finely divided solids in aqueous suspensions. AM-9 has found increasing application as a chemical grout. The largest market for acrylamide now is in the manufacture of polyacrylamides for waste and water treatment flocculants, in products for aiding sewage dewatering, and in a variety of products for the water treatment industry. These uses consumed more than 40% of the acrylamide used in 1973 . Acrylamide is also used to prepare polyacrylamides, which are used as strengtheners in papermaking and retention aids (to keep the fibers from being washed away). The pulp and paper industry accounted for about 20% of the acrylamide consumption in 1973. Some other uses of polyacrylamides are drilling-mud additives, textile treatment, and surface coatings.
In very small quantities, acrylamide polymers are also used for flocculation of ores, mine tailings, and coal; friction reduction; thickening; soil stabilization; gel chromatography and electrophoresis; photography; fog dissipation; breaking of oil-in-water emulsions; dyeing; ceramics; and clarification and treatment of potable water and foods.
Several other uses for monomeric acrylamide have been proposed by various investigators. Compounds such as N,N'-ethylene-bis-acrylamide and some bromo combinations have shown promise as antitumor agents in mice, in tomato plant tumors, and in plant tissue cultures.
NIOSH estimates that approximately 20,000 workers are potentially exposed to acrylamide in the United States.
Interest was not shown in the preparation and properties of acrylamide until acrylonitrile became commercially available in 1940. It was first offered by American Cyanamid Company for developmental consideration in 1952, and the company began manufacture for the commercial market in 1954.
It was not until the advent of large-scale commercial production that some pharmacologic and toxicologic experiments were initiated by American Cyanamid Company at Hazleton Laboratories. After single large oral doses, death occurred as a consequence of convulsions and asphyxia in mice, rats, rabbits, and dogs. However, after repeated administration of acrylamide, these animals developed a neurologic syndrome characterized by postural and motor incoordination.
The single-dose toxicity of monomeric acrylamide in animals was also reported by Druckrey et al in 1953.
The so-called average lethal dose by intraperitoneal (ip) injection was reported to be 120 mg/kg in rats which died within 1 or 2 days with severe pulmonary obstruction.
A toxicologic study reported by Hamblin in 1956 showed that the oral administration of acrylamide monomer produced neuropathy in mice, rats, and dogs.
# Effects on Humans
In 1953, a new method of acrylamide production was undertaken at an American Cyanamid Company manufacturing plant where acrylonitrile was hydrated by sulfuric acid to form acrylamide sulfate, after which it was neutralized by ammonia, yielding free acrylamide. About 5 months after the new process was begun, a "handful" of the hundreds of potentially exposed plant workers noticed numbness and tingling of their hands and weakness of their hands and legs. Dermal contact of the workers was thought to have been limited because they wore protective clothing and gloves. The air in the plant was sampled, and only traces of acrylamide were found. However, by the use of the detection methods then in use (not described), it was calculated that the maximum amount of acrylamide which could have been inhaled by one worker in a 5-month period was approximately 1.8 mg/kg. The whole manufacturing process was altered, and the exposure of the workers to acrylamide was reduced or eliminated. Further details were not presented. In some instances there was evidence of CNS involvement.
A pattern of reactions emerged when signs, symptoms, and results of neurologic examinations of the workers were compiled and compared [18,21]: early complaints included sleepiness and, in two people who had drunk alcoholic beverages, complete collapse; later came emotional changes and, finally, reactions typical of overt peripheral neuropathy, including a positive Romberg's sign [23,24] and loss of vibration and position senses [23].

Some of the incidents of acrylamide intoxication which occurred in Japan were reported by Fujita et al. They described in considerable detail the signs and symptoms which resulted from acrylamide exposure in 10 workers in one factory. Nine men and one woman ranging in age from 17 to 43 years were exposed at a pilot plant where manufacturing procedures were being developed. The length of employment in that plant varied from 3 months upward, and the workers showed the typical signs and symptoms of acrylamide intoxication, relating more to the legs than to the arms. Of the three workers who were hospitalized because they were the most severely affected, one had been employed for 12 months and the other two for 3 months in their present jobs. Those three workers had, in addition to the typical reactions (tremor and numbness of the hands and feet, dizziness, loss of the knee jerk, heavy feeling of the legs, staggering, and positive Romberg's sign), emotional changes which were also somewhat similar to, but very much less severe than, those of the family reported by Igisu et al, which is described in detail later.
The symptoms were attributed to the presence of the peripheral neuropathy and the desquamation and sensitivity of the soles of the feet, which may also have been in contact with acrylamide. Fujita et al also found sufficient signs and symptoms to postulate that the CNS and probably the cerebellum, in addition to peripheral nerves, were involved in acrylamide intoxication. All 10 workers improved during about 4 months of rest and supportive treatment.
Takahashi et al described the reactions of 13 factory and 2 laboratory workers who were exposed to acrylamide from 2 months to 8 years.
All of the workers were males aged 18-32 years. They were exposed during the polymerization of the monomer in the manufacture of papercoating materials. The described reactions conformed to the typical ones (numbness of lower limbs, ataxia, dizziness, gastrointestinal upset, and hand peeling).
The authors concluded that, although peripheral neuropathy was one of the most important effects in the patients, a few CNS effects also may have been present. When the work environment was changed to limit or prevent contact with acrylamide, the workers gradually recovered; however, the authors did not describe the controls used. Takahashi et al also performed special studies in which motor and sensory nerve conduction velocities and action potentials were determined in some of the peripheral nerves in the arms and legs of the affected workers. The motor nerves tested had essentially normal reactions, whereas the sensory nerves had decreased action potential amplitude. The authors suggested that the defect in the action potential would precede decreases in conduction velocity and that this indicated sensory nerve injury.
Garland and Patterson described six cases of acrylamide intoxication in workers in three factories in Great Britain where acrylamide flocculants were produced. The workers, all men and aged 19-59 years, had worked in the factories for periods varying from 1 to over 12 months. Although limited details of the medical histories and examinations were reported, the authors suggested that what was recorded agreed with some of the signs and symptoms of the typical reactions (increased sweating of feet and hands, fatigue, muscle weakness and pain, hand peeling, sensory loss, and positive Romberg's sign). The authors stated that all of the men recovered after they were removed from exposure, although it took from 2 to 12 months. Garland and Patterson interpreted that, because of the sleepiness of the men, the midbrain, as well as peripheral nerves, was involved. Fullerton studied nerve conduction velocities in three of the patients reported by Garland and Patterson while they were recovering from the ill effects of exposure.
Maximal motor nerve conduction in the distal ends of median and ulnar nerves was found to be normal or slightly reduced and response to nerve stimulation in small muscles was dispersed irregularly. Fullerton suggested that those changes were caused by degeneration and regeneration of the distal nerves (nerve endings near muscles). The action potentials of the sensory nerves were also decreased or absent; the author indicated that the peripheral sensory nerves had been more severely damaged than their associated motor nerves.
In addition to the determination of the neurophysiologic phenomena, Fullerton microscopically examined biopsy specimens of nerves from the calf muscles of two of the three patients. The author concluded that simultaneous nerve degeneration and regeneration occurred immediately after, and probably during, acrylamide exposure, and that most probably nerve function was impaired before structural changes were evident. Fullerton and Barnes obtained qualitatively similar results in adult Porton-strain albino rats.
Fullerton and Barnes also observed that acrylamide administered to rats at a single dose of 100 mg/kg by stomach tube produced only fine tremors, but when repeated in 24 hours killed most of the animals within 3 days. General weakness was also observed in the dying rats. When intubated with 50-mg/kg doses, 12 times over a 15-day period, all rats (number not specified) developed severe weakness and died within a few days after the final dose. At necropsy, many rats (both males and females) had gross distention of the bladder. Ten-week-old rats given acrylamide orally at a dose of 25 mg/kg/day, 5 days/week, developed the first signs of weakness after the fourth week. A dose of 10 mg/kg/day, 5 days/week, given for about 5 months, produced no signs of toxicity in six female rats.
Fullerton and Barnes also measured motor nerve conduction velocity in the fibers supplying the small muscles on the plantar surface of the hind paws in Porton-strain albino rats. Nerve conduction velocity in control animals was 56 (SD 5.8) meters/second, compared with 44 (SD 2.2) meters/second in the 11 of 15 rats that were fed acrylamide in their diets (200 ppm for 6 months or 400 ppm for 2-3 months) and showed severe neurologic abnormalities of the hindlimbs. The conduction velocities were normal in the remaining four rats, which were recovering from the severe leg weakness.
The influence of age on acrylamide-induced leg weakness was investigated by Fullerton and Barnes in groups of six rats (sex not specified) aged 5, 8, 26, and 52 weeks and fed 100 mg/kg of acrylamide at weekly intervals. After four doses, the youngest animals were only mildly affected, whereas those aged 26 weeks at the start of study were severely affected. The 52-week-old rats were severely affected after only three doses.
The authors stated that, when acrylamide feeding was discontinued, the recovery in young animals, which had shown weakness for only a few weeks, was rapid and complete. For the older rats, in which weakness had been present for months, recovery was slow, and mild ataxia continued for some months.
The effects of orally administered acrylamide on dogs were briefly reported by Hamblin in 1956, but recently have been studied in more detail by Thomann et al . In the earlier investigation , groups of two dogs each were given acrylamide at doses of 1 mg/kg/day for 19 weeks, 5 mg/kg/day for 5 weeks, or 8 mg/kg/day for 4 weeks without overt signs of toxicity. However, a dose of 10 mg/kg/day for 4-5 weeks produced incoordination, weakness of the extremities, and impaired righting reflex.
A single dose of 100 mg/kg produced these same effects in 24 hours. In the 1974 report by Thomann et al , experiments were performed on 6-to 12-month-old beagles. The dogs were given acrylamide orally in gelatin capsules in daily doses of 5 or 15 mg/kg. The first group (three males and three females) received 5 mg/kg/day for 60 days; the second group (five males and five females) received 15 mg/kg/day for 22 days. After the first 3 weeks of the experiment, the dogs in the 5-mg/kg/day group were inactive and had muscular weakness, which was particularly noticeable in the jaw muscles.
In addition to muscular weakness, the animals in the 15-mg/kg/day group had dilated pupils, conjunctivitis, salivation, difficulty in breathing, muscular stiffness of the hindlegs, and muscle twitching.

Hamblin briefly described the effects on the growth of albino rats given acrylamide in the diet (10, 50, 100, or 300 ppm).
No effects were reported at the 10-and 50-ppm levels. Diets containing 100 and 300 ppm of acrylamide produced growth retardation within 6 and 4 weeks, respectively.
Fullerton and Barnes studied the effects of acrylamide on 6-to 8-week-old male albino rats. The animals were fed diets containing 100, 200, 300, or 400 ppm of acrylamide. According to the authors , these represented daily intakes of about 6-9, 10-14, 15-18, or 20-30 mg/kg, respectively. Control rats received a similar diet without acrylamide.
Rats on the acrylamide diets developed slight leg weakness as follows: at 400 ppm after 3 weeks, at 300 ppm after 4 weeks, at 200 ppm after 12 weeks, and at 100 ppm after 40 weeks.
Severe leg weakness developed in all except the 100-ppm rats on continuation of dietary treatment; the slight leg weakness observed at week 40 did not increase during the remaining 8 weeks of the experiment. The only macroscopic findings at necropsy were wasting of the hindlimb musculature and distended urinary bladders in all rats.
Axons and myelin sheaths of the sciatic, tibial, median, and ulnar nerves examined microscopically at necropsy showed extensive degeneration in the peripheral nerves of all the clinically affected animals. Microscopic examination of kidney, spleen, pancreas, adrenal, lung, brain, and spinal cord tissues showed no abnormalities.
McCollister et al studied the effects of acrylamide given at low concentrations in the diet. Groups of 10 male and 10 female 8-week-old rats of the Dow Wistar strain were maintained on diets containing 3, 9, or 30 ppm acrylamide for 90 days, and then killed and necropsied.
As judged by their appearance, behavior, growth, mortality, organ weights, and microscopic examination of tissues, there was no evidence of adverse effects in either male or female rats. No signs of neurotoxicity were seen in two other groups of animals maintained on diets containing either 70 or 110 ppm of acrylamide for 189 days. In the same experiment, McCollister et al also studied the effects on male and female rats of acrylamide given at high concentrations in the diet. At a 300-ppm acrylamide dietary level, the rats began to show loss of control of the hindquarters after 21 days.
By day 42, all 10 males and 6 of 10 females were dead. Loss of hindquarter control was seen at 14 days in rats maintained on a diet containing 400 ppm of acrylamide. According to the authors, doses of 3, 9, 30, 70, 90, 110, 300, and 400 ppm of acrylamide in the diet were equivalent to 0.3, 0.9, 3, 7, 9, 11, 30, and 40 mg/kg/day, respectively.

In cats given acrylamide in the diet, slight weakness of the hindlimbs appeared after a period of weeks and progressed at variable rates to paralysis of the hindlimbs. Atrophy of the thigh and leg muscles was noticeable in severely paralyzed cats, and a few cats also had weakness of the forelimbs. The authors reported that the cat cries became coarse, indicating possible involvement of the laryngeal nerves.
Cats showed marked improvement when returned to their normal diet.
Hindlimb strength was regained within 2-3 weeks, but complete recovery took several months and was directly related to the severity of the involvement.
Microscopic examination of the nerves revealed degeneration of the myelin and axons of all four limbs. There was a suggestion of actual axon loss in the distal third or fourth portion of the tibial nerve fibers. Atrophy was evident grossly in nearly all muscles of the caudal limbs; however, microscopically, it was marked only in the digital muscles.

Another study examined the effect of acrylamide on the body weights of male Porton-strain albino rats weighing about 200 g. Acrylamide was injected ip, twice a week, for 1 month. At 32 days after the first injection, there was a 28% reduction in body weight in rats injected with 50 mg/kg; those injected with 100 mg/kg showed a weight reduction of 63%.
Rats in both groups were ataxic after 2 weeks.
Suzuki and Pfaff studied the effects of acrylamide in white Osborne-Mendel strain suckling and adult rats. One group consisted of 30 suckling (1-day-old) rats weighing 5-8 g and the other of 28 adult rats weighing 150-300 g. The animals received ip injections of 50 mg/kg of acrylamide in saline, three times a week, for up to 18 injections; two additional adult rats each received a total of 26 injections. Controls were injected with saline only. Suckling rats, both experimental and control, gained weight normally until their fifth or sixth injection, when weight gains of the acrylamide-injected animals slowed down. Slight weakness of the hindlimbs, noticeable in some of the young animals after five or six acrylamide injections, became more pronounced until the rats could no longer stand. In contrast to the results obtained in suckling rats, the body weights of the adults did not change. Weakness of the hindlimbs, noticeable after 7 or 8 injections, was followed by complete paralysis after 15-17 injections. At this time, wasting of the musculature of the hindlimbs was prominent. Weakness of the forelimbs was also noted in some rats. In the animals for which acrylamide treatment was terminated after the 16th injection, weakness of the extremities persisted for about 1 month but, in animals given 26 injections, it persisted for about 2 months.
Animals with clinical signs of neuropathy showed prominent distention of the urinary bladder in pups and adults at autopsy. The other organs were congested, but otherwise normal. With electron microscopy, the authors found that the most common feature in suckling rats given nine injections of acrylamide was axons filled with fine filaments. Very few changes were noted in the adult rats killed after 10 or 12 injections. Many axons of the sciatic nerve were filled with neurofilaments; however, the myelin sheaths appeared normal.
In addition to accumulations of neurofilaments, degeneration of axons and myelin sheaths was observed in adult rats receiving 15-18 injections.
Sciatic nerves and their branches in adult rats which had received 26 injections had numerous Schwann cells and macrophages containing many myelin figures and fat droplets but very few myelinated fibers. There were many Schwann cells in the sections, but few of these sections contained axons. Microscopic examination of the nerves of adult rats killed 20 or 30 days after the last injection showed numerous axonal sprouts growing within the Schwann cells.
Suzuki and Pfaff concluded that, since degenerative changes of sciatic nerve axons seen only in adult rats in advanced stages of neuropathy were also frequently observed in suckling rats at the onset of paralysis, the peripheral nerves of suckling rats were more susceptible to acrylamide than were those of adults. The authors suggested that the higher susceptibility of suckling rats could be a result of the incomplete development of the barrier system of peripheral nerves.

In studies of the incorporation of radioactively labeled amino acids into proteins of rats fed acrylamide, no difference between treated and control animals was seen at the earliest sampling intervals. However, more radioactivity from labeled lysine was counted in proteins of the spinal cords of treated than of control rats after 4 weeks of feeding, and the difference continued to increase, particularly in the lower cord, until 6 or 8 weeks.
In the sciatic nerve, a slight decrease in radioactivity was noted after 2 or 3 weeks, but was followed by a larger increase beginning at 4 weeks.
The effects of acrylamide on the incorporation of 35S-methionine into proteins were studied at weekly intervals from weeks 2 to 6 of feeding .
The incorporation by the controls was highest at all times in the sciatic nerve followed by the liver, brain cortex, and spinal cord. A significant increase in incorporation was demonstrated at 6 weeks after start of the feeding in the spinal cord and sciatic nerve, but not in the brain or liver.
In contrast to the results obtained with lysine, when methionine was used, no decrease in radioactivity was seen in the sciatic nerve protein at the early stages.

Six male Porton-strain albino rats weighing about 200 g were injected iv with radioactive acrylamide at a dose of 100 mg/kg.
The 14C-radioactivity was measured in the expired air and in the urine.
About 6% of the injected dose was exhaled as carbon dioxide in the first 8 hours; excretion was very low after that (reaching only slightly more than 6% in 24 hours). Urinary excretion of the 14C-radioactive material was very rapid, with 40% of the injected dose excreted over the first day and a maximum of about 65% reached by day 4; the excreted metabolites were not identified. The distribution of 14C-radioactivity was studied in whole blood, plasma, brain, spinal cord, sciatic nerve, liver, and kidney at 1, 4, and 14 days after injection. At each of these intervals, the radioactive material was found in all tissues examined, with high counts in the blood. A considerable amount of radioactivity was present at 14 days; most of it was not extractable by 5% trichloroacetic acid and so was presumably protein-bound.
Edwards recently reported on the half-life of acrylamide in the blood of male Porton-strain rats weighing 200 g. Acrylamide dissolved in 0.9% saline was injected iv at a dose of 100 mg/kg. The decline in the blood concentration of acrylamide was exponential, and its half-life was 1.9 hours.

Occupational intoxication incidents have been reported in the handling of a 10% aqueous solution in a mine; in the production of flocculators from the monomer; in the use of a resin mixture that apparently contained residual monomer in sealing processes; and in the production of polymers while manufacturing papercoating materials.
The exposure of a Japanese family to acrylamide-contaminated well water, which they drank, cooked with, and bathed in (the latter for a few days only), was the single nonoccupational incident.
In all of the occupational incidents, dermal contact appeared to be the principal route of exposure. Some workers who were filling pumps or working in areas where there were leaks in the pressurized delivery system were splashed with a resin containing an unknown amount of monomers. In contrast to those exposures, workers in a flocculator plant where all skin contact was avoided exhibited no signs of intoxication. The authors also stated that the crystalline monomer was heavy (large particle size) and did not form stable aerosols. The vapor pressure of solid monomeric acrylamide is 0.007 mmHg at 25 C, equivalent to a saturation concentration of 27 mg/cu m, so acrylamide vapor may pose a hazard in confined or poorly ventilated spaces.
The more likely inhalation hazard from acrylamide solution is from aerosolization of the solution.
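The 27 mg/cu m saturation figure can be checked from the ideal-gas relation between partial pressure and mass concentration; the molecular weight and molar volume below are standard values, not taken from the text:

```python
# Worked check of the saturation concentration quoted above.
vapor_pressure_mmhg = 0.007   # solid acrylamide at 25 C
mw_acrylamide = 71.08         # g/mol
molar_volume_l = 24.45        # liters/mol of ideal gas at 25 C, 1 atm

mole_fraction = vapor_pressure_mmhg / 760.0
# grams per liter of air, then converted to mg per cubic meter
c_sat_mg_m3 = mole_fraction * mw_acrylamide / molar_volume_l * 1e6
print(round(c_sat_mg_m3))     # 27, matching the 27 mg/cu m in the text
```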
Although the signs and symptoms that developed in some workers who were exposed dermally to monomeric acrylamide have been well documented [23], the exact time of the appearance of symptoms after dermal exposure was not. The times of onset that were reported varied from 4 weeks to about 24 months, except for one worker. Central nervous system effects were more evident in those humans who ingested acrylamide-contaminated water than in those dermally exposed. Another manifestation, which occurred in adults who ingested acrylamide, was retention of feces and urine that resulted in constipation and overflow urinary incontinence. Distended urinary bladders were also reported in animals. Reddened skin from dermal contact with acrylamide has been reported in humans and in rabbits.
It is notable, and in marked contrast to the reactions of those people who were occupationally exposed (mainly dermally) to acrylamide [23], that the initial complaints of those exposed by ingestion were not directed toward the extremities or the skin. By the time symptoms of mild dysesthesia did occur, by oral ingestion and probably also by dermal absorption, the three adults had been hospitalized for emotional problems for 2-4 weeks.
Electromyography and nerve conduction studies were performed on humans before and during recovery from acrylamide intoxication. Muscle response to nerve stimulation was abnormal, indicating damage of distal nerve terminals . Conduction velocity was affected in only a few units of each motor nerve . Structural abnormalities were also found in the distal portions of the long nerves .
Because both the action potentials and conduction times in the sensory nerves were more extensively affected than in the motor nerves, the author concluded that the sensory fibers were damaged earlier and more severely than the motor fibers.
Microscopic examination by Fullerton of biopsied sensory nerves from two patients in the Garland and Patterson study showed simultaneous degeneration and regeneration of the sensory nerves, which seemed to have occurred before the onset of symptoms. Neuroanatomic and physiologic studies on animals, performed on a much more extensive scale than in humans, confirmed these results. Excretion of 14C-acrylamide and the effects of hepatic microsomal enzyme inducers on the toxicity of acrylamide also have been studied.
While these results have added to the information concerning the effects of acrylamide on various life processes, they do not describe the initial process whereby acrylamide produces peripheral neuropathy in humans and animals.
Other important and diagnostic manifestations of the acrylamide effect on humans are: dizziness; vertigo; positive Romberg's sign and inability to stand on one leg; slurring of speech; confusion, insecurity, and bizarre behavior; poor memory and orientation; writing inability or difficulty; muscle pain; adiadochokinesis; gastrointestinal disturbance and dysphagia; loss of temperature, touch, and vibration senses; dysarthria; and paresthesia, dysesthesia, and hyperesthesia.
In summary, in the absence of pertinent exposure data, no useful correlation can be made between the type and extent of exposure and the degree of human intoxication produced by acrylamide in the industrial environment. The reported cases, when compiled and summarized, are nevertheless very valuable for the recognition of the sequence and characterization of adverse effects produced on humans by exposure to monomeric acrylamide.
The animal studies also are pertinent to understanding human effects, since the effects are very similar. In the single report of a nonoccupational episode found, three of five family members were hospitalized after ingesting well water containing an acrylamide concentration of 400 ppm. Just how much each ingested is unknown.
It is evident that effects on the CNS, rhinorrhea, and coughing were the first symptoms noticed by the people who ingested acrylamide, while those dermally exposed first noticed paresthesias, skin changes, and muscle weakness of the extremities. In all recorded human cases, for all types of exposure to acrylamide, persons recovered within 2 weeks to 2 years (the longest recovery times were associated with peripheral nerve defects); most persons recovered in 1-12 months after cessation of exposure to acrylamide.
# Carcinogenicity, Mutagenicity, and Teratogenicity
No reports which address the subject of possible carcinogenic, mutagenic, or teratogenic properties of acrylamide monomer were found.
# Summary Tables of Exposure and Effect
The effects of dermal and oral exposures of humans to acrylamide that were presented in Chapter III are summarized in Table III. Little published information on sampling for airborne acrylamide is available; however, the major manufacturers and users of monomeric acrylamide have provided some insight into a few sampling procedures. A direct-readout method for analysis of airborne monomeric acrylamide dust and vapor has not been found.
One method for the sampling of acrylamide dust involving the use of a portable pump with a 0.8-µm membrane filter (open face) at an air flowrate of 2-3 liters/minute has been recommended for breathing zone sampling.
The minimum sampling time at a concentration of 0.3 mg/cu m was 30 minutes.
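Those parameters imply a definite minimum mass of acrylamide on the filter; a quick arithmetic check (the 2 liters/minute figure is the low end of the stated flowrate range):

```python
# Mass collected in a minimum 30-minute sample at the environmental limit.
conc_mg_m3 = 0.3      # airborne concentration at the limit
flow_l_min = 2.0      # low end of the recommended 2-3 liters/minute
minutes = 30          # stated minimum sampling time

air_volume_m3 = flow_l_min * minutes / 1000.0   # 0.06 cu m of air
mass_ug = conc_mg_m3 * air_volume_m3 * 1000.0   # mg converted to micrograms
print(round(mass_ug))  # 18 micrograms available for analysis
```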
No information is available on different concentration ranges over which this method is applicable. Unless the membrane filter is properly stored after sampling, either refrigerated or in a sealed cassette, sample loss by sublimation could cause an error. Also, sampling with a membrane filter does not collect the vapor portion of acrylamide in the air and tends to underestimate the total exposure.

Sampling with silica gel tubes has also been described. There is insufficient information regarding the concentrations that were tested, the concentration ranges over which this method is applicable, and a minimum sampling time.
Although this method can be used for collection of acrylamide vapor, it probably does not collect acrylamide particulates efficiently.
In addition, glasswool plugs at both ends of the tube would probably collect some dust particles since they are inefficient filters.
In any case, this system is not useful for collection of total acrylamide in air.
Another method of sampling for determination of acrylamide vapor in air was developed by using a midget fritted glass bubbler . The bubbler was filled to the 20-ml mark with distilled water and air was passed at a flowrate of 1 liter/minute for 100 minutes. Data concerning concentrations of acrylamide in the air that were collected are not available. However, the sampling adsorption efficiency of one bubbler with a flowrate of 1 liter/minute and a sampling period of 100 minutes was reported to be 98%, but without supporting data.
Midget impingers, as well as silica gel tubes, have been used to sample airborne dust and vapor of acrylamide. Two midget impingers, each containing 15 ml of distilled water, were connected in series. The recommended sampling time was a minimum of 60 minutes with an air pump adjusted to a flowrate of up to 1.75 liters/minute. It was indicated that this sampling method is applicable to any acrylamide monomer which may be present in the air in an industrial environment. It was also stated that, since the sample size is essentially unlimited, the limit of detection of acrylamide in air is determined by the amount of interference present. Details such as the efficiency of collection by the impingers and the concentration ranges over which this sampling technique is valid were not given.
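Working backward from the laboratory result, the airborne concentration is the recovered mass divided by the air volume drawn through the impingers. A minimal sketch (names are illustrative; the two impinger solutions are assumed to be combined for analysis):

```python
def airborne_mg_m3(total_ug_recovered, flow_l_min, minutes):
    """Airborne acrylamide concentration from an impinger sample."""
    air_volume_m3 = flow_l_min * minutes / 1000.0
    return (total_ug_recovered / 1000.0) / air_volume_m3

# Example: 30 micrograms recovered from a 60-minute sample at 1.75 l/min
print(round(airborne_mg_m3(30, 1.75, 60), 2))   # ~0.29 mg/cu m
```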
A variety of sampling methods have been discussed, such as the portable pump with a membrane filter, silica gel tube, midget fritted glass bubbler, and midget impingers.
There is no one method that is uniquely applicable for collecting acrylamide aerosol and vapor. A membrane filter has been used to collect samples of acrylamide aerosol, and the midget fritted glass bubbler has been used for determinations of acrylamide vapor in air. Silica gel tubes and midget impingers can be used to collect both dust and vapor, with the latter method having less vapor loss. Therefore, despite the disadvantages of handling glassware and liquid solutions in field measurements, the midget impinger is recommended for personal breathing-zone sampling of airborne acrylamide dust and vapor, to guard against the losses attendant on filter sampling.

The concentration of monomeric acrylamide in solution has also been determined by measuring the refractive index of a sample solution at 35 C with an Abbe refractometer and converting the reading to percent acrylamide using a standard curve. Duplicate determinations were within 0.4%, and the method could be applied over an aqueous acrylamide solution range of 5-60%.
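Reading percent acrylamide off the standard curve is a simple interpolation. The calibration points below are hypothetical placeholders; a real curve must be prepared from laboratory standards:

```python
# Hypothetical (refractive index at 35 C, weight percent) calibration pairs.
CALIBRATION = [(1.3450, 5.0), (1.3560, 20.0), (1.3700, 40.0), (1.3840, 60.0)]

def percent_acrylamide(refractive_index):
    """Linear interpolation on the standard curve (5-60% range only)."""
    points = sorted(CALIBRATION)
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= refractive_index <= x1:
            return y0 + (y1 - y0) * (refractive_index - x0) / (x1 - x0)
    raise ValueError("reading outside the 5-60% calibration range")

print(round(percent_acrylamide(1.3630), 1))   # interpolated weight percent
```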
The usual concentration range of acrylamide monomer solutions encountered in the previously discussed analytical procedures is much lower than 5%, and the refractometric method lacks the specificity and sensitivity needed for a determination of acrylamide at the environmental limit.
A more sensitive, but nonspecific, analysis of monomeric acrylamide solutions can be performed by a titrimetric method. This method is based on the reaction of acrylamide with bromine, which is obtained from an acidified bromate-bromide solution. The excess bromine is treated with potassium iodide, which generates free iodine. The iodine is then titrated with thiosulfate to yield an indirect measure of acrylamide. Any reducible substance may interfere with this method. The titrimetric method gave a relative standard deviation of 0.1 and 0.01% for concentrations above and below 2% of acrylamide in solution, respectively.

Residual acrylamide monomer in polymers has been determined by infrared spectroscopy. To prepare for infrared spectroscopy, a portion of the polymer extract equivalent to about 2 mg of acrylamide was evaporated to dryness onto potassium bromide and pressed into a disc. The infrared spectrum of this sample was so intense that other contaminants were obviously interfering.
After the sample was separated by thin-layer chromatography and the acrylamide portion of the chromatogram removed with methanol, the potassium bromide disc prepared from the extract gave an infrared spectrum identical to that of the pure acrylamide standard. Direct infrared analysis is subject to interferences from unspecified contaminants from the polymers.
Another disadvantage of the method is the large amount of acrylamide required for measurement.

Residual acrylamide in polyacrylamides has also been determined polarographically, with a platinum wire auxiliary electrode as the anode. Recovery of acrylamide in the polyacrylamides was reported to be greater than 90%. The detection limit for acrylamide was less than 1 µg/ml. However, some nonionic species, substituted acrylamides, or acrylates would be electroactive in the same potential region as that of acrylamide and would thus interfere with polarographic acrylamide analysis.
Acrylonitrile also interfered but, because of its high volatility, it was purged readily by nitrogen from the solution with no adverse effects on acrylamide concentration. Acrolein, acrylic acid, acetone, vinyl-benzyl chloride, vinyl-benzyl alcohol, styrene, and beta-hydroxypropionitrile did not interfere in polarographic analysis of acrylamide. Resin treatment of the methanolic extract of polyacrylamide for 20 minutes removed the ionic species, such as sodium and potassium ions, without causing any detectable loss of acrylamide concentration.
The analysis of monomeric acrylamide by differential pulse polarography has been adapted for determining airborne acrylamide .
The sampling solution for dust and vapor from impingers was analyzed for acrylamide polarographically after ion-exchange resin treatment and the addition of the supporting electrolyte tetra-n-butylammonium hydroxide. No information on the accuracy or the precision for this analytical method was provided.
The method was claimed to be reasonably specific for acrylamide and to have relatively few interferences. It was also reported that an acrylamide concentration as low as 0.5 µg/ml could be determined by analysis.
A major factor in identifying the most appropriate analytical method is its sensitivity and specificity at the environmental limit. Engineering controls should be complemented with good work practices for more effective control of exposure to acrylamide. Respiratory protective equipment should not be used as a substitute for proper engineering controls but must be worn when the worker is exposed to dust or vapor concentrations exceeding the environmental exposure limit.
# V. DEVELOPMENT OF A STANDARD
# Basis for Previous Standards
The acrylamide environmental limit was first introduced in 1966 in the United States by the American Conference of Governmental Industrial Hygienists (ACGIH) as a tentative Threshold Limit Value (TLV) of 0.3 mg of acrylamide/cu m of air with the notation "Skin" . This designation is intended to suggest the need for appropriate measures for the prevention of dermal or other local contact or absorption. The tentative TLV of 0.3 mg/cu m was adopted as the recommended value by the ACGIH the following year , and has remained unchanged since 1967 .
According to the 1971 (third) edition of Documentation of the Threshold Limit Values for Substances In Workroom Air , the basis for the ACGIH TLV was extrapolation from long-term feeding experiments on cats reported by McCollister et al in 1964. The oral LD50 for laboratory animals (rats, guinea pigs, and rabbits) was in the range of 150-180 mg/kg and the document further stated that "toxic effects may be produced by any route of administration-ingestion, inhalation, injection, skin contact, or contact with the eye." The cat was described in this document as the most sensitive species.
Cats given acrylamide at a dose of 1 mg/kg/day by iv or ip injection developed the neurologic effects in about 6 months; however, long-term feeding experiments (0.3 and 1 mg/kg/day, 5 days/week, for 1 year) in this same species apparently did not produce any ill effect. From the results of long-term feeding experiments in the most sensitive species, the cat, the ACGIH recommended "that no more than 0.05 mg/kg/day be absorbed by workmen" .
According to this 1971 ACGIH Documentation of TLV's, an absorption of 0.05 mg/kg/day, assuming a ventilation rate of 10 cu m of air for each 8-hour workday, corresponds to an environmental limit of 0.3 mg/cu m, or about 0.1 ppm. The present federal standard for acrylamide, 0.3 mg/cu m as a TWA concentration with the notation "Skin" (29 CFR 1910.1000), is based on the 1968 ACGIH Threshold Limit Value.
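The extrapolation can be reproduced directly. The 70-kg body weight is an assumption of this sketch (the excerpt does not state the reference weight used):

```python
# Reproducing the ACGIH arithmetic described above.
absorbed_limit_mg_kg_day = 0.05   # recommended maximum daily absorption
body_weight_kg = 70               # assumed reference worker weight
ventilation_m3 = 10               # air breathed per 8-hour workday

limit_mg_m3 = absorbed_limit_mg_kg_day * body_weight_kg / ventilation_m3
print(round(limit_mg_m3, 2))      # 0.35, consistent with the 0.3 mg/cu m TLV

# ppm equivalent at 25 C: mg/cu m x 24.45 / molecular weight (71.08)
print(round(0.3 * 24.45 / 71.08, 2))   # ~0.1 ppm, as stated in the text
```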
According to a 1968 joint report of the International Labour Office and the World Health Organization , no standards for acrylamide had been promulgated by countries other than the United States.
# Basis for the Recommended Standard
The studies of human intoxication with acrylamide have indicated that dermal absorption [20,21,23] and ingestion were important routes of exposure. Cases were reported by Graveleau et al and Morviller in France and by Fujita et al and Takahashi et al in Japan. However, all of these reports on human effects are qualitative and deal only with clinical signs and symptoms of acrylamide intoxication.
Igisu et al described a nonoccupational exposure to monomeric acrylamide in a family of five persons who used acrylamide contaminated well water for cooking, drinking, washing, and bathing. Three adults in the family showed signs of CNS toxicity manifested by ataxia.
This was followed in 2-4 weeks by symptoms of peripheral neuropathy. The well water was analyzed by gas chromatography and shown to contain 400 ppm of acrylamide and a trace of dimethylaminopropionitrile. Although slightly more quantitative information was presented in this report than in the reports on occupational exposures, no information is available which would allow estimation of dermal or airborne exposure limits from this report.
There is abundant documentation that experimental administration of acrylamide has produced peripheral neuropathy in many animal species, including hens. Studies have also demonstrated dermal penetration of a 10-30% solution of acrylamide, which subsequently appeared in the blood.
In rabbits, application of aqueous solutions of acrylamide (10 and 12.5%) killed one of two rabbits at a dose of 1 g/kg and resulted in slight toxicity at 0.5 g/kg .
McCollister et al also studied the effects of 10 and 40% aqueous solutions of acrylamide instilled into the eyes of rabbits. The 10% aqueous solution caused signs of slight pain and slight conjunctival irritation, while the 40% aqueous solution caused moderate pain, slight conjunctival irritation, and marked corneal injury.
McCollister et al found that acrylamide in the feed at 0.3 mg/kg/day, 5 days/week, for 1 year produced no adverse effect on cats.
A dose of 1 mg/kg/day caused questionable effects, whereas the higher doses of 3 and 10 mg/kg/day resulted in definite signs of neurotoxicity. The authors found that one monkey fed 0.1 mg/kg/day, two fed 0.3 mg/kg/day, and one fed 1 mg/kg/day, 5 days/week, for 1 year also showed no adverse effects. However, 3 and 10 mg/kg/day did cause signs of neurotoxicity in monkeys. The authors concluded that the "no adverse effect" level for monkeys on a diet containing acrylamide lay between 1 and 3 mg/kg/day. It was the authors' suggestion that the summation of industrial exposures should be so controlled that it will be almost impossible for a worker to absorb more than 0.05 mg/kg/day of acrylamide on a day-to-day basis.
As previously stated, studies of human intoxication with acrylamide have indicated that dermal contact and ingestion may have been the main routes of exposure without neglecting the possible contribution of inhalation of aerosol or vapor. Consequently, without knowing the airborne acrylamide concentrations at which skin and neurologic symptoms manifest themselves, and also in the absence of information as to the primary routes of exposure, ie, dermal, inhalation, or ingestion, by which these symptoms may be produced, it is difficult to correlate dermal or neurologic symptoms with worker exposure to airborne acrylamide. The available human and animal studies do not provide enough information to alter the existing federal standard for acrylamide of 0.3 mg/cu m of air as a TWA value. NIOSH, therefore, recommends that the present federal standard be kept. Engineering controls must be used whenever feasible to control airborne concentrations of acrylamide monomer within the recommended TWA limit.
Where acrylamide monomer is present, a closed system of control should be used. During the time required to install adequate controls and equipment, to make process changes, to perform routine maintenance and operations, or to make repairs, overexposure to acrylamide can be prevented by the use of respirators and protective clothing and, in some cases, by administrative controls.
Because acrylamide produces delayed neuropathy, it is recommended that all medical and other pertinent records involving acrylamide exposure be maintained for 20 years after termination of employment. This will allow enough time for future detection of chronic neurotoxicity of acrylamide which may be related to the employee's known occupational exposure.
The technology is currently available to sample and analyze the present environmental limit to institute appropriate engineering controls.
As was discussed in greater detail in Chapter IV, a midget impinger is recommended for personal breathing zone sampling of airborne acrylamide aerosol and vapor to guard against losses attendant with filter sampling.
Current analytical techniques commonly used for the determination of acrylamide in the industrial environment are differential pulse polarography and gas chromatography.
As was discussed in Chapter IV, differential pulse polarography is the analytical method of choice for airborne acrylamide since gas chromatography involves complex derivative formations of unknown efficiencies and sample preparation.
Concern for worker health requires that protective measures be instituted below the enforceable limit to ensure that exposures stay below that limit. An action level is set as a TWA concentration of one-half the environmental limit. It has been chosen on the basis of professional judgment rather than on quantitative data that delineate nonhazardous areas from areas in which a hazard may exist. However, in the case of acrylamide, it is also recognized that many employees work with solid or liquid forms of the substance in situations where there may be skin contact with the material.
# VI. WORK PRACTICES
Occupational exposures can occur in the manufacture of solid forms of acrylamide, in the preparation and utilization of aqueous solutions of acrylamide, in the polymerization process, and in the handling of polymerized products which contain the residual monomer.
The conventional way of cutting bags of solid monomeric acrylamide manually with a knife and dumping the pellets or flakes into the reactor for polymerization should be replaced with a method that limits exposure to acrylamide. One method that has been used is a mechanized, maximally enclosed bag-emptying and transfer system of the kind described earlier.

If there is any chance of skin contact with acrylamide, then the protective clothing worn should be impervious to acrylamide. At the present time, the suitability of impervious clothing for the protection of the worker has not been adequately established.
In addition, it is difficult for workers to wear this clothing for extended periods of time.

Tomcufcik et al found that various acrylamide compounds inhibited the growth of tumors in mice.
Ismaylova also found the same inhibition property in tomato plant tumors and Ismaylova et al in plant tissue cultures. This property might be interesting to investigate in mammals.
The recommended impinger sampling method for acrylamide has several disadvantages when used for personal monitoring. These include breakage of the glass impinger and spillage of the absorption solution during sampling and subsequent shipment to the laboratory unless extreme care is taken. A method of personal monitoring using a membrane filter followed by a silica gel tube or other adsorbent tubes should be evaluated.

Instructions for calibration with the soapbubble meter follow. If another calibration device is selected, equivalent procedures should be used. Since the flowrate given by a pump is dependent on the pressure drop of the sampling device, in this case an impinger, the pump must be calibrated while operating with a representative filled impinger in line.
The calibration system should be assembled in series in the following order: soapbubble meter, water manometer, midget impinger, and pump, as shown in Figure XII-1. The trapped acrylamide is analyzed as described in Appendix II.
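Calibration with a soap-bubble meter reduces to timing the travel of a bubble past a known volume; a minimal sketch of the arithmetic (the tolerance shown is illustrative, not from the text):

```python
def flowrate_l_min(bubble_volume_ml, seconds):
    """Pump flowrate from the time a soap bubble sweeps a known volume."""
    return (bubble_volume_ml / 1000.0) / (seconds / 60.0)

# Example: the bubble sweeps a 1000-ml burette section in 34.3 seconds.
q = flowrate_l_min(1000, 34.3)
print(round(q, 2))                     # ~1.75 liters/minute
print(abs(q - 1.75) / 1.75 < 0.05)     # within an illustrative 5% tolerance
```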
Other collection methods shown to be equivalent may be used.
# X. APPENDIX II
The following analytical method for acrylamide is adapted from those described by Betso and McLean and presented in the Dow Chemical Company analytical method PAA No. 46 .
# Scope
This method is applicable to the determination of acrylamide monomer which may be present in the air in an industrial environment. The procedure is described for measuring potential employee exposure. Amounts of 5-200 µg of acrylamide/10 ml of aqueous solution can be determined; larger amounts of acrylamide may be determined by appropriate dilution.

On the material safety data sheet, chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known.
The amount may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt," to avoid disclosure of trade secrets.
Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, eg, "100 ppm LC50-rat," "25 mg/kg LD50 skin-rabbit," "75 ppm LC man," or "permissible exposure from 29 CFR 1910.1000." The "Health Hazard Data" should be a combined estimate of the hazard of the total product. This can be expressed as a TWA concentration, as a permissible exposure, or by some other indication of an acceptable standard. Other data are acceptable, such as lowest LC50 if multiple components are involved.
Under "Routes of Exposure," comments in each category should reflect the potential hazard from absorption by the route in question. Comments should indicate the severity of the effect and the basis for the statement if possible. The basis might be animal studies, analogy with similar products, or human experiences. Comments such as "yes" or "possible" are not helpful. Comments pertinent to acrylamide might be:
Skin Contact-single short contact, no adverse effects likely; prolonged or repeated contact, possibly mild irritation, erythema, and skin peeling.
Eye Contact-some pain and mild transient irritation; conjunctival injury.

"Emergency and First Aid Procedures" should be written in lay language and should primarily represent first-aid treatment that could be provided by paramedical personnel or individuals trained in first aid.
Information in the "Notes to Physician" section should include any special medical information which would be of assistance to an attending physician, including required or recommended preplacement and periodic medical examinations, diagnostic procedures, and medical management of overexposed employees. Respirators shall be specified as to type and NIOSH or US Bureau of Mines approval class, i.e., "Supplied air," "Organic vapor canister," etc.
Protective equipment must be specified as to type and materials of construction.
# Range and Sensitivity
The polarographic detection limit for acrylamide in a clean system is less than 1 µg of acrylamide/ml of solution. Even at this low concentration, the acrylamide reduction peak is well defined and resolved from the background.
# Interferences
Differential pulse polarography is reasonably specific and relatively free from interferences. In addition, the ion-exchange resin treatment removes most common interferences, such as acrylic acid, acrylonitrile, and sodium and potassium ions. However, any compounds which are reducible in the same potential region (-2.0 v) will interfere. Substituted acrylamides and acrylate esters are reducible in the same potential region and, if present, will interfere.
# Apparatus

(a) Glass-fiber filter disc, or equivalent.
(i) To the sample, add 0.1 ml of acrylamide standard solution (20 µg) with a 100-µl Eppendorf pipet and repeat steps (f) and (g). Record the micrograms of acrylamide added.
(j) Measure the peak height of the sample at about -2.0 V, correcting for any blank reading, both before (value "A") and after (value "B") adding the acrylamide in step (i).
# Calculations
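A minimal reconstruction of the standard-addition calculation implied by steps (i) and (j), assuming the polarographic peak height is linear in acrylamide concentration and that the 0.1-ml spike volume is negligible:

```latex
% W = micrograms of acrylamide in the original 10-ml sample,
% A = peak height before the 20-microgram standard addition (step i),
% B = peak height after the addition (step j).
\[ W = \frac{A}{B - A} \times 20\ \mu\mathrm{g} \]
% Airborne concentration, where V is the volume of air sampled in liters
% (1 microgram per liter equals 1 mg per cubic meter):
\[ C\ (\mathrm{mg/m^3}) = \frac{W\ (\mu\mathrm{g})}{V\ (\mathrm{L})} \]
```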
FIGURE XII-1. Calibration setup for personal sampling pump with midget impinger. (Figure not reproduced.)
"id": "ed80347681d1872dabb6938d14140255f9f17700",
"source": "cdc",
"title": "None",
"url": "None"
} |
This report updates the 2009 recommendations by CDC's Advisory Committee on Immunization Practices (ACIP) regarding the use of influenza vaccine for the prevention and control of influenza (CDC. Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices [ACIP], 2009).

# Introduction
In the United States, annual epidemics of influenza occur typically during the late fall through early spring. Influenza viruses can cause disease among persons in any age group, but rates of infection are highest among children (1)(2)(3). During these annual epidemics, rates of serious illness and death are highest among persons aged ≥65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza. During the 2009 pandemic, >99% of viruses characterized were the 2009 pandemic influenza A (H1N1) virus (11). Data from epidemiologic studies conducted during the 2009 influenza A (H1N1) pandemic indicate that the risk for influenza complications among adults aged 19-64 years who had 2009 pandemic influenza A (H1N1) was greater than typically occurs for seasonal influenza (12). Influenza caused by 2009 pandemic influenza A (H1N1) virus is expected to continue to occur during future winter influenza seasons in the Northern and Southern Hemispheres, but whether 2009 pandemic influenza A (H1N1) viruses will replace or co-circulate with one or more of the two seasonal influenza A virus subtypes (seasonal H1N1 and H3N2) that have co-circulated since 1977 is unknown. Influenza viruses undergo frequent antigenic change as a result of point mutations and recombination events that occur during viral replication (i.e., antigenic drift). The extent of antigenic drift and evolution of 2009 pandemic influenza A (H1N1) virus strains in the future cannot be predicted.
Annual influenza vaccination is the most effective method for preventing influenza virus infection and its complications (8). Annual vaccination with the most up-to-date strains predicted on the basis of viral surveillance data is recommended. Influenza vaccine is recommended for all persons aged ≥6 months who do not have contraindications to vaccination.
Trivalent inactivated influenza vaccine (TIV) can be used for any person aged ≥6 months, including those with high-risk conditions (Box). Live, attenuated influenza vaccine (LAIV) may be used for healthy nonpregnant persons aged 2-49 years. No preference is indicated for LAIV or TIV when considering vaccination of healthy nonpregnant persons aged 2-49 years. Because the safety or effectiveness of LAIV has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications, these persons should be vaccinated only with TIV. Although vaccination coverage has increased in recent years for many groups recommended for routine vaccination, considerable room for improvement remains (13), and strategies to improve vaccination coverage in the medical home and in nonmedical settings should be implemented or expanded (14).
Antiviral medications are an adjunct to vaccination and are effective when administered as treatment and when used for chemoprophylaxis after an exposure to influenza virus. However, the emergence since 2005 of resistance to one or more of the four licensed antiviral agents (oseltamivir, zanamivir, amantadine, and rimantadine) among circulating strains has complicated antiviral treatment and chemoprophylaxis recommendations. CDC has revised recommendations for antiviral treatment and chemoprophylaxis of influenza periodically in response to new data on antiviral resistance patterns among circulating strains and risk factors for influenza complications (15). With few exceptions, 2009 pandemic influenza A (H1N1) virus strains that began circulating in April 2009 remained sensitive to oseltamivir (16).
# Methods
CDC's Advisory Committee on Immunization Practices (ACIP) provides annual recommendations for the prevention and control of influenza. The ACIP Influenza Work Group (the Work Group) meets every 2-4 weeks throughout the year to discuss newly published studies, review current guidelines, and consider revisions to the recommendations. As the Work Group reviews the annual recommendations for consideration by the full ACIP, its members discuss a variety of issues, including the burden of influenza illness; vaccine effectiveness, vaccine safety, and coverage in groups recommended for vaccination; feasibility; cost-effectiveness; and anticipated vaccine supply. Work Group members also request periodic updates on vaccine and antiviral production, supply, safety, and efficacy from vaccinologists, epidemiologists, and manufacturers. State and local vaccination program representatives are consulted. CDC's Influenza Division provides influenza surveillance and antiviral resistance data. The Vaccines and Related Biological Products Advisory Committee provides advice on vaccine strain selection to the Food and Drug Administration (FDA), which selects the viral strains to be used in the annual trivalent influenza vaccines.
Published, peer-reviewed studies are the primary source of data used by ACIP in making recommendations for the prevention and control of influenza, but unpublished data that are relevant to issues under discussion also are considered. Among studies discussed or cited, those of greatest scientific quality and those that measure influenza-specific outcomes are the most influential. For example, population-based estimates of influenza disease burden supported by laboratory-confirmed influenza virus infection outcomes contribute the most specific data. The best evidence for vaccine or antiviral efficacy comes from randomized controlled trials that assess laboratory-confirmed influenza infections as an outcome measure and consider factors such as timing and intensity of influenza viruses' circulation and degree of match between vaccine strains and wild circulating strains (17,18). However, randomized controlled trials cannot be performed ethically in populations for which vaccination already is recommended, and in this context, observational studies that assess outcomes associated with laboratory-confirmed influenza infection also can provide important vaccine or antiviral safety and effectiveness data. Evidence for vaccine or antiviral safety also is provided by randomized controlled studies; however, the number of subjects in these studies often is inadequate to detect associations between vaccine and rare adverse events. The best way to assess the frequency of rare adverse events after vaccination is by controlled studies after vaccines are used widely in the population. These studies often use electronic medical records from large linked clinical databases and medical charts of persons who are identified as having a vaccine adverse event (19)(20)(21). Vaccine coverage data from a nationally representative, randomly selected population that include verification of vaccination through health-care record review are superior to coverage data derived from limited population samples or from self-reported vaccination status; however, the former rarely is obtained in vaccination coverage data for children aged ≥5 years (22). Finally, studies that assess vaccination program practices that improve vaccination coverage are most influential in formulating recommendations if the study design includes a nonintervention comparison group. In cited studies that included statistical comparisons, a difference was considered to be statistically significant if the p-value was <0.05 or the 95% confidence interval around an estimate of effect allowed rejection of the null hypothesis (i.e., no effect).
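To illustrate the significance criterion just described (a 95% confidence interval that excludes the null value), the sketch below computes a risk ratio and its Wald interval from an invented two-by-two comparison; it is a generic epidemiologic calculation, not one drawn from the studies cited above.

```python
# Minimal sketch: a result is "statistically significant" in the sense used
# above when the 95% CI around the risk ratio excludes the null value of 1.
import math

def risk_ratio_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Risk ratio with a Wald 95% confidence interval (hypothetical counts)."""
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se = math.sqrt(1/cases_exp - 1/n_exp + 1/cases_unexp - 1/n_unexp)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(30, 1000, 60, 1000)
significant = not (lo <= 1.0 <= hi)  # CI excluding 1 ~ rejecting the null
print(f"RR = {rr:.2f}, 95% CI = {lo:.2f}-{hi:.2f}, significant: {significant}")
```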
Data presented in this report were current as of June 29, 2010, and represent recommendations presented to the full ACIP and approved on February 24, 2010, and June 24, 2010. Modifications were made to the ACIP statement during the subsequent review process at CDC to update and clarify wording in the document. Vaccine recommendations apply only to persons who do not have contraindications to vaccine use (see Contraindications and Precautions for Use of TIV and Contraindications and Precautions for Use of LAIV). Further updates, if needed, will be posted at CDC's influenza website.
# Primary Changes and Updates in the Recommendations
The 2010 recommendations include five principal changes or updates:
- Routine influenza vaccination is recommended for all persons aged ≥6 months. This represents an expansion of the previous recommendations for annual vaccination of all adults aged 19-49 years and is supported by evidence that annual influenza vaccination is a safe and effective preventive health action with potential benefit in all age groups. By 2009, annual vaccination was already recommended for an estimated 85% of the U.S. population, on the basis of risk factors for influenza-related complications or having close contact with a person at higher risk for influenza-related complications. The only group remaining that was not recommended for routine vaccination was healthy nonpregnant adults aged 18-49 years who did not have an occupational risk for infection and who were not close contacts of persons at higher risk for influenza-related complications. However, some adults who have influenza-related complications have no previously identified risk factors for influenza complications.
In addition, some adults who have medical conditions or age-related increases in their risk for influenza-related complications, or who have another indication for vaccination, are unaware that they should be vaccinated.
# BOX. Summary of influenza vaccination recommendations, 2010
- All persons aged ≥6 months should be vaccinated annually.
- Protection of persons at higher risk for influenza-related complications should continue to be a focus of vaccination efforts as providers and programs transition to routine vaccination of all persons aged ≥6 months.
- When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to persons who:
  - are aged 6 months-4 years (59 months);
  - are aged ≥50 years;
  - have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, neurologic, hematologic, or metabolic disorders (including diabetes mellitus);
  - are immunosuppressed (including immunosuppression caused by medications or by human immunodeficiency virus);
  - are or will be pregnant during the influenza season;
  - are aged 6 months-18 years and receiving long-term aspirin therapy and who therefore might be at risk for experiencing Reye syndrome after influenza virus infection;
  - are residents of nursing homes and other chronic-care facilities;
  - are American Indians/Alaska Natives;
  - are morbidly obese (body-mass index ≥40);
  - are health-care personnel;
  - are household contacts and caregivers of children aged <5 years and adults aged ≥50 years, with particular emphasis on vaccinating contacts of children aged <6 months; and
  - are household contacts and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza.
Further support for expansion of annual vaccination recommendations to include all adults is based on concerns that 2009 pandemic influenza A (H1N1)-like viruses will continue to circulate during the 2010-11 influenza season and that a substantial proportion of young adults might remain susceptible to infection with this virus. Data from epidemiologic studies conducted during the 2009 pandemic indicate that the risk for influenza complications among adults aged 19-49 years is greater than is seen typically for seasonal influenza (12,23,27).
- Persons aged ≥65 years can be administered any of the standard-dose TIV preparations or Fluzone High-Dose. Persons aged <65 years who receive inactivated influenza vaccine should be administered a standard-dose TIV preparation.
- Previously approved inactivated influenza vaccines that were approved for expanded age indications in 2009 include Fluarix (GlaxoSmithKline), which is now approved for use in persons aged ≥3 years, and Afluria (CSL Biotherapies), which is now approved for use in persons aged ≥6 months. A new inactivated influenza vaccine, Agriflu (Novartis), has been approved for persons aged ≥18 years.
# Background and Epidemiology
# Biology of Influenza
Influenza A and B are the two types of influenza viruses that cause epidemic human disease. Influenza A viruses are categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. During 1977-2010, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have circulated globally. Influenza A subtypes and B viruses are separated further into groups on the basis of antigenic similarities. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) caused by point mutations and recombination events that occur during viral replication (8). Recent studies have explored the complex molecular evolution and epidemiologic dynamics of influenza A viruses (28)(29)(30).
New or substantially different influenza A subtypes have the potential to cause a pandemic when they are able to cause human illness and demonstrate efficient human-to-human transmission and when little or no previously existing immunity has been identified among humans (8). In April 2009, human infections with a novel influenza A (H1N1) virus were identified, and this virus subsequently caused a worldwide pandemic (9). The 2009 pandemic influenza A (H1N1) virus is derived from influenza A viruses that have circulated in swine during the past several decades and is antigenically distinct from human influenza A (H1N1) viruses in circulation since 1977. The 2009 pandemic influenza A (H1N1) virus contains a combination of gene segments that had not been reported previously in animals or humans. The hemagglutinin (HA) gene, which codes for the surface protein most important for immune response, is related most closely to the HA found in contemporary influenza viruses circulating among pigs. This HA gene apparently evolved from the avian-origin 1918 pandemic influenza H1N1 virus, which is thought to have entered human and swine populations at about the same time (28).
Currently circulating influenza B viruses are separated into two distinct genetic lineages (Yamagata and Victoria) but are not categorized into subtypes. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses. Influenza B viruses from both lineages have circulated in most recent influenza seasons (31).
Immunity to surface antigens, particularly hemagglutinin, reduces the likelihood of infection (32). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza virus. Furthermore, antibody to one antigenic type or subtype of influenza virus might not protect against infection with a new antigenic variant of the same type or subtype (33). Frequent emergence of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and is the reason for annually reassessing the need to change one or more of the recommended strains for influenza vaccines.
More dramatic changes, or antigenic shifts, occur less frequently. Antigenic shift occurs when a new subtype of influenza A virus appears and can result in the emergence of a novel influenza A virus with the potential to cause a pandemic. The 2009 pandemic influenza A (H1N1) virus is not a new subtype, but because most humans had no pre-existing antibody to key pandemic 2009 influenza A (H1N1) virus hemagglutinin epitopes, widespread transmission was possible (28).
# Health-Care Use, Hospitalizations, and Deaths Attributed to Influenza
In the United States, annual epidemics of influenza typically occur during the fall or winter months, but the peak of influenza activity can occur as late as April or May. Influenza-related complications requiring urgent medical care, including hospitalizations or deaths, can result from the direct effects of influenza virus infection, from complications associated with age or pregnancy, or from complications of underlying cardiopulmonary conditions or other chronic diseases. Studies that have measured rates of a clinical outcome without a laboratory confirmation of influenza virus infection (e.g., respiratory illness requiring hospitalization during influenza season) to assess the effect of influenza can be difficult to interpret because of circulation of other respiratory pathogens (e.g., respiratory syncytial virus) during the same time as influenza viruses (34)(35)(36). However, increases in health-care provider visits for acute febrile respiratory illness occur each year during the time when influenza viruses circulate. Data from the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet) demonstrate the annual increase in physician visits for influenza-like illness (ILI) for each influenza season; for 2009, these data also indicated the increase in respiratory illness associated with circulation of 2009 pandemic influenza A (H1N1) virus during spring 2009 and the resurgence of cases in fall 2009 (Figure 2) (37,38).
In typical winter influenza seasons, an increase in deaths and hospitalizations is observed during periods when influenza viruses are circulating. Some persons whose hospitalization is attributed to invasive pneumococcal pneumonia are likely to have influenza as a co-pathogen, based on correlation between influenza activity and seasonal variations in pneumococcal pneumonia (39). The number of deaths or hospitalizations attributable at least partly to influenza can be estimated by applying modeling techniques to viral surveillance and national mortality or hospitalizations data and includes deaths and hospitalizations for which influenza infection is likely a contributor to mortality but not necessarily the sole cause of death (6,7,40,41).
Excess deaths and hospitalizations during influenza season that are likely to be caused at least partly by influenza are derived from the broad category of pulmonary and circulatory deaths or hospitalizations. Estimates that include only outcomes attributed to pneumonia and influenza underestimate the proportion of severe illnesses that are attributable at least partly to influenza because such estimates exclude deaths caused by exacerbations of underlying cardiac and pulmonary conditions that are associated with influenza infection (6,7,(40)(41)(42).
During seasonal influenza epidemics from 1979-1980 through 2000-2001, the estimated annual overall number of influenza-associated hospitalizations in the United States ranged from approximately 55,000 to 431,000 per annual epidemic (mean: 226,000) (7). In the United States, the estimated number of influenza-associated deaths increased during 1990-1999. This increase was attributed in part to the substantial increase in the number of persons aged ≥65 years, including many who were at higher risk for death from influenza complications (6). When mortality data that included deaths attributable to both the pneumonia and influenza as well as the respiratory and circulatory categories were used as a basis for estimating the influenza burden, an average of approximately 19,000 influenza-associated deaths per influenza season occurred during 1976-1990 compared with an average of approximately 36,000 deaths per season during 1990-1999 (6). On the basis of data from the pneumonia and influenza category alone, an estimated annual average of 8,000 influenza-related deaths occurred. In addition, influenza A (H3N2) viruses, which have been associated with higher mortality (43), predominated in 90% of influenza seasons during 1990-1999 compared with 57% of seasons during 1976-1990 (6). From the 1990-91 influenza season through the 1998-99 season, the estimated annual number of deaths attributed to influenza ranged from 17,000 to 51,000 per epidemic (6). Estimates of mortality using a variety of different modeling techniques generally have been similar, although estimates for more recent years, when influenza A (H1N1) viruses have predominated more often, have been somewhat lower (40).
Influenza viruses cause disease among persons in all age groups (1)(2)(3)(4)(5). Rates of infection are highest among children, but the risks for complications, hospitalizations, and deaths from seasonal influenza are higher among adults aged ≥65 years, children aged <5 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (1,4,5,(44)(45)(46)(47). Estimated rates of influenza-associated hospitalizations and deaths varied substantially by age group in studies conducted during different seasonal influenza epidemics. During 1990-1999, estimated average rates of influenza-associated pulmonary and circulatory deaths per 100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged ≥65 years (6).
During the 2009 influenza A (H1N1) pandemic, epidemiologic studies in multiple countries indicated that hospitalization rates and deaths among children and adults aged <65 years exceeded those observed during typical winter seasonal influenza epidemics (12,23,25,48,49). In one analysis, the mean age among persons who died in the United States during May-December 2009 and who had laboratory-confirmed influenza was 37 years. In contrast, the estimated mean age among persons who died from seasonal influenza during 1979-2001 was 76 years (50). The estimated number of hospitalizations and deaths among adults aged ≥65 years was below that observed in most seasonal epidemics. This difference was attributed to a lower risk for infection (51) associated with a higher prevalence of partial or full immunity among older persons, presumably as a result of exposures to antigenically similar influenza A viruses that circulated in the early-mid 20th century. One indication of some degree of preexisting immunity was the presence of cross-reacting antibody present among approximately one third of older adults (52), which has been attributed to similarities in the structure of the hemagglutinin protein among the 2009 H1N1 virus and those that circulated earlier in the 20th century (53).
# Children
Among children aged <5 years, influenza-related illness is a common cause of visits to medical practices and emergency departments (EDs). During two influenza seasons (2002-03 and 2003-04), the percentage of visits among children aged <5 years with acute respiratory illness or fever caused by laboratory-confirmed influenza ranged from 10%-19% of medical office visits to 6%-29% of ED visits. On the basis of these data, the rate of visits to medical clinics for influenza was estimated to be 50-95 visits per 1,000 children, and the rate of visits to EDs was estimated to be 6-27 visits per 1,000 children (54). In a multiyear study in New York City that used viral surveillance data to estimate influenza strain-specific illness rates among ED visits, in addition to the expected variation by season and age group, influenza B epidemics were determined to be an important cause of illness among school-aged children in several seasons, and annual epidemics of both influenza A and B peaked among school-aged children before other age groups (55). Retrospective studies using medical records data have demonstrated similar rates of illness among children aged <5 years during other influenza seasons (45,56,57). During an influenza season, seven to 12 additional outpatient visits and five to seven additional antibiotic prescriptions per 100 children aged <15 years have been estimated compared with periods when influenza viruses are not circulating, with rates decreasing with increasing age of the child (57). During 1993-2004 in the Boston area, the rate of ED visits for respiratory illnesses that were attributed to influenza virus on the basis of viral surveillance data among children aged ≤7 years during the winter respiratory illness season ranged from 22.0 per 1,000 children aged 6-23 months to 5.4 per 1,000 children aged 5-7 years (58).
Estimates of rates of influenza-associated hospitalization are substantially higher among infants and children aged <2 years compared with older children and are similar to rates for other groups considered at higher risk for influenza-related complications (59)(60)(61)(62)(63)(64), including persons aged ≥65 years (57,61). During 1979-2001, the estimated rate of influenzaassociated hospitalizations among children aged <5 years in the United States was 108 hospitalizations per 100,000 person-years, based on data from a national sample of hospital discharges of influenza-associated hospitalizations (7). Recent population-based studies that measured hospitalization rates for laboratory-confirmed influenza in young children have documented hospitalization rates that are similar to or higher than rates derived from studies that analyzed hospital discharge data (54,56,63,65,66). Annual hospitalization rates for laboratory-confirmed influenza decrease with increasing age, ranging from 240-720 per 100,000 children aged <6 months to approximately 20 per 100,000 children aged 2-5 years (54). Hospitalization rates for children aged <5 years with high-risk medical conditions are approximately 250-500 per 100,000 children (45,47,67).
Influenza-associated deaths are uncommon among children. An estimated annual average of 92 influenza-associated deaths (0.4 deaths per 100,000 persons) occurred among children aged <5 years during the 1990s compared with 32,651 deaths (98.3 per 100,000 persons) among adults aged ≥65 years (6). Of 153 laboratory-confirmed influenza-related pediatric deaths reported during the 2003-04 influenza season, 96 (63%) deaths occurred among children aged <5 years and 61 (40%) among children aged <2 years. Among the 149 children who died and for whom information on underlying health status was available, 100 (67%) did not have an underlying medical condition that was an indication for vaccination at that time (68). In another report, many children who required admission to an intensive care unit had no underlying medical conditions (69). These data indicate that although children with risk factors for influenza complications are at higher risk for death, the majority of pediatric deaths occur among children with no known high-risk conditions. Since 2004, death associated with laboratory-confirmed influenza virus infection among children (defined as persons aged <18 years) has been a nationally reportable condition. Since the 2004-05 season, the annual number of seasonal influenza-associated deaths among children aged <18 years reported to CDC has ranged from 47 during 2004-05 to 88 during 2007-08 (70). During April 2009-March 2010, more than 300 deaths attributable to laboratory-confirmed 2009 H1N1 influenza among children, the majority of whom had one or more underlying medical conditions, were reported to CDC in the United States, and more than 1,000 deaths are estimated to have occurred (71; CDC, unpublished data, 2010).
Deaths among children that have been attributed to coinfection with influenza and Staphylococcus aureus, particularly methicillin-resistant S. aureus (MRSA), have increased (38,72), and illness severity of co-infection is increased compared with influenza alone (73). The reason for this increase in co-infections has not been established but might reflect an increasing prevalence within the general population of colonization with MRSA strains, some of which carry certain virulence factors (74,75).
# Adults
Among healthy younger adults, illness caused by seasonal influenza is typically not severe and rarely results in hospitalization, compared with children aged <5 years, adults aged ≥65 years, pregnant women, or persons with chronic medical conditions. However, illness burden among healthy adults aged 19-49 years is an important cause of outpatient medical visits and worker absenteeism. The impact of influenza varies considerably by season, making estimates of the attack rate in healthy younger adults difficult. In most studies, attack rates have varied from 2% to 10% annually, and influenza has been estimated to cause 0.6-2.5 workdays lost per illness (76)(77)(78)(79)(80). In one economic analysis, the average annual burden of seasonal influenza among adults aged 18-49 years who did not have a medical condition that conferred a higher risk for influenza complications was estimated to include approximately 5 million illnesses, 2.4 million outpatient visits, 32,000 hospitalizations, and 680 deaths (78).
Hospitalization rates during typical influenza seasons are substantially increased for persons aged ≥65 years compared with younger age groups. One retrospective analysis based on data from managed-care organizations collected during 1996-2000 estimated that the risk during influenza season among persons aged ≥65 years with underlying conditions that put them at risk for influenza-related complications (i.e., one or more of the conditions listed as indications for vaccination) was approximately 560 influenza-associated hospitalizations per 100,000 persons compared with approximately 190 per 100,000 healthy persons aged ≥65 years. Persons aged 50-64 years who have underlying medical conditions also were at substantially increased risk for hospitalizations during influenza season compared with healthy adults aged 50-64 years (44).
Influenza is an important contributor to the annual increase in deaths attributed to pneumonia and influenza that is observed during the winter months. During 1976-2001, an estimated yearly average of 32,651 (90%) influenza-related deaths occurred among adults aged ≥65 years, with the risk for an influenza-related death highest in the oldest age groups (6). Persons aged ≥85 years were 16 times more likely to die from an influenza-related illness compared with persons aged 65-69 years (6).
During the 2009 H1N1 pandemic, adults aged <65 years were at higher risk for influenza-related complications (23,81,82), particularly those aged 50-64 years who had underlying medical conditions, compared with typical influenza seasons. The distribution of hospitalizations by age group differed from usual seasonal influenza patterns during 2009-10, with more hospitalizations among younger age groups and fewer among adults aged ≥65 years (Figure 1). Hospitalization rates exceeded those seen in any recent influenza season among adults aged <65 years (26). Pneumonia with evidence of invasive bacterial co-infection has been reported in approximately one third of fatal cases in autopsy studies (83). In one study of critically ill adults who required mechanical ventilation, Streptococcus pneumoniae pneumonia at admission was an independent risk factor for death (84). In addition, obesity (body-mass index [BMI] ≥30) and particularly morbid obesity (BMI ≥40) appeared to be risk factors for hospitalization and death in some studies (23,24,81,85,86). Additional studies are needed to determine whether obesity is a risk factor specific to the 2009 H1N1-like influenza viruses or a previously unrecognized risk factor for influenza-related complications caused by other influenza viruses. Other epidemiologic features of the 2009 H1N1 pandemic underscored racial and ethnic disparities in the risk for influenza-related complications among adults, including higher rates of hospitalization for blacks and a disproportionate number of deaths among American Indians/Alaska Natives and indigenous populations in other countries (87)(88)(89)(90)(91). These disparities might be attributable in part to the higher prevalence of underlying medical conditions or disparities in medical care among these racial/ethnic groups (92,93).
The duration of influenza symptoms is prolonged and the severity of influenza illness increased among persons with human immunodeficiency virus (HIV) infection (94)(95)(96)(97)(98). A retrospective study of women aged 15-64 years enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than it was either before or after influenza was circulating. The risk for hospitalization was higher for HIV-infected women than it was for women with other underlying medical conditions (99). Another study estimated that the risk for influenzarelated death was 94-146 deaths per 100,000 persons with acquired immune deficiency syndrome (AIDS) compared with 0.9-1.0 deaths per 100,000 persons aged 25-54 years and 64-70 deaths per 100,000 persons aged ≥65 years in the general population (100).
Influenza-related excess deaths among pregnant women were reported during the pandemics of 1918-1919, 1957-1958, and 2009-2010 (48,101-106). Severe infections among postpartum women (those who delivered within the previous 2 weeks) also were observed in the 2009-10 pandemic (48,107,108). Case reports and several epidemiologic studies also indicate that pregnancy increases the risk for seasonal influenza complications for the mother (109)(110)(111)(112)(113)(114). The majority of studies that have attempted to assess the effect of influenza on pregnant women have measured changes in excess hospitalizations for respiratory illness during influenza season but not laboratory-confirmed influenza hospitalizations. Pregnant women have an increased number of medical visits for respiratory illnesses during influenza season compared with nonpregnant women (115). Hospitalized pregnant women with respiratory illness during influenza season have increased lengths of stay compared with hospitalized pregnant women without respiratory illness. Rates of hospitalization for respiratory illness were twice as high during influenza season (116). A retrospective cohort study of approximately 134,000 pregnant women conducted in Nova Scotia during 1990-2002 compared medical record data for pregnant women to data from the same women during the year before pregnancy. Among pregnant women, 0.4% were hospitalized, and 25% visited a clinician during pregnancy for a respiratory illness. The rate of third-trimester hospital admissions during the influenza season was five times higher than the rate during the influenza season in the year before pregnancy and more than twice as high as the rate during the noninfluenza season. An excess of 1,210 hospital admissions in the third trimester per 100,000 pregnant women with comorbidities and of 68 admissions per 100,000 women without comorbidities was reported (117). In one study, pregnant women with hospitalizations for respiratory symptoms did not have an increase in adverse perinatal outcomes or delivery complications (118); another study indicated an increase in delivery complications, including fetal distress, preterm labor, and cesarean delivery. However, infants born to women with laboratory-confirmed influenza during pregnancy do not have higher rates of low birth weight, congenital abnormalities, or lower Apgar scores compared with infants born to uninfected women (109,119).
In a case series conducted during the 2009 H1N1 pandemic, 56 deaths were reported among 280 women admitted to intensive care units (120). Among the deaths, 36 (64%) occurred in the third trimester. Pregnant women who received treatment >4 days after symptom onset were more likely than those treated within 2 days after symptom onset to be admitted to an intensive care unit (57% and 9%, respectively; relative risk: 6.0; 95% CI = 3.5-10.6) (120).
# Options for Controlling Influenza
The most effective strategy for preventing influenza is annual vaccination. Strategies that focus on providing routine vaccination to persons at higher risk for influenza complications have long been recommended, although coverage among the majority of these groups remains low. Routine vaccination of certain persons (e.g., children, contacts of persons at risk for influenza complications, and health-care personnel) who serve as a source of influenza virus transmission might provide additional protection to persons at risk for influenza complications and reduce the overall influenza burden. However, coverage levels among these persons need to be increased before effects on transmission can be measured reliably. Antiviral medications can be used for chemoprophylaxis and have been demonstrated to prevent influenza illness. When used for treatment, antiviral medications have been demonstrated to reduce the severity and duration of illness, particularly if used within the first 48 hours after illness onset. However, antiviral medications are adjuncts to vaccine in the prevention and control of influenza, and primary prevention through annual vaccination is the most effective and efficient prevention strategy. Despite recommendations to use antiviral medications to treat hospitalized patients with suspected influenza, antiviral drugs are underused (121).
Reductions in detectable influenza A viruses on hands after handwashing have been demonstrated, and handwashing has been demonstrated to reduce the overall incidence of respiratory diseases (122)(123)(124). Nonpharmacologic interventions (e.g., frequent handwashing and improved respiratory hygiene) are reasonable and inexpensive. However, the impact of hygiene interventions such as handwashing on influenza virus transmission is not well understood, and hygiene measures should not be advocated as a replacement or alternative to specific prevention measures such as vaccination. Few data are available to assess the effects of community-level respiratory disease mitigation strategies (e.g., closing schools, avoiding mass gatherings, or using respiratory protection) on reducing influenza virus transmission during typical seasonal influenza epidemics (125)(126)(127). An interventional trial among university students indicated that students living in dormitories who were asked to use surgical face masks, given an alcohol-based hand sanitizer, and provided with education about mask use and hand hygiene during influenza season had substantially lower rates of ILI compared with students in dormitories for whom no intervention was recommended. However, neither face mask nor hand sanitizer use alone was associated with a statistically significant reduction in ILI (128). During the 2009 pandemic, one study indicated that having members of households in which an influenza case was identified discuss ways to avoid transmission was associated with a significant reduction in the frequency of additional cases after one household member became ill, suggesting that education measures might be an effective way to reduce secondary transmission (129). Limited data suggest that transmission of seasonal influenza or ILI among household members can be reduced if household contacts use a surgical face mask or implement hand washing early in the course of an ill index case patient's illness (130,131). These interventions might supplement use of vaccine as a means to reduce influenza transmission or provide some protection when vaccine is not available (130)(131)(132).
# Influenza Vaccine Efficacy, Effectiveness, and Safety

# Evaluating Influenza Vaccine Efficacy and Effectiveness Studies
The efficacy (i.e., prevention of illness among vaccinated persons in controlled trials) and effectiveness (i.e., prevention of illness in vaccinated populations) of influenza vaccines depend in part on the age and immunocompetence of the vaccine recipient, the degree of similarity between the viruses in the vaccine and those in circulation (see Effectiveness of Influenza Vaccination When Circulating Influenza Virus Strains Differ from Vaccine Strains), and the outcome being measured. Influenza vaccine efficacy and effectiveness studies have used multiple possible outcome measures, including the prevention of medically attended acute respiratory illness (MAARI), laboratory-confirmed influenza virus illness, influenza or pneumonia-associated hospitalizations or deaths, or seroconversion. Efficacy or effectiveness for more specific outcomes such as laboratory-confirmed influenza typically will be higher than for less specific outcomes such as MAARI because the causes of MAARI include infections with other pathogens that influenza vaccination would not be expected to prevent (133). Observational studies that compare less-specific outcomes among vaccinated populations to those among unvaccinated populations are subject to biases that are difficult to control for during analyses. For example, an observational study that determines that influenza vaccination reduces overall mortality might be biased if healthier persons in the study are more likely to be vaccinated (134,135). Randomized controlled trials that measure laboratory-confirmed influenza virus infections as the outcome are the most persuasive evidence of vaccine efficacy, but such trials cannot be conducted ethically among groups recommended to receive vaccine annually.
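To make the efficacy calculation concrete, vaccine efficacy in a trial is commonly estimated as one minus the ratio of attack rates in vaccinated and unvaccinated groups. The sketch below uses invented counts to show why a specific outcome (laboratory-confirmed influenza) yields a higher measured value than a nonspecific outcome such as MAARI, which includes illnesses the vaccine cannot prevent.

```python
# Minimal sketch: VE = 1 - (attack rate, vaccinated)/(attack rate, unvaccinated).
# All counts are invented for illustration.

def vaccine_efficacy(cases_vax, n_vax, cases_unvax, n_unvax):
    return 1.0 - (cases_vax / n_vax) / (cases_unvax / n_unvax)

# Specific outcome: laboratory-confirmed influenza.
ve_confirmed = vaccine_efficacy(20, 1000, 100, 1000)

# Nonspecific outcome (MAARI): the same influenza cases plus 150 illnesses
# per 1,000 from other pathogens, which vaccination does not prevent.
ve_maari = vaccine_efficacy(20 + 150, 1000, 100 + 150, 1000)

print(f"VE, lab-confirmed influenza: {ve_confirmed:.0%}")      # 80%
print(f"VE, MAARI (diluted by other causes): {ve_maari:.0%}")  # 32%
```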
# Influenza Vaccine Composition
Both LAIV and TIV contain strains of influenza viruses that are equivalent antigenically to the annually recommended strains: one influenza A (H3N2) virus, one influenza A (H1N1) virus, and one influenza B virus. Each year, one or more virus strains in the vaccine might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. The 2010-11 trivalent vaccines will contain A/California/7/2009 (H1N1)-like, A/Perth/16/2009 (H3N2)-like, and B/Brisbane/60/2008-like antigens. The A/California/7/2009 (H1N1)-like antigen is derived from a pandemic 2009 influenza A (H1N1) virus and is the same vaccine antigen used in the influenza A (H1N1) 2009 monovalent vaccines. The A/Perth/16/2009 (H3N2)-like antigen is different from the H3N2-like antigen recommended for the 2009-10 northern hemisphere seasonal influenza vaccine. The influenza B vaccine strain remains B/Brisbane/60/2008 and is not changed compared with the 2009-10 northern hemisphere seasonal influenza vaccine (136). Viruses for currently licensed TIV and LAIV preparations are grown in chicken eggs. Either vaccine is administered annually to provide optimal protection against influenza virus infection (Table 1). Both TIV and LAIV are widely available in the United States. Although both types of vaccines are expected to be effective, the vaccines differ in several respects (Table 1). None of the influenza vaccines licensed in the United States contains an adjuvant.
# Major Differences Between TIV and LAIV
TIV contains inactivated viruses and thus cannot cause influenza. LAIV contains live attenuated influenza viruses that have the potential to cause mild signs or symptoms related to vaccine virus infection (e.g., rhinorrhea, nasal congestion, fever, or sore throat). LAIV is administered intranasally by sprayer, whereas TIV is administered intramuscularly by injection. LAIV is licensed for use among nonpregnant persons aged 2-49 years; safety has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications. TIV is licensed for use among persons aged ≥6 months, including those who are healthy and those with chronic medical conditions (Table 1). During the preparation of TIV, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (8). Only subvirion and purified surface antigen preparations of TIV (often referred to as "split" and subunit vaccines, respectively) are available in the United States. Standard-dose TIV preparations contain 7.5 mcg of HA antigen per vaccine strain (for children aged <36 months) or 15 mcg of HA antigen per vaccine strain (for persons aged ≥36 months), i.e., 22.5 mcg or 45 mcg total HA antigen. A newly licensed high-dose TIV (60 mcg of HA antigen per vaccine strain, or 180 mcg total) is approved for persons aged ≥65 years (Fluzone High-Dose, sanofi pasteur).
# Correlates of Protection after Vaccination
Immune correlates of protection against influenza infection after vaccination include serum hemagglutination inhibition antibody and neutralizing antibody (32,137). Increased levels of antibody induced by vaccination decrease the risk for illness caused by strains that are similar antigenically to those strains of the same type or subtype included in the vaccine (138)(139)(140)(141). The majority of healthy children and adults have high titers of antibody after vaccination (139,142). Although immune correlates such as achievement of certain antibody titers after vaccination correlate well with immunity on a population level, the significance of reaching or failing to reach a certain antibody threshold (typically defined as a hemagglutination inhibition titer of 1:32 or 1:40) is not well understood on the individual level. Other immunologic correlates of protection that might best indicate clinical protection after receipt of an intranasal vaccine such as LAIV (e.g., mucosal antibody) are more difficult to measure (143,144). Laboratory measurements that correlate with protective immunity induced by LAIV have been described, including measurement of cell-mediated immunity with ELISPOT assays that measure gamma-interferon (143).

TABLE 1 (referenced above; table body not reproduced). Footnotes:

* ...monovalent vaccine should receive 2 doses, spaced ≥4 weeks apart. Those children aged 6 months-8 years who were vaccinated for the first time in the 2009-10 season with the seasonal 2009-10 vaccine but who received only 1 dose of seasonal influenza vaccine should receive 2 doses in the following year, spaced ≥4 weeks apart.

† Persons at higher risk for complications of influenza infection because of underlying medical conditions should not receive LAIV. Such persons include those who have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, neurologic, hematologic, or metabolic (including diabetes mellitus) disorders; those who are immunosuppressed (including immunosuppression caused by medications or by human immunodeficiency virus); those who are or will be pregnant during the influenza season; those aged 6 months-18 years and receiving long-term aspirin therapy and who therefore might be at risk for experiencing Reye syndrome after influenza virus infection; and residents of nursing homes and other chronic-care facilities.

§ Approval varies by formulation. Fluzone (sanofi pasteur) and Afluria (CSL Biotherapies) have been approved previously for use in children as young as age 6 months. Fluzone High-Dose is approved for use in persons aged ≥65 years. Immunization providers should check Food and Drug Administration-approved prescribing information for 2010-11 influenza vaccines for the most updated information.

¶ Clinicians and vaccination programs should screen for possible reactive airways diseases when considering use of LAIV for children aged 2-4 years and should avoid use of this vaccine in children with asthma or a recent wheezing episode. Health-care providers should consult the medical record, when available, to identify children aged 2-4 years with asthma or recurrent wheezing that might indicate asthma. In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question, and children who have asthma or who had a wheezing episode noted in the medical record within the preceding 12 months, should not receive LAIV.

** LAIV coadministration has been evaluated systematically only among children aged 12-15 months who received measles, mumps, and rubella vaccine or varicella vaccine.

†† Inactivated influenza vaccine coadministration has been evaluated systematically only among adults who received pneumococcal polysaccharide or zoster vaccine.
# Duration of Immunity
The recommended composition of influenza vaccines changes in most seasons, with one or more vaccine strains replaced annually to provide better protection against wildtype viruses that are likely to circulate. However, evidence from clinical trials suggests that protection against viruses that are similar antigenically to those contained in the vaccine extends for at least 6-8 months. Three years after vaccination with the A/Hong Kong/68 vaccine, vaccine effectiveness was 67% for prevention of influenza caused by the A/Hong Kong/68 virus (145). In randomized trials conducted among healthy college students, immunization with TIV provided 92% and 100% efficacy against influenza H3N2 and H1N1 illnesses, respectively, during the first year, and a 68% reduction against H1N1 illness during the second year (when the predominant circulating virus was H1N1) without revaccination (146). In a similar study of young adults in 1986-1987, TIV reduced influenza A (H1N1) illness 75% in the first year, H3N2 illness 45% in the second year, and H1N1 illness 61% in the third year after immunization (146). Serum anti-influenza antibodies and nasal IgA elicited by vaccination remain detectable in children vaccinated with LAIV for more than 1 year (147). In one community-based nonrandomized open label trial, continued protection from MAARI during the 2000-01 influenza season was demonstrated in children who received only a single dose of LAIV during the 1999-2000 season (148).
Adults aged ≥65 years typically have a diminished immune response to influenza vaccination compared with young healthy adults, suggesting that immunity might be of shorter duration (although still extending through one influenza season) (149,150). However, a review of the published literature concluded that no clear evidence existed that immunity declined more rapidly in the elderly (151), and additional vaccine doses during the same season do not increase the antibody response. One study found that the proportion of persons who retained seroprotective levels of anti-influenza antibody declined in all age groups, including those aged ≥65 years, within 1 year of vaccination. However, the proportion in each age group that retained seroprotective antibody levels remained above standards typically used for vaccine licensure for seasonal influenza A (H1N1) and influenza A (H3N2) in all age groups. In this study, anti-influenza B antibody levels declined more quickly but remained well above the licensure threshold for at least 6 months in all age groups (152). The frequency of breakthrough infections is not known to be higher among those who were vaccinated early in the season. Infections among the vaccinated elderly might be more likely related to an age-related reduction in the ability to respond to vaccination rather than to reduced duration of immunity.
# Immunogenicity, Efficacy, and Effectiveness of TIV

# Children
Children aged ≥6 months typically have protective levels of anti-influenza antibody against specific influenza virus strains after receiving the recommended number of doses of seasonal inactivated influenza vaccine (137,142,(153)(154)(155)(156)(157). Immunogenicity studies using the influenza A (H1N1) 2009 monovalent vaccine indicated that >90% of children aged ≥9 years responded to a single dose with anti-influenza antibody levels that are considered to be protective. Young children had inconsistent responses to a single dose of the influenza A (H1N1) 2009 monovalent vaccine across studies, with 20% of children aged 6-35 months responding to a single dose with protective anti-influenza antibody levels. However, in all studies, 80%-95% of vaccinated infants, children, and adolescents developed protective anti-influenza antibody levels to the 2009 H1N1 influenza virus after 2 doses (158-160; National Institutes of Health, unpublished data, 2010).
In most seasons, one or more seasonal vaccine antigens are changed compared with the previous season. In consecutive years when vaccine antigens change, children aged <9 years who received only 1 dose of vaccine in their first year of vaccination are less likely to have protective antibody responses when administered only a single dose during their second year of vaccination compared with children who received 2 doses in their first year of vaccination (161)(162)(163).
When the vaccine antigens do not change from one season to the next, priming children aged 6-23 months with a single dose of vaccine in the spring followed by a dose in the fall engenders similar antibody responses compared with a regimen of 2 doses in the fall (164). However, one study conducted during a season when the vaccine antigens did not change compared with the previous season estimated 62% effectiveness against ILI for healthy children who had received only 1 dose in the previous influenza season and only 1 dose in the study season compared with 82% for those who received 2 doses separated by ≥4 weeks during the study season (165).
The antibody response among children at higher risk for influenza-related complications (e.g., children with chronic medical conditions) might be lower than those reported typically among healthy children (166,167). However, antibody responses among children with asthma are similar to those of healthy children and are not substantially altered during asthma exacerbations requiring short-term prednisone treatment (168).
Vaccine effectiveness studies also have indicated that 2 doses are needed to provide adequate protection during the first season that young children are vaccinated. Among children aged <5 years who have never received influenza vaccine previously or who received only 1 dose of influenza vaccine in their first year of vaccination, vaccine effectiveness is lower compared with children who received 2 doses in their first year of being vaccinated. Two large retrospective studies of young children who had received only 1 dose of TIV in their first year of being vaccinated determined that no decrease was observed in ILI-related office visits compared with unvaccinated children (165,169). Similar results were reported in a case-control study of children aged 6-59 months in which laboratory-confirmed influenza was the outcome measured (170). These results, along with the immunogenicity data indicating that antibody responses are substantially higher when young children are given 2 doses, are the basis for the recommendation that all children aged 6 months-8 years who are being vaccinated for the first time should receive 2 vaccine doses separated by ≥4 weeks.
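Read as a decision rule, the recommendation above can be sketched as follows; this is our illustrative paraphrase of the pediatric dosing logic (the function and argument names are ours, not ACIP's), not a clinical decision tool.

```python
# Minimal sketch of the dosing rule discussed above: children aged
# 6 months-8 years being vaccinated for the first time, or who received only
# 1 dose in their first year of vaccination, should receive 2 doses
# separated by >=4 weeks. Illustrative only.

def recommended_doses(age_months: int, first_season: bool,
                      received_2_doses_in_first_season: bool) -> int:
    if age_months < 6:
        return 0  # younger than the minimum age for influenza vaccination
    if age_months >= 9 * 12:
        return 1  # persons aged >=9 years: 1 annual dose
    if first_season or not received_2_doses_in_first_season:
        return 2  # 2 doses, separated by >=4 weeks
    return 1

print(recommended_doses(30, first_season=True,
                        received_2_doses_in_first_season=False))  # prints 2
```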
Estimates of vaccine efficacy or effectiveness among children aged ≥6 months have varied by season and study design. In a randomized trial conducted during five influenza seasons (1985-1990) in the United States among children aged 1-15 years, annual vaccination reduced laboratory-confirmed influenza A substantially (77%-91%) (139). A limited 1-year placebo-controlled study reported vaccine efficacy against laboratory-confirmed influenza illness of 56% among healthy children aged 3-9 years and 100% among healthy children and adolescents aged 10-18 years (171). A randomized, double-blind, placebo-controlled trial conducted during two influenza seasons among children aged 6-24 months indicated that vaccine efficacy was 66% against culture-confirmed influenza illness during the 1999-00 influenza season but that vaccination did not reduce culture-confirmed influenza illness substantially during the 2000-01 influenza season (172).
A case-control study conducted during the 2003-04 season indicated vaccine effectiveness of 49% against laboratory-confirmed influenza (170). An observational study among children aged 6-59 months with laboratory-confirmed influenza compared with children who tested negative for influenza reported vaccine effectiveness of 44% in the 2003-04 influenza season and 57% during the 2004-05 season (173). Partial vaccination (only 1 dose for children being vaccinated for the first time) was not effective in either study. During an influenza season (2003-04) with a suboptimal vaccine match, a retrospective cohort study conducted among approximately 30,000 children aged 6 months-8 years indicated vaccine effectiveness of 51% against medically attended, clinically diagnosed pneumonia or influenza (i.e., no laboratory confirmation of influenza) among fully vaccinated children and 49% among approximately 5,000 children aged 6-23 months (169). Another retrospective cohort study of similar size conducted during the same influenza season in Denver but limited to healthy children aged 6-21 months estimated clinical effectiveness of 2 TIV doses to be 87% against pneumonia or influenza-related office visits (165). Among children, TIV effectiveness might increase with age (139,174). A systematic review of published studies estimated vaccine effectiveness at 59% for children aged >2 years but concluded that additional evidence was needed to demonstrate effectiveness among children aged 6 months-2 years (175).
Because of the recognized influenza-related disease burden among children with other chronic diseases or immunosuppression and the long-standing recommendation for vaccination of these children, randomized placebo-controlled trials to assess efficacy in these children have not been conducted. In a nonrandomized controlled trial among children aged 2-6 years and 7-14 years who had asthma, vaccine efficacy was 54% and 78%, respectively, against laboratory-confirmed influenza type A infection and 22% and 60%, respectively, against laboratory-confirmed influenza type B infection. Vaccinated children aged 2-6 years with asthma did not have substantially fewer type B influenza virus infections compared with the control group in this study (176). The association between vaccination and prevention of asthma exacerbations is unclear, although vaccination was demonstrated to provide protection against asthma exacerbations in some studies (177,178).
TIV has been demonstrated to reduce acute otitis media in some studies. Two studies have reported that TIV decreases the risk for influenza-related otitis media by approximately 30% among children with mean ages of 20 and 27 months, respectively (179,180). However, a large study conducted among children with a mean age of 14 months indicated that TIV was not effective against acute otitis media (172). Influenza vaccine effectiveness against a nonspecific clinical outcome such as acute otitis media, which is caused by a variety of pathogens and is not typically diagnosed using influenza virus culture, would be expected to be relatively low.
# Adults Aged <65 Years
One dose of TIV is highly immunogenic in healthy adults aged <65 years. Limited or no increase in antibody response is reported among adults when a second dose is administered during the same season (181-183). The influenza A (H1N1) 2009 monovalent vaccines were also highly immunogenic; >90% of adults developed levels of anti-influenza antibody considered to be protective (160,184). When the vaccine and circulating viruses are antigenically similar, TIV prevents laboratory-confirmed influenza illness among approximately 70%-90% of healthy adults aged <65 years in randomized controlled trials (77,80,185-187). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (77,185,186). Efficacy or effectiveness against laboratory-confirmed influenza illness was substantially lower in studies conducted during influenza seasons when the vaccine strains were antigenically dissimilar to the majority of circulating strains (77,80,180,182,185,186). However, effectiveness among healthy adults against influenza-related hospitalization, measured in the most recent of these studies, was 90% (188).
In some studies, persons with certain chronic diseases have had lower serum antibody responses after vaccination compared with healthy young adults and can remain susceptible to influenza virus infection and influenza-related upper respiratory tract illness (189-191). Vaccine effectiveness among adults aged <65 years who are at higher risk for influenza complications typically is lower than that reported for healthy adults. In a case-control study conducted during the 2003-04 influenza season, when the vaccine was a suboptimal antigenic match to many circulating virus strains, effectiveness for prevention of laboratory-confirmed influenza illness among adults aged 50-64 years with high-risk conditions was 48%, compared with 60% for healthy adults (188). Effectiveness against hospitalization among adults aged 50-64 years with high-risk conditions was 36%, compared with 90% among healthy adults in that age range (188). A randomized controlled trial among adults in Thailand with chronic obstructive pulmonary disease (median age: 68 years) indicated a vaccine effectiveness of 76% in preventing laboratory-confirmed influenza during a season when circulating viruses were well-matched to vaccine viruses; effectiveness did not decrease with increasing severity of underlying lung disease (192).
Few randomized controlled trials have studied the effect of influenza vaccination on noninfluenza outcomes. A controlled trial conducted in Argentina among 301 adults hospitalized with myocardial infarction or undergoing angioplasty for cardiovascular disease (56% of whom were aged ≥65 years) who were randomized to receive influenza vaccine or no vaccine indicated that a substantially lower percentage of cardiovascular deaths occurred among vaccinated persons (6%) at 1 year after vaccination compared with unvaccinated persons (17%) (193). A randomized, double-blind, placebo-controlled study conducted in Poland among 658 persons with coronary artery disease indicated that significantly fewer vaccinated persons had a cardiac ischemic event during the 9 months of follow-up compared with unvaccinated persons (p<0.05) (194).
Observational studies that have measured clinical endpoints without laboratory confirmation of influenza virus infection typically have demonstrated substantial reductions in hospitalizations or deaths among adults with risk factors for influenza complications. For example, in a case-control study conducted during 1999-2000 in Denmark among adults aged <65 years with underlying medical conditions, vaccination reduced deaths attributable to any cause by 78% and reduced hospitalizations attributable to respiratory infections or cardiopulmonary diseases by 87% (195). A benefit was reported after the first vaccination and increased with subsequent vaccinations in subsequent years (196). Among patients with diabetes mellitus, vaccination was associated with a 56% reduction in any complication, a 54% reduction in hospitalizations, and a 58% reduction in deaths (197). Certain experts have noted that the substantial effects on morbidity and mortality observed in these observational studies should be interpreted with caution because of the difficulty of ensuring that vaccinated persons had baseline health status similar to that of unvaccinated persons (134,135). One meta-analysis of published studies concluded that evidence was insufficient to demonstrate that persons with asthma benefit from vaccination (198). However, a meta-analysis that examined effectiveness among persons with chronic obstructive pulmonary disease identified evidence of benefit from vaccination (199).
# Immunocompromised Persons
TIV produces adequate antibody concentrations against influenza among vaccinated HIV-infected persons who have no or minimal AIDS-related symptoms (200-202). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, TIV might not induce protective antibody titers (202,203); a second dose of vaccine does not improve the immune response in these persons (203,204). A randomized, placebo-controlled trial determined that TIV was highly effective in preventing symptomatic, laboratory-confirmed influenza virus infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3; however, a limited number of persons with low CD4+ T-lymphocyte cell counts were included in this study. A nonrandomized study determined that influenza vaccination was most effective among persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type-1/mL (95).
On the basis of certain limited studies, immunogenicity for persons with solid organ transplants varies according to transplant type. Among persons with kidney or heart transplants, the proportion who developed seroprotective antibody concentrations was similar or slightly reduced compared with healthy persons (205-207). However, studies among persons with liver transplants indicated reduced immunologic responses to influenza vaccination (208-210), especially if vaccination occurred within the 4 months after the transplant procedure (208).
# Pregnant Women and Neonates
Pregnant women have protective levels of anti-influenza antibodies after vaccination (211,212). Passive transfer of anti-influenza antibodies from vaccinated women to neonates, which might provide protection, has been reported (211,213-216). One randomized controlled trial conducted in Bangladesh that provided vaccination to pregnant women during the third trimester demonstrated a 29% reduction in respiratory illness with fever among the mothers and a 36% reduction in respiratory illness with fever among their infants during the first 6 months of life. In addition, infants born to vaccinated women had a 63% reduction in laboratory-confirmed influenza illness during the first 6 months of life (217). All women in this trial breastfed their infants (mean duration: 14 weeks). However, a retrospective study conducted during 1997-2002 that used clinical records data did not indicate a reduction in ILI among vaccinated pregnant women or their infants (218). In another study conducted during 1995-2001, medical visits for respiratory illness among infants were not reduced substantially (219).
# Adults Aged ≥65 Years
One prospective cohort study indicated that immunogenicity among hospitalized persons who either were aged ≥65 years or were aged 18-64 years and had one or more chronic medical conditions was similar compared with outpatients (220). Immunogenicity data from three studies among persons aged ≥65 years indicate that higher-dose preparations elicit substantially higher hemagglutinin inhibition (HI) titers compared with the standard dose (221-223). In one study, prespecified criteria for superiority (defined as when the lower bound of the two-sided confidence interval of the ratio of geometric mean HI titers is >1.5 and the difference in the rates of fourfold rise in HI titers is >10%) were demonstrated for influenza A (H1N1) and influenza A (H3N2) antigens among persons aged ≥65 years who received a TIV formulation (Fluzone High-Dose, sanofi pasteur) that contains four times the standard amount (180 mcg) of influenza virus hemagglutinin per dose (222,224). Prespecified criteria for noninferiority to a standard-dose vaccine (Fluzone, sanofi pasteur) were demonstrated for the influenza B antigen (222).
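The two-part superiority criterion described above is mechanical enough that a short sketch may help. The function below simply encodes the prespecified thresholds; the inputs in the usage example are hypothetical values for illustration, not the actual trial results:

```python
def meets_superiority_criteria(gmt_ratio_ci_lower: float,
                               fourfold_rise_diff: float) -> bool:
    """Prespecified superiority criteria for the high-dose vs. standard-dose
    comparison described above: the lower bound of the two-sided CI for the
    ratio of geometric mean HI titers must exceed 1.5, AND the difference in
    the rates of a fourfold rise in HI titer must exceed 10 percentage points.
    """
    return gmt_ratio_ci_lower > 1.5 and fourfold_rise_diff > 0.10


# Hypothetical inputs: CI lower bound of 1.7 for the GMT ratio and a
# 12-percentage-point difference in fourfold-rise rates.
print(meets_superiority_criteria(1.7, 0.12))  # True
```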
The only randomized controlled trial among community-dwelling persons aged ≥60 years reported a vaccine efficacy of 58% (95% CI = 26%-77%) against laboratory-confirmed influenza illness during a season when the vaccine strains were considered to be well-matched to circulating strains (225). Additional information from this trial published separately indicated that efficacy among those aged ≥70 years was 57% (95% CI = -36%-87%), similar to that among younger persons. However, few persons aged >75 years participated in this study, and the wide confidence interval for the estimate of efficacy among participants aged ≥70 years could not exclude no effect (i.e., it included 0) (226). Influenza vaccine effectiveness in preventing MAARI among the elderly in nursing homes has been estimated at 20%-40% (227,228), and reported outbreaks among well-vaccinated nursing home populations have suggested that vaccination might not have any significant effectiveness when circulating strains are drifted from vaccine strains (229,230). In contrast, some studies have indicated that vaccination can be up to 80% effective in preventing influenza-related death (227,231-233). Among elderly persons not living in nursing homes or similar long-term-care facilities, influenza vaccine is 27%-70% effective in preventing hospitalization for pneumonia and influenza (234-236). Influenza vaccination reduces the frequency of secondary complications and reduces the risk for influenza-related hospitalization and death among community-dwelling adults aged ≥65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (235-240). However, studies demonstrating large reductions in hospitalizations and deaths among the vaccinated elderly have been conducted using medical record databases and have not measured reductions in laboratory-confirmed influenza illness. These studies have been challenged because of concerns that they have not controlled adequately for the propensity of healthier persons to be more likely than less healthy persons to receive vaccination (134,135,232,241-244).
# Immunogenicity of Inactivated 2009 Pandemic H1N1 Vaccines
The 2010-11 seasonal influenza vaccine will contain an influenza A (H1N1) California/7/2009-like strain, which was also the strain used for the 2009 pandemic H1N1 monovalent vaccines. Clinical studies of the 2009 H1N1 monovalent vaccines indicate that this vaccine antigen is immunogenic, with response rates similar to those observed after immunization with influenza A antigens found in typical seasonal influenza vaccines. Among children aged 6-35 months, 19%-92% responded with an HI titer ≥40 at ≥21 days after 1 dose, and >90% responded with an HI titer ≥40 after 2 doses (160,184), although geometric mean titers were substantially lower among adults aged ≥50 years in one study (184) and among adults aged ≥65 years (160). Additional data on 2009 H1N1 pandemic vaccine immunogenicity among persons with chronic medical conditions or pregnant women are not yet available, but results from studies in other groups suggest that immunogenicity is likely to be similar to that observed in studies of seasonal vaccine immunogenicity.
# TIV Dosage, Administration, and Storage
The composition of TIV varies according to manufacturer, and package inserts should be consulted. TIV formulations in multidose vials contain the vaccine preservative thimerosal; preservative-free, single-dose preparations also are available. TIV should be stored at 35°F-46°F (2°C-8°C) and should not be frozen. TIV that has been frozen should be discarded. Dosage recommendations and schedules vary according to age group (Table 2). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season.
The intramuscular route is recommended for TIV. Adults and older children should be vaccinated in the deltoid muscle. A needle length of ≥1 inch (≥25 mm) should be considered for persons in these age groups because needles of <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (245). When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of ⅞-1¼ inches is recommended (245).
Infants and young children should be vaccinated in the anterolateral aspect of the thigh. A needle length of ⅞-1 inch should be used for children aged <12 months.
# Adverse Events After Receipt of TIV
# Children
Studies support the safety of annual TIV in children and adolescents. The largest published postlicensure population-based study assessed TIV safety in 251,600 children aged <18 years (including 8,476 vaccinations in children aged 6-23 months) who were enrolled in one of five health maintenance organizations within the Vaccine Safety Datalink (VSD) during 1993-1999. This study indicated no increase in clinically important medically attended events during the 2 weeks after inactivated influenza vaccination compared with control periods 3-4 weeks before and after vaccination (246). A retrospective cohort study using VSD medical records data from 45,356 children aged 6-23 months during 1991-2003 provided additional evidence supporting the overall safety of TIV in this age group. During the 2 weeks after vaccination, TIV was not associated with statistically significant increases in any clinically important medically attended events other than gastritis/duodenitis, compared with 2-week control periods before and after vaccination. Analysis also indicated that 13 diagnoses, including acute upper respiratory illness, otitis media, and asthma, were substantially less common during the 2 weeks after influenza vaccination. On chart review, most children with a diagnosis of gastritis/duodenitis had acute episodes of vomiting or diarrhea, which usually are self-limiting symptoms. The positive or negative associations between TIV and any of these diagnoses do not necessarily indicate a causal relationship (247). The study identified no increased risk for febrile seizure during the 3 days after vaccination. Similarly, no increased risk for febrile seizure was observed during the 14 days after TIV vaccination, after controlling for simultaneous receipt of measles-mumps-rubella (MMR) vaccine, which has a known association with febrile seizures in the second week after MMR vaccination (247). Another analysis assessed risk for prespecified adverse events in the VSD, including seizures and Guillain-Barré Syndrome (GBS), after TIV during three influenza seasons (2005-06, 2006-07, and 2007-08). No elevated risk for adverse events was identified among 1,195,552 TIV doses administered to children aged <18 years (248).
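The VSD analyses above compare event rates during a short post-vaccination risk window against rates in nearby control windows in the same children. The following is a minimal sketch of that comparison with hypothetical counts; the actual studies used more elaborate regression models and confidence intervals:

```python
def rate_ratio(risk_events: int, risk_days: float,
               control_events: int, control_days: float) -> float:
    """Ratio of the event rate in the post-vaccination risk window to the
    rate in the control windows. Each child contributes person-time to both
    windows, so each child serves as his or her own control."""
    return (risk_events / risk_days) / (control_events / control_days)


# Hypothetical: 12 medically attended events per 1,000 children in a 14-day
# risk window vs. 23 events per 1,000 children in 28 days of control windows
# (2 weeks before plus 2 weeks after the risk window).
print(round(rate_ratio(12, 14, 23, 28), 2))  # 1.04 -> no apparent elevation
```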
In a study of 791 healthy children aged 1-15 years, postvaccination fever was noted among 12% of those aged 1-5 years, 5% among those aged 6-10 years, and 5% among those aged 11-15 years (139). Fever, malaise, myalgia, and other systemic symptoms that can occur after vaccination with inactivated vaccine most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (249). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days (249).
Data about potential adverse events among children after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). Because of the limitations of passive reporting systems, determining causality for specific types of adverse events usually is not possible using VAERS data alone. Published reviews of VAERS reports submitted after administration of TIV to children aged 6-23 months indicated that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures; the majority of the limited number of reported seizures appeared to be febrile (250,251). Seizure and fever were the leading serious adverse events (SAEs) reported to VAERS in these studies (250,251); analysis of VSD data did not confirm the association between influenza vaccination and febrile seizures observed in VAERS (247).
In April 2010, Australia's Therapeutic Goods Administration reported preliminary data indicating an elevated risk for febrile reactions, including febrile seizures, among young children in Australia who received the 2010 trivalent vaccine Fluvax Jr., the southern hemisphere inactivated trivalent vaccine for children manufactured by CSL Biotherapies. The risk for febrile seizures was estimated to be as high as five to nine cases per 1,000 vaccinated children aged <5 years, and most seizures occurred among children aged <3 years. Other influenza vaccines, including previous seasonal and pandemic influenza vaccines manufactured by CSL Biotherapies, have not been associated with an increased risk for febrile seizures among children in the United States or Australia. As of July 2010, no cause for the increased frequency of febrile reactions among young children who received the southern hemisphere CSL Biotherapies vaccine had been identified (252). ACIP will continue to monitor safety studies being conducted in Australia and might provide further guidance on use of Afluria, the northern hemisphere trivalent vaccine manufactured by CSL Biotherapies, later in 2010. Immunization providers should consult updated information on use of the CSL vaccine from CDC and from FDA (http://www.fda.gov/BiologicsBloodVaccines/SafetyAvailability/VaccineSafety/default.htm).
# Adults
In placebo-controlled studies among adults, the most frequent side effect of vaccination was soreness at the vaccination site (affecting 10%-64% of patients) that lasted <2 days (253,254). These local reactions typically were mild and rarely interfered with the recipients' ability to conduct usual daily activities. Placebo-controlled trials demonstrated that among older persons and healthy young adults, administration of TIV is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (77,198,253-255). One prospective cohort study indicated that the rate of adverse events was similar among hospitalized persons who either were aged ≥65 years or were aged 18-64 years and had one or more chronic medical conditions compared with outpatients (220). Among adults vaccinated in consecutive years, reaction frequencies declined in the second year of vaccination (256). In clinical trials, SAEs were reported to occur after vaccination with TIV at a rate of <1%. Adverse events in adults aged ≥18 years reported to VAERS during 1990-2005 were analyzed. The most common adverse events reported to VAERS in adults included injection-site reactions, pain, fever, myalgia, and headache. The VAERS review identified no new safety concerns. Fourteen percent of the TIV VAERS reports in adults were classified as SAEs, similar to the proportion seen overall in VAERS. The most common SAE reported after receipt of TIV in VAERS in adults was GBS (257). The potential association between TIV and GBS has been an area of ongoing research (see Guillain-Barré Syndrome and TIV). No elevated risk for prespecified events after TIV was identified among 4,773,956 adults in a VSD analysis (249). Solicited injection-site reactions and systemic adverse events among persons aged ≥65 years were more frequent after vaccination with a vaccine containing 180 mcg of HA antigen (Fluzone High-Dose, sanofi pasteur) compared with a standard dose (45 mcg) (Fluzone, sanofi pasteur) but were typically mild and transient. In the largest study, 915 (36%) of 2,572 persons who received Fluzone High-Dose reported injection-site pain, compared with 306 (24%) of the 1,260 subjects who received Fluzone. The pain was of mild intensity and resolved within 3 days in the majority of subjects. Among Fluzone High-Dose recipients, 1.1% reported moderate to severe fever; this was substantially higher than the 0.3% of Fluzone recipients who reported this systemic adverse event (222). During the 6-month follow-up period, SAEs were reported in 6% of Fluzone High-Dose recipients and 7% of Fluzone recipients (222).
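The injection-site pain comparison above (915 of 2,572 high-dose recipients vs. 306 of 1,260 standard-dose recipients) can be checked with a standard two-proportion z-test. This is only a back-of-the-envelope confirmation that the difference is unlikely to be due to chance, not the analysis reported in the trial:

```python
import math


def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sample z-statistic for a difference in proportions, using the
    pooled estimate of the common proportion for the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se


# Injection-site pain: 915 of 2,572 high-dose vs. 306 of 1,260 standard-dose.
print(round(two_proportion_z(915, 2572, 306, 1260), 1))  # ~7.0
```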
# Pregnant Women and Neonates
FDA has classified TIV as a "Pregnancy Category C" medication, indicating that adequate animal reproduction studies have not been conducted. Available data do not indicate that influenza vaccine causes fetal harm when administered to a pregnant woman. One study of approximately 2,000 pregnant women who received TIV during pregnancy demonstrated no adverse fetal effects and no adverse effects during infancy or early childhood (258). A matched case-control study of 252 pregnant women who received TIV within the 6 months before delivery found no adverse events after vaccination among pregnant women and no difference in pregnancy outcomes compared with 826 pregnant women who were not vaccinated (212). During 2000-2003, an estimated 2 million pregnant women were vaccinated, and only 20 adverse events among women who received TIV were reported to VAERS during this time, including nine injection-site reactions and eight systemic reactions (e.g., fever, headache, and myalgias). In addition, three miscarriages were reported, but these were not known to be related causally to vaccination (259). Similar results have been reported in certain smaller studies (211,213,260), and a recent international review of data on the safety of TIV concluded that no evidence exists to suggest harm to the fetus (261). The rate of adverse events associated with TIV was similar to the rate of adverse events among pregnant women who received pneumococcal polysaccharide vaccine in one small randomized controlled trial in Bangladesh, and no severe adverse events were reported in any study group (217).
# Persons with Chronic Medical Conditions
In a randomized cross-over study of children and adults with asthma, no increase in asthma exacerbations was reported for either age group (262), and two additional studies also have indicated no increase in wheezing among vaccinated asthmatic children (177) or adults (195). One study reported that 20%-28% of children aged 9 months-18 years with asthma had injection-site pain and swelling at the site of influenza vaccination (167), and another study reported that 23% of children aged 6 months-4 years with chronic heart or lung disease had injection-site reactions (153). A blinded, randomized, cross-over study of 1,952 adults and children with asthma demonstrated that only self-reported "body aches" were reported more frequently after receipt of TIV (25%) than placebo injection (21%) (262). However, a placebo-controlled trial of TIV indicated no difference in injection-site reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years (157).
Among children with high-risk medical conditions, one study of 52 children aged 6 months-3 years reported fever among 27% and irritability and insomnia among 25% (153), and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (263). No placebo comparison group was used in these studies.
# Immunocompromised Persons
Data demonstrating safety of TIV for HIV-infected persons are limited, but no evidence exists that vaccination has a clinically important impact on HIV infection or immunocompetence. One study demonstrated a transient (i.e., 2-4 week) increase in HIV RNA (ribonucleic acid) levels in one HIV-infected person after influenza virus infection (264). Studies have demonstrated a transient increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (202,265). However, more recent and better-designed studies have not documented a substantial increase in the replication of HIV (266-269). CD4+ T-lymphocyte cell counts or progression of HIV disease have not been reduced after influenza vaccination among HIV-infected persons compared with unvaccinated HIV-infected persons (202,270). Limited information is available about the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza virus infection or influenza vaccination (94,271).
Data are similarly limited for persons with other immunocompromising conditions. In small studies, vaccination did not affect allograft function or cause rejection episodes in recipients of kidney transplants (205,206), heart transplants (207), or liver transplants (208).
# Immediate Hypersensitivity Reactions After Receipt of Influenza Vaccines
Vaccine components rarely can cause allergic reactions, also called immediate hypersensitivity reactions, among certain recipients. Immediate hypersensitivity reactions are mediated by preformed immunoglobulin E (IgE) antibodies against a vaccine component and usually occur within minutes to hours of exposure (272). Symptoms of immediate hypersensitivity range from mild urticaria (hives) and angioedema to anaphylaxis. Anaphylaxis is a severe, life-threatening reaction that involves multiple organ systems and can progress rapidly. Symptoms and signs of anaphylaxis can include but are not limited to generalized urticaria, wheezing, swelling of the mouth and throat, difficulty breathing, vomiting, hypotension, decreased level of consciousness, and shock. Minor symptoms such as red eyes or hoarse voice also might be present (246,272-275).
Allergic reactions might be caused by the vaccine antigen, residual animal protein, antimicrobial agents, preservatives, stabilizers, or other vaccine components (276). Manufacturers use a variety of compounds to inactivate influenza viruses and add antibiotics to prevent bacterial growth. Package inserts for specific vaccines of interest should be consulted for additional information. ACIP has recommended that all vaccine providers be familiar with the office emergency plan and be certified in cardiopulmonary resuscitation (246). The Clinical Immunization Safety Assessment (CISA) network, a collaboration between CDC and six medical research centers with expertise in vaccination safety, has developed an algorithm to guide evaluation and revaccination decisions for persons with suspected immediate hypersensitivity after vaccination (272).
Immediate hypersensitivity reactions after receipt of TIV or LAIV are rare. A VSD study of children aged <18 years in four health maintenance organizations during 1991-1997 estimated the overall risk for postvaccination anaphylaxis after childhood vaccines to be approximately 1.5 cases per 1 million doses administered, and in this study, no cases were identified in TIV recipients (277). Anaphylaxis occurring after receipt of TIV and LAIV in adults has been reported rarely to VAERS (257).
Some immediate hypersensitivity reactions after receipt of TIV or LAIV are caused by the presence of residual egg protein in the vaccines (278). Although influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Asking persons if they can eat eggs without adverse effects is a reasonable way to determine who might be at risk for allergic reactions from receiving influenza vaccines (246). Persons who have had symptoms such as hives or swelling of the lips or tongue or who have experienced acute respiratory distress after eating eggs should consult a physician for appropriate evaluation to help determine if future influenza vaccine should be administered. Persons who have documented IgE-mediated hypersensitivity to eggs, including those who have had occupational asthma related to egg exposure or other allergic responses to egg protein, also might be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician before vaccination should be considered (279-281). A regimen has been developed for administering influenza vaccine to asthmatic children with severe disease and egg hypersensitivity (280).
Hypersensitivity reactions to other vaccine components also can occur rarely. Although exposure to vaccines containing thimerosal can lead to delayed-type (Type IV) hypersensitivity (282), the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (283,284). When reported, hypersensitivity to thimerosal typically has consisted of local delayed hypersensitivity reactions (283).
# Ocular and Respiratory Symptoms After Receipt of TIV
Ocular or respiratory symptoms have been reported occasionally within 24 hours after TIV administration, but these symptoms typically are mild and resolve quickly without specific treatment. In some trials conducted in the United States, ocular or respiratory symptoms included red eyes (<1%-6%), cough (1%-7%), wheezing (1%), and chest tightness (1%-3%) (274,275,285-287). However, most of these trials were not placebo-controlled, and causality cannot be determined.
In addition, ocular and respiratory symptoms are features of a variety of respiratory illnesses and seasonal allergies that would be expected to occur coincidentally among vaccine recipients unrelated to vaccination. A placebo-controlled vaccine effectiveness study among young adults indicated that 2% of persons who received the 2006-07 formulation of Fluzone (sanofi pasteur) reported red eyes compared with none of the controls (p=0.03) (288). A similar trial conducted during the 2005-06 influenza season indicated that 3% of Fluzone recipients reported red eyes compared with 1% of placebo recipients; however, the difference was not statistically significant (289).
Oculorespiratory syndrome (ORS), an acute, self-limited reaction to TIV with prominent ocular and respiratory symptoms, was first described during the 2000-01 influenza season in Canada. The initial case-definition for ORS was the onset of one or more of the following within 2-24 hours after receiving TIV: bilateral red eyes and/or facial edema and/or respiratory symptoms (cough, wheeze, chest tightness, difficulty breathing, sore throat, hoarseness, or difficulty swallowing) (290). ORS was strongly associated with one vaccine preparation (Fluviral S/F, Shire Biologics, Quebec, Canada) not available in the United States during the 2000-01 influenza season (291). Subsequent investigations identified persons with ocular or respiratory symptoms meeting the ORS case-definition in safety monitoring systems and trials that had been conducted before 2000 in Canada, the United States, and several European countries (292-294).
The cause of ORS has not been established; however, studies suggest that the reaction is not IgE-mediated (295). After changes in the manufacturing process of the vaccine preparation associated with ORS during 2000-01, the incidence of ORS in Canada was reduced greatly (293). In one placebo-controlled study, only hoarseness, cough, and itchy or sore eyes (but not red eyes) were strongly associated with a reformulated Fluviral preparation. These findings indicated that ORS symptoms following use of the reformulated vaccine were mild, resolved within 24 hours, and might not typically be of sufficient concern to cause vaccine recipients to seek medical care (296).
Ocular and respiratory symptoms reported after TIV administration, including ORS, have some similarities with immediate hypersensitivity reactions. One study indicated that the risk for ORS recurrence with subsequent vaccination is low, and persons with ocular or respiratory symptoms (e.g., bilateral red eyes, cough, sore throat, or hoarseness) after receipt of TIV that did not involve the lower respiratory tract have been revaccinated without reports of SAEs after subsequent exposure to TIV (297).
# Revaccination in Persons Who Experienced Ocular or Respiratory Symptoms After Receipt of TIV
When assessing whether a patient who experienced ocular and respiratory symptoms should be revaccinated, providers should determine whether concerning signs and symptoms of IgE-mediated immediate hypersensitivity are present (see Immediate Hypersensitivity Reactions After Receipt of Influenza Vaccines). Health-care providers who are unsure whether symptoms reported or observed after receipt of TIV represent an IgE-mediated hypersensitivity response should seek advice from an allergist/immunologist. Persons with symptoms of possible IgE-mediated hypersensitivity after receipt of TIV should not receive influenza vaccination unless hypersensitivity is ruled out or revaccination is administered under close medical supervision (272).
Ocular or respiratory symptoms observed after receipt of TIV often are coincidental and unrelated to TIV administration, as observed among placebo recipients in some randomized controlled studies. Determining whether ocular or respiratory symptoms are coincidental or related to possible ORS might not be possible. Persons who have had red eyes, mild upper facial swelling, or mild respiratory symptoms (e.g., sore throat, cough, or hoarseness) after receipt of TIV without other concerning signs or symptoms of hypersensitivity can receive TIV in subsequent seasons without further evaluation. Two studies indicated that persons who had symptoms of ORS after receipt of TIV were at a higher risk for ORS after subsequent TIV administration; however, these events usually were milder than the first episode (297,298).
# Contraindications and Precautions for Use of TIV
TIV is contraindicated and should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine unless the recipient has been desensitized. Prophylactic use of antiviral agents is an option for preventing influenza among such persons. Information about vaccine components is located in package inserts from each manufacturer. Persons with moderate to severe acute febrile illness usually should not be vaccinated until their symptoms have abated. Moderate or severe acute illness with or without fever is a precaution for TIV. GBS within 6 weeks following a previous dose of influenza vaccine is considered to be a precaution for use of influenza vaccines.
# Guillain-Barré Syndrome and TIV
The annual incidence of GBS is 10-20 cases per 1 million adults (299). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni gastrointestinal infections and upper respiratory tract infections, are associated with GBS (300-302). A recent study identified serologically confirmed influenza virus infection as a trigger of GBS, with time from onset of influenza illness to GBS of 3-30 days. The estimated frequency of influenza-related GBS was four to seven times higher than the frequency that has been estimated for influenza-vaccine-associated GBS (303).
The 1976 swine influenza vaccine was associated with an increased frequency of GBS, estimated at one additional case of GBS per 100,000 persons vaccinated (304,305). The risk for influenza-vaccine-associated GBS was higher among persons aged ≥25 years than among persons aged <25 years (306). However, obtaining epidemiologic evidence for a small increase in risk for a rare condition with multiple causes is difficult, and evidence for a causal relation between subsequent vaccines prepared from other influenza viruses and GBS has been inconsistent.
None of the studies conducted using influenza vaccines other than the 1976 swine influenza vaccine has demonstrated an increase in GBS associated with influenza vaccines on the order of magnitude seen in 1976-77. In three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were not statistically significant (307-309). However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (95% CI = 1.0-2.8; p=0.04) during the 6 weeks after vaccination, representing approximately one additional case of GBS per 1 million persons vaccinated; the combined number of GBS cases peaked 2 weeks after vaccination (305). Results of a study that examined health-care data from Ontario, Canada, during 1992-2004 demonstrated a small but statistically significant temporal association between receiving influenza vaccination and subsequent hospital admission for GBS. However, no increase in cases of GBS at the population level was reported after introduction of a mass public influenza vaccination program in Ontario beginning in 2000 (310). Data from VAERS have documented decreased reporting of GBS occurring after vaccination across age groups over time, despite overall increased reporting of other non-GBS conditions occurring after administration of influenza vaccine (304). Published data from the United Kingdom's General Practice Research Database (GPRD) indicated that influenza vaccine was associated with a decreased risk for GBS, although whether this reflected protection against influenza or confounding because of a "healthy vaccinee" effect (e.g., healthier persons might be more likely to be vaccinated and also be at lower risk for GBS) (311) is unclear. A separate GPRD analysis identified no association between vaccination and GBS over a 9-year period; only three cases of GBS occurred within 6 weeks after administration of influenza vaccine (312). A third GPRD analysis indicated that GBS was associated with recent ILI but not with influenza vaccination (313,314).
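The estimate of roughly one additional GBS case per 1 million vaccinated persons is consistent with the background incidence cited earlier (10-20 cases per 1 million adults per year) and the relative risk of 1.7 over a 6-week window. A rough check (an approximation for orientation, not the published calculation):

```latex
% Expected background GBS cases per 1 million adults in a 6-week window:
\frac{6}{52} \times (10\text{--}20) \approx 1.2\text{--}2.3

% Excess cases attributable to vaccination at RR = 1.7:
(\mathrm{RR} - 1) \times (1.2\text{--}2.3) = 0.7 \times (1.2\text{--}2.3) \approx 0.8\text{--}1.6
\;\approx\; 1 \text{ additional case per } 10^{6} \text{ vaccinated persons}
```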
The estimated risk for GBS (on the basis of the few studies that have demonstrated an association between vaccination and GBS) is low (i.e., approximately one additional case per 1 million persons vaccinated). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh these estimates of risk for vaccine-associated GBS. No evidence indicates that the case-fatality ratio for GBS differs among vaccinated persons and those not vaccinated. Preliminary data from the systems monitoring influenza A (H1N1) 2009 monovalent vaccines suggest that if a risk exists for GBS after receiving inactivated vaccines, it is not substantially higher than that reported in some seasons for TIV (315); analyses are ongoing to quantify any potential GBS risk (316).
# Use of TIV Among Patients with a History of GBS
The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (299). Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown. Among 311 patients with GBS who responded to a survey, 11 (4%) reported some worsening of symptoms after influenza vaccination; however, some of these patients had received other vaccines at the same time, and recurring symptoms were generally mild (317). However, as a precaution, persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks of receipt of an influenza vaccine generally should not be vaccinated. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, the established benefits of influenza vaccination might outweigh the risks for many persons who have a history of GBS and who also are at high risk for severe complications from influenza.
# Vaccine Preservative (Thimerosal) in Multidose Vials of TIV
Thimerosal, a mercury-containing antibacterial compound, has been used as a preservative in vaccines and other medications since the 1930s (318) and is used in multidose vial preparations of TIV to reduce the likelihood of bacterial growth. No scientific evidence indicates that thimerosal in vaccines, including influenza vaccines, is a cause of adverse events other than occasional local hypersensitivity reactions in vaccine recipients. In addition, no scientific evidence indicates that thimerosal-containing vaccines are a cause of adverse events among children born to women who received vaccine during pregnancy. The weight of accumulating evidence does not suggest an increased risk for neurodevelopmental disorders from exposure to thimerosal-containing vaccines (319-328). The U.S. Public Health Service and other organizations have recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines as part of a strategy to reduce mercury exposures from all sources (319,320,329). Also, continuing public concern about exposure to mercury in vaccines has been viewed as a potential barrier to achieving higher vaccine coverage levels and reducing the burden of vaccine-preventable diseases, including influenza. Since mid-2001, vaccines routinely recommended for infants aged <6 months in the United States have been manufactured either without thimerosal or with greatly reduced (trace) amounts. As a result, a substantial reduction in the total mercury exposure from vaccines for infants and children already has been achieved (246). ACIP and other federal agencies and professional medical organizations continue to support efforts to provide thimerosal-preservative-free vaccine options.
The U.S. vaccine supply for infants and pregnant women is in a period of transition as manufacturers expand the availability of thimerosal-reduced or thimerosal-free vaccines to reduce the cumulative exposure of infants to mercury. Other environmental sources of mercury exposure are more difficult or impossible to avoid or eliminate (319). The benefits of influenza vaccination for all recommended groups, including pregnant women and young children, outweigh concerns based on a theoretic risk from thimerosal exposure through vaccination. The risks for severe illness from influenza virus infection are elevated among both young children and pregnant women, and vaccination has been demonstrated to reduce the risk for severe influenza illness and subsequent medical complications. In contrast, no harm from exposure to vaccine containing thimerosal preservative has been demonstrated. For these reasons, persons recommended to receive TIV may receive any age- and risk factor-appropriate vaccine preparation, depending on availability. An analysis of VAERS reports identified no difference in the safety profile of preservative-containing compared with preservative-free TIV vaccines in infants aged 6-23 months (251).
Nonetheless, some states have enacted legislation banning the administration of vaccines containing mercury; the provisions defining mercury content vary (330). LAIV and many of the single-dose vial or syringe preparations of TIV are thimerosal-free, and the number of influenza vaccine doses that do not contain thimerosal as a preservative is expected to increase (Table 2). However, these laws might present a barrier to vaccination unless influenza vaccines that do not contain thimerosal as a preservative are routinely available in those states.
# Dosage, Administration, and Storage of LAIV
Each dose of LAIV contains the same three vaccine antigens used in TIV. However, the antigens are constituted as live, attenuated, cold-adapted, temperature-sensitive vaccine viruses. Providers should refer to the package insert, which contains additional information about the formulation of this vaccine and other vaccine components. LAIV does not contain thimerosal. LAIV is made from attenuated viruses that are able to replicate efficiently only at temperatures present in the nasal mucosa. LAIV recipients might experience nasal congestion or mild fever, which is probably a result of effects of intranasal vaccine administration or local viral replication. However, LAIV does not typically cause the more prominent systemic symptoms of influenza such as high fever, myalgia, and severe fatigue (331).
LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV is not licensed for vaccination of children aged <2 years or persons aged >49 years. LAIV is supplied in a prefilled, single-use sprayer containing 0.2 mL of vaccine. Approximately 0.1 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. LAIV is shipped at 35°F-46°F (2°C-8°C). LAIV should be stored at 35°F-46°F (2°C-8°C) on receipt and can remain at that temperature until the expiration date is reached (331). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season.
# Shedding, Transmission, and Stability of LAIV Viruses
Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses after vaccination, although in lower amounts than occur typically with shedding of wild-type influenza viruses. In rare instances, shed vaccine viruses can be transmitted from vaccine recipients to unvaccinated persons. However, serious illnesses have not been reported among unvaccinated persons who have been infected inadvertently with vaccine viruses.
One study of 197 children aged 8-36 months in a child care center assessed transmissibility of vaccine viruses from 98 vaccinated children to the other 99 unvaccinated children; 80% of vaccine recipients shed one or more virus strains (mean duration: 7.6 days). One influenza type B vaccine strain isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient who was in the same play group. The placebo recipient from whom the influenza type B vaccine strain was isolated had symptoms of a mild upper respiratory illness but did not experience any serious clinical events. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 1%-2% (332).
Studies assessing whether vaccine viruses are shed have been based on viral cultures or polymerase chain reaction (PCR) detection of vaccine viruses in nasal aspirates from persons who have received LAIV. Among 345 subjects aged 5-49 years, 30% had detectable virus in nasal secretions obtained by nasal swabbing after receiving LAIV. The duration of virus shedding and the amount of virus shed were correlated inversely with age, and maximal shedding occurred within 2 days of vaccination. Symptoms reported after vaccination, including runny nose, headache, and sore throat, did not correlate with virus shedding (333). Other smaller studies have reported similar findings (334,335). Vaccine strain virus was detected from nasal secretions in one (2%) of 57 HIV-infected adults who received LAIV, in none of 54 HIV-negative participants (336), and in three (13%) of 23 HIV-infected children compared with seven (28%) of 25 children who were not HIV-infected (337). No participants in these studies had detectable virus beyond 10 days after receipt of LAIV. The possibility of person-to-person transmission of vaccine viruses was not assessed in these studies (334-337).
In clinical trials, viruses isolated from vaccine recipients have retained attenuated phenotypes. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (338). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes. A study conducted in a child care setting demonstrated that limited genetic change occurred in the LAIV strains following replication in the vaccine recipients (339).
# Immunogenicity, Efficacy, and Effectiveness of LAIV
LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not understood completely but appear to involve both serum and nasal secretory antibodies. The immunogenicity of the approved LAIV has been assessed in multiple studies conducted among children and adults (147,340-345).
# Healthy Children
A randomized, double-blind, placebo-controlled trial among 1,602 healthy children aged 15-71 months assessed the efficacy of LAIV against culture-confirmed influenza during two seasons (346,347). This trial included a subset of children aged 60-71 months who received 2 doses in the first season. During the first season (1996-97), when vaccine and circulating virus strains were well-matched, efficacy against culture-confirmed influenza was 94% for participants who received 2 doses of LAIV separated by ≥6 weeks and 89% for those who received 1 dose. During the second season (1997-98), when the A (H3N2) component in the vaccine was not well-matched with circulating virus strains, efficacy after 1 dose was 86%, for an overall efficacy of 92% across the two influenza seasons. Receipt of LAIV also resulted in 21% fewer febrile illnesses and a significant decrease in acute otitis media requiring antibiotics (346,348). Other randomized, placebo-controlled trials demonstrating the efficacy of LAIV in young children against culture-confirmed influenza include a study conducted among children aged 6-35 months attending child care centers during consecutive influenza seasons (349), in which 85%-89% efficacy was observed, and a study conducted among children aged 12-36 months living in Asia during consecutive influenza seasons, in which 64%-70% efficacy was documented (350). In one community-based, nonrandomized, open-label study, reductions in MAARI were observed among children who received 1 dose of LAIV during the 1999-00 and 2000-01 influenza seasons even though antigenically drifted influenza A/H1N1 and B viruses were circulating during that period (148). LAIV efficacy in preventing laboratory-confirmed influenza also has been demonstrated in studies comparing the efficacy of LAIV with TIV rather than with a placebo (see Comparisons of LAIV and TIV Efficacy or Effectiveness). In clinical trials, an increased risk for wheezing after vaccination was observed in LAIV recipients aged <24 months; an increase in hospitalizations also was observed in children aged <24 months after vaccination with LAIV (331).
# Healthy Adults
A randomized, double-blind, placebo-controlled trial of LAIV effectiveness among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in self-reported respiratory tract illness without laboratory confirmation, work loss, health-care visits, and medication use during influenza outbreak periods. The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. The frequency of febrile illnesses was not substantially decreased among LAIV recipients compared with those who received placebo. However, vaccine recipients had substantially fewer severe febrile illnesses (19% reduction) and febrile upper respiratory tract illnesses (24% reduction), as well as substantial reductions in days of illness, days of work lost, days with health-care-provider visits, and use of prescription antibiotics and over-the-counter medications (351,352). Efficacy against culture-confirmed influenza in a randomized, placebo-controlled study among young adults was 57% in the 2004-05 influenza season, 43% in the 2005-06 influenza season, and 51% in the 2007-08 influenza season, although efficacy in 2004-05 and 2005-06 was not statistically significant (187,288,289).
# Adverse Events After Receipt of LAIV
# Healthy Children Aged 2-18 Years
In a subset of healthy children aged 60-71 months from one clinical trial, certain signs and symptoms were reported more often after the first dose among LAIV recipients (n = 214) than among placebo recipients (n = 95), including runny nose (48% and 44%, respectively); headache (18% and 12%, respectively); vomiting (5% and 3%, respectively); and myalgias (6% and 4%, respectively) (346). However, these differences were not statistically significant. In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0%-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0%-21%) (340,342,343,349,353-356). These symptoms were associated more often with the first dose and were self-limited. A placebo-controlled trial in 9,689 children aged 1-17 years assessed prespecified medically attended outcomes during the 42 days after vaccination (355). In >1,500 statistical analyses of events during the 42 days after LAIV, elevated risks that were assessed to be biologically plausible were observed for asthma, upper respiratory infection, musculoskeletal pain, otitis media with effusion, and adenitis/adenopathy. The increased risk for wheezing events after LAIV was observed among children aged 18-35 months (RR: 4.06; 90% CI = 1.3-17.9). Of the 16 children with asthma-related events in this study, seven had a history of asthma on the basis of subsequent medical record review. None required hospitalization, and elevated risks for asthma were not observed in other age groups (355). In this study, the rate of SAEs was 0.2% in LAIV and placebo recipients; none of the SAEs was judged to be related to the vaccine by the study investigators (355).
In a randomized trial, LAIV and TIV were compared among children aged 6-59 months (357). Children with medically diagnosed or treated wheezing within 42 days before enrollment or with a history of severe asthma were excluded from this prelicensure study. Among children aged 24-59 months who received LAIV, the rate of medically significant wheezing, using a prespecified definition, was not greater compared with those who received TIV (357). Wheezing was observed more frequently among younger LAIV recipients aged 6-23 months in this study; LAIV is not licensed for this age group.
Another study was conducted among >11,000 children aged 18 months-18 years in which 18,780 doses of vaccine were administered over 4 years. For children aged 18 months-4 years, no increase was reported in asthma visits 0-15 days after vaccination compared with the prevaccination period. A significant increase in asthma events was reported 15-42 days after vaccination, but only in vaccine year 1 (358). A 4-year, open-label field trial assessed the safety of >2,000 doses of LAIV administered to children aged 18 months-18 years with a history of intermittent wheeze who were otherwise healthy. Among these children, no increased risk was reported for medically attended acute respiratory illnesses, including acute asthma exacerbation, during the 0-14 or 0-42 days after LAIV compared with the pre- and postvaccination reference periods (359).
Initial data from VAERS during 2007-2008 and 2008-2009, following ACIP's recommendation for use of LAIV in healthy children aged 2-4 years, did not demonstrate an increased frequency of wheezing after administration of LAIV. However, data also indicate that uptake of LAIV among children aged 2-4 years was limited (CDC, unpublished data, 2010). Safety monitoring for wheezing events after LAIV is ongoing.
# Adults Aged <50 Years
Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (346,360). In one clinical trial among a subset of healthy adults aged 18-49 years, signs and symptoms reported significantly more often (p<0.05) among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (14% and 11%, respectively), runny nose (45% and 27%, respectively), sore throat (28% and 17%, respectively), chills (9% and 6%, respectively), and tiredness/weakness (26% and 22%, respectively) (144). A review of 460 reports to VAERS after distribution of approximately 2.5 million doses during the 2003-04 and 2004-05 influenza seasons did not indicate any new safety concerns (361). Few of the LAIV VAERS reports (9%) were SAEs; respiratory events (47%) were the most common conditions reported (361).
The 2010-11 seasonal live attenuated influenza vaccine will contain an influenza A (H1N1) California/7/2009-like strain, which was also the strain used for the 2009 pandemic H1N1 monovalent live attenuated vaccine. (See Safety Monitoring of Pandemic 2009 H1N1 Monovalent Vaccines for additional information about 2009 H1N1 monovalent vaccine safety data among children and adults.)
# Persons at Higher Risk for Influenza-Related Complications
Limited data assessing the safety of LAIV use for certain groups at higher risk for influenza-related complications are available. In one study of 54 HIV-infected persons aged 18-58 years with CD4+ counts ≥200 cells/mm³ who received LAIV, no SAEs were reported during a 1-month follow-up period (336). Similarly, one study demonstrated no significant difference in the frequency of adverse events or viral shedding among HIV-infected children aged 1-8 years on effective antiretroviral therapy who were administered LAIV compared with HIV-uninfected children receiving LAIV (337). LAIV was well-tolerated among adults aged ≥65 years with chronic medical conditions (362). These findings suggest that persons at risk for influenza complications who have inadvertent exposure to LAIV would not have significant adverse events or prolonged viral shedding and that persons who have contact with persons at higher risk for influenza-related complications may receive LAIV.
# Safety Monitoring of Pandemic 2009 H1N1 Monovalent Vaccines
The 2010-11 seasonal influenza vaccine will contain an influenza A (H1N1) California/7/2009-like strain, which was also the strain used for the 2009 pandemic H1N1 monovalent vaccines. Clinical immunogenicity and safety studies of the 2009 H1N1 monovalent vaccines indicate that the reactogenicity profile in children and adults is similar to that of seasonal influenza vaccines (158-160,184). Ongoing comprehensive safety monitoring of the pandemic 2009 H1N1 vaccine was implemented as part of the pandemic immunization program (363). A nongovernment working group was established by the National Vaccine Advisory Committee to provide an independent review of safety data, with members representing other federal advisory committees as well as experts in internal medicine, pediatrics, immunology, and vaccine safety (314). Data from the first 2 months of implementation of H1N1 vaccination from VAERS and VSD suggested a similar safety profile for influenza A (H1N1) 2009 monovalent vaccines and seasonal influenza vaccines. As of July 2010, analysis and review of vaccine safety data from numerous systems were underway (314,316).
# Comparisons of LAIV and TIV Efficacy or Effectiveness
Both TIV and LAIV have been demonstrated to be effective in children and adults. However, data directly comparing the efficacy or effectiveness of these two types of influenza vaccines are limited and insufficient to identify whether one vaccine might offer a clear advantage over the other in certain settings or populations. Studies comparing the efficacy of TIV to that of LAIV have been conducted in a variety of settings and populations using several different outcomes. One randomized, double-blind, placebo-controlled challenge study that was conducted among 92 healthy adults aged 18-41 years assessed the efficacy of both LAIV and TIV in preventing influenza infection when challenged with wild-type strains that were antigenically similar to vaccine strains (364). The overall efficacy in preventing laboratory-documented influenza from all three influenza strains combined was 85% for LAIV and 71% for TIV when participants were challenged 28 days after vaccination with viruses to which they were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant in this limited study. No additional challenges were conducted to assess efficacy at time points later than 28 days (364). In a randomized, double-blind, placebo-controlled trial that was conducted among young adults during the 2004-05 influenza season, when the majority of circulating H3N2 viruses were antigenically drifted from that season's vaccine viruses, the efficacy of LAIV and TIV against culture-confirmed influenza was 57% and 77%, respectively. The difference in efficacy was not statistically significant and was attributable primarily to a difference in efficacy against influenza B (289). Similar studies conducted during the 2005-06 and 2007-08 influenza seasons identified no significant difference in vaccine efficacy in 2005-06 (288), but a 50% relative efficacy of TIV compared with LAIV in the 2007-08 season (187).
A randomized controlled clinical trial conducted among children aged 6-59 months during the 2004-05 influenza season demonstrated a 55% reduction in cases of culture-confirmed influenza among children who received LAIV compared with those who received TIV (357). In this study, LAIV efficacy was higher than TIV efficacy against both antigenically drifted and well-matched viruses (357). An open-label, nonrandomized, community-based influenza vaccine trial conducted during an influenza season when circulating H3N2 strains were poorly matched with strains contained in the vaccine also indicated that LAIV, but not TIV, was effective against antigenically drifted H3N2 strains during that influenza season. In this study, children aged 5-18 years who received LAIV had significant protection against laboratory-confirmed influenza (37%) and pneumonia and influenza events (50%) (365). An observational study conducted among military personnel aged 17-49 years over three influenza seasons indicated that persons who received TIV had a substantially lower incidence of health-care encounters resulting in diagnostic coding for pneumonia and influenza compared with those who received LAIV. However, among new recruits being vaccinated for the first time, the incidence of pneumonia- and influenza-coded health-care encounters among those who received LAIV was similar to that among those who received TIV (366).
Although LAIV is not licensed for use in persons with risk factors for influenza complications, certain studies have compared the efficacy of LAIV to TIV in these groups. LAIV provided 32% increased protection in preventing culture-confirmed influenza compared with TIV in one study conducted among children aged ≥6 years and adolescents with asthma (367) and 52% increased protection compared with TIV among children aged 6-71 months with recurrent respiratory tract infections (368).
# Effectiveness of Vaccination for Decreasing Transmission to Contacts
Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce ILI and complications among persons at high risk. Influenza virus infection and ILI are common among HCP (369-371). Influenza outbreaks have been attributed to low vaccination rates among HCP in hospitals and long-term-care facilities (372-374). One serosurvey demonstrated that 23% of HCP had serologic evidence of influenza virus infection during a single influenza season; the majority had mild illness or subclinical infection (369). Observational studies have demonstrated that vaccination of HCP is associated with decreased deaths among nursing home patients (375,376). In one cluster-randomized controlled trial that included 2,604 residents of 44 nursing homes, significant decreases in mortality, ILI, and medical visits for ILI care were demonstrated among residents in nursing homes in which staff were offered influenza vaccination (coverage rate: 48%) compared with nursing homes in which staff were not provided with vaccination (coverage rate: 6%) (377). Another trial demonstrated substantially lower rates of ILI among residents and fewer staff absences in nursing homes where staff were specifically targeted for vaccination (coverage rate: 70%) compared with nursing homes where no intervention was attempted (coverage rate: 32%) (378). A review concluded that vaccination of HCP in settings in which patients also were vaccinated provided significant reductions in deaths among elderly patients from all causes and deaths from pneumonia (379).
Epidemiologic studies of community outbreaks of influenza demonstrate that school-aged children typically have the highest influenza illness attack rates, suggesting that routine universal vaccination of children might reduce transmission to their household contacts and possibly others in the community. Results from certain studies have indicated that the benefits of vaccinating children might extend to protection of their adult contacts and of persons at risk for influenza complications in the community. However, these data are limited, and most studies have not used laboratory-confirmed influenza as an outcome measure. A single-blinded, randomized controlled study conducted as part of a 1996-1997 vaccine effectiveness study demonstrated that vaccinating preschool-aged children with TIV reduced influenza-related morbidity among some household contacts (380). A randomized, placebo-controlled trial among children with recurrent respiratory tract infections demonstrated that members of families with children who had received a live attenuated virosomal vaccine formulation (not currently available in the United States) were substantially less likely to have respiratory tract infections and reported substantially fewer workdays lost compared with families with children who received placebo (381). One cluster-randomized trial conducted among rural Hutterite communities in Canada compared laboratory-confirmed influenza among unvaccinated persons in communities where children aged 3-15 years were administered influenza vaccine (coverage: 83%) with communities where children received hepatitis A vaccine. Influenza vaccine effectiveness for prevention of influenza among unvaccinated persons was 61% (95% CI = 8%-81%) (382). In nonrandomized community-based studies, administration of LAIV has been demonstrated to reduce MAARI (383,384) and ILI-related economic and medical consequences (e.g., workdays lost and number of health-care provider visits) among contacts of vaccine recipients (384). Households with children attending schools in which school-based LAIV vaccination programs had been established reported less ILI and fewer physician visits during peak influenza season compared with households with children in schools in which no LAIV vaccination had been offered. However, a decrease in the overall rate of school absenteeism was not reported in communities in which LAIV vaccination was offered (384). During an influenza outbreak during the 2005-06 influenza season, countywide school-based influenza vaccination was associated with reduced absenteeism among elementary and high school students in one county that implemented a school-based vaccination program compared with another county without such a program (385). These community-based studies have not used laboratory-confirmed influenza as an outcome.
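The 61% estimate from the Hutterite trial measures indirect (herd) protection. Setting aside the survival-analysis methods of the published trial, indirect effectiveness is conceptually a comparison of attack rates among unvaccinated persons across the two study arms; the figures below are illustrative, not the trial's actual attack rates:

```latex
% Indirect vaccine effectiveness among unvaccinated community members:
%   AR_I = attack rate in unvaccinated persons, intervention communities
%   AR_C = attack rate in unvaccinated persons, control communities
\[
VE_{\mathrm{indirect}} = \left(1 - \frac{AR_{I}}{AR_{C}}\right) \times 100\%
\]
% Illustrative only: AR_I = 3.9% and AR_C = 10.0% would give
% VE_indirect = (1 - 0.039/0.100) x 100% = 61%.
```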
Some studies also have documented reductions in influenza illness among persons living in communities where focused programs for vaccinating children have been conducted. A community-based observational study conducted during the 1968 pandemic using a univalent inactivated vaccine reported that a vaccination program targeting school-aged children (coverage rate: 86%) in one community reduced influenza rates within the community among all age groups compared with another community in which aggressive vaccination was not conducted among school-aged children (386). An observational study conducted in Russia demonstrated reductions in ILI among the community-dwelling elderly after implementation of a vaccination program using TIV for children aged 3-6 years (57% coverage achieved) and children and adolescents aged 7-17 years (72% coverage achieved) (387). In a nonrandomized community-based study conducted over three influenza seasons, 8%-18% reductions in the incidence of MAARI during the influenza season among adults aged ≥35 years were observed in communities in which LAIV was offered to all children aged ≥18 months (estimated coverage rate: 20%-25%) compared with communities that did not provide routine influenza vaccination programs for all children (383). In a subsequent influenza season, the same investigators documented a 9% reduction in MAARI rates during the influenza season among persons aged 35-44 years in intervention communities, where coverage was estimated at 31% among school children. However, MAARI rates among persons aged ≥45 years were lower in the intervention communities regardless of the presence of influenza in the community, suggesting that lower rates could not be attributed to vaccination of school children against influenza (365).
The largest study to examine the community effects of increasing overall vaccine coverage was an ecologic study that described the experience in Ontario, Canada, which is the only province to implement a universal influenza vaccination program beginning in 2000. On the basis of models developed from administrative and viral surveillance data, influenza-related mortality, hospitalizations, ED use, and physicians' office visits decreased substantially more in Ontario after program introduction than in other provinces, with the largest reductions observed in younger age groups (388). In addition, influenza-associated antibiotic prescriptions were substantially reduced compared with other provinces (389).
# Efficacy and Effectiveness of Influenza Vaccination When Circulating Influenza Virus Strains Differ from Vaccine Strains
Vaccination can provide reduced but substantial cross-protection against drifted strains in some seasons, including reductions in severe outcomes such as hospitalization. Usually one or more circulating viruses with antigenic changes compared with the vaccine strains are identified in each influenza season. In addition, two distinct lineages of influenza B viruses have co-circulated in recent years, and limited cross-protection is observed against the lineage not represented in the vaccine (70). However, assessment of the clinical effectiveness of influenza vaccines cannot be determined solely by laboratory evaluation of the degree of antigenic match between vaccine and circulating strains. In some influenza seasons, circulating influenza viruses with significant antigenic differences predominate, and reductions in vaccine effectiveness sometimes are observed compared with seasons when vaccine and circulating strains are well-matched (77,170,188,239,289,390). However, even during years when vaccine strains were not antigenically well-matched to circulating strains (the result of antigenic drift), substantial protection has been observed against severe outcomes, presumably because of vaccine-induced cross-reacting antibodies (77,188,289,352). For example, in one study conducted during the 2003-04 influenza season, when the predominant circulating strain was an influenza A (H3N2) virus that was antigenically different from that season's vaccine strain, effectiveness against laboratory-confirmed influenza illness among persons aged 50-64 years was 60% among healthy persons and 48% among persons with medical conditions that increased the risk for influenza complications (188). An interim, within-season analysis during the 2007-08 influenza season indicated that vaccine effectiveness was 44% overall, 54% among healthy persons aged 5-49 years, and 58% against influenza A, despite the finding that viruses circulating in the study area were predominately a drifted influenza A (H3N2) and an influenza B strain from a different lineage compared with vaccine strains (391). Among children, both TIV and LAIV provide protection against infection even in seasons when vaccine and circulating strains are not well-matched. Vaccine effectiveness against ILI was 49%-69% in two observational studies, and 49% against medically attended, laboratory-confirmed influenza in a case-control study conducted among young children during the 2003-04 influenza season, when a drifted influenza A (H3N2) strain predominated, based on viral surveillance data (165,169). However, the 2009-10 seasonal influenza vaccines provided no protection against medically attended illness caused by the pandemic 2009 influenza A (H1N1) virus, because of substantial changes in key viral antigens compared with recently circulating strains (392). Continued improvements in collecting representative circulating viruses and use of surveillance data to forecast antigenic drift are needed. Manufacturing trivalent influenza virus vaccines is a challenging process that takes 6-8 months to complete. Shortening manufacturing time, which would allow more time to identify good vaccine candidate strains from among the most recent circulating strains, also is important. Data from multiple seasons that are collected in a consistent manner are needed to better understand vaccine effectiveness during seasons when circulating and vaccine virus strains are not well-matched.
# Cost-Effectiveness of Influenza Vaccination
Economic studies of influenza vaccination are difficult to compare because they have used different measures of both costs and benefits (e.g., cost-only, cost-effectiveness, cost-benefit, or cost-utility measures). However, most studies indicate that vaccination reduces or minimizes health care, societal, and individual costs and the productivity losses and absenteeism associated with influenza illness. One national study estimated the annual economic burden of seasonal influenza in the United States (using 2003 population and dollars) to be $87.1 billion, including $10.4 billion in direct medical costs (78).
Studies of influenza vaccination in the United States among persons aged ≥65 years have estimated substantial reductions in hospitalizations and deaths and overall societal cost savings (234,235). A study of a larger population comparing persons aged 50-64 years with those aged ≥65 years estimated the cost-effectiveness of influenza vaccination to be $28,000 per QALY saved (in 2000 dollars) in persons aged 50-64 years compared with $980 per QALY saved among persons aged ≥65 years (393).
Economic analyses among adults aged <65 years have reported mixed results regarding influenza vaccination. Two studies in the United States indicated that vaccination can reduce both direct medical costs and indirect costs from work absenteeism and reduced productivity (79,394). However, another U.S. study indicated no productivity and absentee savings in a strategy to vaccinate healthy working adults, although vaccination still was estimated to be cost-effective (395). In Ontario, Canada, where a universal influenza vaccination program was implemented beginning in 2000, costs were estimated to be approximately twice as much as a targeted vaccination program; however, the number of cases of influenza was reduced 61%, and influenza-related mortality declined 28%, saving an estimated 1,134 QALYs per season overall from a health-care payer perspective. Most cost savings were attributed to the avoidance of hospitalizations. The incremental cost-effectiveness ratio was estimated to be $10,797 Canadian per QALY gained (396).
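The CAD $10,797-per-QALY figure for Ontario is an incremental cost-effectiveness ratio (ICER). The cited analysis reports the result rather than the computation; the definition below is standard, and the incremental cost shown is a back-calculation from the figures quoted above, not a number taken from the study:

```latex
% Incremental cost-effectiveness ratio (ICER), universal vs. targeted program:
\[
ICER = \frac{C_{\mathrm{universal}} - C_{\mathrm{targeted}}}
            {QALY_{\mathrm{universal}} - QALY_{\mathrm{targeted}}}
\]
% Consistency check (illustrative back-calculation): an incremental cost of
% roughly CAD $12.2 million per season, divided by the 1,134 QALYs saved per
% season, gives approximately CAD $10,800 per QALY gained, in line with the
% published CAD $10,797 estimate.
```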
Cost analyses have documented the considerable financial burden of illness among children. In a study of 727 children conducted at a medical center during 2000-2004, the mean total cost of hospitalization for influenza-related illness was $13,159 ($39,792 for patients admitted to an intensive care unit and $7,030 for patients cared for exclusively in the general wards) (397). A strategy that focuses on vaccinating children with medical conditions that confer a higher risk for influenza complications is more cost-effective than a strategy of vaccinating all children (395). An analysis that compared the costs of vaccinating children of varying ages with TIV and LAIV indicated that costs per QALY saved increased with age for both vaccines. In 2003 dollars per QALY saved, costs for routine vaccination using TIV were $12,000 for healthy children aged 6-23 months and $119,000 for healthy adolescents aged 12-17 years compared with $9,000 and $109,000, respectively, using LAIV (398). Economic evaluations of vaccinating children have demonstrated a wide range of cost estimates, but have generally found this strategy to be either cost saving or cost beneficial (399-402).
Economic analyses are most influenced by the vaccination venue, with vaccination in medical-care settings incurring higher projected costs. In a published model, the mean cost (year 2004 values) of vaccination was lower in mass vaccination ($17.04) and pharmacy ($11.57) settings than in scheduled doctor's office visits ($28.67) (403). Vaccination in nonmedical settings was projected to be cost saving for healthy adults aged ≥50 years and for high-risk adults of all ages. For healthy adults aged 18-49 years, preventing an episode of influenza would cost $90 if vaccination were delivered in a pharmacy setting, $210 in a mass vaccination setting, and $870 during a scheduled doctor's office visit (403). Medicare and Vaccines for Children program reimbursement rates in recent years have been less than the costs associated with providing vaccination in a medical practice (404,405).
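The cost-per-episode-prevented figures above follow from dividing the net cost of vaccinating one person by the expected number of influenza episodes prevented per vaccinee. The cited model's internal parameters are not reproduced here, so the sketch below uses hypothetical inputs; only the $11.57 pharmacy delivery cost is taken from the study cited above:

```latex
% Cost per influenza episode prevented (generic back-of-envelope form):
%   c_vacc = cost of delivering one vaccination (venue-dependent)
%   s      = offsetting savings per vaccinee from averted illness
%   AR_u   = seasonal attack rate if unvaccinated
%   VE     = vaccine effectiveness
\[
\text{Cost per episode prevented} = \frac{c_{\mathrm{vacc}} - s}{AR_{u} \times VE}
\]
% Hypothetical inputs: c_vacc = $11.57 (pharmacy), s = $0.57, AR_u = 0.17,
% VE = 0.72  =>  (11.57 - 0.57) / (0.17 x 0.72) = approximately $90,
% the same order as the published pharmacy-setting estimate.
```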
# Vaccination Coverage Levels
Continued annual monitoring is needed to determine the effects on vaccination coverage of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors. One of the Healthy People 2010 objectives (objective no. 14-29a) includes achieving an influenza vaccination coverage level of 90% for persons aged ≥65 years and among nursing home residents (406,407); new strategies to improve coverage are needed to achieve this objective (408,409).
On the basis of 2009 final data and 2010 early release data from the National Health Interview Survey (NHIS), estimated national influenza vaccine coverage during the 2007-08 and 2008-09 influenza seasons did not increase substantially among persons aged ≥65 years and those aged 50-64 years (Table 3) and remained only slightly higher than coverage levels observed before the 2004-05 vaccine shortage year (410-412). In the 2007-08 and 2008-09 influenza seasons, estimated vaccination coverage levels (based on NHIS data) among adults with high-risk conditions aged 18-49 years were 30.4% and 33%, respectively, substantially lower than the Healthy People 2000 and Healthy People 2010 objectives of 60% (Table 3) (406,407). Among adults with asthma aged 18-49 years and 50-64 years, estimated coverage during the 2006-07 influenza season was 24% and 55%, respectively; the national objective for coverage among adults with asthma is 60% (413). Epidemiologic studies conducted during the 2009 pandemic indicated that more hospitalizations and deaths were occurring among adults aged <65 years with high-risk conditions than among any other group, and these adults were among the initial target groups to receive the 2009 H1N1 vaccination while vaccine supply was limited (414). However, coverage among adults aged <65 years with medical conditions that confer a higher risk for influenza complications was <40% for the 2009 H1N1 monovalent vaccine (415).
During the 2009 influenza A (H1N1) pandemic, state-level estimates of vaccine coverage for both seasonal influenza and the monovalent 2009 H1N1 vaccines were obtained via telephone surveys conducted by the Behavioral Risk Factor Surveillance System (BRFSS) and the National 2009 H1N1 Flu Survey. By January 31, 2010, estimated state seasonal influenza vaccination coverage among persons aged ≥6 months ranged from 30.3% to 54.5% (median: 40.6%). Median coverage was 41.2% for children aged 6 months-17 years, 38.3% for adults aged 18-49 years with high-risk conditions, 28.8% for adults aged 18-49 years without high-risk conditions, 45.5% for adults aged 50-64 years, and 69.3% for adults aged ≥65 years. These results suggest large increases in coverage for children and a moderate increase for adults aged 18-49 years without high-risk conditions compared with seasonal influenza vaccine coverage estimates in previous seasons (415,416). However, vaccine coverage estimates using BRFSS data typically have been higher than estimates derived from NHIS data (416).
Studies conducted among children and adults indicate that opportunities to vaccinate persons at risk for influenza complications (e.g., during hospitalizations for other causes) often are missed. In one study, 23% of children hospitalized with influenza and a comorbidity had a previous hospitalization during the preceding influenza vaccination season (417). In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (418). A study in New York City conducted during 2001-2005 among 7,063 children aged 6-23 months indicated that 2-dose vaccine coverage increased from 1.6% to 23.7% over time; however, although the average number of medical visits per child representing a missed opportunity for vaccination decreased during the course of the study from 2.9 to 2.0, 55% of all visits during the final year of the study still represented a missed vaccination opportunity (419). Using standing orders in hospitals increases vaccination rates among hospitalized persons (420), and vaccination of hospitalized patients is safe and stimulates an appropriate immune response (220). In one survey, the strongest predictor of receiving vaccination was the respondent's belief that he or she was in a high-risk group; however, many persons in high-risk groups did not know that they were in a group recommended for vaccination (421,422). In one study, more than half of adults who did not receive influenza vaccination reported that they would have received vaccine if it had been recommended by their health-care provider (422).
Reducing racial/ethnic health disparities, including disparities in influenza vaccination coverage, is an overarching national goal that is not being met (407). Estimated vaccination coverage levels in 2008 among persons aged ≥65 years were 70% for non-Hispanic whites, 52% for non-Hispanic blacks, and 52% for Hispanics (423). Among Medicare beneficiaries, other key factors that contribute to disparities in coverage include variations in the propensity of patients to actively seek vaccination and variations in the likelihood that providers recommend vaccination (424,425). One study estimated that eliminating these disparities in vaccination coverage would have an impact on mortality similar to the impact of eliminating deaths attributable to kidney disease among blacks or liver disease among Hispanics (426). Differences in coverage by race or ethnicity might be partly attributable to differences in beliefs about vaccine effectiveness and safety (422). Among nursing home patients, fewer blacks and Hispanics are offered vaccine or receive it compared with whites, and blacks refuse vaccination more frequently (427). Disparities in seasonal influenza vaccine coverage among adult whites (43%), blacks (31%), and Hispanics (31%) also were observed during 2009-2010 (416).
Reported vaccination levels are low among children at increased risk for influenza complications. Coverage among children aged 2-17 years with asthma was estimated to be 29% for the 2004-05 influenza season (428). During the 2007-08 influenza season, the fourth season for which ACIP recommended that all children aged 6-23 months receive vaccination, National Immunization Survey data demonstrated that 41% of children aged 6-23 months received at least 1 dose of influenza vaccine, and 23% were fully vaccinated (i.e., received 1 or 2 doses depending on previous vaccination history); however, results varied substantially among states (429). Data from the eight Immunization Information System sentinel sites during 2008-09 indicated that 48% of children aged 6-23 months had received at least 1 dose, and 29% were fully vaccinated (430). Coverage levels in these sites for older children were lower and declined with increasing age, ranging from 22% fully vaccinated among children aged 2-4 years to 9% among children aged 13-18 years (430). As has been reported for older adults, a physician recommendation for vaccination and the perception that having a child be vaccinated "is a smart idea" were associated positively with likelihood of vaccination of children aged 6-23 months (431). Similarly, children with asthma were more likely to be vaccinated if their parents recalled a physician recommendation to be vaccinated or believed that the vaccine worked well (432). Implementation of a reminder/recall system in a pediatric clinic increased the percentage of children with asthma receiving vaccination from 5% to 32% (433). Reminder/recall systems might be particularly useful when limited vaccine availability requires targeted vaccination of children with high-risk conditions (434).

Although annual vaccination is recommended for HCP and is a high priority for reducing morbidity associated with influenza in health-care settings and for expanding influenza vaccine use (435-437), NHIS data demonstrated a vaccination coverage level of only 44.4% among HCP during the 2006-07 season and 49% during the 2007-08 season (Table 3). Coverage levels during the 2009 pandemic were higher for seasonal vaccine but remained low for the 2009 pandemic vaccine. By mid-January 2010, estimated vaccination coverage among HCP was 37% for 2009 pandemic influenza A (H1N1) and 62% for seasonal influenza, based on a RAND Corporation-conducted telephone survey that used a somewhat different methodology than NHIS (438). Overall, 64% received either of these influenza vaccines, higher coverage than any previous season, but only 35% of HCP reported receiving both vaccines (438). Vaccination of HCP has been associated with reduced work absenteeism (370) and with fewer deaths among nursing home patients (375,377) and elderly hospitalized patients (379). Factors associated with a higher rate of influenza vaccination among HCP include older age, being a hospital employee, having employer-provided health-care insurance, having had pneumococcal or hepatitis B vaccination in the past, or having visited a health-care professional during the preceding year. HCP who decline vaccination frequently express doubts about the risk for influenza and the need for vaccination, are concerned about vaccine effectiveness and side effects, and dislike injections (439).

Footnotes (Table 3):

†† Adults categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever being told by a physician they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having a diagnosis of cancer during the preceding 12 months (excluding nonmelanoma skin cancer) or ever being told by a physician they have lymphoma, leukemia, or blood cancer during the previous 12 months (coding for a cancer diagnosis was not yet completed at the time of this publication, so this diagnosis was not included in the 2006-07 season data); 3) being told by a physician they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the preceding 12 months. For children aged <18 years, high-risk conditions included ever having been told by a physician of having diabetes, cystic fibrosis, sickle cell anemia, congenital heart disease, other heart disease, or neuromuscular conditions (seizures, cerebral palsy, and muscular dystrophy), or having an asthma episode or attack during the preceding 12 months.

§§ Aged 18-44 years, pregnant at the time of the survey, and without high-risk conditions.

¶¶ Adults were classified as health-care workers if they were currently employed in a health-care occupation or in a health-care-industry setting, on the basis of standard occupation and industry categories recoded in groups by CDC's National Center for Health Statistics.

* Interviewed sample child or adult in each household containing at least one of the following: a child aged <5 years, an adult aged ≥65 years, or any person aged 5-17 years at high risk (as defined in the previous footnote for adults at high risk). To obtain information on household composition and high-risk status of household members, the sampled adult, child, and person files from NHIS were merged. Interviewed adults who were health-care workers or who had high-risk conditions were excluded. Information could not be assessed regarding high-risk status of other adults aged 18-64 years in the household; therefore, certain adults aged 18-64 years who live with an adult aged 18-64 years at high risk were not included in the analysis. Also note that although the recommendation for children aged 2-4 years was not in place during the 2005-06 season, children aged 2-4 years were included in these calculations as if the recommendation already was in place to facilitate comparison of coverage data for subsequent years.
Vaccine coverage among pregnant women increased during the 2007-08 influenza season, with 24% of pregnant women reporting vaccination, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions; seasonal vaccine coverage estimates for 2008-09 were only 11%, however, which is closer to pre-2007 estimates and likely reflects variation in estimates caused by the small sample size rather than significant fluctuations in coverage (Table 3). The causes of persistent low coverage among pregnant women are not fully determined. In one study of influenza vaccination acceptance by pregnant women, 71% of those who were offered the vaccine chose to be vaccinated (440). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients in their practices, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (441). Pregnancy was an important risk factor during the 2009 H1N1 pandemic (106,120), and because the 2009 H1N1 influenza virus is expected to continue circulating during 2010-11, improved vaccination coverage among pregnant women is needed.
Influenza vaccination coverage in all groups recommended for vaccination remains suboptimal. Despite the timing of the peak of influenza disease, administration of vaccine decreases substantially after November. According to results from NHIS, for the three most recent influenza seasons for which these data are available, approximately 84% of all influenza vaccinations were administered during September-November. Among persons aged ≥65 years, the percentage of September-November vaccinations was 92% (442). Because many persons recommended for vaccination remain unvaccinated at the end of November, CDC encourages public health partners and health-care providers to conduct vaccination clinics and other activities that promote seasonal influenza vaccination annually during National Influenza Vaccination Week (December 6-12, 2010) and throughout the remainder of the influenza season.
Self-reported influenza vaccination status among adults, compared with determination of vaccination status from the medical record, is a sensitive and specific source of information (443). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (443). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available.
Vaccination coverage estimates for the influenza A (H1N1) 2009 monovalent vaccines indicate that most doses were administered to the initial target groups, and that, by January 2, 2010 (approximately 90 days after vaccine first became available), an estimated 20% of the U.S. population (61 million persons) had been vaccinated, including 28% of persons in the initial target groups. An estimated 30% of U.S. children aged 6 months-18 years had been vaccinated, including 33% of children aged 6 months-4 years. Estimated coverage for specific initial target groups was 38% for pregnant women, 22% for HCP, and 12% for adults aged 25-64 years with medical conditions that confer a higher risk for influenza complications. Estimates of 2009 H1N1 vaccination coverage levels generally were higher among non-Hispanic whites than among non-Hispanic blacks (438). These coverage estimates were in the same approximate range as estimates for seasonal vaccination coverage, suggesting that concerns about the pandemic were not sufficient to overcome some barriers to influenza vaccination among persons at higher risk for influenza complications.
# Recommendations for Using TIV and LAIV During the 2010-11 Influenza Season
Routine vaccination of all persons aged ≥6 months is recommended. During the 2009-10 influenza season, an estimated 85% of the U.S. population already had an indication for vaccination (444). A universal vaccination recommendation for all persons aged ≥6 months eliminates the need to determine whether each person has an indication for vaccination and emphasizes the importance of preventing influenza among persons of all ages. The expansion of recommendations for annual vaccination to include all adults is supported by evidence that influenza vaccines are safe and effective. In addition, morbidity and mortality among adults aged <50 years, including adults who were previously healthy, occur in every influenza season. Although most adults in this age group who develop influenza-related complications have medical risk factors, some have no previously identified risk factors for influenza complications, or have risk factors but are unaware that they should be vaccinated. Expansion of vaccination recommendations to all adults reflects the need to remove potential barriers to receipt of influenza vaccine, including lack of awareness about vaccine indications among persons at higher risk for influenza complications and their close contacts. Although the capacity now exists to produce sufficient influenza vaccines to meet the predicted increase in demand, the annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year.
Further support for expansion of recommendations to include all adults is based on data from the 2009 pandemic experience. Data from epidemiologic studies conducted during the 2009 influenza A (H1N1) pandemic indicate that the risk for influenza complications among adults aged <50 years who had 2009 pandemic influenza A (H1N1) is greater than is typically seen for seasonal influenza (12). Explosive outbreaks of 2009 H1N1 influenza among young adults in settings such as college campuses (445) were part of the basis for prioritizing vaccination of all persons aged 6 months-24 years during the 2009 pandemic influenza response. Pandemic 2009 influenza A (H1N1)-like viruses are expected to continue to circulate during the 2010-11 influenza season, and a substantial proportion of young adults do not yet have immunity as a result of natural infection with this virus (446). In addition, severe infections were observed more frequently in some younger adults who did not have previously recognized risk factors for influenza-related complications, including obese persons, persons in certain racial and ethnic minority groups, and postpartum women (24,48,85,86,90,447).
Both TIV and LAIV prepared for the 2010-11 season will include A/California/7/2009 (H1N1)-like, A/Perth/16/2009 (H3N2)-like, and B/Brisbane/60/2008-like antigens. The influenza B virus component of the 2010-11 vaccine is from the Victoria lineage (448). These viruses will be used because they are representative of influenza viruses that are predicted to be circulating in the United States during the 2010-11 influenza season and have favorable growth properties in eggs. The H1N1 strain recommended for the 2010-11 trivalent influenza vaccine is the same as the vaccine strain in the 2009 H1N1 monovalent vaccines given during the pandemic. The 2009 pandemic influenza virus-derived vaccine strain has replaced the seasonal influenza H1N1 vaccine strains that were present in the vaccine since 1977.
Healthy nonpregnant persons aged 2-49 years can choose to receive either TIV or LAIV. Some TIV formulations are FDA-licensed for use in persons as young as age 6 months (see Recommended Vaccines for Different Age Groups). Persons aged ≥65 years can be administered either standard-dose TIV (15 mcg of HA antigen per vaccine strain) or the newly licensed TIV containing 60 mcg of HA antigen per vaccine strain (Sanofi Pasteur). TIV is licensed for use in persons with high-risk conditions (Table 2). LAIV is FDA-licensed for use only for persons aged 2-49 years. In addition, FDA has indicated that the safety of LAIV has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications.
All children aged 6 months-8 years who have not been vaccinated previously at any time with at least 1 dose of either LAIV (if appropriate) or TIV should receive 2 doses of age-appropriate vaccine in the same season, with a single dose during subsequent seasons. Persons who received a 2009 H1N1 monovalent vaccine should still be vaccinated with the 2010-11 formulation of TIV or LAIV to provide protection against influenza A (H3N2) and influenza B strains that are expected to circulate during the 2010-11 influenza season. In addition, the duration of protection after receipt of the 2009 H1N1 monovalent influenza vaccines is unknown and likely declines over time.
In addition, emphasis on providing routine vaccination annually to certain groups at higher risk for influenza infection or complications is advised, including all children aged 6 months-18 years, all persons aged ≥50 years, and other persons at risk for medical complications from influenza. These persons, their household and close contacts, and all HCP should continue to be a focus of vaccination efforts as providers and programs transition to routinely vaccinating all persons aged ≥6 months (Box). Despite a recommendation for vaccination of approximately 85% of the U.S. population over the past two seasons, <50% of the U.S. population received a seasonal influenza vaccination in 2008-09 or 2009-10. Estimated coverage for the 2009 H1N1 monovalent vaccine was <40% (438).
# Rationale for Vaccination of Specific Populations
# Children Aged 6 Months-18 Years
Annual vaccination for all children aged 6 months-18 years is recommended. Healthy children aged 2-18 years can receive either LAIV or TIV. Children aged 6-23 months, and children aged 2-4 years who have evidence of asthma or wheezing or who have medical conditions that put them at higher risk for influenza complications, should receive TIV (see Considerations When Using LAIV).
Recommendations to provide routine influenza vaccination to all children and adolescents aged 6 months-18 years are made on the basis of 1) accumulated evidence that influenza vaccine is effective and safe for children (see Influenza Vaccine Efficacy, Effectiveness, and Safety); 2) increased evidence that influenza has substantial adverse impacts among children and their contacts (e.g., school absenteeism, increased antibiotic use, medical care visits, and parental work loss) (see Health-Care Use, Hospitalizations, and Deaths Attributed to Influenza); and 3) an expectation that a simplified age-based influenza vaccine recommendation for all children and adolescents will improve vaccine coverage levels among children who already have a risk- or contact-based indication for annual influenza vaccination.
Children typically have the highest attack rates during community outbreaks of influenza and serve as a major source of transmission within communities (1,2). If sufficient vaccination coverage among children can be achieved, potential benefits include the indirect effect of reducing influenza among persons who have close contact with children and reducing overall transmission within communities (449). Achieving and sustaining community-level reductions in influenza will require mobilization of community resources and development of sustainable annual vaccination campaigns to assist health-care providers and vaccination programs in providing influenza vaccination services to children of all ages. In many areas, innovative community-based efforts, which might include mass vaccination programs in school or other community settings, will be needed to supplement vaccination services provided in health-care providers' offices or public health clinics. In nonrandomized community-based controlled trials, reductions in ILI-related symptoms and medical visits among household contacts have been demonstrated in communities where vaccination programs among school-aged children were established compared with communities without such vaccination programs (365,386,387).
All children aged 6 months-8 years who receive a seasonal influenza vaccine for the first time should be administered 2 doses. Children aged 6 months-8 years who received a seasonal vaccine for the first time during 2009-2010 but who received only 1 dose should receive 2 doses, rather than 1, during 2010-2011. In addition, for the 2010-11 influenza season, children aged 6 months-8 years who did not receive at least 1 dose of an influenza A (H1N1) 2009 monovalent vaccine should receive 2 doses of a 2010-11 seasonal influenza vaccine, regardless of previous influenza vaccination history (Figure 3). Children aged 6 months-8 years for whom the previous 2009-10 seasonal or influenza A (H1N1) 2009 monovalent vaccine history cannot be determined should receive 2 doses of a 2010-11 seasonal influenza vaccine. For all children, the second dose of a recommended 2-dose series should be administered ≥4 weeks after the initial dose.
The recommendation to administer 2 doses to children who did not receive an influenza A (H1N1) 2009 monovalent vaccine, regardless of previous seasonal influenza vaccine history, is new. This change in recommendations is made on the basis of data from several immunogenicity studies indicating that 80% of infants and young children and 90% of older children who receive 2 doses of a vaccine that contains the 2009 H1N1 antigen develop protective antibody levels (158,160; National Institutes of Health, unpublished data, 2010). Therefore, current immunogenicity data indicate that at least 2 doses of the 2009 H1N1 vaccine antigen are needed to produce protective antibody levels for the majority of young children. This recommendation includes children who have received at least 2 doses of a seasonal influenza vaccine in a previous season and who would normally be scheduled to receive only 1 seasonal vaccine dose in the 2010-11 season.
A second dose is not necessary for children being vaccinated for the first time who were aged 8 years at the time of the first dose but who are seen again after they have reached age 9 years. Children aged 6 months-8 years who had never received a seasonal influenza vaccine previously and who received only the 2009 H1N1 monovalent vaccine should receive 2 doses of the 2010-11 seasonal influenza vaccine to provide adequate protection against influenza A (H3N2) and influenza B. If possible, children recommended for 2 doses of seasonal influenza vaccine should receive both doses before the onset of influenza season. However, vaccination, including the second dose, is recommended even after influenza virus begins to circulate in a community.
Children who had a laboratory-confirmed 2009 pandemic influenza A (H1N1) virus infection (e.g., confirmed by reverse transcription-PCR or virus culture specific for 2009 pandemic influenza A [H1N1] virus) are likely to be immune to this virus. There is no known harm in providing 2 doses of 2010-11 seasonal influenza vaccine to a child who has been infected previously with the 2009 pandemic influenza A (H1N1) virus; however, at the immunization provider's discretion, these children can receive the appropriate number of seasonal vaccine doses (1 or 2) without regard to previous receipt of the influenza A (H1N1) 2009 monovalent vaccine. Most children, however, did not receive specific diagnostic testing (i.e., were untested or received only a rapid antigen test), and for others, evidence of laboratory confirmation using a diagnostic test specific for the 2009 H1N1 antigen is unavailable to immunization providers. If no test results are available and no influenza A (H1N1) 2009 monovalent vaccine had been administered, children who had a febrile respiratory illness during 2009-2010 cannot be assumed to have had 2009 pandemic influenza A (H1N1) virus infection, and these children should receive 2 doses of the 2010-11 seasonal vaccine. Providers who are determining the number of vaccine doses recommended for children with laboratory-confirmed 2009 pandemic influenza A (H1N1) virus infection (Figure 3) should also determine whether 2 doses are indicated on the basis of seasonal vaccine history.
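Because the dose-count rules in this section branch on several pieces of vaccination history, a compact restatement may help program staff reason through individual cases. The following is a minimal sketch in Python of the logic described above; all parameter names are hypothetical, and Figure 3 remains the authoritative decision aid:

```python
def recommended_dose_count(age_years: float,
                           history_known: bool,
                           first_seasonal_vaccination: bool,
                           first_vaccinated_2009_10_one_dose: bool,
                           received_h1n1_monovalent: bool) -> int:
    """Return the recommended number of 2010-11 seasonal doses (1 or 2).

    Sketch of the rules summarized in the text; a second dose of any
    2-dose series is administered >=4 weeks after the first. A child aged
    8 years at the first dose who turns 9 before the next visit still
    needs no second dose.
    """
    if not (0.5 <= age_years <= 8):
        return 1  # 2-dose rules apply only to children aged 6 months-8 years
    if not history_known:
        return 2  # 2009-10 seasonal or H1N1 monovalent history cannot be determined
    if first_seasonal_vaccination:
        return 2  # first-ever seasonal influenza vaccination
    if first_vaccinated_2009_10_one_dose:
        return 2  # first vaccinated in 2009-10 but received only 1 dose that season
    if not received_h1n1_monovalent:
        return 2  # no 2009 H1N1 monovalent dose, regardless of seasonal history
    return 1


# Example: a 4-year-old whose 2009-10 records cannot be located needs 2 doses.
assert recommended_dose_count(4, history_known=False,
                              first_seasonal_vaccination=False,
                              first_vaccinated_2009_10_one_dose=False,
                              received_h1n1_monovalent=False) == 2
```

Under this sketch, the only children in the 6 months-8 years range who receive a single dose are those with a known history that includes at least 1 dose of the 2009 H1N1 monovalent vaccine and a completed first season of seasonal vaccination.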
# Persons at Risk for Medical Complications
Vaccination to prevent influenza is particularly important for persons who are at increased risk for severe complications from influenza or at higher risk for influenza-related outpatient, ED, or hospital visits. When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to these persons, as well as to:

- household contacts and caregivers of children aged ≤59 months (i.e., aged <5 years) and adults aged ≥50 years, with particular emphasis on vaccinating contacts of children aged <6 months; and
- household contacts and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza.

For children, the risk for severe complications from influenza is highest among those aged <2 years, who have much higher rates of hospitalization for influenza-related complications compared with older children (7,54,61). Medical care and ED visits attributable to influenza are increased among children aged <5 years compared with older children (54). Chronic neurologic conditions are thought to place persons at higher risk for influenza complications on the basis of the potential for compromised respiratory function or the handling of respiratory secretions, both of which can increase the risk for aspiration; such conditions include cognitive dysfunction, spinal cord injuries, seizure disorders, or neuromuscular disorders (46).
An observational study conducted during the 2009 H1N1 pandemic indicated that morbid obesity, and possibly obesity, might be a new or previously unrecognized risk factor for influenza-related complications (85). In another study, American Indians/Alaska Natives were demonstrated to have a higher risk for death from 2009 H1N1 influenza (90). These medical and race/ethnicity risk factors might reflect a higher prevalence of underlying chronic medical conditions, including conditions that are not known by the patient or provider. Other minority groups, including blacks, have been demonstrated to have higher incidence of hospitalizations as a result of laboratory-confirmed influenza compared with whites (CDC, unpublished data, 2010); additional study is needed to determine the reasons. Persons who have chronic medical conditions, who are pregnant, or who are at higher risk for 2009 H1N1 influenza-related complications should be encouraged to begin receiving a routine annual influenza vaccination as programs and practitioners transition to providing vaccination for all persons aged ≥6 months (Box).
# Persons Who Live With or Care for Persons at Higher Risk for Influenza-Related Complications
All persons aged ≥6 months should be vaccinated annually. As providers and programs transition to providing annual vaccination to all persons, continued emphasis should be placed on vaccination of persons who live with or care for persons at higher risk for influenza-related complications. When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to persons at higher risk for influenza-related complications as well as to these persons:
- HCP;
- household contacts (including children) and caregivers of children aged ≤59 months (i.e., aged <5 years) and adults aged ≥50 years; and
- household contacts (including children) and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza.

Healthy persons who are infected with influenza virus, including those with subclinical infection, can transmit influenza virus to persons at higher risk for complications from influenza. In addition to HCP, groups that can transmit influenza to high-risk persons include:
- employees of assisted living and other residences for persons in groups at high risk;
- persons who provide home care to persons in groups at high risk; and
- household contacts of persons in groups at high risk, including contacts such as children or mothers of newborns.

In addition, because children aged <5 years are at increased risk for influenza-related hospitalization (7,47,61,450,451) compared with older children, vaccination is recommended for their household contacts and out-of-home caregivers. Because influenza vaccines have not been licensed by FDA for use among children aged <6 months, emphasis should be placed on vaccinating contacts of these children.
Healthy HCP and persons aged 2-49 years who are contacts of persons in these groups, and who are not contacts of severely immunocompromised persons living in a protected environment, can receive either TIV or LAIV (436,437,452).
Facilities that employ HCP should provide vaccine to workers by using approaches that have been demonstrated to be effective in increasing vaccination coverage. The HCP influenza coverage goal should be vaccination of 100% of employees who do not have medical contraindications. Health-care administrators should consider the level of vaccination coverage among HCP to be one measure of a patient safety quality program and consider obtaining signed declinations from personnel who decline influenza vaccination for reasons other than medical contraindications (437,453,454). Influenza vaccination rates among HCP within facilities should be measured regularly and reported, and ward-, unit-, and specialty-specific coverage rates should be provided to staff and administration (437).
Policies that work best to achieve this coverage goal might vary among facilities. Studies have demonstrated that organized campaigns can attain higher rates of vaccination among HCP with moderate effort and by using strategies that increase vaccine acceptance (435,437,455,456). A mandatory influenza vaccination policy for HCP, exempting only those with a medical contraindication, has been demonstrated to be a highly effective approach to achieving high vaccine coverage among HCP (456-458). Hospitals and health-care systems that have mandated vaccination of HCP often have achieved coverage rates of >90%, and persons refusing vaccination who do not have a medical contraindication have been required to wear a surgical mask during influenza season in some programs (458). Efforts to increase vaccination coverage among HCP using mandatory vaccination policies are supported by various national accrediting and professional organizations, including the Infectious Diseases Society of America, and in certain states by statute (457,459,460). Worker objections, including legal challenges, are an important consideration for facilities considering mandates (459,461). Studies to assess the impact of mandatory HCP vaccination on patient outcomes are needed.
The Joint Commission on Accreditation of Health-Care Organizations has approved an infection-control standard that requires accredited organizations to offer influenza vaccinations to staff, including volunteers and licensed independent practitioners with close patient contact. The standard became an accreditation requirement beginning January 1, 2007 (462). Some states have regulations regarding vaccination of HCP in long-term-care facilities (463), require that health-care facilities offer influenza vaccination to HCP, or require that HCP either receive influenza vaccination or indicate a religious, medical, or philosophic reason for not being vaccinated (464,465).
Children aged <6 months are not recommended for vaccination, and antivirals are not licensed for use among infants. Protection of young infants, who have hospitalization rates similar to those observed among the elderly, depends on vaccination of the infants' close contacts. A recent study conducted in Bangladesh demonstrated that infants born to vaccinated women have significant protection from laboratory-confirmed influenza, either through transfer of influenza-specific maternal antibodies or by reducing the risk for exposure to influenza that might occur through vaccination of the mother (217). All household contacts, health-care and day care providers, and other close contacts of young infants should be vaccinated.
Immunocompromised persons are at risk for influenza complications but might have inadequate protection after vaccination. Vaccination of close contacts of immunocompromised persons, including HCP, might reduce the risk for influenza transmission. In 2006, a joint recommendation from ACIP and the Hospital Infection Control Practices Advisory Committee (HICPAC) recommended that TIV be used for vaccinating household members, HCP, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment (typically defined as a specialized patient-care area with a positive airflow relative to the corridor, high-efficiency particulate air filtration, and frequent air changes) (437,466). To reduce the theoretic risk for vaccine virus transmission, ACIP/HICPAC recommended that HCP who receive LAIV should avoid providing care for severely immunosuppressed patients requiring a protected environment for 7 days after vaccination, and hospital visitors who have received LAIV should avoid contact with severely immunosuppressed persons in protected environments for 7 days after vaccination but should not be restricted from visiting less severely immunosuppressed patients. Healthy nonpregnant persons aged 2-49 years, including HCP, who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with chronic immunocompromising conditions such as HIV infection, corticosteroid or chemotherapeutic medication use, or who are cared for in other hospital areas such as neonatal intensive care units) can receive TIV or LAIV.
The rationale for avoiding use of LAIV among HCP or other close contacts of severely immunocompromised patients is the theoretic risk that a live attenuated vaccine virus could be transmitted to the severely immunosuppressed person. However, instances of LAIV transmission from a recently vaccinated person to an immunocompromised contact in health-care settings have not been reported. In addition, the temperature-sensitive and attenuated viruses present in LAIV do not cause illness when administered to immunocompromised persons with HIV infection (336), children undergoing cancer treatment (467), or immunocompromised ferrets given dexamethasone and cytarabine (468). Concerns about the theoretic risk posed by transmission of live attenuated vaccine viruses contained in LAIV to patients should not be used to justify preferential use of TIV in health-care settings other than inpatient units that house severely immunocompromised patients requiring protected environments. Some health-care facilities might choose to not restrict use of LAIV in close contacts of severely immunocompromised persons, based on the lack of evidence for transmission in health-care settings since licensure in 2004.
# Pregnant and Postpartum Women
Vaccination of pregnant women protects women and newborns. The American College of Obstetricians and Gynecologists and the American Academy of Family Physicians also have previously recommended routine vaccination of all pregnant women (469). Women who are postpartum are also at risk for influenza complications and should be vaccinated (108). No preference is indicated for use of TIV that does not contain thimerosal as a preservative (see Vaccine Preservative in Multidose Vials of TIV) for any group recommended for vaccination, including pregnant and postpartum women. LAIV is not licensed for use in pregnant women, but postpartum women can receive LAIV or TIV. Pregnant and postpartum women do not need to avoid contact with persons recently vaccinated with LAIV.
# Breastfeeding Mothers
Breastfeeding does not affect the immune response adversely and is not a contraindication for vaccination (246). Unless contraindicated because of other medical conditions, women who are breastfeeding can receive either TIV or LAIV. In one randomized controlled trial conducted in Bangladesh, infants born to women vaccinated during pregnancy had a lower risk for laboratory-confirmed influenza. However, the contribution to protection from influenza of breastfeeding compared with passive transfer of maternal antibodies during pregnancy was not determined (217).
# Travelers
The risk for exposure to influenza during travel depends on the time of year and destination. In the temperate regions of the Southern Hemisphere, influenza activity occurs typically during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large tourist groups (e.g., on cruise ships) that include persons from areas of the world in which influenza viruses are circulating (470,471). In the tropics, influenza occurs throughout the year. In a study among Swiss travelers to tropical and subtropical countries, influenza was the most frequently acquired vaccine-preventable disease (472).
Any traveler who wants to reduce the risk for influenza infection should consider influenza vaccination, preferably at least 2 weeks before departure. In particular, persons at high risk for complications of influenza and who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to travel:
- to the tropics,
- with organized tourist groups at any time of year, or
- to the Southern Hemisphere during April-September.
No information is available about the benefits of revaccinating persons before summer travel who already were vaccinated during the preceding fall, and revaccination is not recommended. Persons at high risk who receive the previous season's vaccine before travel should receive the current vaccine the following fall or winter. Persons at higher risk for influenza complications should consult with their health-care practitioner to discuss the risk for influenza or other travel-related diseases before embarking on travel during the summer.
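The travel guidance above reduces to a few date and destination checks. The following minimal sketch (in Python; the function and parameter names are ours, not from this report, and it is an illustration rather than a clinical tool) shows one way a clinic system might flag travelers for whom pre-travel vaccination is worth discussing.

```python
from datetime import date, timedelta

def flag_pretravel_vaccination(departure: date, tropics: bool,
                               tour_group: bool, southern_hemisphere: bool,
                               vaccinated_this_season: bool) -> str:
    """Illustrative sketch of the travel guidance above; not a clinical tool."""
    # Exposure risk: tropics year-round, organized tour groups at any time,
    # or Southern Hemisphere travel during April-September.
    at_risk_itinerary = (tropics or tour_group
                         or (southern_hemisphere and 4 <= departure.month <= 9))
    if not at_risk_itinerary:
        return "routine seasonal recommendations apply"
    if vaccinated_this_season:
        # Revaccination before summer travel is not recommended.
        return "already vaccinated this season; no revaccination before travel"
    # Vaccination is preferred at least 2 weeks before departure.
    return f"offer vaccine, ideally by {(departure - timedelta(weeks=2)).isoformat()}"
```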
# Recommended Vaccines for Different Age Groups
Each season, vaccination providers should check the latest information on FDA approval of the 2010-11 seasonal influenza vaccines and CDC recommendations for use of these vaccines to determine which vaccines are licensed for use in any particular age group. Immunization providers should consult updated information on use of influenza vaccines from CDC and from FDA (available at http://www.fda.gov/BiologicsBloodVaccines/SafetyAvailability/vaccinesafety/default.htm). The following information is based on approvals for the 2009-10 seasonal influenza vaccines.
When vaccinating children aged 6-35 months with TIV, health-care providers should use TIV that has been licensed by FDA for this age group (i.e., TIV manufactured by sanofi pasteur or CSL Biotherapies [Afluria]) (286). TIV from Novartis (Fluvirin) is FDA-approved in the United States for use among persons aged ≥4 years (287). One TIV preparation from GlaxoSmithKline (Fluarix) is licensed for use in children aged ≥3 years, and another preparation (FluLaval) is labeled for use in persons aged ≥18 years (274,275,285). LAIV from MedImmune (FluMist) is recommended for use by healthy nonpregnant persons aged 2-49 years (Table 2) (360). If a pediatric vaccine dose (0.25 mL) is administered inadvertently to an adult, an additional pediatric dose (0.25 mL) should be given to provide a full adult dose (0.5 mL).
If the error is discovered later (after the patient has left the vaccination setting), an adult dose should be administered as soon as the patient can return. Vaccination with a formulation approved for adult use should be counted as a dose if inadvertently administered to a child. An inactivated trivalent influenza vaccine (Fluzone High-Dose, sanofi pasteur) that contains an increased amount of influenza virus antigen compared with other inactivated influenza vaccines was licensed in 2009. Fluzone High-Dose is available as a single-dose prefilled syringe formulation distinguished from Fluzone by a gray syringe plunger rod (224). As with other 2010-11 influenza vaccines, Fluzone High-Dose will contain the three recommended virus strains (A/California/7/2009 (H1N1)-like, A/Perth/16/2009 (H3N2)-like, and B/Brisbane/60/2008-like antigens) (136). ACIP recommends that all persons aged ≥65 years receive an inactivated 2010-11 seasonal influenza vaccination but has not expressed a preference for Fluzone High-Dose or any other inactivated influenza vaccine for use in persons aged ≥65 years (473). Whether the higher postvaccination immune responses observed among Fluzone High-Dose vaccine recipients (221-223) will result in greater protection against influenza illness is not known. High-dose vaccine should not be administered to persons aged <65 years. Several other new vaccine formulations are being evaluated in immunogenicity and efficacy trials; when licensed, these new products will increase the influenza vaccine supply and provide additional vaccine choices for practitioners and their patients. Providers should review the formulation and packaging before administering influenza vaccine to ensure the product used is appropriate for the age of the patient.
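The age cutoffs above lend themselves to a simple lookup. The sketch below encodes only the 2009-10 approvals as described in this section; it is illustrative (the function name is ours), and any real system would need to be verified against current FDA labeling each season.

```python
def licensed_tiv_options(age_years: float) -> list[str]:
    """Age-based vaccine options as described in the text for 2009-10 approvals.
    Illustrative only -- always confirm against current FDA package inserts."""
    options = []
    if age_years >= 0.5:   # sanofi pasteur TIV and Afluria: persons aged >=6 months
        options += ["sanofi pasteur TIV", "Afluria (CSL Biotherapies)"]
    if age_years >= 3:     # Fluarix: persons aged >=3 years
        options.append("Fluarix (GlaxoSmithKline)")
    if age_years >= 4:     # Fluvirin: persons aged >=4 years
        options.append("Fluvirin (Novartis)")
    if age_years >= 18:    # FluLaval: persons aged >=18 years
        options.append("FluLaval (GlaxoSmithKline)")
    if age_years >= 65:    # Fluzone High-Dose: persons aged >=65 years only
        options.append("Fluzone High-Dose (sanofi pasteur)")
    return options
```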
# Influenza Vaccines and Use of Influenza Antiviral Medications
Administration of TIV to persons receiving influenza antivirals for treatment or chemoprophylaxis is acceptable. The effect on safety and effectiveness of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy. If influenza antiviral medications are administered within 2 weeks after receipt of LAIV, the vaccine dose should be repeated 48 or more hours after the last dose of antiviral medication. Persons receiving antivirals within the period 2 days before to 14 days after vaccination with LAIV should be revaccinated at a later date with any approved vaccine formulation (246,331).
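The two timing rules above (a 48-hour washout before LAIV and a 2-days-before to 14-days-after exposure window that invalidates a dose) can be expressed as simple date arithmetic. This is a minimal sketch with names of our choosing, not a clinical scheduling tool.

```python
from datetime import date, timedelta

def laiv_dose_valid(laiv_date: date, antiviral_days: list[date]) -> bool:
    """A LAIV dose should be repeated if any antiviral dose falls within the
    window from 2 days before to 14 days after vaccination (sketch of the
    interval rule above)."""
    window_start = laiv_date - timedelta(days=2)
    window_end = laiv_date + timedelta(days=14)
    return not any(window_start <= d <= window_end for d in antiviral_days)

def earliest_laiv_date(last_antiviral_dose: date) -> date:
    """LAIV should not be given until 48 hours after the last antiviral dose."""
    return last_antiviral_dose + timedelta(days=2)  # 48 hours
```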
# Considerations When Using LAIV
LAIV is an option for vaccination of healthy nonpregnant persons aged 2-49 years without contraindications, including HCP and other close contacts of high-risk persons (excepting severely immunocompromised hospitalized persons who require care in a protected environment). The precaution regarding use of LAIV in protected environments is based upon a theoretic concern that the live attenuated vaccine virus could be transmitted to severely immunocompromised persons. However, no transmission of LAIV in health-care settings ever has been reported, and because these viruses are also cold-adapted (and cannot effectively replicate at normal body temperature) the risk for transmitting a vaccine virus to a severely immunocompromised person and causing severe infection appears to be extremely low. HCP working in environments such as neonatal intensive care, oncology, or labor and delivery units can receive LAIV without any restrictions.
No preference is indicated for LAIV or TIV when considering vaccination of healthy nonpregnant persons aged 2-49 years. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response in children, its ease of administration, and the possibly increased acceptability of an intranasal rather than intramuscular route of administration.
If the vaccine recipient sneezes immediately after administration, the dose should not be repeated. However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness, or TIV should be administered instead. No data exist about concomitant use of nasal corticosteroids or other intranasal medications (331).
Although FDA licensure of LAIV excludes children aged 2-4 years with a history of asthma or recurrent wheezing, the precise risk, if any, of wheezing caused by LAIV among these children is unknown because experience with LAIV among these young children is limited. Young children might not have a history of recurrent wheezing if their exposure to respiratory viruses has been limited because of their age. Certain children might have a history of wheezing with respiratory illnesses but have not had asthma diagnosed.
Clinicians and vaccination programs should screen for asthma or wheezing illness (or history of wheezing illness) when considering use of LAIV for children aged 2-4 years, and should avoid use of this vaccine in children with asthma or a wheezing episode within the previous 12 months. Health-care providers should consult the medical record, when available, to identify children aged 2-4 years with asthma or recurrent wheezing that might indicate asthma. In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record during the preceding 12 months should not receive LAIV. TIV is available for use in children with asthma or wheezing (474). LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, use of TIV or deferral of administration until resolution of the illness should be considered. LAIV is approved for use in persons aged 2-49 years. However, the effectiveness or safety of LAIV is not known or is of potential concern for certain persons, and LAIV is not recommended for these persons. Do not administer LAIV to the following groups:
- persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs;
- children aged <2 years, because of an increased risk for hospitalization and wheezing observed in clinical trials;
- children aged 2-4 years whose parents or caregivers report that a health-care provider has told them during the preceding 12 months that their child had wheezing or asthma or whose medical record indicates a wheezing episode has occurred during the preceding 12 months;
- persons with asthma;
- persons aged ≥50 years;
- adults and children who have chronic pulmonary, cardiovascular (except isolated hypertension), renal, hepatic, neurologic/neuromuscular, hematologic, or metabolic disorders;
- adults and children who have immunosuppression (including immunosuppression caused by medications or by HIV);
- children or adolescents aged 6 months-18 years receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza virus infection); or
- pregnant women.
A moderate or severe illness with or without fever is a precaution for use of LAIV. Development of GBS within 6 weeks following a previous dose of influenza vaccine is considered to be a precaution for use of influenza vaccines. LAIV should not be administered to close contacts of immunosuppressed persons who require a protected environment.
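The "do not administer" list above can be read as a screening checklist. The sketch below encodes it as one boolean check; the parameter names are ours, the chronic-condition and immunosuppression inputs collapse the detailed list items into single flags, and the example is an illustration under those assumptions, not a clinical decision tool.

```python
def laiv_not_recommended(age_years: float, pregnant: bool, egg_allergy: bool,
                         asthma: bool, wheeze_past_year: bool,
                         chronic_medical_condition: bool,
                         immunosuppressed: bool, on_salicylates: bool) -> bool:
    """Encodes the 'do not administer LAIV' list above as a screening check."""
    if egg_allergy or pregnant or asthma or immunosuppressed:
        return True
    if age_years < 2 or age_years >= 50:
        return True
    if 2 <= age_years < 5 and wheeze_past_year:  # children aged 2-4 years
        return True
    if chronic_medical_condition:  # pulmonary, cardiovascular (except isolated
        return True                # hypertension), renal, hepatic, etc.
    if on_salicylates and 0.5 <= age_years <= 18:
        return True
    return False
```

A moderate or severe illness and a history of GBS within 6 weeks of a prior dose are precautions rather than contraindications, so they would warrant clinical judgment rather than an automatic check like this one.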
# Personnel Who Can Administer LAIV
Low-level introduction of vaccine viruses into the environment probably is unavoidable when administering LAIV, but no instances have been reported of illness or attenuated vaccine virus infections among inadvertently exposed HCP or immunocompromised patients. The risk for acquiring vaccine viruses from the environment is unknown but is probably low; in addition, vaccine viruses are cold-adapted and attenuated, and unlikely to cause symptomatic influenza. Severely immunosuppressed persons should not administer LAIV. However, other persons at higher risk for influenza complications can administer LAIV. These include persons with underlying medical conditions placing them at higher risk or who are likely to be at risk, including pregnant women, persons with asthma, and persons aged ≥50 years.
# Concurrent Administration of Influenza Vaccine With Other Vaccines
Use of LAIV concurrently with measles, mumps, and rubella (MMR) vaccine alone and with MMR and varicella vaccines among children aged 12-15 months has been studied, and no interference with the immunogenicity to antigens in any of the vaccines was observed (331,475). Among adults aged ≥50 years, the safety and immunogenicity of zoster vaccine and TIV were similar whether administered simultaneously or spaced 4 weeks apart (476). In the absence of specific data indicating interference, following ACIP's general recommendations for vaccination is prudent (246). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. Inactivated or live vaccines can be administered simultaneously with LAIV. However, after administration of a live vaccine, at least 4 weeks should pass before another live vaccine is administered.
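The live-vaccine spacing rule is a simple interval check: simultaneous administration is acceptable, but sequential live vaccines need at least 4 weeks between them. A minimal sketch, with names of our choosing:

```python
from datetime import date, timedelta

def live_vaccine_timing_ok(prior_live_dose: date, proposed_live_dose: date) -> bool:
    """Live vaccines may be given on the same day; otherwise at least 4 weeks
    should separate them (sketch of the general rule cited above)."""
    if proposed_live_dose == prior_live_dose:
        return True  # simultaneous administration
    return proposed_live_dose - prior_live_dose >= timedelta(weeks=4)
```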
# Recommendations for Vaccination Administration and Vaccination Programs
Influenza vaccination levels increased substantially over the past 20 years, and a record proportion of children received seasonal or pandemic influenza A (H1N1) vaccines in 2009-10. However, a majority of persons in most groups recommended for vaccination do not receive an annual vaccine. Strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (408,409,423), should be implemented whenever feasible. Vaccination efforts should begin as soon as vaccine is available and continue through the influenza season, which typically extends through April. Vaccination coverage can be increased by administering vaccine before and during the influenza season to persons during hospitalizations or routine health-care visits. Vaccinations can be provided in alternative settings (e.g., schools, pharmacies, grocery stores, workplaces, or other locations in the community), thereby making special visits to physicians' offices or clinics unnecessary. Coordinated campaigns such as the National Influenza Vaccination Week (December 6-12, 2010) provide opportunities to refocus public attention on the benefits, safety, and availability of influenza vaccination throughout the influenza season. The 2009 pandemic provided opportunities for innovative programs to administer vaccine in a variety of settings, and lessons learned from this experience should be applied when developing routine influenza immunization programs.
# Discussing Risk for Adverse Events after Vaccination
Concern about vaccine safety is often cited by persons who refuse vaccination, including health-care workers. When educating patients about adverse events, clinicians should provide Vaccine Information Statements (available at http://www.cdc.gov/vaccines/pubs/vis) and emphasize the risks and benefits of vaccination. Providers should inform patients or parents that 1) TIV contains noninfectious killed viruses and cannot cause influenza; 2) LAIV contains weakened influenza viruses that cannot replicate outside the upper respiratory tract and are unlikely to infect others; 3) many patients will experience no side effects and most known side effects are mild, transient, and manageable, such as injection-site pain after receipt of TIV or rhinorrhea after LAIV; and 4) concomitant symptoms or respiratory disease unrelated to vaccination with either TIV or LAIV can occur after vaccination.
Patients concerned about more severe adverse events might be reassured by discussing the many safety studies available, the safety monitoring systems currently in use, and the immunization provider or program's previous experience with influenza vaccines. Providers concerned about the risk for severe adverse events or who observe or report a severe adverse event after vaccination should keep in mind that relatively common events will occur by chance after vaccination. For example, one study used the background rate of spontaneous abortion to estimate that 397 per 1 million vaccinated pregnant women would be predicted to have a spontaneous abortion within 1 day of vaccination (477). Even rare events will be observed by chance after vaccination if large numbers of persons are vaccinated, as occurs with annual influenza immunization campaigns. For example, if a cohort of 10 million individuals was vaccinated, approximately 22 cases of GBS and six cases of sudden death would be expected to occur within 6 weeks of vaccination as coincident background cases unrelated to vaccination (477).
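The arithmetic behind such estimates is a plain rate calculation: background incidence times person-time observed. The sketch below reproduces the order of magnitude of the GBS figure; the rate constant used is our assumed illustrative value, since the report's numbers come from observed background rates rather than this formula.

```python
def expected_background_cases(rate_per_100k_py: float, cohort_size: int,
                              window_days: int) -> float:
    """Expected coincident cases in a post-vaccination window, assuming events
    accrue uniformly over the year (person-years approximated by cohort size)."""
    person_years = cohort_size * window_days / 365
    return rate_per_100k_py / 100_000 * person_years

# With an assumed GBS background rate of ~1.9 per 100,000 person-years,
# 10 million vaccinees observed for 6 weeks yield roughly 22 coincident cases:
print(round(expected_background_cases(1.9, 10_000_000, 42)))  # -> 22
```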
# Information About the Vaccines for Children Program
The Vaccines for Children (VFC) program supplies vaccine to all states, territories, and the District of Columbia for use by participating providers. These vaccines are to be provided to eligible children without vaccine cost to the patient or the provider. Although the provider might charge a vaccine administration fee, vaccination will not be denied to parents who cannot pay an administration fee. All routine childhood vaccines recommended by ACIP are available through this program, including influenza vaccines. The program saves parents and providers out-of-pocket expenses for vaccine purchases and provides cost savings to states through CDC's vaccine contracts. The program results in lower vaccine prices and ensures that all states pay the same contract prices. Detailed information about the VFC program is available at http://www.cdc.gov/vaccines/programs/vfc/default.htm.
# Influenza Vaccine Supply Considerations
The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. During the 2009-10 influenza season, 114 million doses of seasonal influenza vaccine were distributed in the United States. However, influenza vaccine distribution delays or vaccine shortages remain possible. One factor that affects production is the inherent critical time constraints in manufacturing the vaccine given the annual updating of the influenza vaccine strains. Multiple manufacturing and regulatory issues also might affect the production schedule.
If supplies of seasonal influenza vaccine are not adequate, vaccination should be carried out in accordance with local circumstances of supply and demand based on the judgment of state and local health officials and health-care providers. National guidance for tiered use of influenza vaccine during prolonged distribution delays or supply shortfalls will be based primarily on epidemiologic studies indicating that certain persons are at higher risk for influenza infection or influenza-related complications, as well as which vaccine formulations have limited supplies. When epidemiologic studies or other data that would guide tiered use are unavailable, persons previously demonstrated to be at higher risk for influenza or influenza-related complications should be among those targeted by immunization programs for receipt of limited supplies. Even if vaccine use is not restricted to certain persons known to be at higher risk for influenza complications, strategies employed by immunization programs and providers during periods of limited vaccine availability should emphasize outreach to persons at higher risk for influenza or influenza-related complications (Box), or who are part of populations that have limited access to medical care. During shortages of TIV, LAIV should be used preferentially when feasible for all healthy nonpregnant persons aged 2-49 years (including HCP) who desire or are recommended for vaccination to increase the availability of inactivated vaccine for persons at high risk.
# Timing of Vaccination
Vaccination efforts should be structured to ensure the vaccination of as many persons as possible over the course of several months, with emphasis on vaccinating before influenza activity in the community begins. Even if vaccine distribution begins before October, distribution probably will not be completed until December or January. The following recommendations reflect this phased distribution of vaccine.
In any given year, the optimal time to vaccinate patients cannot be determined precisely because influenza seasons vary in their timing and duration, and more than one outbreak might occur in a single community in a single year. In the United States, localized outbreaks that indicate the start of seasonal influenza activity can occur as early as October. However, in >80% of influenza seasons since 1976, peak influenza activity (which often is close to the midpoint of influenza activity for the season) has not occurred until January or later, and in >60% of seasons, the peak was in February or later. In general, health-care providers should begin offering vaccination soon after vaccine becomes available and if possible by October. To avoid missed opportunities for vaccination, providers should offer vaccination during routine health-care visits or during hospitalizations whenever vaccine is available.
Vaccination efforts should continue throughout the season, because the duration of the influenza season varies and influenza might not appear in certain communities until February or March. Providers should offer influenza vaccine routinely, and organized vaccination campaigns should continue throughout the influenza season, including after influenza activity has begun in the community. Vaccine administered in December or later, even if influenza activity has already begun, is likely to be beneficial in the majority of influenza seasons. The majority of adults have antibody protection against influenza virus infection within 2 weeks after vaccination (478,479).
All children aged 6 months-8 years who are recommended for 2 doses should receive their first dose as soon after vaccine becomes available as is feasible and should receive the second dose ≥4 weeks later. This practice increases the opportunity for both doses to be administered before or shortly after the onset of influenza activity.
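The two-dose schedule is a fixed-interval calculation. A minimal sketch of the earliest valid second-dose date (function name is ours):

```python
from datetime import date, timedelta

def earliest_second_dose(first_dose: date) -> date:
    """Second dose for children aged 6 months-8 years who need 2 doses:
    at least 4 weeks after the first (simple scheduling sketch)."""
    return first_dose + timedelta(weeks=4)

# A first dose given on September 15 permits the second on or after October 13:
print(earliest_second_dose(date(2010, 9, 15)))  # 2010-10-13
```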
Planners are encouraged to develop the capacity and flexibility to schedule at least one vaccination clinic in December. Guidelines for planning large-scale vaccination clinics, including school-based clinics, are available at http://www.cdc.gov/flu/professionals/vaccination/vax_clinic.htm and http://www.cdc.gov/h1n1flu/vaccination/statelocal/settingupclinics.htm.
During a vaccine shortage or delay, substantial proportions of TIV or LAIV doses might not be released and distributed until November and December or later. When the vaccines are substantially delayed or disease activity has not subsided, providers should consider offering vaccination clinics into January and beyond as long as vaccine supplies are available.
# Strategies for Implementing Vaccination Recommendations
The expansion of the recommendations to all persons aged ≥6 months highlights the importance of making influenza vaccine readily accessible in a variety of settings. Many of the persons at highest risk for complications will likely continue to be vaccinated in health-care settings. However, vaccination in health-care settings must increasingly be complemented by vaccination in nonmedical settings that increase convenience and access. During the 2009-2010 H1N1 Vaccination Program, substantial efforts were made at the state and local level to direct vaccine to locations such as schools, pharmacies, workplaces, and health departments.
# Health-Care Settings
Health-care settings remain a central component of an overall influenza vaccination strategy. Studies consistently show that provider recommendation is the strongest predictor of vaccination (425,480,481). While nonmedical settings play an important role for those motivated to seek vaccination, health-care settings are critical for facilitating vaccination of all those who come into contact with the setting, including those who might not seek out vaccination. Successful vaccination programs combine publicity and education for HCP and other potential vaccine recipients, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (409,482,483). The use of standing orders programs by long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies ensures that vaccination is offered. Standing orders programs for influenza vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by HCP trained to screen patients for contraindications to vaccination, administer vaccine, and monitor and report adverse events. The Centers for Medicare and Medicaid Services (CMS) has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (484). To the extent allowed by local and state law, these facilities and agencies can implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Payment for influenza vaccine under Medicare Part B is available (485,486). Other settings (e.g., outpatient facilities, managed-care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs (487). In addition, physician reminders (e.g., flagging charts) and patient reminders are recognized strategies for increasing rates of influenza vaccination (483).
# Outpatient Facilities Providing Ongoing Care
Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should offer vaccine to all patients during visits throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record or immunization information system. Patients who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.
# Outpatient Facilities Providing Episodic or Acute Care
Acute health-care facilities (e.g., EDs and walk-in clinics) should offer vaccinations throughout the influenza season or provide written information regarding why, where, and how to obtain the vaccine. This written information should be provided in languages and at literacy levels appropriate for the populations served by the facility.
# Acute-Care Hospitals
Hospitals should serve as a key setting for identifying persons at increased risk for influenza complications. Unvaccinated persons without contraindications who are hospitalized at any time during the period when vaccine is available should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Standing orders to offer influenza vaccination to all hospitalized persons should be considered.
# Nursing Homes and Other Long-Term-Care Facilities
Vaccination should be provided routinely to all residents of long-term-care facilities. If possible, all residents should be vaccinated before influenza season. In the majority of seasons, TIV will become available to long-term-care facilities in October or November, and vaccination should commence as soon as vaccine is available. As soon as possible after admission to the facility, the benefits and risks of vaccination should be discussed and education materials provided (488). Informed consent is required, but this does not necessarily mean a signed consent must be present in order to implement a standing order for vaccination (489). Residents admitted after completion of the vaccination program at the facility should be vaccinated at the time of admission.
Lower rates of severe illness among older persons were observed during the 2009 pandemic, but outbreaks among residents of nursing homes and other long-term-care facilities still occurred (490). Although the influenza viruses that will circulate during the 2010-11 season are unknown, multiple influenza types and subtypes that often infect and cause severe infections among older adults (e.g., H3N2) circulate each winter influenza season. The 2010-11 influenza vaccine formulation should be administered to all residents and staff.
Since October 2005, CMS has required nursing homes participating in the Medicare and Medicaid programs to offer all residents influenza and pneumococcal vaccines and to document the results. According to the requirements, each resident is to be vaccinated unless contraindicated medically, the resident or a legal representative refuses vaccination, or the vaccine is not available because of shortage. This information is to be reported as part of the CMS Minimum Data Set, which tracks nursing home health parameters (486,491).
# Vaccination Provided by Visiting Nurses and Others Providing Home Care to Persons at High Risk
Vaccine should be administered in the home if necessary as soon as influenza vaccine is available and throughout the influenza season. Caregivers and other persons in the household (including children) should be referred for vaccination.
# Vaccination for Health-Care Personnel
Health-care facilities should offer influenza vaccinations to all HCP, including night, weekend, and temporary staff. Particular emphasis should be placed on providing vaccinations to workers who provide direct care for persons at high risk for influenza complications. Efforts should be made to educate HCP regarding the benefits of vaccination and the potential health consequences of influenza illness for their patients, themselves, and their family members. All HCP should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (437,455,462).
# Other Settings
Influenza vaccination has increasingly become available in nonmedical settings. In the 2009-2010 vaccination season, 33% of seasonal influenza vaccinations occurred in health departments, pharmacies or drug stores, workplaces, schools, or other nonmedical locations (CDC, unpublished data, 2009). The proportion of 2009 H1N1 vaccine administered in these settings was 45% (CDC, unpublished data, 2010). Availability of vaccine in a range of settings such as pharmacies and the workplace is especially important for persons who do not regularly access the health-care system. In addition, with the recent expansion of the influenza recommendations to include all persons aged ≥6 months, implementation of strategies that are sustainable beyond vaccination in provider offices is necessary. School-located vaccination provides an opportunity to address the challenges associated with large numbers of children to vaccinate, a short window of time for vaccination, and the need for annual revaccination. A number of states and immunization programs have effectively conducted school-located vaccination both for seasonal vaccination (492,493) and 2009 H1N1 vaccination (494). School-located vaccination does, however, present challenges from a resource perspective both for vaccine costs and program costs (493), because reimbursement practices might be different compared with those used in medical settings. In addition, documentation of vaccination must be provided to the vaccinated person's primary care provider and, where appropriate, to state or local vaccine registries.
Nonmedical settings that should be considered to reach the elderly include assisted living housing, retirement communities, and recreation centers. Such facilities should offer unvaccinated residents, attendees, and staff annual on-site vaccination before the start of the influenza season. Continuing to offer vaccination throughout the fall and winter months is appropriate. Efforts to vaccinate newly admitted patients or new employees also should be continued, both to prevent illness and to avoid having these persons serve as a source of new influenza infections. Staff education should emphasize the benefits of protection from influenza through vaccination for staff members themselves and for their patients.
# Future Directions for Research and Recommendations Related to Influenza Vaccine
Although available influenza vaccines are effective and safe, additional research is needed to improve prevention efforts. Most severe morbidity and mortality during typical influenza seasons occurs among persons aged ≥65 years or those who have chronic medical conditions (6,7,24). More immunogenic influenza vaccines are needed for persons at higher risk for influenza-related complications. Additional research also is needed to understand potential biases in estimating the benefits of vaccination among older adults in reducing hospitalizations and deaths (134,241,495). Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and adults, especially those aged <65 years, are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, hospitalization costs and rates, and vaccine effectiveness when evaluating the long-term costs and benefits of annual vaccination (496). Additional data on indirect effects of vaccination also are needed to quantify the benefits of influenza vaccination of HCP in protecting their patients (379) and the impact of a universal vaccination recommendation on influenza epidemiology, particularly the impact on persons at higher risk for influenza complications. In addition, a better understanding is needed of how to motivate persons, particularly those at risk for influenza-related complications and their close contacts, to seek or accept annual influenza vaccination.
The expansion of annual vaccination recommendations to include all persons aged ≥6 months will require a substantial increase in resources for epidemiologic research to develop long-term studies capable of assessing the possible effects on community-level transmission. In Canada, a universal vaccination recommendation implemented in Ontario in 2000 has been compared with typical practice in other Canadian provinces. These studies have been challenging to conduct, but have indicated that a universal recommendation for annual vaccination is associated with overall reductions in influenza-related mortality, hospitalizations, ED use, physicians' office visits, and antibiotic use (388,389,396). However, differences between health-care systems in Canada and the United States limit the ability to generalize the findings in Ontario to the United States, and measures of the impact of a universal recommendation in the United States will likely require many years to evaluate. Additional planning to improve surveillance systems capable of monitoring effectiveness, safety, and vaccine coverage, and further development of implementation strategies will be necessary. Vaccination programs capable of delivering annual influenza vaccination to a broad range of the population could potentially serve as a resilient and sustainable platform for delivering vaccines and monitoring outcomes for other urgently required public health interventions (e.g., vaccines for future influenza pandemics or medical countermeasures to prevent or treat illnesses caused by acts of terrorism).
# Seasonal Influenza Vaccine and Influenza Viruses of Animal Origin
Human infection with novel or nonhuman influenza A virus strains, including influenza A viruses of animal origin, is a nationally notifiable disease in the United States (497). Human infections with nonhuman or novel human influenza A virus should be identified quickly and investigated to determine possible sources of exposure, identify additional cases, and evaluate the possibility of human-to-human transmission because transmission patterns could change over time with variations in these influenza A viruses.
Sporadic severe and fatal human cases of infection with highly pathogenic avian influenza A (H5N1) virus have been identified in Asia, Africa, Europe, and the Middle East, primarily among persons who have had direct or close unprotected contact with sick or dead birds associated with the ongoing H5N1 panzootic among birds (498-506). Severe lower respiratory illness with multiorgan failure has been reported in fatal H5N1 cases, and asymptomatic infection and clinically mild cases also have been reported (507-510). Limited, nonsustained human-to-human transmission of H5N1 virus has likely occurred in some case clusters (508,511). To date, there is no evidence of genetic reassortment between human influenza A and H5N1 viruses. However, influenza viruses derived from strains circulating among poultry (e.g., the H5N1 virus, which has caused outbreaks of avian influenza and occasionally has infected humans) have the potential to recombine with human influenza A viruses (512,513). To date, highly pathogenic H5N1 virus has not been identified in wild or domestic birds or in humans in the United States. Guidance for testing suspected cases of H5N1 virus infection among persons in the United States and follow-up of contacts is available (514,515). Human H5N1 cases have continued to occur in 2009 and 2010, including in the Middle East and Southeast Asia (516).
Human illness from infection with different avian influenza A subtype viruses also has been documented, including infections with low pathogenic and highly pathogenic viruses. A range of clinical illness has been reported for human infection with low pathogenic avian influenza viruses, including conjunctivitis with influenza A (H7N7) virus in the United Kingdom, lower respiratory tract disease and conjunctivitis with influenza A (H7N2) virus in the United Kingdom, and uncomplicated ILI with influenza A (H9N2) virus in Hong Kong and China (517-523). Two human cases of infection with low pathogenic influenza A (H7N2) have been reported in the United States (520). Although human infections with highly pathogenic A (H7N7) virus typically cause ILI or conjunctivitis, severe infections, including one fatal case in the Netherlands, have been reported following exposure to poultry (524-526). Conjunctivitis also has been reported after human infection with highly pathogenic influenza A (H7N3) virus in Canada and low pathogenic A (H7N3) in the United Kingdom (517,525). In contrast, sporadic infections with highly pathogenic avian influenza A (H5N1) virus have caused severe illness in many countries, with an overall case-fatality proportion of approximately 60% (508,526).
Swine influenza A (H1N1), A (H1N2), and A (H3N2) viruses, including reassortant viruses, are endemic among pig populations in the United States (527). Two clusters of influenza A (H2N3) virus infections among pigs have been reported recently (528). Outbreaks among pigs normally occur in colder weather months (late fall and winter) and sometimes with the introduction of new pigs into susceptible herds. An estimated 30% of the pig population in the United States has serologic evidence of having had swine influenza A (H1N1) virus infection. Sporadic human infections with a variety of swine influenza A viruses occur in the United States, but the incidence of these human infections is unknown (529-534). Persons infected with swine influenza A viruses typically report direct contact with ill pigs or places where pigs have been present (e.g., agricultural fairs or farms) and have symptoms that are clinically indistinguishable from infection with other respiratory viruses (531,532,535,536). Swine influenza virus infection has not been associated with household exposure to pork products or consumption of pork. Clinicians should consider swine influenza A virus infection in the differential diagnosis of patients with ILI who have had recent contact with pigs. Sporadic cases among persons whose infections were linked to swine exposure have not resulted in sustained human-to-human transmission of swine influenza A viruses or community outbreaks (9,536). The 2009 pandemic influenza A (H1N1) virus contains some genes previously found in viruses currently circulating among swine, but the origin of the pandemic has not been definitively linked to swine exposures among humans. Although immunity to swine influenza A viruses appears to be low (<2%) in the overall human population, 10%-20% of persons with occupational exposure to pigs (e.g., pig farmers or pig veterinarians) have been documented in certain studies to have antibody evidence of prior swine influenza A (H1N1) virus infection (529,537).
Current seasonal influenza vaccines are not expected to provide protection against human infection with avian influenza A viruses, including influenza A (H5N1) viruses, or to provide protection against influenza A viruses currently circulating exclusively in swine (318,448). However, reducing seasonal influenza risk through influenza vaccination of persons who might be exposed to nonhuman influenza viruses (e.g., H5N1 virus) might reduce the theoretic risk for recombination of influenza A viruses of animal origin and human influenza A viruses by preventing seasonal influenza A virus infection within a human host.
CDC has recommended that persons who are charged with responding to avian influenza outbreaks among poultry receive seasonal influenza vaccination (538,539). As part of preparedness activities, the Occupational Safety and Health Administration (OSHA) has issued an advisory notice regarding poultry worker safety that is intended for implementation in the event of a suspected or confirmed avian influenza outbreak at a poultry facility in the United States. OSHA guidelines recommend that poultry workers in an involved facility receive vaccination against seasonal influenza; OSHA also has recommended that HCP involved in the care of patients with documented or suspected avian influenza should be vaccinated with the most recent seasonal human influenza vaccine to reduce the risk for co-infection with human influenza A viruses (539).
# Recommendations for Using Antiviral Agents
Annual vaccination is the primary strategy for preventing complications of influenza virus infections. Antiviral medications with activity against influenza viruses are useful adjuncts in the prevention of influenza and are effective for treatment when used early in the course of illness. Four influenza antiviral agents are licensed in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Investigational antiviral medications, such as peramivir and intravenous formulations of zanamivir, might be available under investigational new drug protocols (540).
During the 2007-08 influenza season, influenza A (H1N1) viruses with a mutation that confers resistance to oseltamivir became more common in the United States and other countries (541-543). As of June 2010, in the United States, approximately 99% of seasonal influenza A (H1N1) viruses (i.e., H1N1 viruses not associated with the 2009 pandemic) tested have been resistant to oseltamivir. None of the influenza A (H3N2) or influenza B viruses tested were resistant to oseltamivir. However, few seasonal influenza viruses isolated after May 2009 are available for testing. As of June 2010, with few exceptions, 2009 pandemic influenza A (H1N1) virus strains that began circulating in April 2009 remained sensitive to oseltamivir, and all were sensitive to zanamivir (16). Sporadic cases of 2009 pandemic influenza A (H1N1) virus infection with an H275Y mutation in neuraminidase associated with oseltamivir resistance have been reported worldwide, but as of June 2010, no sustained community-wide transmission has been identified (544). Such oseltamivir-resistant virus infections have been identified in severely immunosuppressed patients, persons receiving oseltamivir chemoprophylaxis, and in some persons without oseltamivir exposure, including some influenza illness clusters (544-549). CDC's recommendations for use of influenza antiviral medications should be consulted for guidance on antiviral use (15). New guidance on clinical management of influenza, including use of antivirals, also is available from the Infectious Diseases Society of America and the World Health Organization (550-552). ACIP recommendations for antiviral use will be published separately later in 2010.
# Sources of Information Regarding Influenza and Its Surveillance
Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu. During October-May, surveillance information is updated weekly. In addition, periodic updates regarding influenza are published in MMWR (http://www.cdc.gov/mmwr). Additional information regarding influenza vaccine can be obtained by calling 1-800-CDC-INFO (1-800-232-4636). State and local health departments should be consulted about availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control.
# Vaccine Adverse Event Reporting System (VAERS)
Clinically significant adverse events that follow vaccination should be reported to the Vaccine Adverse Event Reporting System (VAERS) at http://vaers.hhs.gov. Reports can be filed securely online, by mail, or by fax. A VAERS form can be downloaded from the VAERS website or requested by sending an e-mail message to [email protected], by calling 1-800-822-7967, or by sending a faxed request to 1-877-721-0366. Additional information on VAERS or vaccine safety is available at http://vaers.hhs.gov or by calling 1-800-822-7967.
# Reporting of Adverse Events That Occur After Administering Antiviral Medications (MedWatch)
Health-care professionals should report all serious adverse events (SAEs) after antiviral medication use promptly to MedWatch, FDA's adverse event reporting program for medications. SAEs are defined as medical events that involve hospitalization, death, life-threatening illness, disability, or certain other medically important conditions. SAEs that follow administration of medications should be reported at http://www.fda.gov/medwatch/report/hcp.htm.
# National Vaccine Injury Compensation Program
The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP. The Vaccine Injury Table lists the vaccines covered by VICP and the injuries and conditions (including death) for which compensation might be paid. If the injury or condition is not on the Table, or does not occur within the specified time period on the Table, persons must prove that the vaccine caused the injury or condition.
For a person to be eligible for compensation, the general filing deadlines for injuries require claims to be filed within 3 years after the first symptom of the vaccine injury; for a death, claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP or when a new injury/condition is added to the Table, claims that do not meet the general filing deadlines must be filed within 2 years from the date the vaccine or injury/condition is added to the Table for injuries or deaths that occurred up to 8 years before the Table change. Persons of all ages who receive a VICP-covered vaccine might be eligible to file a claim. Both the intranasal (LAIV) and injectable (TIV) trivalent influenza vaccines are covered under VICP. Additional information about VICP is available at http://www.hrsa.gov/vaccinecompensation or by calling 1-800-338-2382.
# Additional Information Regarding Influenza Virus Infection Control Among Specific Populations
Each year, ACIP provides updated information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, HCP, hospital patients, pregnant women, children, and travelers) also are available in the following publications:
- CDC. General recommendations on immunization: recommendations of the Advisory Committee on Immunization Practices (ACIP) and the American Academy of Family Physicians (AAFP).
"id": "2973043c9cfcc3af59d20ee0721567efb96c0461",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Summary
These recommendations for human immunodeficiency virus (HIV) testing are intended for all health-care providers in the public and private sectors, including those working in hospital emergency departments, urgent care clinics, inpatient services, substance abuse treatment clinics, public health clinics, community clinics, correctional health-care facilities, and primary care settings. The recommendations address HIV testing in health-care settings only. They do not modify existing guidelines concerning HIV counseling, testing, and referral for persons at high risk for HIV who seek or receive HIV testing in nonclinical settings (e.g., community-based organizations, outreach settings, or mobile vans). The objectives of these recommendations are to increase HIV screening of patients, including pregnant women, in health-care settings; foster earlier detection of HIV infection; identify and counsel persons with unrecognized HIV infection and link them to clinical and prevention services; and further reduce perinatal transmission of HIV in the United States. These revised recommendations update previous recommendations for HIV testing in health-care settings and for screening of pregnant women (CDC. Recommendations for HIV testing services for inpatients and outpatients in acute-care hospital settings. MMWR 1993;42). Major revisions from previously published guidelines are as follows:
For patients in all health-care settings:
- HIV screening is recommended for patients in all health-care settings after the patient is notified that testing will be performed unless the patient declines (opt-out screening).
- Persons at high risk for HIV infection should be screened for HIV at least annually.
- Separate written consent for HIV testing should not be required; general consent for medical care should be considered sufficient to encompass consent for HIV testing.
- Prevention counseling should not be required with HIV diagnostic testing or as part of HIV screening programs in health-care settings.

For pregnant women:
- HIV screening should be included in the routine panel of prenatal screening tests for all pregnant women.
- HIV screening is recommended after the patient is notified that testing will be performed unless the patient declines (opt-out screening).
- Separate written consent for HIV testing should not be required; general consent for medical care should be considered sufficient to encompass consent for HIV testing.
- Repeat screening in the third trimester is recommended in certain jurisdictions with elevated rates of HIV infection among pregnant women.
# Introduction
Human immunodeficiency virus (HIV) infection and acquired immunodeficiency syndrome (AIDS) remain leading causes of illness and death in the United States. As of December 2004, an estimated 944,306 persons had received a diagnosis of AIDS, and of these, 529,113 (56%) had died (1). The annual number of AIDS cases and deaths declined substantially after 1994 but stabilized during 1999-2004 (1). However, since 1994, the annual number of cases among blacks, members of other racial/ethnic minority populations, and persons exposed through heterosexual contact has increased. The number of children reported with AIDS attributed to perinatal HIV transmission peaked at 945 in 1992 and declined 95% to 48 in 2004 (1), primarily because of the identification of HIV-infected pregnant women and the effectiveness of antiretroviral prophylaxis in reducing mother-to-child transmission of HIV (2).
By 2002, an estimated 38%-44% of all adults in the United States had been tested for HIV; 16-22 million persons aged 18-64 years are tested annually for HIV (3). However, at the end of 2003, of the approximately 1.0-1.2 million persons estimated to be living with HIV in the United States, an estimated one quarter (252,000-312,000 persons) were unaware of their infection and therefore unable to benefit from clinical care to reduce morbidity and mortality (4). A number of these persons are likely to have transmitted HIV unknowingly (5).
Treatment has improved survival rates dramatically, especially since the introduction of highly active antiretroviral therapy (HAART) in 1995 (6). However, progress in effecting earlier diagnosis has been insufficient. During 1990-1992, the proportion of persons who first tested positive for HIV <1 year before receiving a diagnosis of AIDS was 51% (7); during 1993-2004, this proportion declined only modestly, to 39% in 2004 (1). Persons tested late in the course of their infection were more likely to be black or Hispanic and to have been exposed through heterosexual contact; 87% received their first positive HIV test result at an acute or referral medical care setting, and 65% were tested for HIV antibody because of illness (8).
These recommendations update previous recommendations for HIV testing in health-care settings (9,10) and for screening of pregnant women (11). The objectives of these recommendations are to increase HIV screening of patients, including pregnant women, in health-care settings; foster earlier detection of HIV infection; identify and counsel persons with unrecognized HIV infection and link them to clinical and prevention services; and further reduce perinatal transmission of HIV in the United States.
# Background

# Definitions
Diagnostic testing. Performing an HIV test for persons with clinical signs or symptoms consistent with HIV infection.
Screening. Performing an HIV test for all persons in a defined population (12).
Targeted testing. Performing an HIV test for subpopulations of persons at higher risk, typically defined on the basis of behavioral, clinical, or demographic characteristics (9).

Informed consent. A process of communication between patient and provider through which an informed patient can choose whether to undergo HIV testing or decline to do so. Elements of informed consent typically include providing oral or written information regarding HIV, the risks and benefits of testing, the implications of HIV test results, how test results will be communicated, and the opportunity to ask questions.
Opt-out screening. Performing HIV screening after notifying the patient that 1) the test will be performed and 2) the patient may elect to decline or defer testing. Assent is inferred unless the patient declines testing.
HIV-prevention counseling. An interactive process of assessing risk, recognizing specific behaviors that increase the risk for acquiring or transmitting HIV, and developing a plan to take specific steps to reduce risks (13).
# Evolution of HIV Testing Recommendations in Health-Care Settings and for Pregnant Women
In 1985, when HIV testing first became available, the main goal of such testing was to protect the blood supply. Alternative test sites were established to deter persons from using blood bank testing to learn their HIV status. At that time, professional opinion was divided regarding the value of HIV testing and whether HIV testing should be encouraged because no consensus existed regarding whether a positive test predicted transmission to sex partners or from mother to infant (14). No effective treatment existed, and counseling was designed in part to ensure that persons tested were aware that the meaning of positive test results was uncertain.
During the next 2 years, the implications of positive HIV serology became evident, and in 1987, the United States Public Health Service (USPHS) issued guidelines making HIV counseling and testing a priority as a prevention strategy for persons most likely to be infected or who practiced high-risk behaviors, and recommended routine testing of all persons seeking treatment for STDs, regardless of health-care setting (15). "Routine" was defined as a policy to provide these services to all clients after informing them that testing would be conducted (15).
In 1993, CDC recommendations for voluntary HIV counseling and testing were extended to include hospitalized patients and persons obtaining health care as outpatients in acute-care hospital settings, including emergency departments (EDs) (10). Hospitals with HIV seroprevalence rates of >1% or AIDS diagnosis rates of >1 per 1,000 discharges were encouraged to adopt a policy of offering voluntary HIV counseling and testing routinely to all patients aged 15-54 years. Health-care providers in acute-care settings were encouraged to structure counseling and testing procedures to facilitate confidential, voluntary participation and to include basic information regarding the medical implications of the test, the option to receive more information, and documentation of informed consent (10). In 1994, guidelines for counseling and testing persons with high-risk behaviors specified prevention counseling to develop specific prevention goals and strategies for each person (client-centered counseling) (16). In 1995, after perinatal transmission of HIV was demonstrated to be substantially reduced by administration of zidovudine to HIV-infected pregnant women and their newborns, USPHS recommended that all pregnant women be counseled and encouraged to undergo voluntary testing for HIV (17,18).
In 2001, CDC modified the recommendations for pregnant women to emphasize HIV screening as a routine part of prenatal care, simplification of the testing process so pretest counseling would not pose a barrier, and flexibility of the consent process to allow multiple types of informed consent (11). In addition, the 2001 recommendations for HIV testing in health-care settings were extended to include multiple additional clinical venues in both private and public health-care sectors, encouraging providers to make HIV counseling and testing more accessible and acknowledging their need for flexibility (9). CDC recommended that HIV testing be offered routinely to all patients in high HIV-prevalence health-care settings. In low prevalence settings, in which the majority of clients are at minimal risk, targeted HIV testing on the basis of risk screening was considered more feasible for identifying limited numbers of HIV-infected persons (9).
In 2003, CDC introduced the initiative Advancing HIV Prevention: New Strategies for a Changing Epidemic (19). Two key strategies of this initiative are 1) to make HIV testing a routine part of medical care on the same voluntary basis as other diagnostic and screening tests and 2) to reduce perinatal transmission of HIV further by universal testing of all pregnant women and by using rapid tests during labor and delivery or postpartum if the mother was not screened prenatally (19). In its technical guidance, CDC acknowledged that prevention counseling is desirable for all persons at risk for HIV but recognized that such counseling might not be appropriate or feasible in all settings (20). Because time constraints or discomfort with discussing their patients' risk behaviors caused some providers to perceive requirements for prevention counseling and written informed consent as a barrier (12,21-23), the initiative advocated streamlined approaches.
In March 2004, CDC convened a meeting of health-care providers, representatives from professional associations, and local health officials to obtain advice concerning how best to expand HIV testing, especially in high-volume, high-prevalence acute-care settings. Consultants recommended simplifying the HIV screening process to make it more feasible and less costly and advocated more frequent diagnostic testing of patients with symptoms. In April 2005, CDC initiated a comprehensive review of the literature regarding HIV testing in health-care settings and, on the basis of published evidence and lessons learned from CDC-sponsored demonstration projects of HIV screening in health-care facilities, began to prepare recommendations to implement these strategies. In August 2005, CDC invited health-care providers, representatives from public health agencies and community organizations, and persons living with HIV to review an outline of proposed recommendations. In November 2005, CDC convened a meeting of researchers, representatives of professional health-care provider organizations, clinicians, persons living with HIV, and representatives from community organizations and agencies overseeing care of HIV-infected persons to review CDC's proposed recommendations. Before final revision of these recommendations, CDC described the proposals at national meetings of researchers and health-care providers and, in March 2006, solicited peer review by health-care professionals, in compliance with requirements of the Office of Management and Budget for influential scientific assessments, and invited comment from multiple professional and community organizations. The final recommendations were further refined on the basis of comments from these constituents.
# Rationale for Routine Screening for HIV Infection
Previous CDC and U.S. Preventive Services Task Force guidelines for HIV testing recommended routine counseling and testing for persons at high risk for HIV and for those in acute-care settings in which HIV prevalence was >1% (9,10,24). These guidelines proved difficult to implement because 1) the cost of HIV screening often is not reimbursed, 2) providers in busy health-care settings often lack the time necessary to conduct risk assessments and might perceive counseling requirements as a barrier to testing, and 3) explicit information regarding HIV prevalence typically is not available to guide selection of specific settings for screening (25-29).
These revised CDC recommendations advocate routine voluntary HIV screening as a normal part of medical practice, similar to screening for other treatable conditions. Screening is a basic public health tool used to identify unrecognized health conditions so treatment can be offered before symptoms develop and, for communicable diseases, so interventions can be implemented to reduce the likelihood of continued transmission (30).
HIV infection is consistent with all generally accepted criteria that justify screening: 1) HIV infection is a serious health disorder that can be diagnosed before symptoms develop; 2) HIV can be detected by reliable, inexpensive, and noninvasive screening tests; 3) infected patients have years of life to gain if treatment is initiated early, before symptoms develop; and 4) the costs of screening are reasonable in relation to the anticipated benefits (30). Among pregnant women, screening has proven substantially more effective than risk-based testing for detecting unsuspected maternal HIV infection and preventing perinatal transmission (31-33).
# Rationale for New Recommendations
Often, persons with HIV infection visit health-care settings (e.g., hospitals, acute-care clinics, and sexually transmitted disease clinics) years before receiving a diagnosis but are not tested for HIV (34-36). Since the 1980s, the demographics of the HIV/AIDS epidemic in the United States have changed; increasing proportions of infected persons are aged <20 years, women, members of racial or ethnic minority populations, persons who reside outside metropolitan areas, and heterosexual men and women who frequently are unaware that they are at risk for HIV (37). As a result, the effectiveness of using risk-based testing to identify HIV-infected persons has diminished (34,35,38,39).
Prevention strategies that incorporate universal HIV screening have been highly effective. For example, screening blood donors for HIV has nearly eliminated transfusion-associated HIV infection in the United States (40). In addition, incidence of pediatric HIV/AIDS in the United States has declined substantially since the 1990s, when prevention strategies began to include specific recommendations for routine HIV testing of pregnant women (18,41). Perinatal transmission rates can be reduced to <2% with universal screening of pregnant women in combination with prophylactic administration of antiretroviral drugs (42,43), scheduled cesarean delivery when indicated (44,45), and avoidance of breast feeding (46).
These successes contrast with a relative lack of progress in preventing sexual transmission of HIV, for which screening rarely is performed. Declines in HIV incidence observed in the early 1990s have leveled and might even have reversed in certain populations in recent years (47,48). Since 1998, the estimated number of new infections has remained stable at approximately 40,000 annually (49). In 2001, the Institute of Medicine (IOM) emphasized prevention services for HIV-infected persons and recommended policies for diagnosing HIV infections earlier to increase the number of HIV-infected persons who were aware of their infections and who were offered clinical and prevention services (37). The majority of persons who are aware of their HIV infections substantially reduce sexual behaviors that might transmit HIV after they become aware they are infected (5). In a meta-analysis of findings from eight studies, the prevalence of unprotected anal or vaginal intercourse with uninfected partners was on average 68% lower for HIV-infected persons who were aware of their status than it was for HIV-infected persons who were unaware of their status (5). To increase diagnosis of HIV infection, destigmatize the testing process, link clinical care with prevention, and ensure immediate access to clinical care for persons with newly identified HIV infection, IOM and other health-care professionals with expertise (25,37,50,51) have encouraged adoption of routine HIV testing in all health-care settings.
Routine prenatal HIV testing with streamlined counseling and consent procedures has increased the number of pregnant women tested substantially (52). By contrast, the number of persons at risk for HIV infection who are screened in acute-care settings remains low, despite repeated recommendations in support of routine risk-based testing in health-care settings (9,10,15,34,53,54). In a survey of 154 health-care providers in 10 hospital EDs, providers reported caring for an average of 13 patients per week suspected to have STDs, but only 10% of these providers encouraged such patients to be tested for HIV while they were in the ED (54). Another 35% referred patients to confidential HIV testing sites in the community; however, such referrals have proven ineffective because of poor compliance by patients (55). Reasons cited for not offering HIV testing in the ED included lack of established mechanisms to ensure follow-up (51%), lack of the certification perceived as necessary to provide counseling (45%), and belief that the testing process was too time-consuming (19%) (54).
With the institution of HIV screening in certain hospitals and EDs, the percentage of patients who test positive (2%-7%) often has exceeded that observed nationally at publicly funded HIV counseling and testing sites (1.5%) and STD clinics (2%) serving persons at high risk for HIV (53,56-59). Because patients rarely were seeking testing when screening was offered at these hospitals, HIV infections often were identified earlier than they might otherwise have been (29). Targeted testing programs also have been implemented in acute-care settings; nearly two thirds of patients in these settings accept testing, but because risk assessment and prevention counseling are time-consuming, only a limited proportion of eligible patients can be tested (29). Targeted testing on the basis of risk behaviors fails to identify a substantial number of persons who are HIV infected (34,35,39). A substantial number of persons, including persons with HIV infection, do not perceive themselves to be at risk for HIV or do not disclose their risks (53,56,59). Routine HIV testing reduces the stigma associated with testing that requires assessment of risk behaviors (60-63). More patients accept recommended HIV testing when it is offered routinely to everyone, without a risk assessment (54,56).
In 1999, to increase the proportion of women tested for HIV, IOM recommended 1) adopting a national policy of universal HIV testing of pregnant women with patient notification (opt-out screening) as a routine component of prenatal care, 2) eliminating requirements for extensive pretest counseling while requiring provision of basic information regarding HIV, and 3) not requiring explicit written consent to be tested for HIV (12). Subsequent studies have indicated that these policies, as proposed by IOM and other professional organizations (12,64,65), reflect an ethical balance among public health goals, justice, and individual rights (66,67). Rates of HIV screening are consistently higher at settings that provide prenatal and STD services using opt-out screening than at opt-in programs, which require pretest counseling and explicit written consent (52,68-74). Pregnant women express less anxiety with opt-out HIV screening and do not find it difficult to decline a test (68,74). In 2006, approximately 65% of U.S. adults surveyed concurred that HIV testing should be treated the same as screening for any other disease, without special procedures such as written permission from the patient (75).
Adolescents aged 13-19 years represent new cohorts of persons at risk, and prevention efforts need to be repeated for each succeeding generation of young persons (63). The 2005 Youth Risk Behavior Survey indicated that 47% of high school students reported that they had had sexual intercourse at least once, and 37% of sexually active students had not used a condom during their most recent act of sexual intercourse (76). More than half of all HIV-infected adolescents are estimated not to have been tested and are unaware of their infection (77,78). Among young (aged 18-24 years) men who have sex with men (MSM) surveyed during 2004-2005 in five U.S. cities, 14% were infected with HIV; 79% of these HIV-infected MSM were unaware of their infection (56). The American Academy of Pediatrics recommends that clinicians obtain information from adolescent patients regarding their sexual activity and inform them how to prevent HIV infection (79). Evidence indicates that adolescents prefer to receive this information from their health-care providers rather than from their parents, teachers, or friends (80). However, fewer than half of clinicians provide such guidance (81). Health-care providers' recommendations also influence adolescents' decision to be tested. Among reasons for HIV testing provided by 528 adolescents who had primary care providers, 58% cited their provider's recommendation as their reason for testing (82).
The U.S. Preventive Services Task Force recently recommended that clinicians screen for HIV all adults and adolescents at increased risk for HIV, on the basis that when HIV is diagnosed early, appropriately timed interventions, particularly HAART, can lead to improved health outcomes, including slower clinical progression and reduced mortality (24). The Task Force also recommended screening all pregnant women, regardless of risk, but made no recommendation for or against routinely screening asymptomatic adults and adolescents with no identifiable risk factors for HIV. The Task Force concluded that such screening would detect additional patients with HIV, but the overall number would be limited, and the potential benefits did not clearly outweigh the burden on primary care practices or the potential harms of a general HIV screening program (24,83). In making these recommendations, the Task Force considered how many patients would need to be screened to prevent one clinical progression or death during the 3-year period after screening. On the basis of evidence available for its review, the Task Force was unable to calculate benefits attributable to the prevention of secondary HIV transmission to partners (84). However, a recent meta-analysis indicated that HIV-infected persons reduced high-risk behavior substantially when they became aware of their infection (5). Because viral load is the chief biologic predictor of HIV transmission (85), reduction in viral load through timely initiation of HAART might reduce transmission, even for HIV-infected patients who do not change their risk behavior (86). Estimated transmission is 3.5 times higher among persons who are unaware of their infection than among persons who are aware of their infection and contributes disproportionately to the number of new HIV infections each year in the United States (87). In theory, new sexual HIV infections could be reduced >30% per year if all infected persons could learn their HIV status and adopt changes in behavior similar to those adopted by persons already aware of their infection (87).
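To make the arithmetic behind the >30% estimate concrete, the short Python sketch below reproduces it under simplifying assumptions that are ours, not those of the cited model (87): a uniform 3.5-fold relative transmission rate among unaware persons and roughly one quarter of infected persons unaware. The published estimate rests on a fuller epidemiologic model, so this should be read as an order-of-magnitude check only.

```python
# Back-of-envelope illustration (not the published model behind reference 87):
# if transmission is ~3.5x higher per unaware infected person, estimate how much
# new transmission could fall if every unaware person learned their status.

def transmission_reduction(frac_unaware: float, relative_rate: float = 3.5) -> float:
    """Fractional drop in new infections if all unaware persons became aware
    and thereafter transmitted at the (lower) 'aware' baseline rate of 1.0."""
    mean_rate_before = frac_unaware * relative_rate + (1.0 - frac_unaware) * 1.0
    mean_rate_after = 1.0  # everyone transmits at the 'aware' baseline rate
    return 1.0 - mean_rate_after / mean_rate_before

# Roughly one quarter of infected persons were estimated to be unaware (see above).
print(f"{transmission_reduction(0.25):.0%}")  # -> 38%, the same order as the >30% cited
```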
Recent studies demonstrate that voluntary HIV screening is cost-effective even in health-care settings in which HIV prevalence is low (26,27,86). In populations for which prevalence of undiagnosed HIV infection is >0.1%, HIV screening is as cost-effective as other established screening programs for chronic diseases (e.g., hypertension, colon cancer, and breast cancer) (27,86). Because of the substantial survival advantage resulting from earlier diagnosis of HIV infection when therapy can be initiated before severe immunologic compromise occurs, screening reaches conventional benchmarks for cost-effectiveness even before including the important public health benefit from reduced transmission to sex partners (86).
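As a rough illustration of what the 0.1% prevalence threshold implies operationally, the hedged sketch below computes the average number of patients screened per new diagnosis. It deliberately ignores test characteristics, acceptance rates, and program costs, all of which the cited analyses (26,27,86) model explicitly.

```python
# Illustrative arithmetic only: at a given prevalence of undiagnosed infection,
# how many patients must be screened (on average) to find one new diagnosis?
# Assumes near-perfect test sensitivity and universal acceptance for simplicity.

def number_needed_to_screen(prevalence: float) -> float:
    return 1.0 / prevalence

for p in (0.001, 0.005, 0.02):  # 0.1%, 0.5%, 2% undiagnosed prevalence
    print(f"prevalence {p:.1%}: ~{number_needed_to_screen(p):,.0f} screened per new diagnosis")
```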
Linking patients who have received a diagnosis of HIV infection to prevention and care is essential. HIV screening without such linkage confers little or no benefit to the patient. Although moving patients into care incurs substantial costs, it also triggers sufficient survival benefits that justify the additional costs. Even if only a limited fraction of patients who receive HIV-positive results are linked to care, the survival benefits per dollar spent on screening represent good comparative value (26,27,88).
The benefit of providing prevention counseling in conjunction with HIV testing is less clear. HIV counseling with testing has been demonstrated to be an effective intervention for HIV-infected participants, who increased their safer behaviors and decreased their risk behaviors; HIV counseling and testing as implemented in the studies had little effect on HIV-negative participants (89). However, randomized controlled trials have demonstrated that the nature and duration of prevention counseling might influence its effectiveness (90,91). Carefully controlled, theory-based prevention counseling in STD clinics has helped HIV-negative participants reduce their risk behaviors compared with participants who received only a didactic prevention message from health-care providers (90).
A more intensive intervention among HIV-negative MSM at high risk, consisting of 10 theory-based individual counseling sessions followed by maintenance sessions every 3 months, resulted in reductions in unprotected sex with partners who were HIV infected or of unknown status, compared with MSM who received structured prevention counseling only twice yearly (91).
Timely access to diagnostic HIV test results also improves health outcomes. Diagnostic testing in health-care settings continues to be the mechanism by which nearly half of new HIV infections are identified. During 2000-2003, of persons reported with HIV/AIDS who were interviewed in 16 states, 44% were tested for HIV because of illness (8). Compared with HIV testing after patients were admitted to the hospital, expedited diagnosis by rapid HIV testing in the ED before admission led to shorter hospital stays, increased the number of patients aware of their HIV status before discharge, and improved entry into outpatient care (92). However, at least 28 states have laws or regulations that limit health-care providers' ability to order diagnostic testing for HIV infection if the patient is unable to give consent for HIV testing, even when the test results are likely to alter the patient's diagnostic or therapeutic management (93).
Of the 40,000 persons who acquire HIV infection each year, an estimated 40%-90% will experience symptoms of acute HIV infection (94-96), and a substantial number will seek medical care. However, acute HIV infection often is not recognized by primary care clinicians because the symptoms resemble those of influenza, infectious mononucleosis, and other viral illnesses (97). Acute HIV infection can be diagnosed by detecting HIV RNA in plasma from persons with a negative or indeterminate HIV antibody test. One study based on national ambulatory medical care surveys estimated that the prevalence of acute HIV infection was 0.5%-0.7% among ambulatory patients who sought care for fever or rash (98). Although the long-term benefit of HAART during acute HIV infection has not been established conclusively (99), identifying primary HIV infection can reduce the spread of HIV that might otherwise occur during the acute phase of HIV disease (100,101).
Perinatal HIV transmission continues to occur, primarily among women who lack prenatal care or who were not offered voluntary HIV counseling and testing during pregnancy. A substantial proportion of the estimated 144-236 perinatal HIV infections in the United States each year can be attributed to the lack of timely HIV testing and treatment of pregnant women (102). Multiple barriers to HIV testing have been identified, including language barriers; late entry into prenatal care; health-care providers' perceptions that their patients are at low risk for HIV; lack of time for counseling and testing, particularly for rapid testing during labor and delivery; and state regulations requiring counseling and separate informed consent (103). A survey of 653 obstetrical providers in North Carolina suggested that not all health-care providers embrace universal testing of pregnant women; the strength with which providers recommended prenatal testing to their patients and the numbers of women tested depended largely on the providers' perception of the patients' risk behaviors (21). Data confirm that testing rates are higher when HIV tests are included in the standard panel of screening tests for all pregnant women (52,69,104). Women also are much more likely to be tested if they perceive that their health-care provider strongly recommends HIV testing (105). As universal prenatal screening has become more widespread, an increasing proportion of pregnant women who had undiagnosed HIV infection at the time of delivery were found to have seroconverted during pregnancy (106). A second HIV test during the third trimester for women in settings with elevated HIV incidence (>17 cases per 100,000 person-years) is cost-effective and might result in substantial reductions in mother-to-child HIV transmission (107).
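The incidence threshold above can be made concrete with simple expected-value arithmetic. The sketch below is illustrative only: the assumption of roughly 0.5 person-years of remaining exposure time after a negative early-pregnancy test is ours, and the cited threshold (107) derives from a full cost-effectiveness model rather than this calculation.

```python
# Rough illustration of why incidence matters for repeat testing: at a given
# HIV incidence among pregnant women, roughly how many seroconvert after a
# negative early-pregnancy test?

def expected_seroconversions(incidence_per_100k_py: float,
                             pregnancies: int = 100_000,
                             exposure_years: float = 0.5) -> float:
    rate = incidence_per_100k_py / 100_000  # infections per person-year
    return rate * exposure_years * pregnancies

print(expected_seroconversions(17.0))  # ~8.5 seroconversions per 100,000 pregnancies
```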
Every perinatal HIV transmission is a sentinel health event, signaling either a missed opportunity for prevention or, more rarely, a failure of interventions to prevent perinatal transmission. When these infections occur, they underscore the need for improved strategies to ensure that all pregnant women undergo HIV testing and, if found to be HIV positive, receive proper interventions to reduce their transmission risk and safeguard their health and the health of their infants.
# Recommendations for Adults and Adolescents
CDC recommends that diagnostic HIV testing and opt-out HIV screening be a part of routine clinical care in all health-care settings while also preserving the patient's option to decline HIV testing and ensuring a provider-patient relationship conducive to optimal clinical and preventive care. The recommendations are intended for providers in all health-care settings, including hospital EDs, urgent-care clinics, inpatient services, STD clinics or other venues offering clinical STD services, tuberculosis (TB) clinics, substance abuse treatment clinics, other public health clinics, community clinics, correctional health-care facilities, and primary care settings. The guidelines address HIV testing in health-care settings only; they do not modify existing guidelines concerning HIV counseling, testing, and referral for persons at high risk for HIV who seek or receive HIV testing in nonclinical settings (e.g., community-based organizations, outreach settings, or mobile vans) (9).
# Screening for HIV Infection
# Consent and Pretest Information
- Screening should be voluntary and undertaken only with the patient's knowledge and understanding that HIV testing is planned.
- Patients should be informed orally or in writing that HIV testing will be performed unless they decline (opt-out screening). Oral or written information should include an explanation of HIV infection and the meanings of positive and negative test results.
# Similarities and Differences Between Current and Previous Recommendations for Adults and Adolescents
Aspects of these recommendations that remain unchanged from previous recommendations are as follows:
- HIV testing must be voluntary and free from coercion. Patients must not be tested without their knowledge.
- HIV testing is recommended and should be routine for persons attending STD clinics and those seeking treatment for STDs in other clinical settings.
- Access to clinical care, prevention counseling, and support services is essential for persons with positive HIV test results.

Aspects of these recommendations that differ from previous recommendations are as follows:
- Screening after notifying the patient that an HIV test will be performed unless the patient declines (opt-out screening) is recommended in all health-care settings.
# Recommendations for Pregnant Women
These guidelines reiterate the recommendation for universal HIV screening early in pregnancy but advise simplifying the screening process to maximize opportunities for women to learn their HIV status during pregnancy, preserving the woman's option to decline HIV testing, and ensuring a provider-patient relationship conducive to optimal clinical and preventive care. All women should receive HIV screening consistent with the recommendations for adults and adolescents. HIV screening should be a routine component of preconception care, maximizing opportunities for all women to know their HIV status before conception (109). In addition, screening early in pregnancy enables HIV-infected women and their infants to benefit from appropriate and timely interventions (e.g., antiretroviral medications, scheduled cesarean delivery, and avoidance of breastfeeding). These recommendations are intended for clinicians who provide care to pregnant women and newborns and for health policy makers who have responsibility for these populations.
# HIV Screening for Pregnant Women and Their Infants
# Universal Opt-Out Screening
- All pregnant women in the United States should be screened for HIV infection.
- Screening should occur after a woman is notified that HIV screening is recommended for all pregnant patients and that she will receive an HIV test as part of the routine panel of prenatal tests unless she declines (opt-out screening).
- HIV testing must be voluntary and free from coercion. No woman should be tested without her knowledge.
- Pregnant women should receive oral or written information that includes an explanation of HIV infection, a description of interventions that can reduce HIV transmission from mother to infant, and the meanings of positive and negative test results and should be offered an opportunity to ask questions and to decline testing.
- No additional process or written documentation of informed consent beyond what is required for other routine prenatal tests should be required for HIV testing.
- If a patient declines an HIV test, this decision should be documented in the medical record.
# Addressing Reasons for Declining Testing
- Providers should discuss and address reasons for declining an HIV test (e.g., lack of perceived risk; fear of the disease; and concerns regarding partner violence or potential stigma or discrimination).
- Women who decline an HIV test because they have had a previous negative test result should be informed of the importance of retesting during each pregnancy.
- Logistical reasons for not testing (e.g., scheduling) should be resolved.
- Certain women who initially decline an HIV test might accept at a later date, especially if their concerns are discussed. Certain women will continue to decline testing, and their decisions should be respected and documented in the medical record.
# Timing of HIV Testing
- To promote informed and timely therapeutic decisions, health-care providers should test women for HIV as early as possible during each pregnancy. Women who decline the test early in prenatal care should be encouraged to be tested at a subsequent visit.
- A second HIV test during the third trimester, preferably <36 weeks of gestation, is cost-effective even in areas of low HIV prevalence and may be considered for all pregnant women. A second HIV test during the third trimester is recommended for women who meet one or more of the following criteria (encoded as a simple decision rule in the sketch after this list):
  - Women who receive health care in jurisdictions with elevated incidence of HIV or AIDS among women aged 15-45 years. In 2004, these jurisdictions included Alabama, Connecticut, Delaware, the District of Columbia, Florida, Georgia, Illinois, Louisiana, Maryland, Massachusetts, Mississippi, Nevada, New Jersey, New York, North Carolina, Pennsylvania, Puerto Rico, Rhode Island, South Carolina, Tennessee, Texas, and Virginia.†
  - Women who receive health care in facilities in which prenatal screening identifies at least one HIV-infected pregnant woman per 1,000 women screened.
  - Women who are known to be at high risk for acquiring HIV (e.g., injection-drug users and their sex partners, women who exchange sex for money or drugs, women who are sex partners of HIV-infected persons, and women who have had a new or more than one sex partner during this pregnancy).
  - Women who have signs or symptoms consistent with acute HIV infection. When acute retroviral syndrome is a possibility, a plasma RNA test should be used in conjunction with an HIV antibody test to diagnose acute HIV infection (96).
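The following minimal Python sketch restates the four criteria above as a decision rule. The type and field names are hypothetical, introduced here only for illustration; in practice, jurisdiction and facility thresholds come from local surveillance data.

```python
# A minimal sketch encoding the repeat-testing criteria above as a predicate.
# Field names are hypothetical (not from any CDC system).

from dataclasses import dataclass

@dataclass
class PrenatalPatient:
    in_elevated_incidence_jurisdiction: bool    # e.g., on the 2004 list above
    facility_prenatal_hiv_rate_per_1000: float  # HIV-infected women per 1,000 screened
    known_high_risk: bool                       # e.g., injection-drug use, HIV-infected partner
    signs_of_acute_hiv: bool

def recommend_third_trimester_retest(p: PrenatalPatient) -> bool:
    return (p.in_elevated_incidence_jurisdiction
            or p.facility_prenatal_hiv_rate_per_1000 >= 1.0
            or p.known_high_risk
            or p.signs_of_acute_hiv)
```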
# Rapid Testing During Labor
- Any woman with undocumented HIV status at the time of labor should be screened with a rapid HIV test unless she declines (opt-out screening).
- Reasons for declining a rapid test should be explored (see Addressing Reasons for Declining Testing).
- Immediate initiation of appropriate antiretroviral prophylaxis (42) should be recommended to women on the basis of a reactive rapid test result without waiting for the result of a confirmatory test.
# Postpartum/Newborn Testing
- When a woman's HIV status is still unknown at the time of delivery, she should be screened immediately postpartum with a rapid HIV test unless she declines (opt-out screening).
- When the mother's HIV status is unknown postpartum, rapid testing of the newborn as soon as possible after birth is recommended so antiretroviral prophylaxis can be offered to HIV-exposed infants. Women should be informed that identifying HIV antibodies in the newborn indicates that the mother is infected.
- For infants whose HIV exposure status is unknown and who are in foster care, the person legally authorized to provide consent should be informed that rapid HIV testing is recommended for infants whose biologic mothers have not been tested.
- The benefits of neonatal antiretroviral prophylaxis are best realized when it is initiated <12 hours after birth (110).
# Confirmatory Testing
- Whenever possible, uncertainties regarding laboratory test results indicating HIV infection status should be resolved before final decisions are made regarding reproductive options, antiretroviral therapy, cesarean delivery, or other interventions.
- If the confirmatory test result is not available before delivery, immediate initiation of appropriate antiretroviral prophylaxis (42) should be recommended to any pregnant patient whose HIV screening test result is reactive to reduce the risk for perinatal transmission.
# Similarities and Differences Between Current and Previous Recommendations for Pregnant Women and Their Infants
Aspects of these recommendations that remain unchanged from previous recommendations are as follows:
- Universal HIV testing with notification should be performed for all pregnant women as early as possible during pregnancy.
- HIV screening should be repeated in the third trimester of pregnancy for women known to be at high risk for HIV.
- Providers should explore and address reasons for declining HIV testing.
- Pregnant women should receive appropriate health education, including information regarding HIV and its transmission, as a routine part of prenatal care.
- Access to clinical care, prevention counseling, and support services is essential for women with positive HIV test results.

Aspects of these recommendations that differ from previous recommendations are as follows:
- HIV screening should be included in the routine panel of prenatal screening tests for all pregnant women. Patients should be informed that HIV screening is recommended for all pregnant women and that it will be performed unless they decline (opt-out screening).
- Repeat HIV testing in the third trimester is recommended for all women in jurisdictions with elevated HIV or AIDS incidence and for women receiving health care in facilities with at least one diagnosed HIV case per 1,000 pregnant women per year.
- Rapid HIV testing should be performed for all women in labor who do not have documentation of results from an HIV test during pregnancy. Patients should be informed that HIV testing is recommended for all pregnant women and will be performed unless they decline (opt-out screening). Immediate initiation of appropriate antiretroviral prophylaxis should be recommended on the basis of a reactive rapid HIV test result, without awaiting the result of confirmatory testing.
# Additional Considerations for HIV Screening
# Test Results
- Communicating test results.
# Clinical Care for HIV-Infected Persons
Persons with a diagnosis of HIV infection need a thorough evaluation of their clinical status and immune function to determine their need for antiretroviral treatment or other therapy. HIV-infected persons should receive or be referred for clinical care promptly, consistent with USPHS guidelines for management of HIV-infected persons (96). HIV-exposed infants should receive appropriate antiretroviral prophylaxis to prevent perinatal HIV transmission as soon as possible after birth (42) and begin trimethoprim-sulfamethoxazole prophylaxis at age 4-6 weeks to prevent Pneumocystis pneumonia (112). They should receive subsequent clinical monitoring and diagnostic testing to determine their HIV infection status (113).
# Partner Counseling and Referral
When HIV infection is diagnosed, health-care providers should strongly encourage patients to disclose their HIV status to their spouses, current sex partners, and previous sex partners and recommend that these partners be tested for HIV infection. Health departments can assist patients by notifying, counseling, and providing HIV testing for partners without disclosing the patient's identity (114). Providers should inform patients who receive a new diagnosis of HIV infection that they might be contacted by health department staff for a voluntary interview to discuss notification of their partners.
# Special Considerations for Screening Adolescents
Although parental involvement in an adolescent's health care is usually desirable, it typically is not required when the adolescent consents to HIV testing. However, laws concerning consent and confidentiality for HIV care differ among states (79). Public health statutes and legal precedents allow for evaluation and treatment of minors for STDs without parental knowledge or consent, but not every state has defined HIV infection explicitly as a condition for which testing or treatment may proceed without parental consent. Health-care providers should endeavor to respect an adolescent's request for privacy (79). HIV screening should be discussed with all adolescents and encouraged for those who are sexually active. Providing information regarding HIV infection, HIV testing, HIV transmission, and implications of infection should be regarded as an essential component of the anticipatory guidance provided to all adolescents as part of primary care (79).
# Monitoring and Evaluation
Recommended thresholds for screening are based on estimates of the prevalence of undiagnosed HIV infection in U.S. health-care settings, for which no accurate recent data exist. The optimal frequency for retesting is not yet known. Cost-effectiveness parameters for HIV screening were based on existing program models, all of which include a substantial counseling component, and did not consistently consider secondary infections averted as a benefit of screening. To assess the need for revised thresholds for screening adults and adolescents or repeat screening of pregnant women and to confirm their continued effectiveness, screening programs should monitor the yield of new diagnoses of HIV infection, monitor costs, and evaluate whether patients with a diagnosis of HIV infection are linked to and remain engaged in care. With minor modifications, laboratory information systems might provide a practical alternative for clinicians to use in determining HIV prevalence among their patients who are screened for HIV.
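A program evaluating itself along the lines described above might track a handful of simple metrics. The sketch below is a minimal illustration; the function and field names are ours, and all numbers are made-up placeholders, not program data.

```python
# A hedged sketch of the program metrics the paragraph above recommends tracking:
# screening yield, cost per new diagnosis, and linkage to care.

def program_metrics(screened: int, new_diagnoses: int,
                    linked_to_care: int, total_cost: float) -> dict:
    return {
        "yield_per_1000_screened": 1000 * new_diagnoses / screened,
        "cost_per_new_diagnosis": total_cost / new_diagnoses if new_diagnoses else float("inf"),
        "linkage_rate": linked_to_care / new_diagnoses if new_diagnoses else 0.0,
    }

# Placeholder inputs for illustration only.
print(program_metrics(screened=8_000, new_diagnoses=24,
                      linked_to_care=20, total_cost=96_000.0))
# {'yield_per_1000_screened': 3.0, 'cost_per_new_diagnosis': 4000.0, 'linkage_rate': 0.833...}
```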
# Primary Prevention and HIV Testing in Nonclinical Settings
These revised recommendations are designed to increase HIV screening in health-care settings. Often, however, the population most at risk for HIV includes persons who are least likely to interact with the conventional health-care system (47,116). The need to maintain primary prevention activities, identify persons at high risk for HIV who could benefit from prevention services, and provide HIV testing for persons who are at high risk for HIV in nonclinical venues remains undiminished. New approaches might be needed (117).
# Regulatory and Legal Considerations
These public health recommendations are based on best practices and are intended to comply fully with the ethical principles of informed consent (67). Legislation related to HIV and AIDS has been enacted in every state and the District of Columbia (118), and specific requirements related to informed consent and pretest counseling differ among states (119). Certain states, local jurisdictions, or agencies might have statutory or other regulatory impediments to opt-out screening, or they might impose other specific requirements for counseling, written consent, confirmatory testing, or communicating HIV test results that conflict with these recommendations. Where such policies exist, jurisdictions should consider strategies to best implement these recommendations within current parameters and consider steps to resolve conflicts with these recommendations.
# Other Guidelines
Issues that fall outside the scope of these recommendations are addressed by other USPHS guidelines (Box 1). Because concepts relevant to HIV management evolve rapidly, USPHS updates recommendations periodically; current updates are available from the National Institutes of Health. Additional guidelines have been published by CDC and the U.S. Department of Health and Human Services, Office for Civil Rights (Box 2).
"id": "93b9cfc6fb9778d74d0e9a86726a7ce99381679e",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Compendium of Animal Rabies Control, 1998

National Association of State Public Health Veterinarians, Inc.*
The purpose of this Compendium is to provide information on rabies control to veterinarians, public health officials, and others concerned with rabies control. These recommendations serve as the basis for animal rabies-control programs throughout the United States and facilitate standardization of procedures among jurisdictions, thereby contributing to an effective national rabies-control program. This document is reviewed annually and revised as necessary. Recommendations for parenteral immunization procedures are contained in Part I; all animal rabies vaccines licensed by the United States Department of Agriculture (USDA) and marketed in the United States are listed in Part II; Part III details the principles of rabies control.
# Part I: Recommendations for Parenteral Immunization Procedures
# A. Vaccine Administration
All animal rabies vaccines should be restricted to use by, or under the direct supervision of, a veterinarian.
# B. Vaccine Selection
In comprehensive rabies-control programs, only vaccines with a 3-year duration of immunity should be used. This procedure constitutes the most effective method of increasing the proportion of immunized dogs and cats in any population (See Part II).
# C. Route of Inoculation
All vaccines must be administered in accordance with the specifications of the product label or package insert. If administered intramuscularly, the vaccine must be injected at one site in the thigh.
# D. Vaccination of Wildlife and Hybrids
The efficacy of parenteral rabies vaccination of wildlife and hybrids (the offspring of wild animals crossbred to domestic dogs and cats) has not been established, and no rabies vaccine is licensed for these animals. Zoos or research institutions may establish vaccination programs that attempt to protect valuable animals, but these programs should not replace appropriate public health activities that protect humans.
# E. Accidental Human Exposure to Vaccine
Accidental inoculation may occur during administration of animal rabies vaccine. Such exposure to inactivated vaccines constitutes no rabies hazard.
# F. Identification of Vaccinated Animals
All agencies and veterinarians should adopt the standard tag system. This practice will aid the administration of local, state, national, and international control procedures. Animal license tags should be distinguishable in shape and color from rabies vaccine tags. Anodized aluminum rabies tags should be no less than 0.064 inches in thickness.
1. Rabies Tags.

4. Adjunct Procedures. Methods or procedures that enhance rabies control include the following:
a. Licensure. Registration or licensure of all dogs, cats, and ferrets may be used to aid in rabies control. A fee is frequently charged for such licensure, and revenues collected are used to maintain rabies- or animal-control programs. Vaccination is an essential prerequisite to licensure.
b. Canvassing of Area. House-to-house canvassing by animal-control personnel facilitates enforcement of vaccination and licensure requirements.
c. Citations. Citations are legal summonses issued to owners for violations, including the failure to vaccinate or license their animals. The authority for officers to issue citations should be an integral part of each animal-control program.
d. Animal Control. All communities should incorporate stray animal control, leash laws, and training of personnel in their programs.

5. Postexposure Management. Any animal bitten or scratched by a wild, carnivorous mammal or a bat that is not available for testing should be regarded as having been exposed to rabies (see the sketch following this list for the dog, cat, and ferret decision logic).
a. Dogs, Cats, and Ferrets. Unvaccinated dogs, cats, and ferrets exposed to a rabid animal should be euthanized immediately. If the owner is unwilling to have this done, the animal should be placed in strict isolation for 6 months and vaccinated 1 month before being released. Animals with expired vaccinations need to be evaluated on a case-by-case basis. Dogs, cats, and ferrets that are currently vaccinated should be revaccinated immediately, kept under the owner's control, and observed for 45 days.
b. Livestock. All species of livestock are susceptible to rabies; cattle and horses are among those most frequently infected. Livestock exposed to a rabid animal and currently vaccinated with a vaccine approved by USDA for that species should be revaccinated immediately and observed for 45 days. Unvaccinated livestock should be slaughtered immediately. If the owner is unwilling to have this done, the animal should be kept under close observation for 6 months. The following are recommendations for owners of unvaccinated livestock exposed to rabid animals: 1) If the animal is slaughtered within 7 days of being bitten, its tissues may be eaten without risk of infection, provided liberal portions of the exposed area are discarded. Federal meat inspectors must reject for slaughter any animal known to have been exposed to rabies within 8 months. 2) Neither tissues nor milk from a rabid animal should be used for human or animal consumption. However, because pasteurization temperatures will inactivate rabies virus, drinking pasteurized milk or eating cooked meat does not constitute a rabies exposure. 3) Having more than one rabid animal in a herd or having herbivore-to-herbivore transmission is rare; therefore, restricting the rest of the herd if a single animal has been exposed to or infected by rabies may not be necessary.
c. Other Animals. Other animals bitten by a rabid animal should be euthanized immediately. Animals maintained in USDA-licensed research facilities or accredited zoological parks should be evaluated on a case-by-case basis.
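The sketch below restates the postexposure rules of item 5a as a small Python decision function. It is a reading aid only, with our own function name and status categories; actual case disposition rests with local and state health officials, and expired vaccinations require case-by-case evaluation.

```python
# Schematic sketch of the postexposure decision logic for dogs, cats, and
# ferrets in item 5a above. Illustrative only; not an official CDC tool.

def manage_exposed_dog_cat_ferret(vaccination_status: str) -> str:
    if vaccination_status == "current":
        return "Revaccinate immediately; keep under owner's control; observe 45 days."
    if vaccination_status == "expired":
        return "Evaluate on a case-by-case basis."
    # Unvaccinated animal.
    return ("Euthanize immediately; if the owner is unwilling, place in strict "
            "isolation for 6 months and vaccinate 1 month before release.")

print(manage_exposed_dog_cat_ferret("current"))
```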
# Management of Animals That Bite Humans.
A healthy dog, cat, or ferret that bites a person should be confined and observed for 10 days; not administering rabies vaccine during the observation period is recommended. Such animals should be evaluated by a veterinarian at the first sign of illness during confinement. Any illness in the animal should be reported immediately to the local health department. If signs suggestive of rabies develop, the animal should be euthanized, its head removed, and the head shipped under refrigeration (not frozen) for examination of the brain by a qualified laboratory designated by the local or state health department.

Any stray or unwanted dog, cat, or ferret that bites a person may be euthanized immediately and the head submitted as described above for rabies examination. Animals other than dogs, cats, or ferrets that might have exposed a person to rabies should be reported immediately to the local health department. Prior vaccination of an animal does not preclude the necessity for euthanasia and testing if the period of virus shedding is unknown for that species.

Management of animals other than dogs, cats, and ferrets depends on the species, the circumstances of the bite, the epidemiology of rabies in the area, and the biting animal's history, current health status, and potential for exposure to rabies. Postexposure management of persons should follow the recommendations of the ACIP.*
# C. Control Methods in Wildlife
The public should be warned not to handle wildlife. Wild mammals and hybrids that bite or otherwise expose people, pets, or livestock should be considered for euthanasia and rabies examination. A person bitten by any wild mammal should immediately report the incident to a physician who can evaluate the need for antirabies treatment (See current rabies prophylaxis recommendations of the ACIP*).
1. Terrestrial Mammals. The use of licensed oral vaccines for the mass immunization of free-ranging wildlife should be considered in selected situations, with the approval of the state agency responsible for animal rabies control. Continuous and persistent government-funded programs for trapping or poisoning wildlife are not cost effective in reducing wildlife rabies reservoirs on a statewide basis. However, limited control in high-contact areas (e.g., picnic grounds, camps, or suburban areas) may be indicated for the removal of selected high-risk species of wildlife. The state wildlife agency and state health department should be consulted for coordination of any proposed vaccination or population-reduction programs.

2. Bats. Indigenous rabid bats have been reported from every state except Hawaii and have caused rabies in at least 28 humans in the United States. However, controlling rabies in bats by programs to reduce their populations is neither feasible nor desirable. Bats should be excluded from houses and adjacent structures to prevent direct association with humans. Such structures should then be made bat-proof by sealing entrances used by bats. Persons with frequent bat contact should be immunized against rabies as recommended by the ACIP.*
"id": "90c6bceeca97ab862d0a642aa188904c9280b6c4",
"source": "cdc",
"title": "None",
"url": "None"
} |
On the cover: The cover design includes a wispy background image intended to evoke the well-known health hazard of smoke associated with the use of combustible tobacco products, and the much less studied misty emissions associated with the use of electronic devices to "vape" liquids containing nicotine and other components. The full range of hazards associated with tobacco use extends well beyond such air contaminants, so the cover design also incorporates a text box to highlight the optimal "Tobacco-Free" (i.e., not just "Smoke-Free") status for both workplaces and workers. The text box also evokes the NIOSH Total Worker Health™ strategy. This strategy maintains a strong focus on protection of workers against occupational hazards, including exposure to secondhand tobacco smoke on the job, while additionally encompassing workplace health promotion to target lifestyle risk factors, including tobacco use by workers. Photo by ©Thinkstock.
# Foreword
Current Intelligence Bulletins (CIBs) are issued by the National Institute for Occupational Safety and Health (NIOSH) to disseminate new scientific information about occupational hazards. A CIB may draw attention to a formerly unrecognized hazard, report new data on a known hazard, or disseminate information about hazard control.
Public health efforts to prevent disease caused by tobacco use have been underway for the past half century, but more remains to be done to achieve a society free of tobacco-related death and disease. The Centers for Disease Control and Prevention (CDC), of which NIOSH is a part, has recently proclaimed a "Winnable Battle" to reduce tobacco use. NIOSH marks a half century since the first Surgeon General's Report on the health consequences of smoking by disseminating this CIB 67, Promoting Health and Preventing Disease and Injury through Workplace Tobacco Policies.
Workers who use tobacco products, or who are employed in workplaces where smoking is allowed, are exposed to carcinogenic and other toxic components of tobacco and tobacco smoke. Cigarette smoking is becoming less frequent, and smoke-free and tobacco-free workplace policies are reducing exposure to secondhand smoke (SHS) and motivating smokers to quit-but millions of workers still smoke, and smoking is still permitted in many workplaces. Other forms of tobacco also represent a health hazard to workers who use them. In addition to direct adverse effects of tobacco on the health of workers who use tobacco products or are exposed to SHS, tobacco products used in the workplace-and away from work-can worsen the hazardous effects of other workplace exposures.
This CIB addresses the following aspects of tobacco use:
- Tobacco use among workers.
- Exposure to secondhand smoke in workplaces.
- Occupational health and safety concerns relating to tobacco use by workers.
- Existing occupational safety and health regulations and recommendations prohibiting or limiting tobacco use in the workplace.
- Hazards of worker exposure to SHS in the workplace.
- Interventions aimed at eliminating or reducing these hazards.
This CIB concludes with NIOSH recommendations on tobacco use in places of work and tobacco use by workers.
NIOSH urges all employers to embrace a goal that all their workplaces will ultimately be made and maintained tobacco-free. Initially, at a minimum, employers should take these actions:
- Establish their workplaces as smoke-free (encompassing all indoor areas without exceptions, areas immediately outside building entrances and air intakes, and all work vehicles).
# Introduction
Various NIOSH criteria documents on individual hazardous industrial agents, from asbestos through hexavalent chromium , have included specific recommendations relating to tobacco use, along with other recommendations to eliminate or reduce occupational safety and health risks. In addition, NIOSH has published two Current Intelligence Bulletins focused entirely on the hazards of tobacco use. CIB 31, Adverse Health Effects of Smoking and the Occupational Environment, outlined how tobacco usemost commonly smoking-can increase risk, sometimes profoundly, of occupational disease and injury . In that CIB, NIOSH recommended that smoking be curtailed in workplaces where those other hazards are present and that worker exposure to those other occupational hazards be controlled. CIB 54, Environmental Tobacco Smoke in the Workplace: Lung Cancer and Other Health Effects, presented a determination by NIOSH that secondhand smoke (SHS) causes cancer and cardiovascular disease . In that CIB, NIOSH recommended that workplace exposures to SHS be reduced to the lowest feasible concentration, emphasizing that eliminating tobacco smoking from the workplace is the best way to achieve that. This current CIB 67, Promoting Health and Preventing Disease and Injury Through Workplace Tobacco Policies, augments those two earlier NIOSH CIBs. Consistent with the philosophy embodied in the NIOSH Total Worker Health™ Program , this CIB is aimed not just at preventing occupational injury and illness related to tobacco use, but also at improving the general health and well-being of workers.
# Smoking and Other Tobacco Use by Workers-Exposure to Secondhand Smoke at Work
Millions of workers use tobacco products. Since publication of the first Surgeon General's Report on the health consequences of smoking, cigarette smoking prevalence in the United States has declined by more than 50% among all U.S. adults-from about 42% in 1965 to about 18%. Overall, smoking among workers has similarly declined, but smoking rates among blue-collar workers have been shown to be consistently higher than among white-collar workers. Among blue-collar workers, those exposed to higher levels of workplace dust and chemical hazards are more likely to be smokers. Also, on average, blue-collar smokers smoke more heavily than white-collar smokers.
During 2004-2011, cigarette smoking prevalence varied widely by industry, ranging from about 10% in education services to more than 30% in construction, mining, and accommodation and food services. Smoking prevalence varies even more by occupation, ranging from 2% among religious workers to 50% among construction trades helpers. A recent survey of U.S. adults found that by 2013, approximately 1 in 3 current smokers reported ever having used e-cigarettes, a type of electronic nicotine delivery system (ENDS). However, the prevalence of ENDS use by industry and occupation has not been studied. Overall, about 3% of all workers use smokeless tobacco in the form of chewing tobacco and snuff, but smokeless tobacco use exceeds 10% among workers in construction and extraction jobs and stands at nearly 20% among workers in the mining industry. The use of smokeless tobacco by persons who also smoke tobacco products-one form of what is known as "dual use"-is a way some workers maintain their nicotine habit in settings where smoking is prohibited (e.g., in an office where indoor smoking is prohibited or in coal mines where smoking can cause explosions). More than 4% of U.S. workers who smoke cigarettes also use smokeless tobacco.
The implementation of smoke-free policies has eliminated or substantially decreased exposure to SHS in many U.S. workplaces. But millions of nonsmoking workers not covered by these policies are still exposed to SHS in their workplace. A 2009-2010 survey found that 20.4% of nonsmoking U.S. workers experienced exposure to SHS at work on at least 1 day during the preceding week . Another survey conducted at about the same time estimated that 10.4% of nonsmoking adult U.S. workers experienced exposure to SHS at work on at least 2 days per week during the past year . Such exposure varied by industry (ranging from 4% for finance and insurance to nearly 28% for mining) and by occupation (ranging from 2% for education, training, and library occupations to nearly 29% for construction and extraction occupations). Inclusion of ENDS in smoke-free policies has increased over time. In the United States, the number of states and localities that explicitly prohibited use of e-cigarettes in public places where tobacco smoking was already prohibited totaled 3 states and more than 200 localities before the end of 2014 .
# Health and Safety Consequences of Tobacco Use
Since the first Surgeon General's report on smoking and health, many reports from the Surgeon General and other health authorities have documented serious health consequences of smoking tobacco, exposure to secondhand smoke (SHS), and use of smokeless tobacco.
Smoking is a known cause of the top five health conditions impacting the U.S. population-heart disease, cancer, cerebrovascular disease, chronic lower respiratory disease, and unintentional injuries. The risk of most adverse health outcomes caused by smoking is related to the duration and intensity of tobacco smoking, but no level of tobacco smoking is risk free [DHHS 2010b, 2014].
Likewise, there is no risk-free level of exposure to SHS. Among exposed adults, there is strong evidence of a causal relationship between exposure to SHS and a number of adverse health effects, including lung cancer, heart disease (including heart attacks), stroke, exacerbation of asthma, and reduced birth weight of offspring (due to SHS exposure of nonsmoking pregnant women). In addition, there is suggestive evidence that exposure to SHS causes a range of other health effects among adults, including other cancers (breast cancer, nasal cancer), asthma, chronic obstructive pulmonary disease (COPD), and premature delivery of babies born to women exposed to SHS.
Because ENDS are relatively new products that vary widely and have not been well studied, limited data are available on potential hazardous effects of active and passive exposures to their emissions. A recent white paper from the American Industrial Hygiene Association thoroughly reviewed the ENDS issue and cautioned that "… the existing research does not appear to warrant the conclusion that e-cigarettes are 'safe' in absolute terms … e-cigarettes should be considered a source of volatile organic compounds (VOCs) and particulates in the indoor environment that have not been thoroughly characterized or evaluated for safety".
Smokeless tobacco is known to cause several types of cancer, including oral, esophageal, and pancreatic cancers . Some newer smokeless tobacco products (e.g., snus) are processed in a way intended to substantially reduce toxicant and carcinogen content, though variable residual levels remain even in these newer products and represent potential risk to users . All smokeless tobacco products contain nicotine, a highly addictive substance which is plausibly responsible for high risks of adverse reproductive outcomes (e.g., low birth weight, pre-term delivery, and stillbirth) associated with maternal use of snus .
# Combining Tobacco Use and Occupational Hazards Enhances Risk
Many workers and their employers do not fully understand that tobacco use in their workplaces (most commonly smoking) can increase-sometimes profoundly-the likelihood and/or the severity of occupational disease and injury caused by other hazards present. This can occur in various ways. A toxic industrial chemical present in the workplace can also be present in tobacco products and/or tobacco smoke, so workers who smoke or are exposed to SHS are more highly exposed and placed at greater risk of the occupational disease associated with those chemicals.
Heat generated by smoking tobacco in the workplace can transform some workplace chemicals into more toxic chemicals, placing workers who smoke at greater risk of toxicity . Tobacco products can readily become contaminated by toxic workplace chemicals, through contact of the tobacco products with unwashed hands or contaminated surfaces and through deposition of airborne contaminants onto the tobacco products. Subsequent use of the contaminated tobacco products, whether at or away from the workplace, can facilitate entry of these toxic agents into the user's body .
Often, a health effect can be independently caused by tobacco use and by workplace exposure to a toxic agent. For example, tobacco smoking can reduce a worker's lung function, leaving that worker more vulnerable to the effect of any similar impairment of lung function caused by occupational exposure to dusts, gases, or fumes . For some occupational hazards, the combined impact of tobacco use and exposure to a toxic occupational agent can be synergistic (i.e., amounting to an effect profoundly greater than the sum of each independent effect). An example is the combined synergistic effect of tobacco smoking and asbestos exposure on lung cancer, which results in a profoundly increased risk of lung cancer among asbestos-exposed workers who smoke .
The risk of occupational injuries and traumatic fatalities can be greatly enhanced when tobacco use is combined with an occupational hazard. Obvious examples are explosions and fires when explosive or flammable materials in the workplace are ignited by sources associated with tobacco smoking . However, any form of tobacco use may result in traumatic injury if the worker operating a vehicle or industrial machinery is distracted by tobacco use (e.g., while opening, lighting, extinguishing, or disposing of a tobacco product) .
# Preventive Interventions
Both health and economic considerations can motivate people to quit tobacco use. Workers who smoke can protect their own health by quitting tobacco use and can protect their coworkers' health by not smoking in the workplace. Smokers who quit stand to benefit financially. Among other savings, they no longer incur direct costs associated with consumer purchases of tobacco products and related materials, and they generally pay lower life and health insurance premiums and lower out-of-pocket costs for health care.
Legally determined employer responsibilities set out in federal, state, and local laws and regulations, as well as health and economic considerations, can motivate employers to establish workplace policies that prohibit or restrict tobacco use. Even where smoke-free workplace policies are not explicitly mandated by state or local governments, the general duty of employers to provide safe work environments for their employees can motivate employers to prohibit smoking in their workplaces, thereby avoiding liability for exposing nonsmoking employees to SHS . Also, not only are nonsmoking workers generally healthier, but they are more productive and less costly for employers. Considering aggregate cost and productivity impacts, one recent study estimated that the annual cost to employ a smoker was, on average, nearly $6,000 greater than the cost to employ a nonsmoker . It follows that interventions that help smoking workers quit can benefit the bottom line of a business.
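To illustrate how that per-smoker figure can scale to an employer's bottom line, the minimal sketch below applies the cited estimate of nearly $6,000 in annual excess cost per smoking employee to a hypothetical workforce; the headcount is an assumption for illustration, and the roughly 20% smoking prevalence is taken from the figures reported elsewhere in this CIB.

```python
# Back-of-the-envelope estimate of annual excess employer cost from smoking.
# The ~$6,000 excess cost per smoker is the figure cited above; the
# workforce size is hypothetical, and the ~20% smoking prevalence comes
# from the prevalence figures reported in this CIB.

EXCESS_COST_PER_SMOKER = 6_000  # USD per year, per the cited study
workforce_size = 500            # hypothetical headcount
smoking_prevalence = 0.20       # ~20% of U.S. workers smoke

estimated_smokers = workforce_size * smoking_prevalence
annual_excess_cost = estimated_smokers * EXCESS_COST_PER_SMOKER

print(f"Estimated smokers on staff: {estimated_smokers:.0f}")
print(f"Estimated annual excess cost: ${annual_excess_cost:,.0f}")
# With these assumptions: 100 smokers, about $600,000 per year.
```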
Several studies have shown that smoke-free workplace policies decrease exposure of nonsmoking employees to SHS at work, increase smoking cessation, and decrease smoking rates among employees . Less restrictive workplace smoking policies are associated with higher levels of sustained tobacco use among workers . In workplaces without a workplace rule that limits smoking, workers are significantly more likely to be smokers . Policies that make indoor workplaces smoke-free result in improved worker health . For example, smoke-free policies in the hospitality industry have been shown to improve health among bar workers, who are often heavily exposed to SHS in the absence of such policies . Smoke-free policies also reduce hospitalizations for heart attacks in the general population and several recent studies suggest that these policies may also reduce hospitalizations and emergency department visits for asthma in the general population . The CDC-administered Task Force on Community Preventive Services recommends smoke-free workplace policies, not only to reduce exposure to SHS, but also to increase tobacco cessation, reduce tobacco use prevalence, and reduce tobacco-related morbidity and mortality .
# Workplace Tobacco Use Cessation Programs
Employees who smoke and want to quit can benefit from employer-provided resources and assistance. Various levels and types of cessation support can be provided to workers, though more intensive intervention has a greater effect .
Occupational health providers and worksite health promotion staff can increase quit rates simply by asking about a worker's tobacco use and offering brief counseling . Workers who smoke can be referred to publicly funded state quitlines, which have been shown to increase tobacco cessation success . Widespread availability, ease of accessibility, affordability, and potential reach to populations with higher levels of tobacco use make quitlines an important component of any cessation effort . However, many employers do not make their employees aware of them .
Mobile phone texting interventions and web-based interventions are also promising approaches to assist with tobacco cessation . The most comprehensive workplace cessation programs incorporate tobacco cessation support into programs that address the overall safety, health, and well-being of workers. A growing evidence base supports the enhanced effectiveness of workplace health promotion programs when they are combined with occupational health protection programs .
# Health Insurance and Smoking-Using Incentives and Disincentives to Modify Tobacco Use Behavior
Many workers are covered by employer-provided health insurance, which is increasingly being designed to encourage and help employees to adopt positive personal health-related behaviors, including smoking cessation for smokers. Health insurance coverage of evidence-based smoking cessation treatments is associated with increases in the number of smokers who attempt to quit, use proven treatments in these attempts, and succeed in quitting . Ideally, such coverage should provide access to all evidence-based cessation treatments, including individual, group, and telephone counseling, and all seven FDA-approved cessation medications, while eliminating or minimizing barriers such as cost-sharing and prior authorization .
The Affordable Care Act (ACA), Public Law 111-148, includes provisions pertinent to tobacco use and cessation. Some of these provisions are intended to help smokers quit by increasing their access to proven cessation treatments. Other ACA provisions are intended to encourage tobacco cessation by permitting small-group plans to charge tobacco users premiums that are up to 50% higher than those charged to nontobacco users, subject to certain limitations.
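As a concrete illustration of that 50% ceiling, the short sketch below computes the maximum premium a small-group plan could charge a tobacco user; the baseline premium is a hypothetical figure, and the calculation sets aside the additional limitations noted above.

```python
# Maximum tobacco-use surcharge permitted for small-group plans under the
# ACA provision described above (up to 50% above the nontobacco-user rate).
# The baseline premium is hypothetical; actual surcharges are subject to
# plan design and state law.

MAX_SURCHARGE = 0.50               # 50% ceiling per the ACA provision
baseline_monthly_premium = 400.00  # hypothetical nontobacco-user premium, USD

max_tobacco_user_premium = baseline_monthly_premium * (1 + MAX_SURCHARGE)
print(f"Maximum tobacco-user premium: ${max_tobacco_user_premium:.2f}/month")
# With these assumptions: $600.00/month.
```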
The appropriate intent of incentives is to help employees who use tobacco quit, thus improving health and reducing health-care costs overall. The evidence for the effectiveness of imposing insurance premium surcharges on tobacco users is limited, and care is needed to ensure that incentive programs are designed to work as intended and to minimize the potential use of incentives in an unduly coercive or discriminatory manner .
# Conclusions
- Cigarette smoking by workers and SHS exposure in the workplace have both declined substantially over recent decades, but about 20% of all U.S. workers still smoke and about 20% of nonsmoking workers are still exposed to SHS at work.
- Smoking prevalence among workers varies widely by industry and occupation, approaching or exceeding 30% in construction, mining, and accommodation and food services workers.
- Prevalence of ENDS use by occupation and industry has not been studied, but ENDS use has grown greatly, with about 1 in 3 current U.S. adult smokers reporting ever having used e-cigarettes by 2013.
- Smokeless tobacco is used by about 3% of U.S. workers overall, but by more than 10% of workers in construction and extraction jobs and by nearly 20% of workers in the mining industry, which can be expected to result in disparities in tobacco-related morbidity and mortality.
- Tobacco use causes debilitating and fatal diseases, including cancer, respiratory diseases, and cardiovascular diseases. These diseases afflict mainly users, but they also occur in those exposed to SHS. Smoking is substantially more hazardous, but use of smokeless tobacco also causes adverse health effects. More than 16 million U.S. adults live with a disease caused by smoking, and each year nearly a half million die prematurely from smoking or exposure to SHS.
- Tobacco use is associated with increased risk of injury and property loss due to fire, explosion, and vehicular collisions.
- Tobacco use by workers can increase, sometimes profoundly, the likelihood and the severity of occupational disease and injury caused by other workplace hazards (e.g., lead, asbestos, and flammable materials).
- Restrictions on smoking and tobacco use in specific work areas where particular high-risk occupational hazards are present (e.g., explosives, highly flammable materials, or highly toxic materials that could be ingested via tobacco use) have long been used to protect workers.
- A risk-free level of exposure to SHS has not been established, and ventilation is insufficient to eliminate indoor exposure to SHS.
- Potential adverse health effects associated with using ENDS, or with secondhand exposure to the particulate aerosols and gases emitted from ENDS, remain to be fully characterized.
- Policies that prohibit tobacco smoking throughout the workplace (i.e., smoke-free workplace policies) are now widely implemented, but they have not yet been universally adopted across the United States. These policies improve workplace air quality, reduce SHS exposure and related health effects among nonsmoking employees, increase the likelihood that workers who smoke will quit, decrease the amount of smoking during the working day by employees who continue to smoke, and have an overall impact of improving the health of workers (i.e., among both nonsmokers who are no longer exposed to SHS on the job and smokers who quit).
- Workplace-based efforts to help workers quit tobacco use can be easily integrated into existing occupational health and wellness programs. Even minimal counseling and/or simple referral to state quitlines, mobile phone texting interventions, and web-based interventions can be effective, and more comprehensive programs are even more effective.
- Integrating both occupational safety and health protection components into workplace health promotion programs (e.g., smoking cessation) can increase participation in tobacco cessation programs and successful cessation among blue-collar workers.
- Smokers, on average, are substantially more costly to employ than nonsmokers.
- Some employers have policies that prohibit employees from using tobacco when away from work or that bar the hiring of smokers or tobacco users. However, the ethics of these policies remain under debate, and they may be legally prohibited in some jurisdictions.
# Recommendations
NIOSH recommends that employers take the following actions related to employee tobacco use:
- At a minimum, establish and maintain smoke-free workplaces that protect those in workplaces from involuntary, secondhand exposures to tobacco smoke and airborne emissions from e-cigarettes and other electronic nicotine delivery systems. Ideally, smoke-free workplaces should be established in concert with tobacco cessation support programs. Smoke-free zones should encompass (1) all indoor areas without exceptions (i.e., no indoor smoking areas of any kind, even if separately enclosed and/or ventilated), (2) all areas immediately outside building entrances and air intakes, and (3) all work vehicles. Additionally, ashtrays should be removed from these areas.
- Optimally, establish and maintain entirely tobacco-free workplaces, allowing no use of any tobacco products across the entire workplace campus (see model policy in Box 6-1).
- Comply with current OSHA and MSHA regulations that prohibit or limit smoking, smoking materials, and/or use of other tobacco products in work areas characterized by the presence of explosive or highly flammable materials or potential exposure to toxic materials (see Table A-3 in the Appendix). To the extent feasible, follow all similar NIOSH recommendations (see Table A-2 in the Appendix).
- Provide information on tobacco-related health risks and on benefits of quitting to all employees and other workers at the worksite (e.g., contractors and volunteers).
  - Inform all workers about health risks of tobacco use.
  - Inform all workers about health risks of exposure to SHS.
  - Train workers who are exposed or potentially exposed to occupational hazards at work about increased health and/or injury risks of combining tobacco use with exposure to workplace hazards, about what the employer is doing to limit the risks, and about what the worker can do to limit his/her risks.
- Provide information on employer-provided and publicly available tobacco cessation services to all employees and other workers at the worksite (e.g., contractors and volunteers).
  - At a minimum, include information on available quitlines; mobile phone texting interventions; web-based cessation programs; self-help materials; and employer-provided cessation programs and tobacco-related health insurance benefits available to the worker.
  - Ask about personal tobacco use as part of all occupational health and wellness program interactions with individual workers, and promptly provide encouragement to quit and guidance on tobacco cessation to each worker identified as a tobacco user and to any other worker who requests tobacco cessation guidance.
- Offer and promote comprehensive tobacco cessation support to all tobacco-using workers and, where feasible, to their dependents.
  - Provide employer-sponsored cessation programs at no cost or subsidize cessation programs for lower-wage workers to enhance the likelihood of their participation. If health insurance is provided for employees, the health plan should provide comprehensive cessation coverage, including all evidence-based cessation treatments, unimpeded by co-pays and other financial or administrative barriers.
  - Include occupational health protection content specific to the individual workplace in employer-sponsored tobacco cessation programs offered to workers with jobs involving potential exposure to other occupational hazards.
- Become familiar with available guidance (e.g., CDC's "Implementing a Tobacco-Free Campus Initiative in Your Workplace") (see Box 6-2) and federal guidance on tobacco cessation insurance coverage under the ACA before developing, implementing, or modifying tobacco-related policies, interventions, or controls.
- Develop, implement, and modify tobacco-related policies, interventions, and controls in a stepwise and participatory manner. Get input from employees, labor representatives, line management, occupational safety/health and wellness staff, and human resources professionals. Those providing input should include current and former tobacco users, as well as those who have never used tobacco. Seek voluntary input from employees with health conditions, such as heart disease and asthma, exacerbated by exposure to SHS.
- Make sure that any differential employment benefits policies that are based on tobacco use or participation in tobacco cessation programs are designed with a primary intent to improve worker health and comply with all applicable federal, state, and local laws and regulations. Even when permissible by law, these differential employment benefit policies-as well as differential hiring policies based on tobacco use-should be implemented only after seriously considering ethical concerns and possible unintended consequences. These consequences can include the potential for adverse impacts on individual employees (e.g., coercion, discrimination, and breach of privacy) and the workforce as a whole. Furthermore, the impact of any differential policies that are introduced should be monitored to determine whether they improve health and/or have unintended consequences.
NIOSH recommends that workers who smoke cigarettes or use other tobacco products take the following actions:
- Comply with all workplace tobacco policies.
- Ask about available employer-provided tobacco cessation programs and cessation-related health insurance benefits.
- Quit using tobacco products. Know that quitting tobacco use is beneficial at any age, but the earlier one quits, the greater the benefits. Many people find various types of assistance to be very helpful in quitting, and evidence-based cessation treatments have been found to increase smokers' chances of quitting successfully.
# Background
The widespread recognition that tobacco use is the leading preventable cause of premature death and a major cause of preventable disease, injury, and disability in the United States is based on an extraordinarily strong scientific foundation. The first Surgeon General's report on smoking and health, issued a half century ago, concluded that cigarette smoke causes lung cancer and chronic bronchitis. Subsequent reports of the Surgeon General have determined that both active tobacco smoking and secondhand smoke (SHS) exposure are important causes of cancer, heart disease, and respiratory disease, and that smokeless tobacco use also causes serious disease, including oral, esophageal, and pancreatic cancer. Several reports of the Surgeon General have addressed benefits of effective smoking cessation programs and other means of reducing tobacco use [DHHS 1990, 2000, 2012, 2014].
A Surgeon General's report also established the ongoing Healthy People strategy, aimed broadly at improving the nation's health . Currently, Healthy People 2020 includes a major goal of reducing "illness, disability, and death related to tobacco use and secondhand smoke exposure" along with several specific objectives that target eliminating tobacco smoking in workplaces . Later, when scientific evidence became clear that the health risk from inhaling tobacco smoke is not limited to smokers but also affects bystanders, NIOSH published another CIB focused solely on tobacco smoke-this one on SHS in the workplace . In that CIB, NIOSH presented its determination that SHS (referred to in that document as "environmental tobacco smoke") causes cancer and cardiovascular disease. In recommending that workplace exposures to SHS be reduced to the lowest feasible concentration using all available preventive measures, NIOSH emphasized that the best approach is to eliminate tobacco smoking in the workplace, and it endorsed employer-provided smoking cessation programs for employees who smoke (see Appendix Table A-1).
In retrospect, the CIB on SHS in the workplace marked a watershed in the Institute's approach to occupational safety and health. Over time, NIOSH recommendations concerning specific industrial hazards-which earlier might have been relatively silent about what were then narrowly understood to be strictly personal health-related behaviors like smoking-have come to embrace a more comprehensive preventive approach. This evolution has been motivated by a better understanding of how tobacco use adversely impacts occupational diseases and injuries and-perhaps just as importantly-by a changing societal view of the health and economic consequences of tobacco use. By way of example, criteria documents produced in the past decade on two lung hazards-refractory ceramic fibers and hexavalent chromium-have included entire sections on smoking cessation, something not seen in earlier criteria documents (see Appendix Table A-2). In a 2004 medical journal paper, the Director of NIOSH concluded that "Smoking is an occupational hazard, both for the worker who smokes and for the nonsmoker who is exposed to it in his or her workplace." He also recommended that "Smoking as an occupational hazard should be completely eliminated for the sake of the health and safety of American workforce". A 2010 post on the NIOSH Blog site pointed out that "Tobacco-free workplaces, on-site tobacco cessation services, and comprehensive, employer-sponsored healthcare benefits that provide multiple quit attempts, have all been shown to increase tobacco treatment success".
Thus, instead of staying focused nearly exclusively on protecting workers from specific occupational hazards, NIOSH has progressively adopted a "strategy integrating occupational safety and health protection with health promotion to prevent worker injury and illness and to advance health and wellbeing". This integrated approach is embodied by NIOSH in its Total Worker Health™ Program.

The overall prevalence of current cigarette smoking among workers during the 2004-2010 period was 19.6%, very closely approximating the prevalence among all U.S. adults, which annually ranged from a high of 20.9% to a low of 19.3% during the period [CDC 2011a, 2013b].
During the past several decades, a number of studies have assessed smoking habits among U.S. workers. Consistently, these studies have shown substantially higher cigarette smoking prevalence among blue-collar workers compared with white-collar workers, particularly among males .
In addition, these studies provide evidence of higher intensity of smoking among blue-collar workers who smoke than among white-collar workers who smoke. Among blue-collar workers, those with higher levels of exposure to dust and chemical hazards are more likely to be smokers.
NIOSH publishes recent data on cigarette smoking status by industry and occupation groupings in the Work-Related Lung Disease (WoRLD) Surveillance Report and corresponding online updates [NIOSH 2008a, 2014]. The most recent tables, covering the period 2004-2011, show that smoking prevalence varies widely-nearly four-fold-by industry. Smoking prevalence at or below 10% was found among major industry sectors in education services (9.8%) and among minor industry sectors in religious, grantmaking, civic, labor, professional, and similar organizations (10.0%). Smoking prevalence exceeding 30% was found among three major industry sectors: construction (32.1%); accommodation and food services (32.1%); and mining (30.2%). Several minor sectors in other major industries also exceeded 30%: gasoline stations (37.6%); fishing, trapping, and hunting (34.3%); forestry and logging (32.9%); warehousing and storage (32.0%); rental and leasing services (31.3%); wood product manufacturing (30.7%); and nonmetallic mineral product manufacturing (30.4%).
Additional tables posted on that same NIOSH site show that cigarette smoking prevalence varies even more extremely-25-fold-by specific occupational group, from 2.0% for religious workers to 49.5% for construction trades helpers (see Appendix Figures A-1a and A-1b).
# Smokeless Tobacco
Smokeless tobacco is not burned when used. Types of smokeless tobacco include chewing tobacco, snuff, dip, snus, and dissolvable tobacco products.
As with smoking, NHIS data have been used to estimate smokeless tobacco use by workers [NIOSH 2014]. During 2010, an estimated 3% of currently working adults used smokeless tobacco in the form of chewing tobacco or snuff. Smokeless tobacco use ranged up to 11% for those working in construction and extraction jobs and more than 18% for those working in the mining industry. (Appendix Figures A-2a and A-2b display prevalence of smokeless tobacco use for major industry and occupation categories.)
# Dual Use
Someone who smokes cigarettes and also uses smokeless tobacco engages in "dual use."
# Secondhand Smoke Exposures at Work
SHS is a mixture of the "sidestream smoke" emitted directly into the air by the burning tobacco product and the "mainstream smoke" exhaled by smokers while smoking. Workplace exposures to SHS have been demonstrated by using air monitoring and through the use of biological markers, such as cotinine, a metabolite of nicotine . By the late 1990s, studies that objectively measured markers of SHS found levels that varied substantially by workplace. Where smoking was allowed, offices and blue-collar workplaces had similar concentrations of nicotine in the air; higher nicotine concentrations were present in restaurants, and still higher concentrations (an order of magnitude higher than in offices) were measured in bars .
The 2009-2010 nationwide survey also found that 20.4% of nonsmoking employed adults reported SHS exposure in their indoor workplace on 1 or more days during the past 7 days .
An analysis of recent NHIS data that used a more restrictive definition of SHS exposure-exposure to SHS at work on 2 or more days per week during the past year-estimated that 10.0% of nonsmoking U.S. workers reported frequent exposure to SHS at work. An even more recent survey involving all states found proportions of nonsmoking employed adults who reported SHS exposure on 1 or more days during the past 7 days in their indoor workplace ranging from 12.4% (Maine) to 30.8% (Nevada).
Prevalences of SHS exposure at work on 1 or more days during the past 7 days were significantly higher among males (23.8%) than females (16.7%), among those without a high school diploma (31.9%) than those with a graduate school degree (11.9%), and among those with an annual household income less than $20,000 (24.2%) than those with ≥ $100,000 income (14.8%). A recent study separated effects on workplace SHS exposure associated with education and income from effects associated with occupation. Even after statistically adjusting for the effects of education and income, blue-collar workers were more likely to report workplace SHS exposure than managers and professionals. That same study also found that blue-collar workers were more likely to be smokers and more likely to be heavy smokers, suggesting that SHS exposures in the workplace could be intense for many blue-collar workers.
# Electronic Nicotine Delivery Systems
First introduced into the U.S. market in 2007 , electronic nicotine delivery systems (ENDS), which include electronic cigarettes, or e-cigarettes, are rapidly increasing in use . The ENDS marketplace has diversified in recent years and now includes multiple products, including electronic hookahs, vape pens, electronic cigars, and electronic pipes. Typically, an ENDS product has a cartridge containing a liquid consisting of varying amounts of nicotine, a propylene glycol and/or glycerin carrier, and flavorings. Inhalation draws the fluid to a heating element, creating vapor that subsequently condenses into an aerosol of minute droplets .
Available data suggest that e-cigarette use has increased greatly in the United States during the past several years. A mail survey of U.S. adults showed that the percentage who had ever used e-cigarettes more than quadrupled from 0.6% in 2009 to 2.7% in 2010.
# Lung Disease
Cigarette smoking is estimated to cause more than 113,000 deaths among smokers each year in the United States.
# Reproductive and Developmental Effects
Inhalation of tobacco smoke affects the reproductive system, with harmful effects related to fertility, fetal and child development, and pregnancy outcome. Smoking is estimated to cause more than 1,000 deaths from perinatal conditions each year in the United States. Exposure to the complex chemical mixture of combustion compounds in tobacco smoke-including carbon monoxide, which binds to hemoglobin and can deprive the fetus of oxygen-has been found to contribute to a wide range of reproductive effects in women. These effects include altered menstrual cycle and reduced fertility; placental abnormalities and preterm delivery; reduced birth weight, stillbirth, neonatal death, and sudden infant death syndrome (SIDS) in their offspring; earlier and more symptomatic menopause; and other effects [DHHS 2001, 2004, 2014].
# Other Adverse Effects
Smoking is known to cause other health problems that contribute to the generally poorer health of smokers as a group. These include visual difficulties (due to cataracts and age-related macular degeneration), hip fractures (due to low bone density), peptic ulcer disease, diabetes, rheumatoid arthritis, and periodontitis (Table 3-1). Smoking may also cause hearing loss in adults .
Inflammatory effects of tobacco smoke have been associated with many other health effects. For example, smoking has been found to delay wound healing after surgery and lead to wound complications . Also, tobacco smoking may increase the risk of hearing loss caused by occupational exposure to excessive noise . Research on other health effects associated with exposure to tobacco smoke will undoubtedly provide a more complete understanding of the adverse health effects of smoking.
# Secondhand Smoke
In the United States, SHS exposure causes more than 41,000 deaths among nonsmokers each year. There is strong evidence of a causal relationship between SHS exposure of adults and adverse health effects, including lung cancer, heart disease, stroke, exacerbation of asthma, nasal irritation, and (due to maternal exposure) reduced birth weight of offspring (Table 3-2). The evidence that exposure to SHS causes health effects among exposed infants and children is also strong (Table 3-2).
There is also suggestive evidence that exposure to SHS causes a range of other health effects. These include respiratory diseases (asthma, COPD), breast cancer, and nasal cancer among nonsmoking adults, premature delivery of babies born to women exposed to SHS, and cancers (leukemia, lymphoma, brain cancer) among children exposed to SHS . SHS exposure may also be associated with hearing loss in adults .
Among nonsmoking adults, health risks of SHS exposure extend to workplace exposures. A meta-analysis of 11 pertinent studies provided quantitative estimates of lung cancer risk attributable to workplace exposure to SHS; lung cancer risk was increased by 24% overall among workers exposed to SHS in the workplace, and there was a doubling of lung cancer risk among workers categorized as highly exposed to SHS in the workplace. A dramatic example of an adverse effect of exposure to SHS in the workplace was an asthmatic worker's death (see Box 3-3).
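One way to interpret such relative risks is the attributable fraction among the exposed, computed as (RR − 1)/RR; the short sketch below applies this standard epidemiologic formula to the two relative risks reported by the meta-analysis (1.24 overall and 2.0 for highly exposed workers), under the usual assumption that the relative risks reflect causal effects.

```python
# Attributable fraction among the exposed: AF_e = (RR - 1) / RR.
# Applied to the workplace-SHS lung cancer relative risks reported in the
# meta-analysis cited above, assuming the RRs reflect causal effects.

def attributable_fraction_exposed(rr: float) -> float:
    """Share of cases among exposed workers attributable to the exposure."""
    return (rr - 1.0) / rr

for label, rr in [("all exposed workers", 1.24), ("highly exposed workers", 2.0)]:
    print(f"{label}: RR = {rr:.2f} -> AF_e = {attributable_fraction_exposed(rr):.0%}")
# -> all exposed workers: RR = 1.24 -> AF_e = 19%
# -> highly exposed workers: RR = 2.00 -> AF_e = 50%
```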
# Smokeless Tobacco
Some forms of smokeless tobacco are well documented as causes of oral cancer, esophageal cancer, and pancreatic cancer.
# Box 3-2. Emphysema Risk in Coal Miners-Effects of Tobacco Smoking and Coal Mine Dust Exposure
A study evaluated the effects of exposure to coal mine dust, cigarette smoking, and other factors on the severity of lung disease (emphysema) among more than 700 deceased persons, including more than 600 deceased coal miners. The study found that combined occupational exposure to coal mine dust and cigarette smoking had an additive effect on the severity of emphysema among the coal miners. Among smokers and never-smokers alike, emphysema was generally more severe among those who had higher levels of exposure to coal mine dust. However, at any given level of dust exposure, miners who had smoked generally had worse emphysema than miners who had not smoked.
# Box 3-3. Asthma Death and Exposure to Secondhand Smoke-A Case Report
On May 1, 2004, a 19-year-old part-time waitress, who had a history of asthma since childhood, arrived at work. She spent 15 minutes chatting with a coworker in an otherwise unoccupied room adjacent to the bar and was reported to have no apparent breathing difficulty at that time. She then entered the bar, which was occupied by dozens of patrons, many of whom were smoking. Less than 5 minutes later she reported to the manager that she wished she had her inhaler with her, needed fresh air, and needed to get to the hospital. As she walked towards the door, she collapsed. An emergency medical crew attempted resuscitation and transported her to a hospital emergency room, where she was declared dead. "Status asthmaticus" and "asphyxia secondary to acute asthma attack" were the causes of death recorded on the death certificate and autopsy report, respectively. The workplace was described by an investigator from a NIOSH-funded state program as a "typical smoky bar." Based on the nature and circumstances of the waitress's death, the principal investigator of the state's fatality investigation program and his colleagues concluded that this waitress died from exposure to work-related SHS.

Furthermore, beyond the concerns of nonuser exposure to nicotine and these other components, there is also potential for ENDS to be used to deliver psychoactive substances such as THC, the active ingredient in marijuana.
A study funded by an organization that promotes consumer access to ENDS as a means of harm reduction for nicotine-addicted individuals emphasizes that estimated exposure levels associated with the use of ENDS are generally much lower than occupational exposure limits (OELs) for toxic industrial hazards. In light of irritant compounds (e.g., formaldehyde, acetaldehyde, and acrolein) identified in emissions from ENDS, it has been recommended that research be done to evaluate possible adverse effects of exposure to these compounds among ENDS users and individuals exposed to secondhand ENDS aerosol. Indeed, findings relating to short-term adverse effects on ENDS users include preliminary reports of significantly increased airways resistance.
# Traumatic Injuries and Fatalities Caused by Tobacco Use
Tobacco use is also an important cause of traumatic death, injury, and property damage. In 2011, there were an estimated 90,000 fires related to lighted tobacco products in the United States, resulting in an estimated 540 deaths and 1,640 injuries among civilians, and more than $600 million in property damage. In 1 of every 4 fire fatalities, the victim was not the smoker whose cigarette or other combusting tobacco product caused the fire. Annual estimates have been declining over time, in part due to the decline in smoking. In addition to injuries caused by smoking-related fires, use of tobacco products is a recognized distracting factor while operating motor vehicles, and smoking while driving has been shown to increase the risk of being involved in a crash. Adverse smoking-associated physiological alterations in bone mineralization, blood vessels, and inflammatory response may also contribute to higher risk of injuries, impaired recovery, and higher rates of associated disability among smokers.
# Tobacco Use and Increased Risk of Work-related Disease and Injury
In the general population, personal use of tobacco and exposure to SHS both cause debilitating and fatal health problems, as outlined above. What many workers and their employers often do not fully understand is that tobacco use can increase, sometimes profoundly, the likelihood and/or the severity of occupational disease and injury caused by other hazards present in their workplaces. This section outlines some ways that tobacco use by workers and in the workplace can cause or worsen occupational risks (see Box 3-4). Readers are referred to other sources for more conceptual detail about how tobacco use can affect doses of hazardous industrial agents received by workers, metabolism of these agents by exposed workers, and pathogenesis and carcinogenesis of diseases caused by these agents, as well as how researchers assess complex causal relationships involving multiple causes.
Heat generated by smoking tobacco in the workplace can transform a workplace chemical into a more toxic chemical. Though smokers are most highly exposed to the transformed chemical, nonsmoking coworkers in the same work area may also be exposed. Examples of occupational agents that have the potential for conversion to highly toxic chemicals by smoking tobacco products include polytetrafluoroethylene (Teflon®) and other chlorinated hydrocarbons (see Box 3-5).
# Box 3-4. Some Ways Tobacco Use by Workers or in the Workplace Can Cause or Worsen Occupational Safety and Health Risks
- Tobacco smoke/tobacco products can contain the same toxic agent that is released into the workplace from a work process, thus increasing the dose of that agent received by tobaccousing workers.
- A workplace chemical can be transformed into a more harmful agent by the heat involved in smoking.
- Tobacco products can become contaminated with industrial toxicants found in the workplace, thus facilitating entry of the agent into the body through inhalation, ingestion, and/or skin absorption.
- Smoking can contribute to an effect on the body comparable to that which can result from exposure to an industrial toxicant in the workplace, thus causing an additive combined effect.
- Smoking can act synergistically with industrial toxicants found in the workplace to cause a much more profound effect than anticipated based on the known individual effects of smoking and the occupational exposure.
- Tobacco use at work can contribute to work-related traumatic injuries and fatalities, either as an ignition source for explosive or flammable agents in the workplace, or through tobacco-related distraction while operating a vehicle or machinery at work.
- Smoking at work exposes nonsmoking coworkers to the hazards of secondhand smoke.
Adapted from NIOSH.

Tobacco products can readily become contaminated by industrial toxicants in the workplace through contact of the tobacco products with unwashed hands or contaminated surfaces and through deposition of airborne contaminants onto the tobacco products. Subsequent use of the contaminated tobacco product, at or even away from the workplace, facilitates entry of these toxic agents into the user's body by inhalation, ingestion, and/or skin absorption. To protect workers from such exposure, occupational safety and health regulations for lead, cadmium, and many other specific toxic agents prohibit use of (and even carrying of) tobacco products in designated work areas (see Appendix Tables A-2 and A-3).
Often, a particular health effect can be independently caused by tobacco use and workplace exposure to a toxic agent. Thus, even if tobacco is used only away from work, users will be more severely affected than non-users, typically in an additive manner. For example, tobacco smoking reduces a worker's lung function, leaving that worker more vulnerable to the effect of any similar lowering of lung function caused by occupational exposure to dusts, gases, or fumes. Occupational exposures that, like tobacco smoking, cause chronic airways diseases and lung function impairment, include cotton dust, coal mine dust, grain dust, silica dust, welding fumes, and others .
For some occupational hazards, the combined impact of tobacco use and exposure to the occupational agent can be synergistic (i.e., amounting to an effect much greater than the sum of each independent effect). Adverse biological effects of smoking on the respiratory tract can lead to higher effective doses of an industrial toxicant among smoking workers compared with nonsmoking workers. For example, deposition of hazardous occupational dusts can be increased in airways narrowed by smoking, and clearance of deposited dust can be slowed by smoking-induced impairment of both alveolar and mucociliary transport .
In addition, inflammatory cells recruited to the alveoli and airways by smoking can enhance lung injury from hazardous occupational agents, and tumor promoters in tobacco smoke can act on cells initiated by an occupational carcinogen, leading to an increased likelihood that cancer will develop from the occupational exposure among smokers .
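To make the additive-versus-synergistic distinction concrete before the asbestos example that follows, the sketch below compares the joint relative risk expected when excess risks merely add with the much larger joint risk expected when the effects multiply; the relative risks used (about 10 for smoking alone and 5 for asbestos alone) are round illustrative values broadly in line with the classic asbestos-smoking literature, not estimates from any single study.

```python
# Joint relative risk for two exposures under an additive model (excess
# risks sum) versus a multiplicative model (relative risks multiply).
# The inputs are round illustrative values, not study estimates.

def joint_rr_additive(rr_a: float, rr_b: float) -> float:
    # Additive: RR_joint = RR_a + RR_b - 1 (excess risks add)
    return rr_a + rr_b - 1.0

def joint_rr_multiplicative(rr_a: float, rr_b: float) -> float:
    # Multiplicative (synergistic): RR_joint = RR_a * RR_b
    return rr_a * rr_b

rr_smoking, rr_asbestos = 10.0, 5.0  # illustrative independent relative risks
print(f"Additive expectation:       {joint_rr_additive(rr_smoking, rr_asbestos):.0f}")
print(f"Multiplicative expectation: {joint_rr_multiplicative(rr_smoking, rr_asbestos):.0f}")
# Additive gives a joint RR of 14; multiplicative gives 50 -- far above
# what the sum of the independent effects would predict.
```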
An example of a synergistic effect is the combined effect of smoking and asbestos exposure on lung cancer (see Box 3-1). Smoking and asbestos exposure both independently cause lung cancer. Workers who both smoke and are exposed to asbestos at work face a much greater risk of dying from lung cancer than would be expected from the known independent risks of smoking by itself and asbestos exposure by itself [NIOSH 1979; DHEW 1979b; IARC 2004; Frost et al. 2011; Markowitz et al. 2013]. Workers who only smoke outside of work remain vulnerable to the synergistic combined effect of smoking and asbestos exposure. Other sorts of synergistic effects may involve consideration of temporality of exposures and outcome. Workers who start smoking before they are occupationally exposed to radon may face a more-than-multiplicative risk of lung cancer that is much higher than the additive risk faced by radon-exposed workers who did not smoke until after their occupational exposure to radon.

# Box 3-5. Polymer Fume Fever in Smokers with Occupational Exposure to Tetrafluoroethylene

Soon after use of a new spray product containing tetrafluoroethylene (a fluorocarbon monomer) was introduced at a small industrial facility, workers began experiencing severe episodic "flu-like" symptoms. The symptoms-lower backache accompanied by fever, chills, and malaise, and a dry, nonproductive cough-occurred only on work days and usually subsided by the next morning. The spray was used in a stamp-making process, and only the employees making the stamps were affected. All the affected workers ate and smoked in their work area. After smoking was prohibited, no further symptoms occurred. Investigators concluded that workers had experienced polymer-fume fever due to contamination of cigarettes with the fluorocarbon (via the workplace air or direct contact with workers' hands) and subsequent inhalation of decomposition products created by the intense heat of the cigarettes as they smoked.

Tobacco use on the job can also cause occupational traumatic injuries and property loss unrelated to fires or explosions. Worker distraction by tobacco use (e.g., opening, preparing, lighting, extinguishing, or disposing of a tobacco product) or by a tobacco-caused coughing spell can result in traumatic injury or death when that worker is driving a work vehicle or operating other potentially hazardous machinery or equipment. Several studies have shown that smokers are more likely to be injured at work than nonsmokers.
As evident from the above discussion, both the type of tobacco or tobacco-related product used and where it is used can influence whether and how occupational safety and health risks are enhanced, for users and for non-users. For example, tobacco smoking in a workplace will put nonsmoking workers in that workplace at increased risk due to their exposure to SHS. In contrast, in workplaces free of other occupational chemical or physical hazards, use of smokeless tobacco would not be expected to result in any increased occupational risk for users or their coworkers. ENDS use in the workplace could, like use of any tobacco product, plausibly enhance a user's exposure to hazardous toxicants present in the workplace, may serve as a potential ignition source in workplaces where explosive atmospheres are present, and can result in secondhand exposure of coworkers (see Electronic Nicotine Delivery Systems, above).
Workplace tobacco policies and interventions can include the following:

- Smoke-free or tobacco-free indoor or campus-wide prohibitions.
- Other restrictions on tobacco use by employees.
- Removal of tobacco vending machines and prohibition of other onsite tobacco sales in workplaces.
- Provision of tobacco cessation programs.
- Employer-provided health insurance benefits designed to increase access and remove barriers to evidence-based cessation treatments and to provide incentives to quit tobacco use.
- Design of hiring policies based on smoking status.
Many preventive policies relating to smoking and the workplace are governed by local, state, or federal government laws and/or regulations. Others are independently implemented by employers as workplace requirements or conditions of employment. Employees and/or labor organizations can share in a sense of joint ownership if they meaningfully collaborate with the employer on policy language, approaches and timing, cessation supports, and compliance and consequence issues. Involving employees in the development, implementation, and evaluation of workplace programs is an effective strategy for changing employee culture and behavior .
Workplace tobacco policies are underpinned by several motivating interests (Boxes 4-1 and 4-2). First and foremost is an interest in protecting tobacco users' health, given that tobacco use causes the top five health conditions that impact the U.S. population. Protecting the health of nonsmoking workers is another important motivating interest.

# Box 4-1. Some Reasons for Employers to Implement Workplace Tobacco Interventions
- Reducing occupational disease and injuries (and workers' compensation insurance costs).
- Lowering health insurance and life insurance costs and claims.
- Decreasing costs of training workers to replace those who become disabled or die prematurely.
- Increasing productivity through reduced absenteeism and reduced presenteeism.
- Reducing accidents and fires (and related insurance costs).
- Lessening property damage (and related insurance costs).
- Eliminating indoor air pollution (and related cleaning, maintenance, and ventilation costs).
- Limiting liability and legal costs for failing to provide a safe and healthful working environment.
- Enhancing worker morale and corporate image by showing concern for employees/customers.

# To improve one's own health

- Reduce risk for lung, mouth, throat, and other types of cancer. For example, lung cancer risk drops by as much as 50% 10 years after quitting, and risks for cancers of the mouth and throat and bladder drop by 50% 5 years after quitting.
- Diminish risk for coronary heart disease, stroke, and peripheral vascular disease. For example, heart disease risk drops by as much as 50% 1 year after quitting. Stroke risk attributable to tobacco use may be eliminated 5 years after quitting.
- Ease symptoms such as coughing, wheezing, and shortness of breath within months of quitting, and reduce long-term risk for chronic obstructive pulmonary disease (COPD) and other respiratory diseases.
- Reduce risk of ulcer.
- Reduce risks of infertility (for women who stop smoking during their reproductive years).
# To protect the health of others

- Avoid exposing family, friends, coworkers, and others to the harmful effects of secondhand smoke (SHS).
- Lessen the risk of having a low-birth-weight baby (for women who stop smoking before becoming pregnant or during the first trimester of pregnancy).
- Increase the likelihood that one's young children will not use tobacco when they reach adolescence and adulthood.
# To improve personal/family finances

- Save money otherwise spent on tobacco and on other tobacco-related expenditures (e.g., higher insurance premiums).
- Reduce the risk of financial devastation resulting from income loss due to smoking-related disability or premature death, or from property loss due to a smoking-related home fire.
# To avoid personal inconvenience
- Avoid the need to go outside, sometimes in the rain and cold, when working in a tobacco-free workplace.
Protecting the health of nonsmoking workers is another important motivating interest. Although the health and safety consequences of tobacco use offer sufficient rationale for workplace tobacco policies, legal and economic considerations are also important. Government (i.e., taxpayers), employers, and employees all bear financial costs associated with adverse effects of tobacco use by workers and occupational exposure to SHS. Employees may bring claims under state workers' compensation laws for illness or injury attributable to SHS exposure in the workplace, under federal or state disability law for failure to provide reasonable accommodations to alleviate an employee's exposure to SHS, and under the common law for failure to provide employees with a reasonably safe work environment free of SHS. Adopting an effective smoke-free (or tobacco-free) workplace policy would protect an employer from such liability and provide employees with a safe workplace.
With respect to personal costs paid by individual smokers, there are obvious direct costs associated with consumer purchases of tobacco products and related materials. However, many smokers, especially those with the least discretionary income, are unaware of the longer-term financial costs. One financial writer estimated in 2007 that a typical pack-a-day smoker, spending nearly $2,000 annually just to purchase cigarettes, could instead amass more than $1 million by investing that amount each year from ages 18 to 65 in an individual retirement account invested with an emphasis on growth. That estimate did not encompass costs of smoking other than the purchase price of tobacco. Smokers may be charged higher premiums for health and life insurance and generally pay more out-of-pocket costs for health care. Families can experience substantial loss of income when their smoking breadwinner becomes disabled or dies prematurely from a smoking-related disease. Financial devastation can also result from smoking-caused residential fires, through costly personal injury to the smoker and/or family members and through loss of residence and other personal property. In addition, smokers and their families may incur additional costs for more frequent cleaning, repair, or replacement of clothing and other personal furnishings to remove smoke odors and tobacco-related stains.
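The arithmetic behind that estimate is ordinary compound growth. The following is a minimal sketch, assuming $2,000 invested at the end of each year for the 47 years from age 18 to age 65; the 8% average annual return is an illustrative assumption, since the writer's assumed return is not stated here.

```python
# Future value of investing an annual cigarette budget instead of smoking.
# Assumptions (illustrative only): $2,000 contributed at the end of each
# year from age 18 through age 65, earning an 8% average annual return.

def future_value(annual_contribution: float, rate: float, years: int) -> float:
    """Future value of an ordinary annuity: P * ((1 + r)^n - 1) / r."""
    return annual_contribution * ((1 + rate) ** years - 1) / rate

if __name__ == "__main__":
    balance = future_value(annual_contribution=2_000, rate=0.08, years=65 - 18)
    print(f"Approximate balance at age 65: ${balance:,.0f}")
```

At the assumed 8% return this yields roughly $0.9 million; an assumed return closer to 9% pushes the total past $1 million, which is the order of magnitude of the writer's estimate.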
With respect to employers' costs, a recent study estimated the excess annual cost to U.S.-based private employers associated with employees who smoke cigarettes compared with those who do not. Considering aggregate cost and productivity impacts associated with smoking breaks, absenteeism, presenteeism, health-care expenses, and pension benefits, the study estimated that the annual cost to employ a smoker was, on average, $5,816 greater than the cost to employ a nonsmoker. Interventions that help smoking workers quit can benefit a business's bottom line.
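To make the structure of such an estimate concrete, the sketch below sums per-employee annual cost components of the kind the study considered. The dollar values are illustrative placeholders rather than the study's published figures; the point is that the net excess cost is a sum of several categories, one of which (a pension offset reflecting smokers' shorter average lifespans) is negative.

```python
# Net excess annual cost of employing a smoker versus a nonsmoker,
# modeled as a sum of per-employee cost components.
# All dollar values are illustrative placeholders, not the study's figures.

cost_components = {
    "unsanctioned smoking breaks": 3_000,  # lost productive time
    "excess absenteeism": 500,             # additional sick days
    "presenteeism": 500,                   # reduced on-the-job productivity
    "excess health-care costs": 2_100,     # higher employer-paid claims
    "pension offset": -300,                # negative: premature mortality
}

excess_cost = sum(cost_components.values())
print(f"Illustrative net excess cost per smoking employee: ${excess_cost:,}")
# Prints $5,800, the same order as the study's $5,816 estimate.
```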
# Workplace Policies Prohibiting or Restricting Smoking
For safety reasons, smoking has long been prohibited in particular work settings where explosive or extremely flammable materials are present (see Appendix Tables A-2 and A-3). A century ago, such prohibitions may have been motivated more out of concern about property loss than concern for the well-being of workers. Subsequently, concern about worker health has motivated additional policies prohibiting the use of tobacco products in specific work sites where exposure to certain hazardous occupational agents can be increased as a result of tobacco use (see Appendix Tables A-1 and A-3). The need for such venue-specific prohibitions on tobacco use has been widely understood and accepted; however, compliance with these prohibitions has been imperfect, indicating a need for ongoing training and vigilance.
In the last decades of the past century, as the public became more aware of the hazards of exposure to SHS, government (at the local, state, and federal levels) acted with the intent to reduce workplace exposures to SHS and subsequently to eliminate SHS from workplaces. The Surgeon General has concluded that there is no risk-free level of exposure to SHS.
Other measures, such as separating smokers from nonsmokers, cleaning the workplace air, and ventilating buildings, cannot eliminate exposures of nonsmokers to SHS. Thus, ventilation is not an acceptable alternative to making workplaces completely tobacco smoke-free.
Federal actions have been implemented to eliminate SHS from some workplaces. With active support from flight attendants and their union, who sought to protect their health by eliminating SHS from their workplace, federal law has progressively prohibited smoking during commercial passenger flights, beginning in 1988 with shorter domestic flights and encompassing, by 2000, all flights originating and/or terminating in the United States.
In 1990, the Interstate Commerce Commission acted to ban smoking on interstate buses. A 1997 Presidential Executive Order prohibited tobacco smoking in all interior space owned, rented, or leased by the executive branch of the federal government, with limited exceptions (e.g., specially equipped designated smoking areas, certain residential settings, and space occupied by non-federal parties). In 2009, this policy was extended by action of the General Services Administration (GSA), eliminating the remaining indoor designated smoking areas and additionally prohibiting smoking within 25 feet of doorways and air intakes and in courtyards where those outdoor spaces are under GSA jurisdiction. OSHA proposed a rule that would have more universally restricted smoking in the workplace, but it later withdrew the proposed rule, noting that workplace regulation of SHS was being advanced by private employers and by state and local governments.
The first comprehensive local and state laws restricting smoking in private workplaces, restaurants, and bars went into effect in 1993 (Shasta County, California) and 2002 (Delaware), respectively.
There is no fundamental legal impediment to adoption of smoke-free workplace policies by private employers, and the private sector has taken independent action to eliminate exposure to SHS in the workplace. In the early 1990s, new standards established by the Joint Commission on Accreditation of Hospitals spurred industry-wide adoption of smoke-free workplace policies by accredited hospitals, achieving a high level of compliance within just 2 years. In addition to its intended effect on exposure to SHS, this policy has been associated with additional beneficial impacts on workplace safety and property loss (see Box 4-3). Many other businesses also voluntarily implemented smoke-free policies in their workplaces and, by the late 1990s, nearly 70% of U.S. workers employed in non-residential indoor worksites were working in smoke-free workplaces.
A number of studies, including meta-analyses, have shown that smoke-free workplace policies decrease exposure of nonsmoking employees to SHS at work and increase cessation among employees who smoke. Although one review of the literature found inconclusive evidence that smoke-free workplace policies cause smokers to quit altogether, there is strong evidence that such policies are associated with increased quit rates among smoking workers and with a reduction in the amount of smoking among those workers who continue to smoke. In contrast, less restrictive workplace smoking policies are associated with sustained tobacco use among workers. A nationally representative survey found that workers in workplaces without a rule limiting smoking were significantly more likely to be smokers.
There is clear evidence of improved health among workers as a result of policy interventions to make indoor spaces, including workplaces, smoke-free. This is especially true for workers in the hospitality industry (see Box 4-4). Smoke-free policies have been shown to improve indoor air quality, reduce SHS exposure, reduce sensory and respiratory symptoms, and improve lung function among bar workers. Implementation of smoke-free policies also has been shown in ecological studies to be associated with reduced hospitalizations for heart attacks in the general population. Results of similar studies suggest that such policies may also reduce hospitalizations and emergency department visits for asthma.

# Box 4-3. Smoke-free Policies and Fires in Health-care Facilities

Coinciding with comprehensive smoke-free workplace policies being enacted across the U.S. health-care industry, the number of smoking-ignited structure fires involving health-care facilities dropped from well over 3,000 per year in the early 1980s to only about 100 per year since the late 1990s. Notably, the percentage of all structural fires in health-care facilities determined to have been caused by smoking materials dropped from 30% to 5% over the same period.
# Box 4-4. Prohibiting Smoking in Bars Improves the Health of Bartenders
A state law prohibiting smoking in most California taverns and bars took effect on January 1, 1998. Bartenders were surveyed in the month before the law took effect and again about 1 month afterward. Self-reported exposure to SHS at work fell from a median of 28 hours per week before the law took effect to 2 hours per week afterward. Respiratory symptoms and eye, nose, and throat irritant symptoms were each reported by about 75% of bartenders before the law took effect. Of those with symptoms at baseline, 59% with respiratory symptoms and 78% with irritant symptoms experienced resolution of those symptoms after the law took effect (P < 0.001). On average, lung function measurements also improved. The authors of this study concluded that making taverns and bars smoke-free resulted in a rapid improvement in the health of bartenders.
Smoke-free policies in the hospitality industry have been found to receive high levels of public support and compliance, and they have not had a negative economic impact on that industry.
Acting on strong evidence of the effectiveness of smoke-free policies available by 2001, the Task Force on Community Preventive Services recommended workplace smoking bans and restrictions as effective means for reducing exposure to SHS. More recent evidence has led the Task Force to now recommend smoke-free workplace policies (i.e., total prohibition of smoking in the workplace) not only as a means to reduce exposure to SHS, but also as an effective means to increase tobacco cessation, reduce tobacco use prevalence, and reduce tobacco-related morbidity and mortality. To prevent SHS exposures, the recent Surgeon General's report urges that comprehensive smoke-free indoor protections be extended to the entire U.S. population.
World Health Organization reports have recommended that, in the absence of evidence that can assure authorities that use of ENDS does not expose bystanders to toxic emissions, ENDS should not be exempted from laws that restrict the places in which cigarette smoking is allowed. Similarly concerned about potentially hazardous secondhand exposure, the Federal German Institute for Risk Assessment has recommended prohibiting use of ENDS wherever tobacco smoking is prohibited, and the American Heart Association has recommended including e-cigarettes in smoke-free laws. The Forum of International Respiratory Societies has recommended that ENDS use be prohibited "in public places, workplaces, and on public transportation". The American Industrial Hygiene Association similarly reviewed the issue of ENDS and concluded that "their use in the indoor environment should be restricted, consistent with current smoking bans, until and unless research documents that they will not significantly increase the risk of adverse health effects" to those exposed secondhand. In the United States, the number of states and localities that explicitly prohibit use of e-cigarettes in public places where tobacco smoking is already prohibited is increasing with time, totaling 3 states and more than 200 localities before the end of 2014.
Employers are taking notice, and some are acting to prohibit ENDS use in their workplaces. Given the current unregulated nature of ENDS and the liquid concoctions used in them, along with uncertainty about the impact of workplace ENDS use on the health of non-users, the simplest approach is for employers to add ENDS to the list of products covered by their tobacco-free (or smoke-free) workplace policy.
# Employer Prohibitions on Tobacco Use Extending Beyond the Workplace
Some employers have taken action to extend restrictions on tobacco use by their employees beyond the workplace. For example, in 2013, the U.S. Public Health Service Commissioned Corps became the first federal uniformed service to prohibit tobacco use by its officers whenever and wherever they are in uniform. More controversial are attempts of private employers to control the behavior of their employees outside of the workplace. For example, at a major medical center that had a smoke-free campus policy in place for years, the employer recently announced plans to prohibit smoking by workers during their workday breaks, including lunchtime, even when off campus [Asch et al. 2013].
Controversy surrounds many organizational policies that bar the hiring of smokers or that prohibit tobacco use by employees during the workday when they are away from the worksite, even on their own time. Proponents argue that a nonsmoking workforce serves as a positive role model for health, experiences better health status, incurs substantially lower health-care costs for employers and employees alike, and improves productivity. Opponents point to the addictive nature of tobacco and the fact that tobacco use usually begins in adolescence, note that tobacco use remains legal, and cite the disparate and potentially discriminatory effects such a policy might have on minority, lower-income, or less educated workers and job candidates (groups that tend to have higher levels of tobacco use). They also point out that employers who refuse to hire smokers typically do not similarly refuse to hire individuals with other personal health behaviors that, like tobacco use, have adverse health consequences. They add that more than half of states have laws in place prohibiting employers from refusing to hire individuals because they smoke.
# Workplace Tobacco Use Cessation Programs
Smoking employees who want to quit can benefit from employer-provided resources and assistance. In 2010, roughly 65% of employed smokers in the United States expressed an interest in quitting tobacco, and about half reported having tried to quit in the previous year. Just as policies increasing tobacco taxes at the state and federal levels have led to increased calls to state telephone tobacco cessation quitlines, implementing tobacco-free workplace policies can be expected to increase worker interest in quitting and in cessation support services. When a smoking cessation program is established in a workplace, smokers employed at that workplace are more likely to intend to quit in the next 6 months. Various levels and types of cessation support can be provided.
On a basic level, a health-care provider's inquiry about tobacco use and delivery of brief counseling advice to tobacco users has been shown to increase quit rates, with more intensive intervention having a greater effect. This basic approach can be readily 'piggy-backed' on occupational health services that already exist in many workplaces. For example, all workers enrolled in OSHA-mandated respiratory protection programs must be asked about tobacco use as part of their medical evaluation (see Appendix; Guideline 2008). Similarly, the Community Preventive Services Task Force found that quitlines (especially proactive quitlines, where the counselor initiates follow-up calls) increase tobacco cessation among callers who are interested in quitting and can help expand the use of evidence-based cessation services among smokers in populations that have limited access to cessation treatment. An updated Cochrane review has reaffirmed the effectiveness of proactive quitlines. Their widespread availability, ease of accessibility, affordability, and potential reach to populations with higher levels of tobacco use make quitlines an important component of any cessation effort. Yet many employers do not make their employees aware of them. For example, a 2008 Washington State survey of almost 700 employers with at least 50 employees found that only 6% mentioned the availability of the state quitline in their health promotion messages to workers.
The most comprehensive workplace cessation programs go well beyond minimal cessation counseling and referral to no-cost quitlines and web-based programs. Employers can enter into preferred relationships with state quitlines or contract quitline providers to establish employer-specific quitlines with special services. Web-based intervention to assist with tobacco cessation is a less studied but promising approach. One report described success achieved by a major corporation with a program offering employees web-based cessation intervention. Individualized counseling and support can often be provided by an existing employee assistance program. A systematic review of the literature found that workplace-based smoking cessation services such as individual and group counseling, pharmacological treatment, and social support are all effective in enhancing quit rates when compared with no or minimal interventions. Optimal work-based tobacco cessation programs are designed to provide follow-up assistance and to support multiple quit attempts, because most smokers try to quit repeatedly before finally succeeding.
Ideally, employers should incorporate tobacco cessation support programs into a more comprehensive approach that addresses the overall safety, health, and well-being of workers. A growing evidence base supports the enhanced effectiveness of workplace programs that integrate health promotion efforts such as smoking cessation with more specific occupational health protection programs. Such integrated workplace tobacco cessation programs may be most usefully implemented among blue-collar workers, who generally have higher smoking (and lower quitting) rates than office workers and who generally face higher risks from industrial hazards.
A large randomized study involving 15 manufacturing sites showed that combining occupational health and safety messages with health promotion messages resulted in a doubling of smoking quit rates among hourly workers (from 5.9% to 11.8%; P = 0.04) compared with health promotion messages alone. Another demonstration study of an integrated program aimed at enhancing smoking cessation among blue-collar workers targeted participants in a union apprenticeship program (see Box 4-5).
# Health Insurance and Smoking Behavior
Another recent phenomenon is the increasing use of health insurance to encourage employees to adopt positive personal health-related behaviors (e.g., smoking cessation) through modification of the design of benefits and of out-of-pocket costs for covered individuals. For example, it is known that quit rates are higher when health insurance covers the costs of evidence-based cessation treatments.

# Box 4-5. An Integrated Tobacco Cessation Program for Apprentice Ironworkers

Apprentice ironworkers at a local union in Boston were studied before and after a 4-month smoking cessation demonstration program. With input from union leaders and members, the program was carried out in a local union hall, where posters promoting cessation and featuring photographs of ironworkers were displayed. Articles explaining and promoting the program were published in the union newsletter. Occupational health protection aspects of the program were featured in an educational module on "toxics and tobacco." This module was taught by an industrial hygienist and covered the separate and combined adverse health effects, including cancer, caused by smoking and by workplace hazards (e.g., asbestos, welding fumes, and diesel exhaust) commonly encountered by ironworkers. Tobacco treatment specialists led weekly group sessions on tobacco cessation. Incentives to participate in the sessions included free lunches and, for those attending all sessions, a chance at a raffle prize. Self-help quit kits were provided to apprentices who chose not to attend the group sessions. Nicotine replacement therapy was available at no cost to participants. Of 337 participants, 139 (41.2%) were current smokers at the time the program started. One month after the program concluded, 27 (19.4%) of those smokers had quit, a rate much higher than the expected ~5% quit rate. Program participants were 3 times more likely to quit than nonparticipants.
Tobacco cessation intervention is among the preventive services to which the U.S. Preventive Services Task Force assigned an "A" grade, and the Affordable Care Act (ACA) requires most health plans to cover such services. Federal guidance has clarified what health plans must cover, at a minimum, to be considered in compliance with this requirement. Insurers and employers who sponsor health insurance coverage for their employees will have expanded opportunities to design incentives for wellness programs, including interventions intended to enhance tobacco cessation (or, with some limitations, disincentives for continued tobacco use).
For example, in order to motivate employees to quit smoking, the ACA allows employer-sponsored health insurance issuers to charge tobacco users premiums that are up to 50% higher than the premiums charged to non-tobacco users. States have authority to limit the magnitude of such surcharges, and a number of states have done so.
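As a minimal illustration of how that ceiling works (the dollar figures are hypothetical):

```python
# ACA default ceiling on the tobacco surcharge: tobacco users may be
# charged up to 1.5x the non-tobacco-user premium (states may set a
# lower ceiling). Dollar figures below are hypothetical.

base_monthly_premium = 400.00      # premium charged a non-tobacco user
max_surcharge_factor = 1.50        # up-to-50%-higher ceiling

max_tobacco_user_premium = base_monthly_premium * max_surcharge_factor
print(f"Maximum tobacco-user premium: ${max_tobacco_user_premium:,.2f}/month")
# Prints $600.00/month, i.e., up to $2,400 more per year than a nonsmoker pays.
```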
# Using Incentives and Disincentives to Modify Tobacco Use Behavior
Increasingly, governmental and employer actions are removing barriers and offering incentives for employee quit attempts and success in quitting tobacco use. Likewise, such actions are increasingly discouraging tobacco use by workers covered by employer-sponsored health insurance programs (e.g., through increased premiums for smokers). For example, more than one-third of surveyed large employers who offer their employees smoking cessation programs incentivize participation in these programs. The number of large employers planning to reward or penalize smokers based on their smoking status is increasing: more than half of companies planned to do so by the end of 2013, up from less than 25% of employers who did so in 2010.
A clear barrier that reduces use of evidence-based cessation treatments is out-of-pocket costs for cessation counseling and FDA-approved cessation medications. Because of strong evidence that the number of tobacco users who quit can be increased by reducing these out-of-pocket costs, the Community Preventive Services Task Force recommends reducing tobacco users' out-of-pocket costs for cessation treatments.
The Task Force had earlier examined the issue of providing incentives for tobacco cessation, finding insufficient evidence at that time that workplace-based incentives or competitions by themselves reduced tobacco use among employees. Even then, the Task Force went on to recommend worksite-based incentives and competitions when they are combined with other evidence-based interventions (e.g., education, group support, telephonic counseling, self-help materials, smoke-free workplace policies) as part of a comprehensive cessation program.
A subsequent systematic review of the literature identified a single well-designed study in which financial incentives integrated into a smoking cessation program produced a substantial and sustained beneficial impact.
Incentive payments for that randomized trial were structured as $100 for completion of the smoking-cessation program; $250 for abstinence (confirmed biochemically) during the first 6 months after study enrollment; and $400 for abstinence (also confirmed biochemically) during the subsequent 6 months. Smokers offered the financial incentives were three times as likely to enroll in the program (15.4% vs. 5.4%, P < 0.001), four times as likely to complete the program (10.8% vs. 2.5%, P < 0.001), and three times as likely to remain abstinent more than a year later (14.7% vs. 5.0%, P < 0.001). Notably, this study did not involve establishing a new smoking cessation program; rather, all participants were informed about existing smoking-cessation resources available in their community and about employer-provided health benefits related to smoking cessation.
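The "three times" and "four times" multiples are simply ratios of the reported percentages, as the short check below shows (the percentages are taken from the trial as reported above):

```python
# Rate ratios between the incentive and control arms of the randomized
# trial described above, computed from the reported percentages.

outcomes = {
    "enrolled in program": (15.4, 5.4),
    "completed program": (10.8, 2.5),
    "abstinent more than a year later": (14.7, 5.0),
}

for outcome, (incentive_pct, control_pct) in outcomes.items():
    ratio = incentive_pct / control_pct
    print(f"{outcome}: {incentive_pct}% vs. {control_pct}% -> {ratio:.1f}x")
# Prints roughly 2.9x, 4.3x, and 2.9x, matching the rounded multiples above.
```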
A recent review explored ethical and legal issues relating to employer-provided incentives intended to change individual health behaviors, including tobacco use. The authors identified a number of specific issues that call for scrutiny, including the need to ensure that incentive programs are designed to work as intended and the potential for incentives to be used in an unduly coercive or discriminatory manner. They emphasized that employers should play a collaborative, supportive role in advancing the health of workers, and they further suggested that, in order to limit the potential for discrimination, programs should be designed to minimize differences in individual employees' abilities to access incentives. It should be recognized that, while imposing insurance premium surcharges or other disincentives on smokers has the potential to motivate them to quit smoking, the evidence that such measures are effective in doing so is quite limited, and care is needed to avoid unintended consequences. For example, these practices could lead smokers to conceal their smoking (and thereby not benefit from cessation assistance), or even to forgo health insurance coverage or quit their jobs.
The appropriate intent of incentives is to improve health and reduce health-care costs overall, and not merely to shift health-care costs to high-risk individuals [Madison et al. 2011].
In summary, workplace policies are powerful tools that can benefit worker health. Well-designed policies protect workers from occupational risks, provide workplace-associated opportunities for enhancing worker health, and motivate workers to take beneficial actions to protect their well-being.
Although not a primary focus of this CIB, workplace policies that effectively sustain or improve worker health can also be cost-effective and benefit the employer's bottom line.
# Conclusions
- Cigarette smoking by workers and SHS exposure in the workplace have both declined substantially over recent decades, but about 20% of all U.S. workers still smoke and about 20% of nonsmoking workers are still exposed to SHS at work.
- Smoking prevalence among workers varies widely by industry and occupation, approaching or exceeding 30% among workers in construction, mining, and accommodation and food services.
- Prevalence of ENDS use by occupation and industry has not been studied, but ENDS use has grown greatly, with about 1 in 3 current U.S. adult smokers reporting ever having used e-cigarettes by 2013.
- Smokeless tobacco is used by about 3% of U.S. workers overall, but by more than 10% of workers in construction and extraction jobs and by nearly 20% of workers in the mining industry, which can be expected to result in disparities in tobacco-related morbidity and mortality.
- Tobacco use causes debilitating and fatal diseases, including cancer, respiratory diseases, and cardiovascular diseases. These diseases afflict mainly users, but they also occur in those exposed to SHS. Smoking is substantially more hazardous, but use of smokeless tobacco also causes adverse health effects. More than 16 million U.S. adults live with a disease caused by smoking, and each year nearly a half million die prematurely from smoking or exposure to SHS.
- Tobacco use is associated with increased risk of injury and property loss due to fire, explosion, and vehicular collisions.
- Tobacco use by workers can increase, sometimes profoundly, the likelihood and the severity of occupational disease and injury caused by other workplace hazards (e.g., lead, asbestos, and flammable materials).
- Restrictions on smoking and tobacco use in specific work areas where particular high-risk occupational hazards are present (e.g., explosives, highly flammable materials, or highly toxic materials that could be ingested via tobacco use) have long been used to protect workers.
- A risk-free level of exposure to SHS has not been established, and ventilation is insufficient to eliminate indoor exposure to SHS.
- The risk of adverse health effects associated with using ENDS or secondhand exposure to particulate aerosols and gases emitted from ENDS remains to be fully characterized.
- Policies that prohibit tobacco smoking throughout the workplace (i.e., smoke-free workplace policies) are now widely implemented, but they have not yet been universally adopted across the United States. These policies improve workplace air quality, reduce SHS exposure and related health effects among nonsmoking employees, increase the likelihood that workers who smoke will quit, decrease the amount of smoking during the working day by employees who continue to smoke, and have an overall impact of improving the health of workers (i.e., among both nonsmokers who are no longer exposed to SHS on the job and smokers who quit).
- Workplace-based efforts to help workers quit tobacco use can be easily integrated into existing occupational health and wellness programs. Even minimal counseling and/or simple referral to state quitlines, mobile phone texting interventions, and web-based interventions can be effective, and more comprehensive programs are even more effective.
- Integrating both occupational safety and health protection components into workplace health promotion programs (e.g., smoking cessation) can increase participation in tobacco cessation programs and successful cessation among blue-collar workers.
- Smokers, on average, are substantially more costly to employ than nonsmokers.
- Some employers have policies that prohibit employees from using tobacco when away from work or that bar the hiring of smokers or tobacco users. However, the ethics of these policies remain under debate, and they may be legally prohibited in some jurisdictions.
# Recommendations
The following recommendations relate specifically to the issues raised in this CIB. NIOSH recommends that employers take the following actions related to employee tobacco use:
- At a minimum, establish and maintain smoke-free workplaces that protect those in workplaces from involuntary, secondhand exposure to tobacco smoke and airborne emissions from e-cigarettes and other electronic nicotine delivery systems. Ideally, smoke-free workplaces should be established in concert with tobacco cessation support programs. Smoke-free zones should encompass (1) all indoor areas without exceptions (i.e., no indoor smoking areas of any kind, even if separately enclosed and/or ventilated), (2) all areas immediately outside building entrances and air intakes, and (3) all work vehicles. Additionally, ashtrays should be removed from these areas.
- Optimally, establish and maintain entirely tobacco-free workplaces, allowing no use of any tobacco products across the entire workplace campus (see model policy in Box 6-1).
- Comply with current OSHA and MSHA regulations that prohibit or limit smoking, smoking materials, and/or use of other tobacco products in work areas characterized by the presence of explosive or highly flammable materials or potential exposure to toxic materials (see Table A-3 in the Appendix). To the extent feasible, follow all similar NIOSH recommendations (see Table A-2 in the Appendix).
- Provide information on tobacco-related health risks and on benefits of quitting to all employees and other workers at the worksite (e.g., contractors and volunteers).
  - Inform all workers about the health risks of tobacco use.
  - Inform all workers about the health risks of exposure to SHS.
  - Train workers who are exposed or potentially exposed to occupational hazards at work about the increased health and/or injury risks of combining tobacco use with exposure to workplace hazards, about what the employer is doing to limit those risks, and about what the worker can do to limit his/her risks.
- Provide information on employer-provided and publicly available tobacco cessation services to all employees and other workers at the worksite (e.g., contractors and volunteers).
  - At a minimum, include information on available quitlines, mobile phone texting interventions, web-based cessation programs, self-help materials, and employer-provided cessation programs and tobacco-related health insurance benefits available to the worker.
  - Ask about personal tobacco use as part of all occupational health and wellness program interactions with individual workers, and promptly provide encouragement to quit and guidance on tobacco cessation to each worker identified as a tobacco user and to any other worker who requests tobacco cessation guidance.
- Offer and promote comprehensive tobacco cessation support to all tobacco-using workers and, where feasible, to their dependents.
  - Provide employer-sponsored cessation programs at no cost, or subsidize cessation programs for lower-wage workers to enhance the likelihood of their participation. If health insurance is provided for employees, the health plan should provide comprehensive cessation coverage, including all evidence-based cessation treatments, unimpeded by copays and other financial or administrative barriers.
  - Include occupational health protection content specific to the individual workplace in employer-sponsored tobacco cessation programs offered to workers with jobs involving potential exposure to other occupational hazards.
- Become familiar with available guidance (e.g., CDC's "Implementing a Tobacco-Free Campus Initiative in Your Workplace"; see Box 6-2) and federal guidance on tobacco cessation insurance coverage under the ACA before developing, implementing, or modifying tobacco-related policies, interventions, or controls.
- Develop, implement, and modify tobaccorelated policies, interventions, and controls in a stepwise and participatory manner. Get input from employees, labor representatives, line management, occupational safety/health and wellness staff, and human resources professionals. Those providing input should include current and former tobacco users, as well as those who have never used tobacco.
Seek voluntary input from employees with health conditions, such as heart disease and asthma, exacerbated by exposure to SHS.
- Make sure that any differential employment benefits policies that are based on tobacco use or participation in tobacco cessation programs are designed with a primary intent to improve worker health and comply with all applicable federal, state, and local laws and regulations. Even when permissible by law, these differential employment benefit policies-as well as differential hiring policies based on tobacco use-should be implemented only after seriously considering ethical concerns and possible unintended consequences. These consequences can include the potential for adverse impacts on individual employees (e.g., coercion, discrimination, and breach of privacy) and the workforce as a whole. Furthermore, the impact of any differential policies that are introduced should be monitored to determine whether they improve health and/or have unintended consequences.
NIOSH recommends that workers who smoke cigarettes or use other tobacco products take the following actions:
- Comply with all workplace tobacco policies.
- Ask about available employer-provided tobacco cessation programs and cessation-related health insurance benefits.
- Quit using tobacco products. Know that quitting tobacco use is beneficial at any age, but the earlier one quits, the greater the benefits. Many people find various types of assistance to be very helpful in quitting, and evidence-based cessation treatments have been found to increase smokers' chances of quitting successfully. Workers can get help from
  - tobacco cessation programs, including web-based programs (e.g., smokefree.gov) and mobile phone texting services (e.g., the SmokefreeTXT program at smokefree.gov/smokefreetxt);
  - state quitlines (phone: 1-800-QUIT-NOW or 1-855-DEJELO-YA); and/or
  - health-care providers.
In addition, individual workers who want to quit tobacco may find several of the websites listed in Box 6-2 helpful.
NIOSH recommends that all workers, including workers who use tobacco and nonsmokers exposed to SHS at their workplace:
- Know the occupational safety and health risks associated with their work, including those that can be made worse by personal tobacco use, and how to limit those risks; and
- Consider sharing a copy of this CIB with their employer. To receive NIOSH documents or more information about occupational safety and health topics, contact NIOSH at
for their assistance in initiating and completing this project.

coalitions and state health departments to CDC's National Center for Injury Prevention and Control (Injury Center). As a result of the new rape prevention and education monies, meetings were held with representatives from CDC, researchers, victim advocates, and other federal leaders to help states organize and refine their plans for using the appropriated monies. In addition, the Department of Health and Human Services convened meetings around the country to discuss issues about sexual violence definitions, lack of adequate data, prevention programs, services and support for sexual violence survivors, evaluation of existing systems and programs, and communication. CDC also convened regional meetings to help build better working relationships between the state coalitions and state health department staff. Each state, with its sexual violence prevention partners, was encouraged to develop a plan addressing the Preventive Health and Health Services Block Grant data collection requirements. The Department of Justice, Office for Victims of Crime worked to standardize the data collected for the Victims of Crime Act, Violence against Women Act, and the Preventive Health and Health Services Block Grant. CDC Injury Center staff worked with sexual violence victim advocates and representatives from state health departments to develop standardized definitions that could be used for sexual violence. The process of panel and external review used to develop intimate partner violence definitions and data elements, published in 1999, was emulated in the development of the current sexual violence definitions document.

Public health surveillance is the ongoing, systematic collection, analysis, and interpretation of outcome-specific data for use in planning, implementing, and evaluating public health practice (Thacker and Berkelman 1988). Surveillance of sexual violence is important to track this problem over time and to guide prevention and intervention (Saltzman, Green, Marks and Thacker 2000). However, no nationally established mechanism exists for routine identification, recording, and monitoring of sexual violence.

Confidentiality and safety are of paramount importance in the study of sexual violence, as is true for all research on violence. No data should be collected or stored that would in any way jeopardize a victim's safety. In developing a surveillance system for sexual violence, it is important to maintain confidentiality of respondents and ensure the safety of victims. The issue of confidentiality must be balanced with the need for data linkage across multiple data sets. This could be accomplished with the use of unique identifiers. Unique identifiers are important to link information from separate data sources (e.g., rape crisis centers, law enforcement agencies, or hospitals). Unique identifiers are also needed to reduce duplication of information collected and to identify repeat visits by the same person. This document does not suggest the use of names or social security numbers as unique identifiers. Unique identifiers can be created by using different data components such that the information can be linked to a particular person but cannot be traced back to that person without their explicit involvement or assent.
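To make that idea concrete, the sketch below shows one common way to derive such a linkable identifier: a keyed hash over a fixed set of data components. The component fields, the secret key handling, and the coding choices are illustrative assumptions rather than a prescribed standard; a production system would need formal key management and privacy review.

```python
import hashlib
import hmac

# Illustrative only: derive a linkage identifier from selected data
# components using a keyed hash (HMAC-SHA-256). Records from different
# agencies that share the same components (and key) produce the same
# identifier, allowing linkage and de-duplication, while the identifier
# cannot be reversed to reveal the underlying components without the key.

SECRET_KEY = b"replace-with-a-securely-managed-key"  # hypothetical

def linkage_id(year_of_birth: str, sex: str, county: str) -> str:
    # Normalize components so trivial formatting differences
    # (case, stray whitespace) do not break linkage.
    parts = (year_of_birth, sex, county)
    normalized = "|".join(part.strip().lower() for part in parts)
    return hmac.new(SECRET_KEY, normalized.encode("utf-8"), hashlib.sha256).hexdigest()

# Two records about the same person from different agencies link together:
print(linkage_id("1970", "F", "Fulton") == linkage_id("1970", " f ", "FULTON"))  # True
```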
# INTRODUCTION

# The Problem of Sexual Violence
Sexual violence is a profound social and public health problem in the United States. As will be covered in more detail below, sexual violence includes completed or attempted penetration of the genital opening or anus by the penis, a hand, a finger, or any other object, or penetration of the mouth by the penis or other object. Sexual violence also includes non-penetrative abusive sexual contact (e.g., intentional touching of the groin), as well as non-contact sexual abuse (e.g., voyeurism, exposure to pornography). Sexual violence occurs when the victim does not consent to the sexual activity, or when the victim is unable to consent (e.g., due to age, illness) or refuse (e.g., due to physical violence or threats). According to the National Violence Against Women Survey (NVAWS), 1 in 6 women and 1 in 33 men have experienced an attempted or completed rape, defined as forced vaginal, oral, or anal penetration, in their lifetime (Tjaden and Thoennes 2000). These numbers exclude abusive sexual contact and non-contact sexual violence. Furthermore, they do not take into account the potential for significant underreporting of this crime due to its sensitive nature. Therefore, many researchers and practitioners in this field believe that existing national statistics underestimate the number of victims of sexual violence.
Just as sexual violence is not limited to physically forced penetration, its perpetrators are not limited to strangers. Indeed, perpetrators of sexual violence are more likely to be someone known to the victim (Koss, Gidycz and Wisniewski 1987; Estrich 1987; Koss 1992; Wiehe and Richards 1995; Randall and Haskell 1995; Tjaden and Thoennes 2000; Mahoney and Williams 1998). Sexual violence is a problem embedded in our society and includes any contact and/or non-contact sexual abuse perpetrated by persons well known to the victim (e.g., partners or spouses), not as well known (e.g., acquaintances), or unknown to the victim (e.g., strangers). The term "sexual violence" is used here to represent many behaviors that may otherwise fall under the rubrics of sexual abuse, sexual assault, and other sexual violations (such as sexual harassment and voyeurism). Although many who work in the field of sexual violence use the word "survivor" to describe the person on whom the sexual violence is inflicted, the word "victim" is used in this document in an effort to be consistent with the agencies from which most traditional surveillance information is gathered. For the purposes of survey surveillance, the word "survivor" may be substituted for "victim," as long as "survivor" is defined in the same way "victim" is defined in this document.
# The Need for Consistent Definitions and Data Elements
Currently there is a lack of consensus regarding the definition of sexual violence and which of its various components (e.g., rape, fondling, contact and non-contact sexual abuse) should be included as part of the term. A consistent definition is needed to monitor the incidence of and trends for sexual violence, to determine the scope of the problem, and to compare the problem across jurisdictions. A consistent definition of sexual violence also would help to measure and identify risk and protective factors for victimization and perpetration in a uniform way, which would inform prevention and intervention efforts.
In 1999, CDC published Version 1.0 of Intimate Partner Violence Surveillance: Uniform Definitions and Recommended Data Elements (Saltzman, Fanslow, McMahon and Shelley 1999). That document was intended to guide data collection for public health surveillance of intimate partner violence. It included sexual violence, but only that perpetrated by intimate partners. The present document uses a similar process and format to focus on sexual violence committed by all perpetrators. The information in this document is intended for surveillance of sexual violence against both adult and child victims.
# External Panel
Ten professionals with backgrounds in sexual violence prevention or surveillance and from various settings, including universities, state health departments, hospitals, sexual assault coalitions, and federal agencies, served as panelists for this project. Members of the panel met three times over 18 months (in October 1999, September 2000, and February 2001). In addition, an external group of 15 professionals working in the field of sexual violence was asked to review and comment on drafts of the documents. The goal of this process was to identify uniform definitions and data elements for sexual violence that could be implemented and pilot tested.
"Traditional" Versus "Survey" Surveillance Traditionally, surveillance involves systematic, ongoing collection, analysis, and interpretation of data already available from sources such as emergency departments, police departments, or rape crisis centers. These data have typically been collected for other agency-related purposes, but can later be extracted and used for surveillance purposes. In addition to allowing examination of trends over time, these data are typically inexpensive to collect given that the data already exist. Surveys, on the other hand, involve systematic data collection from a representative sample of the population of interest for analysis and interpretation. Survey data are collected directly from individuals affected by the condition under surveillance. Surveys allow flexibility in the types of questions that can be asked and the level of detail of information that can be collected, since they do not rely upon information already existing in official agency records. Unlike traditional surveillance, surveys offer the opportunity to gather information from those who have sustained sexual violence and from similar individuals who have not for purposes of comparison. However, they are sometimes more expensive than record reviews.
For traditional surveillance of sexual violence, data collectors are encouraged to gather information from appropriate agencies in their jurisdiction, including, but not limited to, rape crisis centers, hospitals, elder services, and shelters. In gathering this information, data collectors should be aware of two major issues. First, rape crisis centers and other victim service agencies are under strict bounds of confidentiality and, therefore, access to their records may be limited. Many service agencies are overburdened and may not have the available staff either to remove unique identifiers or to gather information from the records themselves. Second, very little information about sexual violence is available from most agency records (with the possible exception of rape crisis centers), because many victims are not known to those agencies and thus do not appear in their records; they are known but not identified as sexual violence victims; or agency personnel may not always gather or document all of the information that might be of interest for sexual violence surveillance. Therefore, to better assess the magnitude of sexual violence, the panel recommended complementing traditional surveillance with survey surveillance. This would allow for more detailed, inclusive data collection of sexual violence presently missing from agency records. This publication establishes a single set of uniform definitions for sexual violence, but it separates the proposed data elements for traditional and survey surveillance. For both types of data elements, we further subdivide the elements into minimum data elements and expanded data elements. The minimum data elements represent the minimum information that should be collected with each type of surveillance. In most traditional surveillance settings and in the context of survey surveillance, these data elements should be relatively easy to collect. The expanded data elements represent additional information that would be beneficial to collect if resources allow. For both traditional and survey surveillance, the expanded set represents elements in addition to what is already listed in the minimum set. Therefore, expanded data elements are inclusive of minimum data elements.
An example of data that would be difficult to obtain using traditional surveillance methods is changes in psychological functioning after sexual violence. For most victims, these changes can be profound and long lasting (DeKeseredy 1995; Kuyken 1995; Resick 1993; Shields and Hanneke 1992). Information about psychological functioning is usually not part of the information asked of victims at agency intake. This is particularly true if victims state the reason for their visit to be something such as pelvic pain or a broken bone, rather than explicitly mentioning the sexual violence. In addition, information revealed in an intake interview, such as psychological state, is often not recorded. That the data element on psychological functioning was not placed in the minimum list for traditional surveillance does not imply that our panel believed that psychological functioning is unimportant. Rather, the panel suggested that psychological functioning would be difficult to capture in existing and future traditional surveillance systems and better captured through survey surveillance.
# Contents, Purpose, and Scope of this Document
This document includes three major sections. For each data element, the following categories are listed:

- Description or Definition of the data element;
- Uses of the data element and Type of Surveillance for which it is recommended;
- Data Type and maximum allowed Field Length;
- Field Values and Coding Instructions that designate recommended coding specifications and valid data entries.

For some data elements, the following additional categories may be listed:
- Discussion of conceptual or operational issues;
- Repetition, an indication of when it is appropriate to include all answers that may apply;
- Data Standards or Guidelines used to define the data element and its field values;
- Other References consulted in developing the data element.

Not all categories are included for all data elements because some categories do not apply to certain elements. Data types and field lengths conform to specifications in Health Level 7 (HL7), a widely used protocol for electronic data exchange (HL7 1996), and ASTM's (formerly known as the American Society for Testing and Materials) E1238-94: Standard Specification for Transferring Clinical Observations Between Independent Computer Systems (ASTM 1994). The Technical Notes at the end of this document provide a detailed description of data types and conventions for addressing missing, unknown, and null data, as well as recommendations for handling data elements that are not applicable to selected groups of individuals.
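As an illustration of how these categories might be carried in an electronic system, the sketch below models a data element definition as a small Python structure. The attribute names and the sample element are hypothetical; actual data types and field lengths are governed by the HL7 and ASTM specifications cited above.

```python
from dataclasses import dataclass, field

# Hypothetical in-memory model of a data element definition. The
# attributes mirror the categories listed above (definition, uses/type
# of surveillance, data type, field length, field values).

@dataclass
class DataElement:
    name: str
    definition: str
    surveillance_type: str  # "traditional", "survey", or "both"
    data_type: str          # e.g., "numeric", "coded element", "date"
    field_length: int       # maximum allowed field length
    field_values: dict[str, str] = field(default_factory=dict)  # code -> meaning

# A hypothetical element, for illustration only:
victim_age = DataElement(
    name="Victim age",
    definition="Age of the victim, in years, at the time of the incident",
    surveillance_type="both",
    data_type="numeric",
    field_length=3,
)
```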
The definitions and data elements in this document are recommendations only. Furthermore, the contents of this document do not represent instruments to be used for either traditional or survey surveillance. Rather, this document contains definitions and data elements that can be used to create instruments for surveillance. When creating instruments, it is important to clearly communicate the definitions of sexual violence for the recorder of traditional surveillance data or to use behaviorally specific items that reflect the definitions of sexual violence in surveys.
The order of the categories of sexual violence as they appear in this document, starting with "completed sex act against the victim's will" and ending with "non-contact sexual abuse," is not intended to suggest a hierarchy of resulting trauma. The panel emphasized that all of the types of sexual violence can have serious negative consequences for victims.
# Next Steps
Although our focus here is on surveillance and measurement issues, the panel identified several other areas for future research: disclosure issues (e.g., who do victims tell about the violence), in-depth study of the psychological consequences of sexual violence, long-term impact of child sexual abuse, and connections between multiple experiences of sexual violence throughout the life course. This document is the first version of Sexual Violence Surveillance: Uniform Definitions and Recommended Data Elements; it is not intended to be the final version of this document. We will pilot test these definitions and data elements in the next few years and revise them as needed.
While this document is similar to Intimate Partner Violence Surveillance: Uniform Definitions and Recommended Data Elements, it differs in some important ways. Its scope is broader: all perpetrators of sexual violence versus intimate partners only. It separates sexual violence into completed acts and attempted acts, whereas the intimate partner violence document combines completed and attempted sex acts in the definition of sexual violence. Also, the current document categorizes a "date" as a friend/acquaintance while the 1999 document classifies a "date" as an intimate partner. Our next step in developing uniform definitions and recommended data elements is to reconcile these differences.
Please send questions or suggestions for improving this document to:
Kathleen C.
# UNIFORM DEFINITIONS FOR SEXUAL VIOLENCE

# Violence and Associated Terms
# Sexual Violence - Overall Definition
Nonconsensual completed or attempted contact between the penis and the vulva or the penis and the anus involving penetration, however slight; nonconsensual contact between the mouth and the penis, vulva, or anus; nonconsensual penetration of the anal or genital opening of another person by a hand, finger, or other object; nonconsensual intentional touching, either directly or through the clothing, of the genitalia, anus, groin, breast, inner thigh, or buttocks; or nonconsensual non-contact acts of a sexual nature such as voyeurism and verbal or behavioral sexual harassment. All the above acts also qualify as sexual violence if they are committed against someone who is unable to consent or refuse.
Sexual violence is divided into five categories:
- A completed sex act (as defined below) without the victim's consent, or involving a victim who is unable to consent or refuse (as defined below).
- An attempted (non-completed) sex act without the victim's consent, or involving a victim who is unable to consent or refuse (as defined below).
- Abusive sexual contact (as defined below).
- Non-contact sexual abuse (as defined below).
- Sexual violence, type unspecified: inadequate information available to categorize into one of the other four categories.
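For electronic capture, these five categories lend themselves to a simple coded field. The sketch below is a hypothetical coding, continuing the illustrative Python model used earlier; the numeric codes are assumptions, and the coding instructions given with each data element govern actual field values.

```python
from enum import Enum

# Hypothetical codes for the five sexual violence categories defined
# above. Code 9 follows the common surveillance convention of using a
# high value for an "unspecified" category; actual codes are set by
# each data element's coding instructions.

class SexualViolenceCategory(Enum):
    COMPLETED_SEX_ACT = 1         # without consent, or victim unable to consent/refuse
    ATTEMPTED_SEX_ACT = 2         # non-completed
    ABUSIVE_SEXUAL_CONTACT = 3
    NON_CONTACT_SEXUAL_ABUSE = 4
    TYPE_UNSPECIFIED = 9          # inadequate information to categorize

print(SexualViolenceCategory(9).name)  # prints "TYPE_UNSPECIFIED"
```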
# Consent
Words or overt actions by a person who is legally or functionally competent to give informed approval, indicating a freely given agreement to have sexual intercourse or sexual contact.
# Inability to Consent
A freely given agreement to have sexual intercourse or sexual contact could not occur because of age, illness, disability, being asleep, or the influence of alcohol or other drugs.
# Inability to Refuse
Disagreement to have sexual intercourse or sexual contact was precluded because of the use of guns or other non-bodily weapons, or due to physical violence, threats of physical violence, real or perceived coercion, intimidation or pressure, or misuse of authority.
# Sex Act (or Sexual Act)
Contact between the penis and the vulva or the penis and the anus involving penetration, however slight; contact between the mouth and the penis, vulva, or anus; or penetration of the anal or genital opening of another person by a hand, finger, or other object.
# Abusive Sexual Contact
Intentional touching, either directly or through the clothing, of the genitalia, anus, groin, breast, inner thigh, or buttocks of any person without his or her consent, or of a person who is unable to consent or refuse.
# Non-Contact Sexual Abuse
Sexual abuse that does not include physical contact of a sexual nature between the perpetrator and the victim. It includes acts such as voyeurism; intentional exposure of an individual to exhibitionism; pornography; verbal or behavioral sexual harassment; threats of sexual violence to accomplish some other end; or taking nude photographs of a sexual nature of another person without his or her consent or knowledge, or of a person who is unable to consent or refuse.
# Incident
A single act or series of acts of sexual violence that are perceived to be connected to one another and that may persist over a period of minutes, hours, or days. One perpetrator or multiple perpetrators may commit an incident.
Examples of an incident include a husband forcing his wife to engage in unwanted sexual acts on a single occasion, a stranger attacking and sexually assaulting a woman after breaking into her apartment, a man kidnapping a female acquaintance and repeatedly assaulting her over a weekend before she is freed, a college student forced to have sex by several men at a fraternity party, a man forcing his boyfriend to have unwanted sex, or a family member touching the genitalia of a 6-year-old child.
# Involved Parties
# Victim
Person on whom the sexual violence is inflicted. Survivor is often used as a synonym for victim.
# Current or Former Legal Spouse
Someone to whom the victim is or was legally married, as well as a separated legal spouse.
# Another Current or Former Intimate Partner
Someone, besides a legal current, former, or separated spouse, with whom the victim has or had an ongoing intimate relationship, such as a common-law spouse, former common-law spouse, separated common-law spouse, cohabiting intimate partner, former cohabiting intimate partner, boyfriend/girlfriend, or former boyfriend/girlfriend (opposite or same sex).
# Another Family Member
Someone sharing a relationship by blood or marriage, or other legal contract or arrangement (i.e., legal adoption, foster parenting). This includes current as well as former family relationships. Therefore, though not an exhaustive list, stepparents, parents, siblings, former in-laws, and adopted family members are included in this category. This category excludes intimate partners.
# Person in Position of Power or Trust
Someone such as a teacher, nanny, caregiver, foster care worker, religious leader, coach, or employer (not an exhaustive list).
# Friend/Acquaintance
Someone who is known to the victim but is not related to the victim by blood or marriage, and is not a current or former spouse, another current or former intimate partner, another family member, or a person in a position of power or trust. Examples are a co-worker, neighbor, date, former date, or roommate (not an exhaustive list).
# Another Non-Stranger
Someone who is known by sight but is not a current or former spouse, another current or former intimate partner, another family member, a person in a position of power or trust, or a friend/acquaintance. Examples include guards, maintenance people, or clerks (not an exhaustive list).
# Stranger
Someone unknown to the victim.
# Terms Associated with the Circumstances and Consequences of Violence
# Illness
An acute or short-term condition of poor health. It includes a physical or a mental condition. Examples of illnesses are pneumonia or depressive episodes.
# Disability
Any chronic or long-term impairment resulting in some restriction or lack of ability to perform an action or activity in the manner or within the range considered normal. It includes a physical or a mental impairment. Examples of disabilities are mental retardation, paralysis, or clinical depression.
# Substance Abuse
Abuse of alcohol or other drugs. This also includes alcohol or other drug dependence.
# Substance Abuse Treatment
Any treatment related to alcohol or other drug use, abuse, or dependence.
# Pregnancy Impact
Pregnancy resulting from sexual violence or loss of an existing pregnancy following sexual violence.
# Physical Injury
Any physical harm, including death, occurring to the body resulting from exposure to thermal, mechanical, electrical, or chemical energy interacting with the body in amounts or rates that exceed the threshold of physiological tolerance, or from the absence of such essentials as oxygen or heat. Examples of physical injuries are vaginal or anal tears attributable to an incident of sexual violence.
# Psychological Functioning
The intellectual, mental health, emotional, behavioral, or social role functioning of the victim. Changes in psychological functioning can be either temporary or intermittent (i.e., persisting for 180 days or less) or chronic (i.e., likely to be of an extended and continuous duration persisting for a period greater than 180 days).
Examples of changes in psychological functioning include increases in or development of anxiety, depression, insomnia, eating disorders, post-traumatic stress disorder, dissociation, inattention, memory impairment, self-medication, self-mutilation, sexual dysfunction, hypersexuality, and attempted or completed suicide.
# Inpatient Medical Health Care
Treatment by a physician or other health care professional related to the physical health of a victim who has been admitted to a hospital.
# Outpatient Medical Health Care
Treatment by a physician or other health care professional related to the physical health of a victim who has not been admitted to a hospital. Includes treatment in an emergency department.
# Physical Evidence Collection
Collection of hairs, fibers or specimens of body fluids from a victim's body or garments that may aid in the identification of the perpetrator.
# Mental Health Care
Individual or group care by credentialed or licensed psychiatrists, psychologists, social workers, or other counselors related to the mental health of the victim. Excludes substance abuse treatment. It may involve treatment when the victim has been admitted to a hospital or when the victim has not been admitted. It includes pastoral counseling if such counseling is specifically related to the mental health of the victim.
# Residential Institution
A location where the victim or perpetrator resides. Includes settings such as a nursing home, a college campus, a retirement home, or a jail/prison (not an exhaustive list).
# Commercial Establishment
A business such as a restaurant, a bar or club, or a gym or athletic facility (not an exhaustive list).
# Law Enforcement
Police, as well as tribal authorities, prison authorities, and campus authorities (not an exhaustive list).
# DATA ELEMENTS FOR TRADITIONAL AND SURVEY SURVEILLANCE OF SEXUAL VIOLENCE
# I. TRADITIONAL SURVEILLANCE
Traditional surveillance across more than one data source requires a unique identifier to link information from separate data sources to each other, to reduce duplication of information collected, and to identify repeat visits by the same person.
# TRADITIONAL SURVEILLANCE -MINIMUM
Refers to the least amount of information that should be collected through traditional surveillance.
# TRADITIONAL SURVEILLANCE -EXPANDED
Refers to information in addition to "traditional surveillance -minimum" that will be available in some cases and beneficial to collect through traditional surveillance, but that may not be practical to collect in all settings.
# SURVEY SURVEILLANCE -MINIMUM
Refers to the least amount of information that should be collected through survey surveillance.
# SURVEY SURVEILLANCE -EXPANDED
Refers to information in addition to "survey surveillance -minimum" that would be beneficial to collect if resources allow.
# IDENTIFYING INFORMATION
# CASE ID 1.01
# Discussion
To protect victim privacy and confidentiality, access to this data element must be limited to authorized personnel. Case ID may be assigned by the agency compiling or collecting sexual violence surveillance data, or, specific to traditional surveillance, it may be an identifier previously assigned by the contributing data source. Case ID may or may not be identical to the unique identifier created to allow linkage across multiple sources.
# Data Type (and Field Length)
CX -extended composite ID with check digit (20). See Technical Notes.
# Field Values/Coding Instructions
Component 1 is the identifier. Component 2 is the check digit. Component 3 is the code indicating the check digit scheme employed. Components 4-6 are not used unless needed for local purposes.
Enter the primary identifier used by the facility to identify the victim in Component 1.
If no identifier exists or the identifier is unknown, enter "" or "unknown" in Component 1, and do not make entries in the remaining components. Components 2 and 3 are for optional use when a check digit scheme is employed.
Example, when M11 refers to the algorithm used to generate the check digit:
Component 1 = 1234567
Component 2 = 6
Component 3 = M11
# Data Standards or Guidelines
Health Level 7, Version 2.3 (HL7 1996).
# DATA SOURCE 1.02
# Description/Definition
Source from which sexual violence surveillance information is abstracted.
# Uses/Type of Surveillance
Traditional surveillance -minimum
# Discussion
No single agency or survey is likely to include all of the data elements recommended. As a consequence, anyone setting up a surveillance system will likely need to combine data from a number of sources (e.g., health care records, police records, and a survey) using a relational database. This will allow information about data elements to be gathered from each data source used. The mechanics of how to set up relational databases are not discussed in this document. A unique identifier will need to be created to allow for linkage across all data sources included. This identifier may or may not be identical to the data element 1.01 Case ID.
# Discussion
For more than 20 years, the federal government has promoted the use of a common language to promote uniformity and comparability of data on race and ethnicity for population groups. Development of the data standards stemmed in large measure from new responsibilities to enforce civil rights laws. Data were needed to monitor equal access in housing, education, employment, and other areas for populations that had historically experienced discrimination and differential treatment because of their race or ethnicity. The standards are used not only in the decennial census (which provides the data for the "denominator" for many measures), but also in household surveys, on administrative forms (e.g., school registration and mortgage-lending applications), and in medical and other research. The categories represent a social-political construct designed for collecting data on the race and ethnicity of broad population groups in the United States.
Race is a concept used to differentiate population groups largely on the basis of physical characteristics transmitted by descent. Racial categories are neither precise nor mutually exclusive, and the concept of race lacks clear scientific definition. The common use of race in the United States draws upon differences not only in physical attributes, but also in ancestry and geographic origins. Since 1977, the federal government has sought to standardize data on race and ethnicity among its agencies. The Office of Management and Budget's (OMB) Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity (OMB 1997) was developed to meet federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides five basic racial categories but states that the collection of race data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the five basic groups. Although the directive does not specify a method of determining an individual's race, OMB prefers self-identification to identification by an observer whenever possible. The directive states that persons of mixed racial origins should be coded using multiple categories and not a multiracial category.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
Repeat coding is allowed for multiple racial categories.
# Data Standards or Guidelines
Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity (OMB 1997).
# Other References
Core Health Data Elements (National Committee on Vital and Health Statistics 1996).
# CITY, STATE, AND COUNTY OF VICTIM'S RESIDENCE 2.05
# Description/Definition
City, state, and county of the victim's residence at the time the agency or survey providing data to the sexual violence surveillance system first documented sexual violence victimization for this person.
# Uses/Type of Surveillance
Allows examination of the correspondence between the location of the victim's residence and data element 4.06 (City, state, and county of location of most recent incident of sexual violence), and may have implications for intervention strategies.
Traditional surveillance -minimum
Survey surveillance -expanded
# Discussion
Additional information (e.g., street address, zip code) can easily be added as components of this element if linkage across data sources is desired. However, to protect privacy and confidentiality, access to this level of detail must be limited to authorized personnel. Data collectors must take every step to ensure victim safety and confidentiality if the full extended version of this data element is used.
# Data Type (and Field Length)
XAD -extended address (106).
# Field Values/Coding Instructions
Component 3 is the city. Component 4 is the state or province. Component 9 is the county/parish code.
# Example:
Component 3 = Lima
Component 4 = OH
Component 9 = 019
The state or province code entered in Component 4 should be entered as a two-letter postal abbreviation. The county/parish code should be entered in Component 9 as the 3-digit Federal Information Processing Standards code. See XAD -extended address in the Technical Notes for additional information about other possible components of this data element. The numbering of these components (3, 4, and 9) is consistent with the numbering of components used elsewhere for full XAD coding.
# Data Standards or Guidelines
Health Level 7, Version 2.3 (HL7 1996).
# VICTIM'S FIRST KNOWN INCIDENT OF SEXUAL VIOLENCE
# TYPE OF SEXUAL VIOLENCE IN FIRST KNOWN INCIDENT EVER 3.01
# Uses/Type of Surveillance
Identifies all types of sexual violence that occurred in the first known incident of sexual violence.
# Survey surveillance -expanded
# Discussion
This data element, through repeat coding, can provide information about each type of sexual violence in the first known incident ever.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
Repeat coding is allowed for multiple types of sexual violence in first incident ever.
# Field Values/Coding Instructions
Code Description
0 No known sexual violence by anyone ever
1 Completed sex act without the victim's consent, or involving a victim who is unable to consent or refuse
2 Attempted (non-completed) sex act without the victim's consent, or involving a victim who is unable to consent or refuse
3 Abusive sexual contact
4 Non-contact sexual abuse
5 Sexual violence, type unspecified
9 Unknown whether any category of sexual violence ever occurred
If the response is coded "9" (unknown whether any category of sexual violence ever occurred), codes "0," "1," "2," "3," "4," or "5" should not be used.
# AGE OF VICTIM AT TIME OF FIRST KNOWN INCIDENT OF SEXUAL VIOLENCE EVER 3.02
# Description/Definition
Age of victim at time of first known incident of sexual violence described in 3.01.
# Uses/Type of Surveillance
This data element, used with data element 2.01 (Birth Date of Victim), allows for determination of how long ago the first incident of sexual violence happened.
# Survey surveillance -expanded
# Discussion
If exact age at time of first known incident of sexual violence ever is not available or is unknown, the age ranges and codes below should be used to estimate age.
# CIRCUMSTANCES AT TIME OF FIRST KNOWN INCIDENT OF SEXUAL VIOLENCE EVER 3.03
# Description/Definition
Circumstances associated with the first known incident of sexual violence described in 3.01.
# Uses/Type of Surveillance
Identifies some of the circumstances associated with the first known incident of sexual violence.
# Survey surveillance -expanded
# Discussion
Additional information can be collected that differentiates illicit and prescription drug use by victim and/or perpetrator.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
Repeat coding is allowed.
# WEAPON USED DURING FIRST KNOWN INCIDENT OF SEXUAL VIOLENCE EVER 3.04
# Description/Definition
Type(s) of weapon(s) other than bodily force used in the first known incident of sexual violence described in 3.01.
# Uses/Type of Surveillance
Documents the use of a gun, knife, or other non-bodily weapon in the first known incident of sexual violence.
# Survey surveillance -expanded
# Discussion
Severity and likelihood of physical injury and other serious consequences may be associated with weapon use. Data collectors may want to record more information to elaborate on code "3" (other non-bodily weapon(s) used).
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
Repeat coding is allowed.
# Field Values/Coding Instructions
# MULTIPLE PERPETRATORS INVOLVED IN FIRST KNOWN INCIDENT OF SEXUAL VIOLENCE EVER 3.05
# Description/Definition
Whether one or multiple perpetrators were involved in the first known incident of sexual violence described in 3.01.
# Uses/Type of Surveillance
Allows examination of differences between incidents involving one perpetrator and incidents involving more than one perpetrator.
# Survey surveillance -expanded
# Discussion
Sexual violence incidents involving more than one perpetrator may differ in nature from sexual violence incidents involving only one perpetrator.
# Data Type (and Field Length)
CE -coded element (60).
# Field Values/Coding Instructions
Code Description
1 First incident of sexual violence involved one perpetrator.
2 First incident of sexual violence involved two or more perpetrators.
9 Unknown number of perpetrators was involved in first incident of sexual violence.
# RELATIONSHIP OF PERPETRATOR(S) AND VICTIM AT TIME OF FIRST KNOWN INCIDENT OF SEXUAL VIOLENCE EVER 3.06
# Description/Definition
The relationship of the victim and perpetrator(s) at the time of the first known incident of sexual violence described in 3.01.
# Uses/Type of Surveillance
Allows examination of a full range of possible relationships between victim and perpetrator and allows examination of differences in experiences by type of relationship.
# Survey surveillance -expanded
# Discussion
Legally married spouses are categorized separately in this data element because laws related to sexual violence in some states apply only to legally married spouses. Separating these categories would allow the collector to examine the differences in experiences that might exist due to the legality of the relationship.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
If there was more than one perpetrator (see data element 3.05 Multiple Perpetrators), code the relationship of each perpetrator involved in the first incident of sexual violence ever described in 3.01.
# Field Values/Coding Instructions
Code Description
1 In the first incident of sexual violence ever, the perpetrator was a current legal spouse of the victim.
2 In the first incident of sexual violence ever, the perpetrator was a former legal spouse of the victim.
3 In the first incident of sexual violence ever, the perpetrator was another current intimate partner of the victim.
4 In the first incident of sexual violence ever, the perpetrator was another former intimate partner of the victim.
5 In the first incident of sexual violence ever, the perpetrator was another family member of the victim.
6 In the first incident of sexual violence ever, the perpetrator was a person in a position of power or trust to the victim.
7 In the first incident of sexual violence ever, the perpetrator was a friend/acquaintance of the victim.
8 In the first incident of sexual violence ever, the perpetrator was another non-stranger to the victim.
9 In the first incident of sexual violence ever, the perpetrator was a stranger to the victim.
99 In the first incident of sexual violence ever, the relationship with the perpetrator is unknown.
This data element should be coded to reflect the perpetrator's relationship to the victim at the time of the first incident of sexual violence described in 3.01.
# COHABITATION OF VICTIM AND PERPETRATOR(S) AT TIME OF FIRST KNOWN INCIDENT OF SEXUAL VIOLENCE EVER 3.07
# Description/Definition
The living arrangement of victim and perpetrator(s) at the time of the first known incident of sexual violence described in 3.01.
# Uses/Type of Surveillance
Allows the examination of differences based on whether the victim and perpetrator were living together at the time of the first known incident of sexual violence.
# Survey surveillance -expanded
# Discussion
This data element could serve as a proxy for familiarity between victim and perpetrator if information about data element 3.06 (Relationship of perpetrator and victim) is not available.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
If there was more than one perpetrator (see data element 3.05 Multiple Perpetrators), code data on the living arrangements of the victim and each of the perpetrators who committed the first incident of sexual violence described in 3.01.
# Field Values/Coding Instructions
Code Description
0 Victim was known NOT to be sharing the same primary residence as the perpetrator at the time of the first incident of sexual violence.
1 Victim was known to be sharing the same primary residence as the perpetrator at the time of the first incident of sexual violence.
9 Unknown if the victim was sharing the same primary residence as the perpetrator at the time of the first incident of sexual violence.
# NUMBER OF INCIDENTS IN LIFETIME 3.08
# Description/Definition
Number of incidents of sexual violence in the victim's lifetime.
# Uses/Type of Surveillance
Provides a measure of the frequency of incidents of sexual violence in the victim's lifetime by any perpetrator.
# Traditional surveillance -expanded
# Survey surveillance -minimum
# Discussion
Recall that the definition of incident is "A single act or series of acts of sexual violence that are perceived to be connected to one another and that may persist over a period of minutes, hours, or days. One perpetrator or multiple perpetrators may commit an incident." Although the definition of sexual violence includes five distinct categories, the codes here combine information across the five categories to provide a general measure of lifetime prevalence of sexual violence.
# Data Type (and Field Length)
CE -coded element (60).
# Field Values/Coding Instructions
Code Description
0 No known sexual violence by anyone ever
1 1 incident of sexual violence in the victim's lifetime
2 2-5 incidents of sexual violence in the victim's lifetime
3 6-10 incidents of sexual violence in the victim's lifetime
4 More than 10 incidents of sexual violence in the victim's lifetime
9 Unknown how many incidents of sexual violence in the victim's lifetime
If data element 3.01 is used and the response is coded "0" (no known sexual violence occurred by anyone ever) or "9" (unknown whether any category of sexual violence ever occurred), this data element should not be used.
# VICTIM'S MOST RECENT INCIDENT OF SEXUAL VIOLENCE
If data element 3.01 is used and the response is coded "0" (no known sexual violence occurred by anyone ever) or "9" (unknown whether any category of sexual violence ever occurred), this section (4) of the document should not be used.
If 3.08 is used and the response is coded "0" (no known sexual violence occurred by anyone ever) or "9" (unknown how many incidents of sexual violence in the victim's lifetime), this section (4) of the document should not be used.
# TYPE OF SEXUAL VIOLENCE IN MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.01
# Uses/Type of Surveillance
Identifies all types of sexual violence that occurred in the most recent incident.
Traditional surveillance -minimum
Survey surveillance -minimum
# Discussion
This data element, with repeat coding, provides information about each type of sexual violence in the most recent incident committed by any perpetrator.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
Repeat coding is allowed for multiple types of sexual violence in most recent incident.
# Field Values/Coding Instructions
Code Description
0 No known sexual violence by anyone ever
1 Completed sex act without the victim's consent, or involving a victim who is unable to consent or refuse
2 Attempted (non-completed) sex act without the victim's consent, or involving a victim who is unable to consent or refuse
3 Abusive sexual contact
4 Non-contact sexual abuse
5 Sexual violence, type unspecified
9 Unknown whether any category of sexual violence ever occurred
If the response is coded "9" (unknown whether any category of sexual violence ever occurred), codes "0," "1," "2," "3," "4," or "5" should not be used.
# AGE OF VICTIM AT TIME OF MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.02
# Description/Definition
Age of victim at time of most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element, used with data element 2.01 (Birth Date of Victim), allows for determination of how long ago the most recent incident of sexual violence happened.
# Survey surveillance -minimum
# Discussion
If exact age at time of most recent incident of sexual violence is not available or is unknown, the age ranges and codes below should be used to estimate age.
# NUMBER OF INCIDENTS IN PAST 12 MONTHS 4.03
# Uses/Type of Surveillance
Allows an estimation of the frequency of sexual violence within the last year by any perpetrator.
# Survey surveillance -minimum
# Discussion
Recall the definition of an incident: "A single act or series of acts of sexual violence that are perceived to be connected to one another and that may persist over a period of minutes, hours, or days. One perpetrator or multiple perpetrators may commit an incident." Although the definition of sexual violence includes five distinct categories, the codes here combine information across the five categories to provide a general measure of past year incidence of sexual violence.
# Data Type (and Field Length)
CE -coded element (60).
# Field Values/Coding Instructions
Code Description
0 0 incidents of sexual violence in the 12 months prior to the date the agency or survey providing data to the sexual violence surveillance system first documented sexual violence victimization for this person
1 1 incident of sexual violence in the 12 months prior to the date the agency or survey providing data to the sexual violence surveillance system first documented sexual violence victimization for this person
2 2-5 incidents of sexual violence in the 12 months prior to the date the agency or survey providing data to the sexual violence surveillance system first documented sexual violence victimization for this person
3 6-10 incidents of sexual violence in the 12 months prior to the date the agency or survey providing data to the sexual violence surveillance system first documented sexual violence victimization for this person
4 More than 10 incidents of sexual violence in the 12 months prior to the date the agency or survey providing data to the sexual violence surveillance system first documented sexual violence victimization for this person
9 Unknown how many incidents of sexual violence in the 12 months prior to the date the agency or survey providing data to the sexual violence surveillance system first documented sexual violence victimization for this person
# CIRCUMSTANCES AT TIME OF MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.04
# Description/Definition
Circumstances associated with the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Identifies some of the circumstances associated with the most recent incident of sexual violence.
# Survey surveillance -expanded
# Discussion
Additional information can be collected that differentiates illicit and prescription drug use by victim and/or perpetrator.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
Repeat coding is allowed.
# Field Values/Coding Instructions
# WEAPON USED DURING MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.05
# Uses/Type of Surveillance
Documents the use of a gun, knife, or other non-bodily weapon in the most recent incident of sexual violence.
# Survey surveillance -expanded
# Discussion
Severity and likelihood of physical injury and other serious consequences may be associated with weapon use. Data collectors may want to record more information on code "3" (other non-bodily weapon(s) used).
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
Repeat coding is allowed.
# CITY, STATE, AND COUNTY OF LOCATION OF MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.06
# Description/Definition
City, state, and county of location of the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Allows correspondence between the location of the victim's residence and the location of the most recent incident of sexual violence and may have implications for intervention strategies.
Traditional surveillance -minimum
Survey surveillance -expanded
# Discussion
Additional information (e.g., street address, zip code) can easily be added as components of this element if linkage across data sources is desired. However, to protect privacy and confidentiality, access to this level of detail must be limited to authorized personnel. Data collectors must take every step to ensure victim safety and confidentiality if the full extended version of this data element is used.
# Data Type (and Field Length)
XAD -extended address (106).
# Field Values/Coding Instructions
Component 3 is the city. Component 4 is the state or province. Component 9 is the county/parish code.
# Example:
Component 3 = Lima
Component 4 = OH
Component 9 = 019
The state or province code entered in Component 4 should be entered as a two-letter postal abbreviation. The county/parish code should be entered in Component 9 as the 3-digit Federal Information Processing Standards code. See XAD -extended address in the Technical Notes for additional information on other possible components of this data element. The numbering of these components (3, 4, and 9) is consistent with the numbering of components used elsewhere for full XAD coding.
# Data Standards or Guidelines
Health Level 7, Version 2.3 (HL7 1996).
# MULTIPLE PERPETRATORS INVOLVED IN MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.07
# Description/Definition
Whether one or multiple perpetrators were involved in the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Allows examination of differences between incidents involving one perpetrator and incidents involving more than one perpetrator.
Traditional surveillance -expanded
Survey surveillance -expanded
# Discussion
Sexual violence incidents involving more than one perpetrator may differ in nature from those involving only one perpetrator.
# Data Type (and Field Length)
CE -coded element (60).
# Field Values/Coding Instructions
Code Description
1 The most recent incident of sexual violence involved one perpetrator.
2 The most recent incident of sexual violence involved two or more perpetrators.
9 Unknown number of perpetrators was involved in most recent incident of sexual violence.
# SEX OF PERPETRATOR(S) INVOLVED IN MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.08
# Description/Definition
Sex of perpetrator(s) involved in the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Standard demographic and identifying information for perpetrator.
Traditional surveillance -expanded
Survey surveillance -expanded
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
If there was more than one perpetrator (see data element 4.07 Multiple Perpetrators), code data on the sex of each perpetrator involved in the most recent incident of sexual violence described in 4.01.
# Field Values/Coding Instructions
# RACE OF PERPETRATOR(S) INVOLVED IN MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.09
# Discussion
For more than 20 years, the federal government has promoted the use of a common language to promote uniformity and comparability of data on race and ethnicity for population groups. Development of the data standards stemmed in large measure from new responsibilities to enforce civil rights laws. Data were needed to monitor equal access in housing, education, employment, and other areas for populations that historically had experienced discrimination and differential treatment because of their race or ethnicity. The standards are used not only in the decennial census (which provides the data for the "denominator" for many measures), but also in household surveys, on administrative forms (e.g., school registration and mortgage-lending applications), and in medical and other research. The categories represent a social-political construct designed for collecting data on the race and ethnicity of broad population groups in the United States.
Race is a concept used to differentiate population groups largely on the basis of physical characteristics transmitted by descent. Racial categories are neither precise nor mutually exclusive, and the concept of race lacks clear scientific definition. The common use of race in the United States draws upon differences not only in physical attributes, but also in ancestry and geographic origins. Since 1977, the federal government has sought to standardize data on race and ethnicity among its agencies. The Office of Management and Budget's (OMB) Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity (OMB 1997) was developed to meet federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides five basic racial categories but states that the collection of race data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the five basic groups. Although the directive does not specify a method of determining an individual's race, OMB prefers self-identification to identification by an observer whenever possible. The directive states that persons of mixed racial origins should be coded using multiple categories and not a multiracial category.
# Data Type (and Field Length)
CE -coded element (60).
# HISPANIC OR LATINO ETHNICITY OF PERPETRATOR(S) INVOLVED IN MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.10
# Description/Definition
Ethnicity of perpetrator(s) involved in the most recent incident of sexual violence described in 4.01. Hispanic or Latino ethnicity refers to a person of Mexican, Puerto Rican, Cuban, South or Central American, or other Spanish culture or origin, regardless of race. The term "Spanish origin" can be used in addition to "Hispanic or Latino."
# Uses/Type of Surveillance
Data on ethnicity are used in public health surveillance and in epidemiological, behavioral and social science, clinical, and health services research.
Traditional surveillance -expanded
Survey surveillance -expanded
# Discussion
Ethnicity is a concept used to differentiate population groups on the basis of shared cultural characteristics or geographic origins. A variety of cultural attributes contribute to ethnic differentiation, including language, patterns of social interaction, religion, and styles of dress. However, ethnic differentiation is imprecise and fluid. It is contingent on a sense of group identity that can change over time and that involves subjective and attitudinal influences. Since 1977, the federal government has sought to standardize data on race and ethnicity among its agencies. The Office of Management and Budget's (OMB) Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity (OMB 1997) was developed to meet Federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides two basic ethnic categories -Hispanic or Latino and Not of Hispanic or Latino Origin -but states that collection of ethnicity data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the two basic groups. OMB prefers that data on race and ethnicity be collected separately. The use of the Hispanic category in a combined race/ethnicity data element makes it impossible to distribute persons of Hispanic ethnicity by race and, therefore, reduces the utility of the five basic racial categories by excluding from them persons who would otherwise be included.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
If there was more than one perpetrator (see data element 4.07 Multiple Perpetrators), code data on the ethnicity of each perpetrator involved in the most recent incident of sexual violence described in 4.01.
# RELATIONSHIP OF PERPETRATOR(S) AND VICTIM AT TIME OF MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.12
# Description/Definition
The relationship of perpetrator(s) to the victim at the time of the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Allows examination of a full range of possible relationships between victim and perpetrator and allows examination of differences in experiences by type of relationship.
Traditional surveillance -minimum
Survey surveillance -minimum
# Discussion
Legally married spouses are categorized separately in this data element because laws related to sexual violence in some states apply only to legally married spouses. Separating these categories would allow the collector to examine the differences in experiences that might exist due to the legality of the relationship.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
If there was more than one perpetrator (see data element 4.07 Multiple Perpetrators), code the relationship of each perpetrator involved in the most recent incident of sexual violence described in 4.01.
# Field Values/Coding Instructions
Code Description
1 In the most recent incident of sexual violence, the perpetrator was a current legal spouse of the victim.
2 In the most recent incident of sexual violence, the perpetrator was a former legal spouse of the victim.
3 In the most recent incident of sexual violence, the perpetrator was another current intimate partner of the victim.
4 In the most recent incident of sexual violence, the perpetrator was another former intimate partner of the victim.
5 In the most recent incident of sexual violence, the perpetrator was another family member of the victim.
6 In the most recent incident of sexual violence, the perpetrator was a person in a position of power or trust to the victim.
7 In the most recent incident of sexual violence, the perpetrator was a friend/acquaintance of the victim.
8 In the most recent incident of sexual violence, the perpetrator was another non-stranger to the victim.
9 In the most recent incident of sexual violence, the perpetrator was a stranger to the victim.
99 In the most recent incident of sexual violence, the relationship with the perpetrator is unknown.
The data element should be coded to reflect the perpetrator's relationship to the victim at the time of the most recent incident of sexual violence described in 4.01.
# WHETHER PERPETRATOR(S) IN MOST RECENT INCIDENT HAS SEXUALLY VICTIMIZED VICTIM IN THE PAST 4.13
# Description/Definition
Whether or not the perpetrator(s) in the most recent incident of sexual violence described in 4.01 has (have) sexually victimized the victim in the past.
# Uses/Type of Surveillance
This data element is designed to determine if there was a history of previous sexual violence by the perpetrator in the most recent incident. It can be used to estimate a pattern of violence by the same perpetrator and can be linked with data element 4.12 (Relationship of Perpetrator and Victim).
# Survey surveillance -expanded
# Discussion
Some research suggests that repeated sexual violence by the same perpetrator may increase in frequency or severity over time (Finkelhor and Yllo 1985; Russell 1990). This data element, as currently written, does not allow for a record of the changes in the types of sexual violence by the same perpetrator over time. Data collectors may wish to create additional data elements to document more detailed information about the types of sexual violence committed by the same perpetrator as in the most recent incident.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
If there was more than one perpetrator (see data element 4.07 Multiple Perpetrators), code data on each perpetrator involved in the most recent incident of sexual violence described in 4.01.
# Field Values/Coding Instructions
Code Description
0 Perpetrator in most recent incident has NOT sexually victimized the victim in the past.
1 Perpetrator in most recent incident has sexually victimized the victim in the past.
9 Unknown whether or not perpetrator in most recent incident has sexually victimized the victim in the past.
# LOCATION OF MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.15
# Description/Definition
The physical location of the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element may have implications for intervention strategies.
# Survey surveillance -expanded
# Discussion
This data element may assist in determining whether certain locations are more common than others.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
Repeat coding is allowed if sexual violence in most recent incident occurred in more than one physical location.
# Field Values/Coding Instructions
# TECHNICAL NOTES
# CX -extended composite ID with check digit
This data type is used for certain fields that commonly contain check digits (e.g., internal agency identifier indicating a specific person, such as a patient or client). Component 1 contains an alphanumeric identifier. The check digit entered in Component 2 is an integral part of the identifier but is not included in Component 1. Component 3 identifies the algorithm used to generate the check digit. Component 4 is the unique name of the system that created the identifier. Component 5 is a code for the identifier type, such as MR for medical record number (see Table 0203 in HL7, Version 2.3). Component 6 is the place or location where the identifier was first assigned to the individual (e.g., University Hospital).
# NM -numeric
An entry into a field of this data type is a number represented by a series of ASCII numeric characters consisting of an optional leading sign (+ or -), one or more digits, and an optional decimal point. In the absence of a + or -sign, the number is assumed to be positive. Leading zeros or trailing zeros after a decimal point are not meaningful. The only nonnumeric characters allowed are the optional leading sign and decimal point.
# TS -time stamp
# Form: YYYY[MM[DD[HHMM]]][+/-ZZZZ]
A data element of this type is string data that contains the date and time of an event. YYYY is the year, MM is the month, and DD is the day of the month. The time, HHMM, is based on a 24-hour clock in which midnight is 0000 and 2359 is 11:59 p.m., and +/-ZZZZ is the offset from Greenwich Mean Time (for example, -0500 is Eastern Standard Time, and -0600 is Central Standard Time). If the optional +/-ZZZZ is missing, local time is assumed.
A TS data field should be left blank when the time of an event or the information is not recorded (missing data). As a convention (not an HL7 standard), 99 can be used to indicate that this information is not known.
# Missing, Unknown, and Null Values
Missing values are values that are either not sought or not recorded. In a computerized system, missing values should always be identifiable and distinguished from unknown or null values. Typically, no keystrokes are made, and as a result alphanumeric fields remain as default characters (most often blanks) and numeric fields are identifiable as never having had entries.
Unknown values are values that are recorded to indicate that information was sought and found to be unavailable. Various conventions are used to enter unknown values: the word "unknown" or a single character value (9 or U) for the CE -coded element data type; 99 for two or more unknown digits for the TS -time stamp data type; and 9 or a series of 9s for the NM -numeric data type. Note: the use of Unknown, U and 9s in this document to represent values that are not known is an arbitrary choice. Other notations may be used for unknown value entries.
Null values are values that represent none or zero or that indicate specific properties are not measured. For alphanumeric fields, the convention of entering "" in the field is recommended to represent none (e.g., no telephone number), and the absence of an inquiry requires no data entry (e.g., not asking about a telephone number results in missing data). For numeric fields, the convention of entering 8 or a series of 8s is recommended to denote that a measurement was not made, preserving an entry of zero for a number in the measurement continuum. Note: the use of "" and 8s in this document to represent null values is an arbitrary choice. Other notations may be used for null value entries.
Null or unknown values in multicomponent data types (i.e., CE, CX, and XAD) are indicated in the first alphanumeric component. For example, in an XAD data type, "" or "unknown" would be entered in the component to indicate there was no address or that the address was not known, and no data would be entered in the remaining components.
Data Elements and Components That Are Not Applicable. Data entry is not required in certain fields when the data elements or their components do not pertain (e.g., victim's pregnancy impact would not be applicable to male victims). Skip patterns should be used as needed to reduce data entry burdens.
# Repetition
Repeat coding is allowed for multiple racial categories. If there was more than one perpetrator (see data element 4.07 Multiple Perpetrators), code data on the race(s) of each perpetrator involved in the most recent incident of sexual violence described in 4.01.
# Field Values/Coding Instructions
Code Description
1 American Indian/Alaska Native -a person having origins in any of the original peoples of North and South America (including Central America), and who maintains tribal affiliation or community attachment
2 Asian -a person having origins in any of the original peoples of the Far East, Southeast Asia, or the Indian subcontinent including, for example, Cambodia, China, India, Japan, Korea, Malaysia, Pakistan, the Philippine Islands, Thailand, and Vietnam
3 Black or African American -a person having origins in any of the black racial groups of Africa. Terms such as "Haitian" or "Negro" can be used in addition to "Black or African American"
4 Native Hawaiian or Other Pacific Islander -a person having origins in any of the original peoples of Hawaii, Guam, Samoa, or other Pacific Islands
5 White -a person having origins in any of the original peoples of Europe, the Middle East, or North Africa
6 Other race (specify)
9 Unknown -a person's race is unknown
# Data Standards or Guidelines
Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity (OMB 1997).
# Other References
Core Health Data Elements (National Committee on Vital and Health Statistics 1996).
# Field Values/Coding Instructions
Code Description
1 Of Hispanic or Latino origin
2 Not of Hispanic or Latino origin
9 Unknown whether of Hispanic or Latino origin
# Data Standards or Guidelines
Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity (OMB 1997).
# Other References
Core Health Data Elements (National Committee on Vital and Health Statistics 1996).
# AGE OF PERPETRATOR(S) AT TIME OF MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.11
# Description/Definition
Age of perpetrator(s) at time of the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element, used with data element 4.02 (Age of Victim at Time of Most Recent Incident of Sexual Violence), allows for determination of the age difference between victim and perpetrator.
Traditional surveillance -expanded
Survey surveillance -minimum
# Discussion
If exact age at time of most recent incident of sexual violence is not available or is unknown, the age categories and codes below should be used to estimate age.
# Data Type (and Field Length)
NM -numeric (3) for exact age; CE -coded element (60) for age ranges.
# Repetition
If there was more than one perpetrator (see data element 4.07 Multiple Perpetrators), code data on the age of each perpetrator involved in the most recent incident of sexual violence described in 4.01.
# COHABITATION OF VICTIM AND PERPETRATOR(S) AT TIME OF MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.14
# Description/Definition
The living arrangement of the victim and perpetrator(s) at the time of the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Allows the examination of differences based on whether the victim and perpetrator were living together at the time of the most recent incident of sexual violence.
# Survey surveillance -expanded
# Discussion
This data element could serve as a proxy for familiarity between victim and perpetrator if information on data element 4.12 (Relationship of Perpetrator and Victim) is not available.
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
If there was more than one perpetrator (see data element 4.07 Multiple Perpetrators), code data on the living arrangements of the victim and each of the perpetrators who perpetrated the most recent incident of sexual violence described in 4.01.
# Field Values/Coding Instructions
Code Description
0 Victim was known NOT to be sharing the same household as the perpetrator at the time of the most recent incident of sexual violence.
1 Victim was known to be sharing the same household as the perpetrator at the time of the most recent incident of sexual violence.
9 Unknown if the victim was sharing the same household as the perpetrator at the time of the most recent incident of sexual violence.
# HUMAN IMMUNODEFICIENCY VIRUS (HIV) DIAGNOSED FOLLOWING MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.16
# Description/Definition
Whether or not Human Immunodeficiency Virus (HIV) was diagnosed following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Documents HIV infection following the most recent incident of sexual violence.
Traditional surveillance -expanded
Survey surveillance -expanded
# Discussion
It is conceivable that HIV could be contracted from sexual contact other than that which occurred in the most recent incident of sexual violence. In addition, if the most recent incident of sexual violence was within 3 to 6 months of the date of record or date of survey, possible HIV infection may be unknown or undetectable.
# Data Type (and Field Length)
CE -coded element (60).
# Field Values/Coding Instructions
Code Description
0 Victim was known NOT to have been diagnosed with HIV following the most recent incident of sexual violence.
1 Victim was known to have been diagnosed with HIV following the most recent incident of sexual violence.
9 Unknown if victim was diagnosed with HIV following the most recent incident of sexual violence.
# SEXUALLY TRANSMITTED DISEASE (EXCLUDING HIV) DIAGNOSED FOLLOWING MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.17
# Description/Definition
Whether or not a sexually transmitted disease (excluding HIV) was diagnosed following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Documents sexually transmitted disease (STD) infection, not including HIV, following the most recent incident of sexual violence.
# Traditional surveillance -expanded
# Survey surveillance -expanded
# Discussion
It is conceivable that an STD could be contracted from sexual contact other than that which occurred in the most recent incident of sexual violence. In addition, if the most recent incident of sexual violence was within 3 to 6 months of the date of record or date of survey, possible STD infection may be unknown or undetectable.
# Data Type (and Field Length)
CE -coded element (60).
# Field Values/Coding Instructions
Code Description
0 Victim was known NOT to have been diagnosed with an STD (excluding HIV) following the most recent incident of sexual violence.
1 Victim was known to have been diagnosed with an STD (excluding HIV) following the most recent incident of sexual violence.
9 Unknown if victim was diagnosed with an STD (excluding HIV) following the most recent incident of sexual violence.
# PREGNANCY IMPACT FROM MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.18
# Description/Definition
The pregnancy impact to the victim following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element allows for an examination of the relationship between the most recent incident of sexual violence and occurrence of pregnancy or loss of existing pregnancy.
Traditional surveillance -expanded
Survey surveillance -expanded
# Discussion
This data element combines a pregnancy resulting from the sexual violence with loss of an existing pregnancy as a result of the sexual violence. Data collectors may want to separate these two categories to understand differences between these two types of pregnancy impact.
# Data Type (and Field Length)
CE -coded element (60).
# Field Values/Coding Instructions
Code Description
0 Victim was known NOT to have had a pregnancy impact following the most recent incident of sexual violence.
1 Victim was known to have had a pregnancy impact following the most recent incident of sexual violence.
9 Unknown if victim had a pregnancy impact following the most recent incident of sexual violence.
# PHYSICAL INJURY TO VICTIM DURING MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.19
# Description/Definition
The physical injury to the victim during the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element documents the extent of physical injury experienced by the victim during the most recent incident of sexual violence.
# Traditional surveillance -minimum
# Survey surveillance -minimum
# Discussion
This data element only documents those injuries that are recognized as happening during the most recent incident of sexual violence. Data collectors may want to gather more detail about the types of physical injuries that occurred during the most recent incident.
# Data Type (and Field Length)
CE -coded element (60).
# Field Values/Coding Instructions
Code Description
0 Victim was known NOT to have suffered any physical injury during the most recent incident of sexual violence.
1 Victim was known to have suffered physical injury during the most recent incident of sexual violence.
9 Unknown if victim suffered physical injury during the most recent incident of sexual violence.
# DEATHS RELATED TO MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.20
# Description/Definition
All deaths associated with the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Incidents of sexual violence involving one or more deaths may be different in nature from those that do not involve any fatalities.
Traditional surveillance -minimum
# Data Type (and Field Length)
CE -coded element (60).
# Repetition
Repeat coding is allowed if more than one death occurred as a result of most recent incident.
# Field Values/Coding Instructions
Code Description
0 No known deaths resulted from the most recent incident of sexual violence.
1 Victim's death, by homicide, resulted from the most recent incident of sexual violence.
2 Victim's death, self-inflicted, resulted from the most recent incident of sexual violence.
3 Perpetrator's death, by homicide, resulted from the most recent incident of sexual violence.
4 Perpetrator's death, self-inflicted, resulted from the most recent incident of sexual violence.
5 Death of someone else resulted from the most recent incident of sexual violence.
9 Unknown if any deaths resulted from the most recent incident of sexual violence.
# CHANGE(S) IN PSYCHOLOGICAL FUNCTIONING IN VICTIM FROM MOST RECENT INCIDENT OF SEXUAL VIOLENCE 4.21
# Description/Definition
The change(s) in psychological functioning caused or aggravated by the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
Allows an examination of a full range of possible changes in psychological functioning as a result of the most recent incident of sexual violence.
# Traditional surveillance - expanded
# Survey surveillance - minimum
# Discussion
Research emphasizes the links between sexual violence and various changes in psychological functioning (DeKeseredy 1995; Resick 1993). Data collectors may want to gather more detail on the types of changes in psychological functioning to determine differential effects of the various types. Some psychological consequences (e.g., depression) will not be evident following the most recent incident of sexual violence, especially if the violence happened relatively recently. But it is important for surveillance mechanisms to try to track the multitude of changes in psychological functioning that may occur over time following sexual violence, such as increases in anxiety, depression, eating disorders, or post-traumatic stress disorder (PTSD), as these are often consequences of sexual violence (Kilpatrick, Resnick, Saunders, and Best 1998; Kuyken 1995; Resick 1993; Shields and Hanneke 1992).
# 4.22 INPATIENT MEDICAL CARE RECEIVED BY VICTIM FOLLOWING MOST RECENT INCIDENT OF SEXUAL VIOLENCE
# Description/Definition
The inpatient medical health care received by the victim following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element documents the inpatient medical care received by the victim.
# Survey surveillance - expanded
# Discussion
This data element may be used as a proxy for data element 4.19 (Physical Injury), if that information is not available. This element may also provide an estimate of the severity of physical injury and/or psychological changes.
# 4.23 OUTPATIENT MEDICAL CARE RECEIVED BY VICTIM FOLLOWING MOST RECENT INCIDENT OF SEXUAL VIOLENCE
# Description/Definition
The outpatient medical health care received by the victim following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element documents the outpatient medical health care received by the victim.
# Survey surveillance - expanded
# Discussion
This data element may be used as a proxy for data element 4.19 (Physical Injury), if that information is not available. This element may also provide an estimate of the severity of physical injury and/or changes in psychological functioning.
# Data Type (and Field Length)
CE - coded element (60).
# Field Values/Coding Instructions
- 0 = Victim was known NOT to have received any outpatient medical health care following the most recent incident of sexual violence.
- 1 = Victim was known to have received outpatient medical health care following the most recent incident of sexual violence.
- 9 = Unknown if the victim received outpatient medical health care following the most recent incident of sexual violence.
# 4.25 MENTAL HEALTH CARE RECEIVED BY VICTIM FOLLOWING MOST RECENT INCIDENT OF SEXUAL VIOLENCE
# Description/Definition
The mental health care received by the victim following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element documents care received related to changes in psychological functioning.
# Survey surveillance - expanded
# Discussion
Research demonstrates the link between sexual violence and serious decreases in psychological functioning. This data element may be used as a proxy if information for data element 4.21 (Changes in Psychological Functioning) is not available.

*An example of mental health care received at some time after one year is an adult who received mental health care 10 years after a childhood experience of sexual violence.
If a rape crisis center or other sexual assault service provider is located in the mental health facility, code as a rape crisis center or other sexual assault service provider under element 4.27.
# 4.26 LAW ENFORCEMENT CONTACTED FOLLOWING MOST RECENT INCIDENT OF SEXUAL VIOLENCE
# Description/Definition
Whether law enforcement was contacted at any time following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element allows for an estimate of the amount and type(s) of sexual violence reported to law enforcement.
Traditional surveillance - expanded
Survey surveillance - expanded
# Discussion
Research shows that the large majority of sexual violence is unreported to authorities (Bachar and Koss 2001; Koss 1992; Plichta and Falik 2001). When collecting data through survey surveillance, this data element can be used to calculate the percentage of the sample for which the most recent incident of sexual violence was reported. This data element can also be used in conjunction with 4.01 (Type(s) of Sexual Violence) and 4.12 (Relationship of Perpetrator(s) and Victim) to determine if there are patterns of reporting based on type of sexual violence and/or relationship to perpetrator.
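As an illustration of the kind of calculation described above, the following sketch assumes survey records are held in a pandas DataFrame; the column names and category labels are hypothetical, not part of the data-element specification:

```python
# Illustrative only: survey records with hypothetical column names and labels.
import pandas as pd

df = pd.DataFrame({
    "type_of_sv": ["type A", "type B", "type A", "type A"],
    "law_enforcement_contacted": [1, 0, 9, 0],  # 0 = no, 1 = yes, 9 = unknown
})

# Percentage reported, among cases where reporting status is known:
known = df[df["law_enforcement_contacted"].isin([0, 1])]
pct = 100 * (known["law_enforcement_contacted"] == 1).mean()
print(f"Reported to law enforcement: {pct:.1f}%")

# Cross-tabulation against type of sexual violence (element 4.01):
print(pd.crosstab(known["type_of_sv"], known["law_enforcement_contacted"]))
```

Note that code 9 (unknown) is excluded from the denominator before the reporting percentage is computed.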
# Data Type (and Field Length)
CE - coded element (60).
# Field Values/Coding Instructions
- 0 = Law enforcement was NOT contacted following the most recent incident of sexual violence.
- 1 = Law enforcement was contacted following the most recent incident of sexual violence.
- 9 = Unknown if law enforcement was contacted following the most recent incident of sexual violence.
# 4.27 INVOLVEMENT BY A RAPE CRISIS CENTER/SEXUAL ASSAULT SERVICE PROVIDER FOLLOWING MOST RECENT INCIDENT OF SEXUAL VIOLENCE
# Description/Definition
Whether or not there was involvement by a rape crisis center or other sexual assault service provider (e.g., child advocacy center) following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element allows for an estimate of the number and type(s) of sexual violence incidents that are brought to the attention of rape crisis centers or sexual assault service providers.
# Survey surveillance - expanded
# Discussion
Research shows that the large majority of sexual violence is unreported (Bachar and Koss 2001; Koss 1992; Plichta and Falik 2001). When collecting data through survey surveillance, this data element can be used to calculate the percentage of the sample for which the most recent incident of sexual violence was reported to rape crisis centers or sexual assault service providers. This data element can also be used in conjunction with 4.01 (Type of Sexual Violence) and 4.12 (Relationship of Perpetrator and Victim) to determine if there are patterns of reporting based on type of sexual violence and/or relationship to perpetrator.
# Data Type (and Field Length)
CE - coded element (60).
# Field Values/Coding Instructions
- 0 = Involvement by a rape crisis center or other sexual assault service provider did NOT occur following the most recent incident of sexual violence.
- 1 = Involvement by a rape crisis center or other sexual assault service provider did occur within one year following the most recent incident of sexual violence.
- 2 = Involvement by a rape crisis center or other sexual assault service provider did occur at some time after one year following the most recent incident of sexual violence.*
- 9 = Unknown if there was involvement by a rape crisis center or other sexual assault service provider following the most recent incident of sexual violence.
*An example of involvement by a rape crisis center or other sexual assault service provider at some time after one year is an adult who went to a rape crisis center 10 years after a childhood experience of sexual violence.
If the rape crisis center or other sexual assault service provider is located in a mental health facility, code as a rape crisis center or other sexual assault service provider.
# 4.28 INVOLVEMENT BY CHILD PROTECTIVE SERVICES FOLLOWING MOST RECENT INCIDENT OF SEXUAL VIOLENCE
# Description/Definition
Whether or not there was involvement by child protective services for the victim following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element allows for an estimate of the number and type(s) of sexual violence incidents that are brought to the attention of child protective services.
# Survey surveillance - expanded
# Discussion
This data element refers to the victim of the most recent incident of sexual violence. The most recent incident of sexual violence may have occurred several years ago, when the victim was a child. The victim can either be a child or an adult who is referring back to childhood. Research shows that the large majority of sexual violence is unreported (Bachar and Koss 2001; Koss 1992; Plichta and Falik 2001).

# Field Values/Coding Instructions
- Involvement by child protective services was not applicable because the victim was an adult at the time of the most recent incident of sexual violence.
- 9 = Unknown if there was involvement by child protective services following the most recent incident of sexual violence.
# 4.29 INVOLVEMENT BY ADULT PROTECTIVE SERVICES FOLLOWING MOST RECENT INCIDENT OF SEXUAL VIOLENCE
# Description/Definition
Whether or not there was involvement by adult protective services for the victim following the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element allows for an estimate of the number and type(s) of sexual violence incidents that are brought to the attention of adult protective services.
# Survey surveillance - expanded
# Discussion
Research shows that the large majority of sexual violence is unreported (Bachar and Koss 2001; Koss 1992; Plichta and Falik 2001). When collecting data through survey surveillance, this data element can be used to calculate the percentage of the sample for which the most recent incident of sexual violence was reported to adult protective services. This data element can also be used in conjunction with 4.01 (Type of Sexual Violence) and 4.12 (Relationship of Perpetrator and Victim) to determine if there are patterns of reporting based on type of sexual violence and/or relationship to perpetrator.
# Data Type (and Field Length)
CE - coded element (60).
# Field Values/Coding Instructions
- 0 = Involvement by adult protective services did NOT occur following the most recent incident of sexual violence.
- 2 = Involvement by adult protective services did occur following the most recent incident of sexual violence.
- 9 = Unknown if there was involvement by adult protective services following the most recent incident of sexual violence.
# 4.30 DATE OF MOST RECENT INCIDENT OF SEXUAL VIOLENCE
# Description/Definition
Date of the most recent incident of sexual violence described in 4.01.
# Uses/Type of Surveillance
This data element can be used in conjunction with data element 2.01 (Birth Date of Victim) to calculate the victim's age at the time of the most recent incident of sexual violence.
Traditional surveillance - minimum
Survey surveillance - expanded
# Discussion
If the most recent incident of sexual violence described in 4.01 lasted for more than one day, code the date that the incident ended. If the date of the incident is unknown, date of record can be used in traditional surveillance.
# Data Type (and Field Length)
TS - time stamp (26).
# Field Values/Coding Instructions
Year, month, and day are entered in the format YYYYMMDD. For example, the date May 1, 1996, would be coded as "19960501." See also TS - time stamp in the Technical Notes.
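The following is an illustrative sketch of working with the YYYYMMDD format, including the age calculation described under Uses/Type of Surveillance (element 2.01 supplies the birth date); the function name and dates are hypothetical:

```python
# Illustrative only: parsing the YYYYMMDD time stamp and computing the
# victim's age at the most recent incident from the birth date (element 2.01).
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    """Parse the date portion of a TS field coded as YYYYMMDD."""
    return datetime.strptime(ts[:8], "%Y%m%d")

birth = parse_ts("19800315")     # hypothetical birth date
incident = parse_ts("19960501")  # May 1, 1996, coded as "19960501"

# Subtract one year if the incident falls before the birthday that year:
age = incident.year - birth.year - (
    (incident.month, incident.day) < (birth.month, birth.day)
)
print(age)  # 16
```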
# Data Standards or Guidelines
E1384-96 (ASTM 1996) and Health Level 7, Version 2.3 (HL7 1996).
# XAD - extended address
"id": "16ef803593ee2410572da8755233116d357366d8",
"source": "cdc",
"title": "None",
"url": "None"
} |
Since the establishment of the original Immediately Dangerous to Life or Health (IDLH) values in 1974, the National Institute for Occupational Safety and Health (NIOSH) has continued to review available scientific data to improve the methodology used to derive acute exposure guidelines, in addition to the chemical-specific IDLH values. The primary objective of this Current Intelligence Bulletin (CIB) is to present a methodology, based on the modern principles of risk assessment and toxicology, for the derivation of IDLH values, which characterize the health risks of occupational exposures to high concentrations of airborne contaminants. The methodology for deriving IDLH values presented in the CIB incorporates the approach established by the National Advisory Committee on Acute Exposure Guideline Levels.

# Executive Summary
Chemicals are a ubiquitous component of the modern workplace. Occupational exposures to chemicals have long been recognized as having the potential to adversely affect the lives and health of workers. Acute or short-term exposures to high concentrations of some airborne chemicals have the ability to quickly overwhelm workers, resulting in a spectrum of undesirable outcomes that may include irritation of the eyes and respiratory tract, severe irreversible health effects, impairment of the ability to escape from the exposure environment, and, in extreme cases, death. Airborne concentrations of chemicals capable of causing such adverse health effects or of impeding escape from high-risk conditions may arise from a variety of non-routine workplace situations affecting workers, including special work procedures (e.g., in confined spaces), industrial accidents (e.g., chemical spills or explosions), and chemical releases into the community (e.g., during transportation incidents or other uncontrolled-release scenarios).
Since the 1970s, the National Institute for Occupational Safety and Health (NIOSH) has been responsible for the development of acute exposure guidelines called Immediately Dangerous to Life or Health (IDLH) values, which are intended to characterize these high-risk conditions. Used initially as key components of the NIOSH Respirator Selection Logic , IDLH values are established (1) to ensure that the worker can escape from a given contaminated environment in the event of failure of the respiratory protection equipment and (2) to indicate a maximum level above which only a highly reliable breathing apparatus, providing maximum worker protection, is permitted. In addition, occupational health professionals have employed these acute exposure guidelines beyond their initial purpose as a component of the NIOSH Respirator Selection Logic. Examples of such applications of the IDLH values include the development of Risk Management Plans (RMPs) for non-routine work practices governing operations in high-risk environments (e.g., confined spaces) and the development of Emergency Preparedness Plans (EPPs), which provide guidance for emergency response personnel and workers during unplanned exposure events.
Since the establishment of the IDLH values in the 1970s, NIOSH has continued to review available scientific data to improve the protocol used to derive acute exposure guidelines, in addition to the chemical-specific IDLH values. The information presented in this Current Intelligence Bulletin (CIB) represents the most recent update of the scientific rationale and the methodology (hereby referred to as the IDLH methodology) used to derive IDLH values. The primary objective of this document is to present that updated scientific rationale and methodology.

The IDLH methodology outlined in this CIB reflects the modern principles and understanding in the fields of risk assessment, toxicology, and occupational health and provides the scientific rationale for the derivation of IDLH values based on contemporary risk assessment practices. According to this protocol, IDLH values are based on health effects considerations determined through a critical assessment of the toxicology and human health effects data. This approach ensures that the IDLH values reflect an airborne concentration of a substance that represents a high-risk situation that may endanger workers' lives or health. Relevant airborne concentrations are typically addressed through the characterization of inhalation exposures; however, airborne chemicals can also contribute to toxicity through other exposure routes, such as the skin and eyes. In this document, airborne concentrations are referred to as acute inhalation limits or guidelines to adhere to commonly used nomenclature.
The emphasis on health effects is consistent with both the traditional use of IDLH values as a component of the respirator selection logic and the growing applications of IDLH values in RMPs for non-routine work practices governing operations in high-risk environments (e.g., confined spaces) and the development of EPPs. Incorporated in the IDLH methodology are the standing guidelines and procedures used for the development of community-based acute exposure limits called Acute Exposure Guideline Levels (AEGLs).
The inclusion of the AEGL methodology has helped ensure that the health-based IDLH values derived with use of the guidance provided in this document are based on validated scientific rationale.
The IDLH methodology is based on a weight-of-evidence approach that applies scientific judgment for critical evaluation of the quality and consistency of scientific data and in extrapolation from the available data to the IDLH value. The weight-of-evidence approach refers to critical examination of all available data from diverse lines of evidence and the derivation of a scientific interpretation on the basis of the collective body of data, including its relevance, quality, and reported results. This is in contrast to a purely hierarchical or strength-of-evidence approach, which relies on rigid decision criteria for selecting a critical adverse effect, a point of departure (POD), or the point on the dose-response curve from which dose extrapolation is initiated and for applying default uncertainty factors (UFs) to derive the IDLH value. Conceptually, the derivation process for IDLH values is similar to that used in other risk-assessment applications, including these steps:
- Hazard characterization
- Identification of critical adverse effects
- Identification of a POD
- Application of appropriate UFs, based on the study and POD
- Determination of the final risk value.
However, the use of a weight-of-evidence approach allows for integration of all available data that may originate from different lines of evidence into the analysis and the subsequent derivation of an IDLH value. Ideally, this ensures that the analysis is not restricted to a limited dataset or a single study for a specific chemical. In particular, application of the appropriate UFs to each potential POD allows for consideration of the impact of the overall dataset as well as the uncertainties associated with each potential key study in determining the final IDLH value.
The primary steps (see Figure 3-1) applied in the establishment of an IDLH value include the following:
- Critical review of human and animal toxicity data to identify potential relevant studies and characterize the various lines of evidence that can support the derivation of the IDLH value
- Determination of a chemical's mode of action (MOA) or description of how a chemical exerts its toxic effects
- Application of duration adjustments (time scaling) to determine 30-minute-equivalent exposure concentrations and the conduct of other dosimetry adjustments, as needed
- Selection and application of a UF for POD or critical adverse effect concentration, identified from the available studies to account for issues associated with interspecies and intraspecies differences, severity of the observed effects, data quality, or data insufficiencies
- Development of the final recommendation for the IDLH value from the various alternative lines of evidence, with use of a weight-of-evidence approach to all of the data.
NIOSH recognizes that in some cases a health-based IDLH value might not account for all workplace hazards, such as safety concerns or considerations. Here are some examples of situations and conditions that might preclude the use of a health-based IDLH value:
- The airborne concentration of a substance is sufficient to cause oxygen deprivation (oxygen concentration <19.5%), a life-threatening condition
- The concentration of particulate matter generated during a process significantly reduces visibility, preventing escape from the hazardous environment
- The airborne concentration of a gas or vapor is greater than 10% of the lower explosive limit (LEL) and represents an explosive hazard.
In such cases, it is important that safety hazards or other considerations be taken into account. Information on the safety hazards will be incorporated in the support documentation (see Appendix A) for an IDLH value, to aid occupational health professionals in the development of RMPs for non-routine work practices governing operations in high-risk environments (e.g., confined spaces) and EPPs. In the event that the derived health-based IDLH value exceeds 10% of the LEL concentration for a flammable gas or vapor, the air concentration that is equal to 10% of the LEL will become the default IDLH value for the chemical. The following hazard statement will be included in the support documentation: "The health-based IDLH value is greater than 10% of the LEL (>10% LEL) of the chemical of interest in the air. Safety considerations related to the potential hazard of explosion must be taken into account." In addition, the notation ">10% LEL" will appear beside the IDLH value in the NIOSH Pocket Guide to Chemical Hazards and other NIOSH publications. The equivalent default approach for dust would be based on 10% of the minimum explosive concentration (MEC). However, determining the combustibility of dusts is complicated and dictated by the relationship between multiple dust-specific factors including, but not limited to, particle size distribution, minimum ignition energy, explosion intensity, and dispersal in the air. The ability to quantify dust-specific concentrations that could represent explosive hazards for risk assessment purposes is limited and often not possible, given the absence of critical data, such as chemical-specific MEC and other previously identified factors. Despite the absence of specific guidance, NIOSH will critically assess the explosive nature of a dust when sufficient technical data are available. If determined to be appropriate, the findings of this assessment will be incorporated into the derivation process to ensure that the IDLH value is protective against both health and safety hazards. When an explosive hazard is identified for an aerosol, NIOSH will include the following hazard statement: "Dust may represent an explosive hazard. Safety considerations related to hazard of explosion must be taken into account." In addition, the notation (Combustible Dust) will appear in other NIOSH publications.
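A worked illustration of the 10%-of-LEL default described above follows; the LEL used is hypothetical, since LELs are chemical-specific:

```python
# Illustrative only: computing the 10%-of-LEL default IDLH value for a
# flammable vapor. The LEL value used here is hypothetical.
def lel_default_idlh_ppm(lel_percent_by_volume: float) -> float:
    """Return 10% of the LEL, converted from % by volume to ppm."""
    lel_ppm = lel_percent_by_volume * 10_000  # 1% by volume = 10,000 ppm
    return 0.10 * lel_ppm

print(lel_default_idlh_ppm(4.0))  # LEL of 4% by volume -> 4000.0 ppm
```

In this hypothetical case, if the health-based derivation produced a value above 4,000 ppm, the 4,000-ppm (10% LEL) concentration would become the default IDLH value.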
Supplemental information is included in this CIB to provide insight into (1) the literature search strategy, (2) the scheme used to prioritize and select chemicals for which an IDLH value will be established, and (3) an overview of the analysis applied by NIOSH to develop a scientifically based approach for the selection of the UF during the derivation of IDLH values. In addition, Appendix A presents an example of the derivation of an IDLH value for chlorine, based on the scientific rationale and process outlined in this CIB. The example highlights the primary steps in establishment of an IDLH value, including a critical review of the identified human and animal data, discussion of the selection of the POD and UF, and extrapolation of the 30-minute-equivalent exposure concentration from animal toxicity data.
# Glossary
Acute Exposure: Exposure by the oral, dermal, or inhalation route for 24 hours or less.
Acute Exposure Guideline Levels (AEGLs): Threshold exposure limits for the general public applicable to emergency exposure periods ranging from 10 minutes to 8 hours. AEGL-1, AEGL-2, and AEGL-3 values are developed for five exposure periods (10 and 30 minutes, 1 hour, 4 hours, and 8 hours) and are distinguished by varying degrees of severity of toxic effects, ranging from transient, reversible effects to life-threatening effects. AEGLs are intended to be guideline levels used during rare events or single once-in-a-lifetime exposures to airborne concentrations of acutely toxic, high-priority chemicals. The threshold exposure limits are designed to protect the general population, including the elderly, children, or other potentially sensitive groups that are generally not considered in the development of workplace exposure recommendations.
Acute Reference Concentration (RfC): An estimate (with uncertainty spanning perhaps an order of magnitude) of a continuous inhalation exposure for an acute duration (24 hours or less) of the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime. It can be derived from a NOAEL, LOAEL, or benchmark concentration, with uncertainty factors (UFs) generally applied to reflect limitations of the data used. Generally used in USEPA noncancer health assessments.
Acute Toxicity: Any poisonous effect produced within a short period of time following an exposure, usually 24 to 96 hours.
Acute Toxicity Test: Experimental animal study to determine what adverse effects occur in a short time (usually up to 14 days) after a single dose of a chemical or after multiple doses given in up to 24 hours.
Adverse Effect: A substance-related biochemical change, functional impairment, or pathologic lesion that affects the performance of an organ or system or alters the ability to respond to additional environmental challenges.
# Analytical (Actual) Concentration:
The test article concentration to which animals are exposed (i.e., the concentration in the animals' breathing zone), as measured by analytical (GC, HPLC, etc.) or gravimetric methods. The analytical or gravimetric concentration (not the nominal concentration) is usually used for concentration response assessment.
# Assigned Protection Factor (APF):
The minimum anticipated protection provided by a properly functioning respirator or class of respirators to a given percentage of properly fitted and trained users.

Gestation: Pregnancy, the period of development in the uterus from conception until birth.
Hazard: A potential source of harm. Hazard is distinguished from risk, which is the probability of harm under specific exposure conditions.
Healthy Worker Effect: Epidemiological phenomenon observed initially in studies of occupational diseases: workers usually exhibit lower overall disease and death rates than the general population, due to the fact that elderly individuals and those with significant pre-existing illness are less likely to be active in the workforce than healthy adults. Death rates in the general population may be inappropriate for comparison with occupational death rates, if this effect is not taken into account.
Immediately Dangerous to Life or Health (IDLH) condition: A situation that poses a threat of exposure to airborne contaminants when that exposure is likely to cause death or immediate or delayed permanent adverse health effects or prevent escape from such an environment.
# IDLH value:
A maximum (airborne concentration) level above which only a highly reliable breathing apparatus providing maximum worker protection is permitted. IDLH values are based on a 30-minute exposure duration.
# Implantation:
The process by which a fertilized egg implants in the uterine lining, typically several days following conception, depending on the species.
Inhalation Reference Concentration (RfC): An estimate (with uncertainty spanning perhaps an order of magnitude) of a continuous inhalation exposure for a chronic duration (up to a lifetime) of the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime. It can be derived from a NOAEL, LOAEL, or benchmark concentration, with UF generally applied to reflect limitations of the data used. Generally used in USEPA noncancer health assessments.
Internal Dose: A dose denoting the amount absorbed without respect to specific absorption barriers or exchange boundaries.
# International Toxicity Estimates for Risk Database (ITER):
A free Internet database of human health risk values and cancer classifications for over 600 chemicals of environmental concern, from multiple organizations worldwide.
Intraperitoneal: Within the peritoneal cavity (the area that contains the abdominal organs).
LC01: The statistically determined concentration of a substance in the air that is estimated to cause death in 1% of the test animals.

LC50: The statistically determined concentration of a substance in the air that is estimated to cause death in 50% (one half) of the test animals; median lethal concentration.

LD50: The statistically determined lethal dose of a substance that is estimated to cause death in 50% (one half) of the test animals; median lethal dose.

LDLO: The lowest dose of a substance that causes death, usually for a small percentage of the test animals.
# LEL:
The minimum concentration of a gas or vapor in air, below which propagation of a flame does not occur in the presence of an ignition source.
Lethality: Pertaining to or causing death; fatal; referring to the deaths resulting from acute toxicity studies. May also be used in lethality threshold to describe the point of sufficient substance concentration to begin to cause death.
Lowest Observed Adverse Effect Level (LOAEL): The lowest tested dose or concentration of a substance that has been reported to cause harmful (adverse) health effects in people or animals.
# Malignant:
A growth with a tendency to invade and destroy nearby tissue and spread to other parts of the body.
Maternal Toxicity: Adverse effects occurring in the mother during a developmental study.
Maternal toxicity can result in adverse effects to the fetus.
# Maximum Likelihood Concentration:
A statistical estimate of the concentration that was most likely to cause the desired effect.
# Mode of Action:
The sequence of significant events and processes that describe how a substance causes a toxic outcome. Mode of action is distinguished from the more detailed mechanism of action, which implies a more detailed understanding on a molecular level.
# Nominal Concentration:
The concentration of test article introduced into a chamber. It is calculated by dividing the mass of test article generated by the volume of air passed through the chamber. The nominal concentration does not necessarily reflect the concentration to which an animal is exposed.
# No Observed Adverse Effect Level (NOAEL):
The highest tested dose or concentration of a substance that has been reported to cause no harmful (adverse) health effects in people or animals.

Surrogate: Relatively well studied chemical whose properties are assumed, with appropriate adjustments for differences in potency, to apply to an entire chemically and toxicologically related class; for example, benzo(a)pyrene data are assumed to be toxicologically equivalent to those for all carcinogenic polynuclear aromatic hydrocarbons or are used as a basis for extrapolating to these other chemicals.
# Systemic Concentration:
The concentration in a blood or tissue arising from exposure to a substance that is absorbed and distributed throughout the body.
# Toxic Inhalation Hazard (TIH):
Gases or volatile liquids that are known or presumed on the basis of tests to be so toxic to humans as to pose a hazard to health in the event of a release during transportation, determined by DOT.
# Toxicity:
The degree to which a substance is able to cause an adverse effect on an exposed organism.
Toxicology: Scientific discipline involving the study of the actual or potential danger presented by the harmful effects of substances (poisons) on living organisms and ecosystems, of the relationship of such harmful effects to exposure, and of the mechanisms of action, diagnosis, prevention, and treatment of intoxications.
Tumor: An abnormal mass of tissue that results from excessive cell division that is uncontrolled and progressive. Tumors perform no useful body function. Tumors can be either benign (not cancerous) or malignant (cancerous).
Uncertainty Factors: Mathematical adjustments applied to the POD when developing IDLH values. The UFs for IDLH value derivation are determined by considering the study and effect used for the POD, with further modification based on the overall database.
# Weight of Evidence (Toxicity):
Extent to which the available biomedical data support a conclusion, such as whether a substance causes a defined toxic effect (e.g., cancer in humans), or whether an effect occurs at a specific exposure level.
# Workplace Environmental Exposure Levels (WEELs):
Exposure levels that provide guidance for protecting most workers from adverse health effects related to occupational chemical exposures expressed as a TWA or ceiling limit.
# Background
The concept of using respirators to protect workers in situations that are immediately dangerous to life or health was discussed at least as early as the 1940s. The following is from a 1944 U.S. Department of Labor (DOL) bulletin:
The situations for which respiratory protection is required may be designated as (1) ...

The Occupational Safety and Health Administration (OSHA) defines an IDLH concentration in the hazardous waste operations and emergency response regulation as follows:
An atmospheric concentration of any toxic, corrosive or asphyxiant substance that poses an immediate threat to life or would interfere with an individual's ability to escape from a dangerous atmosphere.
In the OSHA regulation on "permit-required confined spaces," an IDLH condition is defined as follows:
Any condition that poses an immediate or delayed threat to life or that would cause irreversible adverse health effects or that would interfere with an individual's ability to escape unaided from a permit space. Note: Some materials (e.g., hydrogen fluoride gas and cadmium vapor) may produce immediate transient effects that, even if severe, may pass without medical attention, but are followed by sudden, possibly fatal collapse approximately 6 to 24 hours after exposure. The victim "feels normal" from recovery from transient effects until collapse. Such materials in hazardous quantities are considered to be "immediately dangerous to life or health."

In the current respiratory protection standard, OSHA states that an IDLH condition is as follows:
An atmosphere that poses an immediate threat to life, would cause irreversible adverse health effects, or would impair an individual's ability to escape from a dangerous atmosphere.
As part of this standard, additional guidance is provided by OSHA that dictates the type and application of respirators in IDLH conditions. Specific information that is provided in the respiratory protection standard requires:
- A trained standby person be present with suitable rescue equipment when self-contained breathing apparatus or hose masks with blowers are used in IDLH atmospheres.

During the Standards Completion Program (SCP), IDLH values could not be determined for some substances because of a lack of relevant toxicity data, and therefore, the designation "UNKNOWN" was used in the IDLH value listing. For most of these substances, the concentrations above which only the "most protective" respirators were allowed were based on assigned protection factors that ranged from 10 to 2,000 times the PEL, depending on the substance. There were also 10 substances (e.g., n-pentane and ethyl ether) for which it was determined only that the IDLH values were in excess of the lower explosive limits (LELs). Therefore, the LEL was selected as the IDLH value, with the designation "LEL" added in the IDLH value listing. For these substances, only the "most protective" respirators were permitted above the LEL in the SCP draft technical standards.
For 14 substances (e.g., beryllium and endrin ), the IDLH values determined during the SCP were greater than the concentrations permitted on the basis of assigned respiratory protection factors. In most instances the IDLH values for these substances were set at concentrations 2,000 times the PEL.
# Update of the IDLH Values in 1994
The NIOSH definition for an IDLH condition, as given in the NIOSH Respirator Decision Logic, is a situation "that poses a threat of exposure to airborne contaminants when that exposure is likely to cause death or immediate or delayed permanent adverse health effects or prevent escape from such an environment." It is also stated that the purpose of establishing an IDLH value is to "ensure that the worker can escape from a given contaminated environment in the event of failure of the respiratory protection equipment."
The respirator decision logic uses an IDLH value as one of several respirator selection criteria. "Highly reliable" respirators (i.e., the most protective respirators) would be selected for emergency situations, firefighting, exposure to carcinogens, entry into oxygen-deficient atmospheres, entry into atmospheres that contain a substance at a concentration greater than 2,000 times the NIOSH recommended exposure limit (REL) or OSHA PEL, and entry into IDLH conditions. These "highly reliable" respirators include either a self-contained breathing apparatus (SCBA) that has a full face piece and is operated in a pressure-demand or other positive-pressure mode or else a supplied-air respirator that has a full face piece and is operated in a pressure-demand or other positive-pressure mode in combination with an auxiliary SCBA operated in a pressure-demand or other positive-pressure mode.
When the IDLH values were developed in the mid-1970s, only limited toxicological data were available for many of the substances.

Comparison with other published short-term exposure limits/values is useful for several reasons:

- It is useful for verifying that all key data and scientific issues are considered and thus serves as one step in verifying that a robust literature search has been completed.
- It assists in identifying critical issues with study design, methodology, or results for critical studies that must be considered in developing an IDLH value.
- In some cases, alternative exposure limits/ values may aid in determining a potential range for the IDLH value (after taking into account the methodology differences used to develop various short-term limits/values), as described later in this section.
Because the documentation for the IDLH values is intended to be a concise summary document, NIOSH incorporates in the IDLH documentation information on the acute effects of chemicals and selected short-term limits/values from other in-depth peer-reviewed assessments, for comparison purposes. Two of the AEGL tiers are defined as follows:

- AEGL-2 is the airborne concentration (expressed as ppm or mg/m3) of a substance above which it is predicted that the general population, including susceptible individuals, could experience irreversible or other serious, long-lasting adverse health effects or an impaired ability to escape.
- AEGL-3 is the airborne concentration (expressed as ppm or mg/m3) of a substance above which it is predicted that the general population, including susceptible individuals, could experience life-threatening health effects or death.
Airborne concentrations below the AEGL-1 represent exposure levels that could produce mild and progressively increasing irritation or asymptomatic, non-sensory effects, such as non-disabling odor and taste. With increasing airborne concentrations above each AEGL, there is a progressive increase in the likelihood of occurrence and the severity of effects described for each corresponding AEGL.
Although the AEGL values represent threshold levels for the general public, including susceptible subpopulations, such as infants, children, the elderly, persons with asthma, and those with other illnesses, it is recognized that individuals, subject to unique or idiosyncratic responses, could experience the effects described at concentrations below the corresponding AEGL.
Like the IDLH value, the AEGL-2 is designed to protect from irreversible or other serious effects and escape-impairing effects. Thus, the effects that are the basis for the AEGL-2 closely match those of interest for the IDLH value. In addition, the AEGLs include a 30-minute value, which is the same duration of interest for the IDLH values. One significant difference between the IDLH value and the AEGL-2 is that the AEGL-2 is designed to protect the general population, including potentially sensitive subpopulations (i.e., children, elderly, and individuals with pre-existing health impairments). IDLH values are designed for worker populations, which traditionally exclude the most sensitive subpopulations.
This assumption is based on the consideration that there would be a smaller likelihood for significant inclusion of specific sensitive subpopulations in the population of working adults. In addition, the selection of the critical effect (health endpoint) and interpretation of the severity of the health impact to the population of interest (in this case a worker population in a high-risk environment) may be different than that used for the AEGL-2. This means that given the same set of data, the IDLH value will often be in the range of the 30-minute AEGL-2 but will vary somewhat because of the fundamental differences between the approaches applied to establish AEGL values and IDLH values for a chemical. The IDLH value is usually below the 30-minute AEGL-3, since, for most chemicals, serious or escape-impairing effects relevant for IDLH values occur at concentrations below the lethality threshold.

In light of these considerations, recent AEGL-2 and AEGL-3 values can provide a rough gauge for identifying a potential range for the IDLH value. Exceptions may occur, partially because the AEGL process follows fairly strict methodology guidelines, including the use of default approaches in the absence of chemical-specific data, whereas the process for developing IDLH values relies heavily on the overall weight of evidence, with limited use of default procedures. The extensive AEGL documentation for each chemical has been thoroughly reviewed by expert committees and is often a useful resource for de novo analyses. In addition, the AEGL documentation includes detailed analysis of all key studies, often including calculation of the value of the ten Berge exponent n; for a detailed description of the ten Berge exponent, see Section 3.5-Time Scaling.
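For reference, the concentration-time relationship commonly used for such time scaling can be written as follows; the worked numbers below are purely illustrative, and the exponent n is chemical-specific:

```latex
% ten Berge concentration-time relationship: C^n * t = k.
% Scaling a t-minute exposure concentration C_t to its 30-minute equivalent:
\[
  C^{n}\, t = k
  \qquad\Longrightarrow\qquad
  C_{30} = C_{t}\left(\frac{t}{30}\right)^{1/n}
\]
% Illustrative example (hypothetical values): a 60-minute effect level of
% 100 ppm with n = 2 gives C_30 = 100 * (60/30)^{1/2}, approximately 141 ppm.
```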
The AEGL values are derived by the NAC/AEGL Committee, which is a Federal Advisory Committee Act (FACA) committee established to identify, review, and interpret relevant toxicologic and other scientific data and to develop AEGL values for high-priority, acutely toxic chemicals.
# Emergency Response Planning Guidelines
Emergency Response Planning Guidelines (ERPGs) are developed by the American Industrial Hygiene Association (AIHA) for emergency planning and are intended as health-based guideline concentrations for single exposures to chemicals [AIHA 2006, 2008]. These guidelines (i.e., the ERPG documents and ERPG values) are intended for use as planning tools for assessing the adequacy of accident prevention and emergency response plans, including transportation emergency planning, and for developing community emergency response plans.
As with AEGLs, there are three ERPG guidance concentration levels designed for community protection . However, ERPGs are derived for only single-exposure durations of 1 hour. Each of the three levels is defined and briefly discussed below:
- ERPG-1: The maximum airborne concentration below which it is believed that nearly all individuals could be exposed for up to one hour without experiencing other than mild, transient adverse health effects or without perceiving a clearly defined objectionable odor.

The ERPG-1 identifies a level that does not pose a health risk to the community but that may be noticeable because of slight odor or mild irritation. In the event that a small, nonthreatening release has occurred, the community could be notified that they may notice an odor or slight irritation but that concentrations are below those which could cause unacceptable health effects. For some materials, because of their properties, there may not be an ERPG-1. Such cases would include substances for which sensory perception levels are higher than the ERPG-2 level. In those cases, the ERPG-1 level would be given as "Not Appropriate." It is also possible that no valid sensory perception data are available for the chemical. In these cases, the ERPG-1 level would be given as "Insufficient Data."
- ERPG-2: The maximum airborne concentration below which it is believed that nearly all individuals could be exposed for up to one hour without experiencing or developing irreversible or other serious health effects or symptoms that could impair an individual's ability to take protective action.
Above ERPG-2, there may be significant adverse health effects, signs, or symptoms for some members of the community that could impair their ability to take protective action. These effects might include severe eye or respiratory irritation, muscular weakness, central nervous system (CNS) impairments, or serious adverse health effects.
- ERPG-3: The maximum airborne concentration below which it is believed that nearly all individuals could be exposed for up to one hour without experiencing or developing lifethreatening health effects.
The ERPG-3 level is a worst-case planning level, above which there is the possibility that some members of the community may develop life-threatening health effects. This guidance level could be used to determine the airborne concentration of a chemical that could pose life-threatening consequences should an accident occur. This concentration could be used in planning stages to project possible levels in the community. Once the distance from the release to the ERPG-3 level is known, the steps to mitigate the potential for such a release can be established.
Like the IDLH value, the ERPG-2 is designed to protect from irreversible or other serious and escape-impairing effects and therefore is based on effects similar to those considered as the basis for IDLH values. Like the IDLH values, ERPGs are for acute exposure, but they are based on a 1-hour rather than 30-minute exposure. All other things being equal, this would mean that ERPG-2 values will generally be lower than the corresponding IDLH values, since the potential exposure time for the ERPG is higher. Moreover, even though ERPGs are developed by an occupational health organization, ERPGs are more like the AEGLs in that they are designed to protect the general population, and thus susceptible populations are more of a consideration for ERPGs than for IDLH values.
# Occupational Exposure Limits
OELs are derived by various governmental, nongovernmental, and private organizations for application to repeated or daily worker exposure situations. For example, in the United States, OELs are developed by several organizations. Examples of such organizations and their respective OEL values include: NIOSH RELs, OSHA PELs, MSHA PELs, ACGIH TLVs®, and AIHA WEELs®. Although the exact definition varies among organizations (see Glossary), the general intent of OELs is to identify airborne concentrations of substances in the air to which all or nearly all workers can be exposed on a repeated basis for a working lifetime without adverse health effects. OELs are developed on the basis of available human data (such as results from epidemiologic studies or controlled human exposure studies), animal toxicologic data, or a combination of human and animal data. The health basis on which exposure limits are established may differ from substance to substance; protection against impairment of health may be a guiding factor for some, whereas reasonable freedom from irritation, narcosis, nuisance, or other forms of stress may form the basis for others. For most OELs, health impairment refers to effects that shorten life expectancy, compromise physiological function, impair the capability of resisting other toxic substances or disease processes, or impair reproductive function or developmental processes. Alternative considerations, such as technological feasibility, analytical achievability, and economic impact, are often included during the establishment of an OEL, based on the mandate of the organization deriving the exposure limit. For this reason, it is important to review the support documentation for any OEL to determine its basis (i.e., health endpoint versus alternative endpoint) and intended purpose.
OELs are guidelines (or regulatory standards, if mandated by OSHA and MSHA) intended for use in the practice of industrial hygiene, for the control of potential workplace hazards. OELs are not intended for use in other situations, such as the evaluation or control of ambient air pollution, or for estimating the toxic potential of continuous uninterrupted exposures or other exposure scenarios involving extended work periods, or as proof of existing disease or physical conditions. OELs neither clearly delineate between safe and dangerous concentrations nor serve as a relative index of toxicity.

For some endpoints, the severity of the effects forming the basis of short-term exposure limits (STELs) may be less than that for the IDLH value. For example, mild irritation that would not be escape-impairing and mild narcosis that affects work efficiency but is not escape-impairing could be the bases for a STEL but would be considered below the threshold of interest for an IDLH value. Thus, depending on the nature of the effect caused by the chemical, the IDLH value may or may not be comparable to a STEL value for the same substance.
# Other Acute Exposure Limits/Values
A number of other governmental agencies and organizations also develop, or have developed, acute inhalation exposure limits/values intended to address various applications, exposed populations, and durations. These include acute exposure limits/values listed in Table 2-2.
Documentation for acute exposure limits/values from these selected organizations is reviewed and considered if it is deemed to provide specific insights that impact the development or interpretation of the IDLH value. For example, acute exposure limits/values from other government agencies and organizations might be included in the documentation for IDLH values if they are more recent or have unique data not available in other sources.
# Criteria for Determining IDLH Values
A weight-of-evidence approach based on scientific judgment is used in the IDLH methodology, both for evaluating the quality and consistency of the scientific data and in extrapolating from the available data to the IDLH value. The weight-of-evidence approach refers to the critical examination of all the available data from diverse lines of evidence and deriving a scientific interpretation based on the collective body of data, including its relevance, quality, and reported results. This is in contrast to a purely hierarchical (or strength-of-evidence) approach, which would use rigid decision criteria for selecting a critical adverse effect concentration and applying default uncertainty factors (UFs) to derive the IDLH value. The documentation of the IDLH value for each chemical is not intended to be a comprehensive review of all the available studies; instead, it focuses on the key data, decision points, and scientific rationale integrated into the overall weight of evidence applied to derive the IDLH value for a chemical of interest. An example of the documentation for development of an IDLH value is provided in Appendix A, which explains the logic and rationale behind the derivation of the IDLH values for chlorine (CAS# 7782-50-5).
Because IDLH values are often developed from limited data, the process for developing a value often applies data from multiple lines of evidence rather than a single key high-quality study. Overall, the following approach is used for deriving IDLH values:
- Critical review of human and animal toxicity data to identify potential relevant studies and characterize the various lines of evidence that can support the derivation of the IDLH value
- Application of duration adjustments to determine 30-minute-equivalent exposure concentrations, as well as other dosimetry adjustments as needed
- Application of a UF for each potential POD or critical adverse-effect concentration identified from the available studies to account for issues associated with interspecies and intraspecies differences, the severity of the observed effects (including concern about cancer or reproductive or developmental toxicity), and data quality or data insufficiencies
- Developing the final recommendation for the IDLH value from the various alternative lines of evidence, using a weight-of-evidence approach, from all of the data.

The use of a weight-of-evidence approach allows for the integration of all available data that may originate from different lines of evidence into the analysis and the subsequent derivation of an IDLH value. Ideally, this ensures that the analysis is not restricted to a limited dataset or a single study for a specific chemical. In particular, application of the appropriate UF to each potential POD allows for consideration of the impact of the overall dataset as well as the uncertainties associated with each potential key study in determining the final IDLH value. See Appendix A for an example of how a typical dataset is evaluated to derive an IDLH value.
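A minimal sketch of the quantitative core of this process is shown below. The studies, PODs, and UFs are hypothetical, and the final selection among candidate values remains a weight-of-evidence judgment rather than a mechanical minimum:

```python
# Illustrative only: each line of evidence yields a 30-minute-equivalent POD
# that is divided by its UF to give a candidate value; the final IDLH value
# is then selected by weight of evidence. All values here are hypothetical.
lines_of_evidence = [
    {"study": "human irritation study", "pod_30min_ppm": 30.0, "uf": 3.0},
    {"study": "animal lethality study", "pod_30min_ppm": 200.0, "uf": 30.0},
]

for ev in lines_of_evidence:
    ev["candidate_idlh_ppm"] = ev["pod_30min_ppm"] / ev["uf"]
    print(f'{ev["study"]}: candidate IDLH = {ev["candidate_idlh_ppm"]:.1f} ppm')
```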
As illustrated in the remainder of this CIB, derivation of IDLH values uses a systematic data evaluation process that gives preference for data that provide the greatest degree of confidence in the assessment. The approach describes some overall preferences that define a general data hierarchy, but the methodology allows for all of the data to be evaluated by means of a weight-of-evidence approach to develop a toxicologically meaningful IDLH value that is consistent with the dataset as a whole. Implementing such a procedure requires considerable expertise and relies heavily on weighing various lines of evidence, with vetting by multiple scientists through a rigorous peer review process. Thus, although the following sections describe general processes and priorities for use of the data, these approaches are provided as general guidance, and the focus is on interpretation of the overall database.
# Importance of Mode of Action and Weight-of-Evidence Approach
- Metabolic Toxicants: This class of chemicals acts by interfering with the cell's ability to generate and store energy and includes, for example, cyanides and azides. Initial effects of these chemicals are CNS symptoms (some similar to those noted previously for CNS depressants) and toxicity, ultimately leading to respiratory failure.
- Target Organ Toxicants: Certain organs or organ systems, such as the liver or kidney, are the site of toxicity for many chemicals. Organspecific effects are typically not evaluated in acute lethality studies. In-depth study of a single inhalation exposure may include evaluation of histopathology or clinical chemistry for certain organ systems. Also, acute poisoning incidents in humans may indicate that the liver or kidney is a target. These organ systems are frequently the most sensitive systemic targets because of the high blood flow to these organs and their capacity for metabolizing chemicals to more reactive forms.
In addition, some chemicals target specific organs or have unique systemic effects.

MOA information is used in several ways during the derivation of an IDLH value:

- A smaller UF is used when the endpoint is known to be very sensitive (e.g., cardiac sensitization in response to an epinephrine challenge, which is considered a sensitive marker of a severe effect).
- MOA information may also be used to support a flatter time extrapolation curve for sensory irritants, based on the observation that effects from such chemicals (after the first few minutes of exposure) are driven primarily by concentration and less by duration of exposure.
- MOA information indicating that the chemical targets the route of entry, with resulting effects such as eye, nose, and throat irritation, would indicate that the route-to-route extrapolation is not appropriate.
- MOA information may suggest the use of surrogates when information on the chemical of interest is limited or when a breakdown product is identified as being the primary cause of toxicity. For example, HCN is commonly used as a surrogate for acetone cyanohydrin (CAS# 78-97-7), which spontaneously forms acetone and HCN. Another example is the use of hydrogen chloride (CAS# 7647-01-0) as a surrogate for chlorosilanes, which decompose when exposed to water (i.e., humidity) to form hydrogen chloride and silanols. In both cases, the surrogates (i.e., HCN and hydrogen chloride) are directly linked to the severity of the toxic effect.
- Finally, MOA information may suggest potential refinements to the dose-response analysis. For example, carbon monoxide toxicity is due to the formation of carboxyhemoglobin (COHb), and the IDLH value for carbon monoxide is based on calculated COHb levels.
# Process for Prioritization of Chemicals
In addition to serving as a crucial factor in the selection of respiratory protection equipment, IDLH values play an important role in planning work practices for potential emergency high-exposure environments in the workplace and in guiding actions by emergency response personnel during unplanned exposure events. Ideally, such guidance values would be available for all chemicals that might be present under high-exposure situations. However, the development of IDLH values is not necessary for many chemicals, such as those with very low exposure potential or those that do not exhibit significant acute toxicity via the inhalation route. A prioritization process is used by NIOSH to ensure that resources allocated to IDLH value development yield the greatest impact on risk reduction. This process takes into account both toxicity and exposure potential and is applied to a broad range of potentially hazardous chemicals (e.g., chemical warfare agents, industrial chemicals, or agrochemicals) subject to emergency or uncontrolled releases.

A qualitative algorithm is used to generate a priority ranking. This process provides initial priority rankings based on a simple approach that uses readily available sources of information. More sophisticated hazard- or risk-based ranking schemes could be used, but gathering and analyzing the data would require approximately the same effort as actually deriving an IDLH value. A complex ranking approach would not meet the primary objective: to quickly and efficiently identify the chemicals of greatest concern. The resulting priorities are further modified according to NIOSH emphasis areas. For example, chemicals can be added to or removed from the priority list on the basis of new information related to toxicity or exposure potential. The development and use of a documented prioritization process allows for more frequent updating by NIOSH of both input data and prioritization criteria to meet changing needs. The prioritization approach is described more fully in Appendix B.
# Literature Search Strategy
NIOSH performs in-depth literature searches to ensure that all relevant data from human and animal studies with acute exposures to the substance are identified. An initial literature search is done, drawing on a defined set of standard information sources. The electronic literature search results are screened for relevant articles, and a bibliography of relevant literature is compiled that identifies studies for retrieval and review. Peer-reviewed toxicology reviews are also examined, including those identified by searching the databases and organization websites. For each study, key information is compiled, including the duration of the exposure.
Once this information is compiled, critical effect levels are adjusted to a 30-minute-equivalent concentration to derive a POD estimate for each study or study endpoint. Through the application of the weight-of-evidence approach described in this document, the critical study that contributes most significantly to the qualitative and quantitative assessment of risk is selected as the basis of an IDLH value. Appendix A provides an example of how such information is compiled and used in the derivation of the IDLH value for chlorine (CAS# 7782-50-5). The weight given to each study in selection of a final POD is based on the reliability of the reported findings (as determined from an assessment of study quality), the relevance of the study type for predicting human effects from acute inhalation exposure, and the estimated 30-minute adjusted effect level.
# Study Quality Considerations
For toxicology studies, quality considerations that affect the reliability of each study include the key elements of the study design and the adequacy of study documentation. For example, such aspects of study quality might include the following:
- Relevance of the exposure regimen to a single 30-minute inhalation exposure
- Quality of atmosphere generation system and analytical techniques used to assess exposure conditions
- Degree of evaluation of toxic endpoints
- Number of animals used and relevance of the test species to humans.
Other considerations for evaluation of study quality include the reliability of the cited data source, whether the study adhered to or was equivalent to current standards of practice (e.g., USEPA or Organisation for Economic Co-operation and Development test guidelines), and whether good laboratory practices (GLPs) were followed. These considerations are evaluated for each study according to the general concepts outlined by Klimisch et al. Although a single authoritative guide to such study quality evaluation for epidemiology studies is not available, human effects data studies are judged on the basis of current standards of practice for conducting epidemiology or clinical studies [USEPA 1994; Federal Focus Inc. 1995].
# Study Relevance Considerations
The weight-of-evidence approach requires a critical evaluation of each study as to its relevance to the ultimate goal of the IDLH value derivation: to develop a scientifically based estimate of the 30-minute human threshold concentration for severe, irreversible, or escape-impairing effects. The methodology for developing IDLH values follows a hierarchical approach based on the following preference for data:
- Acute human inhalation toxicity data
- Acute animal inhalation toxicity data
- Data for longer-term inhalation studies
- Inhalation data for analogous chemicals (i.e., toxicological surrogates)
- Acute animal oral toxicity data.

This hierarchy is applied within an overall weight-of-evidence approach that considers study reliability, quality (as discussed in Section 3.4.1), relevance, and the magnitude of the observed effect levels. The evaluation of study relevance includes the type and severity of the effects observed, study duration, and route of exposure.
Other considerations that will be addressed during the selection of the key data include the following:
- Primary versus secondary sources.
- Peer-reviewed versus non-peer-reviewed studies.
The term primary data refers to information obtained directly from original studies and reports, whereas secondary data refers to information summarized within reviews and monographs. Primary data are given more weight within the derivation of IDLH values, whereas secondary data are used to provide background and supporting information. An exception to this may occur when critical primary data are unobtainable and an IDLH value cannot be derived without being based on data contained in a secondary source. In such cases, the IDLH value may be based on the information contained within the secondary data source. Some secondary sources provide greater value than other sources. For example, authoritative secondary sources might include robust toxicity profiles that have undergone extensive review, such as the ATSDR Toxicological Profiles or EPA IRIS Toxicological Reviews.
Peer-reviewed data are generally preferred over data from non-peer-reviewed sources as the basis of an IDLH value. For this reason, peer-reviewed data take precedence over non-peer-reviewed data within the IDLH methodology. Exceptions are made when issues with the peer-reviewed data are identified or if non-peer-reviewed studies are determined to be of higher quality. Non-peer-reviewed data may take precedence over peer-reviewed data in circumstances such as these:
- Non-peer-reviewed studies used standardized or guideline-compliant protocols, but available peer-reviewed studies used non-standardized protocols.
- The toxic effects reported in non-peer-reviewed studies align better with the health endpoints of interest (e.g., escape-impairing effects, irreversible effects, or lethality) than do the effects reported in peer-reviewed studies (e.g., mild irritation).
- Non-peer-reviewed studies demonstrate greater biological and statistical robustness, owing to larger sample size, selection of test species, or overall study design, in comparison with peer-reviewed studies.
Ultimately, the basis of an IDLH value will result from the weight-of-evidence approach incorporated into the CIB that reflects the relative strengths and weaknesses of all the data.
# Relevance of the Type and Severity of the Effect
# General considerations in identifying the severity of effects for IDLH derivation
Relevance of the effect is evaluated in the context of the goal for deriving an IDLH value (i.e., to develop a high-confidence estimate of the 30-minute human threshold concentration for severe, irreversible, or escape-impairing effects). Studies that identify with good precision the actual threshold for such effects are rare; therefore, usually it is necessary either to extrapolate from an effect level that is above a threshold, by relying on a lowest observed adverse effect level (LOAEL) for severe or escape-impairing effects, or to use a lower-bound estimate of the threshold by relying on a no observed adverse effect level (NOAEL) for severe or escape-impairing effects. In some cases, concentration modeling can be used to further refine such estimates on the basis of actual study concentrations. All of the data for effects relevant to the IDLH are evaluated and used in this effort, including data on mortality, severe or irreversible effects, and escape-impairing effects. Data on exposure levels causing less severe effects, which are below the threshold of interest, are useful as estimates of the NOAEL for severe effects or escape impairment. Together, these data can describe the exposure-response relationship for the chemical of interest, which compares the estimated exposure concentration to the reported effects. Having an understanding of this relationship allows the potential region of the threshold concentration to be more accurately determined for the most sensitive severe or escape-impairing effects. Where the data support concentration-response modeling, a benchmark concentration (BMC) can be calculated, expressed as either a central estimate or the 95% lower-bound confidence limit on the BMC (BMCL). Thus, a BMCL05 is the estimated 95% lower confidence bound on the concentration associated with a 5% increased lethality response above controls.
Such model-calculated values are preferred over LCLO values, because they are not dependent on the actual concentrations tested and reflect the response at each concentration. Use of a lower confidence limit (i.e., the BMCL) also has the advantage of taking into account the uncertainty in the data and the statistical power of the study. Frequently, the BMCL05 (i.e., the lower 95% confidence limit on the concentration associated with a 5% response) and the BMC01 (i.e., the central tendency estimate of the concentration associated with a 1% response) are both calculated for lethality data, and the lower value is used as the lethality threshold. The lower value is often the BMCL05, due to the relatively wide confidence limits associated with small sample sizes. An extensive discussion of the application of benchmark dose (BMD) methods within the development of acute emergency response guidelines, covering key considerations, shortcomings, and uncertainty in the process, has been included in the AEGL SOP. An alternative approach for estimating a nonlethal exposure level from LC50 values has been applied in the AEGL methodology. This approach uses 1/3 of the LC50 value as the POD to estimate the boundary between the lethality threshold and a non-lethal exposure level. When compared with LC01 values and BMCL05 values for selected chemicals, 1/3 of the LC50 value generally resulted in lower estimates of a non-lethal threshold; thus, it is a health-protective approach. Although the use of lethality data as the basis of an IDLH value is not ideal, the absence of concentration-response data may require the use of LC50 values as a POD.
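The selection among lethality-based PODs described above reduces to simple arithmetic once the candidate estimates are in hand. The following Python sketch illustrates that logic; the function name and example values are hypothetical, and an actual derivation rests on the full weight-of-evidence review rather than a formula.

```python
def lethality_pod(bmcl05=None, bmc01=None, lc50=None):
    """Select a lethality-threshold POD (hypothetical helper).

    Preference follows the text above: model-based estimates
    (BMCL05 and BMC01, taking the lower of the two) are preferred;
    1/3 of the LC50 is the fallback when concentration-response
    modeling is not possible. All inputs are 30-minute-equivalent
    concentrations in the same units (e.g., ppm).
    """
    modeled = [c for c in (bmcl05, bmc01) if c is not None]
    if modeled:
        return min(modeled)   # lower of BMCL05 and BMC01
    if lc50 is not None:
        return lc50 / 3.0     # AEGL-style non-lethal estimate
    raise ValueError("no lethality data supplied")

# Illustrative values only: BMCL05 = 120 ppm, BMC01 = 150 ppm -> POD = 120 ppm
print(lethality_pod(bmcl05=120.0, bmc01=150.0))
```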
Although estimates of a lethality threshold are preferred over other measures of lethal concentrations, in many cases the only available data from acute lethality studies are LC50 values (i.e., concentrations associated with a 50% mortality incidence).† If LC50 estimates are available for multiple species, then the lowest reliable LC50 value in the most relevant animal species is used for extrapolation to predict human response. If no data are available that favor the use of one animal species over another, then the most sensitive species is used after considering study quality. Multiple LC50 values may also be available from a single study, including values for each sex individually and for the two sexes combined. In such cases, the data are evaluated for any clear difference between the sexes. If a clear difference exists, the LC50 from the more sensitive sex is used. If there is no clear difference, the combined LC50 value is used, since the combined data provide higher statistical power.
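The species and sex selection rules above can likewise be expressed compactly. This is a minimal sketch with hypothetical names and data; in practice, judging whether the sexes "clearly differ" and screening study quality are expert decisions, not code.

```python
def select_animal_lc50(lc50_by_species, sexes=None, clear_sex_difference=False):
    """Apply the LC50 selection rules described above (hypothetical helper).

    lc50_by_species: reliable LC50 values (ppm) keyed by species; absent
    other information, the most sensitive (lowest) value is carried forward.
    sexes: optional dict with 'male', 'female', and 'combined' values from a
    single study; the more sensitive sex is used only when the sexes clearly
    differ, otherwise the combined value (higher statistical power).
    """
    if sexes is not None:
        if clear_sex_difference:
            return min(sexes["male"], sexes["female"])
        return sexes["combined"]
    return min(lc50_by_species.values())

# Illustrative values only: rat 2,900 ppm vs mouse 1,800 ppm -> 1,800 ppm
print(select_animal_lc50({"rat": 2900.0, "mouse": 1800.0}))
```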
# Consideration of escape-impairing effects
For effects other than mortality, reported health effects in both human and animal studies are classified as severe, irreversible, or escape-impairing. Identifying which effects may be escape-impairing is complicated by the fact that observed signs and symptoms in animals may differ from those expected to occur in humans. For example, the same underlying MOA that manifests as changes in respiration rate, nasal discharge, or altered activity level in an acute toxicity test in animals may be reported as intolerable irritation in humans. For this reason, guidance was developed that allows for more consistent assignment of the comparative severity of observed effects (i.e., severe and irreversible versus non-severe; escape-impairing versus non-escape-impairing) for commonly observed adverse effects used as the basis of IDLH values. Appendix C provides the guidelines for classifying effects commonly seen in acute animal studies.
Generally, basing IDLH values on effects that can impair escape relates to consideration of irritation responses (e.g., severe eye burning or coughing) or impacts on the nervous system (e.g., headache, dizziness, drowsiness), although other effects (e.g., cardiovascular or gastrointestinal tract effects) may also be considered, when warranted. To facilitate a consistent approach, qualitative descriptions of severity have been developed with study results assigned to one of three categories: mild, moderate, or severe. The severity and the type of the effect are considered in determining whether escape impairment is likely. For example, moderate to severe eye irritation, but not mild irritation, is generally considered an appropriate basis for an IDLH value based on escape impairment. For effects on the CNS, narcosis or moderate dizziness is considered sufficiently adverse to impair escape, whereas effects such as headache are generally not considered as an adequate basis for the IDLH value unless described in the study as debilitating or occurring with other symptoms that directly impaired vision or mobility.
Additional consideration is needed for screening assays, such as the respiratory depression 50% (RD50) assay and cardiac sensitization tests. The RD50 assay is a sensitive measure of sensory irritation, which occurs due to stimulation of trigeminal nerve endings in the cornea and nasal mucosa; the measured response is a decrease in respiratory frequency that occurs in some laboratory animals when they are exposed to chemical irritants. The RD50 value is considered as part of the overall weight of evidence and can be used to support the selection of a POD from other studies that identified the concentration that caused clinical signs of irritation or generated histopathologic changes consistent with moderate or severe irritant effects. The RD50 value can also be used as the POD if no reliable LOAEL is available. However, the LOAEL is preferred over the RD50 value as a POD because of uncertainties in relating the respiratory depression response in rodents to potential clinical or tissue changes in humans that would be correlated with severe irritation in humans [Bos et al. 1992, 2002].
Cardiac sensitization is another sensitive endpoint that serves as the basis of some IDLH values. This endpoint reflects a serious effect in humans, which is characterized by the sensitization of the heart to arrhythmias. Cardiac sensitization can occur from exposure to some hydrocarbons and hydrocarbon derivatives, which make the mammalian heart abnormally sensitive to epinephrine. This can result in ventricular arrhythmias and, in some cases, can lead to sudden death. The arrhythmia results from the hydrocarbon potentiating the effect of endogenous epinephrine (adrenalin), rather than from a direct effect of exposure to the hydrocarbon. As described by NAS, "the mechanism of action of cardiac sensitization is not completely understood but appears to involve a disturbance in the normal conduction of the electrical impulse through the heart, probably by producing a local disturbance in the electrical potential across cell membranes."
Cardiac sensitization is determined by injecting the test animal (usually dogs, but rodents are also used) with epinephrine to establish a background (control) response, followed by an injection of epinephrine during exposure to the chemical of interest. Different doses of epinephrine are often tested for the initial injection, and the dose of epinephrine chosen is the maximum dose that does not cause a serious arrhythmia. The test is very conservative, because the levels of epinephrine administered result in blood concentrations approximately 10 times those that would be achieved endogenously in dogs or humans, even under highly stressful situations. Thus, even though scenarios where IDLH values would apply would be highly stressful, the cardiac sensitization test is considered a sensitive measure of a severe effect. Cardiac sensitization is relevant to humans, but because the assay focuses on the measurement of the response to a challenge injection with epinephrine, the assay itself is very sensitive. The sensitivity of the assay is considered in the weight-of-evidence approach when selecting the POD and in the selection of the UF.
# Consideration of severe and irreversible effects
A variety of health effects may result from acute exposures that do not immediately impair escape (although over an extended time period these effects may be lethal). Severe adverse effects that are not immediately escape-impairing are evaluated on a case-by-case basis, by weighing considerations such as the need for medical treatment, the potential for altered function or disability, the potential for long-term deficits in function, and the likelihood of secondary symptoms that would be escape-impairing. These include severe, but reversible, acute effects such as hemolysis, chemical asphyxia, delayed pulmonary edema, or significant acute organ damage (e.g., hepatitis, decreased kidney function). If a chemical is suspected of generating such effects, then it is important to evaluate the design of the study to ensure that adequate time was allowed, following completion of the exposure period, for such latent effects of interest to be assessed.

Standard developmental toxicity studies are not used directly because they typically involve repeated exposures (e.g., during all of gestation or from implantation through one day prior to expected parturition), and extrapolation from studies that involve long exposure periods would introduce an unacceptable level of uncertainty. However, it is also recognized that some developmental effects can result from exposure during a critical window of development, and that the time at which the exposure is administered may be more important than exposure duration. Therefore, data from developmental studies are evaluated in the context of the overall weight-of-evidence analysis. For example, if developmental effects are seen, the data on MOA and the relative concentration response for maternal toxicity and fetal toxicity are evaluated to determine whether an increased UF is needed. Conversely, a potential IDLH value derived from systemic toxicity in the pregnant female can provide a health-protective, lower-bound estimate for the IDLH value, because the exposure duration of repeated days is much longer than the duration of interest, a single 30-minute exposure. Use of repeated-exposure studies in this manner can provide perspective on potential IDLH values derived from very high concentration acute studies where a large UF leads to relatively low IDLH values that are more than adequately protective. Information relating to key issues in the use of developmental toxicity data during the assessment of the health risks of acute exposure scenarios has been published; these publications provide supplemental resources that will be used to refine the derivation of IDLH values based on developmental toxicity data.

As noted above, acute animal toxicity studies rarely include sufficient post-exposure monitoring to be useful for cancer assessment. Even when a study is sufficient for evaluating carcinogenicity following a single exposure (e.g., Hehir et al.), such as following vinyl chloride exposure, the data are usually insufficient for a quantitative calculation of cancer risk. Therefore, concern for carcinogenicity is addressed by consideration of adding a supplemental UF (see Chapter 4). The cancer risk at the potential IDLH value can also be estimated and compared with a chosen risk level (i.e., a 1 in 1,000 excess cancer risk). The concentration corresponding to a specified risk level is not usually used as the basis for the IDLH value, because of the considerable uncertainty in extrapolating from a chronic study to a single exposure.
However, if the estimated cancer risk at the IDLH value without the supplemental UF is below 1 in 1,000, then the supplemental UF is not used.
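One way to perform this screening comparison is to prorate the single 30-minute exposure over a lifetime and apply a chronic inhalation unit risk. The sketch below is an assumed illustration of that common screening convention, not the formal NIOSH calculation; the unit risk, concentration, and proration scheme are all hypothetical.

```python
# Screening check of excess cancer risk for a single 30-minute exposure.
# Assumes linear low-dose extrapolation from a chronic inhalation unit
# risk (IUR, risk per ug/m3 of continuous lifetime exposure), prorating
# the 30-minute exposure over a 70-year lifetime. This is an assumed
# screening convention for illustration, not the formal NIOSH procedure.

LIFETIME_MINUTES = 70 * 365.25 * 24 * 60  # minutes in a 70-year lifetime

def excess_risk_30min(conc_ug_m3, iur_per_ug_m3):
    lifetime_avg_conc = conc_ug_m3 * (30.0 / LIFETIME_MINUTES)
    return iur_per_ug_m3 * lifetime_avg_conc

# Hypothetical candidate IDLH of 5.0e4 ug/m3 and IUR of 2.0e-6 per ug/m3
risk = excess_risk_30min(5.0e4, 2.0e-6)
print(f"excess risk = {risk:.1e}; supplemental UF needed: {risk >= 1e-3}")
```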
# Relevance of the Exposure Duration for Acute Studies
Acute animal inhalation studies reviewed for the derivation of the IDLH value may use treatment regimens ranging from an exposure duration as short as a few minutes (e.g., <10 minutes) to several hours (e.g., 8 hours or more). Because the intended use of the IDLH value is for the prevention of adverse effects that may occur as a result of a single exposure for 30 minutes, the derivation of an IDLH value is ideally based on:
- Studies involving exposure for 30 minutes
- Studies that have information on the threshold for rapidly occurring escape-impairing effects
- Studies that include a sufficient observation period for potential severe delayed effects.
Acute studies of durations other than 30 minutes that provide information on escape-impairing effects and severe adverse effects are also desirable and used. Although inhalation studies of durations other than 30 minutes introduce uncertainties in extrapolating effects to a 30-minute duration, they are still used after being adjusted to a 30-minute-equivalent exposure duration, as discussed in detail in Section 3.5 on Time Scaling.
It is recognized that the ideal dataset applied during the derivation of an IDLH value will consist of high-quality 30-minute inhalation studies with effects in the severity range of interest. In most cases, such datasets are unavailable. Thus, when selecting among less-than-optimal study designs to identify the most appropriate critical study and POD, a weight-of-evidence approach is used to select the critical study. For example, within a given category of studies (e.g., acute lethality studies), preference is given to high-quality studies of the duration of interest (30 minutes) or involving minimal duration extrapolation (e.g., 20-minute exposure duration is preferred over a 4-hour exposure duration). However, the relative merits of a well-done study of longer duration versus a poorly conducted 30-minute study must be considered. A well-documented weight-of-evidence decision is even more important when there are no adequate acute inhalation studies in humans or animals. In such cases, consideration of all other available data is needed, including MOA information, repeated-exposure studies, studies of exposure routes other than inhalation (e.g., oral or direct-injection dosing), and studies with other (usually structurally related) chemicals. MOA understanding is particularly important in such situations and can determine such issues as whether route-to-route extrapolation is appropriate, the impact of using data from repeated-exposure studies, and which structurally related chemicals are appropriate to use by analogy. The following examples illustrate the impact of MOA on extrapolation decisions.
# For route-to-route extrapolation:
It is inappropriate to conduct route-to-route extrapolation for irritants, because they target the route of entry.
# For duration extrapolation:
It may be appropriate to extrapolate from repeated-exposure studies for irritants, since concentration is often a more important determinant of irritation than exposure duration. Irritation effects observed on the first day of exposure during a repeated-exposure study may be used as the basis of an IDLH value.
Repeated-exposure studies that identify subchronic or chronic systemic toxicity (rather than rapid-onset clinical signs) are not used quantitatively as the basis for deriving the IDLH value. However, considerations of these other toxicity metrics are included in the overall database evaluation during the consideration of UFs and to assess the reliability of estimates derived from acute studies. For example, if a well-conducted repeated-exposure study shows no adverse effect at a given concentration, then such a finding can help to determine the lower range of potential values for an IDLH value, since single acute exposures will usually identify a higher POD. In this way, repeated-exposure studies can provide a lower bound on the range of potential IDLH values for a chemical if the database of acute studies is limited or of marginal quality.
Table 3-6 illustrates how scientific judgment is used in considering duration. In this example, only limited acute data are available for the chemical, including an RD50 study and one LC50 value. However, some information on the effects of acute exposure can be extracted from clinical signs reported for a subchronic exposure study in which exposure was for 6 hours/day, 5 days/week, for 13 weeks. Clinical signs reported at 4.9 ppm were limited to eyes half-closed during exposure, an indication of eye irritation, but at a level that is not escape-impairing. However, at the next higher exposure level (15.3 ppm), the authors reported burning of the nose and eyes, as well as olfactory lesions. Although the lesions may have been related to the repeated exposure, it is reasonable to assume that the clinical signs of burning eyes and nose were observed during the first exposure, and that these effects would be escape-impairing. After consideration of time adjustments (see Section 3.5) and application of the appropriate UF (see Chapter 4.0), the LOAEL from the repeated-exposure study was used as the basis for the IDLH value, supported by the RD50. A slightly higher IDLH value would have been calculated from the LC50, but that value was not used, since it involves more extrapolation due to the severity of the response (lethality). Direct observations from the initial exposure during the repeated-exposure study were considered more reliable than using the RD50 value directly, based on the uncertainties in interpreting the RD50 assay.
# Relevance of the Exposure Measurements
Animal inhalation studies are typically conducted using either whole-body or nose-only exposure. Both methods have strengths and limitations.
Whole-body exposure more closely simulates the situation for occupational exposure and includes the potential for exposure both via inhalation and via dermal contact with the chemical in the air. However, in rodent studies, whole-body exposure may also involve ingestion exposure that is not relevant to humans, due to grooming of fur on which the chemical has deposited. Nose-only exposure avoids the potential for ingestion exposure, but it also eliminates the potential for human-relevant dermal exposure and may place the animals under additional stress, because they are restrained during exposure. There is no default preference for one exposure scenario over the other. Instead, the studies and results should be examined to determine whether the limitations of either method preclude the use of certain studies. For example, the observation of overt gastrointestinal (GI) effects from whole-body exposure suggests the potential for confounding by ingestion. In general, both nose-only and whole-body exposures are considered together in the overall weight-of-evidence evaluation.
Well-conducted inhalation studies generally report both nominal concentrations (the concentration expected on the basis of the amount of chemical introduced into the exposure system) and the analytical concentration (the amount actually measured). The two values should be similar; if they are markedly different, the reasons and implications for the difference should be determined. Large differences may reflect difficulty in maintaining the exposure atmosphere (e.g., the chemical may be adhering to the exposure chamber walls) or other issues, and may indicate uncertain study quality. Larger differences between nominal and analytical concentrations may be seen with static exposure studies (where the chemical is introduced into the chamber at the beginning of the experiment), as opposed to dynamic studies (where the chemical is continuously circulated and the chemical concentration is actively maintained at the target level). Because the analytical concentration reflects the actual concentration to which the animals were exposed, the analytical concentration is usually used in IDLH value calculations. However, in some cases, the nominal concentration may more appropriately reflect the exposure conditions. For example, substances such as trichloromethylsilane (CAS# 75-79-6), sulfur trioxide (CAS# 7446-11-9), uranium hexafluoride (CAS# 7783-81-5), and acetone cyanohydrin (CAS# 75-86-5) react with the moisture in air to produce a variety of hydrolysis products. Table 3-7 provides examples of hydrolysis products associated with the previously listed substances. Because the observed toxicity is due to both the parent chemical and the hydrolysis products, nominal concentration is a better indicator of toxicity, since it reflects the total burden of toxic constituents, whereas analytical concentration would reflect only the concentration of the parent compound. In such cases, the decision of whether to use nominal or analytical concentrations depends on the approach that would be used for air monitoring and whether it would capture only the parent compound or the parent compound and its hydrolysis products.
Care should also be used in considering the exposure units. For example, it is appropriate to use ppm only for gases and vapors, because ppm in air refers to molecules of the chemical in air (rather than being on a weight basis). The units of mg/m³ can be used for particulates and aerosols, as well as gases and vapors. Although exposures to gases and vapors are usually reported in ppm, care is needed to ensure that units are not confused. Units of ppm can be converted to mg/m³ using the ideal gas law. At 1 atmosphere of pressure and room temperature (25°C), the conversion is as follows: mg/m³ = (ppm × molecular weight in g/mol) / 24.45, where 24.45 is the molar volume of an ideal gas in liters under these conditions.

IDLH values derived for aerosols will reflect the relevant size fraction. Specific recommendations relating to size fractions for aerosols are included in the chemical-specific IDLH support documentation when sufficient data are available. The most appropriate size fraction is driven by the nature of the acute toxicity observed. If such data are not available, the chemical-specific IDLH support documentation for the aerosol will note that the size fraction representing the greatest hazard could not be determined. In such cases, total inhalable particulate is used as the basis for the IDLH value.
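As a quick illustration of this conversion (a sketch only; the example chemical and values are for demonstration):

```python
MOLAR_VOLUME_25C_L = 24.45  # liters per mole of ideal gas at 25 C and 1 atm

def ppm_to_mg_per_m3(ppm, molecular_weight_g_per_mol):
    """Convert a gas or vapor concentration from ppm to mg/m3."""
    return ppm * molecular_weight_g_per_mol / MOLAR_VOLUME_25C_L

# Example: 10 ppm of chlorine (MW ~70.9 g/mol) is about 29 mg/m3
print(round(ppm_to_mg_per_m3(10.0, 70.9), 1))
```

The reverse conversion divides the mg/m³ value by the molecular weight and multiplies by 24.45.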
# Other Issues of Study Relevance-Use of Surrogates and Route Extrapolation
When neither human nor animal acute inhalation data are sufficient to derive an IDLH value for a chemical of interest, other approaches are considered, depending on the understanding of the MOA and availability of data. Available information on surrogates (i.e., related compounds, primary metabolites, or key breakdown products, such as secondary chemical products formed from hydrolysis due to moisture in the air) that are closely related to the chemical of interest can be used when inadequate information is available for the chemical of interest. As an example of the use of a related compound during the derivation of an IDLH value, bromine pentafluoride (CAS# 7789-30-2) and chlorine pentafluoride (CAS# 13637-63-3) differ only in the primary halogen atom. Because of their similarities, bromine pentafluoride can be used as a surrogate for chlorine pentafluoride, and the limited toxicity data available for bromine pentafluoride indicate that its toxicity is comparable to or slightly less than that of the chlorine compound. Another example is the assessment of the acute inhalation hazard of an entire chemical class on the basis of the data for a single compound; the NAS/NRC drafted AEGL values for multiple chlorosilanes and metal phosphides with use of this approach [NAS 2007, 2009]. This approach takes advantage of knowledge about the MOA and the actual form of the toxicity of related chemicals to use the entirety of the data for the class of chemicals to develop exposure values. For example, for the chlorosilanes the primary cause of the acute effect of interest (irritation) is hydrolysis in moist air to form hydrochloric acid. Thus, for the series of related chlorosilanes, the IDLH value can be derived from actual testing data for the most data-rich member of the family and by adjusting the IDLH value for other members according to the respective number of chlorine atoms released during hydrolysis. A refinement of the use of surrogate chemicals or information on classes of related chemicals is to use data on relative potency, when adequate data are available to quantitatively compare the chemical of interest with the surrogate but data for the chemical itself are not sufficient to develop an IDLH value. In such cases, the toxicity threshold is much better understood for the surrogate than for the chemical of interest, but the threshold for the chemical of interest can be adjusted on the basis of relative potency.
When a surrogate or relative-potency approach is used, it is necessary to consider the uncertainties associated with using a limited database for the chemical of interest versus the uncertainties associated with extrapolation from a surrogate chemical. An example of extrapolation from a breakdown product is the chemical reaction that causes acetone cyanohydrin to form HCN and acetone. The acute toxicity of acetone cyanohydrin is driven by exposure to an equimolar (i.e., having an equal number of moles) equivalent of HCN. Thus, the acute toxicity data for HCN can serve as a surrogate and basis of an IDLH value for acetone cyanohydrin [NAS 2002, 2005]. Use of such surrogates is not necessary when adequate information on the primary chemical is available. In addition, if a surrogate is being considered as the basis for the IDLH value, it is important to consider whether other aspects of toxicity are associated with the parent chemical and whether these aspects are adequately addressed by the surrogate. For example, acetone cyanohydrin causes irritant effects that are not seen with exposure to HCN, but the most potent escape-impairing effects are secondary to cyanide action as a metabolic toxicant. This makes HCN the most valid surrogate for acetone cyanohydrin.
If no adequate inhalation data are available for the chemical of interest or for a potential surrogate, an IDLH value may be derived by extrapolation from studies that used exposure routes other than inhalation, such as oral or intraperitoneal (i.p.) dosing studies. As noted above, this route-to-route extrapolation is appropriate only if the effect of interest is systemic (i.e., involves absorption into the systemic blood circulation for distribution to an internal target tissue). Route extrapolation (e.g., from oral or i.p. dosing studies) is not appropriate if the chemical's primary relevant effects for IDLH development are as an irritant, or if it is expected to target the route of entry (i.e., the respiratory tract) as the most sensitive endpoint. The ideal approach is to use a physiologically based pharmacokinetic (PBPK) model to conduct the route-to-route extrapolation, but it is rare that such data exist (particularly for a chemical for which the inhalation data are insufficient to directly derive an IDLH value). In the absence of such a PBPK model, the approach is to estimate the air concentration at which a 70-kg worker would receive a systemic dose equivalent to that delivered in the oral or i.p. study. This conversion is a health-protective estimate of the air concentration that would result in the systemic dose, since a worker breathing at a rate of 50 liters per minute (L/min) for 30 minutes would inhale 1.5 m³ of air. The basis for this decision is discussed in greater detail in Appendix E.
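Under the stated defaults (70-kg worker, 50 L/min for 30 minutes, and equal absorption by both routes unless corrected), the conversion is a one-line calculation. The following sketch uses hypothetical numbers for illustration:

```python
BODY_WEIGHT_KG = 70.0   # default worker body weight
AIR_INHALED_M3 = 1.5    # 50 L/min x 30 min = 1.5 m3 of air

def oral_pod_to_air_conc(oral_dose_mg_per_kg, absorption_ratio=1.0):
    """Estimate the 30-minute air concentration (mg/m3) delivering the same
    systemic dose as an oral POD, assuming equal absorption by both routes
    unless a correction ratio is supplied. Hypothetical helper."""
    systemic_dose_mg = oral_dose_mg_per_kg * BODY_WEIGHT_KG * absorption_ratio
    return systemic_dose_mg / AIR_INHALED_M3

# Example: an oral POD of 15 mg/kg corresponds to ~700 mg/m3 over 30 minutes
print(oral_pod_to_air_conc(15.0))
```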
A second consideration in applying route-to-route extrapolation is the impact of first-pass metabolism. First-pass metabolism, also known as presystemic metabolism, refers to the metabolism of a chemical delivered from the GI tract directly to the liver via hepatic blood flow, before distribution to the general systemic circulation. First-pass metabolism by the liver generally decreases systemic exposure to the parent chemical following oral exposure when compared with inhalation exposure; first-pass metabolism in the respiratory tract tends to be of smaller magnitude than in the liver, resulting in greater systemic exposure to the parent chemical and less exposure of other organ systems to metabolites formed in the lungs. Quantitatively addressing the implications of first-pass metabolism is often difficult, and use of a surrogate for which inhalation data are available is considered to provide greater weight of evidence for chemicals where first-pass metabolism plays an important role. Comparing IDLH values derived from different approaches (e.g., using a surrogate versus using route-to-route extrapolation) can provide information on possible uncertainties involved and may help to set the range of reasonable IDLH values. Finally, since this approach is based on systemic dose, it assumes equal absorption via both routes (unless a separate correction is made) and ignores issues related to the physical characteristics of the chemical (e.g., gas/vapor versus particulate) and the implications of particle size and dosimetry (i.e., determination of respiratory tract region deposition fractions). Where quantitative adjustments for differing routes of exposure are uncertain, this issue is further considered in the selection of additional UFs. Additional considerations for conducting route-to-route extrapolations are described in several guidance documents.
# Time Scaling
A critical consideration in developing IDLH values is accounting for exposure duration and the extrapolation from the experimental exposure duration to the duration of interest (i.e., 30 minutes). The toxicity of airborne chemicals depends on both exposure concentration and exposure duration, as well as physicochemical properties that affect respiratory deposition and systemic absorption. Ideally, information from validated PBPK or biologically based dose-response (BBDR) models is used for time extrapolation, but such information is rarely available. In the absence of such models, simpler concentration-time relationships are used. Historically, particularly for extended exposure durations, toxicity was described as the simple product of concentration (Conc) and time, so that Conc × time = k, a constant. In other words, if Conc1 × time1 = Conc2 × time2, then the toxicity would be the same. This relationship is described as Haber's law, or Haber's rule.
The key assumption embedded in the relationship of Haber's rule is that damage (or depletion of protective tissue response) is irreversible and, therefore, that toxicity is cumulative, related to the total dose of the chemical. This assumption is generally not true for single acute exposures. For example, toxicity due to asphyxiants (e.g., argon or nitrogen) is related to the peak concentration of the chemical, rather than the cumulative dose. Sensory irritation and transient acute CNS effects may also be influenced more by the exposure concentration than the exposure duration.
Further investigation into the relationship between concentration, duration, and toxicity was conducted by ten Berge et al., who proposed the following relationship between concentration (Conc) and duration (time, t): Conc^n × t = k. These investigators examined the data on 20 irritant and systemically acting gases and vapors; the results indicated that n was ≤3 for lethality data from 18 of the 20 chemicals. This study is one of the primary published sources for values of n. Furthermore, based on the finding that an n of 3 covers 90% of the chemicals in the dataset, the default value of n for extrapolating from longer durations to shorter durations was chosen to be 3, as a health-protective approach.
The following approach is used in extrapolating across durations within the IDLH methodology:
1. No extrapolation is needed if the study of interest involved exposure for 30 minutes; the empirical data are used directly.
2. If information on the value of n is available from the original paper of ten Berge et al. or from authoritative reviews (e.g., AEGL documents), then that value is used. Note, however, that there are caveats to the use of the ten Berge data and other considerations in the choice of n. In general, a published value of n will be used directly only for studies reporting the same effect or effects related to the same underlying toxic mode of action. Use of published values of n for studies conducted in different species or for different effects is done on a case-by-case basis, with rationale provided.
3. If no value of n is available in the literature, n can be mathematically derived directly from the key studies of interest and applied with the same caveats as noted in item 2.
4. If the data are not available to support the derivation of n, then a default of 1 is used if the duration of the study of interest is less than 30 minutes, in which case the ten Berge equation defaults to Haber's rule. Conversely, if the duration of the study of interest is more than 30 minutes, then the default of 3 is used for n. This approach generally yields health-protective estimates for the 30-minute equivalent POD, as shown in Appendix E-2.
5. In limited cases, the overall dose-response data and the mode of action information may suggest that the observed acute effects are independent (or nearly independent) of exposure duration, and that exposure concentration can be used with no further duration adjustment. If the POD is used from studies of durations other than 30 minutes without adjusting to a 30-minute equivalent value via the ten Berge correction, the rationale for this decision will be described in the documentation of the IDLH value.
Additional information and illustration on the application of time scaling within the IDLH methodology are included in Appendix E-2.
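A minimal sketch of the duration adjustment described in this section is shown below, assuming the defaults in item 4; it is illustrative only and does not replace the considerations in items 2, 3, and 5 or the formal treatment in Appendix E-2.

```python
def to_30_min_equivalent(conc, duration_min, n=None):
    """Adjust an effect concentration to a 30-minute equivalent using
    C^n x t = k (ten Berge). When n is unknown, default to n = 1 for
    studies shorter than 30 minutes (Haber's rule) and n = 3 for longer
    studies, the health-protective defaults described above."""
    if duration_min == 30:
        return conc  # empirical data used directly
    if n is None:
        n = 1.0 if duration_min < 30 else 3.0
    return conc * (duration_min / 30.0) ** (1.0 / n)

# Example: a 4-hour (240-min) effect level of 100 ppm with default n = 3
# scales to 100 * (240/30)**(1/3) = 200 ppm as a 30-minute equivalent.
print(round(to_30_min_equivalent(100.0, 240), 1))
```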
# Inclusion of Safety Considerations
Safety hazards are considered during the derivation of IDLH values to ensure the protection of worker safety and health. One particular consideration in the derivation of IDLH values is the potential for explosive concentrations of a flammable gas or vapor to be reached at toxicologically relevant air concentrations. Maintaining safety considerations in the process for this methodology update is consistent with the prior method used to develop IDLH values. For gases and vapors, NIOSH has adopted a threshold of 10% of the LEL as a default basis for IDLH values, based on explosivity concerns. This threshold aligns with the airborne concentrations of a flammable gas, vapor, or mist identified by OSHA as a hazardous explosive condition. When the air concentration corresponding to 10% of the LEL is less than the health-based value derived using the approach outlined in Chapter 3, this air concentration becomes the default IDLH value, and the following hazard statement will be included in the support documentation: "The health-based IDLH value is greater than 10% of the LEL (>10% LEL) of the chemical of interest in the air. Safety considerations related to the potential hazard of explosion must be taken into account." In addition, the notation (>10% LEL) will appear beside the IDLH value within the NIOSH Pocket Guide to Chemical Hazards and other NIOSH publications.
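The default safety cap can be expressed as a simple comparison, sketched below with hypothetical values; determining the LEL and deriving the health-based candidate value are, of course, the substantive steps.

```python
def apply_lel_cap(health_based_idlh_ppm, lel_ppm):
    """Cap a health-based IDLH candidate at 10% of the LEL, per the
    default for flammable gases and vapors described above. Returns the
    IDLH value and a notation flag. Hypothetical helper for illustration."""
    cap = 0.10 * lel_ppm
    if health_based_idlh_ppm > cap:
        return cap, ">10% LEL notation applies"
    return health_based_idlh_ppm, None

# Example: health-based value 15,000 ppm; LEL 44,000 ppm (4.4% by volume)
print(apply_lel_cap(15_000, 44_000))  # -> (4400.0, '>10% LEL notation applies')
```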
For dusts, the equivalent default approach would be to use 10% of the minimum explosive concentration (MEC). However, determining the combustibility of dusts is too complex to assign a single default measure. Dust combustibility and explosivity are dictated by the relationships among substance- and scenario-specific factors, including (1) particle size distribution, (2) minimum ignition energy, (3) moisture content, (4) explosion intensity, and (5) dispersal in air. The ability to quantify combustible-dust-specific concentrations for application of an IDLH value is often limited by the absence of critical chemical-specific data, such as the MEC or the other previously identified factors. NIOSH will critically assess the explosive nature of a dust when sufficient technical data are available. If determined to be appropriate, the findings of this assessment will be incorporated in the derivation process to ensure that the IDLH value protects against both health and safety hazards.
The application of UFs is needed to account for uncertainties related to extrapolation from the concentration that caused effects in the selected toxicity study to concentrations that would be expected to be below the threshold for such effects in workers exposed for up to 30 minutes. For example, if the most appropriate POD was an LC50 value in rats from a 30-minute exposure study, then use of this value directly as the IDLH value would clearly not be acceptable, since a sub-threshold concentration for humans is needed. Dividing the selected POD, such as the LC50 value in this example, by an appropriate UF reduces the IDLH value to a concentration well below the LC50 value.
In general, the UFs need to address all key areas of uncertainty that result from extrapolating from the available studies. Most organizations that develop exposure values/limits consider the following key areas of uncertainty:
- Interspecies variability in sensitivity: This area addresses differences in sensitivity between the test species (e.g., mouse or rat) and the average human for the population of interest (i.e., in the context of IDLH application, workers).
- Human variability in sensitivity: This area addresses differences in sensitivity between the average human in the population of interest and the sensitive members of that population.
- Severity of effect: Because the IDLH value is intended to be below a concentration that will cause death or severe, irreversible, or escape-impairing effects, the UF needs to account for extrapolation from a POD that caused such responses in the selected toxicology study to a concentration below the threshold for these effects.
- Duration of exposure: Some organizations that develop exposure values/limits include consideration of the duration of the study that served as the POD in the UF determination and its relevance to the duration of interest. In the context of IDLH development, this area of uncertainty is addressed through duration adjustments of the POD rather than the explicit application of a UF.
- Other database deficiencies: When datasets available to develop IDLH values are very limited, it is necessary to account for the possibility that the available studies did not identify the most sensitive endpoint relevant to IDLH development. In such cases it is appropriate to increase the UF to account for this uncertainty.
An approach used by many organizations, such as USEPA in developing reference concentrations, involves assigning an individual UF to each of these areas of uncertainty and multiplying the UFs for these areas to derive the final cumulative UF.
The IDLH methodology is a modification of this approach that blends the rigor of full consideration of the relevant areas of uncertainty embedded in the USEPA and AEGL approaches with the flexibility to fully use the limited data from multiple lines of evidence often encountered in IDLH development. Overall, the assignment of UFs for IDLH derivation includes two steps:
1. Selection of an appropriate preliminary UF range
2. Modification of this preliminary range to select a final value
The preliminary UF ranges are based on consideration of the study design and the adverse health effect occurring at the POD. Use of a preliminary range of values helps to ensure consistency in application of UFs within the IDLH development effort for diverse chemicals. However, modification of the UF is often required on the basis of unique issues arising from the review of the database for each unique chemical. Thus, the IDLH methodology captures the need to use a consistent approach for UF application while maximizing the ability to make informed decisions based on weight-of-evidence considerations.
# The NIOSH IDLH Value Uncertainty Factor Approach
As discussed regarding the overall UF approach, the analysis focuses on the weight-of-evidence approach using all the relevant data. Thus, a range of preliminary UFs is shown for each of the typical types of effect levels that are available as a POD. However, the final UF applied is determined from the weight-of-evidence evaluation for each chemical, which allows for modifying the preliminary UF on the basis of additional considerations unique to the dataset. The preliminary UF ranges are shown in Table 4-1. The most common UFs for a given data type are shown, but the range indicates how this value is commonly adjusted up or down according to the entirety of the database, as described further in this section. The preliminary UFs are applied as multiples of 1 or 10, with use of an intermediate value of 3. The value of 3 represents one half of a log10 unit (3.16 rounded to 3), the minimum increment used for UF adjustments, reflecting the level of precision of such an approach. Although the value of 3 is used in place of 3.16 during the discussion of UFs, caution should be applied when multiplying UFs of 3 together: when two UFs of 3 are multiplied (e.g., 3 × 3), the product is 10, not 9; likewise, the product of three UFs of 3 (3 × 3 × 3) is 30, not 27.

Table 4-1. Typical* UF ranges by point of departure:
- LOAEL for an escape-impairing or irreversible effect in animals: 3 to 30
- NOAEL for an escape-impairing or irreversible effect in animals, or animal RD50: 1 to 10
- LOAEL for an escape-impairing or irreversible effect in humans: 1 to 10
- NOAEL for an escape-impairing or irreversible effect in humans: 1 to 3

Abbreviations: BMCL10 = lower confidence limit on the concentration associated with a 10% response; IDLH = immediately dangerous to life or health; LC01 = the statistically derived air concentration that caused lethality in 1% of test animals; LC50 = median lethal concentration; LCLO = lowest concentration of a substance in the air reported to cause death; LOAEL = lowest observed adverse effect level; NOAEL = no observed adverse effect level; UF = uncertainty factor.
*The typical UF range is based on the information presented in Appendix D.
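Because each UF of 3 stands for a half log10 unit, total UFs are combined on the log scale rather than by literal multiplication. The following Python sketch (hypothetical values; an illustration, not the formal NIOSH procedure) shows this convention and the final division of a POD by the total UF:

```python
def combine_ufs(*ufs):
    """Combine UFs of 1, 3, and 10 on the log10 scale, treating each 3 as
    a half-log unit (10**0.5), so that 3 x 3 -> 10 and 3 x 3 x 3 -> 30."""
    half_logs = {1: 0.0, 3: 0.5, 10: 1.0}
    total_log = sum(half_logs[u] for u in ufs)
    whole, has_half = int(total_log), (total_log % 1 != 0)
    return (3 if has_half else 1) * 10 ** whole

print(combine_ufs(3, 3))     # 10, not 9
print(combine_ufs(3, 3, 3))  # 30, not 27

# Hypothetical example: a 30-minute-equivalent animal LOAEL of 1,200 ppm
# with a total UF of 30 yields a candidate IDLH value of 40 ppm.
print(1200.0 / combine_ufs(3, 10))
```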
Selection of values other than the preliminary UF for deriving an IDLH value is common, reflecting the use of a weight-of-evidence approach and the sometimes-conflicting data from multiple lines of evidence. Common situations that lead to movement away from the preliminary UF value relate to evaluation of data for the areas of uncertainty and extrapolation noted in the prior section.
- Interspecies variability in sensitivity: If chemical-specific data are available to help determine the magnitude of the differences in species sensitivity, then such data are used to refine the size of the final UF. For example, if information about specific sensitivity due to differences in species metabolism is available, the UF applied to the POD from an animal study is adjusted accordingly (either up or down, depending on the data). If health effects data that serve as the POD are from human studies, then the UF would not need to address this area of uncertainty.
- Human variability in sensitivity: If chemical-specific data are available to help determine the magnitude of the variability in human sensitivity, then such data are used to refine the size of the final UF. If health effects data that serve as the POD are from a sensitive human group (e.g., non-smoking, young adult females in a clinical study of nasal irritation), then the UF would be smaller in addressing this area of uncertainty. Because IDLH values are used in occupational applications, the range of variability that needs to be covered in applying the UF is expected to be less than for development of exposure values/limits meant to protect sensitive members of the general public. Conversely, if the available data do not include sensitive populations (e.g., asthmatics who may be exposed to respiratory irritants), then a larger UF may be selected.
- Severity of effect: The size of the adjustment needed reflects the severity of the effect observed at the POD, as captured in the preliminary UF ranges shown in Table 4-1. The consideration of the severity of effect also addresses the slope of the concentration-response curve. Steep concentration-response curves and high-quality data may result in UFs at the lower end of the range. Steep concentration-response curves represent responses that decrease rapidly with decreasing exposure concentration, so a smaller UF may be warranted to reach the target response level than would be needed for a shallower concentration-response curve. Thus, if the concentration-response curve is very steep, a factor of 10 (rather than the preliminary UF of 30) may be applied to an LC50 value, based on consideration of the overall database, because there is less than a factor of 3 between the LC50 and the (actual or estimated) LC01 value.
- Duration of exposure: For most acute limits, including IDLH values, acute studies are typically used directly as the basis for the POD. Thus, the available studies are generally representative of the overall duration of interest (exposure for a single day or less). Further refinements to account for uncertainties in duration extrapolation, such as between a 4-hour study and the 30-minute duration of interest for IDLH development, are addressed in the time-scaling adjustment to the POD (see Section 3.5), rather than as a consideration for the UF value. However, significant uncertainties may need additional consideration if the available study is limited in design or outside the immediate duration range of interest. For example, if only repeated-exposure studies were available for a chemical to serve as the POD, and the observed effects were not clearly due only to initial acute exposures, then the use of such a POD might justify a smaller UF.
- Other database deficiencies: A UF at the higher end of the typical range (e.g., a UF of 10 instead of 3) is often used if major uncertainties or additional significant concerns are identified. If a database is very deficient, then the UF might be increased. This approach is often used if the only reliable data are lethality data from a single acute study. Other considerations for database deficiency relate to the potential for effects that were not evaluated in the available studies. For example, the higher end of the range may be used if the data indicate that the chemical is a sensory irritant and the data are insufficient to derive an IDLH value (e.g., due to inappropriate exposure durations) but indicate a large margin between concentrations causing severe irritation and those causing death. Other data gaps that may affect the size of the final UF reflect specific endpoints of concern. For example, a UF from the higher end of the range may be used if a chemical is a known or likely carcinogen or a developmental toxicant, with evidence that acute exposures may be of concern.
The examples in Appendix A highlight how these weight-of-evidence considerations are applied to select UFs and derive potential IDLH values.
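Although UF selection is a weight-of-evidence judgment rather than a calculation, a rough sketch can make the direction of each consideration concrete. The following Python fragment is illustrative only; the function, its mapping of POD types to preliminary UFs, and the adjustment rules are a simplified reading of the considerations above, not a NIOSH algorithm.

```python
# Illustrative sketch only; NIOSH publishes no code, and the mapping below is
# a simplified reading of the text (e.g., a preliminary UF of 30 for an
# animal LC50 POD, moved toward 10 when the concentration-response curve is
# steep).

def select_uf(pod_type: str,
              steep_curve: bool = False,
              sensitive_humans_tested: bool = False,
              major_data_gaps: bool = False) -> int:
    """Return an example uncertainty factor for an IDLH derivation."""
    preliminary = {"animal_LC50": 30, "animal_nonlethal": 10, "human_LOAEL": 3}
    uf = preliminary[pod_type]
    if steep_curve:
        # Less than a factor of 3 between the LC50 and LC01 -> lower end.
        uf = max(uf // 3, 1)
    if sensitive_humans_tested:
        # POD already reflects a sensitive group -> less residual variability.
        uf = max(uf // 3, 1)
    if major_data_gaps:
        # Deficient database -> higher end of the range (e.g., 10 instead of 3).
        uf *= 3
    return uf

print(select_uf("animal_LC50", steep_curve=True))  # 30 -> 10
```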
# Research Support for the NIOSH Uncertainty Factor Approach
The UF approach used for deriving IDLH values is based on a review of NIOSH research efforts, approaches used by other organizations that establish acute exposure limits/values, and other independent research.
The NIOSH approach is similar to that of other agencies in terms of the areas of uncertainty accounted for in determining the appropriate value of the final UF. Although the NIOSH approach does not assign an individual factor for each area of uncertainty, there is generally good agreement between the NIOSH UF and the UF embedded in derivation of AIHA ERPG values and the cumulative UF used for derivation of the AEGL values. As expected, there is not complete alignment between these values, because of differences in application of IDLH values versus other types of acute exposure limits.
In particular, the UF applied to the IDLH value is often smaller than for deriving the ERPG or AEGL values, which results in a larger final exposure limit for IDLH values compared to these other guidelines. For example, differences often arise because of the explicit inclusion of potentially sensitive members of the general population (e.g., children, elderly, and individuals with health impairments) during the establishment of community-based acute exposure limits, such as the ERPG and AEGL. The IDLH values do not take into consideration the potentially sensitive members of the general population because it is assumed that they will not be substantially represented in the workforce for the purposes of considering average population responses. However, in some cases such populations may be considered when a chemical has specific effects on a target population that is well-represented in the expected worker population. An example would be an agent that has significant impacts on asthmatics.
In such cases, health effects data from asthmatics who have been exposed to the agent would be appropriate for defining the POD as the basis for deriving an IDLH value.
To further verify that the preliminary ranges of the UF are supported by existing data, NIOSH conducted an analysis of acute toxicity data to determine the appropriate size of the UF for extrapolating from various points of departure to derive IDLH values that would be expected to protect from lethal, severe, irreversible, or escape-impairing effects in humans. From these data compilations for chemicals with robust datasets, the ratios between animal lethality values commonly used as the POD for developing the IDLH value (e.g., LC50 values) and the effect level for lethality or other non-lethal effects in humans were determined for each chemical. The distribution of these ratios was analyzed, and the median value and 95th percentile value for each comparison were derived (see Appendix D). The resulting median values and upper-bound estimates for these case study chemicals were used to verify that the range of total UFs adopted in the IDLH methodology adequately accounts for the value that should be applied to an animal-based endpoint to protect from severe or escape-impairing effects in humans.
The analysis found that animal lethal concentrations and human effect thresholds (both LCLO values and LOAELs for severe or escape-impairing effects) were generally correlated, such that chemicals with low animal LC50 values tended to have low human lethality thresholds and to cause severe or escape-impairing effects in humans at low concentrations. This finding was important to support the approach of developing preliminary UF ranges that could be used to address protection from nonlethal effects when extrapolating from data from acute animal studies. Additional analyses were conducted by MOA category (e.g., irritant, CNS depressant, or "other") to determine if different UF ranges could be applied on the basis of a chemical's MOA. However, statistically significant differences were not found among the MOA categories. Thus, this further refinement to the approach for developing a preliminary UF to address effect severity by MOA category has not been applied for IDLH derivation. Overall, comparison of the median values to the UF ranges in Table 4-1 showed that the most common value is typically above or in the range of the median value for the comparison dataset. This result is also consistent with other evaluations that analyzed effect-level ratios from acute toxicity studies (e.g., Rusch et al. 2009). Additional results, as well as the results of the second approach, are presented in Appendix D.
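The ratio analysis described above reduces to a small computation. The sketch below is illustrative only; the chemical names and values are fabricated placeholders, not the NIOSH dataset, and each chemical is assumed to contribute one animal-LC50/human-threshold ratio.

```python
import statistics

# Placeholder data: (animal 30-minute LC50, human severe-effect threshold),
# in the same units for each chemical. Values are fabricated for illustration.
data = {
    "chem_A": (1000.0, 50.0),
    "chem_B": (250.0, 20.0),
    "chem_C": (4000.0, 500.0),
}

ratios = sorted(lc50 / human for lc50, human in data.values())
median_ratio = statistics.median(ratios)
p95_ratio = statistics.quantiles(ratios, n=20)[-1]  # ~95th percentile

# The median and upper-bound ratios are then compared with the preliminary
# UF ranges (Table 4-1) to check that those ranges cover most chemicals.
print(median_ratio, p95_ratio)
```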
# APPENDIX A: Example IDLH Value Derivation-Chlorine

*For exposures other than 30 minutes, the ten Berge et al. relationship is used for duration adjustment (Conc^n × t = k); no empirically estimated n values were available; therefore, the default values were used: n = 3 for exposures greater than 30 minutes and n = 1 for exposures less than 30 minutes. †The selection of the UF for chlorine was based on Chapter 4.0: Use of Uncertainty Factors. The UF of 30 was selected on the basis of (1) the extrapolation from a concentration that is lethal to animals, (2) animal-to-human differences, and (3) human variability. ‡Derived values are calculated by dividing the adjusted 30-minute LC by the UF.

†For non-lethal animal data, the selection of the UF for chlorine was based on Chapter 4.0: Use of Uncertainty Factors. The UF of 10 was selected on the basis of (1) animal-to-human differences and (2) human variability. ‡Derived values are calculated by dividing the adjusted 30-minute LC by the UF. Critical animal data are summarized in Table A-2, along with time adjustments, UFs, and potential derived IDLH values.
Non-lethal effects in mice, rabbits, and rats consisted of ocular and nasal irritation, transient changes in lung function, bronchitis, lesions in the nasal passages and lung, and mild edema. Nasal lesions appear to be the most sensitive effect, as they occur at the lowest tested concentrations of chlorine in both rats and mice. Multiple RD50 studies have also been conducted in mice [Barrow et al. 1977; Barrow and Steinhagen 1982; Chang and Barrow 1984; Gagnaire et al. 1994].
# A.3 Human Data
Deaths have been reported after inhalation exposures to chlorine, but specific exposure concentrations are not available from reports of accidental releases. Withers and Lees estimated lethal concentrations for humans using a probit analysis of available information. They estimated 30-minute LC50 and LC10 values of 100 and 50 ppm, respectively, for vulnerable populations. Critical human lethality data are summarized in Table A-4, along with time adjustments, UFs, and potential derived IDLH values.
Experimental exposure to non-lethal concentrations of chlorine has caused changes in nasal air resistance, transient changes in pulmonary function, irritation, and cough. The lowest concentration at which mild discomfort due to irritation or cough was reported is 1 ppm for durations of 4 hours or greater; however, irritation was not significant among volunteers exposed to 2 ppm for 30 minutes.
Multiple accidental exposures to non-lethal concentrations of chlorine have been reported, but specific exposure conditions and durations are not available. Symptoms of dyspnea, cough, and irritation are most commonly reported in the literature, but other severe systemic effects (such as headache, vomiting, giddiness, and chest pain) have also been described. Critical human non-lethality data are summarized in Table A-5, along with time adjustments, UFs, and potential derived IDLH values.
# A.4 IDLH Value Rationale Summary
Among the acute lethality studies, the mouse provides the lowest LC50 value of 127 ppm for a 30-minute exposure period. A UF of 30 was applied to account for extrapolation from a concentration that is lethal to animals, animal-to-human differences, and human variability, resulting in a potential IDLH value of 4.2 ppm. Anglen reported a LOAEL of 2 ppm for throat irritation and cough in human volunteers exposed to chlorine for 1 hour. The LOAEL was duration-adjusted to a 30-minute-equivalent value of 2.8 ppm. A safety factor of 1 was applied to account for extrapolation from a threshold for irritant effects in humans, resulting in a potential IDLH value of 2.8 ppm, rounded to 3 ppm. This value is supported by the presence of irritant effects in human volunteers exposed to 1 ppm of chlorine for 4 hours, which would also result in a similar IDLH value. Abbreviations: LC = lethal concentration; LOAEL = lowest observed adverse effect level; NOAEL = no observed adverse effect level; ppm = parts per million; UF = uncertainty factor. *For exposures other than 30 minutes, the ten Berge et al. relationship is used for duration adjustment (Conc^n × t = k); no empirically estimated n values were available; therefore, the default values were used: n = 3 for exposures greater than 30 minutes and n = 1 for exposures less than 30 minutes.
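The arithmetic behind this summary can be checked in a few lines. The helper below is a sketch (not NIOSH code); note that applying the stated default of n = 3 to the 1-hour LOAEL yields about 2.5 ppm, so the 2.8 ppm cited above corresponds to an exponent closer to n = 2.

```python
def adjust_to_30min(conc_ppm: float, minutes: float, n: float) -> float:
    """30-minute equivalent via the ten Berge relationship C**n * t = k."""
    return conc_ppm * (minutes / 30.0) ** (1.0 / n)

# Mouse 30-minute LC50 of 127 ppm with a UF of 30 for a lethal animal POD.
print(round(127.0 / 30, 1))                       # -> 4.2 ppm

# Anglen 1-hour human LOAEL of 2 ppm extrapolated to 30 minutes.
print(round(adjust_to_30min(2.0, 60.0, n=3), 2))  # -> 2.52 ppm (default n = 3)
print(round(adjust_to_30min(2.0, 60.0, n=2), 2))  # -> 2.83 ppm (matches 2.8)
```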
# APPENDIX B: IDLH Value Development Prioritization
This appendix identifies how NIOSH will determine the priorities for developing IDLH values.
The guidance values play an important role in planning work practices surrounding potential high-exposure environments in the workplace and in guiding actions by emergency response personnel during unplanned exposure events. Ideally, IDLH values would be available for all chemicals that might be present in high-exposure situations. However, this breadth of coverage is not practical and might not even be necessary for many chemicals, such as those with very low exposure potential or those that are not acutely toxic. In addition, the absence of data and limited resources make it difficult to evaluate the multitude of chemicals currently available in commerce. Therefore, NIOSH uses a prioritization process to ensure that resources are allocated to yield the greatest impact on risk reduction in the event that control measures (including respiratory protection devices) fail. This process takes into account both toxicity and exposure potential, and it is applied to a broad pool of relevant chemicals (e.g., chemical warfare agents, industrial chemicals, high-production-volume chemicals, or agrochemicals subject to emergency or uncontrolled releases). A qualitative algorithm is used to generate a tentative relative priority ranking. This process is intended only to provide tentative guidance based on a simple approach that uses readily available sources of information. The resulting priorities are further modified on the basis of NIOSH emphasis areas. For example, chemicals can be added to or removed from the list on the basis of new information related to toxicity or exposure potential. The development and use of a documented prioritization process allows for frequent updating of both input data and prioritization criteria to meet changing needs.
Substances considered in the ranking process are compiled from existing databases of chemicals identified by other agencies as "of concern" because of use in chemical terrorism or as chemicals with the potential for exposure due to other uncontrolled releases (and thus having greater opportunities for high, acute exposures). Existing lists of agents of concern may not be fully representative of industrial chemicals for which acute exposures may occur during planned activities (e.g., special maintenance activities) or unplanned-release events. However, IDLH values for many of these sorts of chemicals were included in the original IDLH value development process and in the 1994 updates. Moreover, NIOSH adds additional chemicals of interest that are nominated by interested stakeholders or the subject of new emphasis programs. Chemicals from the following databases (as supplemented by NIOSH chemicals of interest) were included in the ranking process:
- Hazardous Substances Emergency Events Surveillance (HSEES)-This database contains self-reported incidents of accidental chemical releases. The database was created by the Agency for Toxic Substances and Disease Registry (ATSDR).
Exposure-related parameters can be divided into two categories: 1) those that provide a direct indication of exposure potential (e.g., number of recorded accidents or spills involving a chemical) and 2) those that provide an indirect indication of exposure potential (e.g., volume produced). In weighing such metrics, a balance needs to be struck between the greater confidence provided by direct-release data, given their obvious relevance to exposure potential, and the need for exposure-potential data that are available for most chemicals. Information on direct exposure indicators was obtained from the HSEES database. Although only 14 states participate in the program, the data are useful as an exposure indicator. Evidence of frequent past incidents involving uncontrolled releases receives a score of 1, and the absence of reported prior releases is scored 0.
Chemical production volume is used as an indirect indication of exposure potential. The USEPA classifies high-production-volume (HPV) chemicals as those produced or imported in the United States in quantities of 1 million pounds or more per year; medium-production chemicals are produced in quantities of 25,000 to less than 1 million pounds per year; and low-production chemicals in quantities less than 25,000 pounds per year. HPV chemicals receive a score of 1, whereas low- and medium-production-volume chemicals receive a score of 0.
Because the aim of the prioritization process is the development of guidance for protection from acute inhalation exposures, endpoints that best inform the potential for life-threatening, irreversible, or escape-impairing effects following acute inhalation exposures receive the greatest weight. The following approach and resources are used to score toxicity considerations:
1. Direct indication of exposure potential (e.g., number of recorded accidents or spills involving a chemical).
- Evidence of frequent past incidents involving uncontrolled releases
- HSEES-collects and analyzes actual hazardous chemical releases and emergency responder injuries
- Chemicals with uncontrolled releases (URs) are scored as a 1, and lack of reported data is scored as a 0.
2. Indirect indication of exposure potential (e.g., volume produced)
- Indicative of the potential for exposure from the amount of chemical that is produced
- USEPA classifies chemicals as low, medium, or high production volume (HPV)
- Chemicals classified as HPV are scored as a 1, whereas low- and medium-volume chemicals are scored as a 0.
3. Short-term exposure limits (STELs)-NIOSH RELs, OSHA PELs, AIHA WEELs, and ACGIH TLVs®
- STEL values below 20 ppm for vapors and gases or 2 mg/m3 for particulates provide a reasonable cut point for identifying the most significantly acutely toxic substances.
- Substances with a STEL below these cut points receive a score of 1, whereas substances with a STEL equal to or greater than these values, or with no available STEL, receive a score of 0.
4. Toxicity classifications-Chemicals with a classification of "very toxic" or "toxic" are scored as 1; otherwise, chemicals are scored as 0.
- Chemicals that have not been evaluated by means of these systems are judged on the basis of the lowest reliable LC50 compared to the EU R-phrase criteria.
- Availability of toxicity data-The absence of adequate data precludes the development of an IDLH value. The lack of toxicity data for a chemical with high exposure potential is used to identify research needs.
- Availability of exposure monitoring methods-The availability of a validated sampling and analytical method increases the likely near-term utility of a derived IDLH value. The absence of a validated sampling and analytical method for high-priority chemicals could be used to identify research needs.
- Presence on existing lists of high priority agents-If other agencies have listed the material as a high priority, then the IDLH value may be useful to other agencies. This type of leveraging of resources is desirable and also helps to harmonize levels of worker health protection among agencies with related missions.
- Degree of safety hazard-If potential risk for two or more chemicals as determined on the basis of chemical toxicity is equal, then agents that have a greater degree of safety-related risk (e.g., flammability) are given greater weight. This consideration allows for easier comparison of overall risk profiles and selection of the most appropriate basis for risk management (e.g., developing entry criteria or emergency plans on the basis of whichever is the greater concern, safety or health risk).
The overall priority score is the sum of the exposure score (based on UR and PV) and the toxicity score (based on STEL, AT, CA, DT, and IRR), where:

- AT = acute toxicant
- CA = carcinogenicity
- DT = developmental toxicant
- IRR = irritant
- PV = production volume
- STEL = short-term exposure limit
- UR = uncontrolled release

- Tier II: Used qualitatively to make an overall judgment on priorities among chemicals with the same risk priority score.
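Although the actual algorithm is qualitative, the 0/1 scoring described above can be sketched as follows. The function and its inputs are an illustrative simplification, not the agency's tool; additional toxicity indicators (AT, CA, DT, IRR) would be scored the same way.

```python
# Illustrative simplification of the 0/1 priority scoring described above.

def priority_score(uncontrolled_releases: bool,
                   high_production_volume: bool,
                   stel_below_cut_point: bool,
                   classified_toxic: bool) -> int:
    """Sum an exposure score (UR + PV) and a toxicity score (STEL + class)."""
    exposure = int(uncontrolled_releases) + int(high_production_volume)
    toxicity = int(stel_below_cut_point) + int(classified_toxic)
    return exposure + toxicity

# A chemical with reported releases, HPV status, and a low STEL scores 3 and
# would rank ahead of one with none of these indicators.
print(priority_score(True, True, True, False))  # -> 3
```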
As discussed in the main document, the intent of the IDLH value is to protect against exposures that are "likely to cause death or immediate or delayed permanent adverse health effects or prevent escape from such an environment." In other words, the most appropriate effects to use as the basis of the IDLH value derivation are those that are severe, irreversible, or escape-impairing. Scientific judgment is an important aspect in evaluating severity of effects and determining which ones are irreversible, but guidance is available from a number of different sources.
Severe adverse effects that are not necessarily immediately escape-impairing are judged on a case-by-case basis, weighing considerations such as the need for medical treatment, the potential for altered function or disability, and the potential for long-term deficits in function. These include severe, but reversible, acute effects such as hemolysis, chemical asphyxia, delayed pulmonary edema, and significant acute organ damage (hepatitis, decreased kidney function, etc.). If these effects could be caused by the chemical, it is important that the available toxicity studies evaluated the development of such effects by, for example, allowing sufficient time between exposure and evaluation of the endpoint.
Guidance on evaluating and ranking the severity of toxic effects is available from a number of organizations. DeRosa et al. developed a 10-category scheme for evaluating noncancer toxicity in the evaluation of Reportable Quantities under the USEPA Superfund legislation. Although designed for the context of chronic exposures, this approach provides insight into the relative severity of different types of histopathology and developmental toxicity. The Agency for Toxic Substances and Disease Registry (ATSDR) includes the following five severity rankings:
- No Observed Effect Level (NOEL)
- No Observed Adverse Effect Level (NOAEL)
- Minimal Lowest Observed Adverse Effect Level (LOAEL 1 )
- Moderate Lowest Observed Adverse Effect Level (LOAEL 2 )
- Frank Effect Level (FEL)
ATSDR applies this approach from acute exposures (defined as exposures up to 14 days) through chronic exposures, and a number of publications are available on applying this approach to various types of effects (e.g., Abadin et al. 1998, 2007; Chou and Pohl 2005; Pohl and Chou 2005). Although intended for a different purpose, these analyses can provide insights into the evaluation of effect severity. In particular, the "moderate" LOAEL category used by ATSDR is more likely to be considered severe or irreversible, and thus relevant to IDLH value development.
Guidance on evaluation of the severity of effects is also available from the USEPA RfC guidelines and from the American Thoracic Society (e.g., Pellegrino et al. 2005).
Determining which effects are escape-impairing is complicated both by the limited guidance available from other sources and by the fact that reporting of signs and symptoms for similar underlying effects may differ across human and animal studies. For example, the same underlying mechanism may be described as inducing intolerable irritation in a human clinical study or case report, but may manifest as changes in respiration rate, nasal discharge, or altered activity level in an acute toxicity test in animals. For this reason, guidance was developed that allows for more consistent assignment of comparative severity of observed effects (i.e., escape-impairing versus non-escape-impairing) for commonly observed adverse effects used as the basis of IDLH values.

To evaluate the assignment of UFs, NIOSH conducted several analyses in preparation of the IDLH methodology to determine if the approach used in the 1994 update needed to be revised. Numerous datasets were evaluated to identify the typical ratio between the IDLH value and the POD derived from different types of studies. The results of these analyses are presented in Section D.1. In arriving at the final methodology presented in this CIB, the empirical analysis discussed in this appendix was supplemented by previously published data analysis. Current risk assessment principles related to the rationale and concepts for UF application in setting exposure guidelines of different types were also considered, in particular the process used by the AEGL committee.
# D.1 Analysis for Selected Approach
To derive a scientifically based approach for the use of UFs, effect levels were adjusted to a common 30-minute duration using the ten Berge et al. [1986] relationship, by using chemical-specific values of n for lethality whenever possible or standard defaults (i.e., n = 1 for extrapolation from shorter to longer durations and n = 3 for extrapolation from longer to shorter durations), and using an n of 1 for time correction of human effects other than lethality (e.g., irritation or signs of CNS depression). It should be noted that the default ten Berge adjustment approach would also be most appropriate for less-than-lethal effects. However, since this analysis was intended as one of several range-finding approaches for uncertainty selection explored by NIOSH, the additional analysis required to make the adjustments from the poorly documented human studies was deemed an unnecessary refinement for this particular analysis. The default ten Berge approach is specified for lethal and non-lethal effects within the IDLH methodology outlined in this CIB.
The correct approach for extrapolation is uncertain for less-than-lethal effects. Adequate quantitative data are rarely available for severe adverse effects in humans to support concentration-response modeling. In particular, thresholds for lethality are difficult to estimate from the very limited available case report information. However, available effect levels in humans gleaned from peer-reviewed secondary sources were arrayed by concentration (Conc), duration of exposure (time, t), the concentration × duration product (Conc × t = k), and severity of effect for each study that provided human response data.
Results of this analysis suggested that exposure at the RD50 would likely cause intolerable sensory irritation. However, it is noteworthy that the RD50 would have been considered in the overall weight of evidence in setting the IDLH values used in our analysis, which might have biased the results toward a value of 1. The second approach used data directly from current IDLH value documentation to analyze all of the chemicals in the current list of IDLH values that are based on human effects data and had at least one reported LC50 value, resulting in a list of 94 chemicals for further examination. For each of these chemicals, the analysis identified the value of the lowest adequate 30-minute adjusted LC50 value, the current IDLH value, and the MOA for which the current IDLH value was set. As for the first approach, three MOA categories were used:
1. Irritation
2. Neurological effects
3. "Other"
It was noted that the "other" category included several pesticides that act via inhibition of cholinesterase. Although this group was not analyzed separately, it does form a potential fourth group for additional analysis. The cholinesterase inhibitors were not included in the general neurological effects category, since they have a specific underlying mechanism that might yield significant differences in lethality to non-lethal-effect ratios, as compared with other organics that act via the more general mechanisms of CNS depression. Published data were also used to compile RD50 estimates (the concentration of the chemical that results in a 50% decrease in respiratory rate in a standardized rodent test) for these same chemicals.
The distribution of the LC50/IDLH value ratios is shown in Figure D-1. These results indicate that a factor of 10 would account for human effect thresholds for effects such as severe irritation and neurological effects for approximately half of the chemicals reviewed, although a factor as high as 100 may be needed to cover 95% of chemicals. The distribution of RD50/IDLH value ratios for 26 chemicals yielded a median ratio of 1, suggesting that exposure at the RD50 would generally result in sensory irritation of sufficient severity to be judged as escape-impairing. This interpretation is consistent with study results suggesting that exposure at the RD50 would likely cause intolerable sensory irritation. Overall, no clear pattern regarding MOA was evident when comparing the LC50/IDLH value ratios of the 94 chemicals, or the RD50/IDLH value ratios of the 26 chemicals, with their primary MOAs.
This analysis hypothesized that potent irritants may have a greater difference between the LC50 and the threshold for serious effects in humans, as compared with chemicals that cause toxicity via other modes of action. If this hypothesis were true, the implication would be that deriving an IDLH value from an LC50 for such chemicals would require a greater UF than would be needed for chemicals with other modes of action. The analysis produced mixed results, with a significant MOA effect observed for a subset of 20 chemicals but not in a broader analysis of current IDLH values. Based on these results, the data are not adequate to recommend a different UF by MOA category.
# D.2 Recommendation for Deriving IDLH Values
Three primary methods are traditionally applied during the development of acute emergency limits, such as the IDLH values, ERPGs, and AEGLs, to account for uncertainty in extrapolating from a key study to arrive at the final value. In developing the IDLH methodology, three possible approaches were considered.
- Method 1-Use a weight-of-evidence approach, without specifying any default UF values. This would be an approach consistent with many volunteer groups that set acute occupational values (e.g., the AIHA ERPG committee). This approach provides the greatest degree of flexibility in integrating all the complexities of the data, without having to explain departures from defaults that might not be very meaningful in the context of a specific dataset. However, this approach generally has limitations in that it is not highly transparent-i.e., it is often difficult to "back-calculate" the basis for the final numeric value.

Figure D-1. The distribution of ratios of the lowest 30-minute adjusted LC50 value to the current IDLH value is shown for 94 substances, representing four MOA categories, to evaluate the potential uncertainty value that provides adequate coverage for each MOA. The MOA categories are defined as follows: Irritants-the critical effect that would be the basis for an IDLH value is irritation; CNS depressants-the critical effect is CNS depression; Other-the critical effect arises from an MOA other than irritation or CNS depression; Pesticide-the critical effect is cholinesterase inhibition.
- Method 2-Use a preliminary composite UF as a starting point, based on the nature of the overall dataset, and communicate areas of uncertainty that impacted the final IDLH value in the rationale statement. This approach provides a data-informed starting point for the analysis, supported by empirical analysis (see Section D.1). The approach is intended to provide flexibility in UF selection by accounting for typical overlaps in individual UFs and data hierarchies at the beginning of the UF selection process. This provides an increase in transparency over the weight-of-evidence approach, without requiring significant effort to explain departures from prescribed defaults. This is the approach that has been included in the IDLH methodology.
- Method 3-Apply a set of default UFs and revise post hoc on the basis of the dataset. This would be an approach similar to that used to derive the AEGL values. This approach assigns default values for well-defined areas of uncertainty that pertain to a specific dataset. The final UF is derived by multiplying the individual factors. This approach is the clearest in terms of transparency (i.e., ability to back-calculate the derived value). However, because of the nature of the datasets involved, application of default values often yields conflicting or inappropriate values from one potential critical study to another. The end result of such data conflicts is the application of a post-hoc weight-of-evidence evaluation, in which the final UF or critical study selected might be changed to align better with the overall dataset.
Application of the three methods outlined above should yield similar results, with the primary differences focusing on the level of transparency offered by each approach versus the need for post-hoc modifications. There is a history of successful application of Methods 1 and 3 in the context of acute emergency limit-setting. Method 2 is a hybrid that attempts to incorporate previous data and experience tailored to the unique needs of the IDLH Program. Method 2 is recommended as a reasonable blend: it provides transparency in the basis for an assessment without the rigid application of default values that may require extensive post-hoc explanations. Multiplication of default UF values may also tend to yield IDLH values that are more than adequately protective. In developing the approach, it was considered that setting IDLH values lower than needed can present new safety risks in the context of the intended application as a tool for respiratory protection selection.
Additional caution is warranted if, for example, a chemical showed slow absorption kinetics in an oral gavage dosing study, or the critical oral study used serial dosing or continuous dosing protocols. In these cases the peak concentration in the critical study would not represent the likely peak concentration reached for the inhalation study, and the currently proposed extrapolation method would not necessarily be adequately protective.
The toxicokinetic considerations for route-to-route extrapolation are complex. In most cases, because of the nature of the acute systemic effects involved in IDLH derivation, rapid absorption kinetics and rapid onset of effects are expected. Thus, under the most likely conditions, the 30-minute inhalation volume of 1.5 m3 is viewed as an adequate default approach. However, there are scenarios, based on chemical kinetics or non-inhalation study designs, that may impact the level of protection afforded by the default adjustment. For this reason, the IDLH methodology has been modified to further communicate the potential conditions where additional kinetic-based adjustments may be needed.
# E.2 Time Scaling Adjustments
In most cases, IDLH values are derived from studies with exposure periods shorter or longer than 30 minutes. Thus, the PODs derived from such studies are adjusted to 30-minute-equivalent values. This adjustment is made by using the ten Berge et al. modification to Haber's Rule, which assumes the following relationship between concentration (Conc) and duration (time, t): Conc^n × t = k. The impact of the value of n on the shape of the concentration-time-response curve is shown in Figures E-1 and E-2. As shown in these figures, larger values of n result in flatter curves, meaning that, for a given degree of toxicity, the concentration varies less with changes in duration. This is particularly apparent in Figure E-1, which shows the extrapolation from 4 hours to 30 minutes. This figure shows the impact of using different values of n to extrapolate to shorter durations from a concentration of 10 ppm at 4 hours. In this example, an n of 3 results in a concentration at 30 minutes that is not much higher than the test concentration at 4 hours, whereas the calculated concentration at 30 minutes is substantially higher when n = 1. Thus, using n = 3 for extrapolating from longer durations to 30 minutes results in lower concentrations, a more health-protective approach.
Figure E-2 shows the converse situation, extrapolating from an exposure to 10 ppm for 15 minutes to longer durations. In this case, the steeper curve associated with n = 1 results in a lower concentration at 30 minutes, compared with the value calculated using n = 3. Thus, using n = 1 is a more healthprotective approach in extrapolating from shorter durations to 30 minutes.
Based on these considerations, a default value of n = 1 is used for extrapolation from shorter durations, and a default value of n = 3 is used for extrapolation from longer durations to the 30-minute duration of interest. In both cases, a calculated n specific to the chemical and species of interest is preferred when data are available to calculate the value. Figure E-3 shows the calculated concentrations when extrapolating from 10 ppm at 0.25 hours, using n values of 1, 2, or 3.
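The behavior described for Figures E-1 through E-3 follows directly from the ten Berge relationship; the short sketch below reproduces it using the 10-ppm examples from the text (illustrative code, not part of the CIB).

```python
def scale(conc: float, t_from_h: float, t_to_h: float, n: float) -> float:
    """Extrapolate a concentration between durations via C**n * t = k."""
    return conc * (t_from_h / t_to_h) ** (1.0 / n)

# Figure E-1 case: 10 ppm at 4 hours extrapolated to 30 minutes.
for n in (1, 2, 3):
    print("E-1", n, round(scale(10.0, 4.0, 0.5, n), 1))
# n = 1 -> 80.0 ppm; n = 3 -> 20.0 ppm (lower, hence the protective default
# when extrapolating from longer durations).

# Figure E-2 case: 10 ppm at 15 minutes extrapolated to 30 minutes.
for n in (1, 2, 3):
    print("E-2", n, round(scale(10.0, 0.25, 0.5, n), 1))
# n = 1 -> 5.0 ppm (lower, hence the protective default for shorter studies).
```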
The following paragraph illustrates the effects of time scaling on inhalation toxicity data evaluated during the development of IDLH values for three chemicals:
- 1,1-Dimethylhydrazine (CAS# 57-14-7)
- Vinyl acetate (CAS# 108-05-4)
- Titanium tetrachloride (CAS# 7550-45-0)

The following example draws on data from the reviewed literature for 1,1-dimethylhydrazine (n = exponent applied within the ten Berge equation).
Because the selected data were associated with exposure times less than 30 minutes, a default value of 1 for n within the ten Berge equation was applied, on the basis of the rationale discussed in the previous paragraphs, to extrapolate the most health-protective estimate. Time scaling resulted in a reduction of the exposure concentrations to approximately 17% and 50% of the original exposure concentrations for the 5- and 15-minute durations, respectively. Abbreviations: LC50 = median lethal concentration; LOAEL = lowest observed adverse effect level; ppm = parts per million; n = exponent applied in the ten Berge equation. The study data are from Weeks et al.
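The stated reductions follow directly from the ten Berge relationship with the default n = 1 for exposures shorter than 30 minutes, as this brief check illustrates (illustrative code, not part of the CIB).

```python
# With n = 1, C30 = C_t * (t / 30), so the 30-minute equivalent is a fixed
# fraction of the original concentration.
for minutes in (5, 15):
    factor = minutes / 30
    print(minutes, f"{factor:.0%}")  # 5 -> 17%, 15 -> 50%
```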
"id": "207a07c7767a74c79571d20d447003bec5937c90",
"source": "cdc",
"title": "None",
"url": "None"
} |
Rachel Wilson provided editorial support. Special thanks go to the public health professionals who reviewed the document and provided helpful and practical suggestions for improvement.
Estimates based on 1999-2000 data suggest that about 2.2% of children aged 1-5 years (about 434,000 children) have elevated BLLs. 2 Research suggests that these elevated BLLs result primarily from exposure to lead in nonintact paint, interior settled dust, and exterior soil and dust in and around older deteriorating housing. Renovation in older housing also creates substantial lead hazards unless dust is contained and the areas are thoroughly cleaned. Although many sources of lead exposure exist for children, the recommendations in this report focus on preventing childhood exposure to lead-based paint hazards in and around housing.
ACCLPP fully supports the concept of local and state decision-making to determine the most appropriate blood lead screening approach based on local conditions and data, which was the centerpiece of the revised 1997 guidelines. Efforts to ensure that the health care system incorporates these guidelines are extremely important. Most childhood lead poisoning prevention programs focus on identification and management of individual cases of elevated BLLs (i.e., secondary prevention). Follow-up care for such children consists of education focused on lead hazards, behavior changes associated with lead exposure, medical and developmental follow-up, nutritional recommendations, and environmental interventions. 3 Environmental interventions to control identified lead hazards and halt further exposure may not be carried out because of lack of resources and/or statutory authority. Evidence suggests that the benefits of secondary prevention are limited. However, identification and provision of services to children with elevated BLLs remain important components of a comprehensive lead poisoning prevention program. To ensure successful elimination of elevated BLLs in children, programs must not rely solely on screening and secondary prevention but also focus on preventing lead exposure through the implementation of housing-based primary prevention.
The actions recommended in this report can be performed by an array of entities, including health departments and other public agencies, community-based agencies, and the private sector. Health departments must provide leadership to increase knowledge about lead safety and to encourage broad action to make housing lead-safe (i.e., a condition in which lead-based paint hazards have been eliminated or controlled by trained or certified contractors). To accomplish these goals, health departments must assume a leadership role in fostering collaboration among housing agencies, elected officials, and other stakeholders. Recent developments in technology and national housing policy can help to correct unsafe housing, the primary vector of lead exposure. This document provides a rationale for emphasizing primary prevention and an outline of a comprehensive program based on eight core elements (see "Eight Elements of a Comprehensive Program for Primary Prevention of Childhood Lead Poisoning").
# Executive Summary
Specific examples of recommended program elements in use are documented through a CDC-funded project titled Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards, which was initiated in October 2002. Because even the most intense primary prevention efforts to increase the supply of lead-safe housing will take years, childhood lead poisoning prevention programs (CLPPPs) should continue to augment their systemic housing-based primary prevention changes with fast-track initiatives to identify high-risk families who could benefit from immediate assessment and risk-reduction services to prevent further childhood lead exposure. With the shift toward primary prevention, program evaluation efforts should be a research priority so that future commitments of resources can focus on cost-effectively achieving program and national goals to reduce childhood lead exposure.

# Members of the Advisory Committee on Childhood Lead Poisoning Prevention

Clearance examination-Visual examination and collection of lead dust samples by an inspector or risk assessor, and analysis by an accredited laboratory, upon completion of an abatement project, interim control intervention, or maintenance job that disturbs lead-based paint (or paint suspected of being lead-based) above de minimis levels. HUD and EPA have established maximum allowable lead dust levels on surfaces (e.g., floors, window sills, and window troughs). (HUD)
Consolidated Plan-A plan required and approved by HUD for state and local grantees that receive federal housing and/or community development block grants, which sets forth the jurisdiction's statement of its housing problems, its 5-year plan to address the identified problems, and a 1-year action plan.

Distressed housing-Residential property in poor physical condition, or likely to fall into such condition because of deferred maintenance, which typically has multiple structural problems, code violations, and lead hazards. Distressed housing is typically older, occupied by very-low-income households or abandoned, and requires major investment of resources to correct structural deficiencies, repair building systems, and control health and safety problems.

Elevated blood lead level-Blood lead level ≥10 µg/dL. (CDC)

Essential maintenance practices-Approved maintenance practices and procedures designed to control deteriorating paint and/or lead dust that are undertaken regularly to ensure a home is maintained in a lead-safe condition. These practices involve dust and paint chip containment using "wet" procedures and specialized cleanup.

Interim controls-A set of measures designed to temporarily reduce human exposure to lead-based paint hazards. (HUD)

Lead hazard-Accessible paint, dust, soil, water, or other source or pathway that contains lead or lead compounds that can contribute to or cause elevated BLLs. (CDC)

Lead hazard control-Activities, including interim measures and permanent abatement, to control and eliminate lead hazards. (EPA)

Lead hazard screen-A limited environmental screening activity focused on visual assessment, which may include paint, dust, and soil sampling, and is usually performed in housing units less likely to contain lead-based paint hazards or as a preliminary step in the lead hazard assessment process.

Lead-based paint-Paint or other surface coating that contains lead equal to or exceeding 1.0 milligram per square centimeter, or 0.5% by weight, or 5,000 parts per million by weight. (HUD and EPA)

Lead risk assessment-An onsite investigation of a residential dwelling to discover any lead-based paint hazards and description of options to eliminate them, which includes lead dust and soil sampling. (HUD and EPA)

Lead-safe-Housing with no lead paint hazards as determined by a lead risk assessment or by dust sampling at the conclusion of lead hazard control activities. If lead-based paint remains in the housing unit, its condition and any hazard control systems must be monitored to prevent new lead hazards.

Lead-safe maintenance-See Essential maintenance practices.

Lead-safe work practices-Low-technology practices for general renovation, repainting, and maintenance projects that control, contain, and clean up lead dust and deteriorated lead-based paint in a manner that protects both the workers and the occupants of the unit being treated.
Lien-A legal instrument used by a court to impose a requirement upon a property owner for the satisfaction of some debt or duty.
Paint inspection-A surface-by-surface investigation to determine the presence of lead-based paint (may include dust and soil sampling) and a report of the results. (HUD and EPA)

Primary prevention-Interventions undertaken to reduce or eliminate exposures or risk factors before the onset of detectable disease. This includes measures to a) prevent the dispersal of lead in the environment through regulations or other measures that prevent harmful uses of lead and b) remove lead from the environment before children are exposed. (CDC)
Receivership-A condition in which a person or entity is appointed to receive and hold in trust a property under litigation.
Rehabilitation-Actions taken in which a building is physically modified, either to improve the condition of the structure or to change its use.
Remediation-Physical intervention in a building to control and/or eliminate identified deficiencies or hazards and render the building safe.
Renovation-Construction and/or home or building improvement measures (e.g., window replacement, weatherization, remodeling, and repairing). (HUD)

Satisfactory compliance-The conduct of both visual and laboratory (i.e., dust) tests by certified personnel and an accredited laboratory to ensure that the lead hazard control work completed in a home has rendered the unit lead-safe (commonly known as "clearance" within the context of lead hazard control) and has met applicable standards for work and lead safety.

Secondary prevention-Response to a problem after it has been detected. This involves identifying children with elevated BLLs and eliminating or reducing their lead exposure. (CDC)
# Introduction
This document presents recommendations developed by the Centers for Disease Control and Prevention's (CDC) Advisory Committee on Childhood Lead Poisoning Prevention (ACCLPP) through its Primary Prevention Work Group. This document emphasizes primary prevention of childhood lead poisoning to accelerate progress toward achieving Healthy People 2010 Objective 8-11: the elimination of elevated blood lead levels (BLLs) in children. 1 To reach this objective, changes are required at the state and local levels, where childhood lead poisoning prevention programs (CLPPPs) must initiate and collaborate with other groups and agencies in implementing housing-based primary prevention strategies that work at the community level. Therefore, this document is directed primarily toward the state and local health departments responsible for childhood lead poisoning (including those with CLPPPs), local programs funded by the U.S. Department of Housing and Urban Development (HUD), and all other partners in primary prevention.
Dramatic reductions in BLLs of U.S. children during 1970-1990 were attributed to population-based environmental policies that banned the use of lead in gasoline, paint, drinking-water conduits, food and beverage containers, and other products that created widespread exposure to lead. 4,5 These primary prevention efforts reflect one of the great public health successes of the 20th century. 6 These lead level reductions were achieved in conjunction with improved lead screening and identification of children with elevated BLLs. Because of the reduction in average BLLs in children from an estimated 15 µg/dL in 1976-1980 7 to approximately 2 µg/dL in 1999, 2 the cohort of children who reach age 2 each year may reap an estimated annual benefit as high as $110-$319 billion from the prevented losses of future earning potential alone, due to improved workforce participation and higher salaries. 8 Despite these gains in public health, lead exposure continues to affect young children in the United States. The limits of secondary prevention (i.e., implementing measures after a child has an elevated BLL) as a way to eliminate childhood lead poisoning increasingly are being recognized. The estimated average skeletal lead concentration of contemporary humans is 500- to 1,000-fold higher than that of preindustrial humans, 9 and an increasing body of scientific information has identified harmful health effects associated with BLLs lower than were previously considered "safe." Disparities by income and race are well documented. 14 Because children are exposed to lead from a variety of sources and no discernible threshold has been defined for the adverse effects of lead, reducing all environmental sources of lead exposure (including lead from past uses that remains in the environment) is necessary. Residential lead hazards are the primary source of lead intake for U.S. children. However, numerous other sources of lead intake for U.S. children have been identified and vary by location. 3 Examples are the use of ethnic remedies; 15 cosmetics; 16 lead-containing ceramics; 17 vinyl mini-blinds; 18 lead brought from a work site into the home by a parent; 19,20 and products such as crayons and toys that have been contaminated with lead. However, each of these products probably constitutes a significant source of lead intake for only a small number of children. Industrial point sources, smelters, and power plant emissions also can contribute to lead intake in children. Reducing lead emissions and the introduction of new lead into the environment may be critical to achieving maximum reductions in lead exposure in areas affected by these sources. 25 Many opportunities remain for eliminating unnecessary lead uses and reducing emissions.
Although many sources of lead can affect certain individuals and communities, the primary source of childhood lead exposure in the United States is lead paint in older, deteriorating housing. 3,14,26 Children are most often exposed to lead in their homes through nonintact paint, interior settled dust, and exterior soil and dust. Renovation of older homes also can cause substantial lead hazards. 27,28 Therefore, the recommendations in this report focus on preventing childhood exposure to lead-based paint hazards in housing.
Most CLPPPs emphasize secondary prevention of lead poisoning (i.e., blood lead screening of children to identify and provide follow-up care for those with elevated BLLs). This approach has limited benefit for most children living in housing that poses an increased risk for lead-associated health effects. However, primary prevention interventions to reduce lead exposures populationwide have succeeded. Primary prevention of lead hazards within the home on an individual or community level requires that lead-based paint hazards in and around homes be identified and controlled before a child is exposed. Many CLPPPs have developed primary prevention activities, but few have made primary prevention their main focus, in major part because of limited resources and authority. In some instances, CLPPPs have emphasized primary prevention measures associated with behavior change (e.g., encouraging families to increase hand washing and wet mopping and achieve recommended levels of iron and calcium intake). However, such educational interventions alone do not significantly reduce exposures and offer little sustainable protection to children whose homes contain peeling paint and lead-dust hazards. 29 Although the medical and public health communities now possess knowledge of the primary prevention tools needed to do the job, we have yet to marshal the will and resources to accomplish such prevention. Recent trends indicate that the time is right for a concerted effort. In many communities lead caseloads have declined, sometimes in association with a decrease in screening penetration, but more likely paralleling nationwide declines in prevalence of elevated BLLs in children. Many health departments have taken the opportunity afforded by these developments to focus on improving "core" public health functions. 30 For example, many CLPPPs are now able to 1) assess populationwide risk for lead exposure; 2) use data to target interventions and improve service delivery; 3) track lead-related services provided by others (e.g., screening and medical management); and 4) collaborate with partners (e.g., state Medicaid agencies, managed-care organizations, housing agencies, and community-based organizations). As a result, many CLPPPs are poised to assume leadership roles in this shift toward housing-based primary prevention of childhood lead exposure.
The U.S. Department of Health and Human Services (DHHS) and CDC consistently have encouraged state and local health departments and housing agencies to move toward housing-based primary prevention, beginning with the 1991 document titled, Strategic Plan for the Elimination of Childhood Lead Poisoning, 31 and the CDC statement, Preventing Lead Poisoning in Young Children. 4 Most recently, CDC has begun requiring its grantee health departments and CLPPPs to develop a strategic plan to eliminate childhood lead poisoning and to include primary prevention strategies. 32 ACCLPP recommendations published in the 2002 document, Managing Elevated Blood Lead Levels in Young Children, 3 stated, "…primary prevention by the removal of ongoing lead exposure sources should be promoted as the ideal and most effective means of preventing elevated blood lead levels." We present these recommendations to promote this goal and turn it into reality.
Strong support and resource allocation from the federal government will increase the likelihood that state and local initiatives will succeed. Federal agencies have important roles in supporting primary prevention programs by sponsoring research, developing and periodically updating tools and guidance for assessing and monitoring lead safety in housing, insisting on lead-safe practices in all federally supported housing programs, funding lead hazard control and evaluation programs, providing technical assistance, periodically updating regulations, and reviewing federal funding requirements to ensure consistency with primary prevention goals. The ACCLPP recommends that DHHS strengthen its efforts to promote and facilitate primary prevention and maintain a leadership role in collaborating with other federal agencies. This document is a guideline for accomplishing primary prevention and lowering childhood lead exposure in communities around the nation, which is best achieved through eliminating the three primary exposure pathways (i.e., deteriorated paint, contaminated dust, and contaminated soil) in and around housing. This guideline provides a rationale and an outline of a comprehensive program for developing and implementing a primary prevention strategy (see box), as well as references and resources that may be useful in accomplishing this goal.
# Childhood Lead Exposure as a Public Health Problem

Lead adversely affects children's cognitive and behavioral development. 3 Elevated BLLs in children are associated with growth impairment, increased blood pressure, impaired heme synthesis, increases in hearing threshold, and slowed nerve conduction. 33 Lead toxicity economically impacts individuals and society because cognitive ability is strongly correlated with productivity and expected earnings. An increase of 10 µg/dL in a child's BLL may reduce the present value of that child's individual future lifetime earnings by approximately $37,000. 8 Estimates based on 1999-2000 data suggest that about 2.2% of children aged 1-5 years (about 434,000 children) have BLLs of ≥10 µg/dL. 34 A national survey found that children at highest risk for having an elevated BLL are those living in metropolitan areas and in housing built before 1946, from low-income families, and of African-American and Hispanic origin. 14 Because lead exposure disproportionately affects children in low-income families living in older housing, it represents a significant, preventable contributor to social disparities in health, educational achievement, and overall quality of life.
# Limits of Secondary Prevention Alone
Intervening after a child's BLL becomes elevated could reduce or prevent further lead exposure but may do little to reverse lead-associated cognitive impairment. 35,36 Most CLPPPs rely on the use of routine blood lead screening to identify children with elevated BLLs. In most areas, follow-up care for children with BLLs 10-20 µg/dL consists of education focused on lead hazards, changing behaviors associated with lead exposure, medical and developmental follow-up, nutritional recommendations, and environmental interventions to prevent further exposure. 3 However, children with BLLs ≥15 µg/dL generally receive more intensive follow-up services as BLLs increase. Environmental interventions to control identified lead hazards and halt further exposure are sometimes not carried out. For children with elevated BLLs, the benefits of secondary prevention, even when comprehensive follow-up care is provided, may be limited 37 for the following reasons.
- Postponement of corrective action until after exposure means that children are forced to experience the harmful effects of lead. Even after corrective actions are taken, reducing elevated BLLs is difficult because of the body burden of lead. Data from a recently conducted, multisite, randomized clinical trial indicate that chelation therapy, the recommended treatment for children with severely elevated BLLs (≥45 µg/dL), does not bring about improved neuropsychological outcome at 3-year follow-up among toddlers with preexisting BLLs of 20-44 µg/dL. 38 This study confirms that chelation therapy does not reverse the neuropsychological effects of lead and underscores the need for preventing such effects.
- Most blood lead screening is not performed when children are young enough to receive the full benefits of effective environmental interventions.
The timing of efforts to reduce exposure of children with elevated BLLs is critical. The BLLs of infants living in contaminated environments rise rapidly when these children are between the ages of 6-12 months 39 (the period at which crawling and mouthing behaviors are common). Often, by the time a child with an elevated BLL is identified through screening, he or she already has developed a large body burden of lead and is at increased risk for long-term health consequences. Environmental interventions (e.g., safe repair of deteriorated paint and reducing lead-contaminated dust in children's homes) that would effectively prevent BLL elevations in fetuses, infants, and young toddlers may not rapidly reduce the elevated BLL of a child who is no longer crawling and mouthing.

Experience and recent developments in technology and national housing policy † make the implementation of housing-based primary prevention feasible on a larger scale. The following advancements have occurred within the last decade.
- Increased focus on low-income urban areas as disproportionately impacted by childhood lead exposure.
- Application of data mapping techniques that allow identification of neighborhoods and families whose children are at highest risk for lead exposure, to ensure priority action (see the sketch after this list).
- Expansion of knowledge about identification, control, and prevention of lead hazards, including recognition of the need to control, contain, and clean up lead dust during all activities that repair or disturb old paint.
- Expanded resources for lead hazard control.
- Requirements for notification regarding lead-based paint hazards at rental or sale of pre-1978 properties.
- Development of and wide accessibility to low-cost tools for lead dust testing in order to identify hazards and provide clearance testing after completion of hazard control work.
- Experience with implementation of state and local standards of care for lead safety.
- Establishment of the HUD requirement for Consolidated Plans to address lead safety.
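A minimal sketch of the data-mapping idea referenced in the list above: rank census tracts by simple risk indicators so that outreach and hazard control can be prioritized. The field names, weights, and values here are invented for illustration; a real CLPPP would draw on its own surveillance and census data.

```python
# Hypothetical targeting sketch: score and rank tracts by lead-risk indicators.
from dataclasses import dataclass

@dataclass
class Tract:
    tract_id: str
    pct_pre1950_housing: float   # share of units built before 1950
    pct_poverty: float           # share of families below the poverty line
    pct_elevated_bll: float      # share of screened children with elevated BLLs

def risk_score(t: Tract) -> float:
    # Equal weighting is an arbitrary assumption; local data should set weights.
    return (t.pct_pre1950_housing + t.pct_poverty + t.pct_elevated_bll) / 3

tracts = [
    Tract("A", 0.62, 0.31, 0.08),
    Tract("B", 0.15, 0.09, 0.01),
    Tract("C", 0.48, 0.22, 0.05),
]

# Highest-scoring tracts would receive priority outreach and hazard control.
for t in sorted(tracts, key=risk_score, reverse=True):
    print(t.tract_id, round(risk_score(t), 3))
```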
Calls for expanding primary prevention efforts also have increased steadily. In February 2000, the Federal Task Force on Environmental Health and Safety Risks to Children presented a 10-year plan for eliminating childhood lead poisoning, emphasizing that "the U.S. must immediately adopt a strategy to make housing lead-safe by eliminating lead-based paint hazards in the homes of children who are under the age of six years." 53
# Primary Prevention Program
The goal of targeting housing for primary prevention is to prevent adverse consequences of childhood lead exposure by removing the health hazards posed by lead-based paint and keeping homes "lead-safe." Primary prevention strategies must reflect geographic variation in the risk for lead exposure and must be designed to suit local circumstances, needs, and assets. Communities and homes at highest risk should receive the greatest attention and resources. CLPPPs in state and local health departments must identify these high-risk areas and provide the leadership needed to coordinate a successful effort to eliminate those risks before children experience elevated BLLs. Collaboration is essential among housing, community development, and code enforcement agencies; elected officials; federal agencies; property owners; and communitybased organizations. The expansion of effective primary prevention initiatives will reduce the need for and increase the efficiency of delivery of appropriate secondary prevention services. In addition, because primary prevention efforts to create an adequate supply of lead-safe housing will be time consuming, CLPPPs should augment their systemic housing-based primary prevention programs with fast-track initiatives to identify families at highest risk who could benefit from immediate assessment and risk-reduction services.
# Recommendations
The primary prevention capacities recommended in this section of the report comprise a framework for making housing lead-safe by 1) preventing future exposures and 2) protecting previously exposed children from further exposure. Some of the recommended measures will be most effective when carried out broadly (e.g., citywide training in lead-safe work practices and updating housing codes). Other activities should target areas where lead risk is highest (e.g., targeted code enforcement and community-based screening of housing at high risk for lead hazards). Other measures may be brought to the level of a specific property (e.g., when lead-associated hazard control efforts in the apartment of a child with an elevated BLL are extended to other apartments with similar lead hazards in the same building). In many cases, the activities in this report will be performed by organizations other than the local health department, including other public agencies, community groups, and the private sector including property owners and lead-abatement contractors. Public health agencies must provide leadership in educating others about lead safety and encouraging broad action to make housing "lead-safe." (See Appendix 1: Sample Roles and Responsibilities for Primary Prevention of Childhood Lead Poisoning.)
ACCLPP recommends the following eight elements as the foundation of a housing-based primary prevention program. Programs must be able to undertake the following activities to fully implement primary prevention.
1. Identify high-risk areas, populations, and activities associated with housing-based lead exposure by

a. Using surveillance, demographic, and housing data to identify high-risk geographic areas and to quantify progress in reducing childhood lead exposure and producing lead-safe housing units;
b. Using enhanced targeting strategies and information systems initially developed to improve lead screening for children to direct attention and expand resources to reduce lead hazards in high-risk housing, especially that occupied by at-risk families (i.e., low-income families with infants and/or expectant parents);

c. Identifying high-risk families who could benefit from immediate assessment and services to reduce their lead exposure risk.

2. Use local data and expertise to expand resources and motivate action for primary prevention.

f. Offering incentives to property owners for compliance with lead-safe housing treatments before children are poisoned.
g. Notifying tenants in adjacent units about possible lead hazards when a child is identified as lead poisoned in a multifamily building.
4. Develop and codify specifications for lead-safe housing treatments.
EPA regulations establish technical benchmarks for lead safety, and HUD guidelines describe how to perform various lead safety procedures. However, local jurisdictions must decide when and where to apply these tools to maximize lead safety, given local conditions. Specifically, local laws or regulations should require minimum lead-safe housing treatments for property repair and maintenance that ensure the differential treatment of various housing components on the basis of characteristics of the local housing stock, a property's risk, and the characteristics of the rental market. For example, Maryland and Indiana require property owners to meet certain standards at property turnover and other key junctures. The effectiveness of enforcement should be examined and changes made as needed to ensure protection of children.
6. Engage in collaborative plans and programs with housing and other appropriate agencies.

Completing a shift to primary prevention requires a review of current programs so that priorities can be adjusted. Strengths of CLPPPs in identifying and working with families at highest risk could be used to prioritize individual families for services to ensure lead safety of their homes before exposure of their children. Some aspects of secondary prevention programs will be retained, others redirected, and some deferred. For example, CLPPPs should retain the capacity to ensure recommended blood lead screening and follow-up care for children with elevated BLLs. At the same time, the emphasis of health education activities could shift, for example, from providing general lead information to training contractors, property owners, and community members in lead-safe work practices. (See Appendix 4: Intersections of Primary and Secondary Prevention.)

‡ Jurisdictions receiving HUD Community Development Block Grant and HOME Housing Partnership funds are required to develop and update annually a Consolidated Plan that must address lead hazards. Specifically, as part of the needs assessment "the plan must estimate the number of housing units within the jurisdiction that are occupied by low-income families or moderate-income families that contain lead-based paint hazards" (24 CFR Part 91, Section 91.205 (e)), and the strategic plan "must outline the actions proposed or being taken to evaluate and reduce lead-based paint hazards, and describe how the lead-based paint hazard reduction will be integrated into housing policies and programs" (Ibid, Section 91.215 (g)). More generally, this section also provides: "The consolidated plan must describe the jurisdiction's activities to enhance coordination between public and assisted housing providers and private and governmental health, mental health, and service agencies. With respect to the public entities involved, the plan must describe the means of cooperation and coordination among the State and any units of general local government in the metropolitan area in the implementation of its consolidated plan."
8. Evaluate primary prevention progress, and identify research opportunities.
a. CLPPPs should lead development and use of benchmarks and milestones for tracking the pace of primary prevention in their jurisdictions. They should ensure that local data guide decision making. Creative partnerships will be needed to evaluate primary prevention activity and progress. Systems for ongoing surveillance to capture children's BLLs across the full range of possible values and to track the presence and control of lead hazards in housing will be critical for measuring progress.
b. CLPPPs should identify and promote research opportunities as part of all ongoing primary prevention efforts. CLPPPs and their partners should simultaneously plan solid evaluations that will foster a better understanding of the effectiveness of their efforts. These evaluations will involve gathering baseline measures, systematically tracking program processes (i.e., interventions and costs), and measuring a variety of outcomes (in children, families, individual housing units, entire buildings and properties, neighborhoods, and communities). Additional research is needed to determine how to maintain safety for young children during application of primary prevention work, to refine lead safety interventions and standards, to measure the longevity and cost-effectiveness of preventive lead hazard control at various levels of intensity, to evaluate the efficacy of targeted educational efforts in reducing exposures, to evaluate the effectiveness of moving and maintaining young families in lead-safe housing, to determine the effectiveness of finance/subsidy strategies in creating lead-safe housing in targeted areas, to determine the effectiveness of applying lead hazard controls within neighborhoods to reduce cross-contamination of exterior hazards, and to evaluate community changes when regulatory mechanisms or guidelines are put into place. Federal agencies funding primary prevention efforts should consider the value of evaluation as part of any project proposal so that future commitments of resources can focus on successful approaches that cost-effectively achieve local and national goals.

# Appendix I. Sample Roles and Responsibilities for Primary Prevention of Childhood Lead Poisoning

Although the same general functions can be used as part of primary prevention efforts in different jurisdictions, the assignment of roles and responsibilities for carrying out those functions most likely will vary from place to place. ACCLPP recognizes that the institutional or legal environment, capacity of agencies and organizations, level of commitment, resources, competing priorities, and personalities of staff members can affect program plans and implementation. Thus, the following roles for the eight elements are provided as samples for consideration by local programs as they begin to collaborate with other entities to accomplish their program goals.
1. Identify high-risk areas, populations, and activities associated with housing-based lead exposure.

a. Legislators: Ensure that state and local health and housing agencies have sufficient resources and legal authority to establish and maintain necessary health and housing data systems.
b. Health and housing agencies: Collaborate on the analysis of locale-specific data to identify target areas.
c. Child, health, and housing advocates: Advocate for policies and resources to support the establishment and maintenance of health, housing, and related data systems.
2. Use local data and expertise to expand resources and motivate action for primary prevention.

a. Health and housing agencies: 1) Disseminate information to policymakers, media, and community stakeholders about housing that poses an increased risk for lead-associated health effects and about the populations most likely to be affected.
2) Engage policymakers, property owners, insurers, contractors, and others in developing a strategic plan (including resource building) for the primary prevention of lead poisoning.
b. Property owners, insurers, and contractors: Partner with government agencies to develop a strategic plan that establishes incentives and identifies resources for the primary prevention of lead poisoning.
c. Child, health, and housing advocates: Develop local strategies for building community awareness of and value for lead safe housing, and political will to implement primary prevention.
3. Develop strategies and ensure services for creating lead-safe housing.

a. Legislators: 1) Evaluate and revise (as necessary) housing, health, and building codes to address lead safety.
2) Fund and provide incentives for lead-related services (including lead hazard remediation, lead-safe work practices, and dust-clearance training for contractors, maintenance personnel, property owners, and others) and for emergency lead-safe housing for high-risk families.
3) Fund the development of more safe and affordable housing.
b. Health and housing agencies: 1) Develop a strategy for improving existing housing to meet code and address lead safety.
2) Incorporate lead hazard screening, dust testing, referrals and minimum treatment standards into home visits.
3) Support training and provide technical assistance to property owners, contractors, and maintenance staff in lead-safe work practices and dust clearance testing.
4) Educate and provide services and/or referrals to high-risk families.
5) Build partnerships with property owners, insurers, and contractors to develop innovative, cost-effective, incentive-based strategies for making private sector housing lead-safe, especially distressed housing and housing in high-risk areas.
c. Property owners, insurers, and contractors: Partner with government agencies to develop cost-effective strategies for making private-sector housing lead-safe.
d. Child, health, and housing advocates: 1) Advocate for enforcement and improvement of housing and building codes to address lead safety.
2) Advocate for more safe and affordable housing.
3) Identify and advocate for services for high-risk families.
4. Develop and codify specifications for lead-safe housing treatments.

a. Legislators: Evaluate and revise (as necessary) existing housing and building codes to incorporate a lead-safe standard of care for housing that is consistent with research and evaluation findings.
b. Health and housing agencies: 1) Develop and implement systematic approaches to ongoing collection and analysis of dwelling-unit specific data, including lead-paint content, dust levels, and condition of components (e.g., doors, windows, and trim).
2) Collaborate with academic and research institutions to conduct systematic research and evaluation that can be used to support the development of a cost-effective, lead-safe standard of care for housing.
3) Disseminate findings to policymakers, media, and community stakeholders.
5. Strengthen regulatory infrastructure to create lead-safe housing.

a. Legislators: 1) Enact lead-safe housing standards.
2) Fund enforcement activities.
3) Monitor agency compliance.
b. Health and housing agencies: Promote the updating or establishment of a regulatory structure for lead safety, including housing code.

6. Engage in collaborative plans and programs with housing and other appropriate agencies.

b. Health and housing agencies: 1) Identify workload and resource needs to make high-risk housing lead-safe.
2) Build public sector partnerships between agencies (e.g., the HUD-required Consolidated Plan).
3) Build private sector partnerships (e.g., between property owners, insurers, and contractors).
c. Property owners, insurers and contractors: Partner with government agencies to establish and implement plans for making and keeping private sector housing lead-safe, especially distressed housing.
d. Child, health, and housing advocates: 1) Educate constituents about lead safety.
2) Build private sector partnerships.
3) Advocate for adequate resources for lead poisoning prevention.

b. Health and housing agencies: 1) Assess existing programs: examine use of resources relative to primary and secondary prevention needs and evaluate the effectiveness of existing efforts.
2) Engage stakeholders in developing strategies for increasing and redeploying resources to meet primary prevention needs.

d. Academic and research institutions: 1) Collaborate with local health and housing agencies in conducting lead poisoning prevention research and programmatic evaluation.
2) Disseminate research findings.
# Appendix II. Options for Targeting High-Risk Families with Young Children
Until a sufficient stock of affordable, lead-safe housing is readily available, communities must take immediate steps to assist families who need lead hazard assessment and risk-reduction services. CLPPPs can use their expertise in the lead exposure patterns in their jurisdictions to implement efficient strategies to reach at-risk families. Once identified, such families should receive assessment of their children's lead exposure risk and services to help them prevent further exposure.
Several federal programs offer opportunities for efficient identification of and outreach to families with pregnant women and young children (e.g., HS, EHS, and WIC). These programs recognize the influence of the prenatal and early childhood environment on child development, especially cognitive development, and the importance of early child development on later success in school and in life. Each program serves economically disadvantaged families who may be at increased risk for lead poisoning and can serve as a venue for initiating primary prevention activities. Because lead is a known developmental toxicant, ensuring that children are born into and grow up in a lead-safe environment should be integral to efforts to provide an environment that promotes optimal cognitive development. Provisions for assessment of lead-exposure risk and referral are not a formal part of these programs. Each program is described briefly below.

# Strategies for Providing Assessment and Risk Reduction Services to At-Risk Families
The programs described in the preceding section offer efficient opportunities for reaching at-risk families. Following are possible approaches.
- Assessment of lead exposure risk in the communities and populations served. Through use of blood-lead screening, dust-lead screening, census data, and housing data, CLPPPs could identify HS, EHS, and WIC providers that serve communities or populations at high risk for lead poisoning.
# Appendix III. Developing Local Policies for Lead-Safe Housing Treatments
In the early stage of developing local housing treatments, professionals face the challenge of recognizing that a minority of U.S. housing units (25%) contains lead-based paint hazards; therefore, a "one-size-fits-all" treatment plan is unlikely to be appropriate. However, the housing stock is a fluid entity; conditions change over time, so ongoing maintenance of the U.S. housing units in which lead-based paint has been identified (40% nationally) 44 is also of concern. Some states (e.g., Indiana and Rhode Island) have implemented tiered approaches that call for simple, baseline measures for all older housing, with more intensive interventions required in housing posing higher risk or in certain circumstances (e.g., in the home of a child identified as having elevated BLLs). §

Policymakers can consider the following options for achieving lead-safe housing.
- Baseline maintenance requirements for lead safety: All owners of pre-1978 rental properties could be required to follow several baseline actions.

# Appendix IV. Intersections of Primary and Secondary Prevention
Many tools needed for successful case management 3 also are needed for primary prevention. These include a description of the problem; targeting of children at highest risk for priority action (e.g., blood lead screening and lead hazard reduction); delineation of effective and feasible housing treatments; and broad collaboration to secure both public and private resources to promptly eliminate lead-based paint hazards. Specific examples of the application of expertise in secondary prevention toward primary prevention are presented below.
- Lead hazard control in the homes of children with elevated BLLs. Lead hazard control work should, at a minimum, be performed by persons knowledgeable about lead-safe work practices and be followed by clearance testing to confirm that lead dust hazards are not left behind at completion of the work. 3,53,58 Such action not only protects the children already identified as having elevated BLLs, but also accomplishes primary prevention for younger siblings or for the next family that occupies a lead-safe property. The challenge is how to bring about systemic change that will make these activities routine rather than rare.
- Targeting. Ongoing efforts to intensify screening among children at highest risk for lead exposure will provide a crucial point of intersection for secondary and primary prevention efforts. 59,60 The housing in areas selected for intensified blood-lead screening campaigns should itself be screened to identify housing units that could be hazardous to present or future occupants. Homes of children <1 year of age also can be targeted using birth certificate data. Results from such screening would lay the foundation for remediation of identified lead-based paint hazards. Families targeted for blood lead screening could receive priority action to achieve lead safety in their homes before their children become exposed.
- Surveillance. CLPPPs have developed data systems that allow them to link information from disparate sources. These systems can be the foundation for initiatives that allow linking and exchange of critical information about populations, housing stock, and risk factors, including the addresses of homes where children have been poisoned or where lead hazards have been documented. Surveillance of housing stock through visual exterior assessments, followed by dust testing if deteriorated paint is observed, is important to maintaining a stock of lead-safe housing. The condition of housing changes over time. For example, lead hazard reduction, whether triggered by identification of a lead-poisoned child and subsequent environmental investigation or through screening of housing and code enforcement, takes place at a point in time. Conditions in a home may deteriorate after remediation, and a home that was once "lead-safe" can develop new lead hazards over time. Housing registries and integrated surveillance systems enable communities to track housing condition, thereby supporting primary prevention.
- Technology transfer from secondary to primary prevention. Various stakeholders in the secondary prevention of childhood lead exposure have contributed to the development of a vast body of information, knowledge, and experience about lead safety and ways to establish, improve, and maintain it. All of this accumulated wisdom must be used to address primary prevention. For example, housing in which lead hazards have been identified and reduced could be the focal point of efforts to expand training in routine maintenance and repair. Health departments can sponsor training in lead dust sampling and help contractors gain certification. CLPPP staff can educate local and state elected officials and help revise housing codes to incorporate lead safety.
"id": "1c61e8875f7d8f8c9c11b10e095bff0127db09fd",
"source": "cdc",
"title": "None",
"url": "None"
} |
# I. RECOMMENDATIONS FOR AN EPICHLOROHYDRIN STANDARD
The National Institute for Occupational Safety and Health (NIOSH) recommends that employee exposure to epichlorohydrin in the workplace be controlled by adherence to the following sections.
The standard is designed to protect the health and safety of employees for up to a 10-hour workday, 40-hour workweek, over a working lifetime. Compliance with all sections of the standard should prevent adverse effects produced by exposure of employees to epichlorohydrin. Although the workplace environmental limits are considered to be safe levels based on current information, they should be regarded as the upper boundary of exposure, and every effort should be made to maintain exposure as low as is technically feasible.
The standard is measurable by techniques that are available and there is sufficient technology to permit compliance with the recommended standard. The standard will be subject to review and revision as necessary.
These criteria and the recommended standard apply to exposure of employees to solid, liquid, or gaseous 3-chloro-1,2-epoxypropane, hereafter referred to as "epichlorohydrin," either alone or in combination with other substances. Synonyms for epichlorohydrin include 1-chloro-2,3-epoxypropane, (chloromethyl)ethylene oxide, chloromethyloxirane, chloropropylene oxide, gamma-chloropropylene oxide, 3-chloro-1,2-propylene oxide, 2-chloromethyl-oxirane, alpha-epichlorohydrin, epichlorhydrin, and epichlorophydrin.
"Occupational exposure to epichlorohydrin" is defined as exposure to airborne epichlorohydrin at concentrations exceeding the action level. The 1 "action level" is defined as one-half of the recommended time-weighted average (TWA) environmental concentration for epichlorohydrin. Exposure to airborne epichlorohydrin at concentrations equal to or less than one-half of the workplace environmental limit, as determined in accordance with Section 8, will not require adherence to the following sections except for 2, 3, 4(a), 5(a,b,c,d), 6, and 7. If exposure to other chemicals also occurs, provisions of any applicable standard for the other chemicals shall also be followed. Procedures for sampling and analysis of environmental samples and calibration of equipment shall be as provided in Appendices I and II, or by any methods shown to be at least equivalent, in accuracy, sensitivity, and precision to the methods specified.
# Section 2 - Medical
Medical surveillance shall be made available to all persons subject to occupational exposure to epichlorohydrin as described below:
(a) Preplacement medical examinations shall include:
(1) Comprehensive medical and work histories.
(2) Complete physical examination, giving particular attention to the kidneys, liver, respiratory tract, and hematopoietic system.
Additionally, employees shall be evaluated for complaints and evidence of eye, mucous membrane, or skin irritation. Further tests, such as determinations of the concentrations of serum glutamic-pyruvic transaminase (SGPT), serum glutamic-oxaloacetic transaminase (SGOT), and lactic dehydrogenase (LDH), may be considered by the responsible physician.
(3) An evaluation of the ability of the worker to use respirators.
(b) Periodic examinations shall be made available on an annual basis. These examinations shall include, but shall not be limited to:
(1) Interim medical and work histories.
(2) Physical examination as outlined in paragraph (a) (2) of this section. The responsible physician shall consider administering appropriate organ function tests.
(c) Proper medical management shall be provided for employees overexposed to epichlorohydrin. When there is known or suspected overexposure to epichlorohydrin, immediate first-aid measures shall be followed by prompt medical evaluation and followup assistance. First aid shall include removal of the employee from the area of excessive epichlorohydrin exposure, restoration of, or assistance in, breathing by trained personnel, and treatment of chemical burns.
(d) The responsible physician shall advise the worker that available information indicates that large doses of epichlorohydrin induce antifertility effects in rats; however, no effects on potency have been found. The relevance of this study to male or female workers has not yet been determined. It does, however, indicate that employers and workers should attempt to minimize exposures to epichlorohydrin whenever possible.

If the physician becomes aware of any adverse effects on the reproductive systems of workers exposed to epichlorohydrin, the information shall be forwarded to the Director, National Institute for Occupational Safety and Health, as promptly as possible.

First Aid:
In case of skin contact, immediately remove all contaminated clothing, including footwear, wash skin with plenty of water for at least 15 minutes and call a physician. In case of eye contact, flush eyes with water for 15 minutes and call a physician.
# (b) Posting
Areas where epichlorohydrin is manufactured, stored, used, or handled shall be posted with a sign reading:
EPICHLOROHYDRIN
POISON! FLAMMABLE!
SKIN CONTACT CAUSES DELAYED BURNS
VAPOR IRRITATING TO EYES
HARMFUL IF INHALED OR SWALLOWED
The sign shall be printed both in English and in the predominant language of non-English-reading workers, if any; the employer shall use these or other equally effective means to ensure that all employees know the hazards associated with epichlorohydrin and the locations of areas in which there is likely to be occupational exposure to epichlorohydrin.

(1) Chemical safety goggles and face shields shall be provided by the employer and shall be worn during any operation in which epichlorohydrin may splash into the eyes. Suitable eye protective devices shall conform to 29 CFR 1910.133.
(2) Aprons, suits, boots, or face shields shall be worn when needed to prevent skin contact with liquid epichlorohydrin. The protective clothing for this purpose shall be made of impervious material, such as polyethylene, polypropylene, or polyvinyl chloride (PVC). Protective clothing made of neoprene, rubber, or leather is unsuitable.
# (b) Respiratory Protection
Engineering controls shall be used to maintain epichlorohydrin concentrations below the recommended environmental limits. Such control equipment shall be sparkproof. Compliance with the permissible exposure limit may not be achieved by the use of respirators except during the time necessary to install or test the required engineering controls, for nonroutine maintenance or repair activities in which brief exposures at concentrations in excess of the recommended limits may occur, and during emergencies when air concentrations of epichlorohydrin may exceed the recommended limits.
When a respirator is permitted by paragraph (b) of this section, it shall be selected and used pursuant to the following requirements:
(1) For the purpose of determining the type of respirator to be used, the employer shall measure, when possible, the concentration of airborne epichlorohydrin in the workplace initially and thereafter whenever process, worksite, or control changes occur which are likely to increase the epichlorohydrin concentrations; this requirement does not apply when only atmosphere-supplying, positive pressure respirators are used. The employer shall ensure that no worker is exposed to epichlorohydrin in excess of the recommended workplace environmental limits because of improper respirator selection, fit, use, or maintenance.
(2) A respiratory protection program meeting the requirements of 29 CFR 1910.134 shall be established and enforced by the employer.
(3) The employer shall provide respirators in accordance with the provisions of both Table 1-1 and 30 CFR 11 and shall ensure that the employee uses the respirator provided.
(4) Respirators specified for use in higher concentrations of epichlorohydrin may be used in atmospheres of lower concentrations.
(5) The employer shall ensure that respirators are adequately cleaned, and that employees are instructed in the use and the testing for leakage of respirators assigned to them.
Where an emergency which could result in employee injury from overexposure to epichlorohydrin may develop, the employer shall provide respiratory protection as listed in Table 1-1.

Table 1-1

| Concentration of epichlorohydrin | Respirator type |
| --- | --- |
|  | (1) Gas mask (full facepiece) with chin-style or front-mounted organic vapor canister and impervious plastic cover for head and neck; (2) Type C supplied-air respirator operated in the pressure-demand (positive pressure) or continuous-flow mode with a full facepiece and impervious plastic cover for head and neck |
| Less than or equal to 1,000 ppm | Type C supplied-air respirator with hood, helmet, or suit operated in the continuous-flow mode |
| Greater than 1,000 ppm | (1) Self-contained breathing apparatus with full facepiece operated either in the demand (negative pressure) mode or in the pressure-demand (positive pressure) mode, worn under an impervious plastic suit with headpiece; (2) Combination Type C supplied-air respirator with full facepiece operated in the pressure-demand (positive pressure) mode and an auxiliary self-contained air supply, worn under an impervious plastic suit with headpiece |
|  | (1) Self-contained breathing apparatus in the pressure-demand (positive pressure) mode with full facepiece; (2) Combination supplied-air respirator with an auxiliary self-contained air supply and full facepiece operated in the pressure-demand (positive pressure) mode |
| Escape (no concentration limit) | (1) Gas mask (full facepiece) with chin- or front-mounted organic vapor canister; (2) Self-contained breathing apparatus with full facepiece operated either in the demand (negative pressure) mode or in the pressure-demand (positive pressure) mode |

During cleanup operations, complete skin and respiratory protective equipment shall be used. After a cleanup operation, the employee shall be required to shower upon removing the protective equipment. If epichlorohydrin is spilled on shoes made of leather, canvas, rubber, or of any other permeable material, the shoes shall be rendered unusable and discarded.
If clothing is contaminated with epichlorohydrin, it shall be removed promptly and laundered thoroughly before reuse.

(1) Entry into confined spaces, such as tanks, pits, tank cars, barges, process vessels, and tunnels, shall be controlled by a permit system. Permits shall be signed by an authorized employer representative certifying that preparation of the confined space, precautionary measures, and personal protective equipment are adequate, and that precautions have been taken to ensure that the prescribed procedures will be followed.
(2) Confined spaces which have contained epichlorohydrin shall be inspected and tested for oxygen deficiency, epichlorohydrin, and other contaminants, and, prior to entry by the employee, they shall be thoroughly ventilated, cleaned, and washed.
(3) Inadvertent entry of epichlorohydrin into the confined space while work is in progress shall be prevented by disconnecting and blocking of epichlorohydrin supply lines or by other appropriate means.
(4) Confined spaces shall be ventilated appropriately while work is in progress to keep the concentration of epichlorohydrin at or below the recommended environmental limits and to ensure an adequate supply of oxygen.
(5) Individuals entering confined spaces where they may be exposed to epichlorohydrin shall wear either a combination Type C supplied-air respirator operated in the continuous-flow (positive pressure) mode or an auxiliary breathing air supply operated in the pressure-demand (positive pressure) mode equipped with a full facepiece, or a self-contained breathing apparatus operated in the pressure-demand (positive pressure) mode.
Each individual shall also wear a suitable harness with lifelines tended by another employee outside the space who shall also be equipped with the necessary protective equipment, including a self-contained breathing apparatus which operates in the pressure-demand (positive pressure) mode and has a full facepiece. Communication (visual, voice, signal line, telephone, radio, or other suitable means) shall be maintained by the standby person with the employees inside the enclosed spaces.
(g) For all work areas where epichlorohydrin is used, procedures as specified in this section, as well as any other procedures appropriate for a specific operation or process, shall be formulated in advance.
Employees shall be comprehensively instructed in the implementation of emergency procedures.
Epichlorohydrin is a reactive molecule which may form covalent bonds under biologic conditions. In reactions with alcohols, amines, thiols, and other nucleophilic biochemical constituents of the cell, the epoxide ring opens and forms a new, stable, covalent carbon-hetero atom bond, as shown in the reaction in Figure XIII-1. The initial reaction product may undergo a second nucleophilic reaction to form stable, covalent cross-linking bonds between two molecules, either by direct displacement of the chlorine atom or through the formation of an unstable, short-lived cyclic intermediate. These cross-linking bonds may have a high degree of chemical stability, such as is typically found in epoxy resins. The chemical characteristic of epichlorohydrin to react as a bifunctional alkylating agent is an intrinsic property of its molecular structure.
Biologic organisms contain many different chemical nucleophiles, such as alcohols, acids, amines, sulfhydryls, carbohydrates, lipids, proteins, ribonucleic acids, and deoxyribonucleic acids. Epichlorohydrin has a greater tendency to react with more readily polarized groups, such as sulfhydryl groups, than with less readily polarized groups, such as hydroxyls.
The half-life of epichlorohydrin in water at a pH of 7 is 36.3 hours. Although the half-life of epichlorohydrin in biologic tissues is not known, the known presence of large numbers of nucleophiles in tissue suggests that it is shorter than the half-life in water. Mammalian enzymes which catalyze the hydrolysis of epichlorohydrin, or otherwise degrade it, would further decrease the biologic half-life. Under hypothetical, constant, nonlethal exposure conditions, a steady-state concentration of epichlorohydrin would eventually be achieved in all tissues, resulting in a steady-state rate of alkylation of tissue constituents. For reviews of the pertinent literature, the reader is referred to Ross and Loveless.
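Assuming simple first-order hydrolysis, the 36.3-hour half-life quoted above implies the following decay of an initial concentration; this short script is a worked numeric check, not part of the original analysis.

```python
# Worked check of the quoted 36.3-hour hydrolytic half-life, assuming
# first-order kinetics: fraction remaining = 2 ** (-t / t_half).

HALF_LIFE_H = 36.3

def fraction_remaining(t_hours: float) -> float:
    return 0.5 ** (t_hours / HALF_LIFE_H)

for t in (12, 24, 36.3, 72):
    print(f"after {t:>5} h: {fraction_remaining(t):.1%} remaining")
```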
# Extent of Exposure

Epichlorohydrin is commercially synthesized from allyl chloride, allyl alcohol, dichlorhydrin-glycerine, or propylene. The total US production of epichlorohydrin was about 550 million pounds in 1975.
Projected expansion plans, if accomplished, would increase total production to 715 million pounds by 1978. In 1969, the total US production of crude epichlorohydrin was about 322 million pounds. An estimated 58% of this was used in the manufacture of synthetic glycerine and 42% was processed to refined epichlorohydrin. Refined epichlorohydrin is used in the manufacture of epoxy resins and other products.
A number of cases of dermal sensitization have been reported in workers in the epoxy resin-producing and resin-using industries. Attempts to identify the causative agents have been only partially successful.

They felt that specific work practices were necessary since epichlorohydrin penetrates rubber and leather. They noted that there was a latent period of several minutes to several hours between contact with epichlorohydrin and its manifestations.
The authors drew attention to similar latent periods for burns caused by X-rays or by such alkylating agents as ethylene oxide.
The authors recommended that studies of the peripheral vasculature be done as soon as possible after dermal contact with epichlorohydrin to distinguish between sequelae and preexisting vascular alterations. This recommendation was made because of the persistent erythema observed after every accident and the pronounced peripheral arteriosclerosis diagnosed in one patient. No indication of sensitization was reported in the two workers whose contact with epichlorohydrin was repeated. These cases demonstrated that skin exposure to liquid epichlorohydrin can cause severe chemical burns. When skin contact with epichlorohydrin was short, the severity of the burns was much less than when skin contact was prolonged. Therefore, it can be concluded that the intensity of the burns is dependent on the duration and extent of exposure, which control the extent of reaction with cellular constituents.
In 1966, Pet'ko et al investigated the health of workers engaged in the production of epichlorohydrin from dichlorhydrin glycerine (DCG) in Russia. Environmental monitoring for both epichlorohydrin and DCG was done by an unspecified sampling method. In some instances, the epichlorohydrin concentration in air was reported to be 2-14 times greater than its maximum permissible concentration.

Although the authors considered the individual worker's length of service to be unrelated to the nature and frequency of the complaints, they did not report the nature of the complaints. They concluded that no deviations from the normal which could be interpreted in terms of occupational factors, other than two cases of occupational dermatitis, were identifiable. The dermatitis cases were not discussed. Pet'ko et al recommended that further studies be done, and that annual medical examinations by an internist, a dermatologist, and a neuropathologist be given. The authors gave no explanation for including a neuropathologist, nor did they specify the clinical basis for recommending blood and urine examinations.

With Fregert as the subject, the authors found that, following application for 2 days of patches containing epichlorohydrin diluted to 0.1, 0.5, and 1.0% with ethanol, no immediate effects on the skin were visible, but reactions developed after 8-11 days at all test sites.
Retesting with 0.1 and 0.01% epichlorohydrin induced a positive reaction after 2 days to 0.1% epichlorohydrin and resulted in erythema with 0.01% epichlorohydrin.
Propene oxide, 0.2% in ethanol, gave a positive reaction when it was applied to skin previously exposed to epichlorohydrin.
Negative reactions were obtained with 1-chloropropane (1%), 1-chloro-2-propanol (1%), and ethylene oxide (1%) after application of these compounds to areas where epichlorohydrin had been applied. They concluded that the epoxy group and the three carbon atoms in the chain, but not the chlorine, were responsible for the sensitization.

No details of these exposures were given.
In addition, the method by which the epichlorohydrin concentrations were measured or verified was not reported.
In the absence of such information, it is reasonable to assume that these observations are derived from accidental occupational exposures to epichlorohydrin. In the absence of further details on the extent of epichlorohydrin exposure and individual histories, no general conclusions can be drawn from these data.
In the majority of the 48 cases, the deviations from normal ranges were no longer detectable after a few months.
In five of the reported cases, however, one or more of the measured values remained outside the normal range from 7 to 13 years after exposure. Reexamination of one of these men about 4.5 months later gave values for the various measurements that were within the normal ranges.
From the available data, it is not possible to attribute these persistent changes solely to epichlorohydrin exposure. The most frequently observed deviations from the normal ranges were a decrease in the percentage of polymorphonuclear leukocytes and an increase in the percentage of monocytes in the total leukocyte count of the blood; even these changes were not consistent in any employee. The data suggest that a detailed study of the medical histories of these individuals and their possible exposure to other chemicals is necessary before the hazards of permanent injury from epichlorohydrin exposure can be evaluated.

All illnesses during epichlorohydrin exposure were respiratory, whereas only 24 (27%) of illnesses during nonexposure to epichlorohydrin were respiratory. The consultants concluded that an illness during employment in an epichlorohydrin-exposure area is more likely to be respiratory than one during employment elsewhere.
Examination of the ECG revealed that 5.0% of the employees in the minimal-exposure group and 2.7% in the moderate-exposure group had abnormal records.
The consultants concluded that epichlorohydrin exposure had not influenced the ECG records.
Study of pulmonary roentgenograms indicated that, in both the moderate-and minimal-exposure groups, 1.8% contained abnormal findings, such as emphysematous blebs, mild pneumonic infiltration, pneumonia, mild chronic pulmonary fibrosis, minor emphysema, and infiltrative lesions. The consultants noted that, in addition to the same frequency of abnormal X-radiographs in both exposure groups, a rate of 1.8% did not appear to be greater than that which might be expected in similar unexposed workers.
The urinalysis showed no clear difference between the two exposure groups or any group tendency toward abnormality. Hematocrit index, lymphocyte count, SGOT and SGPT activities, and creatinine and BUN values were within the normal range. White blood cell counts were above the normal limit during the 4th, 8th, and 12th years of employment for the moderate-exposure group.
The mean group values for monocyte cell count were within normal limits except for the employees with more than 6 years of moderate exposure, in whom values fluctuated fairly widely around the laboratory normal. The difference between the average values for the two exposure groups was statistically significant (t value was 4.35 at a P value of less than 0.05) at the 3rd year. Eosinophil cell count was slightly elevated for the moderate-exposure group during the 2nd and 5th years of epichlorohydrin exposure. Lactate dehydrogenase activity was elevated in both groups. Examination of pulmonary function data indicated that the mean values for both groups did not differ from their normal values. The albumin-to-globulin ratio was normal for both groups, but, at the 4th year of exposure, the ratio for the moderate-exposure group was significantly lower (t value 3.88, with a P value of less than 0.05) than that of the minimal-exposure group. Based on all these findings, the consultants concluded that there were no toxic effects on the blood, liver, or kidneys related to the epichlorohydrin exposures.
This study, although helpful, is inadequate to fully estimate the health hazards associated with epichlorohydrin exposure for several reasons.
The data base comprises a group of employees occupationally exposed to epichlorohydrin for periods of 6 months-16 years, but no consideration was given to those who dropped out because of illness, retirement, or death. There is no way to evaluate from this study the importance of the group lost from the observation. A major deficiency of this study is the lack of a control group. In the absence of such a group, the "normal" range of the clinical and the biochemical tests is dependent on laboratory values alone. Further, because of the lack of controls, the consultants compared the effects of the moderate-exposure group with those of the minimal-exposure group. The consultants stated that the measure of exposure was "a crude one" and that it was only an estimate by the company of whether a given individual's exposure was minimal or moderate. No quantitative data with regard to exposure were provided, and there is no indication of even what range of concentration was referred to by the classifications "moderate" and "minimal." Therefore, any absence of dose-related effects could be accounted for both by the absence of a clear distinction between the groups and by loss of an affected segment of population. From the biochemical data, it is possible to conclude that groups of individuals "exposed" to epichlorohydrin do not show variations from the measured norms held to be appropriate by the laboratory. Further, because of the deficiencies discussed in this paragraph, the significance of the finding on respiratory tract illness is dubious. If the number of tests in any year of employment was related to the number of employees, the length of employment in the moderate-exposure group was much shorter than in the minimal-exposure group, ie, less than half the workers were still employed during the 1st year and available for tests.
Lung congestion, edema, consolidation, and inflamed areas with signs of abscess formation were also observed. Although epichlorohydrin is the most plausible causative agent, the absence of controls exposed to the propanol vehicle alone makes it difficult to attribute these changes solely to epichlorohydrin.
The rats exposed to epichlorohydrin at 56 ppm for 18 periods (approximately 864 mg/kg total amount inhaled) showed respiratory distress, nasal discharge, and weight loss.
Urinary protein excretion, hemoglobin concentrations in blood, and differential cell counts were normal.
Microscopic examination showed only an abscess in one lung, which the author did not attribute to epichlorohydrin exposure.
Rats exposed to epichlorohydrin vapor at 27 ppm for 18 days (approximately 417 mg/kg total amount inhaled) showed mild nasal irritation.
The lung of one rat contained hemorrhagic and consolidated areas. Rats exposed to epichlorohydrin at 17 ppm for 19 exposure periods showed no apparent effects at necropsy or on histopathologic examination. This exposure most likely resulted in an estimated inhaled amount of about 277 mg/kg. Two of eight rats developed pulmonary infections when exposed to epichlorohydrin at 9 ppm for 18 exposures (approximately 139 mg/kg, total amount inhaled); no effects were observed in the other six animals.
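The "total amount inhaled" figures quoted in these studies can be approximated from concentration, exposure time, ventilation, and body weight. The sketch below uses the molecular weight of epichlorohydrin (92.5 g/mol) together with assumed round values for rat minute ventilation and body weight, so it reproduces only the order of magnitude of the published estimates.

```python
# Order-of-magnitude reconstruction of a "total amount inhaled" estimate.
# Ventilation and body weight are assumed values, not figures from the study.

MW_G_PER_MOL = 92.5        # epichlorohydrin
MOLAR_VOLUME_L = 24.45     # liters per mole at 25 C and 1 atm

def ppm_to_mg_per_m3(ppm: float) -> float:
    return ppm * MW_G_PER_MOL / MOLAR_VOLUME_L

def inhaled_dose_mg_per_kg(ppm, hours_per_day, days,
                           minute_volume_l=0.16,  # assumed rat ventilation, L/min
                           body_weight_kg=0.25):  # assumed rat body weight
    air_m3 = (minute_volume_l / 1000) * 60 * hours_per_day * days
    return ppm_to_mg_per_m3(ppm) * air_m3 / body_weight_kg

# 56 ppm, 6 h/day, 18 exposure days (cf. the ~864 mg/kg estimate above)
print(f"{inhaled_dose_mg_per_kg(56, 6, 18):.0f} mg/kg")  # ~880 with these assumptions
```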
Gage also exposed two groups of two rabbits each to epichlorohydrin vapor. The periods of exposure were not mentioned, but probably, like the rats, the rabbits were exposed 6 hours/day, 5 days/week.
The first group was exposed for 20 daily periods at 35 ppm (approximately 439 mg/kg total amount inhaled). The second group was exposed for two periods at 16 ppm (approximately 20 mg/kg total amount inhaled). The latter exposure was reduced to 9 ppm and was continued for 20 more days (approximately 113 mg/kg total amount inhaled). Thus, the second group of rabbits is estimated to have inhaled a total amount of 133 mg/kg.
Inhalation of epichlorohydrin at 35 or 16 ppm produced nasal irritation.
At 9 ppm, no adverse effects were observed. Post-mortem examination did not reveal any abnormalities. Subsequent to these observations, Gage recommended that the epichlorohydrin concentration in the occupational environment where employees are working continually without respiratory protection should not exceed 5 ppm. However, it is noted that Gage did not attempt to investigate any long-term effects of epichlorohydrin inhalation, since the longest exposure lasted only 19 days. The recommendation of 5 ppm was therefore based only on effects resulting from short-term exposure.
In 1961, Kremneva and Tolgskaya reported the effects of epichlorohydrin on mice, rats, and rabbits. Routes of administration were stomach intubation, subcutaneous injection, inhalation, and skin and eye application. Three groups of 10 mice and 5 rats each were administered aqueous solutions of epichlorohydrin by stomach tube in doses of 500, 325, and 250 mg/kg. All animals given doses of 500 or 325 mg/kg died within the first 2 days. In both rats and mice, the same type of intoxication pattern was evident: low mobility, slow and labored breathing, hyperemia of the skin, subcutaneous bleeding, ataxia, periodic body tremor, and distention of the abdomen. Microscopic examination of tissues of the dead animals revealed plethora of the internal organs, hemorrhage and edema of the lungs and pulmonary tissue, vacuolization of the liver cells with what was described as fatty degeneration, and degenerative processes in the epithelium of the convoluted tubules accompanied by some necrosis in the kidneys.
Foci of necrosis in the stomach and intestinal mucosa also were observed. At the 250-mg/kg dose, no visible signs of intoxication were evident during the 14-day observation period.
A total of 50 mice were injected with epichlorohydrin at doses of 500, 375, 250, and 125 mg/kg. The results of the injections were not described except that, at doses of 500 and 375 mg/kg, all of the mice died, and, at 250 mg/kg, 7 of 10 mice died. Epichlorohydrin at a dose of 125 mg/kg was tolerated without visible alteration in the behavior of the mice. To assess the irritant effects on eyes, 0.1 ml of epichlorohydrin solution in cottonseed oil was instilled into the right eyes of the rabbits; the left eyes served as the untreated controls.
The eyes were examined every 30 minutes for 3 hours. Corneal damage was present in 80% of the animals and a lesser degree of irritation occurred in the rest of the animals. Epichlorohydrin was found to be a strong ophthalmic irritant.
There was no evidence of sensitization to epichlorohydrin, as determined by the maximization test in 5 Hartley-strain guinea pigs.
Groups of 12 male Sprague-Dawley rats were injected ip daily for 30 consecutive days with 0.00955 or 0.01910 ml/kg of epichlorohydrin or 0.01910 ml/kg of cottonseed oil. At the end of 30 days, hemoglobin values were increased significantly at the low dose but decreased significantly at the higher dose (P values less than or equal to 0.05). The concentration of neutrophilic metamyelocytes increased significantly (P value less than or equal to 0.05) in the high-dose group but remained equal to that of controls for the rats administered epichlorohydrin at the lower dose.
Lymphocytes showed an insignificant dose-related decrease in frequency. A slight, insignificant dose-related increase in clotting time was also observed. Hepatic function, as measured by the BSP test, was not impaired.
The heart-to-body weight ratio increased in a dose-related way but the increases were not statistically significant. The ratio of the weight of the kidney to body weight increased significantly with both doses (P value less than or equal to 0.05). The ratio of brain-to-body weight of the rats was higher in the epichlorohydrin-treated rats than in the controls.
Microscopic examination did not reveal changes in any organs except in the lungs. Lesions in the lungs were evident in all groups, but the incidence and severity were "somewhat greater" in the treated animals than in the controls. It is noteworthy that the authors did not detect any kidney damage in the microscopic examination.

All animals treated with epichlorohydrin had lower hematocrit and erythrocyte counts than the control rats, but this was significant (P value less than or equal to 0.05) only for the hematocrit value of the middle-dose group. An increase in the concentration of platelets in the blood also was observed in the epichlorohydrin-treated animals. Total leukocyte counts were lower in the low-dose group and higher in the highest-dose group than in the control group. These differences were not significant (P value less than or equal to 0.05). Leukocyte counts for the middle-dose and control groups were the same. A dose-related increase in the average percentage of segmented neutrophils was observed but was significant (P value less than or equal to 0.01) only for the high-dose group. The percentage of eosinophils increased in all experimental groups, the increases in the low- and the high-dose groups being significant (P value less than or equal to 0.05).
Significant reductions in the percentage of lymphocytes in the total leukocyte count were observed in the animals treated with the two highest doses (P values less than or equal to 0.05 and 0.01, respectively). The ratio of the weight of the brain to that of the body was significantly lower (P value less than or equal to 0.01) in the animals treated with epichlorohydrin at the highest dose than in the control animals; this is in contrast to the effect obtained in animals receiving 30 daily doses of epichlorohydrin.
The change in the brain-to-body weight ratio suggests abnormal changes in the CNS. The organ weight-to-body weight ratios for heart, kidneys, and liver were significantly (P values less than or equal to 0.01, 0.01, and 0.05) higher for the animals treated with the highest dose than for the controls. The ratio of spleen-to-body weight was not significantly different from that found in the controls.
The results indicate that repeated doses induce a cumulative effect on basic cellular growth with a subsequent perturbation of the normal rates of growth of these internal organs.
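The ip doses above are reported in ml/kg. Converting to mass per body weight requires the density of liquid epichlorohydrin, taken here as about 1.18 g/ml (an assumed handbook value, not a figure stated in this document); a minimal conversion helper:

```python
# Minimal ml/kg -> mg/kg conversion for the ip doses quoted above.
# The density below is an assumed handbook value for liquid epichlorohydrin.

DENSITY_G_PER_ML = 1.18

def ml_per_kg_to_mg_per_kg(dose_ml_per_kg: float) -> float:
    return dose_ml_per_kg * DENSITY_G_PER_ML * 1000.0  # grams -> milligrams

for dose in (0.00955, 0.01910):
    print(f"{dose} ml/kg is roughly {ml_per_kg_to_mg_per_kg(dose):.1f} mg/kg")
```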
The effect of epichlorohydrin on pentobarbital-induced sleeping time was studied in mice.
Male ICR-strain mice in groups of 10 were administered 1/10, 1/5, or 1/2 of the acute ip LD50 dose (0.14 ml/kg) of epichlorohydrin. Control mice were administered ip saline injections.
Similar groups of mice were exposed to epichlorohydrin vapor at 1/10, 1/5, or 1/2 of the LT50 dose (71.89 mg/liter, 9.13 minutes). Control mice were placed in an inhalation chamber and were exposed to uncontaminated air.
Twenty-four hours after exposure to epichlorohydrin by either route, 50 mg/kg of sodium pentobarbital was administered ip. A dose-related increase in sleeping time was observed in all the epichlorohydrin-treated animals;
however, the only significant increases (P value less than or equal to 0.01) were in the groups receiving the highest ip dose or the highest inhalation exposure.

Based on these extensive studies, it can be concluded that epichlorohydrin is severely irritating to the skin and the eyes, and perhaps to the lungs. It also affects some metabolic processes of the liver. In most cases, the severity of its effects appears to be dose dependent.
In 1972, Lawrence and Autian reported additional toxic effects; severe lesions of the neurons in the medulla oblongata, Ammon's horn, and cerebellum were also present.
Animals inhaling epichlorohydrin at 0.5 ppm showed an increase in leukocytes with altered fluorescence and a decrease in the nucleic acid content of the blood, but no significant effect on the amount of urinary coproporphyrin.
The concentration of leukocytes with altered fluorescence increased significantly (95% probability of not being a chance variation), but the effect was less marked than in the animals exposed at the highest concentration. Animals in the third group, inhaling epichlorohydrin at 0.05 ppm, did not show similar effects. There were no morphologic differences between the control animals and those in the last two groups.
Based on the results of this continuous inhalation study, Fomin recommended that the mean diurnal maximum permissible concentration of epichlorohydrin in the atmosphere not exceed 0.2 mg/cu m (approximately 0.05 ppm). The exposure in this study was continuous and the effect of intermittent recovery periods was not evaluated.
In 1969, Golubev recorded changes in the diameters of the pupils of the eyes of rabbits. Groups of six rabbits were used to ascertain the irritating effects of eight different chemicals, including epichlorohydrin.
The rabbits' eyes were illuminated with a uniform reflected light, and the diameters of the pupils were measured with an instrument referred to as a tangential pupilometer. A 0.25 M solution of epichlorohydrin was instilled into the conjunctival sacs and the pupil diameters were measured 1, 3, 5, 10, 15, 20, 25, and 30 minutes later. In the control rabbits, the pupil diameters were measured both before instillation and at similar time intervals after instillation of 0.05 ml of saline solution into the conjunctival sacs.
Epichlorohydrin caused constriction ranging from 1 to 16% during the first 20 minutes, the initial diameter being considered 100%. The author noted that this effect was elicited at an epichlorohydrin concentration that caused no visible changes in the conjunctivae or cornea.
Thus, epichlorohydrin at 0.25 M (2.3%) has a measurable effect on the eyes.
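For a modern reader, the arithmetic behind this parenthetical conversion is easy to check. The short Python sketch below converts molarity to an approximate percent (w/v) solution; the molecular weight used for epichlorohydrin (92.52 g/mol) is supplied here for illustration and is not taken from the study itself.

```python
# Convert a molar concentration to an approximate percent (w/v) solution.
# The molecular weight for epichlorohydrin (C3H5ClO) is an assumed value
# used for illustration.

MW_EPICHLOROHYDRIN = 92.52  # g/mol

def molar_to_percent_wv(molarity: float, mol_weight: float) -> float:
    """Return percent weight/volume (g per 100 ml) for a given molarity."""
    grams_per_liter = molarity * mol_weight
    return grams_per_liter / 10.0  # g/L -> g/100 ml

# 0.25 M epichlorohydrin works out to roughly 2.3% w/v, as stated above.
print(f"{molar_to_percent_wv(0.25, MW_EPICHLOROHYDRIN):.1f}% w/v")
```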
In 1968, Pallade et al subcutaneously injected 67 white rats with a single dose of epichlorohydrin at 125 mg/kg to investigate epichlorohydrin-induced kidney damage.
Prior to injection, the urine of each animal was examined for protein, potassium, and sodium; thus, each animal served as its own control. The animals were kept in metabolism cages; their urines were examined 1, 2, 3, and 8 days after epichlorohydrin administration.
Oliguria or anuria was exhibited by 53 (79%) animals and polyuria by 4 (6%). The remaining 10 (15%) produced urine at normal rates.
Of the 53 rats that produced little or no urine during the period immediately after the administration of epichlorohydrin, 7 (13.2%) entered a stage of polyuria within 2-3 days after the administration of the dose.
The mortality rate was 13.4%, death occurring only among the oliguric and anuric animals. Pallade et al noted that proteinuria and the parallel oliguric and anuric conditions indicated the occurrence of renal damage. Animals in which tissue repair and regeneration occurred survived. The authors concluded that the signs of renal disorders confirmed that epichlorohydrin is nephrotoxic to rats; therefore, medical examinations oriented to the detection of renal disorders were recommended for workers exposed to epichlorohydrin.

Information on how the epichlorohydrin vapor was generated and on the size of the chamber was not provided, nor was it indicated whether the chamber was operated in the static or the dynamic mode. The total amounts of epichlorohydrin inhaled during each of the 4-hour exposure periods are estimated to have been approximately 51, 2.9, and 1.0 mg/kg, respectively.
The same effects seen in the previous experiment were observed.
In addition, the weights of the liver and kidneys were increased, whereas those of the lungs and the spleen were decreased.
Removal of BSP from blood was decreased on the day of exposure, but not reliably on the day after the exposure. The production of urine was increased by all three levels of exposure, with concomitant decreases in the specific gravity of the urine. The daily output of chlorides in the urine on the day after exposure was usually increased by a larger factor than the production of urine. The daily excretion of protein in the urine was also increased, but less than that of chlorides. No additional studies of pulmonary and splenic function were reported. Reductions in both oxygen consumption and body temperature also were observed.
A significant (P value not given) stimulating influence on the mobility of spermatozoa also was observed.

Renal cytochrome oxidase activity was determined in 14 rats and in 10 controls 24 hours after they had been injected with epichlorohydrin. Statistically significant (P value less than or equal to 0.01) inhibition was observed in the treated animals. Renal succinic dehydrogenase activity, determined 24 hours after epichlorohydrin administration, was similar for 10 experimental and 6 control animals. Examination of 40 treated and 30 control rats showed reductions in renal carbonic anhydrase activity of 6-11% 24 hours after the administration of epichlorohydrin; this reduction was not statistically significant. Glutamic-pyruvic transaminase (GPT) (E.C. 2.6.1.2) activity was measured 2.5 and 24 hours after administration in both renal tissue and serum. There were 30 animals in the group examined at 2.5 hours after epichlorohydrin administration and 40 in the group examined 24 hours postdose. In the kidneys, no change was observed at 2.5 hours, but a significant reduction (P value less than or equal to 0.01) was observed at 24 hours: 0.230 units of activity in the controls in contrast to 0.100 units in the experimental group. In serum, a statistically significant (P value less than or equal to 0.01) increase in SGPT activity was observed at both the 2.5-hour and the 24-hour intervals; the average SGPT activity in the experimental group ranged from 0.051 ± 0.022 units at 2.5 hours to 0.058 ± 0.018 units at 24 hours, in contrast to the control group average of 0.034 ± 0.010 units. Glutamic-oxaloacetic transaminase (GOT; E.C. 2.6.1.1) activities in the kidney were similar for the controls and the intoxicated animals at 2.5 hours but significantly (P value less than or equal to 0.01) different at 24 hours: 0.261 ± 0.046 units in the experimental rats versus 0.302 ± 0.040 units in the controls. SGOT activity was significantly increased in the experimental rats: 0.045 ± 0.009 units at 2.5 hours (P value less than or equal to 0.02) and 0.046 ± 0.010 units at 24 hours (P value less than or equal to 0.01), in contrast to 0.040 ± 0.008 units observed in the controls. Alkaline phosphatase activity was assayed in 30 experimental animals and in 30 controls. The animals given epichlorohydrin showed a significant reduction (P value less than or equal to 0.01) in mean kidney phosphatase activity (0.029 ± 0.008 units) at 2.5 hours but had a mean alkaline phosphatase activity comparable with that in the controls (0.033 ± 0.014 units) at 24 hours.
However, serum alkaline phosphatase activity decreased progressively.

Exposure to heat alone during an 11-day period had no effect on body weight. Inhalation exposure at a concentration of 0.6 mg/liter of epichlorohydrin for 4 hours/day, eight times during 11 days, decreased body weight by about 9%. Combination of these two exposures, the heat exposure following immediately after that to epichlorohydrin, had the same effect on body weight as exposure to the chemical alone. This finding agrees with that relating to the LC50. In conclusion, the amount of heat stress had no effect on these two responses of rodents to epichlorohydrin. Despite these two negative findings, there were a few instances in which the addition of heat stress seemed to modify the toxic actions of the chemical stressor. Thus, rats exposed to both heat and epichlorohydrin were reported to have had somewhat less marked alterations of the structure of the liver and the thyroid follicles, but more pronounced ones of the adrenal medulla, than those exposed to epichlorohydrin alone. Five rats showed moderate accumulations, resembling abscesses in some instances, of leukocytes around blood vessels in the interstitial tissue of the thyroid.

A dose of 150 mg/kg of epichlorohydrin was administered to 10 male mice by ip injection. Each male was then caged with three undosed virgin female mice for 1 week. The females were replaced each week for 8 weeks and then killed and examined for pregnancy. The numbers of total live implants, early fetal deaths, and late fetal deaths were recorded. Since, in general, late fetal deaths were exceedingly rare, total implants and early fetal deaths were the only features of pregnancy analyzed. The authors did not report the observation of any effects of epichlorohydrin on early fetal deaths or other reproductive characteristics, including male fertility.
Therefore, it can be concluded that epichlorohydrin at 150 mg/kg given to male mice in a single ip injection does not increase the ratio of early fetal deaths to the total number of implantations occurring in the uterus of the female mice to which they are mated.
A summary of the results of the animal studies, other than those for carcinogenicity and mutagenicity, is presented in Table III.

In the mice injected with epichlorohydrin, there was a skin papilloma in one mouse after 11.5 months, a hepatoma in another mouse after 13 months, and 2 lung adenomas in 1 mouse after 24 months. The control animals had occasional hepatomas and lung adenomas, but no skin papillomas. The survival of the epichlorohydrin-treated animals was poor: during the 1st year, 12 mice died, and after the 2nd year, only a few survived. Falk noted that, except for the single skin papilloma, the tumors were generally of the same type and frequency of occurrence as in the control groups. Thus, the experiment was inconclusive.

In a skin-painting assay, the hair of the test animals was clipped initially and whenever necessary during the experiment. A dose of 2.0 mg of epichlorohydrin in 0.1 ml of acetone was applied three times/week to the interscapular region. The test lasted for 580 days (83 weeks). Skin lesions were diagnosed as papillomas when they reached 1 cu mm in size and persisted for 30 days or more. No papillomas or carcinomas occurred in the animals to which epichlorohydrin, acetone, or nothing was applied.
A group of female ICR/Ha mice had 2.0 mg of epichlorohydrin in 0.1 ml of acetone applied to the skin, followed 2 weeks later by applications, three times each week throughout the study (total duration, 385 days), of 2.5 µg of phorbol myristate acetate (PMA), a promoter, in 0.1 ml of acetone.
In the group of 50 mice to which epichlorohydrin was applied, nine mice developed papillomas and one developed a carcinoma. Control animals to which solvent alone (30 mice) or no chemical (100 mice) was applied developed no tumors. Of the 30 mice to which PMA alone was applied, three developed papillomas.
For an assay by subcutaneous injection, 50 mice were injected weekly in the left flank with 1.0 mg of epichlorohydrin in 0.05 ml of tricaprylin.
This test also lasted for 580 days.
Of the animals given epichlorohydrin, six developed sarcomas and one an adenocarcinoma (P value less than or equal to 0.05). In comparison, 1 of 50 mice injected with tricaprylin alone developed a sarcoma, and none of the untreated control animals developed any tumors. The significance of subcutaneous sarcomas occurring at the injection site has been discussed by Grasso and Golberg.

In an ip assay, 30 mice received weekly injections into the lower abdomen of 1.0 mg of epichlorohydrin in 0.05 ml of tricaprylin. The experiment was terminated after 450 days. None of the mice developed local sarcomas, but 11 had papillary lung tumors. Of the mice given tricaprylin alone, 10 had papillary tumors of the lungs and 1 developed a local sarcoma.
The results of these studies raise concern about an additional risk of carcinogenesis for the segment of the human population continuously exposed to epichlorohydrin throughout the working lifetime. Available experimental evidence indicates that the risk of induction of cancer in animals and in humans can be reduced by reducing the maximum allowed cumulative dose to which they are exposed. Based on these reports, further tests involving long-term inhalation exposures of animals to epichlorohydrin must be done to determine the degree of occupational risk following chronic inhalation of epichlorohydrin.
A tabular summary of the results of the carcinogenicity studies is also presented.

Recently, workers exposed to epichlorohydrin were monitored for the presence of mutagens in their urine (DJ Kilian, written communication). The most plausible molecular basis of the epichlorohydrin-induced mutations in all of these organisms is the covalent bonding of epichlorohydrin to the cellular genetic material, DNA. In view of the virtual irreversibility of these reactions, the degree of genetic damage may be expected to increase with exposure time in these lower organisms as well as in higher forms.
Reports of experimental attempts to induce the formation of terata by exposing pregnant female animals to epichlorohydrin have not been found.

Sampling was conducted with charcoal tubes using a calibrated, battery-operated pump. Quintuplicate samples were taken for each job classification at a rate of 2 liters/minute for 7 hours. Epichlorohydrin and chlorinated hydrocarbons were extracted with 30 ml of carbon disulfide.
Epichlorohydrin recovery from the charcoal, after sampling, was found to be 65%. Analysis was performed with a hydrogen flame-ionization gas chromatograph equipped with a 12-foot x 1/8-inch column packed with 15% Oronite NiW on Gas Chrom CLA (60-80 mesh). In 1973, in the epoxy resin unit, one air sample taken in the breathing zone of each of four employees (three operators and one helper) showed the average epichlorohydrin concentration to be 0.03 ppm.
In the same year, 78 grab samples indicated that epichlorohydrin concentrations ranged from less than 0.60 ppm to 13 ppm, with an average of 3.17 ppm. It should be noted that this "average" was a numerical average of the individual samples, not a TWA concentration. The results of air monitoring done in 1974 are presented in Table IV (from reference 2).
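The distinction drawn here between a numerical average of grab samples and a TWA concentration can be illustrated with a minimal sketch. The grab-sample values and durations below are hypothetical, chosen only to show how the two figures diverge when brief high readings are averaged without time weighting.

```python
# Contrast a simple numerical average of grab samples with an 8-hour TWA.
# All values are hypothetical and for illustration only.

grab_samples_ppm = [13.0, 2.0, 0.6]   # instantaneous readings
durations_hr = [0.5, 3.5, 4.0]        # hypothetical time spent at each level

numerical_average = sum(grab_samples_ppm) / len(grab_samples_ppm)

# Time-weighted average over an 8-hour shift:
twa = sum(c * t for c, t in zip(grab_samples_ppm, durations_hr)) / 8.0

print(f"numerical average: {numerical_average:.2f} ppm")  # 5.20 ppm
print(f"8-hour TWA:        {twa:.2f} ppm")                # 1.99 ppm
```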
Pet'ko et al reported the results of environmental monitoring done in epichlorohydrin and dichlorohydrin-glycerine production units in Russia. The sampling technique employed was not specified, and data on collection efficiency were not given.

However, sampling with silica gel in high humidity may result in considerable sample loss from the displacement of the organic vapors by water vapor. Whitman and Johnston reported that this problem could be overcome with a molecular sieve prefilter. They used a 5-Angstrom molecular sieve to remove water vapor from gas streams without interfering with the passage of aromatic hydrocarbons. White et al have described the design of activated charcoal tubes suitable for sampling, and such tubes are commercially available.

Adsorption on activated charcoal is the preferred sampling method for epichlorohydrin alone for several reasons: epichlorohydrin is not displaced by water vapor as it is from silica gel; the procedure is simpler and more convenient than the use of plastic bags or of a bubbler; it uses a small, portable sampling device; and the difficulties associated with handling liquids are eliminated. Disadvantages are that the amount of sample that can be collected is limited by the number of milligrams that the charcoal tube will hold before overloading, and that the more volatile compounds can migrate to the backup section during storage before analysis.
Various methods used for analyzing the collected samples have included colorimetry, infrared analysis, and gas chromatography. Colorimetric analysis suitable for analyzing epichlorohydrin in an aqueous solution has been used.
The colorimetric method is based on the oxidation of epichlorohydrin in aqueous solution by periodic acid. Epichlorohydrin in water is hydrolyzed to a glycol which is then oxidized to formaldehyde. Epichlorohydrin solutions of known concentrations are prepared to obtain the standard curve.
The formaldehyde then reacts with sodium arsenite and acetylacetone reagent to form a yellow complex. The acetylacetone reagent solution is prepared by mixing ammonium acetate, acetylacetone, and glacial acetic acid in water. Maximum optical density of the yellow complex occurs at about 412 nm. This method was capable of determining as little as 20 µg of epichlorohydrin. Data on the efficiency of the analytical technique were not given.
Formaldehyde, or any substance that may yield formaldehyde under the test conditions, will interfere. Compounds that contain or would form vicinal terminal hydroxy groups, such as ethylene glycol and ethylene oxide, will also interfere.
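The standard-curve step of this colorimetric method amounts to a simple linear calibration of optical density at about 412 nm against known masses of epichlorohydrin. The sketch below fits such a curve by least squares and inverts it for an unknown; the calibration points are hypothetical, since a real curve would come from the prepared standard solutions.

```python
# Fit a linear standard curve (absorbance at ~412 nm vs. micrograms of
# epichlorohydrin) and read an unknown from it. Calibration points are
# hypothetical.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return m, mean_y - m * mean_x

standards_ug = [20, 50, 100, 200]       # micrograms of epichlorohydrin
absorbances = [0.05, 0.13, 0.25, 0.51]  # optical density at 412 nm

m, b = fit_line(standards_ug, absorbances)

unknown_abs = 0.20
print(f"estimated mass: {(unknown_abs - b) / m:.0f} ug")
```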
A colorimetric method suitable for analyzing epichlorohydrin collected in sulfuric acid also has been used.
Air containing epichlorohydrin was drawn through two bubblers containing 3 ml of 20% sulfuric acid and 4 ml of 10% sulfuric acid, respectively. The content of the first bubbler was diluted 1:1 with water. A 3-ml sample was then oxidized with 0.5 ml of 3% potassium iodate solution and allowed to stand for 30 minutes, during which time a yellow color developed. A 10% sodium sulfite solution was added and the mixture was shaken until the color disappeared.
One milliliter of Schiff's reagent was then added and the intensity of the resulting color (magenta) was measured 1 hour later. The initial reactions occurring were similar to those in the previously discussed colorimetric method.
Epichlorohydrin is hydrolyzed by sulfuric acid and the resulting glycol is oxidized to formaldehyde by the iodate. The formaldehyde then reacts with Schiff's reagent to form a magenta complex.
Sodium sulfite is added to reduce the unreacted iodate.
It was found that this method was capable of analyzing 0.01-0.1 mg epichlorohydrin in a 6-ml solution. Data on sensitivity, specificity, and interferences were not reported. However, the same compounds that would interfere with the previously discussed colorimetric technique, such as ethylene oxide and ethylene glycol, would also interfere with this method. In addition, many aldehydes would interfere.
The formaldehyde can also react with phenylhydrazine hydrochloride to form a phenylhydrazone.
The phenylhydrazone of formaldehyde reacts with potassium ferricyanide to form a colored complex. The maximum optical density of this complex occurs at about 500 nm. Epichlorohydrin concentrations ranging from 0.45 to 14 mg/cu m in air were determined by this method. Precision tests indicated the maximum error between duplicate determinations to be only 0.3%.
The infrared absorption spectrum of epichlorohydrin showed the typical terminal epoxide absorption bands. The minimum amount of epichlorohydrin that was detected by infrared absorption was 0.3% (v/v in aqueous solution). However, a practical and detailed technique using infrared analysis for a quantitative determination of epichlorohydrin has not been developed.
In recent years, gas chromatography has become the method of choice of most investigators for the separation and analysis of organic materials.
It offers excellent specificity and sensitivity and is suitable for analyzing samples of airborne contaminants collected on charcoal.
Interferences are few, and most of those which do occur can be eliminated by altering the instrumental conditions. Muganlinsky et al developed a linear temperature program to analyze epichlorohydrin in the presence of chlorinated hydrocarbons which may be present as impurities.
The recommended methods for sampling and analyzing epichlorohydrin are collection by charcoal tube and analysis by gas chromatography.
Sampling involves the collection of personal samples on charcoal tubes, and analysis involves desorption with carbon disulfide and measurement with a gas chromatograph equipped with a suitable detector.

The ACGIH TLV for epichlorohydrin carries a "Skin" notation. Such a notation refers to the potential contribution to overall exposure by the cutaneous route, including mucous membranes and eyes, either by airborne or, more particularly, by direct skin contact with liquid epichlorohydrin. This designation is intended to indicate the need for appropriate measures for the prevention of cutaneous absorption so that the TLV is not invalidated.
The present federal standard (29 CFR 1910.1000) for occupational exposure to epichlorohydrin is 5 ppm as an 8-hour TWA limit. This standard was based on the ACGIH TLV.
In Russia and in France, the maximum allowable concentration (MAC) in the industrial environment is 1 mg/cu m (0.26 ppm). In the Rumanian Socialist Republic, 10 mg/cu m (approximately 2.6 ppm) of epichlorohydrin is the maximum concentration allowed in the occupational environment. In the Federal Republic of Germany, the permissible environmental exposure limit is 18 mg/cu m (approximately 3.6 ppm), and 5 mg/cu m (approximately 1 ppm) in the German Democratic Republic. The bases for these standards are not given. Air and water quality standards have not been found, although one recommendation of such a value (0.2 mg/cu m) has been presented.
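The conversions between mg/cu m and ppm quoted throughout this section follow from the molar volume of an ideal gas. The sketch below assumes 25 C and 1 atm (molar volume 24.45 liters/mol) and a molecular weight of 92.52 g/mol for epichlorohydrin; it reproduces the approximate equivalences cited above, such as 1 mg/cu m being about 0.26 ppm and 5 ppm being about 19 mg/cu m.

```python
# Convert between mg/cu m and ppm for a vapor, assuming 25 C and 1 atm.
# The molar volume and molecular weight below are standard values used
# for illustration.

MOLAR_VOLUME_L = 24.45  # liters/mol at 25 C, 1 atm
MW = 92.52              # g/mol, epichlorohydrin

def mg_per_m3_to_ppm(mg_m3: float) -> float:
    return mg_m3 * MOLAR_VOLUME_L / MW

def ppm_to_mg_per_m3(ppm: float) -> float:
    return ppm * MW / MOLAR_VOLUME_L

print(f"1 mg/cu m = {mg_per_m3_to_ppm(1.0):.2f} ppm")      # ~0.26 ppm
print(f"5 ppm     = {ppm_to_mg_per_m3(5.0):.1f} mg/cu m")  # ~18.9 mg/cu m
```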
Basis for the Recommended Standard
The induction of severe necrotic lesions after dermal contact with epichlorohydrin has been reported. Lung edema and kidney lesions were reported in humans exposed to epichlorohydrin at concentrations greater than 100 ppm for very short periods. Changes in the voltage of the peaks of the alpha rhythm of EEG measurements of volunteers exposed to epichlorohydrin at 0.08 ppm for a few minutes were reported by Fomin.
Since the significance of the changes observed in this component of the EEG measurements is not known, further research needs to be conducted in this area before any interpretations can be made. Acute exposure to epichlorohydrin at a high, but unknown, concentration caused irritation of the eyes and throat, nausea, and dyspnea; bronchitis with bronchiolar constrictions and an enlarged liver were suspected to have resulted from this single overexposure. These findings suggest that a ceiling limit is required. Since acute effects have been observed in humans at 20 ppm, and the lowest concentration which induces effects during a short interval has not been identified, a ceiling concentration of 19 mg/cu m (5 ppm), based on professional judgment, is recommended to protect even the more sensitive fraction of the working population from these adverse effects.
The existing federal standard of 5 ppm is based on the 1968 TLV.
Further information on cumulative aspects of toxicity, such as sterility, carcinogenesis, and mutagenesis, has been reported in subsequent studies.
The minimal concentration which has been observed to induce effects in rats (about 5 ppm) is used to approximate the permissible exposure in the occupational environment by weighing additional risk factors. The chronic effects for which risk must be considered are carcinogenesis, mutagenesis, and antifertility effects, as well as liver, kidney, and lung damage.
Van Duuren et al found that a single application of 2.0 mg of epichlorohydrin on mouse skin initiated the tumorigenic process in at least 9 of the 30 experimental mice when it was followed by triweekly applications of the promoter phorbol myristate acetate.

The total risk to the health of employees occupationally exposed to epichlorohydrin is the result of the independent risks due to carcinogenesis, mutagenesis, sterility, and damage to the kidneys, liver, respiratory tract, and skin. At present, evidence for the existence of the risks, other than those to the skin, depends primarily on data from experimental animal models. Concern for employee health requires that the probability of the occurrence of chronic effects be minimized. NIOSH recommends, therefore, that worker exposure to epichlorohydrin be limited to a concentration of 2 mg/cu m (0.5 ppm) as a TWA concentration. This value has been chosen on the basis of professional judgment, rather than on quantitative data which clearly distinguish no-effect concentrations from those at which adverse effects have been shown to occur in human populations.
A TWA concentration of 2 mg/cu m of epichlorohydrin should protect the employee against injury to organs during the individual's working lifetime, according to existing information. However, additional research is needed to provide support for the recommended environmental limit or to indicate the need for a different limit. The environmental limit implicitly assumes that the absorbed epichlorohydrin molecules will be detoxified by biochemical mechanisms, thereby reducing the risk of induction of human disease resulting from the cumulative toxicity of epichlorohydrin.
It is recognized that many employees handle small amounts of epichlorohydrin or work in situations where, regardless of the amount used, there is only negligible contact with epichlorohydrin.
Under these conditions, it should not be necessary to comply with all of the provisions of the recommended standard, which has been prepared primarily to protect employee health under all circumstances. For these reasons, "occupational exposure to epichlorohydrin" is defined as exposure above one-half the TWA environmental limit, thereby delineating those work situations which do not require the expenditure of resources for environmental monitoring and associated recordkeeping. Because of nonrespiratory hazards, such as skin burns produced by contact with epichlorohydrin, NIOSH recommends that appropriate work practices and protective measures to limit such contact be required regardless of the concentration of airborne epichlorohydrin. Further, the observation of changes in the concentration of biochemical constituents following human and animal exposure to epichlorohydrin suggests that the health of exposed workers be monitored frequently. Thus, it is recommended that comprehensive medical examinations be offered to all employees subject to occupational exposure to epichlorohydrin and that the responsible physician consider the advisability of also administering liver and kidney function tests.
# VI. WORK PRACTICES
Work practices must be designed to minimize or prevent inhalation of epichlorohydrin and to keep it from contacting the skin and eyes. Good work practices are a primary means of controlling certain exposures and will often supplement other control measures.
Enclosure of materials, processes, and operations is completely effective as a control only when the integrity of the system is maintained.
Such systems should be inspected frequently for leaks, and any leaks found should be promptly repaired. Special attention should be given to the condition of seals and joints, access ports, and other such places. Similarly, points of wear should be inspected regularly for damage.

Because epichlorohydrin can be absorbed through the skin, clothing contaminated with epichlorohydrin must be removed immediately and thoroughly laundered before reuse. Shoes on which epichlorohydrin is spilled are to be rendered unusable and discarded. Protective clothing must be made of material not permeable to epichlorohydrin. Penetration through three types of rubber has been measured and found to be 9-11 minutes for nitrile rubber, 20-22 minutes for neoprene rubber, and 38-43 minutes for natural rubber.
Since the penetration time is dependent on both the type of the rubber and the thickness, it is noteworthy that in this test the thickness for each type of rubber was: 0.015 inches for nitrile rubber, 0.02 inches for neoprene rubber, and 0.030 inches for natural rubber.
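Because the three rubbers were tested at different thicknesses, a rough way to compare them is to normalize the reported breakthrough times by thickness. The sketch below does this with the midpoints of the quoted ranges; permeation is not strictly linear in thickness, so the output should be read only as a coarse screen, not as a material ranking from the original test.

```python
# Rough thickness-normalized comparison of the reported breakthrough
# times. Uses midpoints of the ranges quoted in the text; normalizing
# by thickness is a simplification, so treat the output as a rough
# screen only.

gloves = {
    # material: (breakthrough range in minutes, thickness in inches)
    "nitrile":  ((9, 11), 0.015),
    "neoprene": ((20, 22), 0.020),
    "natural":  ((38, 43), 0.030),
}

for material, ((lo, hi), thickness) in gloves.items():
    midpoint = (lo + hi) / 2
    per_mil = midpoint / (thickness * 1000)  # minutes per mil of thickness
    print(f"{material:8s} {midpoint:4.1f} min at {thickness:.3f} in "
          f"-> {per_mil:.2f} min/mil")
```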
# Mutagenic Effect
This effect must be systematically investigated in greater depth with respect to dose, time, and route in both lower organisms and mammals.
Animal tests using various doses, schedules, and routes of administration should be performed to see whether epichlorohydrin is a mutagen in mammals.
Specific locus tests or heritable translocations should be considered.
Animals should also be tested to see whether epichlorohydrin has any cytogenetic effects.
# Kidney Function in Workers Exposed to Epichlorohydrin
The impairment of kidney function as a result of epichlorohydrin exposure has been found in animals. As yet, there is no evidence that such injury also occurs in workers exposed to epichlorohydrin. Since a segment of a working population which is exposed primarily to epichlorohydrin can be identified, kidney function tests should be given periodically to determine whether any changes in kidney function are occurring as a result of occupational exposure to epichlorohydrin.
# Skin Sensitization
Although epichlorohydrin is commonly stated to be a sensitizer of the skin, the data that have been found in this regard are far from complete or persuasive. Additional information on the degree and character of sensitization of the skin of humans is highly desirable. Some measure of variability in the skin response would be most useful.
# Electroencephalographic (EEG) Studies
On the basis of the available information, the most sensitive index of the effects of epichlorohydrin on humans is a change in the voltage of the alpha rhythm of the EEG.
More information on the dose-response relationship for this effect and on its correlation with more usual alterations of function would be of great value.

Collect enough samples to permit calculation of a TWA exposure for every operation or location in which there is exposure to epichlorohydrin.
# (a) Equipment
The sampling train consists of a charcoal tube and a vacuum pump.
(1) Charcoal tubes: Glass tubes, with both ends flamesealed, 7-cm long with a 6-mm OD and a 4-mm ID, containing two sections of 20/40 mesh activated charcoal separated by a 2-mm portion of polyurethane foam. The primary section contains 100 mg of charcoal, the backup section, 50 mg. A 3-mm portion of polyurethane foam is placed between the outlet end of the tube and the backup section. A plug of glass wool is placed in front of the primary section. Tubes with the above specifications are commercially available.
(2) Break the tips of a charcoal tube to produce openings of at least 2 mm in diameter.
Repeat the procedure in (7) above at least three times, average the results, and calculate the flowrate by dividing the volume between the preselected marks by the time required for the soap bubble to traverse the distance. If, for the pump being calibrated, the volume of air sampled is calculated as the product of the number of strokes times a stroke factor (given in units of volume/stroke), the stroke factor is the quotient of the volume between the two preselected marks divided by the number of strokes.
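The calibration arithmetic described here is straightforward. The sketch below averages hypothetical soap-bubble traverse times to obtain a flowrate and computes a stroke factor for a stroke-type pump; the buret volume, run times, and stroke count are all assumed values.

```python
# Soap-bubble flowmeter calibration: average several timed runs, then
# flowrate = calibrated volume / mean traverse time. The run times,
# buret volume, and stroke count below are hypothetical.

buret_volume_ml = 1000.0                 # volume between the preselected marks
traverse_times_sec = [29.8, 30.1, 30.3]  # three timed runs

mean_time_min = (sum(traverse_times_sec) / len(traverse_times_sec)) / 60.0
flowrate_ml_min = buret_volume_ml / mean_time_min
print(f"flowrate: {flowrate_ml_min:.0f} ml/minute")  # ~2 liters/minute

# For a stroke-type pump, the stroke factor is volume per stroke:
strokes = 25
print(f"stroke factor: {buret_volume_ml / strokes:.1f} ml/stroke")
```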
# Determination of Desorption Efficiency
The desorption efficiency of a particular compound can vary from one laboratory to another and also from one batch of charcoal to another.

Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature.
Where possible, avoid using common names and general class names such as "aromatic amine,"
"safety solvent," or "aliphatic hydrocarbon" when the specific name is known.
The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt" to avoid disclosure of trade secrets.
Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, e.g., "100 ppm LC50-rat" or "25 mg/kg LD50-". The "Health Hazard Data" should be a combined estimate of the hazard of the total product. This can be expressed as a TWA concentration, as a permissible exposure, or by some other indication of an acceptable standard. Other data are acceptable, such as lowest LD50, if multiple components are involved.
Under "Routes of Exposure," comments in each category should reflect the potential hazard from absorption by the route in question. Comments should indicate the severity of the effect and the basis for the statement, if possible. The basis might be animal studies, analogy with similar products, or human experiences. Comments such as "yes" or "possible" are not helpful. Typical comments might be:
Skin Contact-causes delayed burns.
Eye Contact-some pain and transient irritation.
"Emergency and First Aid Procedures" should be written in lay language and should primarily represent first-aid treatment that could be provided by paramedical personnel or individuals trained in first aid.
Information in the "Notes to Physician" section should include any special medical information which would be of assistance to an attending physician, including required or recommended preplacement and periodic medical examinations, diagnostic procedures, and medical management of overexposed employees. Respirators shall be specified as to type and NIOSH or US Bureau of Mines approval class, i.e., "Supplied air," "Organic vapor canister," etc.
Protective equipment must be specified as to type and materials of construction.
The coefficients of variation for the sampling and the analytical procedure were 7.21 and 5.33%, respectively. Under these conditions, the desorption efficiency was estimated to be 82.7%.
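Correcting an analytical result for desorption efficiency is a one-line division, after which the airborne concentration follows from the sampled air volume. The sketch below uses the 82.7% efficiency quoted above together with hypothetical values for the measured mass, pump flowrate, and sampling duration.

```python
# Correct a measured mass for desorption efficiency, then compute the
# airborne concentration. The measured mass, flowrate, and duration are
# hypothetical; 82.7% is the desorption efficiency quoted in the text.

desorption_efficiency = 0.827
measured_mass_mg = 0.45     # mass found on the charcoal, mg
flowrate_l_min = 2.0        # pump flowrate, liters/minute
duration_min = 7 * 60       # 7-hour sample

corrected_mass_mg = measured_mass_mg / desorption_efficiency
air_volume_m3 = flowrate_l_min * duration_min / 1000.0  # liters -> cu m

conc_mg_m3 = corrected_mass_mg / air_volume_m3
print(f"concentration: {conc_mg_m3:.2f} mg/cu m")
```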
# Apparatus

(a) Gas chromatograph equipped with a flame ionization detector, operated under the following conditions:

(1) 20 ml/minute helium flowrate.
(2) 200 C injector temperature.
(3) 230 C manifold temperature (detector).
(4) 135 C isothermal oven or column temperature.
# Analysis of Samples
# MATERIAL SAFETY DATA SHEET
# PRODUCT IDENTIFICATION

MANUFACTURER'S NAME
REGULAR TELEPHONE NO.
EMERGENCY TELEPHONE NO.
# CRITERIA DOCUMENT:
# RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR BERYLLIUM
(1) Spirometry, including FVC and FEV1.0.
(2) A medical history questionnaire that includes presence and degree of respiratory symptoms, i.e., breathlessness, cough, sputum production, and wheezing.
(3) A 14" by 17" chest X-ray.

(2) Workers with Minimum Beryllium Exposure
(i) The employer shall provide laboratory coats or equivalent protective clothing to each such employee who works with or is exposed to beryllium, beryllium compounds, or beryllium-containing products.
(ii) Clean protective clothing shall be supplied to each worker at least weekly.

(2) Showers for exposed workers shall be required following a work shift and prior to putting on street clothes.
(3) Locker-shower facilities shall be so arranged that the showers can serve to demarcate between potentially "clean" and "contaminated" areas.
(4) Suitable provisions shall be made for the control of contaminated dust in workshoe storage and clothing hamper locations.
(5) Handwashing and toilet facilities shall be arranged so that following use, workers need not re-enter a potentially contaminated area. In 1936 G e l m a n^ described illnesses among workers in beryllium plants in Moscow. He described two phases of the clinical symptoms caused *Much confusion has resulted from use of this term because of its similarity in spelling relating to beryl ore (beryllosls) and to use of the term in describing both acute and chronic manifestations of beryllium related disorders. Accordingly, the terms "beryllium disease" and "beryllium poisoning" will be used in this report.
IV-1 by exposure to vapors of beryllium oxyfluoride: the first phase, beryllium fever, was not unlike that caused by exposure to zinc oxide fumes; the second phase was described as an extensive alveobronchiolitis. Two years later he described effects on the skin and X-ray changes in the lungs. oxides have been productive of osteogenic sarcomas in rabbits following intravenous administration and primary lung tumors in rats and monkeys following inhalation, there is no evidence that community or industrial exposure to beryllium compounds is associated with an increase in the incidence of cancer in humans."
In 1970 Mancuso, in an epidemiological study based upon mortality of beryllium workers, found an equal or higher rate of cancer of the lung in employees employed for a short duration (3 to 15 months) as contrasted to those employed for a longer period (18 months or longer). A higher mortality rate was also suggested among the short-duration employees of one plant, but this was not supported by results from a second plant.

Based on the degree of interstitial cellular infiltration, they found that 80 percent of the cases studied showed moderate to marked infiltration, whereas in the remainder cellular infiltration was only slight or absent. Clearly defined granulomatous lesions were not always present. The group with prominent interstitial cellular infiltration could be subgrouped into two groups in which granuloma formation was either well formed or else poorly formed to absent. Where cellular infiltration was slight or absent, granulomas were numerous and well formed.
Dudley in 1959 reported that the occurrence of granulomatous lesions often tended to draw attention away from the more fundamental diffuse interstitial infiltration, of which the granulomas were only a part. This same reaction was also seen in skin, liver, kidney, lymph nodes, and skeletal muscle. Cardiac muscle, spleen, and pleura also could be involved.
Freiman and Hardy suggest a distinct relation between the intensity of interstitial cellular infiltration and the forecast of disease severity, with the possibility that the degree of granuloma formation may play a significant secondary role. Calcific inclusions were also commonly present,
being observed in about two-thirds of the lungs from patients with chronic disease.
In addition, increased tissue levels of beryllium were noted in most cases.
Although most cases of beryllium disease can be recognized by pathologic changes, these observations are not specific for the disease.
Pulmonary sarcoidosis, "farmers' lung," fungus diseases, and various pneumoconioses are but a few disorders which also produce a similar pathologic picture. Differentiation between sarcoidosis and chronic beryllium disease is the most difficult. Similarities and differences between the two disorders have been presented for diagnostic purposes (44,49).

(c) Chronic Effects - Treatment: Early treatment of chronic beryllium disease was purely symptomatic, with oxygen providing great relief in cases where impaired ventilation was noted. Antibiotics were of value only to treat secondary infections. Long periods of bedrest were employed.
Patients were occasionally transferred to warm or dry climates to provide temporary relief, but no detectable change in the course of the disease was noted (30).

Sarcoidosis presents the most troublesome problem in the differential diagnosis of chronic beryllium disease. Hardy and Freiman, in presenting beryllium disease as a continuing diagnostic problem, listed specific differences between beryllium disease and sarcoidosis. These differences include weight loss and severe loss of appetite, rarely seen in sarcoidosis; the presence in histopathologic sections of intense cellular infiltration with nodular lesions and abundant calcific inclusions; the absence of many characteristic localization patterns frequently seen in sarcoidosis; and the occurrence of significant amounts of beryllium in the tissues of many patients.
(i) X-ray changes were reviewed by Gary and Schatzki in a study of all available X-rays in the Beryllium Registry, and they concluded that there was not an orderly sequence of lung involvement, as had been previously postulated (22,58), but rather definite reaction types which persisted unchanged for many years. Disease which began with nodular manifestations stayed nodular. Further, they stressed that, because of the several types of response seen on films, it appeared impossible to make a differential diagnosis by radiological means alone.
In 1970, Weber, Stoeckle, and Hardy, in a study of 8 cases observed for up to 18 years, reported that X-ray changes, most frequently involving the upper lobes, consisted of granular, nodular, and linear densities occurring singly and in combined forms. Mixed patterns of granular and nodular densities were most commonly seen. Persistence of granular densities alone was rarely observed. Small and scattered linear densities often developed and, in advanced cases, were very marked and associated with emphysema. Fibrotic changes confined to the lower lobes were rarely seen.
With fibrotic and emphysematous changes, granular and nodular densities diminished to a point where X-ray diagnosis was not indicated.
Chamberlin stated that clinical and X-ray findings alone established only a presumptive diagnosis of beryllium disease. Although X-ray findings are not specific, the appearance of a known pattern of beryllium disease on a chest film should immediately alert the physician to the possibility of this diagnosis.
(ii) Tissue sensitization has been reported. Sterner and Eisenbud in 1951 proposed that beryllium acts to produce allergic sensitization in tissues.
In 1959, the patch test developed by Curtis was claimed to give favorable differentiation between chronic beryllium disease and other pulmonary diseases. The positive patch test did not serve as an absolute diagnostic sign for chronic beryllium disease; in fact, in cases where differential diagnosis has been difficult, the patch test has not been very helpful. Jett has shown an immunologic basis for chronic beryllium disease, and a hypersensitivity phenomenon has been demonstrated.
Since the skin patch test can induce a hypersensitive state in persons who have never been exposed to beryllium, its use in differential diagnosis is generally discouraged, and the test is best avoided in screening persons who are to be exposed to beryllium.
(iii) Tissue and fluid analysis for beryllium is used to establish previous exposure to beryllium. The detection of beryllium in tissue or urine is evidence of exposure to beryllium only, not necessarily of the presence of beryllium disease. Lung biopsy has been recommended as a means for positive beryllium identification. Frequently, negative findings of beryllium in the lungs of persons having known exposure to the material are due to inadequate sample quantities; at least a 5-gm specimen is recommended for lung tissue. The chances of positive analysis are accordingly reduced for other organs, since they contain much less material than the lung. Beryllium is not generally considered to be a natural environmental contaminant; however, Cholak tabulated the presence of extremely small quantities in soil, coal, and air. The more rarely produced osteogenic sarcoma and rickets are peculiar to experimental animals, although the granuloma of the skin with ulcer occurs in man as well.
Beryllium "rickets" is likewise a condition not seen in man, probably because the rickets were produced in young animals on diets with substantial amounts of BeCO3 (0.5 and 2%), conditions not met in the human situation.
A toxic macrocytic anemia, reported to result in animals exposed by inhalation to beryllium compounds and substantiated by demonstrated interference in both heme and globin synthesis, seems not to have been noted in individuals with beryllium disease.
Finally, the morphologic changes in the lungs of animals with alveolar metaplasia do not resemble in all respects those seen in man. It is apparent that exposure levels were probably much higher than reported.
It is interesting to note that the estimated duration of exposure ranged from 6 weeks to 9-1/2 years for the cases listed in Table V. This would seem to indicate that for development of chronic beryllium disease, comparatively short time intervals are all that are necessary at the relatively low levels believed to be found in industry since 1949.
The Director of the Registry also indicates that the incidence of confirmed chronic beryllium disease is continuing and that at least three new cases will be admitted to the Beryllium Case Registry in 1972.
Information from the Beryllium Registry has been valuable in (1) criteria selection for diagnosis of beryllium poisoning, (2) judgment of effectiveness of controls, and (3) evaluation of the clinical course of beryllium disease and response to therapy. Hardy points out, however, that there are three important deficiencies in the Registry: (1) lack of knowledge of the size of the population at risk; (2) incomplete data describing the amount of beryllium exposure; and (3) failure to learn of all cases of the disease in a beryllium-using industry.
# Neighborhood Cases
The term "neighborhood case" has been applied to a patient in which It has yet to be definitely established whether ambient air contamina tion alone, at a distance from a plant, can cause chronic beryllium disease.
# Correlation of Exposure and Effects
# Calculations
The percent absorption reading is converted to absorbance. The standard curve is used to obtain the beryllium concentration, in µg/ml, corresponding to the absorbance value. This µg Be/ml value is multiplied by the sample aliquot volume to determine the total beryllium in the sample.
# Calculation of Beryllium Concentration
Total µg Be / m3 of air sampled = µg Be/m3 of ambient air
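The calculation just described can be laid out in a few lines. In the sketch below, the µg Be/ml value read from the standard curve, the aliquot volume, and the air volume sampled are all hypothetical; only the sequence of operations follows the procedure above.

```python
# Beryllium air concentration following the calculation described above:
# read ug Be/ml from the standard curve, multiply by the aliquot volume
# for total ug Be, then divide by cubic meters of air sampled. All
# numeric values below are hypothetical.

ug_be_per_ml = 0.04    # from the standard curve, for the measured absorbance
aliquot_ml = 10.0      # volume of the sample solution
air_sampled_m3 = 0.72  # e.g., 2 liters/minute for 6 hours

total_ug_be = ug_be_per_ml * aliquot_ml
conc = total_ug_be / air_sampled_m3
print(f"{conc:.2f} ug Be/cu m of ambient air")
```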
# Effect of Storage
Samples and standards can be stored indefinitely without loss of beryllium as long as the pH of solutions is maintained at less than 2.
# XI. APPENDIX III MATERIAL SAFETY DATA SHEET
The following items of information which are applicable to a specific product or material containing beryllium shall be provided in the appropriate section of the Material Safety Data Sheet or approved form. If a specific item of information is inapplicable (e.g., flash point), the initials "n.a." for not applicable shall be inserted.
"id": "cb87d4795487946086e4601cfd856abad06c0751",
"source": "cdc",
"title": "None",
"url": "None"
} |
# Introduction
In the United States, annual epidemics of influenza typically occur from late fall through early spring.
The material in this report originated in the National Center for Immunization and Respiratory Diseases, Anne Schuchat, MD, Director; the Influenza Division, Nancy Cox, PhD, Director; the Office of the Chief Science Officer, Tanja Popovic, MD, Chief Science Officer; the Immunization Safety Office, John Iskander, MD, Acting Director; and the Immunization Services Division, Lance Rodewald, MD, Director. Corresponding preparer: Anthony Fiore, MD, Influenza Division, National Center for Immunization and Respiratory Diseases, CDC, 1600 Clifton Road, NE, MS A-20, Atlanta, GA 30333. Telephone: 404-639-3747; Fax: 404-639-3866; E-mail: [email protected].

Influenza viruses can cause disease among persons in any age group, but rates of infection are highest among children (1-3). Rates of serious illness and death are highest among persons aged >65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (1,4,5). An annual average of approximately 36,000 deaths during 1990-1999 and 226,000 hospitalizations during 1979-2001 have been associated with influenza epidemics (6,7).
Annual influenza vaccination is the most effective method for preventing influenza virus infection and its complications. Influenza vaccine can be administered to any person aged >6 months (who does not have contraindications to vaccination) to reduce the likelihood of becoming ill with influenza or of transmitting influenza to others. Trivalent inactivated influenza vaccine (TIV) can be used for any person aged >6 months, including those with high-risk conditions (Boxes 1 and 2). Live, attenuated influenza vaccine (LAIV) may be used for healthy, nonpregnant persons aged 2-49 years. If vaccine supply is limited, priority for vaccination is typically assigned to persons in specific groups and of specific ages who are, or are contacts of, persons at higher risk for influenza complications. Because the safety or effectiveness of LAIV has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications, these persons should only be vaccinated with TIV. Influenza viruses undergo frequent antigenic change (i.e., antigenic drift), and persons recommended for vaccination must receive an annual vaccination against the influenza viruses forecasted to be in circulation. Although vaccination coverage has increased
# BOX 1. Summary of influenza vaccination recommendations, 2008: children and adolescents

Children and adolescents at high risk for influenza complications should continue to be a focus of vaccination efforts as providers and programs transition to routinely vaccinating all children and adolescents. Recommendations for these children have not changed.

Children and adolescents at higher risk for influenza complications are those:
- aged 6 months-4 years;
- who have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, hematological or metabolic disorders (including diabetes mellitus);
- who are immunosuppressed (including immunosuppression caused by medications or by human immunodeficiency virus);
- who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration;
- who are receiving long-term aspirin therapy and therefore might be at risk for experiencing Reye syndrome after influenza virus infection;
- who are residents of chronic-care facilities; and
- who will be pregnant during the influenza season.
Note: Children aged <6 months should not receive influenza vaccination. Household and other close contacts (e.g., daycare providers) of children aged <6 months, including older children and adolescents, should be vaccinated.
# BOX 2. Summary of influenza vaccination recommendations, 2008: adults
Annual recommendations for adults have not changed. Annual vaccination against influenza is recommended for any adult who wants to reduce the risk for becoming ill with influenza or of transmitting it to others. Vaccination also is recommended for all adults in the following groups, because these persons are either at high risk for influenza complications or are close contacts of persons at higher risk:
- persons aged >50 years;
- women who will be pregnant during the influenza season;
- persons who have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, hematological or metabolic disorders (including diabetes mellitus);
- persons who have immunosuppression (including immunosuppression caused by medications or by human immunodeficiency virus);
- persons who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration;
- residents of nursing homes and other chronic-care facilities;
- health-care personnel;
- household contacts and caregivers of children aged <5 years and adults aged >50 years, with particular emphasis on vaccinating contacts of children aged <6 months; and
- household contacts and caregivers of persons with medical conditions that put them at high risk for severe complications from influenza.
in recent years for many groups targeted for routine vaccination, coverage remains low among most of these groups, and strategies to improve vaccination coverage, including use of reminder/recall systems and standing orders programs, should be implemented or expanded. Antiviral medications are an adjunct to vaccination and are effective when administered as treatment and when used for chemoprophylaxis after an exposure to influenza virus. Oseltamivir and zanamivir are the only antiviral medications recommended for use in the United States. Amantadine or rimantadine should not be used for the treatment or prevention of influenza in the United States until evidence of susceptibility to these antiviral medications has been reestablished among circulating influenza A viruses.
# Methods
CDC's Advisory Committee on Immunization Practices (ACIP) provides annual recommendations for the prevention and control of influenza. The ACIP Influenza Vaccine Working Group* meets monthly throughout the year to discuss newly published studies, review current guidelines, and consider potential revisions to the recommendations. As they review the annual recommendations for consideration by the full committee, members of the working group consider a variety of issues, including burden of influenza illness; vaccine effectiveness, safety, and coverage in groups recommended for vaccination; feasibility; cost-effectiveness; and anticipated vaccine supply. Working group members also request periodic updates on vaccine and antiviral production, supply, safety, and efficacy from vaccinologists, epidemiologists, and manufacturers. State and local vaccination program representatives are consulted. Influenza surveillance and antiviral resistance data were obtained from CDC's Influenza Division. The Vaccines and Related Biological Products Advisory Committee provides advice on vaccine strain selection to the Food and Drug Administration (FDA), which selects the viral strains to be used in the annual trivalent influenza vaccines.
Published, peer-reviewed studies are the primary source of data used by ACIP in making recommendations for the prevention and control of influenza, but unpublished data that are relevant to issues under discussion also might be considered. Among studies discussed or cited, those of greatest scientific quality and those that measured influenza-specific outcomes are the most influential. For example, population-based estimates that use outcomes associated with laboratory-confirmed influenza virus infection contribute the most specific data for estimates of influenza burden. The best evidence for vaccine or antiviral efficacy and effectiveness comes from randomized controlled trials that assess laboratory-confirmed influenza infections as an outcome measure and consider factors such as timing and intensity of influenza circulation and degree of match between vaccine strains and wild circulating strains (8,9). Randomized, placebo-controlled trials cannot be performed ethically in populations for which vaccination already is recommended, but observational studies that assess outcomes associated with laboratory-confirmed influenza infection can provide important vaccine or antiviral effectiveness data. Randomized, placebo-controlled clinical trials are the best source of vaccine and antiviral safety data for common adverse events; however, such studies do not have the power to identify rare but potentially serious adverse events.

* A list of members appears on page 59 of this report.
The frequency of rare adverse events that might be associated with vaccination or antiviral treatment is best assessed by retrospective reviews of computerized medical records from large linked clinical databases and by reviewing medical charts of persons who are identified as having a potential adverse event after vaccination (10,11). Vaccine coverage data from a nationally representative, randomly selected population that includes verification of vaccination through health-care record review are superior to coverage data derived from limited populations or obtained without verification of vaccination but are rarely available for older children or adults (12). Finally, studies that assess vaccination program practices that improve vaccination coverage are most influential in formulating recommendations if the study design includes a nonintervention comparison group. In cited studies that included statistical comparisons, a difference was considered to be statistically significant if the p-value was <0.05 or the 95% confidence interval (CI) around an estimate of effect allowed rejection of the null hypothesis (i.e., no effect).
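As a minimal illustration of this decision rule (the numbers below are hypothetical and are not drawn from any cited study), consider a study reporting a rate ratio (RR) with its 95% CI:

```latex
% Hypothetical example of the significance criterion described above.
% A rate ratio of 1 corresponds to the null hypothesis of no effect.
\[
\mathrm{RR} = 0.60, \qquad 95\%\ \mathrm{CI} = (0.45,\ 0.80)
\]
% Because the interval excludes 1, the null hypothesis is rejected,
% which is equivalent to observing p < 0.05:
\[
1 \notin (0.45,\ 0.80) \;\Longrightarrow\; p < 0.05
\]
```

Conversely, an interval such as (0.85, 1.30) includes 1, and the corresponding difference would not be treated as statistically significant.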
These recommendations were presented to the full ACIP and approved in February 2008. Modifications were made to the ACIP statement during the subsequent review process at CDC to update and clarify wording in the document. Data presented in this report were current as of July 1, 2008. Further updates, if needed, will be posted at CDC's influenza website.
# Primary Changes and Updates in the Recommendations
The 2008 recommendations include five principal changes or updates:
- Oseltamivir-resistant influenza viruses have been identified in the United States and some other countries. However, oseltamivir or zanamivir continue to be the recommended antivirals for treatment of influenza because other influenza virus strains remain sensitive to oseltamivir, and resistance levels to other antiviral medications remain high.
# Background and Epidemiology
# Biology of Influenza
Influenza A and B are the two types of influenza viruses that cause epidemic human disease. Influenza A viruses are categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have circulated globally. Influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses also have been identified in some influenza seasons. Both influenza A subtypes and B viruses are further separated into groups on the basis of antigenic similarities. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) resulting from point mutations that occur during viral replication (13).
Currently circulating influenza B viruses are separated into two distinct genetic lineages (Yamagata and Victoria) but are not categorized into subtypes. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses. Influenza B viruses from both lineages have circulated in most recent influenza seasons (13).
Immunity to the surface antigens, particularly the hemagglutinin, reduces the likelihood of infection (14). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza virus. Furthermore, antibody to one antigenic type or subtype of influenza virus might not protect against infection with a new antigenic variant of the same type or subtype (15). Frequent emergence of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and is the reason for annually reassessing the need to change one or more of the recommended strains for influenza vaccines.
More dramatic changes, or antigenic shifts, occur less frequently. Antigenic shift occurs when a new subtype of influenza A virus appears and can result in the emergence of a novel influenza A virus with the potential to cause a pandemic. New influenza A subtypes have the potential to cause a pandemic when they are able to cause human illness and demonstrate efficient human-to-human transmission, and there is little or no previously existing immunity among humans (13).
# Clinical Signs and Symptoms of Influenza
Influenza viruses are spread from person to person primarily through large-particle respiratory droplet transmission (e.g., when an infected person coughs or sneezes near a susceptible person) (16). Transmission via large-particle droplets requires close contact between source and recipient persons, because droplets do not remain suspended in the air and generally travel only a short distance. Some persons can shed influenza virus for >10 days after onset of symptoms (26). Severely immunocompromised persons can shed virus for weeks or months (27-30).
Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, malaise, nonproductive cough, sore throat, and rhinitis) (31). Among children, otitis media, nausea, and vomiting also are commonly reported with influenza illness (32,33). Uncomplicated influenza illness typically resolves after 3-7 days for the majority of persons, although cough and malaise can persist for >2 weeks. However, influenza virus infections can cause primary influenza viral pneumonia; exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease); lead to secondary bacterial pneumonia, sinusitis, or otitis media; or contribute to coinfections with other viral or bacterial pathogens (34-36). Young children with influenza virus infection might have initial symptoms mimicking bacterial sepsis with high fevers (35-38), and febrile seizures have been reported in 6%-20% of children hospitalized with influenza virus infection (32,35,39). Population-based studies among hospitalized children with laboratory-confirmed influenza have demonstrated that although the majority of hospitalizations are brief (<2 days), 4%-11% of children hospitalized with laboratory-confirmed influenza required treatment in the intensive care unit, and 3% required mechanical ventilation (35,37). Among 1,308 hospitalized children in one study, 80% were aged <5 years, and 27% were aged <6 months (35). Influenza virus infection also has been uncommonly associated with encephalopathy, transverse myelitis, myositis, myocarditis, pericarditis, and Reye syndrome (32,34,40,41).
Respiratory illnesses caused by influenza virus infection are difficult to distinguish from illnesses caused by other respiratory pathogens on the basis of signs and symptoms alone. Sensitivity and predictive value of clinical definitions vary, depending on the prevalence of other respiratory pathogens and the level of influenza activity (42). Among generally healthy older adolescents and adults living in areas with confirmed influenza virus circulation, estimates of the positive predictive value of a simple clinical definition of influenza (acute onset of cough and fever) for laboratory-confirmed influenza infection have varied (range: 79%-88%) (43,44).
Young children are less likely to report typical influenza symptoms (e.g., fever and cough). In studies conducted among children aged 5-12 years, the positive predictive value of fever and cough together was 71%-83%, compared with 64% among children aged <5 years (45). In one large, population-based surveillance study in which all children with fever or symptoms of acute respiratory tract infection were tested for influenza, 70% of hospitalized children aged <6 months with laboratory-confirmed influenza were reported to have fever and cough, compared with 91% of hospitalized children aged 6 months-5 years. Among children who subsequently were shown to have laboratory-confirmed influenza infections, only 28% of those hospitalized and 17% of those treated as outpatients had a discharge diagnosis of influenza (38).
Clinical definitions have performed poorly in some studies of older patients. A study of nonhospitalized patients aged >60 years indicated that the presence of fever, cough, and acute onset had a positive predictive value of 30% for influenza (46). Among hospitalized patients aged >65 years with chronic cardiopulmonary disease, a combination of fever, cough, and illness of <7 days had a positive predictive value of 53% for confirmed influenza infection (47). In addition, the absence of symptoms of influenza-like illness (ILI) does not effectively rule out influenza; among hospitalized adults with laboratory-confirmed infection in two studies, only 44%-51% had typical ILI symptoms (48,49). A study of vaccinated older persons with chronic lung disease reported that cough was not predictive of laboratory-confirmed influenza virus infection, although having both fever or feverishness and myalgia had a positive predictive value of 41% (50). These results highlight the challenges of identifying influenza illness in the absence of laboratory confirmation and indicate that the diagnosis of influenza should be considered in patients with respiratory symptoms or fever during influenza season.
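The dependence of predictive value on disease prevalence noted above can be made explicit with Bayes' rule. In the sketch below, the sensitivity and specificity values are illustrative assumptions, not estimates from the cited studies:

```latex
% Positive predictive value (PPV) of a clinical case definition with
% sensitivity Se, specificity Sp, and influenza prevalence P among
% patients presenting with acute respiratory illness:
\[
\mathrm{PPV} = \frac{\mathrm{Se}\cdot P}{\mathrm{Se}\cdot P + (1-\mathrm{Sp})\,(1-P)}
\]
% Assumed values: Se = 0.70, Sp = 0.90.
% At peak influenza activity (P = 0.30):
\[
\mathrm{PPV} = \frac{0.70 \times 0.30}{0.70 \times 0.30 + 0.10 \times 0.70}
             = \frac{0.21}{0.28} = 0.75
\]
% Outside the influenza season (P = 0.05):
\[
\mathrm{PPV} = \frac{0.70 \times 0.05}{0.70 \times 0.05 + 0.10 \times 0.95}
             = \frac{0.035}{0.130} \approx 0.27
\]
```

The same case definition thus performs well when influenza viruses are circulating widely and poorly when they are not, which is one reason the estimates cited above vary so much across settings and seasons.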
# Health-Care Use, Hospitalizations, and Deaths Attributed to Influenza
In the United States, annual epidemics of influenza typically occur during the fall or winter months, but the peak of influenza activity can occur as late as April or May (Figure 1). Influenza-related complications requiring urgent medical care, including hospitalizations or deaths, can result from the direct effects of influenza virus infection, from complications associated with age or pregnancy, or from complications of underlying cardiopulmonary conditions or other chronic diseases. Studies that have measured rates of a clinical outcome without a laboratory confirmation of influenza virus infection (e.g., respiratory illness requiring hospitalization during influenza season) to assess the effect of influenza can be difficult to interpret because of circulation of other respiratory pathogens (e.g., respiratory syncytial virus) during the same time as influenza viruses (51-53).
During seasonal influenza epidemics from 1979-1980 through 2000-2001, the estimated annual overall number of influenza-associated hospitalizations in the United States ranged from approximately 55,000 to 431,000 per annual epidemic (mean: 226,000) (7). The estimated annual number of deaths attributed to influenza from the 1990-91 influenza season through 1998-99 ranged from 17,000 to 51,000 per epidemic (mean: 36,000) (6). In the United States, the estimated number of influenza-associated deaths increased during 1990-1999. This increase was attributed in part to the substantial increase in the number of persons aged >65 years who were at increased risk for death from influenza complications (6). In one study, an average of approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990, compared with an average of approximately 36,000 deaths per season during 1990-1999 (6). In addition, influenza A (H3N2) viruses, which have been associated with higher mortality (54), predominated in 90% of influenza seasons during 1990-1999, compared with 57% of seasons during 1976-1990 (6).
Influenza viruses cause disease among persons in all age groups (1-5). Rates of infection are highest among children, but the risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, young children, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (1,4,5,55-58). Estimated rates of influenza-associated hospitalizations and deaths varied substantially by age group in studies conducted during different influenza epidemics. During 1990-1999, estimated average rates of influenza-associated pulmonary and circulatory deaths per 100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged >65 years (6).
# Children
Among children aged <5 years, influenza-related illness is a common cause of visits to medical practices and emergency departments. During two influenza seasons (2002-03 and 2003-04), the percentage of visits among children aged <5 years with acute respiratory illness or fever caused by laboratory-confirmed influenza ranged from 10%-19% of medical office visits to 6%-29% of emergency department visits during the influenza season. Using these data, the rate of visits to medical clinics for influenza was estimated to be 50-95 per 1,000 children, and the rate of visits to emergency departments 6-27 per 1,000 children (38). Retrospective studies using medical records data have demonstrated similar rates of illness among children aged <5 years during other influenza seasons (33,56,59). During the influenza season, an estimated 7-12 additional outpatient visits and 5-7 additional antibiotic prescriptions per 100 children aged <15 years have been documented when compared with periods when influenza viruses are not circulating, with rates decreasing with increasing age of the child (59). During 1993-2004 in the Boston area, the rate of emergency department visits for respiratory illness attributed to influenza virus on the basis of viral surveillance data among children aged <7 years during the winter respiratory illness season ranged from 22.0 per 1,000 children aged 6-23 months to 5.4 per 1,000 children aged 5-7 years (60).
Rates of influenza-associated hospitalization are substantially higher among infants and young children than among older children when influenza viruses are in circulation (Figure 2) and are similar to rates for other groups considered at high risk for influenza-related complications (61-66), including persons aged >65 years (59,63). During 1979-2001, based on a national sample of hospital discharges, estimated rates of influenza-associated hospitalizations among young children in the United States were comparable to rates among persons aged >65 years (6). Of 153 laboratory-confirmed influenza-related pediatric deaths reported during the 2003-04 influenza season, 96 (63%) deaths occurred among children aged <5 years and 61 (40%) among children aged <2 years. Among the 149 children who died and for whom information on underlying health status was available, 100 (67%) did not have an underlying medical condition that was an indication for vaccination at that time (68). In California during the 2003-04 and 2004-05 influenza seasons, 51% of children with laboratory-confirmed influenza who died and 40% of those who required admission to an intensive care unit had no underlying medical conditions (69). These data indicate that although deaths are more common among children with risk factors for influenza complications, the majority of pediatric deaths occur among children of all age groups with no known high-risk conditions. The annual number of influenza-associated deaths among children reported to CDC has varied over the past four influenza seasons, with 84 deaths reported during 2007-08 (CDC, unpublished data, 2008).
Death associated with laboratory-confirmed influenza virus infection among children (defined as persons aged <18 years) is a nationally reportable condition. Deaths among children that have been attributed to coinfection with influenza and Staphylococcus aureus, particularly methicillin-resistant S. aureus (MRSA), have increased during the preceding four influenza seasons (70; CDC, unpublished data, 2008). The reason for this increase is not established but might reflect an increasing prevalence within the general population of colonization with MRSA strains, some of which carry certain virulence factors (71,72).
# Adults
Hospitalization rates during the influenza season are substantially increased for persons aged >65 years. One retrospective analysis based on data from managed-care organizations collected during 1996-2000 estimated that the risk during influenza season among persons aged >65 years with underlying conditions that put them at risk for influenza-related complications (i.e., one or more of the conditions listed as indications for vaccination) was approximately 560 influenza-associated hospitalizations per 100,000 persons, compared with approximately 190 per 100,000 healthy elderly persons. Persons aged 50-64 years with underlying medical conditions also were at substantially increased risk for hospitalizations during influenza season, compared with healthy adults aged 50-64 years. No increased risk for influenza-associated hospitalizations was demonstrated among healthy adults aged 50-64 years or among those aged 19-49 years, regardless of underlying medical conditions (64).
Influenza is an important contributor to the annual increase in deaths attributed to pneumonia and influenza that is observed during the winter months (Figure 3). During 1976-2001, an estimated yearly average of 32,651 (90%) influenza-related deaths occurred among adults aged >65 years (6). Risk for influenza-associated death was highest among the oldest elderly, with persons aged >85 years 16 times more likely to die from an influenza-associated illness than persons aged 65-69 years (6).
The duration of influenza symptoms is prolonged and the severity of influenza illness increased among persons with human immunodeficiency virus (HIV) infection (73-77). A retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than it was either before or after influenza was circulating. The risk for hospitalization was higher for HIV-infected women than it was for women with other underlying medical conditions (78). Another study estimated that the risk for influenza-related death was 94-146 deaths per 100,000 persons with acquired immunodeficiency syndrome (AIDS), compared with 0.9-1.0 deaths per 100,000 persons aged 25-54 years and 64-70 deaths per 100,000 persons aged >65 years in the general population (79).
Influenza-associated excess deaths among pregnant women were reported during the pandemics of 1918-1919 and 1957-1958 (80-83). Case reports and several epidemiologic studies also indicate that pregnancy increases the risk for influenza complications for the mother (84-89). The majority of studies that have attempted to assess the effect of influenza on pregnant women have measured changes in excess hospitalizations for respiratory illness during influenza season but not laboratory-confirmed influenza hospitalizations. Pregnant women have an increased number of medical visits for respiratory illnesses during influenza season compared with nonpregnant women (90). Hospitalized pregnant women with respiratory illness during influenza season have increased lengths of stay compared with hospitalized pregnant women without respiratory illness, and hospitalizations for respiratory illness were twice as common during influenza season (91). A retrospective cohort study of approximately 134,000 pregnant women conducted in Nova Scotia during 1990-2002 compared medical record data for pregnant women to data from the same women during the year before pregnancy. Among pregnant women, 0.4% were hospitalized and 25% visited a clinician during pregnancy for a respiratory illness. The rate of third-trimester hospital admissions during the influenza season was five times higher than the rate during the influenza season in the year before pregnancy and more than twice as high as the rate during the noninfluenza season. An excess of 1,210 hospital admissions in the third trimester per 100,000 pregnant women with comorbidities and 68 admissions per 100,000 women without comorbidities was reported (92). In one study, pregnant women with respiratory hospitalizations did not have an increase in adverse perinatal outcomes or delivery complications (93); however, another study indicated an increase in delivery complications (91). Infants born to women with laboratory-confirmed influenza during pregnancy do not have higher rates of low birth weight, congenital abnormalities, or low Apgar scores compared with infants born to uninfected women (88,94).
# Options for Controlling Influenza
The most effective strategy for preventing influenza is annual vaccination. Strategies that focus on providing routine vaccination to persons at higher risk for influenza complications have long been recommended, although coverage among the majority of these groups remains low. Routine vaccination of certain persons (e.g., children, contacts of persons at risk for influenza complications, and HCP) who serve as a source of influenza virus transmission might provide additional protection to persons at risk for influenza complications and reduce the overall influenza burden, but coverage levels among these persons need to be increased before effects on transmission can be reliably measured. Antiviral drugs used for chemoprophylaxis or treatment of influenza are adjuncts to vaccine but are not substitutes for annual vaccination. However, antiviral drugs might be underused among those hospitalized with influenza (95). Nonpharmacologic interventions (e.g., advising frequent handwashing and improved respiratory hygiene) are reasonable and inexpensive; these strategies have been demonstrated to reduce respiratory diseases (96,97) but have not been studied adequately to determine if they reduce transmission of influenza virus. Similarly, few data are available to assess the effects of community-level respiratory disease mitigation strategies (e.g., closing schools, avoiding mass gatherings, or using respiratory protection) on reducing influenza virus transmission during typical seasonal influenza epidemics (98,99).
# Influenza Vaccine Efficacy, Effectiveness, and Safety

# Evaluating Influenza Vaccine Efficacy and Effectiveness Studies
The efficacy (i.e., prevention of illness among vaccinated persons in controlled trials) and effectiveness (i.e., prevention of illness in vaccinated populations) of influenza vaccines depend in part on the age and immunocompetence of the vaccine recipient, the degree of similarity between the viruses in the vaccine and those in circulation (see Effectiveness of Influenza Vaccination When Circulating Influenza Virus Strains Differ from Vaccine Strains), and the outcome being measured. Influenza vaccine efficacy and effectiveness studies have used multiple possible outcome measures, including the prevention of medically attended acute respiratory illness (MAARI), prevention of laboratory-confirmed influenza virus illness, prevention of influenza or pneumonia-associated hospitalizations or deaths, or prevention of seroconversion to circulating influenza virus strains. Efficacy or effectiveness for more specific outcomes such as laboratory-confirmed influenza typically will be higher than for less specific outcomes such as MAARI because the causes of MAARI include infections with other pathogens that influenza vaccination would not be expected to prevent (100). Observational studies that compare less-specific outcomes among vaccinated populations to those among unvaccinated populations are subject to biases that are difficult to control for during analyses. For example, an observational study that determines that influenza vaccination reduces overall mortality might be biased if healthier persons in the study are more likely to be vaccinated (101,102). Randomized controlled trials that measure laboratory-confirmed influenza virus infections as the outcome are the most persuasive evidence of vaccine efficacy, but such trials cannot be conducted ethically among groups recommended to receive vaccine annually.
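For reference, the efficacy and effectiveness estimates cited throughout this report correspond to the standard attack-rate definition of vaccine efficacy (VE); the worked example below uses hypothetical attack rates chosen only for illustration:

```latex
% Vaccine efficacy from attack rates in the unvaccinated (ARU) and
% vaccinated (ARV) groups; RR is the relative risk ARV/ARU.
\[
\mathrm{VE} = \frac{\mathrm{ARU} - \mathrm{ARV}}{\mathrm{ARU}} \times 100\%
            = \left(1 - \mathrm{RR}\right) \times 100\%
\]
% Hypothetical season: 10% of unvaccinated and 3% of vaccinated
% participants develop laboratory-confirmed influenza:
\[
\mathrm{VE} = \frac{0.10 - 0.03}{0.10} \times 100\% = 70\%
\]
```

Measured against a nonspecific outcome such as MAARI, the same vaccine would show a lower apparent VE, because the portion of MAARI caused by other pathogens occurs at similar rates in both groups and dilutes the observed difference.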
# Influenza Vaccine Composition
Both LAIV and TIV contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one influenza A (H1N1) virus, and one influenza B virus. Each year, one or more virus strains in the vaccine might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. All three vaccine virus strains were changed for the recommended vaccine for the 2008-09 influenza season, compared with the 2007-08 season (see Recommendations for Using TIV and LAIV During the 2008-09 Influenza Season). Viruses for both types of currently licensed vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza virus infection (Table 1). Both TIV and LAIV are widely available in the United States. Although both types of vaccines are expected to be effective, the vaccines differ in several respects (Table 1).
# Major Differences Between TIV and LAIV
During the preparation of TIV, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (103). Only subvirion and purified surface antigen preparations of TIV (often referred to as "split" and subunit vaccines, respectively) are available in the United States. TIV contains killed viruses and thus cannot cause influenza. LAIV contains live, attenuated viruses that have the potential to cause mild signs or symptoms such as runny nose, nasal congestion, fever, or sore throat. LAIV is administered intranasally by sprayer, whereas TIV is administered intramuscularly by injection. LAIV is licensed for use among nonpregnant persons aged 2-49 years; safety has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications. TIV is licensed for use among persons aged >6 months, including those who are healthy and those with chronic medical conditions (Table 1).
# Correlates of Protection after Vaccination
Immune correlates of protection against influenza infection after vaccination include serum hemagglutination inhibition antibody and neutralizing antibody (14,104). Increased levels of antibody induced by vaccination decrease the risk for illness caused by strains that are antigenically similar to those strains of the same type or subtype included in the vaccine (105-108). The majority of healthy children and adults have high titers of antibody after vaccination (106,109). Although immune correlates such as achievement of certain antibody titers after vaccination correlate well with immunity on a population level, the significance of reaching or failing to reach a certain antibody threshold is not well understood on the individual level. Other immunologic correlates of protection that might best indicate clinical protection after receipt of an intranasal vaccine such as LAIV (e.g., mucosal antibody) are more difficult to measure (103,110).
# Immunogenicity, Efficacy, and Effectiveness of TIV

# Children
Children aged >6 months typically have protective levels of anti-influenza antibody against specific influenza virus strains after receiving the recommended number of doses of influenza vaccine (104,109,111-116). In most seasons, one or more vaccine antigens are changed compared with the previous season. In consecutive years when vaccine antigens change, children aged <9 years who received only 1 dose of vaccine in their first year of vaccination are less likely to have protective antibody responses when administered only a single dose during their second year of vaccination, compared with children who received 2 doses in their first year of vaccination (117-119).
When the vaccine antigens do not change from one season to the next, priming children aged 6-23 months with a single dose of vaccine in the spring followed by a dose in the fall engenders similar antibody responses compared with a regimen of 2 doses in the fall (120). However, one study conducted during a season when the vaccine antigens did not change compared with the previous season estimated 62% effectiveness against ILI for healthy children who had received only 1 dose in the previous influenza season and only 1 dose in the study season, compared with 82% for those who received 2 doses separated by >4 weeks during the study season (121).
The antibody response among children at higher risk for influenza-related complications (e.g., children with chronic medical conditions) might be lower than that typically reported among healthy children (122,123). However, antibody responses among children with asthma are similar to those of healthy children and are not substantially altered during asthma exacerbations requiring short-term prednisone treatment (124).
Vaccine effectiveness studies also have indicated that 2 doses are needed to provide adequate protection during the first season that young children are vaccinated. TIV efficacy has been demonstrated among children aged >6 months, although estimates have varied. In a randomized trial conducted during five influenza seasons (1985-1990) in the United States among children aged 1-15 years, annual vaccination reduced laboratory-confirmed influenza A substantially (77%-91%) (106). A limited 1-year placebo-controlled study reported vaccine efficacy against laboratory-confirmed influenza illness of 56% among healthy children aged 3-9 years and 100% among healthy children and adolescents aged 10-18 years (127). A randomized, double-blind, placebo-controlled trial conducted during two influenza seasons among children aged 6-24 months indicated that efficacy was 66% against culture-confirmed influenza illness during 1999-2000 but did not significantly reduce culture-confirmed influenza illness during 2000-2001 (128). In a nonrandomized controlled trial among children aged 2-6 years and 7-14 years who had asthma, vaccine efficacy was 54% and 78% against laboratory-confirmed influenza type A infection and 22% and 60% against laboratory-confirmed influenza type B infection, respectively. Vaccinated children aged 2-6 years with asthma did not have substantially fewer type B influenza virus infections compared with the control group in this study (129). Vaccination also might provide protection against asthma exacerbations (130); however, other studies of children with asthma have not demonstrated decreased exacerbations (131). Because of the recognized influenza-related disease burden among children with other chronic diseases or immunosuppression and the long-standing recommendation for vaccination of these children, randomized placebo-controlled efficacy studies among these children cannot be conducted ethically.
A retrospective study conducted among approximately 30,000 children aged 6 months-8 years during an influenza season (2003-04) with a suboptimal vaccine match indicated vaccine effectiveness of 51% against medically attended, clinically diagnosed pneumonia or influenza (i.e., no laboratory confirmation of influenza) among fully vaccinated children, and 49% among approximately 5,000 children aged 6-23 months (125). Another retrospective study of similar size conducted during the same influenza season in Denver but limited to healthy children aged 6-21 months estimated clinical effectiveness of 2 TIV doses to be 87% against pneumonia- or influenza-related office visits (121). Among children, TIV effectiveness might increase with age (106,132).
TIV has been demonstrated to reduce acute otitis media in some studies. Two studies have reported that TIV decreases the risk for influenza-associated otitis media by approximately 30% among children with mean ages of 20 and 27 months, respectively (133,134). However, a large study conducted among children with a mean age of 14 months indicated that TIV was not effective against acute otitis media (128). Influenza vaccine effectiveness against acute otitis media, which is caused by a variety of pathogens and is not typically diagnosed using influenza virus culture, would be expected to be relatively low when assessing a nonspecific clinical outcome.
# Adults Aged <65 Years
One dose of TIV is highly immunogenic in healthy adults aged <65 years. Limited or no increase in antibody response is reported among adults when a second dose is administered during the same season (135-139). When the vaccine and circulating viruses are antigenically similar, TIV prevents laboratory-confirmed influenza illness among approximately 70%-90% of healthy adults aged <65 years in randomized controlled trials (139-142). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (139-141,143-145). Efficacy or effectiveness against laboratory-confirmed influenza illness was 50%-77% in studies conducted during different influenza seasons when the vaccine strains were antigenically dissimilar to the majority of circulating strains (139,141,145-147). However, effectiveness among healthy adults against influenza-related hospitalization, measured in the most recent of these studies, was 90% (147).
In certain studies, persons with certain chronic diseases have lower serum antibody responses after vaccination compared with healthy young adults and can remain susceptible to influenza virus infection and influenza-related upper respiratory tract illness (148-150). Vaccine effectiveness among adults aged <65 years who are at higher risk for influenza complications is typically lower than that reported for healthy adults. In a case-control study conducted during 2003-2004, when the vaccine was a suboptimal antigenic match to many circulating virus strains, effectiveness for prevention of laboratory-confirmed influenza illness among adults aged 50-64 years with high-risk conditions was 48%, compared with 60% for healthy adults (147). Effectiveness against hospitalization among adults aged 50-64 years with high-risk conditions was 36%, compared with 90% effectiveness among healthy adults in that age range (147). A randomized controlled trial among adults in Thailand with chronic obstructive pulmonary disease (median age: 68 years) indicated a vaccine effectiveness of 76% in preventing laboratory-confirmed influenza during a season when viruses were well-matched to vaccine viruses. Effectiveness did not decrease with increasing severity of underlying lung disease (151).
Studies using less specific outcomes, without laboratory confirmation of influenza virus infection, typically have demonstrated substantial reductions in hospitalizations or deaths among adults with risk factors for influenza complications. In a case-control study conducted in Denmark among adults aged <65 years with underlying medical conditions during 1999-2000, vaccination reduced deaths attributable to any cause 78% and reduced hospitalizations attributable to respiratory infections or cardiopulmonary diseases 87% (152). A benefit was reported after the first vaccination and increased with subsequent vaccinations in subsequent years (153). Among patients with diabetes mellitus, vaccination was associated with a 56% reduction in any complication, a 54% reduction in hospitalizations, and a 58% reduction in deaths (154). Certain experts have noted that the substantial effects on morbidity and mortality among those who received influenza vaccination in these observational studies should be interpreted with caution because of the difficulties in ensuring that those who received vaccination had similar baseline health status as those who did not (101,102). One meta-analysis of published studies did not determine sufficient evidence to conclude that persons with asthma benefit from vaccination (155). However, a meta-analysis that examined effectiveness among persons with chronic obstructive pulmonary disease identified evidence of benefit from vaccination (156).
# Immunocompromised Persons
TIV produces adequate antibody concentrations against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and normal or near-normal CD4+ T-lymphocyte cell counts (157-159). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, TIV might not induce protective antibody titers (159,160); a second dose of vaccine does not improve the immune response in these persons (160,161). A randomized, placebo-controlled trial determined that TIV was highly effective in preventing symptomatic, laboratory-confirmed influenza virus infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3, although only a limited number of persons with low CD4+ T-lymphocyte cell counts were included. Vaccine effectiveness has been reported to be greater among HIV-infected persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type-1/mL (77).
On the basis of certain small studies, immunogenicity for persons with solid organ transplants varies according to transplant type. Among persons with kidney or heart transplants, the proportion who developed seroprotective antibody concentrations was similar or slightly reduced compared with healthy persons (162-164). However, studies among persons with liver transplants indicated reduced immunologic responses to influenza vaccination (165-167), especially if vaccination occurred within the 4 months after the transplant procedure (165).
# Pregnant Women and Neonates
Pregnant women have protective levels of anti-influenza antibodies after vaccination (168,169). Passive transfer of anti-influenza antibodies that might provide protection from vaccinated women to neonates has been reported (168,170-172). A retrospective, clinic-based study conducted during 1998-2003 documented a nonsignificant trend toward fewer episodes of MAARI during one influenza season among vaccinated pregnant women compared with unvaccinated pregnant women and substantially fewer episodes of MAARI during the peak influenza season (169). However, a retrospective study conducted during 1997-2002 that used clinical records data did not indicate a reduction in ILI among vaccinated pregnant women or their infants (173). In another study conducted during 1995-2001, medical visits for respiratory illness among the infants were not substantially reduced (174). However, studies of influenza vaccine effectiveness among pregnant women have not included specific outcomes such as laboratory-confirmed influenza in women or their infants.
# Older Adults
Adults aged >65 years typically have a diminished immune response to influenza vaccination compared with young healthy adults, suggesting that immunity might be of shorter duration (although still extending through one influenza season) (175,176). However, a review of the published literature concluded that no clear evidence existed that immunity declined more rapidly in the elderly (177). Infections among the vaccinated elderly might be associated with an age-related reduction in the ability to respond to vaccination rather than reduced duration of immunity (149,150).
The only randomized controlled trial among community-dwelling persons aged >60 years reported a vaccine efficacy of 58% against influenza respiratory illness during a season when the vaccine strains were considered to be well-matched to circulating strains, but indicated that efficacy was lower among those aged >70 years (178). Influenza vaccine effectiveness in preventing MAARI among the elderly in nursing homes has been estimated at 20%-40% (179,180), and reported outbreaks among well-vaccinated nursing home populations have suggested that vaccination might not have any significant effectiveness when circulating strains are drifted from vaccine strains (181,182). In contrast, some studies have indicated that vaccination can be up to 80% effective in preventing influenza-related death (179,183-185). Among elderly persons not living in nursing homes or similar chronic-care facilities, influenza vaccine is 27%-70% effective in preventing hospitalization for pneumonia and influenza (186-188). Influenza vaccination reduces the frequency of secondary complications and reduces the risk for influenza-related hospitalization and death among community-dwelling adults aged >65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (187-192). However, studies demonstrating large reductions in hospitalizations and deaths among the vaccinated elderly have been conducted using medical record databases and have not measured reductions in laboratory-confirmed influenza illness. These studies have been challenged because of concerns that they have not adequately controlled for differences in the propensity for healthier persons to be more likely than less healthy persons to receive vaccination (101,102,183,193-195).
# TIV Dosage, Administration, and Storage
The composition of TIV varies according to manufacturer, and package inserts should be consulted. TIV formulations in multidose vials contain the vaccine preservative thimerosal; preservative-free, single-dose preparations also are available. TIV should be stored at 35°F-46°F (2°C-8°C) and should not be frozen. TIV that has been frozen should be discarded. Dosage recommendations and schedules vary according to age group (Table 2). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season.
The intramuscular route is recommended for TIV. Adults and older children should be vaccinated in the deltoid muscle. A needle length of >1 inch (>25 mm) should be considered for persons in these age groups because needles of <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (196). When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (197).

† To identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question, and children who have asthma or who had a wheezing episode noted in the medical record during the preceding 12 months, should not receive FluMist.
†† Two doses administered at least 4 weeks apart are recommended for children aged 2-8 years who are receiving LAIV for the first time, and those who only received 1 dose in their first year of vaccination should receive 2 doses in the following year.
Infants and young children should be vaccinated in the anterolateral aspect of the thigh. A needle length of 7/8-1 inch should be used for children aged <12 months.
# Adverse Events after Receipt of TIV Children
Studies support the safety of annual TIV in children and adolescents. The largest published postlicensure population-based study assessed TIV safety in 215,600 children aged <18 years and 8,476 children aged 6-23 months enrolled in one of five health maintenance organizations (HMOs) during 1993-1999. This study indicated no increase in biologically plausible, medically attended events during the 2 weeks after inactivated influenza vaccination, compared with control periods 3-4 weeks before and after vaccination (198). A retrospective study using medical records data from approximately 45,000 children aged 6-23 months provided additional evidence supporting the overall safety of TIV in this age group. Vaccination was not associated with statistically significant increases in any medically attended outcome, and 13 diagnoses, including acute upper respiratory illness, otitis media, and asthma, were significantly less common (199).
In a study of 791 healthy children aged 1-15 years, postvaccination fever was noted among 11.5% of those aged 1-5 years, 4.6% of those aged 6-10 years, and 5.1% of those aged 11-15 years (106). Fever, malaise, myalgia, and other systemic symptoms that can occur after vaccination with inactivated vaccine most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (200,201). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Data about potential adverse events among children after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). A recently published review of VAERS reports submitted after administration of TIV to children aged 6-23 months documented that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures; the majority of the limited number of reported seizures appeared to be febrile (202). Because of the limitations of passive reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, usually is not possible using VAERS data alone.
# Adults
In placebo-controlled studies among adults, the most frequent side effect of vaccination was soreness at the vaccination site (affecting 10%-64% of patients) that lasted <2 days (203,204). These local reactions typically were mild and rarely interfered with the recipients' ability to conduct usual daily activities. Placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of TIV is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (139,155,203-205).
# Pregnant Women and Neonates
FDA has classified TIV as a "Pregnancy Category C" medication, indicating that animal reproduction studies have not been conducted to support a labeling change. Available data indicate that influenza vaccine does not cause fetal harm when administered to a pregnant woman or affect reproductive capacity. One study of approximately 2,000 pregnant women who received TIV during pregnancy demonstrated no adverse fetal effects and no adverse effects during infancy or early childhood (206). A matched case-control study of 252 pregnant women who received TIV within the 6 months before delivery determined no adverse events after vaccination among pregnant women and no difference in pregnancy outcomes compared with 826 pregnant women who were not vaccinated (169). During 2000-2003, an estimated 2 million pregnant women were vaccinated, and only 20 adverse events among women who received TIV were reported to VAERS during this time, including nine injection-site reactions and eight systemic reactions (e.g., fever, headache, and myalgias). In addition, three miscarriages were reported, but these were not known to be causally related to vaccination (207). Similar results have been reported in certain smaller studies (168,170,208), and a recent international review of data on the safety of TIV concluded that no evidence exists to suggest harm to the fetus (209).
# Persons with Chronic Medical Conditions
In a randomized cross-over study of children and adults with asthma, no increase in asthma exacerbations was reported for either age group (210), and a second study indicated no increase in wheezing among vaccinated asthmatic children (130). One study (123) reported that 20%-28% of children with asthma aged 9 months-18 years had local pain and swelling at the site of influenza vaccination, and another study (113) reported that 23% of children aged 6 months-4 years with chronic heart or lung disease had local reactions. A blinded, randomized, cross-over study of 1,952 adults and children with asthma demonstrated that only self-reported "body aches" were reported more frequently after TIV (25%) than after placebo injection (21%) (210). However, a placebo-controlled trial of TIV indicated no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years (114). Among children with high-risk medical conditions, one study of 52 children aged 6 months-3 years reported fever among 27% and irritability and insomnia among 25% (113), and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (211). No placebo comparison group was used in these studies.
# Immunocompromised Persons
Data demonstrating the safety of TIV for HIV-infected persons are limited, but no evidence exists that vaccination has a clinically important impact on HIV infection or immunocompetence. One study demonstrated a transient (i.e., 2-4 week) increase in HIV RNA (ribonucleic acid) levels in one HIV-infected person after influenza virus infection (212). Studies have demonstrated a transient increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (159,213). However, more recent and better-designed studies have not documented a substantial increase in the replication of HIV (214-217). CD4+ T-lymphocyte cell counts or progression of HIV disease have not been demonstrated to change substantially after influenza vaccination among HIV-infected persons compared with unvaccinated HIV-infected persons (159,218). Limited information is available about the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza virus infection or influenza vaccination (73,219).
Data are similarly limited for persons with other immunocompromising conditions. In small studies, vaccination did not affect allograft function or cause rejection episodes in recipients of kidney transplants (162,164), heart transplants (163), or liver transplants (165).
# Hypersensitivity
Immediate and presumably allergic reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) occur rarely after influenza vaccination (220,221). These reactions probably result from hypersensitivity to certain vaccine components; the majority probably are caused by residual egg protein. Although influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Manufacturers use a variety of different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information.
Persons who have experienced hives, swelling of the lips or tongue, acute respiratory distress, or collapse after eating eggs should consult a physician for appropriate evaluation to help determine whether vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma related to egg exposure or other allergic responses to egg protein, also might be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician before vaccination should be considered (222-224).
Hypersensitivity reactions to other vaccine components can occur but are rare. Although exposure to vaccines containing thimerosal can lead to hypersensitivity, the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (225,226). When reported, hypersensitivity to thimerosal typically has consisted of local delayed hypersensitivity reactions (225).
# Guillain-Barré Syndrome and TIV
The annual incidence of Guillain-Barré syndrome (GBS) is 10-20 cases per 1 million adults (227). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni gastrointestinal infections and upper respiratory tract infections, are associated with GBS (228-230). The 1976 swine influenza vaccine was associated with an increased frequency of GBS (231,232), estimated at one additional case of GBS per 100,000 persons vaccinated. The risk for influenza vaccine-associated GBS was higher among persons aged >25 years than among persons aged <25 years (233). However, obtaining strong epidemiologic evidence for a possible small increase in risk for a rare condition with multiple causes is difficult, and no evidence exists for a consistent causal relation between subsequent vaccines prepared from other influenza viruses and GBS.
None of the studies conducted using influenza vaccines other than the 1976 swine influenza vaccine have demonstrated a substantial increase in GBS associated with influenza vaccines. In three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were not statistically significant (234-236). However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately one additional case of GBS per 1 million persons vaccinated; the combined number of GBS cases peaked 2 weeks after vaccination (231). Results of a study that examined health-care data from Ontario, Canada, during 1992-2004 demonstrated a small but statistically significant temporal association between receiving influenza vaccination and subsequent hospital admission for GBS. However, no increase in cases of GBS at the population level was reported after introduction of a mass public influenza vaccination program in Ontario beginning in 2000 (237). Data from VAERS have documented decreased reporting of GBS occurring after vaccination across age groups over time, despite overall increased reporting of other, non-GBS conditions occurring after administration of influenza vaccine (203). Cases of GBS after influenza virus infection have been reported, but no other epidemiologic studies have documented such an association (238,239). Recently published data from the United Kingdom's General Practice Research Database (GPRD) found influenza vaccine to be protective against GBS, although it is unclear whether this was associated with protection against influenza or confounding because of a "healthy vaccinee" effect (e.g., healthier persons might be more likely to be vaccinated and also are at lower risk for GBS) (240). A separate GPRD analysis found no association between vaccination and GBS over a 9-year period; only three cases of GBS occurred within 6 weeks after influenza vaccination (241).
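A rough calculation (a sketch using only the figures cited in this section, with the 6-week postvaccination window treated as the at-risk period) shows how a relative risk of 1.7 translates into roughly one additional case per 1 million vaccinees:

```latex
% Background incidence (227): 10-20 cases per 1,000,000 adults per year.
% Baseline risk during a 6-week window:
\[
\frac{6}{52} \times (10\text{--}20)\ \text{per}\ 10^{6}
  \approx 1.2\text{--}2.3\ \text{per}\ 10^{6}
\]
% Excess (attributable) risk implied by RR = 1.7:
\[
(\mathrm{RR} - 1) \times \text{baseline}
  = 0.7 \times (1.2\text{--}2.3)\ \text{per}\ 10^{6}
  \approx 0.8\text{--}1.6\ \text{per}\ 10^{6}
\]
```

This range brackets the "approximately one additional case of GBS per 1 million persons vaccinated" cited above and is roughly one-tenth of the excess risk estimated for the 1976 swine influenza vaccine.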
If GBS is a side effect of influenza vaccines other than the 1976 swine influenza vaccine, the estimated risk for GBS (on the basis of the few studies that have demonstrated an association between vaccination and GBS) is low (i.e., approximately one additional case per 1 million persons vaccinated). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh these estimates of risk for vaccine-associated GBS. No evidence indicates that the case-fatality ratio for GBS differs among vaccinated persons and those not vaccinated.
# Use of TIV among Patients with a History of GBS
The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (227). Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown. However, avoiding vaccinating persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination might be prudent as a precaution. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, the established benefits of influenza vaccination might outweigh the risks for many persons who have a history of GBS and who also are at high risk for severe complications from influenza.
# Vaccine Preservative (Thimerosal) in Multidose Vials of TIV
Thimerosal, a mercury-containing antibacterial compound, has been used as a preservative in vaccines since the 1930s (242) and is used in multidose vial preparations of TIV to reduce the likelihood of bacterial contamination. No scientific evidence indicates that thimerosal in vaccines, including influenza vaccines, is a cause of adverse events other than occasional local hypersensitivity reactions in vaccine recipients. In addition, no scientific evidence exists that thimerosal-containing vaccines are a cause of adverse events among children born to women who received vaccine during pregnancy. Evidence is accumulating that supports the absence of substantial risk for neurodevelopmental disorders or other harm resulting from exposure to thimerosal-containing vaccines (243-250). However, continuing public concern about exposure to mercury in vaccines was viewed as a potential barrier to achieving higher vaccine coverage levels and reducing the burden of vaccine-preventable diseases. Therefore, the U.S. Public Health Service and other organizations recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines as part of a strategy to reduce mercury exposures from all sources (243,245,247). Since mid-2001, vaccines routinely recommended for infants aged <6 months in the United States have been manufactured either without thimerosal or with only greatly reduced (trace) amounts. As a result, a substantial reduction in the total mercury exposure from vaccines for infants and children already has been achieved (197). ACIP and other federal agencies and professional medical organizations continue to support efforts to provide thimerosal preservative-free vaccine options.
The benefits of influenza vaccination for all recommended groups, including pregnant women and young children, outweigh concerns based on a theoretical risk from thimerosal exposure through vaccination. The risks for severe illness from influenza virus infection are elevated among both young children and pregnant women, and vaccination has been demonstrated to reduce the risk for severe influenza illness and subsequent medical complications. In contrast, no scientifically conclusive evidence has demonstrated harm from exposure to vaccine containing thimerosal preservative. For these reasons, persons recommended to receive TIV may receive any age- and risk factor-appropriate vaccine preparation, depending on availability. An analysis of VAERS reports found no difference in the safety profile of preservative-containing compared with preservative-free TIV vaccines in infants aged 6-23 months (202).
Nonetheless, certain states have enacted legislation banning the administration of vaccines containing mercury; the provisions defining mercury content vary (251). LAIV and many of the single-dose vial or syringe preparations of TIV are thimerosal-free, and the number of influenza vaccine doses that do not contain thimerosal as a preservative is expected to increase (Table 2). However, these laws might present a barrier to vaccination unless influenza vaccines that do not contain thimerosal as a preservative are easily available in those states.
The U.S. vaccine supply for infants and pregnant women is in a period of transition during which the availability of thimerosal-reduced or thimerosal-free vaccine intended for these groups is being expanded by manufacturers as a feasible means of further reducing an infant's cumulative exposure to mercury. Other environmental sources of mercury exposure are more difficult or impossible to avoid or eliminate (243).
# LAIV Dosage, Administration, and Storage
Each dose of LAIV contains the same three vaccine antigens used in TIV. However, the antigens are constituted as live, attenuated, cold-adapted, temperature-sensitive vaccine viruses. Additional components of LAIV include egg allantoic fluid, monosodium glutamate, sucrose, phosphate and glutamate buffer, and hydrolyzed porcine gelatin. LAIV does not contain thimerosal. LAIV is made from attenuated viruses that are able to replicate efficiently only at temperatures present in the nasal mucosa. LAIV does not cause systemic symptoms of influenza in vaccine recipients, although a minority of recipients experience nasal congestion or fever, probably as a result of intranasal vaccine administration or local viral replication (252).
LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV is not licensed for vaccination of children aged <2 years or persons aged >49 years. LAIV is supplied in a prefilled, single-use sprayer containing 0.2 mL of vaccine. Approximately 0.1 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. LAIV is shipped to end users at 35°F-46°F (2°C-8°C). LAIV should be stored at 35°F-46°F (2°C-8°C) on receipt and can remain at that temperature until the expiration date is reached (252). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season.
# Shedding, Transmission, and Stability of Vaccine Viruses
Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses after vaccination, although in lower amounts than typically occur with shedding of wild-type influenza viruses. In rare instances, shed vaccine viruses can be transmitted from vaccine recipients to unvaccinated persons. However, serious illnesses have not been reported among unvaccinated persons who have been infected inadvertently with vaccine viruses.
One study of children aged 8-36 months in a child care center assessed transmissibility of vaccine viruses from 98 vaccinated to 99 unvaccinated subjects; 80% of vaccine recipients shed one or more virus strains (mean duration: 7.6 days). One influenza type B vaccine strain isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient who was in the same play group. The placebo recipient from whom the influenza type B vaccine strain was isolated had symptoms of a mild upper respiratory illness but did not experience any serious clinical events. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 0.6%-2.4% (253).
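As a rough illustration of how such a per-contact probability can be estimated, the sketch below computes a naive point estimate and an approximate confidence interval from the single confirmed transmission among the 99 placebo recipients. This is a deliberately simplified calculation; the published 0.6%-2.4% range came from the investigators' own analysis (253), not from this formula.

```python
import math

# Naive per-contact transmission estimate from the child care study:
# 1 confirmed vaccine-virus transmission among 99 unvaccinated contacts.
# Simplified illustration only; the published 0.6%-2.4% estimate (253)
# came from the investigators' own model, not this calculation.

transmissions, contacts = 1, 99
p_hat = transmissions / contacts
z = 1.96  # ~95% confidence

# Wilson score interval (avoids the zero-width problem of the Wald interval)
center = (p_hat + z**2 / (2 * contacts)) / (1 + z**2 / contacts)
half_width = (z * math.sqrt(p_hat * (1 - p_hat) / contacts
                            + z**2 / (4 * contacts**2))) / (1 + z**2 / contacts)

print(f"Point estimate: {p_hat:.1%}")
print(f"Approx. 95% CI: {center - half_width:.1%} to {center + half_width:.1%}")
```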
Studies assessing whether vaccine viruses are shed have been based on viral cultures or PCR detection of vaccine viruses in nasal aspirates from persons who have received LAIV. One study of 20 healthy vaccinated adults aged 18-49 years demonstrated that the majority of shedding occurred within the first 3 days after vaccination, although vaccine virus was detected in one subject on day 7 after vaccine receipt. Duration or type of symptoms associated with receipt of LAIV did not correlate with detection of vaccine viruses in nasal aspirates (254). Another study in 14 healthy adults aged 18-49 years indicated that 50% of these adults had viral antigen detected by direct immunofluorescence or rapid antigen tests within 7 days of vaccination. The majority of samples with detectable virus were collected on day 2 or 3 (255). Vaccine strain virus was detected from nasal secretions in one (2%) of 57 HIV-infected adults who received LAIV, none of 54 HIV-negative participants (256), and three (13%) of 23 HIV-infected children compared with seven (28%) of 25 children who were not HIV-infected (257). No participants in these studies had detectable virus beyond 10 days after receipt of LAIV. The possibility of person-to-person transmission of vaccine viruses was not assessed in these studies (254-257).
In clinical trials, viruses isolated from vaccine recipients have been phenotypically stable. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (258). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes. A study conducted in a child-care setting demonstrated that limited genetic change occurred in the LAIV strains following replication in the vaccine recipients (259).
# Immunogenicity, Efficacy, and Effectiveness of LAIV
LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not understood completely but appear to involve both serum and nasal secretory antibodies. The immunogenicity of the approved LAIV has been assessed in multiple studies conducted among children and adults (106,260-266). No single laboratory measurement closely correlates with protective immunity induced by LAIV (261).
# Healthy Children
A randomized, double-blind, placebo-controlled trial among 1,602 healthy children aged 15-71 months assessed the efficacy of LAIV against culture-confirmed influenza during two seasons (267,268). This trial included a subset of children aged 60-71 months who received 2 doses in the first season. In season one (1996-97), when vaccine and circulating virus strains were well-matched, efficacy against culture-confirmed influenza was 94% for participants who received 2 doses of LAIV separated by >6 weeks, and 89% for those who received 1 dose. In season two, when the A (H3N2) component in the vaccine was not well-matched with circulating virus strains, efficacy (1 dose) was 86%, for an overall efficacy of 92% over two influenza seasons. Receipt of LAIV also resulted in 21% fewer febrile illnesses and a significant decrease in acute otitis media requiring antibiotics (267,269). Other randomized, placebo-controlled trials demonstrating the efficacy of LAIV in young children against culture-confirmed influenza include a study conducted among children aged 6-35 months attending child care centers during consecutive influenza seasons (270), in which 85%-89% efficacy was observed, and a study conducted among children aged 12-36 months living in Asia during consecutive influenza seasons, in which 64%-70% efficacy was documented (271). In one community-based, nonrandomized, open-label study, reductions in MAARI were observed among children who received 1 dose of LAIV during the 1999-00 and 2000-01 influenza seasons even though antigenically drifted influenza A/H1N1 and B viruses were circulating during those seasons (272). LAIV efficacy in preventing laboratory-confirmed influenza has also been demonstrated in studies comparing the efficacy of LAIV with TIV rather than with a placebo (see Comparisons of LAIV and TIV Efficacy or Effectiveness).
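For readers unfamiliar with how trial efficacy percentages such as those above are computed, the following minimal sketch applies the standard formula: vaccine efficacy = 1 - (attack rate among vaccinees / attack rate among placebo recipients). The case counts below are hypothetical, chosen only to illustrate the formula; they are not data from the cited trials.

```python
# Vaccine efficacy as the relative reduction in attack rate vs. placebo:
# VE = 1 - (attack rate in vaccinees / attack rate in placebo recipients).
# All counts are hypothetical, chosen only to illustrate the formula.

def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Return efficacy as a percentage from confirmed case counts."""
    ar_vax = cases_vax / n_vax
    ar_placebo = cases_placebo / n_placebo
    return (1 - ar_vax / ar_placebo) * 100

# Hypothetical: 6 cases among 1,000 vaccinees vs. 100 among 1,000 controls
print(f"VE = {vaccine_efficacy(6, 1000, 100, 1000):.0f}%")  # -> VE = 94%
```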
# Healthy Adults
A randomized, double-blind, placebo-controlled trial of LAIV effectiveness among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in self-reported respiratory tract illness without laboratory confirmation, work loss, health-care visits, and medication use during influenza outbreak periods (273). The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. The frequency of febrile illnesses was not significantly decreased among LAIV recipients compared with those who received placebo. However, vaccine recipients had significantly fewer severe febrile illnesses (19% reduction) and febrile upper respiratory tract illnesses (24% reduction), and significant reductions in days of illness, days of work lost, days with health-care-provider visits, and use of prescription antibiotics and over-the-counter medications (273). Efficacy against culture-confirmed influenza in a randomized, placebo-controlled study was 57%, although efficacy in this study was not demonstrated to be significantly greater than placebo (155).
# Adverse Events after Receipt of LAIV

# Healthy Children Aged 2-18 Years
In a subset of healthy children aged 60-71 months from one clinical trial (233), certain signs and symptoms were reported more often after the first dose among LAIV recipients (n = 214) than among placebo recipients (n = 95), including runny nose (48% and 44%, respectively); headache (18% and 12%, respectively); vomiting (5% and 3%, respectively); and myalgias (6% and 4%, respectively). However, these differences were not statistically significant. In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0-21%) (106,260,263,265,270,273-276). These symptoms were associated more often with the first dose and were self-limited.
In a randomized trial published in 2007, LAIV and TIV were compared among children aged 6-59 months (277). Children with medically diagnosed or treated wheezing within 42 days before enrollment, or a history of severe asthma, were excluded from this study. Among children aged 24-59 months who received LAIV, the rate of medically significant wheezing, using a prespecified definition, was not greater compared with those who received TIV (277); wheezing was observed more frequently among younger LAIV recipients in this study (see Persons at Higher Risk for Influenza-Related Complications). In a previous randomized, placebo-controlled safety trial among children aged 12 months-17 years without a history of asthma by parental report, an elevated risk for asthma events (RR = 4.06, CI = 1.29-17.86) was documented among 728 children aged 18-35 months who received LAIV. Of the 16 children with asthma-related events in this study, seven had a history of asthma on the basis of subsequent medical record review. None required hospitalization, and elevated risks for asthma were not observed in other age groups (276).
Another study was conducted among >11,000 children aged 18 months-18 years in which 18,780 doses of vaccine were administered over 4 years. For children aged 18 months-4 years, no increase was reported in asthma visits 0-15 days after vaccination compared with the prevaccination period. A significant increase in asthma events was reported 15-42 days after vaccination, but only in vaccine year 1 (278).
Initial data from VAERS during 2007-2008, following the ACIP recommendation for LAIV use in children aged 2-4 years, do not suggest a concern for wheezing after LAIV in young children. However, the data also suggest that uptake of LAIV is limited, and continued safety monitoring for wheezing events after LAIV is indicated (CDC, unpublished data, 2008).
# Adults Aged 19-49 Years
Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (252,279). In one clinical trial among a subset of healthy adults aged 18-49 years, signs and symptoms reported more frequently among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (14% and 11%, respectively); runny nose (45% and 27%, respectively); sore throat (28% and 17%, respectively); chills (9% and 6%, respectively); and tiredness/weakness (26% and 22%, respectively) (279).
# Persons at Higher Risk for Influenza-Related Complications
Limited data assessing the safety of LAIV use for certain groups at higher risk for influenza-related complications are available. In one study of 54 HIV-infected persons aged 18-58 years with CD4 counts >200 cells/mm3 who received LAIV, no serious adverse events were reported during a 1-month follow-up period (256). Similarly, one study demonstrated no significant difference in the frequency of adverse events or viral shedding among HIV-infected children aged 1-8 years on effective antiretroviral therapy who were administered LAIV, compared with HIV-uninfected children receiving LAIV (257). LAIV was well-tolerated among adults aged >65 years with chronic medical conditions (280). These findings suggest that persons at risk for influenza complications who have inadvertent exposure to LAIV would not have significant adverse events or prolonged viral shedding and that persons who have contact with persons at higher risk for influenza-related complications may receive LAIV.
# Serious Adverse Events
Serious adverse events requiring medical attention after administration of LAIV among healthy children aged 5-17 years or healthy adults aged 18-49 years occurred at a rate of <1% (252). Surveillance will continue for adverse events, including those that might not have been detected in previous studies. Reviews of reports to VAERS after vaccination of approximately 2.5 million persons during the 2003-04 and 2004-05 influenza seasons did not indicate any new safety concerns (281). Health-care professionals should report all clinically significant adverse events occurring after LAIV administration promptly to VAERS.
# Comparisons of LAIV and TIV Efficacy or Effectiveness
Both TIV and LAIV have been demonstrated to be effective in children and adults, but data directly comparing the efficacy or effectiveness of these two types of influenza vaccines are limited. Studies comparing the efficacy of TIV to that of LAIV have been conducted in a variety of settings and populations using several different outcomes. One randomized, double-blind, placebo-controlled challenge study among 92 healthy adults aged 18-41 years assessed the efficacy of both LAIV and TIV in preventing influenza infection when challenged with wild-type strains that were antigenically similar to vaccine strains (282). The overall efficacy in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, when challenged 28 days after vaccination by viruses to which study participants were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant in this limited study. No additional challenges to assess efficacy at time points later than 28 days were conducted. In a randomized, double-blind, placebo-controlled trial conducted among young adults during an influenza season when the majority of circulating H3N2 viruses were antigenically drifted from that season's vaccine viruses, the efficacy of LAIV and TIV against culture-confirmed influenza was 57% and 77%, respectively. The difference in efficacy was not statistically significant and was based largely on a difference in efficacy against influenza B (155).
A randomized controlled clinical trial conducted among children aged 6-71 months during the 2004-05 influenza season demonstrated a 55% reduction in cases of culture-confirmed influenza among children who received LAIV compared with those who received TIV (277). In this study, LAIV efficacy was higher compared with TIV against antigenically drifted viruses as well as well-matched viruses (277). An open-label, nonrandomized, community-based influenza vaccine trial conducted during an influenza season when circulating H3N2 strains were poorly matched with strains contained in the vaccine also indicated that LAIV, but not TIV, was effective against antigenically drifted H3N2 strains during that influenza season. In this study, children aged 5-18 years who received LAIV had significant protection against laboratory-confirmed influenza (37%) and pneumonia and influenza events (50%) (278).
Although LAIV is not licensed for use in persons with risk factors for influenza complications, certain studies have compared the efficacy of LAIV to TIV in these groups. LAIV provided 32% increased protection in preventing culture-confirmed influenza compared with TIV in one study conducted among children aged >6 years and adolescents with asthma (283) and 52% increased protection compared with TIV among children aged 6-71 months with recurrent respiratory tract infections (284).
# Effectiveness of Vaccination for Decreasing Transmission to Contacts
Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce ILI and complications among persons at high risk. Influenza virus infection and ILI are common among HCP (285-287). Influenza outbreaks have been attributed to low vaccination rates among HCP in hospitals and long-term-care facilities (288-290). One serosurvey demonstrated that 23% of HCP had serologic evidence of influenza virus infection during a single influenza season; the majority had mild illness or subclinical infection (285). Observational studies have demonstrated that vaccination of HCP is associated with decreased deaths among nursing home patients (291,292). In one cluster-randomized controlled trial that included 2,604 residents of 44 nursing homes, significant decreases in mortality, ILI, and medical visits for ILI care were demonstrated among residents in nursing homes in which staff were offered influenza vaccination (coverage rate: 48%), compared with nursing homes in which staff were not provided with vaccination (coverage rate: 6%) (293). A review concluded that vaccination of HCP in settings in which patients were also vaccinated provided significant reductions in deaths among elderly patients from all causes and deaths from pneumonia (294).
Epidemiologic studies of community outbreaks of influenza demonstrate that school-age children typically have the highest influenza illness attack rates, suggesting that routine universal vaccination of children might reduce transmission to their household contacts and possibly others in the community. Results from certain studies have indicated that the benefits of vaccinating children might extend to protection of their adult contacts and to persons at risk for influenza complications in the community. However, these data are limited, and studies have not used laboratory-confirmed influenza as an outcome measure. A single-blinded, randomized controlled study conducted as part of a 1996-1997 vaccine effectiveness study demonstrated that vaccinating preschool-aged children with TIV reduced influenza-related morbidity among some household contacts (295). A randomized, placebo-controlled trial among children with recurrent respiratory tract infections demonstrated that members of families with children who had received LAIV were significantly less likely to have respiratory tract infections and reported significantly fewer workdays lost, compared with families with children who received placebo (296). In nonrandomized community-based studies, administration of LAIV has been demonstrated to reduce MAARI (297,298) and ILI-related economic and medical consequences (e.g., workdays lost and number of health-care provider visits) among contacts of vaccine recipients (298). Households with children attending schools in which school-based LAIV vaccination programs had been established reported less ILI and fewer physician visits during peak influenza season, compared with households with children in schools in which no LAIV vaccination had been offered. However, a decrease in the overall rate of school absenteeism was not reported in communities in which LAIV vaccination was offered (298). These community-based studies have not used laboratory-confirmed influenza as an outcome.
Some studies have also documented reductions in influenza illness among persons living in communities where focused programs for vaccinating children have been conducted. A community-based observational study conducted during the 1968 pandemic using a univalent inactivated vaccine reported that a vaccination program targeting school-aged children (coverage rate: 86%) in one community reduced influenza rates within the community among all age groups compared with another community in which aggressive vaccination was not conducted among school-aged children (299). An observational study conducted in Russia demonstrated reductions in ILI among the community-dwelling elderly after implementation of a vaccination program using TIV for children aged 3-6 years (57% coverage achieved) and children and adolescents aged 7-17 years (72% coverage achieved) (300). In a nonrandomized community-based study conducted over three influenza seasons, 8%-18% reductions in the incidence of MAARI during the influenza season among adults aged >35 years were observed in communities in which LAIV was offered to all children aged >18 months (estimated coverage rate: 20%-25%) compared with communities without such vaccination programs (297). In a subsequent influenza season, the same investigators documented a 9% reduction in MAARI rates during the influenza season among persons aged 35-44 years in intervention communities, where coverage was estimated at 31% among school children, compared with control communities. However, MAARI rates among persons aged >45 years were lower in the intervention communities regardless of the presence of influenza in the community, suggesting that lower rates could not be attributed to vaccination of school children against influenza (301).
# Effectiveness of Influenza Vaccination when Circulating Influenza Virus Strains Differ from Vaccine Strains
Manufacturing trivalent influenza virus vaccines is a challenging process that takes 6-8 months to complete. This manufacturing timeframe requires that influenza vaccine strains for influenza vaccines used in the United States be selected in February of each year by the FDA to allow time for manufacturers to prepare vaccines for the next influenza season. Vaccine strain selections are based on global viral surveillance data that are used to identify trends in antigenic changes among circulating influenza viruses and on the availability of suitable vaccine virus candidates.
Vaccination can provide reduced but substantial cross-protection against drifted strains in some seasons, including reductions in severe outcomes such as hospitalization. Usually one or more circulating viruses with antigenic changes compared with the vaccine strains are identified in each influenza season. However, the clinical effectiveness of influenza vaccines cannot be determined solely by laboratory evaluation of the degree of antigenic match between vaccine and circulating strains. In some influenza seasons, circulating influenza viruses with significant antigenic differences predominate and, compared with seasons when vaccine and circulating strains are well-matched, reductions in vaccine effectiveness are sometimes observed (126,139,145,147,191). However, even during years when vaccine strains were not antigenically well-matched to circulating strains, substantial protection has been observed against severe outcomes, presumably because of vaccine-induced cross-reacting antibodies (139,145,147,273). For example, in one study conducted during an influenza season (2003-04) when the predominant circulating strain was an influenza A (H3N2) virus that was antigenically different from that season's vaccine strain, effectiveness among persons aged 50-64 years against laboratory-confirmed influenza illness was 60% among healthy persons and 48% among persons with medical conditions that increase the risk for influenza complications (147). An interim, within-season analysis during the 2007-08 influenza season indicated that vaccine effectiveness was 44% overall, 54% among healthy persons aged 5-49 years, and 58% against influenza A, despite the finding that viruses circulating in the study area were predominantly a drifted influenza A (H3N2) strain and an influenza B strain from a different lineage compared with vaccine strains (302). Among children, both TIV and LAIV provide protection against infection even in seasons when vaccines and circulating strains are not well-matched. Vaccine effectiveness against ILI was 49%-69% in two observational studies, and 49% against medically attended, laboratory-confirmed influenza in a case-control study conducted among young children during the 2003-04 influenza season, when a drifted influenza A (H3N2) strain predominated, based on viral surveillance data (121,125). However, continued improvements in collecting representative circulating viruses and in using surveillance data to forecast antigenic drift are needed. Shortening manufacturing time, to increase the time available to identify good vaccine candidate strains from among the most recent circulating strains, also is important. Data from multiple seasons and collected in a consistent manner are needed to better understand vaccine effectiveness during seasons when circulating and vaccine virus strains are not well-matched.
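As an aside on methods, case-control effectiveness estimates such as the 49% figure above are typically derived from the odds ratio of vaccination among cases versus controls, with vaccine effectiveness approximated as (1 - odds ratio) x 100. The sketch below illustrates that arithmetic; all counts are hypothetical and are not data from the cited studies.

```python
# In case-control designs, vaccine effectiveness is typically
# approximated as VE = (1 - odds ratio) x 100, where the odds ratio
# compares vaccination odds among cases vs. controls.
# All counts below are hypothetical, for illustration only.

def ve_from_case_control(vax_cases, unvax_cases, vax_controls, unvax_controls):
    """Approximate VE (%) from a 2x2 table of vaccination status."""
    odds_ratio = (vax_cases / unvax_cases) / (vax_controls / unvax_controls)
    return (1 - odds_ratio) * 100

# Hypothetical: 30 of 130 cases vaccinated vs. 74 of 200 controls vaccinated
print(f"VE ~= {ve_from_case_control(30, 100, 74, 126):.0f}%")  # -> ~49%
```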
# Cost-Effectiveness of Influenza Vaccination
Economic studies of influenza vaccination are difficult to compare because they have used different measures of both costs and benefits (e.g., cost-only, cost-effectiveness, cost-benefit, or cost-utility). However, most studies find that vaccination reduces or minimizes health care, societal, and individual costs, or the productivity losses and absenteeism associated with influenza illness. One national study estimated the annual economic burden of seasonal influenza in the United States (using 2003 population and dollars) to be $87.1 billion, including $10.4 billion in direct medical costs (303).
Studies of vaccination among healthy working adults have estimated overall societal cost savings (306,307). However, another U.S. study indicated no productivity and absentee savings in a strategy to vaccinate healthy working adults, although vaccination was still estimated to be cost-effective (139).
Cost analyses have documented the considerable cost burden of illness among children. In a study of 727 children at a medical center during 2000-2004, the mean total cost of hospitalization for influenza-related illness was $13,159 ($39,792 for patients admitted to an intensive care unit and $7,030 for patients cared for exclusively on the wards) (308). Strategies that focus on vaccinating children with medical conditions that confer a higher risk for influenza complications are more cost-effective than a strategy of vaccinating all children (309). An analysis that compared the costs of vaccinating children of varying ages with TIV and LAIV indicated that costs per QALY saved increased with age for both vaccines. In 2003 dollars per QALY saved, costs for routine vaccination using TIV were $12,000 for healthy children aged 6-23 months and $119,000 for healthy adolescents aged 12-17 years, compared with $9,000 and $109,000 using LAIV, respectively (310). Economic evaluations of vaccinating children have demonstrated a wide range of cost estimates, but have generally found this strategy to be either cost-saving or cost-beneficial (311-314).
Economic analyses are sensitive to the vaccination venue, with vaccination in medical care settings incurring higher projected costs. In a published model, the mean cost (year 2004 values) of vaccination was lower in mass vaccination ($17.04) and pharmacy ($11.57) settings than in scheduled doctor's office visits ($28.67) (315). Vaccination in nonmedical settings was projected to be cost saving for healthy adults aged >50 years and for high-risk adults of all ages. For healthy adults aged 18-49 years, preventing an episode of influenza would cost $90 if vaccination were delivered in a pharmacy setting, $210 in a mass vaccination setting, and $870 during a scheduled doctor's office visit (315). Medicare payment rates in recent years have been less than the costs associated with providing vaccination in a medical practice (316).
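A minimal sketch of the arithmetic underlying such per-episode cost comparisons follows, assuming that the cost per influenza episode prevented is approximately the per-dose vaccination cost divided by the expected episodes prevented per vaccinee (attack rate x vaccine effectiveness). The attack rate and effectiveness values are assumptions for illustration only; the published model (315) incorporated additional factors (e.g., averted medical costs), so its figures differ.

```python
# Simplified view of per-episode cost comparisons across venues:
# cost per episode prevented ~= cost of one vaccination /
#                               (attack rate x vaccine effectiveness).
# Attack rate and effectiveness are assumptions for illustration; the
# published model (315) also accounted for averted medical costs.

attack_rate = 0.18     # assumed seasonal attack rate among healthy adults
effectiveness = 0.70   # assumed vaccine effectiveness
episodes_prevented = attack_rate * effectiveness  # per vaccinee

venue_costs = {"pharmacy": 11.57, "mass vaccination": 17.04,
               "doctor's office": 28.67}  # per-dose costs from the model (315)

for venue, cost in venue_costs.items():
    print(f"{venue:>16}: ${cost / episodes_prevented:,.0f} per episode prevented")
```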
# Vaccination Coverage Levels
Continued annual monitoring is needed to determine the effects on vaccination coverage of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors related to vaccination coverage among adults and children. One of the national health objectives for 2010 includes achieving an influenza vaccination coverage level of 90% for persons aged >65 years and among nursing home residents (317,318); new strategies to improve coverage are needed to achieve these objectives (319,320). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use.
On the basis of the 2006 final data set and the 2007 early release data from the National Health Interview Survey (NHIS), estimated national influenza vaccine coverage during the 2005-06 and 2006-07 influenza seasons among persons aged >65 years and 50-64 years increased slightly from 65% and 32%, respectively, to 66% and 36% (Table 3), and appears to be approaching coverage levels observed before the 2004-05 vaccine shortage year. In 2005-06 and 2006-07, estimated vaccination coverage levels among adults with high-risk conditions aged 18-49 years were 23% and 26%, respectively, substantially lower than the Healthy People 2000 and Healthy People 2010 objectives of 60% (Table 3) (317,318).
Opportunities to vaccinate persons at risk for influenza complications (e.g., during hospitalizations for other causes) often are missed. In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (321). A study in New York City during 2001-2005 among 7,063 children aged 6-23 months indicated that 2-dose vaccine coverage increased from 1.6% to 23.7%. Although the average number of medical visits during which an opportunity to be vaccinated was missed decreased during the course of the study from 2.9 to 2.0 per child, 55% of all visits during the final year of the study still represented a missed vaccination opportunity (322). Using standing orders in hospitals increases vaccination rates among hospitalized persons (323). In one survey, the strongest predictor of receiving vaccination was the survey respondent's belief that he or she was in a high-risk group. However, many persons in high-risk groups did not know that they were in a group recommended for vaccination (324).

Reducing racial and ethnic health disparities, including disparities in influenza vaccination coverage, is an overarching national goal that is not being met (317). Estimated vaccination coverage levels in 2007 among persons aged >65 years were 70% for non-Hispanic whites, 58% for non-Hispanic blacks, and 54% for Hispanics (325). Among Medicare beneficiaries, other key factors that contribute to disparities in coverage include variations in the propensity of patients to actively seek vaccination and variations in the likelihood that providers recommend vaccination (326,327). One study estimated that eliminating these disparities in vaccination coverage would have an impact on mortality similar to the impact of eliminating deaths attributable to kidney disease among blacks or liver disease among Hispanics (328).
Reported vaccination levels are low among children at increased risk for influenza complications. Coverage among children aged 2-17 years with asthma for the 2004-05 influenza season was estimated to be 29% (329). One study reported 79% vaccination coverage among children attending a cystic fibrosis treatment center (330). During the first season for which ACIP recommended that all children aged 6-23 months receive vaccination, 33% received one or more doses of influenza vaccine, and 18% received 2 doses if they were previously unvaccinated (331). In a subsequent assessment, the proportion of children who were fully vaccinated (i.e., received 1 or 2 doses depending on previous vaccination history) varied substantially among states (334). As has been reported for older adults, a physician recommendation for vaccination and the perception that having a child be vaccinated "is a smart idea" were associated positively with likelihood of vaccination of children aged 6-23 months (335). Similarly, children with asthma were more likely to be vaccinated if their parents recalled a physician recommendation to be vaccinated or believed that the vaccine worked well (336). Implementation of a reminder/recall system in a pediatric clinic increased the percentage of children with asthma or reactive airways disease receiving vaccination from 5% to 32% (337).
Although annual vaccination is recommended for HCP and is a high priority for reducing morbidity associated with influenza in health-care settings and for expanding influenza vaccine use (338-340), national survey data demonstrated a vaccination coverage level of only 42% among HCP during the 2005-06 season (Table 3). Vaccination of HCP has been associated with reduced work absenteeism (286) and with fewer deaths among nursing home patients (292,293) and elderly hospitalized patients (294). Factors associated with a higher rate of influenza vaccination among HCP include older age, being a hospital employee, having employer-provided health-care insurance, having had pneumococcal or hepatitis B vaccination in the past, or having visited a health-care professional during the preceding year. Non-Hispanic black HCP were less likely than non-Hispanic white HCP to be vaccinated (341). Beliefs that are frequently cited by HCP who decline vaccination include doubts about the risk for influenza and the need for vaccination, concerns about vaccine effectiveness and side effects, and dislike of injections (342).
Vaccine coverage among pregnant women has not increased significantly during the preceding decade (343). Only 12% and 13% of pregnant women participating in the 2006 and 2007 NHIS reported vaccination during the 2005-06 and 2006-07 seasons, respectively, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (Table 3). In a study of influenza vaccine acceptance by pregnant women, 71% of those who were offered the vaccine chose to be vaccinated (344). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients in their practices, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (345).
Influenza vaccination coverage in all groups recommended for vaccination remains suboptimal. Despite the timing of the peak of influenza disease, administration of vaccine decreases substantially after November. According to results from the NHIS for the two most recent influenza seasons for which these data are available, approximately 84% of all influenza vaccinations were administered during September-November. Among persons aged >65 years, the percentage of September-November vaccinations was 92% (346). Because many persons recommended for vaccination remain unvaccinated at the end of November, CDC encourages public health partners and health-care providers to conduct vaccination clinics and other activities that promote influenza vaccination annually during National Influenza Vaccination Week and throughout the remainder of the influenza season.
Self-report of influenza vaccination among adults, compared with determining vaccination status from the medical record, is a sensitive and specific source of information (347). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (347). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available.
# Recommendations for Using TIV and LAIV During the 2008-09 Influenza Season
Both TIV and LAIV prepared for the 2008-09 season will include A/Brisbane/59/2007 (H1N1)-like, A/Brisbane/10/2007 (H3N2)-like, and B/Florida/4/2006-like antigens. These viruses will be used because they are representative of influenza viruses that are forecasted to be circulating in the United States during the 2008-09 influenza season and have favorable growth properties in eggs.
TIV and LAIV can be used to reduce the risk for influenza virus infection and its complications. Vaccination providers should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected.
Healthy, nonpregnant persons aged 2-49 years can choose to receive either vaccine. Some TIV formulations are FDA-licensed for use in persons as young as age 6 months (see Recommended Vaccines for Different Age Groups). TIV is licensed for use in persons with high-risk conditions. LAIV is FDA-licensed for use only in persons aged 2-49 years. In addition, FDA has indicated that the safety of LAIV has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications. All children aged 6 months-8 years who have not previously been vaccinated at any time with at least 1 dose of either LAIV or TIV should receive 2 doses of age-appropriate vaccine in the same season, with a single dose during subsequent seasons.
# Target Groups for Vaccination
Influenza vaccine should be provided to all persons who want to reduce the risk of becoming ill with influenza or of transmitting it to others. However, emphasis on providing routine vaccination annually to certain groups at higher risk for influenza infection or complications is advised. These groups include all children aged 6 months-18 years, all persons aged >50 years, and other persons at risk for medical complications from influenza or more likely to require medical care. In addition, all persons who live with or care for persons at high risk for influenza-related complications, including contacts of children aged <6 months, should receive influenza vaccine annually (Boxes 1 and 2). Approximately 83% of the United States population is included in one or more of these target groups; however, <40% of the U.S. population received an influenza vaccination during 2007-2008.
# Children Aged 6 Months-18 Years
Beginning with the 2008-09 influenza season, annual vaccination for all children aged 6 months-18 years is recommended. Annual vaccination of all children aged 6 months-4 years (i.e., 6-59 months) and of older children with conditions that place them at increased risk for complications from influenza should continue. Children and adolescents at high risk for influenza complications should continue to be a focus of vaccination efforts as providers and programs transition to routinely vaccinating all children. Annual vaccination of all children aged 5-18 years should begin in September 2008 or as soon as vaccine is available for the 2008-09 influenza season, if feasible. Annual vaccination of all children aged 5-18 years should begin no later than during the 2009-10 influenza season.
Healthy children aged 2-18 years can receive either LAIV or TIV. Children aged 6-23 months, those aged 2-4 years who have evidence of possible reactive airways disease (see Considerations When Using LAIV), and those who have medical conditions that put them at higher risk for influenza complications should receive TIV. All children aged 6 months-8 years who have not previously received vaccination against influenza should receive 2 doses of vaccine the first year they are vaccinated.
# Persons at Risk for Medical Complications
Vaccination to prevent influenza is particularly important for the following persons, who are at increased risk for severe complications from influenza or at higher risk for influenza-associated clinic, emergency department, or hospital visits. When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to these persons:
# Persons Who Live With or Care for Persons at High Risk for Influenza-Related Complications
To prevent transmission to persons identified above, vaccination with TIV or LAIV (unless contraindicated) also is recommended for the following persons. When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to these persons:
- HCP;
- healthy household contacts (including children) and caregivers of children aged <5 years and adults aged >50 years, with particular emphasis on vaccinating contacts of children aged <6 months; and
- healthy household contacts (including children) and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza.
# Additional Information About Vaccination of Specific Populations

# Children Aged 6 Months-18 Years
Beginning with the 2008-09 influenza season, all children aged 6 months-18 years should be vaccinated against influenza annually. The expansion of vaccination to include all children aged 5-18 years should begin in 2008 if feasible, but no later than the 2009-10 influenza season. In 2004, ACIP recommended routine vaccination for all children aged 6-23 months, and in 2006, ACIP expanded the recommendation to include all children aged 24-59 months. The committee's recommendation to expand routine influenza vaccination to include all school-aged children and adolescents aged 5-18 years is based on 1) accumulated evidence that influenza vaccine is effective and safe for school-aged children (see "Influenza Vaccine Efficacy, Effectiveness, and Safety"), 2) increased evidence that influenza has substantial adverse impacts among school-aged children and their contacts (e.g., school absenteeism, increased antibiotic use, medical care visits, and parental work loss) (see "Health-Care Use, Hospitalizations, and Deaths Attributed to Influenza"), and 3) an expectation that a simplified age-based influenza vaccine recommendation for all school-aged children and adolescents will improve vaccine coverage levels among the approximately 50% of school-aged children who already had a risk- or contact-based indication for annual influenza vaccination.
Children typically have the highest attack rates during community outbreaks of influenza and serve as a major source of transmission within communities (1,2). If sufficient vaccination coverage among children can be achieved, additional benefits, such as the indirect effect of reducing influenza among persons who have close contact with children and reducing overall transmission within communities, might occur. Achieving and sustaining community-level reductions in influenza will require mobilization of community resources and development of sustainable annual vaccination campaigns to assist health-care providers and vaccination programs in providing influenza vaccination services to children of all ages. In many areas, innovative community-based efforts, which might include mass vaccination programs in school or other community settings, will be needed to supplement vaccination services provided in health-care providers' offices or public health clinics. In nonrandomized community-based controlled trials, reductions in ILI-related symptoms and medical visits among household contacts have been demonstrated in communities where vaccination programs among school-aged children were established, compared with communities without such vaccination programs (299-301). Rates of school absences associated with ILI also were significantly reduced in some studies. In addition, reducing influenza transmission among children through vaccination has reduced rates of self-reported ILI among household contacts and among unvaccinated children (297,298).
Reducing influenza-related illness among children who are at high risk for influenza complications should continue to be a primary focus of influenza-prevention efforts. Children who should be vaccinated because they are at high risk for influenza complications include all children aged 6-59 months, children with certain medical conditions, children who are contacts of children aged <5 years or of adults aged >50 years, and children who are contacts of persons at high risk for influenza complications because of medical conditions. Influenza vaccines are not licensed by FDA for use among children aged <6 months. Because these infants are at higher risk for influenza complications than other child age groups, prevention efforts that focus on vaccinating household contacts and out-of-home caregivers to reduce the risk for influenza in these infants are a high priority.
All children aged 6 months-8 years who have not previously received vaccination against influenza should receive 2 doses of vaccine the first influenza season that they are vaccinated. The second dose should be administered 4 or more weeks after the initial dose.

# HCP and Other Persons Who Can Transmit Influenza to Those at High Risk

All HCP, as well as those in training for health-care professions, should be vaccinated annually against influenza. Persons working in health-care settings who should be vaccinated include physicians, nurses, and other workers in both hospital and outpatient-care settings; medical emergency-response workers (e.g., paramedics and emergency medical technicians); employees of nursing home and chronic-care facilities who have contact with patients or residents; and students in these professions who will have contact with patients (339,340,349).
Facilities that employ HCP should provide vaccine to workers by using approaches that have been demonstrated to be effective in increasing vaccination coverage. Health-care administrators should consider the level of vaccination coverage among HCP to be one measure of a patient safety quality program and consider obtaining signed declinations from personnel who decline influenza vaccination for reasons other than medical contraindications (340). Influenza vaccination rates among HCP within facilities should be regularly measured and reported, and ward-, unit-, and specialty-specific coverage rates should be provided to staff and administration (340). Studies have demonstrated that organized campaigns can attain higher rates of vaccination among HCP with moderate effort and by using strategies that increase vaccine acceptance (338,340,350).
Efforts to increase vaccination coverage among HCP are supported by various national accrediting and professional organizations and, in certain states, by statute. The Joint Commission on Accreditation of Health-Care Organizations has approved an infection-control standard that requires accredited organizations to offer influenza vaccinations to staff, including volunteers and licensed independent practitioners with close patient contact. The standard became an accreditation requirement beginning January 1, 2007 (351). In addition, the Infectious Diseases Society of America has recommended mandatory vaccination for HCP, with a provision for declination of vaccination based on religious or medical reasons (352). Fifteen states have regulations regarding vaccination of HCP in long-term-care facilities (353), six states require that health-care facilities offer influenza vaccination to HCP, and four states require that HCP either receive influenza vaccination or indicate a religious, medical, or philosophical reason for not being vaccinated (354,355).
# Close Contacts of Immunocompromised Persons
Immunocompromised persons are at risk for influenza complications but might have insufficient responses to vaccination. Close contacts of immunocompromised persons, including HCP, should be vaccinated to reduce the risk for influenza transmission. TIV is preferred for vaccinating household members, HCP, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment (typically defined as a specialized patient-care area with a positive airflow relative to the corridor, high-efficiency particulate air filtration, and frequent air changes) (340,356).
LAIV transmission from a recently vaccinated person causing clinically important illness in an immunocompromised contact has not been reported. The rationale for avoiding use of LAIV among HCP or other close contacts of severely immunocompromised patients is the theoretical risk that a live, attenuated vaccine virus could be transmitted to the severely immunosuppressed person. As a precautionary measure, HCP who receive LAIV should avoid providing care for severely immunosuppressed patients for 7 days after vaccination. Hospital visitors who have received LAIV should avoid contact with severely immunosuppressed persons in protected environments for 7 days after vaccination but should not be restricted from visiting less severely immunosuppressed patients.
No preference is indicated for TIV use by persons who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with diabetes, persons with asthma who take corticosteroids, persons who have recently received chemotherapy or radiation but who are not being cared for in a protective environment as defined above, or persons infected with HIV) or for TIV use by HCP or other healthy nonpregnant persons aged 2-49 years in close contact with persons in all other groups at high risk.
# Pregnant Women
Pregnant women are at risk for influenza complications, and all women who are pregnant or will be pregnant during influenza season should be vaccinated. The American College of Obstetricians and Gynecologists and the American Academy of Family Physicians also have recommended routine vaccination of all pregnant women (357). No preference is indicated for use of TIV that does not contain thimerosal as a preservative (see Vaccine Preservative in Multidose Vials of TIV) for any group recommended for vaccination, including pregnant women. LAIV is not licensed for use in pregnant women. However, pregnant women do not need to avoid contact with persons recently vaccinated with LAIV.
# Breastfeeding Mothers
Vaccination is recommended for all persons, including breastfeeding women, who are contacts of infants or children aged <59 months (i.e., <5 years), because infants and young children are at high risk for influenza complications and are more likely to require medical care or hospitalization if infected. Breastfeeding does not affect the immune response adversely and is not a contraindication for vaccination (197). Women who are breastfeeding can receive either TIV or LAIV unless contraindicated because of other medical conditions.
# Travelers
The risk for exposure to influenza during travel depends on the time of year and destination. In the temperate regions of the Southern Hemisphere, influenza activity typically occurs during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large tourist groups (e.g., on cruise ships) that include persons from areas of the world in which influenza viruses are circulating (358,359). In the tropics, influenza occurs throughout the year. In a study among Swiss travelers to tropical and subtropical countries, influenza was the most frequently acquired vaccine-preventable disease (360).
Any traveler who wants to reduce the risk for influenza infection should consider influenza vaccination, preferably at least 2 weeks before departure. In particular, persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to
- travel to the tropics,
- travel with organized tourist groups at any time of year, or
- travel to the Southern Hemisphere during April-September.

No information is available about the benefits of revaccinating persons before summer travel who already were vaccinated during the preceding fall. Persons at high risk who receive the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons at higher risk for influenza complications should consult with their health-care practitioner to discuss the risk for influenza or other travel-related diseases before embarking on travel during the summer.
# General Population
Vaccination is recommended for any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected. Healthy, nonpregnant persons aged 2-49 years might choose to receive either TIV or LAIV. All other persons aged >6 months should receive TIV. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories or correctional facilities) should be encouraged to receive vaccine to minimize morbidity and the disruption of routine activities during epidemics (361,362).
# Recommended Vaccines for Different Age Groups
When vaccinating children aged 6-35 months with TIV, health-care providers should use TIV that has been licensed by the FDA for this age group (i.e., TIV manufactured by Sanofi Pasteur [Fluzone]). TIV from Novartis (Fluvirin) is FDA-approved in the United States for use among persons aged >4 years. TIV from GlaxoSmithKline (Fluarix and FluLaval) or CSL Biotherapies (Afluria) is labeled for use in persons aged >18 years because data to demonstrate efficacy among younger persons have not been provided to FDA. LAIV from MedImmune (FluMist) is licensed for use by healthy nonpregnant persons aged 2-49 years (Table 1). A vaccine dose does not need to be repeated if inadvertently administered to a person who does not have an age indication for the vaccine formulation given. Expanded age and risk group indications for licensed vaccines are likely over the next several years, and vaccination providers should be alert to these changes. In addition, several new vaccine formulations are being evaluated in immunogenicity and efficacy trials; when licensed, these new products will increase the influenza vaccine supply and provide additional vaccine choices for practitioners and their patients.
# Influenza Vaccines and Use of Influenza Antiviral Medications
Administration of TIV and influenza antivirals during the same medical visit is acceptable. The effect on safety and effectiveness of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV. Persons receiving antivirals within the period 2 days before to 14 days after vaccination with LAIV should be revaccinated at a later date (197,252).
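As an illustrative aid only (not part of the recommendations themselves), the following Python sketch encodes the interval rule described above; the function and variable names are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical helper (illustration only): flag a LAIV dose for repeat
# vaccination if any day of antiviral use falls within the window from
# 2 days before to 14 days after the dose, per the interval above.
def laiv_dose_needs_repeat(laiv_date, antiviral_days):
    window_start = laiv_date - timedelta(days=2)
    window_end = laiv_date + timedelta(days=14)
    return any(window_start <= day <= window_end for day in antiviral_days)

# Antiviral therapy taken 5 days after LAIV: dose should be repeated later.
print(laiv_dose_needs_repeat(date(2008, 10, 1), [date(2008, 10, 6)]))  # True
```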
# Persons Who Should Not Be Vaccinated with TIV
TIV should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine. Prophylactic use of antiviral agents is an option for preventing influenza among such persons. Information about vaccine components is located in package inserts from each manufacturer. Persons with moderate to severe acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate use of influenza vaccine. GBS within 6 weeks following a previous dose of TIV is considered to be a precaution for use of TIV.
# Considerations When Using LAIV
LAIV is an option for vaccination of healthy, nonpregnant persons aged 2-49 years, including HCP and other close contacts of high-risk persons (excepting severely immunocompromised persons who require care in a protected environment). No preference is indicated for LAIV or TIV when considering vaccination of healthy, nonpregnant persons aged 2-49 years. Use of the term "healthy" in this recommendation refers to persons who do not have any of the underlying medical conditions that confer high risk for severe complications (see Persons Who Should Not Be Vaccinated with LAIV). However, during periods when inactivated vaccine is in short supply, use of LAIV is encouraged when feasible for eligible persons (including HCP) because use of LAIV by these persons might increase availability of TIV for persons in groups targeted for vaccination, but who cannot receive LAIV. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response in children, its ease of administration, and the possibly increased acceptability of an intranasal rather than intramuscular route of administration.
If the vaccine recipient sneezes after administration, the dose should not be repeated. However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness, or TIV should be administered instead. No data exist about concomitant use of nasal corticosteroids or other intranasal medications (252).
Although FDA licensure of LAIV excludes children aged 2-4 years with a history of asthma or recurrent wheezing, the precise risk, if any, of wheezing caused by LAIV among these children is unknown because experience with LAIV among these young children is limited. Young children might not have a history of recurrent wheezing if their exposure to respiratory viruses has been limited because of their age. Certain children might have a history of wheezing with respiratory illnesses but have not had asthma diagnosed. The following screening recommendations should be used to assist persons who administer influenza vaccines in providing the appropriate vaccine for children aged 2-4 years.
Clinicians and vaccination programs should screen for possible reactive airways diseases when considering use of LAIV for children aged 2-4 years, and should avoid use of this vaccine in children with asthma or a recent wheezing episode. Health-care providers should consult the medical record, when available, to identify children aged 2-4 years with asthma or recurrent wheezing that might indicate asthma. In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record during the preceding 12 months should not receive LAIV. TIV is available for use in children with asthma or possible reactive airways diseases (363).
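The screening rule above can be summarized as a simple decision sketch. This illustration is not part of the recommendations; the names are hypothetical, and clinical judgment and the full medical record always govern.

```python
# Hypothetical screening sketch (illustration only) of the decision rule above.
def vaccine_option_for_child_aged_2_to_4(parent_reports_wheeze_or_asthma,
                                         record_notes_wheeze_past_12_months,
                                         has_asthma):
    """Return the vaccine option suggested by the screening rule above."""
    if (parent_reports_wheeze_or_asthma
            or record_notes_wheeze_past_12_months
            or has_asthma):
        return "TIV (avoid LAIV)"
    return "LAIV or TIV"

print(vaccine_option_for_child_aged_2_to_4(True, False, False))  # TIV (avoid LAIV)
```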
LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness.
# Persons Who Should Not Be Vaccinated with LAIV
The effectiveness or safety of LAIV is not known for the following groups, and these persons should not be vaccinated with LAIV:
- persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs;
- persons aged >50 years;
- persons with any of the underlying medical conditions that serve as an indication for routine influenza vaccination, including asthma, reactive airways disease, or other chronic disorders of the pulmonary or cardiovascular systems; other underlying medical conditions, including such metabolic diseases as diabetes, renal dysfunction, and hemoglobinopathies; or known or suspected immunodeficiency diseases or immunosuppressed states;
- children aged 2-4 years whose parents or caregivers report that a health-care provider has told them during the preceding 12 months that their child had wheezing or asthma, or whose medical record indicates a wheezing episode has occurred during the preceding 12 months;
- children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza virus infection);
- persons with a history of GBS after influenza vaccination; or
- pregnant women.
# Personnel Who Can Administer LAIV
Low-level introduction of vaccine viruses into the environment probably is unavoidable when administering LAIV. The risk for acquiring vaccine viruses from the environment is unknown but is probably low. Severely immunosuppressed persons should not administer LAIV. However, other persons at higher risk for influenza complications can administer LAIV, including pregnant women, persons with asthma, and persons aged >50 years.
# Concurrent Administration of Influenza Vaccine with Other Vaccines
Use of LAIV concurrently with measles, mumps, rubella (MMR) alone and MMR and varicella vaccine among children aged 12-15 months has been studied, and no interference with the immunogenicity to antigens in any of the vaccines was observed (252,364). Among adults aged >50 years, the safety and immunogenicity of zoster vaccine and TIV was similar whether administered simultaneously or spaced 4 weeks apart (365). In the absence of specific data indicating interference, following ACIP's general recommendations for vaccination is prudent (197). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. Inactivated or live vaccines can be administered simultaneously with LAIV. However, after administration of a live vaccine, at least 4 weeks should pass before another live vaccine is administered.
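The spacing rule above can be stated as a simple check. The sketch below is an editorial illustration only, assuming same-day administration is acceptable and otherwise a minimum of 4 weeks (28 days) between live vaccines; the names are hypothetical.

```python
from datetime import date

# Hypothetical check (illustration only) of the live-vaccine spacing rule
# above: live vaccines given on the same day are acceptable; otherwise at
# least 4 weeks should separate them. Inactivated vaccines are unaffected.
def live_vaccine_spacing_ok(first_live_dose, second_live_dose):
    days_apart = abs((second_live_dose - first_live_dose).days)
    return days_apart == 0 or days_apart >= 28

print(live_vaccine_spacing_ok(date(2008, 10, 1), date(2008, 10, 15)))  # False
print(live_vaccine_spacing_ok(date(2008, 10, 1), date(2008, 10, 29)))  # True
```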
# Recommendations for Vaccination Administration and Vaccination Programs
Although influenza vaccination levels increased substantially during the 1990s, little progress has been made toward achieving national health objectives, and further improvements in vaccine coverage levels are needed. Strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (325,366,367), should be implemented whenever feasible. Vaccination coverage can be increased by administering vaccine before and during the influenza season to persons during hospitalizations or routine health-care visits. Vaccinations can be provided in alternative settings (e.g., pharmacies, grocery stores, workplaces, or other locations in the community), thereby making special visits to physicians' offices or clinics unnecessary. Coordinated campaigns such as the National Influenza Vaccination Week (December 8-14, 2008) provide opportunities to refocus public attention on the benefits, safety, and availability of influenza vaccination throughout the influenza season. When educating patients about potential adverse events, clinicians should emphasize that 1) TIV contains noninfectious killed viruses and cannot cause influenza, 2) LAIV contains weakened influenza viruses that cannot replicate outside the upper respiratory tract and are unlikely to infect others, and 3) concomitant symptoms or respiratory disease unrelated to vaccination with either TIV or LAIV can occur after vaccination.
# Information About the Vaccines for Children Program
The Vaccines for Children (VFC) program supplies vaccine to all states, territories, and the District of Columbia for use by participating providers. These vaccines are to be provided to eligible children without vaccine cost to the patient or the provider, although the provider might charge a vaccine administration fee. All routine childhood vaccines recommended by ACIP are available through this program, including influenza vaccines. The program saves parents and providers out-of-pocket expenses for vaccine purchases and provides cost savings to states through CDC's vaccine contracts. The program results in lower vaccine prices and ensures that all states pay the same contract prices. Detailed information about the VFC program is available at http://www.cdc.gov/vaccines/programs/vfc/default.htm.
# Influenza Vaccine Supply Considerations
The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. During the 2007-08 influenza season, 113 million doses of influenza vaccine were distributed in the United States. Total production of influenza vaccine for the United States is anticipated to be >130 million doses for the 2008-09 season, depending on demand and production yields. However, influenza vaccine distribution delays or vaccine shortages remain possible in part because of the inherent critical time constraints in manufacturing the vaccine given the annual updating of the influenza vaccine strains and various other manufacturing and regulatory issues. To ensure optimal use of available doses of influenza vaccine, health-care providers, those planning organized campaigns, and state and local public health agencies should develop plans for expanding outreach and infrastructure to vaccinate more persons in targeted groups and others who wish to reduce their risk for influenza, and develop contingency plans for the timing and prioritization of administering influenza vaccine if the supply of vaccine is delayed or reduced.
If supplies of TIV are not adequate, vaccination should be carried out in accordance with local circumstances of supply and demand based on the judgment of state and local health officials and health-care providers. Guidance for tiered use of TIV during prolonged distribution delays or supply shortfalls is available at http://www.cdc.gov/flu/professionals/vaccination/vax_priority.htm and will be modified as needed in the event of a shortage. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will inform both providers and the general public if any indication exists of a substantial delay or an inadequate supply.
Because LAIV is recommended only for use in healthy nonpregnant persons aged 2-49 years, no recommendations for prioritization of LAIV use are made; either LAIV or TIV can be used when considering vaccination of healthy, nonpregnant persons aged 2-49 years. However, during shortages of TIV, LAIV should be used preferentially when feasible for all healthy nonpregnant persons aged 2-49 years (including HCP) who desire or are recommended for vaccination to increase the availability of inactivated vaccine for persons at high risk.
# Timing of Vaccination
Vaccination efforts should be structured to ensure the vaccination of as many persons as possible over the course of several months, with emphasis on vaccinating before influenza activity in the community begins. Even if vaccine distribution begins before October, distribution probably will not be completed until December or January. The following recommendations reflect this phased distribution of vaccine.
In any given year, the optimal time to vaccinate patients cannot be precisely determined because influenza seasons vary in their timing and duration, and more than one outbreak might occur in a single community in a single year. In the United States, localized outbreaks that indicate the start of seasonal influenza activity can occur as early as October. However, in >80% of influenza seasons since 1976, peak influenza activity (which is often close to the midpoint of influenza activity for the season) has not occurred until January or later, and in >60% of seasons, the peak was in February or later (Figure 1). In general, health-care providers should begin offering vaccination soon after vaccine becomes available and if possible by October. To avoid missed opportunities for vaccination, providers should offer vaccination during routine health-care visits or during hospitalizations whenever vaccine is available.
Vaccination efforts should continue throughout the season, because the duration of the influenza season varies, and influenza might not appear in certain communities until February or March. Providers should offer influenza vaccine routinely, and organized vaccination campaigns should continue throughout the influenza season, including after influenza activity has begun in the community. Vaccine administered in December or later, even if influenza activity has already begun, is likely to be beneficial in the majority of influenza seasons. The majority of adults have antibody protection against influenza virus infection within 2 weeks after vaccination (368,369).
All children aged 6 months-8 years who have not received vaccination against influenza previously should receive their first dose as soon after vaccine becomes available as is feasible. This practice increases the opportunity for both doses to be administered before or shortly after the onset of influenza activity.
Persons and institutions planning substantial organized vaccination campaigns (e.g., health departments, occupational health clinics, and community vaccinators) should consider scheduling these events after at least mid-October because the availability of vaccine in any location cannot be ensured consistently in early fall. Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable. These vaccination clinics should be scheduled through December, and later if feasible, with attention to settings that serve children aged 6-59 months, pregnant women, other persons aged >50 years, HCP, and persons who are household contacts of children aged <59 months or other persons at high risk. Planners are encouraged to develop the capacity and flexibility to schedule at least one vaccination clinic in December. Guidelines for planning large-scale vaccination clinics are available at http://www.cdc.gov/flu/professionals/vaccination/vax_clinic.htm.
During a vaccine shortage or delay, substantial proportions of TIV doses may not be released and distributed until November and December or later. When the vaccine is substantially delayed or disease activity has not subsided, providers should consider offering vaccination clinics into January and beyond as long as vaccine supplies are available. Campaigns using LAIV also can extend into January and beyond.
# Strategies for Implementing Vaccination Recommendations in Health-Care Settings
Successful vaccination programs combine publicity and education for HCP and other potential vaccine recipients, a plan for identifying persons recommended for vaccination, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (367,370). The use of standing orders programs by long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies ensures that vaccination is offered. Standing orders programs for influenza vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by HCP trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events. CMS has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (371). To the extent allowed by local and state law, these facilities and agencies can implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Payment for influenza vaccine under Medicare Part B is available (372,373). Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs (374). In addition, physician reminders (e.g., flagging charts) and patient reminders are recognized strategies for increasing rates of influenza vaccination. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.
# Outpatient Facilities Providing Ongoing Care
Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.
# Outpatient Facilities Providing Episodic or Acute Care
Acute health-care facilities (e.g., emergency departments and walk-in clinics) should offer vaccinations throughout the influenza season to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.
# Nursing Homes and Other Residential Long-Term-Care Facilities
Vaccination should be provided routinely to all residents of chronic-care facilities. If possible, all residents should be vaccinated at one time before influenza season. In the majority of seasons, TIV will become available to long-term-care facilities in October or November, and vaccination should commence as soon as vaccine is available. As soon as possible after admission to the facility, the benefits and risks of vaccination should be discussed and education materials provided. Signed consent is not required (375). Residents admitted after completion of the vaccination program at the facility should be vaccinated at the time of admission through March.
Since October 2005, the Centers for Medicare and Medicaid Services (CMS) has required nursing homes participating in the Medicare and Medicaid programs to offer all residents influenza and pneumococcal vaccines and to document the results. According to the requirements, each resident is to be vaccinated unless contraindicated medically, the resident or a legal representative refuses vaccination, or the vaccine is not available because of shortage. This information is to be reported as part of the CMS Minimum Data Set, which tracks nursing home health parameters (372,376).
# Acute-Care Hospitals
Hospitals should serve as a key setting for identifying persons at increased risk for influenza complications. Unvaccinated persons of all ages (including children) with high-risk conditions and persons aged 6 months-18 years or >50 years who are hospitalized at any time during the period when vaccine is available should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Standing orders to offer influenza vaccination to all hospitalized persons should be considered.
# Visiting Nurses and Others Providing Home Care to Persons at High Risk
Nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary, as soon as influenza vaccine is available and throughout the influenza season. Caregivers and other persons in the household (including children) should be referred for vaccination.
# Other Facilities Providing Services to Persons Aged >50 Years
Facilities providing services to persons aged >50 years (e.g., assisted living housing, retirement communities, and recreation centers) should offer unvaccinated residents, attendees, and staff annual on-site vaccination before the start of the influenza season. Continuing to offer vaccination throughout the fall and winter months is appropriate. Efforts to vaccinate newly admitted patients or new employees also should be continued, both to prevent illness and to avoid having these persons serve as a source of new influenza infections. Staff education should emphasize the need for influenza vaccine.
# Health-Care Personnel
Health-care facilities should offer influenza vaccinations to all HCP, including night, weekend, and temporary staff. Particular emphasis should be placed on providing vaccinations to workers who provide direct care for persons at high risk for influenza complications. Efforts should be made to educate HCP regarding the benefits of vaccination and the potential health consequences of influenza illness for their patients, themselves, and their family members. All HCP should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (340,350,351).
# Future Directions for Research and Recommendations Related to Influenza Vaccine
Although available influenza vaccines are effective and safe, additional research is needed to improve prevention efforts. Most mortality from influenza occurs among persons aged >65 years (6), and more immunogenic influenza vaccines are needed for this age group and other groups at high risk for mortality. Additional research is also needed to understand potential biases in estimating the benefits of vaccination among older adults in reducing hospitalizations and deaths (101,193,377). Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and adults, especially those aged <65 years, are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, hospitalization costs and rates, and vaccine effectiveness when evaluating the long-term costs and benefits of annual vaccination (378). Additional data on indirect effects of vaccination are also needed to quantify the benefits of influenza vaccination of HCP in protecting their patients (294) and the benefits of vaccinating children to reduce influenza complications among those at risk. Because expansions in ACIP recommendations for vaccination will lead to more persons being vaccinated, much larger research networks are needed that can identify and assess the causality of very rare events that occur after vaccination, including GBS. These research networks could also provide a platform for effectiveness and safety studies in the event of a pandemic. Research on potential biologic or genetic risk factors for GBS also is needed. In addition, a better understanding of how to motivate persons at risk to seek annual influenza vaccination is needed.
ACIP continues to review new vaccination strategies to protect against influenza, including the possibility of expanding routine influenza vaccination recommendations toward universal vaccination or other approaches that will help reduce or prevent the transmission of influenza and reduce the burden of severe disease (379-384). The expansion of annual vaccination recommendations to include all children aged 6 months-18 years will require a substantial increase in resources for epidemiologic research to develop long-term studies capable of assessing the possible effects on community-level transmission. Additional planning to improve surveillance systems capable of monitoring effectiveness, safety, and vaccine coverage, and further development of implementation strategies will also be necessary. In addition, as noted by the National Vaccine Advisory Committee, strengthening the U.S. influenza vaccination system will require improving vaccine financing and demand and implementing systems to help better understand the burden of influenza in the United States (385). Vaccination programs capable of delivering annual influenza vaccination to a broad range of the population could potentially serve as a resilient and sustainable platform for delivering vaccines and monitoring outcomes for other urgently required public health interventions (e.g., vaccines for pandemic influenza or medications to prevent or treat illnesses caused by acts of terrorism).
# Seasonal Influenza Vaccine and Avian or Swine Influenza
Human infection with novel or nonhuman influenza A virus strains, including influenza A viruses of animal origin, is a nationally notifiable disease (386). Human infections with nonhuman or novel human influenza A viruses should be identified quickly and investigated to determine possible sources of exposure, identify additional cases, and evaluate the possibility of human-to-human transmission because transmission patterns could change over time with variations in these influenza A viruses.
Sporadic severe and fatal human cases of infection with highly pathogenic avian influenza A(H5N1) viruses have been identified in Asia, Africa, Europe, and the Middle East, primarily among persons who have had direct or close unprotected contact with sick or dead birds associated with the ongoing H5N1 panzootic among birds (387-392). Limited, nonsustained human-to-human transmission of H5N1 viruses has likely occurred in some case clusters (393,394). To date, no evidence exists of genetic reassortment between human influenza A and H5N1 viruses. However, influenza viruses derived from strains circulating among poultry (e.g., the H5N1 viruses that have caused outbreaks of avian influenza and occasionally have infected humans) have the potential to recombine with human influenza A viruses (395,396). To date, highly pathogenic H5N1 viruses have not been identified in wild or domestic birds or in humans in the United States.
Human illness from infection with other avian influenza A virus subtypes also has been documented, including infections with low pathogenic and highly pathogenic viruses. A range of clinical illness has been reported for human infection with low pathogenic avian influenza viruses, including conjunctivitis with influenza A(H7N7) virus in the U.K., lower respiratory tract disease and conjunctivitis with influenza A(H7N2) virus in the U.K., and uncomplicated influenza-like illness with influenza A(H9N2) virus in Hong Kong and China (397-402). Two human cases of infection with low pathogenic influenza A(H7N2) were reported in the United States (400). Although persons infected with highly pathogenic A(H7N7) virus typically have influenza-like illness or conjunctivitis, severe infections, including one fatal case in the Netherlands, have been reported (403,404). Conjunctivitis also has been reported from human infection with highly pathogenic influenza A(H7N3) virus in Canada and low pathogenic A(H7N3) virus in the U.K. (397,404). In contrast, sporadic infections with highly pathogenic avian influenza A(H5N1) viruses have caused severe illness in many countries, with an overall case-fatality ratio of >60% (394,405).
Swine influenza A(H1N1), A(H1N2), and A(H3N2) viruses are endemic among pig populations in the United States (406), including reassortant viruses. Two clusters of influenza A(H2N3) virus infections among pigs have been recently reported (407). Outbreaks among pigs normally occur in colder weather months (late fall and winter) and sometimes with the introduction of new pigs into susceptible herds. An estimated 30% of the pig population in the United States has serologic evidence of having had swine influenza A(H1N1) virus infection. Sporadic human infections with swine influenza A viruses occur in the United States, but the frequency of these human infections is unknown. Persons infected with swine influenza A viruses typically report direct contact with ill pigs or places where pigs have been present (e.g., agricultural fairs or farms), and have symptoms that are clinically indistinguishable from infection with other respiratory viruses (408). Clinicians should consider swine influenza A virus infection in the differential diagnosis of patients with ILI who have had recent contact with pigs. The sporadic cases identified in recent years have not resulted in sustained human-to-human transmission of swine influenza A viruses or community outbreaks. Although immunity to swine influenza A viruses appears to be low in the overall human population (<2%), 10%-20% of persons occupationally exposed to pigs (e.g., pig farmers or pig veterinarians) have been documented in certain studies to have antibody evidence of prior swine influenza A(H1N1) virus infection (409,410).
Current seasonal influenza vaccines are not expected to provide protection against human infection with avian influenza A viruses, including H5N1 viruses, or to provide protection against currently circulating swine influenza A viruses. However, reducing seasonal influenza risk through influenza vaccination of persons who might be exposed to nonhuman influenza viruses (e.g., H5N1 viruses) might reduce the theoretical risk for recombination of influenza A viruses of animal origin and human influenza A viruses by preventing seasonal influenza A virus infection within a human host.
CDC has recommended that persons who are charged with responding to avian influenza outbreaks among poultry receive seasonal influenza vaccination (411). As part of preparedness activities, the Occupational Safety and Health Administration (OSHA) has issued an advisory notice regarding poultry worker safety that is intended for implementation in the event of a suspected or confirmed avian influenza outbreak at a poultry facility in the United States. OSHA guidelines recommend that poultry workers in an involved facility receive vaccination against seasonal influenza; OSHA also has recommended that HCP involved in the care of patients with documented or suspected avian influenza should be vaccinated with the most recent seasonal human influenza vaccine to reduce the risk for co-infection with human influenza A viruses (412).
# Recommendations for Using Antiviral Agents for Seasonal Influenza
Annual vaccination is the primary strategy for preventing complications of influenza virus infections. Antiviral medications with activity against influenza viruses are useful adjuncts in the prevention of influenza, and effective when used early in the course of illness for treatment. Four influenza antiviral agents are licensed in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Influenza A virus resistance to amantadine and rimantadine can emerge rapidly during treatment. Because antiviral testing results indicated high levels of resistance (413-416), neither amantadine nor rimantadine should be used for the treatment or chemoprophylaxis of influenza A in the United States during the 2007-08 influenza season. Surveillance demonstrating that susceptibility to these antiviral medications has been reestablished among circulating influenza A viruses will be needed before amantadine or rimantadine can be used for the treatment or chemoprophylaxis of influenza A. Oseltamivir or zanamivir can be prescribed if antiviral chemoprophylaxis or treatment of influenza is indicated. Oseltamivir is licensed for treatment of influenza in persons aged >1 year, and zanamivir is licensed for treating influenza in persons aged >7 years. Oseltamivir and zanamivir can be used for chemoprophylaxis of influenza; oseltamivir is licensed for use as chemoprophylaxis in persons aged >1 year, and zanamivir is licensed for use in persons aged >5 years.
During the 2007-08 influenza season, influenza A (H1N1) viruses with a mutation that confers resistance to oseltamivir were identified in the United States and other countries. As of June 27, 2008, in the United States, 111 (7.6%) of 1,464 influenza A viruses tested, and none of 305 influenza B viruses tested, were found to be resistant to oseltamivir. All of the resistant viruses identified in the United States and elsewhere are influenza A (H1N1) viruses. Of 1,020 influenza A (H1N1) viruses isolated from patients in the United States, 111 (10.9%) exhibited a specific genetic mutation that confers oseltamivir resistance (417). Influenza A (H1N1) virus strains that are resistant to oseltamivir remain sensitive to zanamivir. Neuraminidase inhibitor medications continue to be the recommended agents for treatment and chemoprophylaxis of influenza in the United States. However, clinicians should be alert to changes in antiviral recommendations that might occur as additional antiviral resistance data become available during the 2008-09 influenza season (http://www.cdc.gov/flu/professionals/antivirals/index.htm).
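The two percentages above share a numerator but use different denominators, which the following arithmetic check (an editorial illustration using only the figures cited in the text) makes explicit.

```python
# Arithmetic check of the surveillance proportions cited above: the same
# 111 resistant viruses represent 7.6% of all 1,464 influenza A viruses
# tested but 10.9% of the 1,020 A (H1N1) viruses, the only subtype in
# which resistance was found.
resistant = 111
print(f"{resistant / 1464:.1%} of influenza A viruses tested")         # 7.6%
print(f"{resistant / 1020:.1%} of influenza A (H1N1) viruses tested")  # 10.9%
```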
# Role of Laboratory Diagnosis
Influenza surveillance information and diagnostic testing can aid clinical judgment and help guide treatment decisions. However, only 69% of practitioners in one recent survey indicated that they test patients for influenza during the influenza season (418). The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza (26,39,40) (see Clinical Signs and Symptoms of Influenza).
Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, reverse transcriptase-polymerase chain reaction (RT-PCR), and immunofluorescence assays (419). As with any diagnostic test, results should be evaluated in the context of other clinical and epidemiologic information available to health-care providers. Sensitivity and specificity of any test for influenza can vary by the laboratory that performs the test, the type of test used, the type of specimen tested, the quality of the specimen, and the timing of specimen collection in relation to illness onset. Among respiratory specimens for viral isolation or rapid detection of influenza viruses, nasopharyngeal and nasal specimens have higher yields than throat swab specimens (420). In addition, positive influenza tests have been reported up to 7 days after receipt of LAIV (421).
Commercial rapid diagnostic tests are available that can detect influenza viruses within 30 minutes (422,423). Certain tests are licensed for use in any outpatient setting, whereas others must be used in a moderately complex clinical laboratory. These rapid tests differ in the types of influenza viruses they can detect and whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses, but not distinguish between the two types; or 3) both influenza A and B viruses and distinguish between the two. None of the rapid influenza diagnostic tests specifically identifies any influenza A virus subtypes.
The types of specimens acceptable for use (i.e., throat, nasopharyngeal, or nasal aspirates, swabs, or washes) also vary by test, but all perform best when collected as close to illness onset as possible. The specificity and, in particular, the sensitivity of rapid tests are lower than for viral culture and vary by test (419,422-424). Rapid tests for influenza have high specificity (>90%) but are only moderately sensitive (<70%). A recent study found sensitivity to be as low as 42% in clinical practice (425). Rapid tests appear to have higher sensitivity when used in young children compared with adults, possibly because young children with influenza typically shed higher concentrations of influenza viruses than adults (426). Because RT-PCR is more sensitive than viral culture for detecting influenza virus infection, rapid tests appear even less sensitive when compared with RT-PCR than when compared with viral culture.
The limitations of rapid diagnostic tests must be understood in order to properly interpret results. Positive rapid influenza test results are generally reliable when community influenza activity is high and are useful in deciding whether to initiate antiviral treatment. Negative rapid test results are less helpful in making treatment decisions for individual patients when influenza activity in a community is high. Because of the lower sensitivity of the rapid tests, physicians should consider confirming negative tests with viral culture or other means because of the possibility of false-negative rapid test results, especially during periods of peak community influenza activity. The positive predictive value of rapid tests will be lower during periods of low influenza activity, and clinicians should consider the positive and negative predictive values of the test in the context of the level of influenza activity in their community when interpreting results (424). When local influenza activity is high, persons with severe respiratory symptoms or persons with acute respiratory illness who are at higher risk for influenza complications should still be considered for influenza antiviral treatment despite a negative rapid influenza test unless illness can be attributed to another cause. However, because certain bacterial infections can produce symptoms similar to influenza, if bacterial infections are suspected, they should be considered and treated appropriately. In addition, secondary invasive bacterial infections can be a severe complication of influenza. Package inserts and the laboratory performing the test should be consulted for more details regarding use of rapid diagnostic tests. Additional updated information concerning diagnostic testing is available at http://www.cdc.gov/flu/professionals/diagnosis.htm.
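The dependence of predictive values on community influenza activity follows directly from Bayes' rule. The sketch below is an editorial illustration; the sensitivity and specificity values are assumptions chosen within the ranges cited above, not the characteristics of any specific licensed test.

```python
# Illustrative sketch of how predictive values depend on influenza activity.
# Sensitivity (0.60) and specificity (0.95) are assumed values for
# illustration only.
def predictive_values(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

for prevalence in (0.02, 0.30):  # low vs. high community influenza activity
    ppv, npv = predictive_values(0.60, 0.95, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
# prevalence 2%: PPV 20%, NPV 99%   (positives unreliable when activity is low)
# prevalence 30%: PPV 84%, NPV 85%  (negatives less reliable when activity is high)
```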
Despite the availability of rapid diagnostic tests, clinical specimens collected in virus surveillance systems for viral culture are critical for surveillance purposes. Only culture isolates of influenza viruses can provide specific information regarding circulating strains and subtypes of influenza viruses and data on antiviral resistance. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor antiviral resistance and the emergence of novel human influenza A virus subtypes that might pose a pandemic threat. Influenza surveillance by state and local health departments and CDC can provide information regarding the circulation of influenza viruses in the community, which can help inform decisions about the likelihood that a compatible clinical syndrome is indeed influenza.
# Antiviral Agents for Influenza
Zanamivir and oseltamivir are chemically related antiviral medications known as neuraminidase inhibitors that have activity against both influenza A and B viruses. The two medications differ in pharmacokinetics, adverse events, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary adverse events of these medications is presented in the following sections. Package inserts should be consulted for additional information. Detailed information about amantadine and rimantadine (adamantanes) is available in previous ACIP influenza recommendations (427).
# Indications for Use of Antivirals

# Treatment
Initiation of antiviral treatment within 2 days of illness onset is recommended; the benefit of treatment is greater the sooner treatment is started after illness onset. Certain persons have a high priority for treatment (Box 3); however, treatment does not need to be limited to these persons. In clinical trials conducted in outpatient settings, the benefit of antiviral treatment for uncomplicated influenza was minimal unless treatment was initiated within 48 hours after illness onset. However, no data are available on the benefit for severe influenza when antiviral treatment is initiated >2 days after illness onset. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days.
Evidence for the efficacy of these antiviral drugs is based primarily on studies of outpatients with uncomplicated influenza. When administered within 2 days of illness onset to otherwise healthy children or adults, zanamivir or oseltamivir can reduce the duration of uncomplicated influenza A and B illness by approximately 1 day compared with placebo (143,428-442). Minimal or no benefit was reported when antiviral treatment was initiated >2 days after onset of uncomplicated influenza. Data on whether viral shedding is reduced are inconsistent. The duration of viral shedding was reduced in one study that employed experimental infection; however, other studies have not demonstrated reduction in the duration of viral shedding. A recent review that examined the effect of neuraminidase inhibitors on reducing ILI concluded that neuraminidase inhibitors were not effective in the control of seasonal influenza (443). However, lower or no effectiveness would be expected when a nonspecific clinical endpoint such as ILI is used rather than laboratory-confirmed influenza (444).
Data are limited about the effectiveness of zanamivir and oseltamivir in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases), or for preventing influenza among persons at high risk for serious complications of influenza. In a study that combined data from 10 clinical trials, the risk for pneumonia among those participants with laboratory-confirmed influenza receiving oseltamivir was approximately 50% lower than among those persons receiving a placebo and 34% lower among patients at risk for complications (p<0.05 for both comparisons) (445). Although a similar significant reduction also was determined for hospital admissions among the overall group, the 50% reduction in hospitalizations reported in the small subset of high-risk participants was not statistically significant. One randomized controlled trial documented a decreased incidence of otitis media among children treated with oseltamivir (437). Another randomized controlled study conducted among influenza-infected children with asthma demonstrated significantly greater improvement in lung function and fewer asthma exacerbations among oseltamivir-treated children compared with those who received placebo but did not determine a difference in symptom duration (446). Inadequate data exist regarding the efficacy of any of the influenza antiviral drugs for use among children aged <1 year, and none are FDA-licensed for use in this age group.
Two observational studies suggest that oseltamivir reduces severe clinical outcomes in patients hospitalized with influenza. A large prospective observational study assessed clinical outcomes among 327 hospitalized adults with laboratory-confirmed influenza whose health-care provider chose to use oseltamivir treatment compared with untreated influenza patients. The average age of adults in this study was 77 years, and 71% began treatment >48 hours after illness onset. In the multivariate analysis, oseltamivir treatment was associated with a significantly decreased risk for death within 15 days of hospitalization (odds ratio = 0.21; CI = 0.06-0.80). Benefit was observed even among those starting treatment >48 hours after illness onset (437,448-451). However, an observational study among Japanese children with culture-confirmed influenza and treated with oseltamivir demonstrated that children with influenza A virus infection resolved fever and stopped shedding virus more quickly than children with influenza B, suggesting that oseltamivir might be less effective for the treatment of influenza B (452).
The Infectious Diseases Society of America and the American Thoracic Society have recommended that persons with community-acquired pneumonia and laboratory-confirmed influenza should receive either oseltamivir or zanamivir if treatment can be initiated within 48 hours of symptom onset. Patients who present >48 hours after illness onset are potential candidates for treatment if they have influenza pneumonia or to reduce viral shedding while hospitalized (453). The American Academy of Pediatrics recommends antiviral treatment of any child with influenza who is also at high risk for influenza complications, regardless of vaccination status, and any otherwise healthy child with moderate-to-severe influenza infection who might benefit from the decrease in duration of clinical symptoms documented to occur with therapy (454).
# Chemoprophylaxis
Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in preventing and controlling influenza. Certain persons are at higher priority for chemoprophylaxis (Box 4); however, chemoprophylaxis does not need to be limited to these persons. In community studies of healthy adults, both oseltamivir and zanamivir had similar efficacy in preventing febrile, laboratory-confirmed influenza illness (460-465). For example, a 6-week study of oseltamivir chemoprophylaxis among nursing home residents demonstrated a 92% reduction in influenza illness (464). A 4-week study among community-dwelling persons at higher risk for influenza complications (median age: 60 years) demonstrated that zanamivir had an 83% effectiveness in preventing symptomatic laboratory-confirmed influenza (465). The efficacy of antiviral agents in preventing influenza among severely immunocompromised persons is unknown. A small nonrandomized study conducted in a stem cell transplant unit suggested that oseltamivir can prevent progression to pneumonia among influenza-infected patients (466).
When determining the timing and duration for administering influenza antiviral medications for chemoprophylaxis, factors related to cost, compliance, and potential adverse events should be considered. To be maximally effective as chemoprophylaxis, the drug must be taken each day for the duration of influenza activity in the community. Additional clinical guidelines on the use of antiviral medications to prevent influenza are available (453,454).
# Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun
Development of antibodies in adults after vaccination takes approximately 2 weeks (369,370). Therefore, when influenza vaccine is administered after influenza activity in a community has begun, chemoprophylaxis should be considered for persons at higher risk for influenza complications during the time from vaccination until immunity has developed. Children aged <9 years who receive influenza vaccination for the first time might require as much as 6 weeks of chemoprophylaxis (i.e., chemoprophylaxis until 2 weeks after the second dose, when immunity after vaccination would be expected). Persons at higher risk for complications of influenza still can benefit from vaccination after community influenza activity has begun because influenza viruses might still be circulating at the time vaccine-induced immunity is achieved.
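The bridging interval above can be expressed as a simple date calculation. The sketch below is an editorial illustration under the stated assumptions (2 weeks after vaccination, or 2 weeks after the second dose for first-time vaccinees aged <9 years); the names are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical sketch (illustration only) of the bridging interval above:
# chemoprophylaxis from vaccination until immunity is expected.
def chemoprophylaxis_end_date(vaccination_date, age_years,
                              first_vaccination, second_dose_date=None):
    if age_years < 9 and first_vaccination and second_dose_date is not None:
        return second_dose_date + timedelta(weeks=2)
    return vaccination_date + timedelta(weeks=2)

# First-time vaccinee aged 5 years, doses 4 weeks apart: ~6 weeks in total.
print(chemoprophylaxis_end_date(date(2008, 11, 3), 5, True, date(2008, 12, 1)))
# 2008-12-15
```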
# Persons Who Provide Care to Those at High Risk
To reduce the spread of virus to persons at high risk, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact might include employees of hospitals, clinics, and chronic-care facilities, household members, visiting nurses, and volunteer workers. If an outbreak is caused by a strain of influenza that might not be covered by the vaccine, chemoprophylaxis can be considered for all such persons, regardless of their vaccination status.
# Persons Who Have Immune Deficiencies
Chemoprophylaxis can be considered for persons at high risk who are more likely to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, particularly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered.
# Other Persons
Chemoprophylaxis throughout the influenza season or during increases in influenza activity within the community might be appropriate for persons at high risk for whom vaccination is contraindicated, or for whom vaccination is likely to be ineffective. Health-care providers and patients should make decisions regarding whether to begin chemoprophylaxis and how long to continue it on an individual basis.
# Antiviral Drug-Resistant Strains of Influenza
# Oseltamivir and Zanamivir (Neuraminidase Inhibitors)

Among 2,287 isolates obtained from multiple countries during 1999-2002 as part of a global viral surveillance system, eight (0.3%) had a more than tenfold decrease in susceptibility to oseltamivir, and two (25%) of these eight also were resistant to zanamivir (467). In Japan, where more oseltamivir is used than in any other country, resistance to oseltamivir was identified in three (<1%) of tested isolates. In 2007-08, increased resistance to oseltamivir was reported among A (H1N1) viruses in many countries (469,470). Persons infected with oseltamivir-resistant A (H1N1) viruses had not previously received oseltamivir treatment and were not known to have been exposed to a person undergoing oseltamivir treatment (469,470). In the United States, approximately 10% of influenza A (H1N1) viruses, no A (H3N2) viruses, and no influenza B viruses were resistant to oseltamivir during the 2007-08 influenza season, and the overall percentage of influenza A and B viruses resistant to oseltamivir in the United States was <5%. No viruses resistant to zanamivir were identified (417). Oseltamivir or zanamivir continue to be the antiviral agents recommended for the prevention and treatment of influenza (418). Although recommendations for use of antiviral medications have not changed, enhanced surveillance for detection of oseltamivir-resistant viruses is ongoing and will enable continued monitoring of changing trends over time.
Development of viral resistance to zanamivir or oseltamivir during treatment has also been identified but does not appear to be frequent (450,471-474). One limited study reported that oseltamivir-resistant influenza A viruses were isolated from nine (18%) of 50 Japanese children during treatment with oseltamivir (475). Transmission of neuraminidase inhibitor-resistant influenza B viruses acquired from persons treated with oseltamivir is rare but has been documented (476). No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (451,477). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (451). Prolonged shedding of oseltamivir- or zanamivir-resistant virus by severely immunocompromised patients, even after cessation of oseltamivir treatment, has been reported (478,479).
# Amantadine and Rimantadine (Adamantanes)
Adamantane resistance among circulating influenza A viruses increased rapidly worldwide over the past several years, and these medications are no longer recommended for influenza prevention or treatment, although in some limited circumstances use of adamantanes in combination with a neuraminidase inhibitor medication might be considered (see Prevention and Treatment of Influenza when Oseltamivir Resistance is Suspected). The proportion of influenza A viral isolates submitted from throughout the world to the World Health Organization Collaborating Center for Surveillance, Epidemiology, and Control of Influenza at CDC that were adamantane-resistant increased from 0.4% during 1994-1995 to 12.3% during 2003-2004 (480). During the 2005-06 influenza season, CDC determined that 193 (92%) of 209 influenza A (H3N2) viruses isolated from patients in 26 states demonstrated a change at amino acid 31 in the M2 gene that confers resistance to adamantanes (413,414). Preliminary data from the 2007-08 influenza season indicate that resistance to adamantanes remains high among influenza A isolates, with approximately 99% of tested influenza A(H3N2) isolates and approximately 10% of influenza A(H1N1) isolates resistant to adamantanes (CDC, unpublished data, 2008). Amantadine or rimantadine should not be used alone for the treatment or prevention of influenza in the United States until evidence of susceptibility to these antiviral medications has been reestablished among circulating influenza A viruses. Adamantanes are not effective in the prevention or treatment of influenza B virus infections.
# Prevention and Treatment of Influenza when Oseltamivir Resistance is Suspected
Testing for antiviral resistance in influenza viruses is not available in clinical settings. Because the proportion of influenza viruses that are resistant to oseltamivir remains <5% in the United States, oseltamivir or zanamivir remain the medications recommended for prevention and treatment of influenza. Influenza caused by oseltamivir-resistant viruses appears to be indistinguishable from illness caused by oseltamivir-sensitive viruses (469). When local viral surveillance data indicate that oseltamivir-resistant viruses are widespread in the community, clinicians have several options. Consultation with local health authorities to aid in decision-making is recommended as a first step. Persons who are candidates for receiving chemoprophylaxis as part of an outbreak known to be caused by oseltamivir-resistant viruses, or who are being treated for influenza illness in communities where oseltamivir-resistant viruses are known to be circulating widely, can receive zanamivir. However, zanamivir is not licensed for chemoprophylaxis indications in children aged <5 years and is not licensed for treatment in children aged <7 years (451). In addition, zanamivir is not recommended for use in persons with chronic cardiopulmonary conditions, and can be difficult to administer to critically ill patients because of the inhalation mechanism of delivery. In these circumstances, a combination of oseltamivir and either rimantadine or amantadine can be considered, because influenza A (H1N1) viruses characterized to date that were resistant to oseltamivir have usually been susceptible to adamantane medications (CDC, unpublished data, 2008). However, adamantanes should not be used for chemoprophylaxis or treatment of influenza A unless they are part of a regimen that also includes a neuraminidase inhibitor, because viral surveillance data have documented that adamantane resistance among influenza A viruses is common. Influenza B viruses are not sensitive to adamantane drugs.
# Control of Influenza Outbreaks in Institutions
Use of antiviral drugs for treatment and chemoprophylaxis of influenza is a key component of influenza outbreak control in institutions. In addition to antiviral medications, other outbreak-control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, re-offering influenza vaccination to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (481-483). Both adamantanes and neuraminidase inhibitors have been used successfully to control outbreaks caused by antiviral-susceptible strains when antivirals are combined with other infection control measures (460,462,464,484-488).
When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis with a neuraminidase inhibitor medication should be started as early as possible to reduce the spread of the virus (489,490). In these situations, having preapproved orders from physicians, or plans to obtain orders for antiviral medications on short notice, can substantially expedite administration of antiviral medications. Specimens should be collected from ill cases for viral culture to assess antiviral resistance and provide data on the outbreak viruses. Chemoprophylaxis should be administered to all eligible residents, regardless of whether they received influenza vaccination during the previous fall, and should continue for a minimum of 2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 7-10 days after illness onset in the last patient (489). Chemoprophylaxis also can be offered to unvaccinated staff members who provide care to persons at high risk. Chemoprophylaxis should be considered for all employees, regardless of their vaccination status, if indications exist that the outbreak is caused by a strain of influenza virus that is not well matched by the vaccine. Such indications might include multiple documented breakthrough influenza-virus infections among vaccinated persons, studies indicating low vaccine effectiveness, or circulation in the surrounding community of suspected index case(s) caused by strains not contained in the vaccine.
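The stopping rule in the paragraph above (at least 2 weeks of chemoprophylaxis, extended until roughly 7-10 days after illness onset in the last patient if cases continue) reduces to a simple date calculation. A minimal sketch with hypothetical names, using the later 10-day bound:

```python
from datetime import date, timedelta

# Illustrative sketch of the outbreak chemoprophylaxis stopping rule described above.
def chemoprophylaxis_end_date(start: date, last_illness_onset: date) -> date:
    minimum_end = start + timedelta(weeks=2)                 # minimum 2-week course
    outbreak_end = last_illness_onset + timedelta(days=10)   # ~7-10 days after last onset
    return max(minimum_end, outbreak_end)

# Example: chemoprophylaxis started June 1; last illness onset June 20.
print(chemoprophylaxis_end_date(date(2008, 6, 1), date(2008, 6, 20)))  # 2008-06-30
```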
In addition to use in nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories, correctional facilities, or other settings in which persons live in close proximity). To limit the potential transmission of drug-resistant virus during outbreaks in institutions, whether in chronic- or acute-care settings or other closed settings, measures should be taken to reduce contact between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis.
# Dosage
Dosage recommendations vary by age group and medical conditions (Table 4).
# Adults
Zanamivir is licensed for treatment of adults with uncomplicated acute illness caused by influenza A or B virus and for chemoprophylaxis of influenza among adults. Zanamivir is not recommended for persons with underlying airways disease (e.g., asthma or chronic obstructive pulmonary disease).

Oseltamivir is licensed for treatment of adults with uncomplicated acute illness caused by influenza A or B virus and for chemoprophylaxis of influenza among adults. Dosages and schedules for adults are listed (Table 4).
# Children
Zanamivir is licensed for treatment of influenza among children aged ≥7 years. The recommended dosage of zanamivir for treatment of influenza is 2 inhalations (one 5-mg blister per inhalation, for a total dose of 10 mg) twice daily (approximately 12 hours apart). Zanamivir is licensed for chemoprophylaxis of influenza among children aged ≥5 years; the chemoprophylaxis dosage of zanamivir for children aged ≥5 years is 10 mg (2 inhalations) once a day.

Oseltamivir is licensed for treatment and chemoprophylaxis among children aged ≥1 year. Recommended treatment dosages vary by the weight of the child: 30 mg twice a day for children who weigh <15 kg, 45 mg twice a day for those who weigh 15-23 kg, 60 mg twice a day for those who weigh >23-40 kg, and 75 mg twice a day for those who weigh >40 kg. Dosages for chemoprophylaxis are the same for each weight group, but doses are administered only once per day rather than twice.
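The weight bands above translate directly into a lookup. A minimal sketch, assuming the banding just described; the function name is illustrative:

```python
# Illustrative sketch of the weight-banded pediatric oseltamivir dose described above.
def oseltamivir_dose_mg(weight_kg: float) -> int:
    """Single dose in mg: given twice daily for treatment, once daily for chemoprophylaxis."""
    if weight_kg < 15:
        return 30
    if weight_kg <= 23:
        return 45
    if weight_kg <= 40:
        return 60
    return 75
```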
# Persons Aged ≥65 Years

No reduction in dosage for oseltamivir or zanamivir is recommended on the basis of age alone.
# Persons with Impaired Renal Function
Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were reported (450). However, a limited number of healthy volunteers who were administered high doses of intravenous zanamivir tolerated systemic levels of zanamivir that were substantially higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (491,492). On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild-to-moderate or severe impairment in renal function (451).

Serum concentrations of oseltamivir carboxylate, the active metabolite of oseltamivir, increase with declining renal function (450). For patients with creatinine clearance of 10-30 mL per minute (450), a reduction of the treatment dosage of oseltamivir to 75 mg once daily, and of the chemoprophylaxis dosage to 75 mg every other day, is recommended. No treatment or chemoprophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment.
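The renal adjustment just described is another simple lookup. A minimal sketch under the same assumptions; names are illustrative, and the standard adult regimen (75 mg twice daily for treatment, 75 mg once daily for chemoprophylaxis) is the labeled dosing listed in Table 4 rather than stated in the text above:

```python
# Illustrative sketch of the oseltamivir renal dose adjustment described above.
def adult_oseltamivir_regimen(creatinine_clearance_ml_min: float, purpose: str) -> str:
    if 10 <= creatinine_clearance_ml_min <= 30:
        # Reduced dosing for creatinine clearance of 10-30 mL per minute (450).
        return "75 mg once daily" if purpose == "treatment" else "75 mg every other day"
    # Standard labeled adult regimen (Table 4).
    return "75 mg twice daily" if purpose == "treatment" else "75 mg once daily"
```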
# Adverse Events
When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 4); presence of other medical conditions; indications for use (i.e., chemoprophylaxis or therapy); and the potential for interaction with other medications.
# Zanamivir
Limited data are available about the safety or efficacy of zanamivir for persons with underlying respiratory disease or for persons with complications of acute influenza, and zanamivir is licensed only for use in persons without underlying respiratory or cardiac disease (497). In a study of zanamivir treatment of ILI among persons with asthma or chronic obstructive pulmonary disease in which study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (451,498). However, in a phase-I study of persons with mild or moderate asthma who did not have ILI, one of 13 patients experienced bronchospasm after administration of zanamivir (451). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Because of the risk for serious adverse events and because efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment of patients with underlying airway disease (451). Allergic reactions, including oropharyngeal or facial edema, also have been reported during postmarketing surveillance (451,498).

In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and for those receiving placebo (i.e., inhaled lactose vehicle alone) (428-432,498). The most common adverse events reported by both groups were diarrhea, nausea, sinusitis, nasal signs and symptoms, bronchitis, cough, headache, dizziness, and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (451). Zanamivir does not impair the immunologic response to TIV (499).
# Oseltamivir
Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (434,435,450,500). Among children treated with oseltamivir, 14% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% of children discontinued the drug because of this side effect (437), and a limited number of adults enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (450). Similar types and rates of adverse events were reported in studies of oseltamivir chemoprophylaxis (450). Nausea and vomiting might be less severe if oseltamivir is taken with food (450). No published studies have assessed whether oseltamivir impairs the immunologic response to TIV.
Transient neuropsychiatric events (self-injury or delirium) have been reported postmarketing among persons taking oseltamivir; the majority of reports were among adolescents and adults living in Japan (501). FDA advises that persons receiving oseltamivir be monitored closely for abnormal behavior (450).
# Use During Pregnancy
Oseltamivir and zanamivir are both "Pregnancy Category C" medications, indicating that no clinical studies have been conducted to assess the safety of these medications for pregnant women. Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these two drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus; the manufacturers' package inserts should be consulted (450,451). However, no adverse effects have been reported among women who received oseltamivir or zanamivir during pregnancy or among infants born to such women.
# Drug Interactions
Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically critical drug interactions have been predicted on the basis of in vitro and animal study data (450,451,502).
Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway. For example, coadministration of oseltamivir and probenecid reduced clearance of oseltamivir carboxylate by approximately 50%, with a corresponding approximate twofold increase in plasma levels of oseltamivir carboxylate (468).
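The approximate twofold increase follows from a standard pharmacokinetic relationship (not stated in the source): at a fixed dose, systemic exposure, measured as the area under the concentration-time curve (AUC), is inversely proportional to clearance (CL), so halving clearance roughly doubles exposure:

$$\mathrm{AUC} = \frac{\mathrm{Dose}}{\mathrm{CL}}, \qquad \mathrm{CL} \to 0.5\,\mathrm{CL} \;\Longrightarrow\; \mathrm{AUC} \to 2\,\mathrm{AUC}$$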
No published data are available concerning the safety or efficacy of using combinations of any of these influenza antiviral drugs. Package inserts should be consulted for more detailed information about potential drug interactions.
# Sources of Information Regarding Influenza and Its Surveillance
Information regarding influenza surveillance, prevention, detection, and control is available from CDC. During October-May, surveillance information is updated weekly. In addition, periodic updates regarding influenza are published in MMWR. Additional information regarding influenza vaccine can be obtained by calling 1-800-CDC-INFO (1-800-232-4636). State and local health departments should be consulted about availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control.
# Responding to Adverse Events After Vaccination
Health-care professionals should report all clinically significant adverse events after influenza vaccination promptly to VAERS, even if the health-care professional is not certain that the vaccine caused the event. Clinically significant adverse events that follow vaccination should be reported at http://www.vaers.hhs.gov. Reports may be filed securely online; reporting forms or other assistance may be requested by telephone at 1-800-822-7967.
# National Vaccine Injury Compensation Program
The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP. The Vaccine Injury Table lists the vaccines covered by VICP and the injuries and conditions (including death) for which compensation might be paid. If the injury or condition is not on the Table, or does not occur within the specified time period on the Table, persons must prove that the vaccine caused the injury or condition.

For a person to be eligible for compensation, the general filing deadlines for injuries require claims to be filed within 3 years after the first symptom of the vaccine injury; for a death, claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP or when a new injury/condition is added to the Table, claims that do not meet the general filing deadlines must be filed within 2 years from the date the vaccine or injury/condition is added to the Table, for injuries or deaths that occurred up to 8 years before the Table change. Persons of all ages who receive a VICP-covered vaccine might be eligible to file a claim. Both the intranasal (LAIV) and injectable (TIV) trivalent influenza vaccines are covered under VICP. Additional information about VICP is available at http://www.hrsa.gov/vaccinecompensation or by calling 1-800-338-2382.
- Zanamivir is administered through oral inhalation by using a plastic device included in the medication package. Patients will benefit from instruction and demonstration of the correct use of the device. Zanamivir is not recommended for those persons with underlying airway disease.
- A reduction in the dose of oseltamivir is recommended for persons with creatinine clearance of 10-30 mL per minute.
- The treatment dosing recommendation for oseltamivir in children who weigh <15 kg is 30 mg twice a day. For children who weigh 15-23 kg, the dose is 45 mg twice a day. For children who weigh >23-40 kg, the dose is 60 mg twice a day. For children who weigh >40 kg, the dose is 75 mg twice a day.
- The chemoprophylaxis dosing recommendation for oseltamivir in children who weigh <15 kg is 30 mg once a day. For children who weigh 15-23 kg, the dose is 45 mg once a day. For children who weigh >23-40 kg, the dose is 60 mg once a day. For children who weigh >40 kg, the dose is 75 mg once a day.

# Pharmacokinetics

# Zanamivir

In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (451,494). Approximately 4%-17% of the total amount of orally inhaled zanamivir is absorbed systemically. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (451,465).

# Oseltamivir

Approximately 80% of orally administered oseltamivir is absorbed systemically (495). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (450,496). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (468).

# Route

Oseltamivir is administered orally in capsule or oral suspension form. Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients should be instructed about the correct use of this device.

# Persons with Seizure Disorders

Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use.

# Persons with Immunosuppression

A recent retrospective case-control study demonstrated that oseltamivir was safe and well tolerated when used during the control of an influenza outbreak among hematopoietic stem cell transplant recipients living in a residential facility (493).
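As a worked illustration of the elimination half-lives quoted in the pharmacokinetics passage above: with first-order elimination, the fraction of drug remaining after time $t$ is $(1/2)^{t/t_{1/2}}$. Taking 8 hours as an assumed illustrative value within the 6-10 hour range reported for oseltamivir carboxylate, after a 12-hour dosing interval:

$$\left(\frac{1}{2}\right)^{12/8} \approx 0.35$$

ie, roughly one-third of the peak amount remains when the next dose is taken.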
# Reporting of Serious Adverse Events After Antiviral Medications
Severe adverse events associated with the administration of antiviral medications used to prevent or treat influenza (e.g., those resulting in hospitalization or death) should be reported to MedWatch, FDA's Safety Information and Adverse Event Reporting Program, by telephone at 1-800-FDA-1088, by facsimile at 1-800-FDA-0178, or via the Internet by sending Report Form 3500 (available at http://www.fda.gov/medwatch/safety/3500.pdf). Instructions regarding the types of adverse events that should be reported are included on MedWatch report forms.
# Additional Information Regarding Influenza Virus Infection Control Among Specific Populations
Each year, ACIP provides general, annually updated information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, HCP, hospital patients, pregnant women, children, and travelers) also are available in the following publications:

- CDC. General recommendations on immunization: recommendations of the Advisory Committee on Immunization Practices (ACIP) and the American Academy of Family Physicians (AAFP).
"id": "527915b9eacb908fa3c40441a578655841935bfb",
"source": "cdc",
"title": "None",
"url": "None"
} |
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and provide for the safety of workers occupationally exposed to an ever-increasing number of potential hazards. The National Institute for Occupational Safety and Health (NIOSH) evaluates all available research data and criteria and recommends standards for occupational exposure. The Secretary of Labor will weigh these recommendations along with other considerations, such as feasibility and means of implementation, in promulgating regulatory standards. NIOSH will periodically review the recommended standards to ensure continuing protection of workers and will make successive reports as new research and epidemiologic studies are completed and as sampling and analytical methods are developed. A permanent Federal standard exists for worker exposure to vinyl chloride.
NIOSH recommends that employee exposure to vinyl halides in the workplace be controlled by adherence to the provisions for vinyl chloride in 29 CFR 1910.1017, the contents of which are provided as Appendix I of this document, with the exception that the respirator provisions in 29 CFR 1910.1017 (g)(1)(i-iv) and also 29 CFR 1910.1017 (g)(6)(ii) shall be replaced with those given below. All provisions shall be adhered to for each of the vinyl halides as defined below. The recommended occupational exposure limits are measurable by techniques that are valid, reproducible, and available to industry and government agencies. Sufficient technology exists to permit compliance with the recommended standard. Employers should make every effort to limit employee exposure to the vinyl halides to concentrations that are as low as possible, with an eventual goal of zero exposure. Employee exposures shall be kept at or below the limits prescribed in 29 CFR 1910.1017. The criteria and standard will be subject to review and revision as necessary.
These criteria and the recommended standard apply to workplace exposure of employees to the monomers vinyl chloride (CH2=CHCl), vinylidene chloride (CH2=CCl2), vinyl bromide (CH2=CHBr), vinyl fluoride (CH2=CHF), and vinylidene fluoride (CH2=CF2), including any unreacted monomer that may remain in polymers of these halides. As used in this document, "vinyls" and "vinyl halides" refer only to these five compounds unless the terms are otherwise qualified.

The biologic effects of exposure to the vinyl halides may include changes in behavior, cardiovascular abnormalities, degenerative changes in the liver and bones, and the induction of malignant neoplasms, especially angiosarcomas of the liver. A great deal of information is available concerning the effects of exposure of humans and animals to vinyl chloride; much of this is relatively new information, having been developed since October 1974, when 29 CFR 1910.1017 was promulgated. As part of its effort to provide worker protection, NIOSH has extensively reviewed the newly completed studies as well as the older literature on vinyl chloride and has considered this information in its evaluation of the other vinyl halides. The data that are available from studies of carcinogenicity, mutagenicity, and metabolism, and predictions of biologic reactivity on the basis of physical and chemical properties of the vinyl halides, suggest that the other vinyl halides have carcinogenic potentials similar to that of vinyl chloride. There is strong evidence from animal studies of carcinogenicity on the part of vinylidene chloride and vinyl bromide. Although there is a lack of toxicity data on vinyl fluoride and vinylidene fluoride, until some animal toxicity and/or metabolism data are available, there appears to be no reason to treat these two compounds differently from the other vinyl halides in considerations of worker protection.
# INTRODUCTION
This report presents the criteria and the recommended standard based thereon that were prepared to meet the need for preventing impairment of health from occupational exposure to vinyl halides. The criteria document fulfills the responsibility of the Secretary of Health, Education, and Welfare under Section 20(a)(3) of the Occupational Safety and Health Act of 1970 to "develop criteria dealing with toxic materials and harmful physical agents and substances which will describe exposure levels...at which no employee will suffer impaired health or functional capacities or diminished life expectancy as a result of his work experience."

After reviewing data and consulting with others, NIOSH formalized a system for the development of criteria on which standards can be established to protect the health and provide for the safety of employees exposed to hazardous chemical and physical agents.

The criteria and recommended standards should enable management and labor to develop better engineering controls resulting in more healthful work environments. Simply complying with the recommended standard should not be the final goal.

These criteria for a recommended standard for vinyl halides are part of a continuing series of criteria developed by NIOSH.

The recommended standard applies to the handling, processing, manufacture, use, or storage of the vinyl halides. The standard was not designed for the population-at-large, and its application to situations other than occupational exposure is not warranted. The standard is intended to: (1) protect against the development of short- and long-term systemic effects from exposure to vinyl halides; (2) protect against local effects on the skin and eyes; (3) minimize the risk of induction of cancer; (4) be measurable by techniques that are valid, reproducible, and available to industry and government agencies; and (5) be attainable with existing technology.

The diagnosis of a rare liver cancer, angiosarcoma, in employees involved in polymerization processes involving exposure to vinyl chloride has generated research on related compounds, including vinylidene chloride and vinyl bromide.
The available data from studies with animals confirm the carcinogenic potential of vinyl chloride.
The available information on vinylidene chloride and vinyl bromide suggests that these compounds also are carcinogenic and may induce the same type of characteristic tumor that is associated with exposure to vinyl chloride.

Although no reports of animal experiments have been located in which the effects of long-term exposure to vinyl fluoride or vinylidene fluoride were investigated, these compounds have been found to be mutagenic in bacteria, and they may have metabolic products and pathways similar to those of the other vinyl halides.

Examination of the chemical and physical properties of the vinyl halides indicates that all of them or their metabolites may have similar macromolecular binding potentials. The limited data on vinyl fluoride and vinylidene fluoride suggest that they probably exert nearly the same tumorigenic propensities as vinyl chloride, vinylidene chloride, and vinyl bromide.

To permit accurate assessment of the health hazards associated with the vinyl halides, additional research is necessary. This research should include attempts to: (1) develop less toxic substitutes; (2) develop improved control technology; (3) develop respiratory protective devices, especially those with end-of-service-life indicators, for the vinyl halides other than vinyl chloride; and (4) develop improved sampling and analytical methods and continuous monitoring equipment.
# III. BIOLOGIC EFFECTS OF EXPOSURE
Vinyl halides are of growing industrial importance, especially in the plastics industry. The vinyl halides, vinyl chloride, vinylidene chloride, vinyl bromide, vinyl fluoride, and vinylidene fluoride, are easily polymerized or copolymerized with various compounds, such as acrylonitrile, vinyl acetate, and styrene, to form pliable, lightweight plastics or thermoplastic resins. Whereas there are many reports of epidemiologic, carcinogenic, mutagenic, and metabolic studies of vinyl chloride, there are few reports of studies of the biologic effects of any of the other vinyl halides. Because of the paucity of information on these latter compounds, NIOSH has undertaken evaluations of the structure-activity relationships, based on chemical and physical properties, of the compounds and has used these relationships, along with data from the vinyl chloride literature, as a basis for extrapolation from actual to potential hazards for substances about which direct information is inadequate.

Pertinent physical and chemical properties of these vinyl halides are presented in Table XVII-1, and a list of some of the synonyms for these compounds is presented in Table XVII-2.

The vinyl halides undergo metabolic conversions, presumably initiated by enzymatic oxidation, to the corresponding oxiranes (epoxides) [1-4]. Subsequently, the oxiranes are presumed to either bind covalently to cellular macromolecules or be spontaneously rearranged to the aldehyde or acyl halide, hydrolyzed to the diol, conjugated with glutathione, or reduced back to the parent compound. The major adverse biologic effects of the vinyl halides or their metabolites may include carcinogenesis, mutagenesis, teratogenesis, and damage to the liver.

Such effects may be associated with electrophilic reactions (alkylation) with essential cellular components, whereas the rearrangements and other reactions (reduction, hydrolysis, conjugation) have often been considered to be detoxification mechanisms. It is realized that the rates of these possible reactions may vary and that the risk of adverse effects would be a function of the relative rates leading to, and corresponding half-lives of, each metabolic intermediate. Therefore, toxicity of a different order of magnitude may be elicited by each of these compounds. Indeed, not all of these effects have been associated with each of the vinyl halides.

The absorption and subsequent metabolism of vinyl chloride have been described as concentration-dependent; a saturable enzyme system, predominantly responsible for its metabolism at low concentrations, and a secondary oxidative system, predominant at higher concentrations, have been postulated.

The authors of these reports have further postulated that the oxirane is formed predominantly at the higher concentrations, ie, through the secondary oxidative pathway. The halogenated acetaldehyde is common to both pathways, however.
Thus, even if the oxirane is not formed at low concentrations, the potential for macromolecular alkylation exists through the aldehyde and subsequent intermediates.
The covalent reactions of the vinyl halides and/or their metabolites with biologic materials may alter the chemical behavior and physical characteristics of the cellular constituents so as to prevent the altered molecules from functioning normally in physiologic processes.

The formation of stable reaction products may account, in part, for the subsequent harmful effects observed in biologic systems exposed to the vinyl halides.

The alkylation of the biologic materials controlling cellular metabolism by the vinyl halides and/or their metabolites is the most plausible basis for the induction of genetic and neoplastic alterations in cell populations exposed to these chemicals. Because of the long latent period before adverse effects such as neoplasia become manifest, measurable effects may not be observable until many years after exposure at low concentrations.
# Extent of Exposure

NIOSH has estimated that approximately 2.5 million US workers may be occupationally exposed to vinyl halide monomers. A more precise estimate is difficult to make because of the lack of information on exposure to monomer released in manufacturing processes involving the polymers or copolymers. Some of the occupations that involve exposure to the vinyl halides are listed in Table XVII-3.
# (a) Vinyl Chloride
Vinyl chloride has the chemical formula CH2=CHCl. At room temperature, it is a gas with a sweet, pleasant odor, and it has a boiling point of -13.9 C and a solubility in water of 0.11 g/100 ml at 24 C (Table XVII-1). It is easily liquefied and is stored and used industrially in the liquid form. Vinyl chloride was first prepared by Regnault in 1835 by reacting dichloroethane with alcoholic potash. An effective industrial method for preparing vinyl chloride was established in 1913 by Griesheim-Elektron, using hydrochlorination of acetylene, with mercuric chloride as a catalyst, as described by Klatte and Rollett in 1911. It was not until World War II, however, that production of vinyl chloride for use in synthetic rubber was established on a large scale in the United States.

Vinyl chloride is currently produced commercially by the oxychlorination of ethylene, by the liquid or gaseous reaction of acetylene with hydrochloric acid, and by the pyrolysis of ethylene dichloride.

Production of vinyl chloride in 1975 in the United States amounted to about 4,063 million pounds, and the annual growth rate in the vinyl chloride industry is expected to be about 6% up to 1980.

# (b) Vinylidene Chloride

Vinylidene chloride is still prepared commercially by reacting 1,1,2-trichloroethane with lime or caustic, most often aqueous calcium hydroxide, at 90 C. Other syntheses involve bromochloroethane, trichloroethyl acetate, tetrachloroethane, or catalytic cracking of trichloroethane.

The US production of vinylidene chloride in 1974 was about 170 million pounds. It is primarily used in the production of plastics, including copolymerization with vinyl chloride or acrylonitrile to form various thermoplastic resins. In 1974, NIOSH estimated that 57,000 workers in the United States were potentially exposed to vinylidene chloride.
# (c) Vinyl Bromide
Vinyl bromide, CH2=CHBr, is a colorless gas at room temperature and has a boiling point of 15.85 C and a solubility in water of 0.565 g/100 ml at 25 C (Table XVII-1). It was first prepared and described by Regnault, who obtained it by reacting dibromoethane with alcoholic potash.

In 1872, Reboul reported the preparation of vinyl bromide after reacting acetylene with hydrogen bromide. The major commercial method for producing vinyl bromide is the reaction of ethylene dibromide with sodium hydroxide.

Production of vinyl bromide in the United States amounted to over 5 million pounds in 1976. Currently, vinyl bromide is used primarily as a flame-retarding agent for acrylic fibers. In 1974, NIOSH estimated that 26,000 workers in the United States were potentially exposed to vinyl bromide.

# (d) Vinyl Fluoride

Although the amount of vinyl fluoride used each year has not been reported, an average of 0.6 pound of acetylene is required to produce 1 pound of vinyl fluoride by the current method. Each year, about 2 million pounds of acetylene are used in the United States for producing vinyl fluoride, which is used for making various copolymers found in end products such as insulation for electrical wires and in protective paints and coatings, indicating that some 3.3 million pounds of vinyl fluoride are produced annually. The number of workers potentially exposed to vinyl fluoride has not been estimated by NIOSH.
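The 3.3-million-pound estimate follows directly from the two figures given above; as a quick check of the arithmetic:

$$\frac{2{,}000{,}000\ \text{lb acetylene/year}}{0.6\ \text{lb acetylene per lb vinyl fluoride}} \approx 3{,}300{,}000\ \text{lb vinyl fluoride/year}$$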
# (e) Vinylidene Fluoride

Vinylidene fluoride, CH2=CF2, is a colorless gas at room temperature with a faint, ethereal odor; it has a boiling point of -85.7 C, and its solubility in water is 0.018 g/100 ml at 25 C (Table XVII-1). It was first prepared by Swarts by reacting 2,2-difluoro-1-bromoethane with sodium amylate. Vinylidene fluoride is used in making polymers and copolymers that are found in such end products as insulation for high-temperature wire, protective paints and coatings, and chemical tanks and tubing. In 1974, NIOSH estimated that 32,000 workers in the United States were potentially exposed to vinylidene fluoride.
# Historical Reports
The vinyl compounds assumed economic importance with the increased demand for synthetic rubber and the advent of the plastics industry. The first study of the toxicity of vinyl chloride was conducted after its potential industrial importance became apparent.

The authors concluded that the maximum concentration of vinyl chloride causing no acute effects on humans after exposure for 5 minutes was between 0.8 and 1.2%. The authors also stated that "vinyl chloride causes clearcut intoxicating symptoms which can serve as adequate warning signs of its presence." While these conclusions were valid for the acute irritant and psychomotor effects caused by the 5-minute exposures to vinyl chloride, the possibility of adverse effects of vinyl chloride at concentrations lower than those necessary to produce these symptoms was not discussed. Suciu et al described clinical manifestations of vinyl chloride poisoning in 168 workers at two vinyl chloride manufacturing plants in Rumania.

Although workplace concentrations of vinyl chloride were given for each year from 1962 to 1972 (Table III-1), methods for these determinations were not reported.
The authors compared the workers' reports of symptoms indicative of effects on the nervous system in 1962 with those reported in 1966.

For these 2 years, the percentages of workers (n=168) reporting symptoms were: dizziness, 47 and 10.2; drowsiness, 45 and 16.6; headache, 36.6 and 6.9; loss of memory, 13 and 8; euphoria, 11 and 1.2; and nervousness, 9 and 0.6. These data indicate that the central nervous system (CNS) effects observed were concentration-dependent. The authors stated that the frequency of adverse effects had diminished between 1962 and 1972 because of the institution of exposure-control and therapeutic measures.

These measures included reduction of workplace concentrations of vinyl chloride, flushing vinyl chloride from the reactors before cleaning, wearing gloves (unspecified type) during manual cleaning operations, reduction of the workshift to 6 hours, semiannual medical examinations coupled with transfer to another workplace if poisoning was suspected, interdiction of smoking (presumably only at the workplace), administration of vitamin C, vitamin B complex, and iron for 10 days a month, and supplying ointments with cortisone to prevent skin lesions. The authors stated that these measures "reduced all symptoms by two-thirds."

The importance of this paper lies in its characterization of the wide range of adverse effects observed in a worker population exposed to vinyl chloride; however, which exposures and which workers were associated with particular effects is often unclear. The authors did not state in all cases whether or not the workers examined in 1965 and in subsequent years were the same ones that were examined in 1962. The changes induced by the vitamin therapy and the changes caused by different engineering and administrative controls and work practices are also impossible to evaluate independently.
Veltman et al studied the effects of exposure to vinyl chloride in 70 polyvinyl chloride workers who had been employed for from 6 months to 21.8 years (average 7.7 years) in cleaning autoclaves and centrifuges, in drying and sifting processes, and in wrapping polyvinyl chloride as a dry end product. Exposure concentrations were not reported. Arteries in the fingers were narrowed, and microscopic changes, notably fragmentation of elastic fibers, were found in the fingers of all workers with skin abnormalities and in 6 of 28 workers with apparently unchanged skin. Thrombocytopenia was associated with enlargement of the spleen, but it was also found in patients whose spleens were not enlarged. Only 6 of 29 patients with thrombocytopenia showed improvement in their platelet counts 1-1.5 years after having left their jobs; 20 had even lower platelet counts than those seen initially, and 3 showed no change. None of the most seriously affected workers showed improvement in their platelet counts. All of the phalangeal lesions healed within 2 years after the workers left polyvinyl chloride production work, however.

The authors stated that this study was significant because it showed that acroosteolysis was associated with employment in polyvinyl chloride production, it demonstrated that a vinyl chloride disease existed, and it indicated that vinyl chloride disease might be detected by external signs (changes in fingers or skin) or blood tests (thrombocytopenia). The authors also stated that thrombocytopenia was the earliest manifestation of vinyl chloride toxicity and that platelet counts should be required of vinyl chloride workers. However, only 8.6% had club-like changes of the fingertips, and only 11.4% had skin changes. Although 81% of the workers had thrombocytopenia, which in some cases persisted or worsened after exposure to vinyl chloride ended, the nonspecific nature of thrombocytopenia and the possibility that it might signal damage that is already irreversible cast doubt on the usefulness of this clinical sign for the early detection and diagnosis of vinyl chloride disease. It is apparent that the vinyl chloride syndrome is complex, involving changes in the skin, bones, blood and blood vessels, liver, spleen, and possibly the nervous system.

Until the disease process is better understood, a decision on which changes are true constituents of a syndrome and which are independent events, coincidentally discovered by the same examination, is impractical. Frequent dizziness in 26 of the 70 patients (37.1%) in this study supports the findings of Suciu et al and suggests that the CNS may be affected by exposure to vinyl chloride, possibly indirectly by a vascular mechanism. Interference with CNS function might increase the risk of accidental injury to vinyl chloride workers.
Moulin and coworkers observed four cases of scleroderma accompanying vinyl chloride-induced acroosteolysis in workers. Three of the four workers had scleroderma of the face, and each had shortening of the fingers, thickening of the skin of the fingers with adhesion to the deep layers, palmar erythema and hyperhidrosis, difficulty in extending the fingers completely, and thickened skin on the palmar surface of the wrist and forearm with hard projecting nodules that were most prominent over the flexor tendons. Raynaud's syndrome had been experienced by two workers, one of whom had slightly edematous, scleroderma-like lesions of the feet, face, and hands. One individual's condition was studied in detail and followed for 4 years. The worker, aged 33 years, had for 4 years cleaned autoclaves used in vinyl chloride polymerization.

There was no suggestion of a predisposition toward the development of acroosteolysis in the medical history of either the individual or his family. The worker had a history of consuming 2 liters of wine a day. He was hospitalized for malaise and was found to have slurred speech, paralysis of the right arm and right side of the face, and loss of skin sensations on the same side of the abdomen.

Results of neurologic, roentgenographic, and electroencephalographic examinations showed mild deviations from normal.
While the worker was hospitalized, marked abnormalities of his fingers were noted. The last phalanx of each finger appeared shortened and enlarged, with the nail clubbed and wider than it was long.

The patient had all the skin changes described previously as accompanying scleroderma. Microscopic examination of two nodules from his forearm showed a normal epidermis and a thickened, fibrous dermis with edema separating fragmented collagen fibers, but no signs of inflammation. Elastic fibers were few and segmented.

No significant abnormalities of the blood vessels were reported. Roentgenograms of both hands showed osteolysis of the last phalanx of each finger of both hands, the distal three-fourths of each phalanx having disappeared. A complete roentgenographic examination of the skeleton showed beginning sacroiliac arthritis, but normal phalanges in the toes. Arteriography showed normal circulation through the arm, but decreased circulation in the wrist and hand caused by extreme hypertonia of the blood vessels of the fingers.

Three years after assignment to other work, the individual showed regression of the scleroderma, the fibrous nodules, and the circulatory disturbances. The fingers remained hypothermic compared with the thumb, and painful paresthesia still affected the fingertips. Bone repair was seen in most of the phalanges.
Moulin and coworkers concluded that the incidence of acroosteolysis in vinyl chloride polymerization workers could be considerably reduced by introduction of control measures to reduce exposure to vinyl chloride. They believed that the distal vasoconstriction they observed in these workers was not a true arteritis, although it was severe, and that the acroosteolysis and sclerodermatous changes in the skin were secondary complications of the peripheral vascular hypertonia. Because of the similarity of this disease to acrosclerotic scleroderma, the authors suggested that dermatologists obtain an occupational history from any patient presenting the signs and symptoms described in their paper.

Several other authors have reported CNS effects and acroosteolysis in workers exposed to vinyl chloride during its manufacture or polymerization. The information concerning signs and symptoms of vinyl chloride exposure in these reports is substantially the same as that in the reports previously discussed.
Other studies have identified adverse liver effects in workers exposed to vinyl chloride. Marsteller et al reported on 50 workers in a vinyl chloride polymerization plant, 45 of whom underwent peritoneoscopy and 48 of whom had samples of liver taken for biopsy. All 50 underwent intravenous (iv) cholecystography and radiography of the upper gastrointestinal tract. Scintigraphy of the liver and spleen was performed for 48, using 197Hg. The liver was found by palpation to be enlarged in 31 cases and the spleen in 37 by scintigraphy.

The hepatic surface showed conspicuously augmented vascularization and stellate, reticular, or nodular fibrosis and scarring of the capsule. The spleen was not well visualized by peritoneoscopy except when it was markedly enlarged; then the crenate margin was sharply indented and showed capsular fibrosis and subcapsular hemorrhages. Early signs of portal hypertension were noted, including ascites and dilatation and tortuosity of gastric and peritoneal veins.

Muller et al described microscopic changes observed in liver specimens taken for biopsy from 50 polyvinyl chloride production workers. Liver cells showed focal hydropic swelling, single-cell necrosis, focal granular disintegration of the cytoplasm, and hyperplasia with enlargement and polymorphism of cell nuclei, often with the presence of several nucleoli. Periportal and centrilobular fibroses were described, without accompanying cellular activity or involvement of the portal vessels, and about one-half of the specimens of liver contained fatty degeneration.

Changes in the cells lining the liver sinusoids were described as "most impressive" and had begun to develop in the first few years of exposure. Proliferation of sinusal cells was the first change observed, and, after 6-10 years of exposure, sinusal cell nuclei had become markedly atypical.

Three cases of hemangioendothelial sarcoma of the liver were discovered. The authors reported that the regions around the sarcomas gave the impression that there was a transformation of atypical sinusal cells to tumor cells and that the cytologic atypias of sinusal cells might be of prospective importance.
Thiess and Frentzel-Beyme reported, among other findings in workers exposed to vinyl chloride, acroosteolysis (16 cases) and esophageal varices (13 cases).

There were five cases of "haemangioendotheliosarcoma" in workers who had been employed in vinyl chloride or polyvinyl chloride production areas for 11-17 years; four of the workers, aged 38-44 years, had died. Only 46% of the workers exposed to vinyl chloride at one particular plant were still working in a polyvinyl chloride plant or were otherwise traceable; the rest were lost to the statistical survey.

The authors noted that there were many difficulties in retrospective surveys for occupational health hazards. They stated that, although mortality data were necessary in identifying a new disease, morbidity data were even more important, because death certificates were not always accurate. They also stated that prospective investigations were preferable to retrospective ones, although the latter approach had received priority due to considerations of the time required in relation to the yield of information.
Physicians representing the government, universities, and the vinyl chloride and polyvinyl chloride industries in the Federal Republic of Germany were said to be cooperating in further epidemiologic studies. The authors pointed out that more data on vinyl chloride concentrations in the workplace, periods of worker exposure, and control measures were needed before an association between exposure to vinyl chloride and the development of angiosarcoma and other disorders could be proven.

The authors stated that, with regard to the cause of death among employees at this plant, "a relatively large proportion of deaths occurring at an early age, are due to unnatural causes of death i.e. accident at the work place or road accidents". This may suggest that the worker exposed to vinyl chloride could himself become an occupational and social health hazard because of behavior-modifying effects of the material, and may therefore support the inferences drawn from the work of Veltman et al and Suciu et al discussed previously.

Lange et al, in 1975, analyzed the medical and work histories of 15 workers employed in the polyvinyl chloride processing industry in the Federal Republic of Germany for an average of 5 years (range 1.5-13 years). Seven of the workers (47%) complained of sensations of pressure or pain in the upper abdomen, three (20%) of frequent dizziness, two (13%) of cold hands and feet, and one (7%) of increasing weakness in his legs.
Medical investigations conducted on the workers consisted of a dermatologic examination and several laboratory tests. A BSP retention test was performed on 9 of the workers, a reticulocyte count on 10, roentgenograms of the chest, hands, and feet on 12, and a liver-spleen scintigram, using 99mTc-sulfur colloid, on 11. Four of the workers also underwent laparoscopy and liver biopsy. Dermatologic examination revealed no clinical signs of scleroderma or Raynaud's syndrome. The results of the laboratory tests, however, showed slight to moderate thrombocytopenia (63,000-138,000 cells/µl) in seven workers (47%); increased BSP retention (5.2-15.1% at 45 minutes) in seven workers (47%); reticulocytosis of 1.7-4.4% in six workers (40%); and leukopenia (3,250 cells/µl) in one worker (7%); more than one abnormality was found in some of the workers.

One of the workers examined by liver-spleen scintigram had slight splenomegaly, and one of the workers who underwent laparoscopy and liver biopsy showed changes, although less distinct, "of the kind observed in PVC-production workers." The authors concluded that, despite the small sample size, thrombocytopenia, increased BSP retention, reticulocytosis, splenomegaly, and leukopenia were characteristic of vinyl chloride disease.

Lange et al also presented case studies of two workers from the same polyvinyl chloride plant who had died of malignant tumors. The first case was that of a 38-year-old autoclave cleaner who was employed for 12 years in the plant. Physical examination in 1968 showed a large tumor in the upper abdomen in the area of the liver and spleen. Chest roentgenograms showed destruction of the fourth rib, on the left side of the back. Laboratory findings included an increased erythrocyte sedimentation rate (22-67 mm/hour), considerable anemia, reticulocytosis of 6%, a reduction of serum iron (25 µg/100 ml), an increased serum gamma-globulin (25 relative percent), an increased alkaline phosphatase level (65 units/ml), and an increased SGOT level (24 units/ml), all indicative of liver damage. The liver-spleen scintigram showed splenomegaly, hepatomegaly, and reduced storage in the liver reticuloendothelial system.

The patient then underwent a laparotomy, which revealed a generally enlarged liver with many palpable nodes on the surface. Microscopic examination of two of the liver biopsy specimens showed hemangioendothelial sarcomas. After the laparotomy, the patient was given cytostatic treatment, but his condition continued to deteriorate, and he died within a year after the initial diagnosis of the tumor.

The second case was that of a 39-year-old man who had worked for some portion of 11 years as an autoclave cleaner in the plant and for 2 years in a machine-producing factory. He had felt pain in the lower right quadrant of his thorax for some months and had experienced a painful hardening in the upper right portion of his abdomen several weeks before his medical examination. Physical examination disclosed a tumor the size of an apple at the epigastric angle. Laboratory tests showed these values, which the authors considered abnormal: erythrocyte sedimentation rate (44-73 mm/hour) and the lactic dehydrogenase (296 units/ml) and alkaline phosphatase (80 units/ml) activities in the serum. Normal differentiated blood counts and platelet counts were found. Laparotomy, performed twice on this worker, showed a fist-sized, whitish-yellow tumor on the lower part of the left lobe of the liver that extended to the posterior portion of the right lobe. Numerous other nodules were palpable in the liver.
A small degree of congestive spleen enlargement (not further defined) was also found. Microscopic examination of several biopsy specimens led to the diagnosis of hemangioendothelial sarcoma of the liver. Although postoperative radiotherapy was given, the patient died about a year after the initial diagnosis.

Makk et al, in 1974, reported that one of the authors, having noticed a diagnosis of angiosarcoma of the liver on a death certificate, recalled having performed a liver biopsy 3 years earlier that led to the same diagnosis. An investigation showed that both of these patients had worked in a polyvinyl chloride plant in Kentucky, and a search of plant and area hospital records showed autopsy reports of three other cases over a 10-year period; two additional cases were diagnosed by biopsy. A systematic program was therefore undertaken for detection of liver abnormalities in workers in a chemical plant producing polyvinyl chloride and synthetic rubber, using automated 12- or 18-factor blood analyses.

Screening profiles from the 12- or 18-factor analyses were obtained for 1,183 employees, of whom 75 (6.3%) had either 2 liver-related abnormalities on the initial screening or 1 such abnormality that persisted. As a result of further testing, including exploratory laparotomy of these 75 workers, 2 unsuspected cases of angiosarcoma and 3 cases of portal fibrosis were discovered. A biopsy of the liver was performed on two other workers who requested it, but both samples proved to be normal. Abnormal test results were reported for serum alkaline phosphatase in 35/72 workers.

The results of this study have helped to provide a basis for identifying clinical manifestations of vinyl chloride-induced liver damage. No single test was found to be pathognomonic for this disease, and both false positives and false negatives were common. Sometimes the clue to the presence of angiosarcoma of the liver was not the magnitude of elevation of an enzyme activity, eg, LDH, but the persistence of that elevation. Curiously, the percentage of workers with abnormal 12-factor test results was lower in polyvinyl chloride production workers (21.5%) than in either synthetic-rubber production workers (28.6%) or all other workers (26.7%). However, abnormal results serious enough to warrant comprehensive examinations were present in 9.8% of the polyvinyl chloride production workers, while only 6.9% of the synthetic-rubber workers and 4.9% of all other workers had such seriously abnormal results.
In a 1975 report, Creech and Makk amplified this early report of screening test results. Specimens of the liver for biopsy were obtained from 16 employees of that same plant, 3 of whom had normal results on clinical screening tests. The results of these three biopsies were normal, as were the results from two other biopsies taken from employees with minor abnormalities on the screening examinations. Two cases of angiosarcoma were detected, both with accompanying fibrosis. Periportal fibrosis was the most common biopsy diagnosis, occurring in polyvinyl chloride workers and in two workers from other production areas.
Two polyvinyl chloride production workers also had enlarged spleens, splenic vein thrombosis, and esophageal varices.
Only one of seven workers with acroosteolysis had an abnormal battery of screening tests, and the test results returned to normal within 3 weeks after the employee stopped drinking alcoholic beverages. Results of the clinical tests were essentially the same as had been previously reported. The percentages of abnormal test results among 274 polyvinyl chloride production workers, compared with those of 411 other production workers, were reported for each test; for total bilirubin, the reported figure was 35.6%. Creech and Makk concluded that no individual test was adequate to detect angiosarcoma or fibrosis, although the persistent 20-second "tumor blush" on angiography was useful for diagnosing angiosarcoma, and venous pressure studies were useful for diagnosing fibrosis.

Blood tests did not predict the results on liver scans, and scans did not always detect fibrosis. The authors stated that a combination of blood tests, liver scans, venous pressure studies, angiography, and biopsies would be necessary for the diagnosis of fibrosis and angiosarcoma. This study, like the earlier one, concentrated on attempting to assemble a diagnostic test profile and did not report workplace vinyl chloride concentrations or attempt to correlate examination results with extent or duration of exposure.
In 1974, Creech et al reported on four cases of angiosarcoma of the liver, diagnosed in employees in one chemical plant between 1967 and 1973, that have been previously discussed. The four workers, with a mean age of 44.5 years (36, 41, 43, and 58), had each worked at least 4 continuous years in the vinyl chloride polymerization section of the plant prior to the onset of the disease and had been exposed to vinyl chloride for an average of 18 years (14, 14, 17, and 27). Extensive, nonalcoholic-type cirrhosis, in addition to the angiosarcoma, was found in all four workers. Gastrointestinal bleeding was found in two of the four; other effects observed in one or more workers included portal hypertension, enlarged livers and spleens, weight loss, jaundice, an epigastric mass, and thrombocytopenia. None of the workers had a history of prolonged use of alcohol or exposure to hepatotoxins known to produce angiosarcoma, eg, thorium dioxide or arsenic, either at work or elsewhere.
Falk et al, in conjunction with NIOSH, conducted an investigation at a vinyl chloride polymerization plant in Kentucky where seven cases of angiosarcoma of the liver had been discovered, including the four which had previously been reported by Makk et al. These seven cases were compared with four cases of portal fibrosis found in the same worker population. Factors in the comparison were: age at diagnosis, initial symptoms, physical examination findings, liver function studies, biopsy or autopsy findings, and work performed and overall duration of employment.
The 11 patients were white males between the ages of 28 and 58 at the time of diagnosis.
The seven men with tumors had been employed at the plant for an average of 18.0 years; one had no complaints, but the authors reported fatigue, abdominal pain, chest pain, weight loss, black stools, bloody vomit, and weakness among the others. The four men with nonmalignant liver disease (portal fibrosis) had been employed an average of 20.6 years; one reported chest pain and weight loss, one reported having had black stools on two occasions, one had been noted to be jaundiced when hospitalized for hernia repair, and one had been hospitalized for gallstones.
The physical examinations of these 11 men disclosed enlarged livers or spleens in 4 of the 7 with angiosarcoma and in 3 of the 4 with portal fibrosis.
In addition, one of the patients with tumors had upper-right-quadrant tenderness. Two patients with angiosarcoma and 1 with portal fibrosis had no detectable abnormalities. Results of liver function studies on the men of the two groups were similar.
In both groups there were elevations of the concentrations of total bilirubin and the activities of alkaline phosphatase, SGOT, and LDH in serum, but no consistent pattern matched to the clinical manifestations emerged.
Of the seven workers with angiosarcoma, five had elevated SGOT activities, four had elevated activities of alkaline phosphatase or increased concentrations of total bilirubin, three had heightened activities of LDH, and one had a decreased platelet count. Liver-spleen scans showed defects or other abnormalities in five. In the four workers with portal fibrosis, the concentration of total bilirubin and the activity of SGOT were each elevated in three, the activity of alkaline phosphatase was elevated in two, the activity of LDH was increased in one, and two had abnormal liver-spleen scans.
Platelet counts were not reported for this group. Twelve samples of liver for biopsy were obtained by opening the abdomen and only three by puncturing the abdominal wall and the liver with a Menghini needle. In the seven men found to have angiosarcoma, liver biopsy findings included angiosarcoma in four, hepatitis in two, fibrosis in two, and cirrhosis in one. The biopsy samples from the four men without angiosarcoma revealed fibrosis of the liver; two of them had portal and subcapsular fibrosis, one had portal fibrosis, and one had chronic hepatitis with focal fibrosis. All five of the patients who died of angiosarcoma had had biopsies, but angiosarcoma had been diagnosed in only two. Angiosarcoma had been diagnosed previously by biopsy in the two surviving patients, and fibrosis had been diagnosed previously by biopsy in the four without angiosarcoma. Falk et al stated that the development of angiosarcoma was related to exposure to vinyl chloride and that it was more closely related to the type of work performed than to the overall duration of employment. They suggested that a higher risk was associated with longer duration of employment as a helper (reaction vessel cleaner) than with work in which the monomer was handled in a closed system or in which only polymerized material was handled. The data presented, however, do not completely support this suggestion, since several workers without angiosarcoma had actually had longer durations of exposure as chemical helpers than those workers with angiosarcoma. Since conditions of exposure undoubtedly do not correlate exactly with job classifications, exposure concentrations and durations must be determined to permit assigning relative risk factors.
Whelan et al performed liver scans on all of the 1,180 workers at the vinyl chloride polymerization plant, using radioisotopes of gold, iodine, or, most often, technetium. On 50 of the workers, hepatic venograms, hepatic and celiac angiograms, and pressures in the right atrium, the inferior vena cava, and both free and wedged hepatic vein positions were recorded. Specimens of liver for biopsy were also obtained. Vinyl chloride concentrations in the workplace were not reported.
Four employees had angiosarcoma of the liver; their average age was 43 years (range 37-49 years), and they had been employed in polymerization of vinyl chloride for an average of 15 years (range 12-20 years). Although several tumors were found in some livers, no tumors of the spleen were found, although splenomegaly was present in some cases. No outstanding pathologic condition was found in either the venous or the arterial systems of the liver, but the centers of the tumors appeared to be less vascular than normal liver tissue.
Because the wedged venous pressure measurements resulted in hepatic infarction in three patients, these tests were discontinued as part of the routine screening procedure. The area of infarction that resulted from measurement of wedged venous pressure caused one worker to appear erroneously to have angiosarcoma when he was examined angiographically. Spleen enlargement occurred with and without portal hypertension; about 10% of the 1,180 employees had enlarged spleens without any evidence of tumor.
Whelan et al concluded that isotopic liver scans were the most useful procedures for detecting angiosarcoma. They also concluded that a peripheral tumor stain, puddling of the contrast agent from the midarterial phase up to 34 seconds, and hypovascularity of the central portions of the tumor were characteristic angiographic features of this tumor. Because hepatic infarcts following measurements of wedged venous pressures may give a similar angiographic picture, the authors recommended that wedged venous pressure studies, when necessary, be done after angiography.
Popper and Thomas studied surgical and autopsy samples from the livers of 11 vinyl chloride and polyvinyl chloride workers, in six of whom angiosarcoma of the liver had been diagnosed and five of whom had hepatic fibrosis. Two cases of primary hepatic carcinoma, one in a worker who had laminated polyvinyl chloride sheets for 17 years and the other in the 8-year-old daughter of a vinyl chloride polymerization worker, were also described, but no information on the concentrations and durations of exposure of these two persons was supplied.
In addition to angiosarcoma of the liver, typical lesions found in the livers of the 11 patients included subcapsular, portal, and perisinusoidal fibrosis, increased numbers of fibroblasts, formation of connective tissue septa, sinusoidal dilatation (without signs of passive congestion, such as compression of hepatocytes), increased size and number of sinusoidal lining cells, and enlarged hepatocytes with hyperchromatic nuclei accompanied by bile stasis.
These lesions were usually more prominent in the group with angiosarcoma than in that with hepatic fibrosis.
Five patients (two with angiosarcoma and three with precursor signs) had enlarged spleens with "grossly visible and conspicuously enlarged Malpighian follicles separated by a meaty-appearing homogeneous red pulp". The follicles had large germinal centers with phagocytic cells. The periarteriolar lymphatic sheaths were markedly enlarged. Cells lining splenic sinusoids were enlarged but did not show phagocytosis; their elongated cuboidal shape made them resemble glandular cells. Perifollicular hemorrhages were noted, and in one case the presence of Gamna-Gandy bodies suggested old hemorrhages.
Popper and Thomas proposed that hepatic fibrosis and hepatic and splenic cellular proliferation were precursor stages in the development of angiosarcoma.
Their evidence was insufficient, however, to prove that these changes were irreversible or progressive. Portal fibrosis was not found to be predictive of the development of angiosarcoma, and the focal intralobular fibrosis in these subjects was similar to that in many elderly patients, particularly diabetics.
Hepatic focal subcapsular fibrosis was characteristic of angiosarcoma, however, and could be seen by peritoneoscopy or during surgery. The authors noted that there were similarities between the development of portal fibrosis in workers exposed to vinyl chloride and diseases in European vineyard workers exposed to arsenical pesticides, patients in India with "idiopathic portal hypertension" (Banti's syndrome), and patients with psoriasis who were treated for prolonged periods with Fowler's solution (potassium arsenite).
This report is valuable for its comprehensive description of vinyl chloride-induced visceral lesions and its comparison of these with other fibrotic lesions.
Thomas et al extended the previous microscopic studies of the livers and spleens of workers engaged in vinyl chloride polymerization. Among the 15 cases of angiosarcoma of the liver that they studied, the tumor had metastasized to the duodenum in one case, to the lung in a second, and to the lung, heart, kidney, and lymph nodes in a third. Specimens from 20 patients were reviewed microscopically.
These authors again postulated that their observations might have illustrated a developmental continuum in which fibrosis precedes the development of angiosarcoma of the liver. They also noted that a specimen of liver obtained for biopsy from one patient 2 years after his last exposure to vinyl chloride showed that both hepatocytes and sinusoidal lining cells had returned to normal, but the fibrous scars persisted.
The case for development of angiosarcoma of the liver from a fibrotic precursor stage would be compelling if the specimens presented had been obtained in a real-time sequence from an individual patient. The authors' arrangement of specimens from 20 different patients is plausible, however, because fibrotic changes are seen throughout angiosarcomatous livers and the same changes in the spleen accompany both fibrosis and angiosarcoma of the liver.
The disappearance of abnormal hepatocytes and sinusoidal lining cells from biopsy materials within 2 years after the worker was removed from exposure to vinyl chloride is additional evidence that this exposure caused the hepatic abnormalities.
Zimmermann and Eck reported a case of angiosarcoma of the liver in a 38-year-old chemical laboratory assistant in Germany who was exposed to vinyl chloride at an unspecified concentration for 3 years and 5 months (1960-1963).
He was exposed for 3-4 hours two or three times a week and wore no protective mask. The worker was hospitalized with marked abdominal distress about a year before his death in 1974.
Tumors of the liver were suspected after laparoscopy but could not be confirmed by microscopic examination of liver specimens at that time. An open-abdomen surgical sampling of the liver 6 months later produced evidence of occlusion of the portal vein, necrosis, and interstitial fibrosis. The changes were originally attributed to tertiary syphilis, but serologic examinations were negative, and the patient's work history showed exposure to vinyl chloride. The patient lived for another 6 months and died from massive hemorrhage from esophageal varices. Post-mortem examination showed a metastasizing multilocular angiosarcoma of the liver with liver fibrosis.
The tumor had spread to the diaphragm, the pleura, and the lymph nodes around the pancreas. Hepatomegaly, with icterus and fibrosis, and splenomegaly were also confirmed, and the heart showed flaccid dilatation. Bronchitis was also indicated. Liver structure was abnormal; lobular centers were often completely fibrotic, branches of the portal vein were blocked by filamentary connective tissue, and the diaphragm was bound to the liver by fibrous tissue. There were localized areas of recent necrosis. The spleen contained blood-forming centers for both red and white cell lines and had localized hemorrhages.
Microscopic examination showed tumor cells bearing filaments and an abundance of connective tissue between atrophic liver-cell plates. The pancreas contained areas of fibrosis and of medium-grade lipomatosis. The testes showed a reduction of spermiogenesis and slight fibrosis. Purulent myocarditis was found.
In the brain, there was localized atrophy of cerebellar Purkinje cells and necrosis of the frontoparietal cerebral cortex. Zimmermann and Eck attributed the development of angiosarcoma in the patient to his prior exposure to vinyl chloride, noting that primary angiosarcoma of the liver is extremely infrequent. In addition to vinyl chloride, the patient had been exposed to methylene chloride, styrene, acrylonitrile, and other substances. This paper showed some of the problems encountered in determining whether angiosarcoma of the liver was induced by vinyl chloride.
The short duration of work involving exposure to vinyl chloride (less than 3.5 years), the long interval (about 10 years) before the onset of symptoms, the mixed exposures, and the infrequent occurrence of this tumor made the diagnosis and determination of cause difficult.

Christine et al, in 1974, noted six microscopically confirmed cases of angiosarcoma of the liver in Connecticut, five of the cases having been diagnosed after 1966. Two of the patients had apparently been occupationally exposed to polyvinyl chloride. A 47-year-old man had worked for the previous 10 years as an accountant in a factory producing vinyl sheets and processing polyvinyl chloride resins and had frequently visited the plant's production areas.
Another patient, a 61-year-old man, had spent 25 years in an electrical plant operating a machine that applied polyvinyl chloride-containing plastic to wires. Two other patients, a 73-year-old man with a history of chronic intake of alcohol and an 83-year-old woman, had had no known occupational exposure to polyvinyl chloride, but both had lived 35 years or longer within 2 miles of the electric wire plant or within 0.5 mile of the vinyl products plant mentioned above.
The other two patients, a housewife and an alcoholic man, had neither occupational nor probable residential exposure to vinyl chloride.
None of the six patients had a history of hepatitis or exposure to hepatotoxic drugs, medications, or agents other than alcohol.
Many other reports are available in which cases of angiosarcoma in vinyl chloride workers are discussed. Several other papers contain reports of scintigraphic investigations, histopathologic studies, and clinical aspects of liver damage in workers exposed to vinyl chloride. The information contained in these papers is substantially the same as that presented in this section, and these reports often discuss the same cases; NIOSH has compiled a listing of these cases. Because of the prolonged latent period calculated from these data and the unavailability of complete information on exposure conditions or numbers of workers exposed in the worldwide production of vinyl chloride, any estimate of expected future cases of angiosarcoma in the workforce would be unreliable.

An estimation of risk was performed by Kuzmack and McGaughy in 1975. Using a linear dose-response model and incidence rates in rats, they projected an incidence rate of angiosarcoma of 0.0052/person/year of exposure in highly exposed workers (350 ppm or 896 mg/cu m, 7 hours/day, 5 days/week). Using data from epidemiologic studies of vinyl chloride workers and data on the exposure durations for the 14 US occupational cases of angiosarcoma that were known as of 1974, a projected incidence rate for angiosarcoma of 0.0031/person/year was calculated.
The authors concluded from their calculations that 7.5% of all highly exposed vinyl chloride workers would be expected to develop angiosarcoma and that 15% would develop primary cancers at some site during their lifetimes. They also estimated that, as of 1974, only 38% of the predicted number of angiosarcomas caused by vinyl chloride had been diagnosed.
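Under a linear dose-response model of this kind, cumulative risk is simply the incidence rate multiplied by the years of exposure. A minimal sketch of the arithmetic follows; the 14.4-year duration in the example is back-calculated from the authors' 7.5% figure for illustration and is not a figure from their report.

```python
# Linear (no-threshold) risk projection of the kind described above:
# cumulative risk = incidence rate per person-year x years of exposure.
def projected_risk(rate_per_person_year: float, years_exposed: float) -> float:
    return rate_per_person_year * years_exposed

RATE_HIGH_EXPOSURE = 0.0052  # angiosarcoma cases/person/year, projected from rat data

# Under this model, the authors' 7.5% lifetime figure corresponds to
# roughly 14-15 years of high exposure:
print(projected_risk(RATE_HIGH_EXPOSURE, 14.4))  # ~0.075, ie, 7.5%
```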
The authors pointed out several possible sources of error in their estimates. These involved uncertainties in the numerical estimates of the functions and conceptual inadequacies in the assumptions of the models. For example, the accuracy of the assumed exposure concentration of 350 ppm and the assumed duration of 7 hours/day, 5 days/week, was uncertain. Also, biologic latency was not directly observable, and the time of initiation of some unknown irreversible damage might not have been accurately represented by the duration of total exposure for the known cases. The predictions are based on the assumption that the set of stochastic variables, such as genetic compositions, previous medical histories, diets, etc, that might influence tumor formation is homogeneous. It is further assumed that the then-current exposures would be continued without major change.
These two assumptions cannot be supported, since homogeneity in worker populations from various geographic areas is highly unlikely and since exposures have been decreasing in recent years.
Two maps prepared by Falk describe the geographic distribution of deaths from angiosarcoma of the liver in vinyl chloride polymerization workers (Figure XVII-1) and in people not engaged in work with vinyl chloride (Figure XVII-2). A comparison of these figures shows that there is no reason to believe that some unknown geographic or demographic feature would account for the clustering of angiosarcomas of the liver in the vinyl chloride worker population.
Casterline et al, in 1977, reported a unique case of squamous-cell carcinoma of the buccal mucosa associated with chronic oral exposure to polyvinyl chloride.
The patient was a 22-year-old white male who had habitually chewed plastic insulation from wires and other plastic materials since he was 8 years old. He denied using any form of tobacco, alcoholic beverages, or illegal drugs. A pinhead-sized papule was found on the right anterior labial buccal sulcus after an episode of aphthous stomatitis that lasted less than a week. In 3 months, the papule grew to about 1 centimeter in size and was excised by a dentist. Microscopic examination of the tissue resulted in a diagnosis of invasive squamous cell carcinoma.
A wider resection showed that the tissue margins appeared free of tumor.
No recurrence was noted in the next 6 months.
The patient's oral hygiene appeared excellent, but his teeth were grooved as a result of his habit of stripping wire with them. He reported keeping plastic material in his mouth for 6-8 hours at a time. No abnormalities other than the buccal lesion were found, although a complete examination was made specifically to search for signs of vinyl chloride-induced functional aberrations. Casterline et al believed that the development of this cancer in an area of the mouth where the individual frequently stored polyvinyl chloride materials was more than coincidental. They cited as further support for their belief the high incidence of cancers of the buccal cavity and pharynx found by Tabershaw and Gaffey to be associated with exposure to vinyl chloride. Casterline and coworkers urged that electronics workers be informed of the hazard of repeatedly holding plastic-covered materials in the mouth. If the authors' conclusion is correct, a little-suspected but significant route of exposure to vinyl chloride may exist.
The possibility of coincidence, however, may be greater than the authors were willing to concede.
# (b) Vinylidene Chloride
McBirney, in 1954, reported on a case of fatal poisoning in a worker exposed to the vapor from dichloroethylene (identified as vinylidene chloride) stabilized with 1% of a sodium hydroxide solution. The worker was 35 years old and had worked 8 hours/day, 5 days/week, for a "short" time extracting oil from fish livers using this mixture.
The worker first complained that the odor and vapors from the extraction kettles made him nauseous. A day or two before he was suddenly taken ill, the worker had acted "strangely," and his coworkers at first had thought that he was drunk. He was hospitalized and died within 2 days.
At autopsy, the brain, heart, lungs, spleen, liver, and kidneys were observed to be congested. The cause of death was listed as bronchopneumonia. The lack of exposure information in this report, the possibility of a preexisting condition, and the possibility that the vapor described as dichloroethylene may have been 1,2-dichloroethylene (a substance known to have been used in extracting fish oil) do not allow any conclusions to be drawn.
Liver function tests and scans were performed on a group of 46 workers at a New Jersey plant in which the concentration of vinylidene chloride in the air ranged from below the analytic limit of detection to 1.45 ppm (about 5.6 mg/cu m).
Previous company experience had established a typical range of 0-5 ppm (0-19.85 mg/cu m), with occasional peaks of 300 ppm (1,191 mg/cu m) associated with accidental spills or leaks and with removal of samples of product from the reactor. This plant used nearly 200 chemicals, including several known hepatotoxins, so exposure to a single agent was not claimed. The tests included measurements of SGOT, SGPT, serum gamma-glutamyl transpeptidase, LDH, and alkaline phosphatase activities, total bilirubin, and indocyanine green clearance.
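The paired ppm and mg/cu m values quoted throughout this section follow the standard conversion for vapors at approximately 25 C and 1 atm. A minimal sketch follows; the molar volume and molecular weights are standard reference values, not figures taken from the studies cited.

```python
# Standard conversion between ppm (v/v) and mg/cu m for a vapor at
# about 25 C and 1 atm, where 1 mole of gas occupies roughly 24.45 L.
MOLAR_VOLUME_L = 24.45  # L/mol at 25 C, 1 atm

MOLECULAR_WEIGHT = {
    "vinyl chloride": 62.50,       # g/mol
    "vinylidene chloride": 96.94,  # g/mol
}

def ppm_to_mg_per_cu_m(ppm: float, compound: str) -> float:
    """Convert a vapor concentration in ppm to mg/cu m."""
    return ppm * MOLECULAR_WEIGHT[compound] / MOLAR_VOLUME_L

# Reproduces the figures quoted in the text, within rounding:
print(ppm_to_mg_per_cu_m(300, "vinylidene chloride"))  # ~1,189 (text: 1,191)
print(ppm_to_mg_per_cu_m(350, "vinyl chloride"))       # ~895 (text: 896)
```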
None of the 46 employees was found to have total bilirubin values outside of normal limits, but 39% had abnormal LDH activity, 30% had abnormal gamma-glutamyl transpeptidase activity, 28% had abnormal SGOT activity, 21% had abnormal SGPT activity, and 13% had abnormal alkaline phosphatase activity. Fourteen men had abnormal liver scans, but only five showed what the author defined as "definite hepatomegaly." Six employees (13%) had severe impairment of indocyanine green excretion (less than 10% clearance), 25 (56%) had moderate impairment (10-17% clearance), and 14 (30%) were found to be normal (greater than 17% clearance). Fifteen workers were retested for dye clearance; two had returned to normal, four had deteriorated, and the rest remained unchanged. On the basis of these tests, biopsy studies of the liver were recommended for 10 workers, but only 5 agreed to undergo this kind of study.
All five employees on whom biopsy studies of the liver were performed exhibited abnormal clearances of indocyanine green. Two had borderline or mild portal fibrosis on biopsy, one had mild nonspecific activation of hepatocytic nuclei and a borderline increase in fat, one had mild steatosis, and one had moderately severe steatosis with stellate fibrosis that suggested an alcoholic liver injury. Two of the five had enlarged livers. None of the microscopic changes was attributed by the pathologist to the effects of industrial toxins.
In a followup study undertaken by NIOSH upon invitation by the company, 256 employees were surveyed for serum total bilirubin and for alkaline phosphatase, GOT, GPT, and gamma-glutamyl transpeptidase activity. Two criteria of abnormality were used. Criterion A was a deviation greater than two standard deviations from the normal population mean of the laboratory performing the analysis; criterion B was the occurrence of a value outside the normal range used by the laboratory performing the analysis. An abnormally high result by either criterion on any test was regarded as indicative of liver impairment. Duration of employment, work history, exposure conditions, use of alcoholic beverages, current symptoms, history of liver disease, and demographics were also recorded.
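The two criteria lend themselves to a direct restatement. The following is a minimal sketch of the classification logic as described above; the function names are illustrative and any thresholds would come from the analyzing laboratory.

```python
# Sketch of the two abnormality criteria described in the NIOSH survey.
def abnormal_by_criterion_a(value: float, pop_mean: float, pop_sd: float) -> bool:
    """Criterion A: deviation greater than 2 standard deviations from
    the normal population mean of the analyzing laboratory."""
    return abs(value - pop_mean) > 2.0 * pop_sd

def abnormal_by_criterion_b(value: float, normal_low: float, normal_high: float) -> bool:
    """Criterion B: value outside the laboratory's normal range."""
    return value < normal_low or value > normal_high

def liver_impairment_flag(test_results: list[bool]) -> bool:
    """An abnormal result by either criterion on ANY test was regarded
    as indicative of liver impairment."""
    return any(test_results)
```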
Of the variables recorded, the only one significantly related (P<0.001) to abnormality, according to the more stringent criterion B, was duration of exposure at the site: 5.11 years for "cases" and 3.64 years for "noncases." A total of 75 employees (29%) at the plant were classified as abnormal by criterion B on the basis of enzyme activity tests.
On the individual tests, NIOSH found 46 employees (19%) with abnormal elevations of SGPT activity, 42 (16%) with abnormal serum gamma-glutamyl transpeptidase activity, 31 (12%) with abnormal SGOT activity, and 5 (2%) with abnormally elevated serum alkaline phosphatase activity. Only one employee (0.4%) had an elevated serum total bilirubin value.
Every area of the plant had at least one employee who was judged abnormal by one of the two criteria stated. The incidence of abnormal alkaline phosphatase and bilirubin values in this study was lower than that reported previously at a vinyl chloride plant.
The company had tested only a group of workers involved in the polymerization of vinylidene chloride, who were exposed to this monomer at relatively high levels, whereas NIOSH had studied almost 88% of the employees, nearly 5.6 times as many as the company had studied. The incidence of abnormal results on any test would therefore be expected to be lower in NIOSH's study, as was actually the case.
The substances or the relative concentrations responsible for producing adverse effects where exposures are mixed cannot be identified conclusively. The case for the existence of actual liver damage from exposure to vinylidene chloride rests on the company's correlation of microscopic and dye-clearance data with the liver enzyme studies.
In 1970, Henschler et al and Broser et al each reported on the same two cases of poisoning from occupational exposure to vinylidene chloride copolymers that had occurred in Germany in 1965. In both cases the workers had been transporting an aqueous suspension of vinylidene chloride copolymerized with another unspecified vinyl compound. The authors stated that the suspension contained about 0.4% of low-molecular-weight halogenated hydrocarbons, of which vinylidene chloride comprised about one-half. Both workers developed symptoms of poisoning while manually cleaning the transport tank.
The first worker was 33 years old. About 6 hours after he had worked in the tank for a "short" but indefinite time, he experienced fatigue, weakness, lack of appetite, and an "abnormal sense of taste." Nineteen hours after the initial exposure, he again entered the tank and remained for 45 minutes.
Five hours later, he experienced nausea, headache, dizziness, and eventually vomiting of blood. A "furry" feeling in the mouth and lips which he had noticed earlier now became more noticeable. He was admitted to a medical clinic 27-28 hours after his first exposure.
Conjunctivitis, inflammation of the epipharynx, herpes labialis, pains in the epigastrium, perception disorders of the face, and deflection of the tongue to the right were observed.
Liver and kidney function tests were initially abnormal (low urine specific gravity, 3-6 leukocytes in the urine sediment, SGOT 20 mU, 8% BSP retention, 60% prothrombin time, and what was described as decreased water excretion of 570 ml); however, all findings except the sensory effects returned to normal after 3 weeks.
An extensive neurologic examination performed 3 months later revealed analgesia and hypoesthesia in the total trigeminal area, including the nose and oral mucosa. In addition, hypoesthesia and hypalgesia in the region of both ear muscles and under the angle of the jaw and absence of the corneal reflexes were noted.
Other findings were normal except for labile hypertension (blood pressure 170/90 mmHg). Followup examinations conducted 2 and 4 years later showed the same types of findings, and the worker complained of the same symptoms.
The second worker, 53 years old, was exposed in the same way, but for a shorter time than the first .
His initial symptoms were essentially the same as those of the other worker. On admission to the medical clinic, 5 days after exposure, herpes labialis, hypertonic fundus, high blood pressure (170/115 mmHg), mild diabetes mellitus, and polycythemia (5.34 million erythrocytes, hemoglobin 17.2 g, color index 32.4, hematocrit 48%) were observed. Kidney and liver functions were not abnormal. Perception disorders in the face and in the fingertips of both hands, paresis of the muscles of the cheeks and tongue, and bilateral double vision were also noted. After 4 months, the subject complained of loss of the sense of taste, deficient saliva flow, and difficulties in opening his mouth, chewing, and eating.
Findings of a medical examination included hyposmia and hypogeusia, and analgesia, thermoanesthesia, and hypoesthesia in the area of the trigeminal nerve, the skin of the face, the oral mucosa, the top of the head, the tragus, the ear muscles, beneath the angle of the jaw, the base of the tongue, the throat, and the external auditory passages. Corneal, nasal, and vomiting reflexes were all absent. Followup examinations 2 and 4 years later revealed no improvement.
The authors attempted to find mono- and dichloroacetylene in the aqueous mixture because of the close resemblance between the signs and symptoms of intoxication with vinyl derivatives and with acetylene dichloride, but were unsuccessful; however, they postulated that the toxic effects observed in these workers could have been caused by mono- or dichloroacetylene.
They suggested that caution be exercised where the potential existed for exposure to the intermediate products of polyvinylidene chloride. Krieger et al, in a 1971 report, discussed similar effects on a 32-year-old worker exposed to off-gas from an aqueous dispersion of a vinylidene chloride copolymer.
Several hours after receiving a jet of the gas in his face after opening a valve too soon and after manually cleaning a tank used to transport the copolymer, a job that lasted about 2 hours, the worker developed pains in the upper lip, nose, and eyes, a frontal headache, and visual problems. Later he was bothered by a lack of sensation in his face and buccal mucous membranes, somnolence, anorexia, nausea, and difficulty in speaking and eating.
Fourteen days after the incident, an examining physician noted bilateral facial anesthesia, corneal anesthesia, and hypoesthesia. The worker had neuralgia involving the anterior two-thirds of his tongue, but no trigeminal motor involvement or facial motor disorders.
Krieger et al concluded that, because the clinical picture was similar to that described in previously published reports on the toxic effects of exposure to chlorinated acetylenes, these compounds probably were the toxic agents in this case.
Although none of these authors suggested that vinylidene chloride itself was the cause of the "cranial polyneuritis" observed, each suggested that there is a potential hazard to workers exposed to intermediates or impurities of vinylidene chloride copolymerization processes.
# (c) Vinyl Bromide, Vinyl Fluoride, and Vinylidene Fluoride
No reports of toxic effects on humans from exposure to vinyl bromide, vinyl fluoride, or vinylidene fluoride have been located.
# (d) Summary
The human studies reported in this section do not permit comparisons of the modes of action of the various vinyl compounds. Only for vinyl chloride have reports of a full range of tests on a large population of workers been published. The adverse effects observed in humans exposed to vinyl chloride, eg, the serum enzyme aberrations, CNS effects, vascular abnormalities, and tumors, indicate that such exposure is a serious hazard in the occupational environment.
The other vinyl halides are also suspect because of their chemical similarity to vinyl chloride. The paucity of human data for the other vinyls should not be construed as an indication that they are innocuous; the potential hazard from occupational exposure to these compounds was only recently postulated.
The hazards presented by these compounds may vary only quantitatively rather than qualitatively, and the variations are likely to be based on their relative bioreactivities.
# Epidemiologic Studies
Although more than 30 epidemiologic studies of populations subject to occupational and environmental exposure to vinyl chloride have been published since 1971, only one epidemiologic study of workers exposed to vinylidene chloride has been located, and no epidemiologic reports on the other vinyl halides were found. In the first study, the experimental population consisted of 5,011 workers (96.4% male, 95.7% white, mean age 35.8 years).
The control population was the adult male population of Tecumseh, Michigan: 2,407 men over the age of 18. Criteria for selection of the control population were not presented, and analyses comparing vinyl chloride workers with controls were not given. In assessing health status, Dinman et al had each worker complete a questionnaire designed to probe for signs or symptoms related to Raynaud's syndrome or peripheral vascular insufficiency and related hand injuries. Roentgenograms of both hands of each worker were made and were reviewed independently by two radiologists for signs of acroosteolysis.
Medical and occupational histories were also obtained from each employee.
Twenty-five clear-cut cases of acroosteolysis were found in the worker population. These cases met the following diagnostic criteria: defects along the shaft margin, sclerosis with recalcification, and shortening of the phalanges; or marginal defects with residual fragments, transverse defects with or without distal fragmentation, and total resorption of the distal portion of the phalanx. Twenty-two of the 25 workers with abnormalities diagnosed by roentgenographic examination also indicated on the medical questionnaire that they had had symptoms characteristic of Raynaud's syndrome. The authors stated that in a few plants the concentrations of airborne vinyl chloride had been measured inside the reactors during scraping operations, and that the vinyl chloride concentrations had been generally below 100 ppm (256 mg/cu m) and usually about 50 ppm (128 mg/cu m).
Air samples taken close to the hands of the scrapers had contained concentrations of vinyl chloride ranging between 600 and 1,000 ppm; however, the authors did not present details or identify the plants where these measurements had been made. Dinman and coworkers and Cook et al concluded that work practices, rather than any one specific substance or combination of materials used in the manufacturing process, determined whether or not acroosteolysis would occur.
They considered acroosteolysis to be a manifestation of a systemic intoxication rather than of local effects of a toxic material, so prevention of transpulmonary, percutaneous, and gastrointestinal absorption of materials scraped off the walls of the reactors was seen as the first line of defense of the health of the reactor cleaners, the group of employees in which the greatest incidence of this disease was found. Bagger-packers also had a high incidence of acroosteolysis and were required to have protection against absorption of material from the polymerized product from the reactors.
The authors stated that, while gloves (of unspecified type) were provided for reactor cleaners, the use of gloves was "inconsistent" at those plants having workers with acroosteolysis.
They also stated that the procedure for airing out reactors before cleaning was frequently "short-cut" in these plants.
They pointed out that the complexity of the manufacturing processes, which involved at least 227 different materials, including monomers, catalysts, ketones, and chlorinated hydrocarbon solvents, made conclusions about the hazard of any single ingredient difficult.
They also proposed that "idiosyncratic sensitization or susceptibility" be considered as a possible determinant of the development of acroosteolysis. The authors also stated that there were several problems with the consistency of the diagnostic procedures, eg, the radiologists seldom agreed on a specific diagnosis, and with the accuracy of the survey techniques. The authors' investigation of differences in the plants' work practices did not provide an explanation as to why there were definitive cases of acroosteolysis in only 7 of the 32 plants. However, they did report that acroosteolysis was rare in plants using high-pressure water lances for cleaning the reactors and also in those that reduced the pressure within the reactor below atmospheric pressure to the greatest extent and for the longest time before opening the reactor for cleaning.
The authors' suggestion that work practices and engineering controls might not be followed in those plants having cases of acroosteolysis was not documented. The fact that all cases of acroosteolysis were diagnosed in workers who had been employed at some time as reactor cleaners, although only 21% of the total worker population had been employed in this category, indicates that employees performing this task were at greater risk of developing acroosteolysis. Since this task also has been found to have the potential for the highest exposure to vinyl chloride, it is reasonable to assume that exposure to this substance contributes to the induction of acroosteolysis.
# (2) Clinical Tests

In 1972, Kramer and Mutchler described a study in which environmental measurements were compared with clinical test results and medical histories for 98 men who were occupationally exposed to vinyl chloride and to "small amounts" of vinylidene chloride in a polymerization facility.
Medical surveys and physical examinations had been conducted on 66 of these men during 1965 and 1966, and the results were compared with results from a control group of 605 employees in other departments (not identified) who were examined during the same period.
Ninety-five separate items were compared for the two groups by a test for differences between means assuming normal distributions.
Six of 20 clinical variables showed significant correlations (P<0.05) with the cumulative TWA concentration and the cumulative dose of vinyl chloride when allowance was made for the effects of age and obesity. Systolic blood pressure, diastolic blood pressure, BSP retention, icteric index, and serum beta-globulin concentration increased with increasing TWA exposure concentration and total exposure dose (TWA concentration multiplied by time on the job), while hemoglobin concentration decreased with increasing exposure. Although the authors did not present complete information on exposure concentrations, they stated that the mean TWA exposure concentration was 155 ppm (397 mg/cu m) in 1950 and 30 ppm (77 mg/cu m) in 1965. The authors noted that recent (not further defined) measurements of the workplace concentrations of vinyl chloride had shown them to average about 10 ppm (25.6 mg/cu m), with vinylidene chloride present in "trace" amounts, virtually always less than 5 ppm (19.8 mg/cu m). The authors also mentioned that data concerning exposures for each year since 1950 were available, but they did not present these data. Kramer and Mutchler also calculated the expected clinical values for these tests as functions of career TWA exposure concentrations, using the regression coefficients from estimated exposures for the study population.
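The "total exposure dose" used in this analysis is simply the TWA concentration multiplied by time on the job, accumulated over a worker's job history. A minimal sketch follows; the job history shown is hypothetical, although the 155- and 30-ppm figures are the mean TWA concentrations quoted above.

```python
# Cumulative dose as described above: TWA concentration (ppm) on each
# job multiplied by the time spent on that job, summed over the career.
def cumulative_dose(job_history: list[tuple[float, float]]) -> float:
    """job_history: list of (TWA concentration in ppm, years on that job)."""
    return sum(twa_ppm * years for twa_ppm, years in job_history)

# Hypothetical career built from the mean TWA concentrations quoted above:
history = [(155, 10), (30, 5)]  # 155 ppm for 10 yr, then 30 ppm for 5 yr
print(cumulative_dose(history))  # 1700 ppm-years
```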
Because blood pressure and the concentration of hemoglobin in the blood did not move outside the normal range of values and the significance of the change in the concentration of beta-globulin was not known, Kramer and Mutchler considered that the only dependent variables significantly linked to possible injury induced by prolonged exposure to vinyl chloride with trace amounts of vinylidene chloride were BSP retention and the icteric index. These measures indicate some interference with normal liver function.
The two persons with the greatest increases in BSP retention were reexamined in 1968, having been removed from further exposure in 1965. One individual, who had a history of hepatitis before exposure to vinyl chloride, retained high values of BSP retention and icteric index; the other individual had essentially normal laboratory findings.
In 1975, Wyatt et al published an epidemiologic study of the results of selected blood screening tests and medical histories of workers in a chemical plant in Kentucky where polyvinyl chloride was made. Since angiosarcoma had been diagnosed in seven workers in the unit where polyvinyl chloride was manufactured (unit 62) in this plant, results from workers in this unit were compared with those from other workers in the chemical plant who had never worked in unit 62.
There were 413 employees with at least 1 month of experience in unit 62; they had means of 14 and 7 years of experience at the plant and in unit 62, respectively. They were compared with 469 employees who had never worked in unit 62 and who had a mean of 12 years of experience at the plant.
All employees in the study were male, and less than 10% in each group were nonwhite. The average age was 40 in the unit-62 workers and 41 in the other group. Height and weight were similar in the two groups.
Blood tests were performed for several months, beginning in January 1974. Blood was drawn in the early morning after an overnight fast, and the serum was analyzed for total protein, albumin, calcium, inorganic phosphate, creatinine, uric acid, total bilirubin, alkaline phosphatase, LDH, GOT, CPK, creatine phosphate, and cholesterol. Normal values for each test were based on the experience of the clinical laboratory performing the tests. Intergroup differences were determined for the means of each test, and significance was tested by calculation of chi-square. The effects of age were analyzed by regression analysis of the mean values of each test plotted by 5-year age groups.
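The intergroup comparison described above, which classifies each result as below, within, or above the laboratory's normal range and tests the counts by chi-square, can be sketched as follows. The counts here are hypothetical, chosen only to match the two group sizes; scipy's chi2_contingency is one standard implementation of the test.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of (below normal, normal, above normal) results
# for one blood test in the two worker populations.
counts = np.array([
    [12, 360, 41],  # unit-62 workers (n = 413)
    [ 8, 430, 31],  # other workers (n = 469)
])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, P = {p_value:.3f}")  # significant if P < 0.05
```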
The mean values of each test were not significantly different for the two populations. However, when the results of each test were classified as normal, above normal, or below normal, albumin, alkaline phosphatase, and GOT in the blood serum were found to differ significantly (P<0.05) with regard to the percentages in each range. Multiple regression analyses showed significant differences (P<0.03) in the albumin and cholesterol tests for the two populations.
A comparison of the two populations by history of previous illness revealed significant differences (P<0.05) in the incidences of genitourinary disease, which was lower in the unit-62 workers, and "allergic" and "liver-spleen" illness, which were higher in the unit-62 workers. Wyatt et al made no attempt to assess such factors as length of employment, selective criteria for employment, age, or behavioral differences between the groups. They pointed out that many individuals in both groups had abnormal test results, but they stated that this must be "interpreted cautiously," particularly in the absence of a true control group. This study provides no information on exposure to potential chemical hazards for either of the groups.
Without this information, the observations of differences between the groups are of limited value.
In 1977, Waxweiler et al reported on a health survey of four groups of workers. The groups consisted of 134 rubber workers with "no" vinyl chloride exposure, 80 plastics workers with "light" vinyl chloride exposure, 126 chemical workers designated as vinyl chloride "exposed," and 71 former chemical workers who had had "past" vinyl chloride exposure. Information concerning exposure concentrations was not presented. Subjects were classified in one of the first three groups on the basis of their jobs at the time of the health survey. Basic blood screening tests and pulmonary function examinations were performed, and medical histories were obtained.
All test results were adjusted for age, and the results of the pulmonary function tests and reports of respiratory symptoms were also adjusted for smoking. Alcohol consumption was analyzed, but no basis was found for adjustment of the data.
In the total study population, 21% had abnormal SGOT values, 5% abnormal total bilirubin, 13% abnormal alkaline phosphatase, and 4% abnormal LDH.
The prevalence of these abnormalities was similar in all four groups, except that abnormal LDH values were present in 11.8% of the former chemical workers.
The age-adjusted prevalence of hepatomegaly as diagnosed by palpation in current chemical workers (13.2%) was almost twice that in rubber workers (7.1%) and plastics workers (7.3%).
A similar gradient was noted when diagnosis was by percussion alone or by percussion and palpation together. One former and three current chemical workers had both abnormal values on two or more of the four liver function tests and hepatomegaly as diagnosed by both percussion and palpation. Liver scintigraphs, made after injection of 99mTc sulfur colloid, were obtained for 123 workers exposed to vinyl chloride and were read by three specialists in nuclear medicine. In no case did all three specialists agree on whether any single film was abnormal; of the 29 films read as abnormal, only 4 were read as abnormal by 2 reviewers. No significant differences between the groups were reported for symptoms of Raynaud's syndrome. Twenty-two of 207 roentgenograms of the hand were read as abnormal for some state of acroosteolysis by one of two radiologists. Severe, persistent headaches were reported more frequently by the chemical (13.7%) and plastics (12.7%) workers than by rubber (8.6%) and former chemical (6.4%) workers, and loss of consciousness on the job was more common in the chemical (6.3%), plastics (5.2%), and former chemical (5.8%) workers than in the rubber (2.1%) workers. Neurologic examination revealed "slightly" diminished reflexes in the chemical workers' group. The prevalence of angina pectoris, as measured by the Rose Questionnaire for cardiovascular symptoms, was not noticeably different in the four groups. However, a much higher prevalence of systolic hypertension (>140 mmHg) was noted in the former chemical workers.
A significantly higher prevalence of diastolic hypertension (>90 mmHg) was seen in all three vinyl chloride-exposed groups compared with that in the group of rubber workers (39.4-41.0% vs 24.3%).
No differences between the four groups were found in the prevalence of respiratory volume impairment (adjusted for smoking) or of respiratory flow impairment; volume impairment did not differ between smoking and nonsmoking workers, although pulmonary function tests made before and after the workshift showed results related to smoking rather than to job category. Sputum cytologic and chest roentgenographic examinations revealed only "minor" intergroup differences. On the health questionnaire, the plastics workers and former chemical workers reported prevalences of chronic respiratory symptoms "substantially" higher than those in the rubber workers, while the current chemical workers reported prevalences only "slightly" higher than those in rubber workers. Waxweiler et al concluded that the striking increase in LDH abnormalities in the former chemical workers (12% vs 2-4% for the other three groups) might have been a function of self-selection out of the chemical area because of symptoms of associated abnormalities. They also stated that the "most impressive" difference between the groups was the prevalence and degree of hepatomegaly, which showed a "weak" dose-response relationship with vinyl chloride exposure as estimated from job categories. Finally, the authors pointed out that, because of differences between plants in work practices, production techniques, composition of the workforce, the presence or absence of various associated toxins, and other factors, general conclusions about the hazards of vinyl chloride exposure should not be drawn from the results of this single study.
The types of data most valuable in comparisons of epidemiologic reports, such as daily exposures and total accumulated doses, were not available to these authors.
Exposures considered "light" in this plant might have been classified differently in another plant.
The bias introduced by preselection for work and self-selection out of a hazardous environment is not quantifiable at present. The impact of these and other considerations, such as the latency of adverse effects, on the estimation of the hazard of exposure to vinyl chloride remains to be determined.
Waxweiler and coworkers did, however, draw some tentative conclusions that merit further evaluation. The suggestion of a dose-response gradient for hepatomegaly, the significant increase in the incidence of diastolic hypertension in the vinyl chloride-exposed workers, and the severe headaches and loss of consciousness indicate vinyl chloride-induced health hazards that should be closely monitored.
# (3) Mortality and Morbidity Studies

In 1977, Fox and Collier reported on the mortality of over 7,000 men exposed to vinyl chloride at some time between 1940 and 1974 at 8 polyvinyl chloride plants in Great Britain. TWA exposure concentrations were estimated by the companies (presumably on the basis of job descriptions and area sampling data) and classified as high (>200 ppm or 512 mg/cu m), medium (25-200 ppm or 64-512 mg/cu m), or low (<25 ppm or 64 mg/cu m), and as constant (most of the time) or intermittent (occasional).
The study included a total population of 7,409 workers, 23% of whom had 10 or more years of exposure to vinyl chloride.
The Standard Mortality Ratio (SMR) is 100 times the ratio of the number of observed deaths in the population at risk to the number expected to occur from the same cause in a standard population of the same size, on the basis of actual mortality figures. For this population, the SMR for all causes was 75.4, using the sex- and age-standardized death rates for England and Wales for comparison. SMR's for all causes of death computed for the eight factories revealed that at three of them there were significantly fewer deaths than expected, and that all had overall SMR's below 100. The SMR for cancer deaths was "marginally" higher than expected (101.4) in one plant. Four deaths from cancers of the liver were found, compared with 1.64 expected, for an SMR of 243.9. Two of these cancers were confirmed by microscopic examination as angiosarcoma, and two were confirmed as carcinomas rather than angiosarcoma. The two workers who died of angiosarcoma had had high, constant exposures for 8 and 20 years. The two workers with carcinomas had had low, intermittent and medium, intermittent exposures for 6 and 18 years, respectively.
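The SMR itself is a one-line calculation; a minimal sketch using the liver-cancer figures quoted above:

```python
# SMR as defined in the text: 100 times the ratio of observed deaths in
# the population at risk to the deaths expected from the same cause in a
# comparable standard population.
def smr(observed_deaths: float, expected_deaths: float) -> float:
    return 100.0 * observed_deaths / expected_deaths

# The liver-cancer figure reported by Fox and Collier:
print(round(smr(4, 1.64), 1))  # 243.9
```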
Three of the deaths from cancer of the liver occurred in one factory after 1966. One of these was an angiosarcoma. This was significantly in excess of the 0.13 deaths expected in this factory (P<0.01).
Analysis of mortality by year of entry into the industry showed that longer employment was associated with higher SMR's for cancer and circulatory disease .
Data on cancer of the liver suggested a dose-response relationship, since both cases of angiosarcoma of the liver occurred in members of the highest exposure group. There was a general tendency for the age-adjusted SMR for all causes for those men alive 15 years after they began employment to increase with increasing time on the job, from 100.6 (for men employed 4 years or less) to 104.7 (5-9 years) and to 113.3 (10-14 years).
Approximately 75% of the 7,409 workers had been employed for less than 10 years, and more than half of those ever employed were still employed at the time of the study. Since only about one-fourth of the workers had been exposed to vinyl chloride at high concentrations for a long time, and since most of those who had completed 20 years of service had done so only recently, Fox and Collier suggested that there had not been a sufficient followup period during which to evaluate the carcinogenic effect of vinyl chloride. They also pointed out the complicating factors of the "healthy worker effect" and the "survivor effect" in analyzing these data. The healthy worker effect means that most people accepted for employment are healthy, and, as a result, the workplace population tends to be in better health than the general population. The survivor effect stipulates that people experiencing adverse effects at the workplace tend to leave their jobs of their own volition; therefore, the remaining work population is composed of a larger percentage of people who are more resistant to the adverse effects of the industry than the population of all people hired. The authors concluded that vinyl chloride was probably a carcinogen causing cancer of the liver in exposed workers; they noted, however, that the cases of angiosarcoma observed were associated with exposure at "very high" concentrations.
They added that no evidence was found that vinyl chloride caused cancers other than those of the liver, and that, although the SMR for cancers as a group was consistently higher than that for all deaths, this was difficult to evaluate because of population selection factors. These authors' conclusions are necessarily biased by the choice of a general population as the control group, a fact the authors point out in their discussion of preselection and survivor effects. Thus, the relation of observed effects in the worker population to expected effects in the general population may give a less than objective analysis of the potential hazard.
In another report, Fox and Collier examined the effects of selection for work and survival in the industry on mortality in industrial cohorts. They used the previously described worker population and data for these comparisons; however, they compared the employees working at the time of their deaths with those who had left the industry. For all causes of death, the SMR for employees alive after 15 years in the industry was 74.0, while for former employees alive after 15 years, the SMR was 108.4. A comparison of SMR's for cancer of the lung between the two groups was particularly striking: 50 for current workers and 156 for former workers. Results of comparisons for other causes of death by 10-year age groups revealed similar differences in SMR's.
Observed and expected deaths categorized by cause of death and length of employment demonstrated increasing SMR's with increasing length of time on the job. The SMR for all causes of death for all workers progressed from 37.4 (for those employed for 0-4 years) to 62.9 (for those employed for 5-9 years), to 75.1 (for those employed for 10-14 years), and to 94.2 (for those employed for more than 15 years).
The authors concluded that the results of the analyses showed clearly that death rates for employees in the polyvinyl chloride industry depended on preselection for employment, continuing employment in the industry, length of time in the industry, and the length of time during which the cohort was studied. They suggested that mathematical models taking these factors into consideration might be productive alternative methods for analyzing mortality studies. These studies indicate the potential pitfalls of assessing an industrial hazard on the basis of comparisons with the general population. The influences of preselection and survival factors are demonstrated by the findings that SMR's are lower both for workers with less experience and for current workers than for former workers. If these factors actually affect the results of an epidemiologic analysis, the assessment of hazard may be lower than is correct.
In 1974, the results of a retrospective study on 8,334 men with at least 1 year of occupational exposure to vinyl chloride before December 31, 1972, were published by Tabershaw and Gaffey and submitted as a report to the Manufacturing Chemists Association by Tabershaw/Cooper Associates Inc. The study compared the mortality experience of the vinyl chloride workers with that of the general population and with that of other employee groups.
The vinyl chloride workers were separated into subgroups on the basis of intensity and duration of exposure and of combinations of these two factors. These subgroups were compared on the basis of the SMR's for various causes of death.
Thirty-five plants in the United States that either produced vinyl chloride or used it in the production of polyvinyl chloride gave information from their employment records .
Quantitative exposure data were not available for each job, but relative exposures were estimated by plant industrial hygiene and safety personnel.
Actual concentrations were not estimated. The authors calculated an exposure index (EI) for each worker on the basis of an average monthly exposure score ranging from 1 (low exposure) to 3 (high exposure).
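The exposure index is the mean of the monthly scores, and the cross-tabulations reported below split workers at an EI of 1.5 and at 5 years of exposure. A minimal sketch follows; the monthly scores shown are hypothetical.

```python
# Exposure index (EI) as described above: the average of monthly
# exposure scores, each scored 1 (low) to 3 (high).
def exposure_index(monthly_scores: list[int]) -> float:
    return sum(monthly_scores) / len(monthly_scores)

def subgroup(ei: float, years_exposed: float) -> str:
    """Cross-tabulation cells used in the study: EI above or below 1.5,
    duration above or below 5 years."""
    ei_part = "EI >= 1.5" if ei >= 1.5 else "EI < 1.5"
    dur_part = ">= 5 yr" if years_exposed >= 5 else "< 5 yr"
    return f"{ei_part}, {dur_part}"

scores = [1, 1, 2, 3, 2, 1]               # hypothetical monthly scores
print(round(exposure_index(scores), 2))    # 1.67
print(subgroup(exposure_index(scores), 6)) # 'EI >= 1.5, >= 5 yr'
```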
The median birth year of the 7,128 workers traced successfully was 1931, the median duration of exposure was 80 months (6.7 years), the median EI was 1.44, and the median year in which exposure began was 1962. With age-specific death rates as the standard of comparison, SMR's were calculated for approximately 30 causes of death.
No SMR for any cause of death was significantly greater than 100, and SMR's for several causes of death were significantly below 100. For example, the SMR for "all causes" was 75 (352 observed deaths vs 467 expected) and that for cardiovascular and renal diseases was 80.
When the vinyl chloride workers were subdivided according to EI (more or less than 1.5) and duration of exposure (above and below 5 years), no remarkable findings emerged from these tabulations.
No SMR's were significantly above 100, although several were significantly below 100. Several trends were apparent, however, from the cross tabulations. SMR's for all malignant neoplasms increased with increasing EI and duration, reaching an SMR of 141 for an employment duration of 5 or more years and an EI of 1.5 or greater. Cardiovascular and renal diseases showed a similar trend, although the SMR's generally remained below 100 for all causes of death except hypertensive disease other than cardiac. There were slight, nonsignificant excesses of observed deaths from respiratory system, digestive organ and peritoneum, and "other" cancers that increased in relation to increased duration of employment and estimated exposure. Cancers of the buccal cavity and pharynx also appeared in excess but had their highest rate of occurrence in the low, short-exposure group.
The authors stated that the lower than expected overall mortality of the vinyl chloride workers was not a surprising finding because of the "healthy worker effect," even though vinyl chloride poses a significant risk of death from a particular cause, ie, angiosarcoma of the liver. Deaths from cancers of the digestive organs and peritoneum were further examined to study the role of angiosarcoma in overall mortality. Of the 19 deaths from cancers of the digestive organs and peritoneum, 7 were due to cancers of the liver, 2 of which were identified on the death certificates as angiosarcoma.
However, according to the authors, other investigators using the same study population identified four other deaths from angiosarcoma; the death certificates stated the causes of death in three of these as cancer of the liver and in one as cirrhosis of the liver. Laennec's cirrhosis was given as an alternative cause of death on one death certificate identifying cancer of the liver as the primary cause. If there had been no cases of angiosarcoma, the difference from the expected number of cancers of the digestive organs would have been insignificant.
The authors concluded that the "consistent pattern of increase" for particular causes of death with increasing exposure "appears" to relate mortality from cancer of the digestive system or respiratory system, cancer of other unspecified sites, and lymphosarcoma to vinyl chloride exposure. They pointed out areas of possible bias in the study. The use of the US male population as a comparison group may have caused a slight overestimate of the SMR's, since the study population was from the eastern half of the United States, where expected mortality is higher; also, 15% of the workers could not be traced, and the assumption that their mortality distribution was similar to that of those traced may have been incorrect. The data obtained on workers who could not be traced showed that, on the average, they were born 10 years earlier than the study group and had much shorter exposures and slightly higher EI's.
The effect that these differences may have had on mortality is uncertain. Also, 1,500 workers whose exposures had occurred up to 35 years earlier were located too late to be evaluated in the study. Information about workers exposed for an extended period many years before might have more clearly elucidated the effects of occupational exposure and been especially valuable because of the apparently long latent period for vinyl chloride-induced disorders.
Although the authors did not demonstrate a statistically significantly increased risk from exposure to vinyl chloride in this worker population, the observation that the SMR's for various cancers increased with increasing duration of employment and increasing estimated exposure suggests that exposure to vinyl chloride indeed contributes to an increased cancer mortality risk. A followup study found some differences from the earlier results; none of these differences changed the major conclusions of the prior study, and one more death from angiosarcoma of the liver was discovered.
A final report, which included the data from the above studies, was prepared by Equitable Environmental Health Inc in 1978.
This report increased the total worker group available for study to 10,173 from 37 plants. Although the successive additions to the study population resulted in changes in the SMR's, these changes were not significant or compound-related and did not cause any change from the conclusions reported in previous studies.

Monson et al published two nearly identical reports in 1974 and 1975. They evaluated the mortality of the active and pensioned workers at two plants in Kentucky, one of which was the polymerization plant where the first cases of angiosarcoma were seen, the other of which produced the vinyl chloride used by the former. One hundred and sixty-one death records were analyzed for this study. Causes of death were taken from company abstracts of the death certificates, and the number of observed deaths from each cause, stratified into 5-year age- and time-specific groups, was compared with the number expected on the basis of age-, time-, and cause-specific proportional mortality ratios for US white males. While the comparison did introduce an obvious bias in the statistical treatment of results, the authors adequately pointed this out in their discussion.
The trend in the ratios of observed to expected deaths from all cancers is a particular cause for concern, since it indicated that the proportion of deaths caused by cancer was on the increase in this workforce. It must be remembered, however, that this was a study of workers at only two plants, and that the trend may not have been related to the vinyl chloride exposures alone.

Ott et al examined the mortality during 1942-1973 of 594 employees exposed to vinyl chloride between 1942 and 1960. Several methods of sampling and analysis of airborne vinyl chloride had been used since 1950. Samples showed vinyl chloride concentrations before 1959 to have been generally well below 500 ppm (1,280 mg/cu m), with occasional excursions to 4,000 ppm (10,240 mg/cu m). In 1959, the company established a guideline of 50 ppm (128 mg/cu m) as an 8-hour TWA exposure limit, and subsequent exposures were "generally found" to be below this concentration. Occupational histories were obtained from plant records, and exposures in the two largest units, which accounted for 466 of the workers, were classified as low (<25 ppm; <64 mg/cu m), intermediate (25-200 ppm; 64-512 mg/cu m), or high (>200 ppm; >512 mg/cu m), and each worker was assigned to one of these exposure groups on the basis of the highest estimated concentration at which he had been exposed for longer than 1 month. The 128 workers from the three smaller production units, for which industrial hygiene data were not adequate to characterize exposures, were classified as having unmeasured exposures, which company industrial hygienists estimated were primarily in the low to intermediate range. For purposes of analysis, each exposure group was subdivided according to exposure at that concentration for less than 1 year or for 1 year or more. Exposures of workers to vinyl chloride at lower concentrations than their assigned category were not considered; for example, worker exposure at concentrations between 25 and 200 ppm for 6 months, and less than 25 ppm for 10 years, would be classified as intermediate exposure for less than 1 year.
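The paired ppm and mg/cu m values quoted throughout this chapter follow the standard gas conversion at about 25 C and 1 atm, where a mole of gas occupies roughly 24.45 liters; for vinyl chloride (molecular weight about 62.5) this works out to about 2.56 mg/cu m per ppm. A minimal sketch of the conversion:

```python
def ppm_to_mg_per_m3(ppm: float, mol_weight: float,
                     molar_volume_l: float = 24.45) -> float:
    """Convert a gas concentration from ppm (v/v) to mg/cu m,
    assuming ~25 C and 1 atm (molar volume ~24.45 L/mol)."""
    return ppm * mol_weight / molar_volume_l

# Vinyl chloride, MW ~62.5: 500 ppm -> ~1,280 mg/cu m as in the text.
print(round(ppm_to_mg_per_m3(500, 62.5)))  # -> 1278
```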
All but one of the employees were traced through 1973. Causes of death were obtained from death certificates for all deceased workers except one who had been employed in a low-exposure job for less than 1 year. Expected numbers of deaths for each exposure classification were calculated from US death data for white males by determining the number of person-years in each exposure group over 5-year periods by 10-year age groups.
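Expected deaths of this kind are conventionally accumulated stratum by stratum: the cohort's person-years in each age-by-calendar-period cell are multiplied by the matching reference death rate and summed. A minimal sketch (all strata and rates below are hypothetical):

```python
# Each stratum pairs the cohort's person-years with the reference
# death rate (deaths per person-year) for that age group and period.
strata = [
    (1200.0, 0.002),  # person-years, rate: e.g. ages 35-44, 1950-54
    (800.0, 0.005),   # e.g. ages 45-54, 1955-59
    (300.0, 0.012),   # e.g. ages 55-64, 1960-64
]

expected_deaths = sum(py * rate for py, rate in strata)
print(round(expected_deaths, 1))  # -> 10.0
```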
The total number of deaths in the study population was 89, compared with 100.1 expected deaths [86]. Malignancies accounted for 20 deaths vs 17.9 expected, and none of these were cancers of the liver. There were three deaths from cirrhosis of the liver (3.1 expected), all in workers exposed to vinyl chloride for less than 1 year. Seventy-two of the vinyl chloride workers had also been employed in an arsenicals production facility where workers previously had an increased cancer mortality risk, and these workers were therefore not included in the population used for the vinyl chloride study.
With arsenicals workers excluded, total deaths in the population of 522 vinyl chloride workers numbered 79, 91% of the number expected from US death rates for white males, and the number of deaths due to malignancies was 13, compared with 16.0 expected deaths. When the mortality data were stratified according to exposure concentration, 9 of these deaths from cancer were found to have occurred in the 163 workers with high exposure, compared with 5.1 expected, and 6 of these were in workers exposed at high concentrations for longer than 1 year (2.9 expected). Statistical comparison of the ratio of observed to expected deaths from cancer in the high-exposure group with that of all other groups combined, assuming a Poisson distribution for small sample sizes, showed a significant difference (P<0.025).
When only deaths occurring 15 years or more after the employee's initial exposure to vinyl chloride were included, eight of nine deaths from malignancies were in the high-exposure group (P<0.01). A survey of case histories of the 13 vinyl chloride workers who died from cancer showed that 2 had also had substantial exposure to benzene, 1 had a family history of malignancies that included 4 deaths in his immediate family, and at least 3 of the 5 who died from lung cancer were smokers.
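With counts this small, such comparisons rest on the Poisson distribution. A minimal sketch of a one-sided exact Poisson tail for the high-exposure figures quoted above; the authors actually tested the ratio in the high-exposure group against all other groups combined, so this simple tail probability is illustrative rather than a reproduction of their P<0.025:

```python
from math import exp, factorial

def poisson_upper_tail(observed: int, expected: float) -> float:
    """P(X >= observed) when X ~ Poisson(expected)."""
    return 1.0 - sum(exp(-expected) * expected ** k / factorial(k)
                     for k in range(observed))

# 9 cancer deaths observed in the high-exposure group vs 5.1 expected.
print(round(poisson_upper_tail(9, 5.1), 3))  # -> 0.075
```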
Ott et al [86] concluded that, although the number of deaths was small, the distribution of malignant neoplasms with respect to exposure categories suggested a possible dose-response relationship.
No associations were noted for malignant effects in the lower-exposure categories where the estimated TWA exposure concentration ranged from 10 to 100 ppm.
In 1974, Nicholson et al published a retrospective study of the mortality of a cohort of workers involved in the production of polyvinyl chloride at a plant in New York. From company and union records, 257 men were identified who had begun employment between 1946 and 1963 and who had worked for at least 5 years in the plant. The current health status of 255 workers was determined. More than half of the employees had worked primarily in polyvinyl chloride production, where reactor cleaning was routinely performed without respiratory protection, causing exposure to vinyl chloride. Approximately 25% of the workers had been employed in maintenance, where exposures to vinyl chloride could be significant during repair work, and the remaining workers had been employed mainly in the shipping department and the laboratory. No records were kept of the vinyl chloride concentrations at which the workers were exposed. The only measurements made of environmental concentrations of vinyl chloride were those necessary to ensure that the explosive limit of 30,000 ppm (76.8 g/cu m) was not exceeded. The authors estimated that peak concentrations might often have exceeded 1,000 ppm (2,560 mg/cu m) and may occasionally have reached 10,000 ppm (25.6 g/cu m). This estimate was based on the results of medical examinations conducted in March 1974 on a group of active and past workers, more than 50% of whom reported symptoms of dizziness, headache, or euphoria, and 4% of whom had lost consciousness on the job, in comparison with the known development of these symptoms at high concentrations. The median age of the workers on the 10th anniversary of their employment was approximately 37 years, with 16% under 30.
Of the 255 traceable workers, 82 had retired or were working elsewhere, 24 had died, and 149 were still employed at the same plant. Mortality among the workers was higher than expected based on death rates for New York State, excluding New York City, for deaths due to all causes and all cancers in groups exposed 10-15, 15-20, and 20-25 years, but not in the group exposed for 25 or more years. These increases were not statistically significant, however.
There were 10 deaths from all causes vs 6.1 expected in the group exposed 10-15 years, 7 vs 6.6 in the group exposed 15-20 years, 7 vs 5.0 in the group exposed 20-25 years, and 0 vs 1.3 in the group exposed 25 years or more. The observed vs expected numbers of deaths from all cancers in these groups were 3 vs 1.2, 3 vs 1.4, 3 vs 1.1, and 0 vs 0.3, respectively. There were three deaths caused by angiosarcoma of the liver and three deaths from other cancers (glioblastoma, reticulum cell sarcoma, and lymphosarcoma) that the authors suggested might have been related to exposure to vinyl chloride.
The authors reported that the three cases of angiosarcoma occurred in workers who had been exposed for the first time before 1951.
They suggested that, as the time from first exposure increased, additional cases of occupational cancer might occur in this group of workers.
The authors concluded that these data demonstrated the need to prevent exposure to vinyl chloride and to monitor, screen, and further study workers previously exposed to vinyl chloride.
In 1977, Chiazze et al published a study of 4,341 deaths that occurred during 1964-1973 in a population of polyvinyl chloride production plant workers who had had exposure to vinyl chloride. A total of 55 plants of 17 companies supplied mortality data on employees who had died either while actively employed, after retiring from the company with retirement benefits, or after terminating employment but while still covered by a company-sponsored insurance program.
A frequency distribution based on hospital and area was calculated for subjects whose cause of death was listed as cancer or liver disease.
When one hospital or several hospitals in the same area appeared on several of these death certificates, a Registered Records Administrator visited these hospitals and reviewed all of their pathology records to determine whether any cases of angiosarcoma of the liver had been diagnosed. Five cases of angiosarcoma were found in this manner, but none of these persons had ever been employed at any of the plants in the study.
Over the 10-year study period, the size of many of the plants changed drastically. The authors concluded that there "appeared" to be excesses in cancer mortality for both men and women in this study and that these excesses "appeared" to be concentrated in cancers of the digestive system. They listed several factors that made definitive interpretation of the data difficult. The use of PMR's, they pointed out, does not take into account the absolute risk of death in a population.
Also, the overall favorable mortality of working populations, which has been found in numerous investigations, was not considered in this study. The authors did state, however, that their results "appeared to be consistent" with previously published studies, and that these results suggested a need for continued investigation.

In each cancer mortality category, the calculated SMR was higher for the 15-year group than for the 10-year group. The authors stated that this demonstrated the importance of considering latency when looking for occupationally induced malignant neoplasms. They suggested that restricting the study to at least 10 years after initial worker exposure had minimized the "healthy worker effect," thereby eliminating the overall deficit of deaths in workers exposed to vinyl chloride often reported by other investigators. A similar pattern of latency had been reported previously by Nicholson et al. These results demonstrate an increased risk of cancer for vinyl chloride workers and show the essentiality of taking latency into account.
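The PMR mentioned above compares only the distribution of deaths by cause, not the rate of death itself, which is the limitation the authors note. A minimal sketch with hypothetical counts:

```python
def pmr(cause_deaths: int, total_deaths: int,
        reference_proportion: float) -> float:
    """Proportional mortality ratio: the observed share of deaths from
    one cause over the share expected from the reference population,
    times 100. It carries no information about absolute risk."""
    return 100.0 * (cause_deaths / total_deaths) / reference_proportion

# Hypothetical: 30 digestive-system cancer deaths among 400 total,
# where such cancers account for 5% of reference-population deaths.
print(pmr(30, 400, 0.05))  # -> 150.0
```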
In 1975, Duck et al reported on the mortality experience of workers exposed to vinyl chloride in a plant in Great Britain. A total of 2,120 male workers who had been exposed to vinyl chloride at some time since 1948 was identified from company records.
Information from death certificates was compiled for these workers, and the mortality was compared with that of men in England and Wales. Workers who died at more than 74 years of age or before 1955 were excluded from consideration in both populations. SMR's were computed on the basis of cause of death, job description, duration of exposure, and year of first exposure. No significant excess mortality was found for the workers in any of the comparison categories. No cases of angiosarcoma of the liver were identified in either population within the study period; however, the authors stated that one such death occurred after the end of the study and before the paper was published.
The authors pointed out that their comparisons did not allow for the selective effects of employment.
They also noted that their findings conflicted with those of Monson et al.
The significance of these observations is difficult to determine, since the authors did not supply exposure information in this report. It is possible that, because of unique work practices and engineering and administrative controls, the mortality of workers from this plant may not be representative of workers from other plants or from the industry as a whole.

In none of the groups was the percentage of aberrations found in workers significantly different from that found in controls. The authors stated that the difference in mean age between the exposed and preemployment groups was not considered to be a "confounding" factor in their analyses, since the cytogenetic change most often associated with aging was chromosomal loss rather than chromosomal breakage. The authors concluded that adverse cytogenetic effects could be avoided in "controlled, minimal-exposure environments," but they did not suggest a quantitative exposure limit.

The frequency of breakage was significantly higher (P<0.05) in all three of the industry groups than in the CDC controls, but there were no significant differences between the industry groups.
The majority (86%) of the aberrations observed in each group were chromatid gaps. Chromatid breaks and isochromatid gaps and breaks were also observed.
The authors also attempted to relate frequencies of breakage to duration of employment. No significant gradient was reported for the high and low exposure groups, but a significant gradient (P<0.01) was observed for the industry controls (100-199 months, 0% breaks, 2 subjects; 200-299 months, 1.3% breaks, 2 subjects; 300-399 months, 6.7% breaks, 13 subjects). The authors stated that the interpretation of this gradient was "uncertain" because of the small number of employees, and because the mean age increased with employment duration (31.0, 44.5, and 51.8 years, respectively).
The authors noted that their observations were not inconsistent with previous studies suggesting a twofold increase in chromosome breakage frequency with occupational exposure to vinyl chloride; however, they concluded that the finding that the overall frequency of breakage did not differ significantly among the high, low, and negligibly exposed industrial groups indicated that other agents in the chemical/rubber plant were capable of inducing the breaks, making it impossible to relate a particular agent to the abnormal effects observed. This difficulty does not diminish the concern warranted by the increased frequency of chromosome damage seen in chemical industry workers compared with that in workers outside the chemical industry.

The difference was statistically significant (P<0.001); however, the frequency of abnormal cells was highest in the employees who had been exposed to vinyl chloride for the shortest period.
Purchase et al examined 100 lymphocytes from each of 56 polyvinyl chloride workers and 24 workers with no exposure to vinyl chloride. Workers who had been exposed to X-rays, prolonged drug treatment, or recent viral infections were excluded from the study. Exposed workers had 6.3% of cells with breaks and gaps, 1.45% of cells with unstable changes, and 0.38% of cells with stable changes. Nonexposed workers had 3.63%, 0.46%, and 0.09% of cells in these respective categories. The authors stated that all group differences were statistically significant (P<0.05).
A secondary analysis was conducted by Dresch and Norwood on the data from a number of studies of human lymphocytic chromosomal aberrations. This analysis made use of a binomial model, assuming a homogeneous group, and the Cochran model, a model with provision for differences among clusters. The authors stated that the secondary analysis of each paper agreed with the authors' analysis, and that the cumulative data from all papers combined showed a statistically significant increase in the frequency of chromosomal aberrations with increasing exposure to vinyl chloride or polyvinyl chloride.
They pointed out, however, that the frequencies of aberrations were small, only a "fraction" of background, and that, in view of the inconsistency between laboratories, the positive statistical evidence should be "treated with caution."
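The binomial model referred to treats each scored cell as an independent trial, so two groups can be compared with an ordinary two-proportion test; the cluster adjustment of the Cochran model is omitted here. A minimal sketch, with cell counts back-calculated for illustration from the Purchase et al percentages:

```python
from math import erf, sqrt

def two_proportion_p(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided P value for the difference between two proportions
    (normal approximation with pooled variance); treats every cell
    as an independent trial, ignoring worker-level clustering."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# ~6.3% of 5,600 exposed cells vs ~3.63% of 2,400 control cells with
# breaks and gaps (56 and 24 workers, 100 cells scored per worker).
print(two_proportion_p(353, 5600, 87, 2400))  # -> well below 0.001
```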
# Reproduction Studies
Infante et al, in 1976, reported on a study of pregnancy outcome in wives of workers exposed to vinyl chloride.
Interviews were conducted with 95 vinyl chloride polymerization workers and a control group of 158 rubber and polyvinyl chloride workers who were known to have had little or no exposure to vinyl chloride monomer. Paternal age, pregnancy outcome, and estimates of the date of conception for all pregnancies were obtained. Wives of the workers were not interviewed, and maternal age and health status were not determined. The fetal death rates in the families of the exposed workers were age-matched, for paternal age, with the control group, and the two groups were compared before and after exposure, ie, to vinyl chloride monomer for the exposed group and to rubber and polyvinyl chloride fabrication emissions for controls.
Prior to exposure, there was no significant difference in fetal death rates between the offspring of vinyl chloride workers and those of the controls (6.1% and 6.9%). After exposure, however, this rate was significantly higher (P<0.05) for offspring of vinyl chloride monomer workers (15.8% vs 8.8%). To determine the effect on the overall rate of fetal deaths of the data from women who chronically experienced abortions, workers' wives who had had more than two abortions were excluded from the calculations. The results showed that prior to exposure the offspring of the controls had a higher fetal death rate than those of vinyl chloride workers (6.9% vs 3.1%), but that after exposure of the fathers the rate for the offspring of the vinyl chloride workers was the higher (10.8% vs 6.8%). Infante et al postulated that germ-cell damage in the father was the leading possibility as the cause of fetal death.
Unfortunately, this study relied on indirect knowledge of the key variable, fetal deaths, by interviewing only the fathers. Therefore, the reliability of this information must be considered questionable.

In a study by Edmonds et al of CNS birth defects among families living near a polyvinyl chloride plant in Kanawha County, no seasonal variation in the rate of CNS birth defects was noted.
The reproductive histories of mothers in both groups were similar in the number of previous pregnancies and the percentage of live births and of other children with congenital anomalies. However, a family history of birth defects was reported by 11 of the 41 families in the case group but by only 4 of the 41 families in the control group. A family history of CNS birth defects was reported in 5 of the case families but in none of the control families.
Occupational histories showed that two fathers in each group had been employed in the local polyvinyl chloride plant at the time their infants were conceived, and that none of the mothers had ever worked in the polyvinyl chloride plant.
In evaluating the distance from the polyvinyl chloride plant to both the place of work and the place of residence of the parents, Edmonds et al found no significant differences between the birth defect and the control groups. However, multivariate analysis showed that, in families living within 3 miles of the plant, CNS birth defect cases were concentrated in the area northeast of the plant (P<0.02). The authors attempted to determine whether this clustering was related to atmospheric vinyl chloride concentrations, but concluded that existing data on plant emissions and meteorologic conditions were insufficient to reconstruct vinyl chloride concentrations at the time of conception. Edmonds and his associates concluded that there was no evidence that the high rate of CNS birth defects in Kanawha County for 1970-1974 was related to vinyl chloride exposure.
# (6) Summary
Although the results of the various vinyl chloride epidemiologic studies are sometimes contradictory, there is substantial evidence that workers in vinyl chloride plants may have an increased incidence of atypical liver function. A report that suggested an increased fetal mortality due to exposure of the fathers to vinyl chloride has also been published; however, the method of data acquisition for this study was of questionable validity.
Two reports seem to indicate that birth defects and anomalies in offspring of parents living in the vicinities of vinyl chloride polymerization plants probably are not related to exposure to vinyl chloride.
Each of these studies has similar deficiencies, eg, the lack of exposure information for most cohorts, the relatively few workers for whom significant time has elapsed since first exposure, ie, >15 years, and the difficulties of assessing the "healthy worker" and "survivor" effects. Without this information, it is impossible to quantitate the hazards from exposure to vinyl chloride.

Comparison of the results of mortality experience of the total cohort, the cohort with 15+ years of experience, and the cohort with a total calculated dose exceeding 500 ppm-months (1,985 mg/cu m-months) showed no significant increase for any cause of death.
Comparison of the results of 17 clinical laboratory parameters showed no significant differences between the matched pairs of exposed and control workers.
Regressions of the individual pair differences on estimated cumulative dose and duration of exposure showed no positive correlations at the 0.05 level.
The authors concluded that there were no findings "statistically related or individually attributable to vinylidene chloride exposure" in the cohort studied.
The study included few workers, however, and they were all from a single plant. The authors recommended that additional epidemiologic studies be conducted to develop information on chronic exposure to vinylidene chloride. This study does not indicate that there is an occupational hazard from exposure to vinylidene chloride.
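Cumulative dose figures such as the 500 ppm-months cutoff above are time-weighted sums over a worker's exposure history. A minimal sketch, assuming each employment spell can be represented by one estimated concentration:

```python
def cumulative_dose_ppm_months(spells: list[tuple[float, float]]) -> float:
    """Sum of (estimated concentration, ppm) x (months at that
    concentration) over a worker's employment spells."""
    return sum(ppm * months for ppm, months in spells)

# Hypothetical history: 24 months at 10 ppm plus 36 months at 5 ppm.
print(cumulative_dose_ppm_months([(10.0, 24.0), (5.0, 36.0)]))  # -> 420.0
```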
# (c) Vinyl Bromide, Vinyl Fluoride, and Vinylidene Fluoride
No epidemiologic studies on workers exposed to vinyl bromide, vinyl fluoride, or vinylidene fluoride have been located.
# Animal Toxicity
The results of experiments involving exposure of laboratory animals to vinyl halides show that some of these compounds can induce the same toxic effects on rodents as on humans, including the characteristic angiosarcoma of the liver. No lifetime animal experiments have been located that demonstrate a no-observable-adverse-effect concentration for any of the vinyl halides.

The recalculated LC100 values were 150,000 ppm for mice, 210,000 ppm for rats, and 280,000 ppm for guinea pigs and rabbits.
The authors reported that death was preceded by excitement, contractions and convulsions, and accelerated respirations. The excitement phase progressed to a state of "tranquility" characterized by Cheyne-Stokes breathing and circulatory disturbances as indicated by cyanosis and conjunctival congestion. This phase then progressed into a state of deep narcosis that was followed by respiratory failure and death.
Abreu and coworkers surveyed the anesthetic effects of 18 halogenated hydrocarbons, including vinyl chloride, on mice. Although control procedures were not discussed, it is assumed that each animal served as its own control.
The minimal certain anesthetic concentration (EC100 for anesthesia), highest tolerated concentration (LC0), 50% anesthetic concentration (EC50 for anesthesia), and 50% lethal concentration (LC50) on inhalation were estimated. To establish each point on each substance's effect-concentration curve, mice were exposed at a series of concentrations.
Pulmonary resistance increased, and pulmonary compliance and respiratory minute volume decreased, with increasing vinyl chloride concentrations. The only values of the exposed animals that were significantly different (P<0.05) from those of controls were pulmonary resistance (15.35% higher than controls) and respiratory minute volume (12.30% lower than controls) at a vinyl chloride concentration of 10%. No significant changes in heart rate or aortic blood pressure were observed in these animals.
Oster et al, in 1947, investigated the anesthetic effects on dogs of vinyl chloride gas mixed with oxygen. Two dogs were "momentarily" exposed to vinyl chloride at a 50% concentration (500,000 ppm; 1,280 g/cu m), which was then decreased to 7% (70,000 ppm; 179.2 g/cu m) by volume.
During exposure, the dogs maintained good abdominal relaxation, but their legs became rigid and muscular movements became uncoordinated. A third dog, anesthetized with 25% vinyl chloride, had signs similar to those in the first two dogs. After exposure the dogs continuously "crowed" and salivated heavily.
Four additional dogs received a local anesthetic, monocaine hydrochloride, and their blood pressures were checked by cannulation of the femoral artery.
After control blood pressure measurements were obtained, the animals were anesthetized with 10% vinyl chloride (100,000 ppm; 256 g/cu m) in oxygen. During the vinyl chloride exposure, the dogs had normal blood pressures, but they showed such cardiac irregularities as intermittent tachycardia, ventricular extrasystoles, and vagal beats. Similar irregularities were detected with a stethoscope in noncannulated dogs on the same exposure regimen.
Six other dogs were anesthetized with 10% vinyl chloride (100,000 ppm) in oxygen. Electrocardiographic (ECG) records (lead II) were obtained at exposures to vinyl chloride (not further defined) sufficient to produce light surgical anesthesia, surgical anesthesia, and threatened respiratory collapse. At exposures producing surgical anesthesia, several changes were noted in the cardiac rhythm, especially marked tachycardia followed by bradycardia.
In addition, two of the six dogs showed R-wave inversions, and one dog showed incipient ventricular fibrillation. All six dogs showed abnormalities in the ECG record, ranging from sinus arrhythmia and transitory extreme left axis deviation to atrioventricular (A-V) blocks, ventricular tachycardia, ventricular multifocal extrasystoles, and inversion of the T-wave with elevated ST segments. As the concentration of vinyl chloride was increased toward that necessary for respiratory failure, the ECG abnormalities disappeared except for the greatly reduced R-wave amplitude.
The authors concluded that vinyl chloride caused muscular incoordination in the extremities and serious cardiac arrhythmias. They also concluded that vinyl chloride was not safe as an anesthetic and that its use in humans was not warranted.
In 1974, Belej et al reported the changes in cardiac function of rhesus monkeys exposed to vinyl chloride at concentrations of 2.5, 5.0, and 10.0% (25,000, 50,000, and 100,000 ppm; 64, 128, and 256 g/cu m). Three animals were exposed at each concentration.
They were anesthetized with sodium phenobarbital, their tracheae were cannulated for artificial respiration, and their chests were opened by midsternal incisions to allow measurement of myocardial contractility and pulmonary arterial, aortic, and left atrial pressure. The vinyl chloride mixtures were administered for periods of 5 minutes alternating with 10-minute exposures to room air. The number of test periods for each animal was not specified. Control procedures were not reported, but each animal probably served as its own control.
The heart rate and aortic, left atrial, and pulmonary arterial pressures were not significantly different between any of the experimental and control groups.
The force of myocardial contraction in animals exposed to 10% vinyl chloride was significantly different from that in controls.

Carr et al examined the cardiac effects of vinyl chloride inhalation in combination with epinephrine. The concentrations used were too high to allow a conclusion about whether inhalation of vinyl chloride may pose an acute threat to workers in times of fear-provoking stress or to those who may take medication containing epinephrine for asthma control or other reasons.
Clark and Tinston exposed beagle dogs to vinyl chloride at various concentrations by mask for 5 minutes. During the final 10 seconds of exposure, a 5-µg/kg injection of adrenaline was administered iv. ECG's were recorded and analyzed for serious arrhythmias, such as multifocal ventricular ectopic beats or ventricular fibrillation. At each concentration, four to seven dogs were exposed. Vinyl chloride was calculated, by a moving average interpolation technique, to have induced cardiac sensitization in 50% of the animals at a concentration of 5%.
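The moving average interpolation cited is a standard bioassay device for locating the concentration that affects half the animals; the sketch below shows the simpler idea underneath it, log-linear interpolation of the 50% point between two bracketing concentrations (the response data are hypothetical):

```python
from math import log10

def interpolate_ec50(c_low: float, frac_low: float,
                     c_high: float, frac_high: float) -> float:
    """Log-linear interpolation of the concentration producing a 50%
    response rate between two bracketing test concentrations."""
    span = (0.5 - frac_low) / (frac_high - frac_low)
    return 10 ** (log10(c_low) + span * (log10(c_high) - log10(c_low)))

# Hypothetical bracketing points: 1 of 6 dogs sensitized at 2.5%
# vinyl chloride and 5 of 6 at 10% puts the midpoint near 5%.
print(round(interpolate_ec50(2.5, 1 / 6, 10.0, 5 / 6), 1))  # -> 5.0
```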
In 1970, Viola reported on the effects on rats of exposure to vinyl chloride. Twenty-five male albino Wistar rats with an average weight of 150 g were exposed to vinyl chloride at a concentration of 3% (30,000 ppm; 76.8 g/cu m) for 4 hours/day, 5 days/week, for 1 year.
Twenty-five similar rats served as controls.
Throughout the exposure period, the general physical appearance, behavior, and body weight of the animals were monitored.
After the exposures, some of the exposure survivors and some of the controls were killed at 20-day intervals, and gross and microscopic changes in the paws, brain, liver, kidneys, and thyroid were noted. The number of rats surviving, the number of examinations conducted, and the number of rats killed at each interval were not specified.
Exposure to vinyl chloride did not significantly affect growth but was slightly soporific to the animals during the first 10 months of exposure.
During the 11th month, the exposed rats had lower body weights and showed less aggressiveness and less reaction to external stimuli than the controls; they also suffered disturbances in equilibrium. No details about the observations made were presented. Of the rats exposed to vinyl chloride, 13 died from cardiopulmonary complications and 2 died from hematoperitoneum. On microscopic examination, the author observed that most of the animals showed pathologic changes in the brain, liver, kidneys, and thyroid, and that six rats showed skeletal alterations. He did not report whether these changes were observed in the rats that died during exposure or in those that were killed after the exposure. The paws of the vinyl chloride-exposed rats had areas of hyperkeratosis, superficial thickening of the epidermis, and disappearance of the cutaneous "adnexa."
In addition, vacuolization and degeneration of the basal layer, a modest increase in the papillar layer, and edema of the epidermis were noted. The connective tissue of the skin showed fragmentation and decreased elastic reticulum and dissociation of collagen bundles. The small arterial vessels of the paws showed signs of endothelial fibrosis, and some vessels were completely occluded by proliferation of connective tissue.
Extensive metaplastic proliferation of cartilage-like material was found around the small metatarsal bones.
The edges of the material were irregular, with differential growth that resulted in outward bending of toes. In areas of mature compact bone, the cartilaginous elements were grouped around a central nucleus of the bone, giving the appearance of chondroid metaplasia. In small bones, this chondroid metaplasia was often extensive, and the bones appeared to be impregnated with a mucoid substance that obliterated the cement lines by altering the characteristic deposition of the bone tissue.
Microscopic examination of the brains of rats exposed to vinyl chloride showed diffuse degenerative lesions of the gray and white matter. Fibrotic processes had often surrounded and invaded the small nerve bundles of the gray matter. There was also evidence of neuronal phagokaryosis with satellitosis and deposition of neuroglial elements around the altered nerve cells of the white matter. In the cerebellum, there were signs of atrophy of the granular layer and profound degenerative changes in some areas of the Purkinje cell layer.
"Animals exposed to vinyl chloride had enlarged livers . Some livers appeared yellowish with smooth surfaces and were slightly more britcle chan normal.
Microscopic examination revealed signs of diffuse interstitial hepatitis, functional degeneration or necrosis of the hepatic cells, and marked cytoplasmic and nuclear polymorphism. The Kupffer cells were often hypertrophic and showed evidence of abnormal proliferation. Numerous areas of partial necrosis and diffuse fatty degeneration often blocked the portal capillaries, centrilobular veins, and sinusoids.
Intense fibrosclerotic reactions were also noted in the areas of degenerative change in the livers of a few exposed rats.
In contrast, the kidneys of the exposed animals were relatively unaltered except for signs of tubulonephrosis and occasional chronic interstitial nephritis. The author also noted colloid goiters and a marked increase in parafollicular cells in the thyroids of several animals. Examination of the controls showed no alterations in any of the organs.
The author concluded that his investigation confirmed that rats were sensitive to vinyl chloride and that the lesions of bone and connective tissue were similar to those described in workers affected by acroosteolysis of the hands and to those in experimental "osteolathyrism." This report is valuable for its characterization of the systemic toxic effects resulting from chronic exposure to vinyl chloride.
An assessment of the hazard posed by exposure to vinyl chloride is not possible, however, because the information was not presented in sufficient detail to permit a statistical analysis. Observations on tumors produced in animals that were probably the same as those in this paper were presented in a second paper by Viola et al and are discussed in a subsequent section of this chapter.
In 1961, Torkelson et al reported a three-part investigation on the effect of repeated exposure of rats, rabbits, dogs, and guinea pigs to vinyl chloride at 50, 100, 200, or 500 ppm (128, 256, 512, or 1,280 mg/cu m).
In the first experiment, groups of 10 male and 10 female rats were placed in a 160-liter inhalation chamber containing vinyl chloride at a nominal concentration of 500 ppm for 7 hours/day, 5 days/week, for 4.5 months. The control group consisted of five male and five female rats.
Male rats exposed repeatedly at 500 ppm for 4.5 months showed a significantly higher (P<0.001) liver-to-body weight ratio than the controls.
Of the 20 exposed rats, 3 males and 1 female died during the exposure.
Microscopic examination showed an increased centrilobular granular degeneration of the liver and interstitial and tubular changes in the kidneys.
In the second experiment, the animals exposed to vinyl chloride at nominal concentrations of 100 or 200 ppm (±15%) for 7 hours/day, 5 days/week, for 6 months, included 20-24 male and 24 female rats, 10 male and 8 female guinea pigs, 3 male and 3 female rabbits, and 1 male and 1 female dog.
In addition, eight groups of five male rats each were exposed at nominal concentrations of 100 or 200 ppm (±15%) for 0.5, 1, 2, or 4 hours/day, 5 days/week, for 6 months. For each species and regimen, two control groups carefully matched on the basis of age, condition, and weight were used, one group of colony controls, and the other air-exposed in the chamber on a regimen similar to that of the experimental groups.
Of the 24 rats exposed at 100 ppm for 7 hours/day, 5 males and 1 female died. All rats exposed repeatedly for 7 hours/day showed slight but significant increases (P<0.05) in liver weight. Repeated exposure to vinyl chloride at 200 ppm for 7 hours/day for 6 months resulted in the deaths of 3 of 12 rats. Of the five rats in each "short exposure" group, one, two, one, and one died after exposure for 0.5, 1, 2, and 4 hours/day, respectively. Organ weights were normal in all species, except that liver weights increased in rats exposed repeatedly for 7 hours/day. At 8 weeks after exposure, male rats continued to have increased liver weights, but the weights appeared to be returning to normal. Increases in liver weight of rats exposed for 2 or 4 hours/day were not statistically significant.
All the rats exposed for 0.5 or 1.0 hour/day had normal organ weights compared with those of control rats. Microscopic examination showed liver changes characterized by centrilobular granular degeneration and necrosis with some foamy vacuolization in male rabbits and periportal cellular infiltration in female rabbits.
In the third experiment, 24 male and 24 female rats, 12 male and 12 female guinea pigs, 3 male and 3 female rabbits, and 1 male and 1 female dog were exposed to vinyl chloride at a concentration of 50 ppm for 7 hours/day, 5 days/week, for 6 months. Three additional groups of 10 male rats each were exposed for 1, 2, or 4 hours/day, 5 days/week, for 6 months. Matched groups of control animals were again used. Vinyl chloride concentrations in the chamber were determined by micro-Volhard titration. Repeated exposures to vinyl chloride at 50 ppm did not produce toxic signs in any of the animals. A decrease in the kidney weight was observed in the female rats, but it was not attributed to the exposure since it was not observed at higher concentrations.
Animals exposed repeatedly at 100, 200, or 500 ppm were found to have normal biochemical and hematologic values and urinalysis tests. Serum enzyme activities were normal in all dogs, rats, and rabbits. None of the organs examined had macroscopic tissue changes at any exposure concentration. The authors concluded that repeated exposures for 7 hours/day at 100 ppm could cause increased liver weight.
Lester et al conducted four experiments with Sherman strain rats. In the first experiment, an unspecified number of rats were exposed in pairs to vinyl chloride at various concentrations for up to 2 hours in an exposure chamber. No control animals were described. In the rats exposed to vinyl chloride at a concentration of 5% (50,000 ppm; 128 g/cu m), the righting reflex was not lost; at 6% (60,000 ppm; 153.6 g/cu m) it was said to be still present, but it was absent in rats exposed at 7% (70,000 ppm; 179.2 g/cu m). The corneal reflex disappeared at a vinyl chloride concentration of 10% (100,000 ppm; 256 g/cu m). The animals rapidly returned to normal after removal from the chamber. One rat was killed after exposure to vinyl chloride at 10% but showed no gross signs of adverse effects. Two rats exposed to vinyl chloride at 15% were deeply anesthetized within 5 minutes. One rat had effusions of fluid from the mouth and died of asphyxia in 42 minutes; autopsy revealed edema and congestion of the lungs. The other rat remained under deep anesthesia for 2 hours but recovered promptly when removed from the exposure.
In their second experiment with rats, Lester et al randomly assigned 18 male and 18 female rats to an experimental group and a control group. The experimental animals were exposed to vinyl chloride at a concentration of 10% (reduced to 8% after 2 days) between 8:30 am and 4:30 pm. Two female rats and an unspecified number of male rats in the experimental group died early in the experiment and were replaced. A total of three female rats died (days 2, 5, and 14) in the course of vinyl chloride exposure, and only two males survived all 15 days of exposure. In addition to mortality, exposure to vinyl chloride at a concentration of 8% was associated with a failure to gain weight until day 9, followed by weight gain at a slower rate than the controls until the cessation of exposure, after which the rates were equal. Neither gross nor microscopic differences were noted between experimental and control livers and kidneys at necropsy. Some rats had parasitic liver cysts and focal necrotizing pneumonia, but kidneys were within normal limits of variability. Microscopic examination of the spleens of experimental rats showed a significantly higher incidence of severe abnormality than was found in controls, although some individual rats in the control group manifested equally severe splenic abnormalities.
Lester et al performed another experiment to ascertain whether vinyl chloride was a lung irritant. Three female and five male rats, matched with controls, were exposed as before for 8 hours daily for 19 consecutive days to vinyl chloride at a concentration of 5% in air. All animals were deprived of food and water while in the exposure chamber. Hemoglobin determinations were made during the exposure period.
On the 20th day, all animals were anesthetized with ether, and blood was drawn by cardiac puncture. The animals were killed by an overdose of ether, and necropsies were performed.
The terminal blood sample was tested for hemoglobin, prothrombin time, hematocrit, red cell, white cell, and differential white cell counts, and unspecified serum transaminase activities.
Prothrombin time, hematocrit, and the serum transaminase activities were normal for both groups. The experimental group had a statistically significant increase (P<0.05) in red blood cells and a decrease (P<0.01) in white blood cells. The differential white cell counts of control and experimental groups were not significantly different. The ratio of the weight of the liver to the body weight was significantly elevated (males, P<0.01; females, P<0.001) in the experimental group. The five male experimental animals had thinner coats than normal and scaly tails; all other rats were normal in appearance. One of the experimental males had fibrous pleural adhesions, but these appeared to be old and unrelated to the experimental exposure.
Microscopic examination of all organs and tissues failed to disclose abnormalities other than those in the liver in either group.
There were no differences in intracellular or total fat in the liver, but livers in the experimental group had widespread swelling and vacuolization of cells with compression of the sinusoids. This difference was significant (P<0.001).
In the last experiment of the series, rats were exposed to vinyl chloride at a concentration of 2.0% for 8 hours/day, 5 days/week, for 3 months. Sixty rats weighing about 75 g were separated randomly into two groups, each consisting of 15 males and 15 females. In the week before the rats were exposed to vinyl chloride, they were observed and weighed, and blood was taken for hemoglobin determinations. All rats were weighed weekly, and hemoglobin determinations were made monthly. Four control and one experimental animal died in the course of the experiment. After 3 months, all animals were killed for necropsy after blood samples were drawn. The livers and spleens were weighed before being fixed in formalin. No significant differences were noted between the experimental and control body weights, hemoglobin levels, hematocrits, prothrombin times, or white cell monocyte and eosinophil counts. The livers of the experimental animals were significantly heavier (P<0.001) and the spleens significantly lighter (males, P<0.02; females, P<0.05) than in the control group. The experimental rats also had a statistically significant decrease (P<0.01) in white blood cell and neutrophil counts and an increase (P<0.01) in lymphocyte counts, when compared with the controls. Microscopically, the experimental group had fewer signs of kidney damage but more extensive liver damage, as indicated by swelling of cells and compression of sinusoids, than controls. No microscopic differences between the spleens of the two groups were noted.
In this 1963 paper, Lester and coworkers concluded that the only suggestion of a specific toxic action of vinyl chloride was the increase in liver weight; the increase was not only statistically significant, but also of substantial magnitude (30% heavier than controls). The authors stated that they did not know the significance of the increase in liver weight after exposure to vinyl chloride, nor whether the liver returned to normal after exposure ceased.
In addition, the authors pointed out the increasing neurologic deficits with increasing concentrations of vinyl chloride, finally terminating in death from respiratory insufficiency at concentrations of 15% for a single exposure or of 10% for repeated exposures.
# (b) Vinylidene Chloride
In 1963, Irish stated that there were at that time essentially no published data on vinylidene chloride toxicity and summarized data from unpublished reports of the Dow Chemical Company. He presented no detailed information on the animals exposed in these studies. Irish concluded that vinylidene chloride produced adverse effects at concentrations below those necessary to produce irritation and below the odor threshold of 500-1,000 ppm.

Jaeger et al estimated the LC50's for fed and fasted rats exposed to vinylidene chloride. One group of male Holtzman rats weighing 250-400 g was allowed continuous access to food; another group was fasted for 18 hours before exposure.
Eight groups of fasted rats and six groups of fed rats, each group consisting of five or six animals, were exposed to vinylidene chloride at various concentrations for 4 hours. The 4-hour LC50 at 24 hours for the fasted rats was 600 ppm (2,382 mg/cu m), and the 24-hour minimum lethal concentration was 200 ppm (794 mg/cu m). The estimated 4-hour LC50 at 24 hours for the fed rats was 15,000 ppm (59.6 g/cu m), and the minimum lethal concentration for these animals was 10,000 ppm (39.7 g/cu m).
The authors suggested that decreases in glutathione concentration in the livers of fasted rats were a possible explanation for the differences in lethality.
They pointed out that this could be of importance with regard to the occupational risk because of the known circadian pattern of glutathione concentrations in the liver.

Siegel et al exposed rats to vinylidene chloride at 200 or 500 ppm. During the exposures, the animals were checked for any change in weight and activity. The rats were killed at the end of the exposures, and blood and organs were collected and tested. Macroscopic and microscopic examinations were performed on organs and fixed tissues. Rats exposed at 500 ppm developed nasal irritation (sneezing), did not gain weight normally, and suffered liver cell degeneration. Rats exposed at 200 ppm suffered only slight nasal irritation, and all organs were normal on necropsy. No further data were presented by the authors.
Siletchnik and Carlson [121] published the results of a study to determine the cardiac sensitizing effects of vinylidene chloride.
Adult male Charles River rats weighing 250-400 g were lightly sedated with 25 mg of sodium pentobarbital/kg ip, restrained, and then exposed to vinylidene chloride at 25,600 ±700 ppm (101.6 ±2.8 g/cu m) for periods of 10 minutes or longer. The rats were pretreated with 4 µg of epinephrine/kg, and the minimum amount of epinephrine necessary to produce cardiac arrhythmias or demonstrate a difference between pairs of animals was determined.
The effect of phenobarbital was determined using weight-matched pairs of animals. One animal of each pair was administered phenobarbital, 50 mg/kg ip, daily for 4 days and exposed to vinylidene chloride 24 hours after the last dose. The pair mate in each case received injections of saline and was similarly exposed to vinylidene chloride.
In rats exposed to air alone, epinephrine at 4 µg/kg did not elicit any cardiac arrhythmias. The authors stated that vinylidene chloride alone caused sinus bradycardia and such arrhythmias as A-V block, multiple continuous premature ventricular contractions, and ventricular fibrillation; they presented no further information. Epinephrine at doses as low as 0.5 µg/kg elicited cardiac arrhythmias in 29 animals exposed to vinylidene chloride.
Sensitivity to epinephrine increased with increasing length of exposure to vinylidene chloride. The increased sensitivity to epinephrine was reversible, since 5 minutes after removal from exposure the animals were again able to tolerate high doses of epinephrine without showing arrhythmias.
Premature ventricular contractions were seen in animals pretreated with phenobarbital, exposed to vinylidene chloride, and challenged with epinephrine. No arrhythmias were seen in the saline-treated animals challenged with the same amount of epinephrine.
Arrhythmias were produced in the phenobarbital-treated animals at a lower epinephrine dose and a shorter exposure to vinylidene chloride.
The authors concluded that the phenobarbital probably induced vinylidene chloride-metabolizing hepatic enzymes and that a metabolite caused the cardiotoxic effects. They further stated that, since the adrenal gland in humans may release up to 4 µg/kg/minute of adrenalin during stress, workers exposed to high concentrations of vinylidene chloride may be at risk of these cardiotoxic events.

Prendergast et al conducted a series of repeated and continuous inhalation exposures of rats, guinea pigs, rabbits, dogs, and monkeys to vinylidene chloride. Immediately after each experiment, the animals were killed and necropsies were performed. To estimate the effects of vinylidene chloride inhalation, the authors measured liver alkaline phosphatase, SGPT, serum urea nitrogen, and liver lipid content, and made hematologic determinations.
In the control populations, 7/304 rats, 2/314 guinea pigs, 2/48 rabbits, and 1/57 monkeys died prematurely.
Animals repeatedly exposed to vinylidene chloride at 395 mg/cu m (99.5 ppm) showed no microscopic tissue changes relatable to the exposure when compared with control tissues. Gross examination of tissues showed no changes in any animals except for one rat that had a gelatinous substance on its kidney and bloody urine in the bladder.
The microscopic examination showed only nonspecific respiratory inflammation, which the authors considered not to be the result of vinylidene chloride exposure. They did not give any reasons for this conclusion.
None of the repeatedly exposed animals died during the exposure, and only rabbits and monkeys lost weight.
Of the animals exposed continuously at 189 mg/cu m (47.6 ppm), seven guinea pigs died between days 4 and 9, and three monkeys died, one each on days 26, 60, and 64.

Gross examination showed mottled livers in a majority of the experimental animals. Exposed rabbits, dogs, and monkeys lost weight, while rats and guinea pigs gained weight during the exposure. There were increases in liver alkaline phosphatase and SGPT activity in rats and guinea pigs, but there were no significant changes in any other biochemical parameters. Two rats also showed increases of 20% and 34.4% in liver lipid content.

Serum urea nitrogen concentrations in exposed rats were comparable with those of control rats. Three of the 15 guinea pigs that were exposed at 101 mg/cu m (23.5 ppm) died between the 3rd and 6th exposure days, and 2 of the 3 monkeys exposed at 101 mg/cu m died, 1 on day 39 and the other on day 47.

White or bluish-gray spots and nodules were visible on the lungs of several guinea pigs and rats. Serum urea nitrogen concentrations of exposed guinea pigs were comparable with those of the control group.

The appearance and demeanor of the rats ingesting vinylidene chloride were not different from those of controls throughout the study. Body weight gains and food and water consumption were similar for experimental and control animals. No compound-related abnormalities were noted in the results of hematologic studies, urinalyses, or serum chemistry analyses. No significant differences were noted in mean organ weights or organ-to-body weight ratios.
Gross and microscopic examinations revealed occasional statistical differences between exposed and control animals.
The differences considered compound-related were fatty changes or fatty degeneration of the liver in female rats in the 50-, 100-, and 200-ppm groups and in male rats in the 200-ppm group. Although the incidence of these liver lesions in the males of the 100-ppm group was not significantly increased, it was higher than that in the controls. Centrilobular atrophy and periportal hypertrophy were also seen in the livers of the exposed animals. No target organ was found that showed a tumorigenic effect considered compound-related, and the total tumor incidence in male and female rats in the various exposure groups was not considered different from that in the controls. Humiston et al concluded that the only compound-related deviations in these rats were the fatty changes or fatty degenerations of the liver. All other statistically significant deviations observed were considered to be within the normal variation encountered in lifetime studies with this strain of rat. They also concluded that these results did not indicate an "oncogenic effect for vinylidene chloride ingested by rats." The data indicate that if vinylidene chloride has a carcinogenic potential for rats, these cumulative doses must have been lower than those necessary to promote the expression of that potential.

Jaeger et al gave single doses of vinylidene chloride dissolved in corn oil by stomach tube to lightly ether-anesthetized male Holtzman rats weighing 250-350 g. The rats were fasted for 4 hours before they received the vinylidene chloride. Anesthetized control rats received corn oil only. Some of the experimental rats were given ip injections of sodium phenobarbital (30 mg/kg) to aid in measuring the time-response relationship for vinylidene chloride.
This was indicated by the time between loss and recovery of the righting reflex. Three spontaneous rightings within a minute were considered evidence that the reflex had returned.
The liver glucose-6-phosphatase activity, serum alanine-alpha-ketoglutarate transaminase activity, liver triglyceride values, and the phenobarbital sleeping time were determined at different times after the animals received doses of vinylidene chloride and were used as indices of liver damage. The rats were killed at intervals up to 24 hours after they were given the vinylidene chloride or corn oil, and their blood was collected for preparation of serum. The livers were removed and used to make homogenates for biochemical assays.
The phenobarbital sleeping time increased significantly (P<0.05) within 2-4 hours in rats given 400 mg/kg of vinylidene chloride, and the maximum increase was observed at 12-16 hours, although there were no statistical differences between the sleeping times at these intervals.

In the same study, Jenkins and associates observed the effects of vinylidene chloride on 9- to 11-week-old male and female rats given single oral doses of 400 mg/kg. They also studied the effects of vinylidene chloride on 21- to 24-week-old male and female rats given single oral doses of 200 mg/kg.
Liver and plasma enzyme activities were measured 20 hours after oral administration. The older female rats showed a greater increase in liver glucose-6-phosphatase activity than the older male rats, and both groups of females had increased liver alkaline phosphatase activity in comparison with their respective male counterparts. From these observations, the authors concluded that female rats were more susceptible to the hepatotoxic effects of vinylidene chloride than male rats.

# (c) Vinyl Bromide

Leong and Torkelson conducted two studies of the effects of inhaled vinyl bromide (99.7% pure) on various animal species, including rats, rabbits, and monkeys.
The impurities in the vinyl bromide included a polymerization inhibitor (paramethoxyphenol, 0.1%), ethylene oxide (0.12%), vinyl chloride (0.06%), and traces of ethyl bromide, methylene chloride, acetylene, and various aldehydes. The first study involved exposing four groups of five male Wistar rats each to vinyl bromide at 10,000 ppm (43.8 g/cu m) in a 160-liter stainless steel chamber for 7 hours/day, 5 days/week, for either 3 days or 1, 2, or 4 weeks. Concentrations of vinyl bromide in the chamber were determined by infrared spectroscopy. Two control groups were exposed to room air. Immediately after the exposure, the surviving animals were killed, and macroscopic and microscopic examinations were performed on their organs and on fixed tissue specimens.
The rats exposed to vinyl bromide at 10,000 ppm became hypoactive during the 7-hour exposure period. They seemed drowsy after 1 hour of the first exposure and looked "sluggish" by the 13th exposure. The control animals remained active throughout the exposure period.
Exposed animals showed a significantly lower (P<0.05) weight gain than controls between the 15th and 20th days of exposure, and the difference was greater (P<0.01) after the 20th exposure. Gross examination of killed animals showed multifocal gray areas in the lungs, but the authors stated that no "compound related" tissue changes were observed microscopically.
In the second study, 2 groups of animals, each consisting of 60 Charles River rats, 6 New Zealand white rabbits, and 6 cynomolgus monkeys, all equally divided according to sex, were exposed to vinyl bromide at 250 or 500 ppm (1,095 or 2,190 mg/cu m) for 6 hours/day, 5 days/week, for 6 months. A third group of 30 male and 30 female rats, 3 male and 3 female rabbits, and 4 female and 2 male monkeys was exposed to filtered room air. The experiments were conducted in the evening during the first 20 weeks of exposure and then were changed to daytime. The authors did not state whether the concurrent control animals were also switched to a daytime schedule. Vinyl bromide concentrations in the chamber were monitored by gas chromatography. Gross and microscopic examinations were performed on organs and fixed tissue of exposed animals at the end of the experiment. The animals exposed for 6 months were observed for changes in activity, body weight, indications of respiratory distress, eye and nasal irritation, and skin condition.
Hematologic tests were performed on rats, rabbits, and monkeys in the control and 500-ppm groups prior to exposure and after 2, 10, and 24 weeks of exposure. All monkeys were examined for nonvolatile bromide in whole blood at the end of weeks 1, 2, 4, 8, 16, and 26.
At the end of the 6 months of exposure, all the animals exposed to vinyl bromide at 250 or 500 ppm showed weight increases at rates comparable with controls.
Only rats had a decrease in mean weight when the exposure schedule was changed from evening to daytime during the 20th week of exposure. Microscopic examination of major organs of all groups and species showed no changes that resulted from exposure to vinyl bromide.
Analysis of blood showed elevated concentrations of bromide in all exposed animals, with monkeys that had been exposed at 250 and 500 ppm having the highest levels.
No statistically significant changes were observed in the other measurements.
In the same report, Leong and Torkelson summarized the results from unpublished data on the effects of vinyl bromide on rats.
An unspecified number of rats was exposed to vinyl bromide at nominal concentrations of 100,000 ppm (437,600 mg/cu m), 50,000 ppm (218,800 mg/cu m), or 25,000 ppm (109,400 mg/cu m) for 1.5 or 7 hours. Two weeks after exposure to vinyl bromide at 25,000 or 50,000 ppm, the rats that survived were killed and examined for microscopic tissue changes.
Exposure at 100,000 ppm caused deep anesthesia and death within 15 minutes. None of the rats exposed at 50,000 ppm for 1.5 hours died, but an unspecified number of deaths occurred during the 7-hour exposure. Within 25 minutes, the rats became unconscious.
Vinyl bromide at 25,000 ppm anesthetized the rats, but they recovered rapidly when removed from exposure. Microscopic examination of tissues showed slight to moderate liver and kidney damage in rats exposed at 50,000 ppm. Examination of tissues from rats exposed at 25,000 ppm showed no abnormalities.
Leong and Torkelson also gave male rats a 50% solution of vinyl bromide in corn oil by oral intubation to determine the LD50.
They reported an LD50 of approximately 500 mg/kg but presented no supportive data. They also reported that vinyl bromide was irritating to the eyes but not to the skin of rabbits; data were not presented to support these findings.
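Since no supporting dose-mortality data were presented for the approximately 500-mg/kg LD50, the following is purely a hypothetical sketch of how such a value is commonly estimated: a logistic curve fitted to mortality fractions on a log-dose scale. Every dose and death count below is invented for illustration.

```python
# Hypothetical LD50 estimation sketch; all input data are invented.
import numpy as np
from scipy.optimize import curve_fit

doses = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0])  # mg/kg (invented)
deaths = np.array([0, 2, 5, 8, 10])                       # per 10 rats (invented)
frac = deaths / 10.0

def logistic(log_dose, log_ld50, slope):
    # Mortality fraction as a logistic function of log10(dose).
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

params, _ = curve_fit(logistic, np.log10(doses), frac, p0=[np.log10(500.0), 2.0])
print(f"estimated LD50 ~ {10 ** params[0]:.0f} mg/kg")  # ~500 for these invented data
```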
# (d) Vinyl Fluoride
In a book on the toxicity of anesthetics [128], Clayton reported that a mixture of 800,000 ppm (1,504 g/cu m) of vinyl fluoride in oxygen was not lethal for rats exposed to it for 12.5 hours. He also stated that unpublished data of Limperos (no further identification given) showed that male and female rats exposed to vinyl fluoride at concentrations of 100,000 ppm (188 g/cu m) for 7 hours/day, 5 days/week, for 6 weeks gained weight normally, exhibited no behavioral changes, had no fatalities, and had no tissue changes as evaluated by microscopic examination.
Clayton did not supply details of these experiments.
In 1950, Lester and Greenberg reported the effects of single exposures to vinyl fluoride on adult white rats. Rats were exposed for 30 minutes at concentrations ranging from 20 to 80% (200,000 to 800,000 ppm; 376 to 1,504 g/cu m) in an 11-liter glass chamber. The rats were tested for any abnormalities in the postural, righting, and corneal reflexes after the exposure. At the 30% (300,000 ppm; 564 g/cu m) concentration, the rats exhibited "hindleg instability," which the authors considered a sign of "slight intoxication." At 80%, the rats experienced difficulty in breathing but recovered after a 1-minute exposure to room air.
Postural and righting reflexes were lost between 50 and 60% and between 60 and 70%, respectively. Loss of the postural reflex was also evident in rats exposed to vinyl fluoride at 80% for 12.5 hours, but these rats regained the reflexes soon after breathing room air.
Results of an investigation on repeated exposures of rats to vinyl fluoride were summarized in a technical report from E.I. du Pont de Nemours and Company. An unspecified number of rats were exposed to vinyl fluoride at a concentration of 100,000 ppm for 7 hours/day, 5 days/week, for a total of 30 exposures.
There were no differences in the rate of weight gain or in the results of necropsies, microscopic examinations of fixed tissues, organ weights, or clinical observations. It was concluded that vinyl fluoride did not constitute "much of" an inhalation hazard.
However, the technical report did not present any data for evaluation.
# (e) Vinylidene Fluoride
In a book on the toxicity of anesthetics, Clayton reported that a mixture of 800,000 ppm (2,096 g/cu m) of vinylidene fluoride in oxygen was not lethal to rats exposed to it for 19 hours.
No experimental details or data were offered to support this statement.
Lester and Greenberg, in 1950, reported the effects of inhaled vinylidene fluoride on an unspecified number of adult white rats.
The rats were exposed to vinylidene fluoride at concentrations ranging from 10% to 80% (100,000 to 800,000 ppm; 262 to 2,096 g/cu m) in an 11-liter glass chamber for 30 minutes.
After the rats were removed from the chamber, their postural, righting, and corneal reflexes were tested.
The rats exposed to vinylidene fluoride at 10-80% for 30 minutes lost none of the reflexes tested, but those exposed at concentrations of 40% (400,000 ppm; 1,048 g/cu m) or higher showed slight intoxication, which was manifested at 80% by the development of an unsteady gait without loss of the postural reflex. Rats exposed at 80% for 19 hours showed no progressive signs of intoxication, and autopsies showed no evidence of pulmonary irritation.
Carpenter et al [118] reported the effects of short-term static inhalation exposures of Sherman albino rats weighing between 100 and 150 g to vinylidene fluoride.
The authors reported that exposure to vinylidene fluoride at a concentration of 128,000 ppm (335.4 g/cu m) for 4 hours was sufficient to kill two to four of the six test animals.
From this observation, they concluded that vinylidene fluoride was slightly hazardous to rats. No further information was presented.
Burgison et al studied the myocardial sensitizing properties of vinylidene fluoride in eight dogs and two cats.
The animals were first injected with epinephrine and then exposed by inhalation to vinylidene fluoride at 250,000-500,000 ppm (655-1,310 g/cu m) for 5-15 minutes, at which time the epinephrine injection was repeated.
Lead II ECG's of each animal were recorded.
None of the animals developed myocardial sensitization to epinephrine.
# (f) Summary
The results of these studies show that each of the vinyl halides is capable of causing CNS effects.
Changes in liver function and structure were also observed in some experiments. The adverse effects noted after exposure were similar to those noted in worker populations exposed to the vinyl halides.

# Teratogenicity and Effects on Reproduction

Ethanol (15%) in drinking water was also given to mice, rats, and rabbits exposed to vinyl chloride at 50 and 500 ppm, at 2,500 ppm (6,400 mg/cu m), and at 2,500 ppm, respectively. The authors stated that ethanol altered the metabolism of vinyl chloride by blocking the primary and most rapid metabolic pathway. Ethanol with vinyl chloride at concentrations of 50 or 500 ppm decreased food consumption, weight gain, and liver weight in mice.
In animals exposed to vinyl chloride at 50 and 500 ppm with 15% ethanol, weight gain and liver weight were lower than in those exposed to vinyl chloride alone.
In rats exposed at 500 ppm, there was a significant decrease (P<0.05) in maternal weight gain during days 6-21 of gestation. At 2,500 ppm, the liver weight of the rats was increased significantly (P<0.05) at day 21 of gestation. There was also a significant difference in food consumption but no difference between exposed and control animals in weight gain.
Rabbits showed a significantly decreased (P<0.05) food consumption rate at 500 ppm, but no effects were noted at 2,500 ppm. Giving ethanol in addition to exposure to vinyl chloride at 2,500 ppm significantly increased the effects on rabbits and rats. Maternal mortality in mice exposed at 500 ppm was significantly increased (P<0.05) over concurrent control mortality. Also, there was an increase (P<0.05) in the incidence of resorptions, a decrease (P<0.05) in fetal body weight, and a reduction (P<0.05) in litter size.
No other significant effects were observed apart from an increase in crown-rump length (P<0.05). Fetuses of mice exposed to vinyl chloride at 500 ppm differed significantly (P<0.05) from concurrent control fetuses in incidences of unfused sternebrae, delays in ossification of the sternebrae, and delays in ossification of bones of the skull. The addition of 15% ethanol to the drinking water of the dams significantly increased (P<0.05) the incidence of skeletal anomalies in the fetuses of these mice, including anomalies in the sternebrae, vertebrae, and skull at 50 and 500 ppm, and in the ribs as well at 500 ppm.
There was only one maternal death in rats exposed to vinyl chloride at 2,500 ppm (6,400 mg/cu m). The only significant (P<0.05) fetal effects observed after exposure at 500 ppm (1,280 mg/cu m) were a reduction of the body weights, an increase in crown-rump length, and a significant increase (P<0.05) in the number of lumbar spurs. Examination of the fetuses for soft tissue anomalies showed a significant increase (P<0.05) in the incidence of dilated ureters at 2,500 ppm.
Skeletal anomalies at 2,500 ppm included significant decreases (P<0.05) in the incidence of unfused sternebrae and delayed skull ossifications.
One of the seven rabbits exposed to vinyl chloride at 2,500 ppm died during the experiment. A significant decrease (P<0.05) was seen in the number of live fetuses/litter at 500 ppm, but not at 2,500 ppm. No other gross differences were observed.
The incidence of delayed ossification of the fifth sternebra was significantly increased (P<0.05) at 500 ppm. No other skeletal or soft tissue anomalies were seen in rabbits.
John et al concluded that exposure to vinyl chloride at concentrations causing some maternal toxicity did not cause teratogenic effects on, or embryonal or fetal toxicity in, mice, rats, or rabbits. However, it is apparent that adverse effects did occur in the fetuses of the mice exposed at 500 ppm (1,280 mg/cu m) (sternebrae and skull anomalies) and in the fetuses of rats exposed at 2,500 ppm (6,400 mg/cu m) (dilated ureters).
The authors' determination that these effects did not substantiate an embryotoxic or fetotoxic potential and were secondary to the maternal toxicity was not supported by clinical tests.
The authors regarded the changes as minor skeletal and soft tissue variations and concluded that the incidence of "major" skeletal or soft tissue malformations was not significantly greater in exposed animals than in the control groups.
In mice exposed at 500 ppm, there were significant increases in the incidence of resorptions and decreases in the fetal body weight and in litter size.
In rats exposed at 500 ppm, there was a significant decrease in the fetal body weight and an increase in crown-rump length, and, in rabbits exposed at 500 ppm, there was a significant decrease in the number of live fetuses/litter.
In both rats and rabbits, however, there was not a corresponding adverse effect at the 2,500-ppm exposure concentration.
Ethanol at 15% in the drinking water enhanced the effects of inhaled vinyl chloride, but the authors concluded that maternal toxicity was more enhanced than was fetotoxicity.

# Carcinogenicity

# (a) Vinyl Chloride

Twenty-five 3-month-old male albino Wistar rats were exposed to vinyl chloride at a concentration of 30,000 ppm (76.8 g/cu m) for 4 hours/day, 5 days/week, for 1 year; 25 similar rats served as controls. Rats were killed at unspecified intervals, and their tissues were examined microscopically.
These exposed animals were apparently the same ones used in the study by Viola that was discussed in Animal Toxicity.
Skin tumors, the most frequent types, were found in 65-70% of the animals after 10 months of exposure. These tumors developed in the paraauricular region, and most were epidermoid carcinomas. The authors also found two mucoepidermoid carcinomas and one papilloma of the skin.
In addition to the tumors, they found warty subauricular growths and papillary epithelial proliferation with progressive increases in the thickness of the epidermis in a few rats.
In four rats, adenocarcinomas of the respiratory tract were found, and in one rat an epidermoid tumor was found. The authors also found osteochondromas in the metacarpal and metatarsal regions of all four limbs in five rats.
None of the control rats developed any of the types of tumors observed in the exposed animals.
The Sprague-Dawley rats exposed at 6,000 or 10,000 ppm (15.4 or 25.6 g/cu m) in Maltoni's experiments had been followed for only 59 weeks at the time of this report (Table III-6, adapted from Maltoni). Zymbal gland carcinoma was observed in some exposed animals, but no nephroblastoma or angiosarcoma of the liver was found. The preliminary results shown in Table III-6 suggest that exposure at 6,000 ppm for 100 hours during 25 weeks has a lower carcinogenic potential than exposure at 10,000 ppm on the same schedule or at 6,000 ppm for 100 hours during 5 weeks. These findings indicate that the severity of exposure was more important than the total mass to which the rats were exposed and suggest that metabolic and excretory processes may affect the carcinogenic potential; however, more complete data are needed to substantiate this inference.
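The severity-versus-total-mass point rests on simple bookkeeping: both 6,000-ppm schedules deliver the same cumulative concentration-time product, differing only in the rate of delivery. A minimal sketch (the labels and code are ours; the ppm and hour figures come from the text):

```python
# Cumulative exposure (ppm-hours) and weekly delivery rate for the
# three schedules discussed in the text.
schedules = {
    "6,000 ppm, 100 h over 25 weeks":  (6_000, 100, 25),
    "6,000 ppm, 100 h over 5 weeks":   (6_000, 100, 5),
    "10,000 ppm, 100 h over 25 weeks": (10_000, 100, 25),
}
for label, (ppm, hours, weeks) in schedules.items():
    total = ppm * hours          # cumulative ppm-hours (total "mass" of exposure)
    per_week = total / weeks     # how fast that total is delivered
    print(f"{label}: {total:,} ppm-h total, {per_week:,.0f} ppm-h/week")
```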
The offspring of pregnant rats exposed to vinyl chloride at concentrations of 6,000 or 10,000 ppm (15.4 and 25.6 g/cu m) for 4 hours/day from the 12th to the 18th day of gestation developed tumors. In 54 offspring from dams exposed at 10,000 ppm, the following tumors developed: 3 Zymbal gland carcinomas, 1 nephroblastoma, 1 subcutaneous angiosarcoma, 1 angiosarcoma of the leg, 1 Zymbal gland fibrosarcoma, and 1 ovarian leiomyosarcoma. In the 32 offspring from dams exposed at 6,000 ppm, 1 Zymbal gland carcinoma, 1 subcutaneous angiosarcoma, 1 intraabdominal angiosarcoma, 1 Zymbal gland adenoma, 1 skin carcinoma, 1 subcutaneous fibrosarcoma, and 1 mammary carcinoma were observed following a 143-week observation period. Unfortunately, details of this experiment, including reproduction indices and signs of toxicity to the dams, were not reported. The author's suggestion that the results of this experiment indicate that vinyl chloride has a transplacental effect is reasonable, but it cannot be thoroughly evaluated without more detailed information than was presented.
The experiments outlined by Maltoni and coworkers were well designed. The presentation of data in these papers, however, was often confusing, and the information contained in the tables frequently disagreed with that presented in the text. It was often unclear whether they were presenting the number of animals with tumors or simply the number of tumors. Furthermore, preliminary data on tumors when the followup period was less than the probable latent period are of little value. Maltoni's observations and data do show that vinyl chloride induces various tumors in a variety of rodents, and that angiosarcoma of the liver is a characteristic lesion induced by vinyl chloride. The data also indicate that there are strain and species differences in the magnitude of the tumorigenic response elicited by exposure to vinyl chloride.
Lee et al in 1977 and in 1978 reported results of inhalation studies on 2-month-old albino CD-1 mice and CD rats exposed to vinyl chloride (99.8% pure, Matheson Gas Products) at 50, 250, or 1,000 ppm (128, 640, or 2,560 mg/cu m). Groups of 36 females and 36 males of each species were exposed for 6 hours/day, 5 days/week, for 12 months. Two similar control groups were exposed to uncontaminated air. Throughout the exposure period, the animals were observed for changes in weight gain, food consumption, and mortality. Four animals of each species and sex exposed at each concentration were killed at the end of exposure months 1, 2, 3, and 9. Their organs were examined grossly, and tissues were fixed and examined microscopically. Additional laboratory determinations, including hematologic and blood chemistry examinations, cytogenetic analyses of bone marrow cultures, pulmonary macrophage counts, DNA synthesis assays, and urinary analyses, were performed at the interim examinations and at the termination of the experiment at 12 months. Roentgenograms of the limbs of those animals exposed for the longest periods were also made.
Of the mice exposed at 1,000 ppm (2,560 mg/cu m), two males and one female died between the 3rd and 9th days of exposure. Between the 6th and 9th months, 13 male and 21 female mice died or were removed from exposure because their health had deteriorated. By the end of the 9th month, all animals had been removed from exposure.
By the 6th month, the most evident effects on exposed mice were rough hair coat, lethargy, loss of appetite, and rapid weight loss. Additionally, abdominal distention and external tumor masses, such as mammary tumors in females, were noticeable between the 7th and 9th months. During the first 8 months of the experiment, exposed and nonexposed mice showed comparable weight gains; however, the exposed group showed a decline in weight by the 9th month. Also, by the 9th month of exposure, one female and most of the male mice had elevated pulmonary macrophage counts (from pulmonary washings) and had developed bronchioloalveolar adenomas.
Microscopic examination of hepatic and renal tissues from one female and two male mice that died after being exposed at 1,000 ppm for 3-9 days revealed a number of lesions characterized by acute toxic hepatitis, focal to marked congestion, diffuse coagulation necrosis of hepatocytes in the centrilobular area, and tubular necrosis in the renal cortex. During the 8th and 9th months of exposure, mitotic figures were observed in mouse livers, but were not seen in livers of mice killed at other times.
Bronchioloalveolar adenomas were observed in 48 of the 72 mice exposed at 1,000 ppm, whereas only 1 of the 72 control mice developed this tumor. These tumors were first noted during the 2nd month of exposure to vinyl chloride and in the 9th month in the control. Mammary tumors, first observed in exposed female mice during the 6th and 7th months, included adenocarcinoma and carcinoma of squamous cells and of anaplastic cells; metastasis was prevalent to the lungs and pleurae. None of the control mice developed mammary tumors. Angiosarcoma of the liver was found in 31 exposed mice, being observed first during the 6th month of exposure. In addition, angiosarcoma was occasionally seen in the mammary glands, heart, gastrointestinal tract, pancreas, kidneys, epididymis, testis, mesenteric lymph nodes, and skeletal muscles. Angiosarcoma of the liver was more prevalent in females than in males. Two male and three female mice developed malignant lymphoma between the 6th and 9th months of exposure, whereas none were seen in controls.
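The report quotes incidences such as 48/72 exposed versus 1/72 control mice without an accompanying statistic. Purely as an illustration of how such a 2x2 incidence table could be evaluated (the choice of Fisher's exact test is ours, not the authors'):

```python
# Illustrative comparison of tumor incidence; test choice is ours.
from scipy.stats import fisher_exact

#        with tumor, without tumor
table = [[48, 72 - 48],   # mice exposed at 1,000 ppm
         [1, 72 - 1]]     # control mice

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.1f}, one-sided P = {p_value:.1e}")
```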
All female mice exposed to vinyl chloride at 250 ppm (640 mg/cu m) died or had to be removed from exposure by the 9th month because of morbidity. Male mice were more resistant to the toxic actions of vinyl chloride than female mice, and some survived the 12-month exposures. No differences in body weight between the exposed and control mice were noted. One female mouse examined after the 9th month of exposure showed an increased pulmonary macrophage count. Bronchioloalveolar adenomas were first detected in exposed mice during the 2nd month; a total of 48, 22, and 12 mice exposed at 1,000, 250, or 50 ppm, respectively, developed this tumor. Only one control male mouse developed bronchioloalveolar adenoma during the 9th month. Female mice also developed mammary tumors consisting of adenocarcinomas and squamous and anaplastic cell carcinomas. Of the mice exposed at 250 ppm, 23 (16 females) developed liver angiosarcomas, which were first evident after the 6th month of exposure. During the 9th month of exposure, malignant lymphomas developed in two female mice.
Between the 6th and 12th months of exposure to vinyl chloride at 50 ppm (128 mg/cu m), 6 male and 14 female mice either died or were removed from exposure because of deteriorating health. Throughout the exposure period, the body weights of exposed mice were comparable with those of control mice. Microscopic examination also showed mitotic figures in the liver during the 8th and 9th months of exposure. A significant increase in DNA synthesis, as measured by incorporation of 14C-thymidine, was detected in the livers of male mice exposed at 50 ppm for 11 months. There was no increase in the number of mitotic figures in the livers of these mice. During the 12th month of exposure, one of two surviving male mice had an elevated pulmonary macrophage count.

Bronchioloalveolar adenomas were seen in 12 mice exposed at 50 ppm (128 mg/cu m). Three male mice developed angiosarcoma of the liver. Mammary tumors were observed in nine of the exposed mice, the first at 7 months. One female mouse developed a malignant lymphoma during the 6th month and another developed a hemangioma in the mediastinum. Control mice developed none of the tumors discussed.
From the effects observed in mice exposed to vinyl chloride at 1,000, 250, and 50 ppm (2,560, 640, and 128 mg/cu m), the authors concluded that weight loss, mortality, and tumor incidence were dependent on the concentration of vinyl chloride and the duration of exposure. Vinyl chloride inhaled at 50-1,000 ppm for 6 hours/day, 5 days/week, was found to be highly carcinogenic in mice. The duration of exposure before tumors were observed varied from 2 months for bronchioloalveolar adenomas to 6 months for mammary gland tumors and for angiosarcoma. The latter tumors occurred first in the liver and then in other organs. Angiosarcoma was more prevalent in female mice than in male mice exposed at 250 or 1,000 ppm. It was found mostly in the livers of mice exposed to vinyl chloride for 7-9 months. Mammary gland tumors were considered to be a complex type and were characterized by anaplastic and squamous cell metaplasia during the early stages of their development. Metastasis of anaplastic and squamous cell carcinomas to the lungs was common. Bronchioloalveolar adenomas occurred in both male and female mice at a very young age and after a short period of exposure. More deaths and tumors were observed at 250 and 1,000 ppm than at 50 ppm. Other types of tumors observed in the exposed mice were hepatic cell carcinomas, renal adenomas, and keratoacanthomas of the skin.
In the second part of the experiment, Lee et al exposed 72 adult male and female rats to vinyl chloride at concentrations of 50, 250, or 1,000 ppm (128, 640, or 2,560 mg/cu m) for up to 12 months, using the same exposure schedule previously reported. During the first 7 months of exposure, no remarkable adverse effects were seen in any rats. However, of the rats exposed at 1,000 ppm, 8 males and 13 females either died or were removed from exposure during the 8th-12th months. None of the controls died during the experiment. After the 4th week of exposure, female rats had gained less weight than the controls. During the 9th month, the first cases of angiosarcoma of the liver were observed in four rats. By the end of the study, 22 rats had developed angiosarcoma of the liver, more females than males having the tumors. Of the 22 rats with angiosarcoma of the liver, 13 also developed angiosarcoma of the lungs. Additional angiosarcoma was found in the omentum of one rat, and two rats had hemangiomas in the adrenal glands.
Four male and 10 female rats died or were removed from exposure during months 8-12 of exposure to vinyl chloride at 250 ppm (640 mg/cu m). None of the control rats died during the exposure period. Body weight gain by exposed rats was comparable with that by the controls. Two rats developed angiosarcoma of the liver during the 9th month of exposure. By the end of the 12th month of exposure, 13 rats, 10 of them females, had developed angiosarcoma of the liver. Of the 10 female rats, 3 also had angiosarcoma of the lungs; angiosarcoma of the omentum or mesentery was also found in two rats. None of the control rats had angiosarcoma.
During months 8-12 of exposure to vinyl chloride at 50 ppm (128 mg/cu m), two female rats died. Exposed and control rats had comparable body weight gain.
Subcutaneous angiosarcoma was found in two rats. No angiosarcoma of the liver, lungs, or other organs was observed in any of the rats exposed at 50 ppm, nor did any of the control rats develop angiosarcoma.
None of the laboratory tests performed on rats exposed to vinyl chloride at 1,000, 250, or 50 ppm showed any persistent changes. The authors indicated that female rats were more sensitive to the toxic action of vinyl chloride and that more females than males died between the 8th and 10th months of exposure. In general, rats were considered more resistant to both the carcinogenic and toxic actions of vinyl chloride than mice.
The experiments by Lee et al indicate that vinyl chloride is carcinogenic in mice and rats, and that mice are more susceptible to its carcinogenic effects than rats. Since the experiments were conducted at various exposure concentrations, it can be concluded that the different responses probably were a function of species-specific factors and were not caused by structural differences between the species such as lung surface area. Since there are differences between species, there may be differences between strains within a species also. Such a situation would account for the apparent inconsistencies between the reports of various authors using different strains of the same species. This confounding factor must be considered in any attempt to extrapolate the results from animal experiments to human exposure situations.
Maltoni et al, in 1975, gave vinyl chloride by gastric intubation to 13-week-old Sprague-Dawley rats. Groups of 40 male and 40 female rats were given 50.00, 16.65, or 3.33 mg/kg of vinyl chloride dissolved in olive oil 5 times/week for 52 weeks. A fourth group of rats, 40 males and 40 females, was given olive oil without vinyl chloride as the control group. The number of tumors that developed in the rats of each group was recorded.
At 50 weeks, no tumors were apparent in the rats given the lowest dose (3.33 mg/kg), but one of the male rats given 16.65 mg/kg developed angiosarcoma of the liver, which was identified during the 49th week of the experiment, and a female rat given 50 mg/kg had angiosarcoma in the thymus.
In an update of this study, Maltoni in 1976 reported that after 34 weeks the rats receiving the 50.00-mg/kg dose had developed eight cases of angiosarcoma of the liver and four other tumors. At 16.65 mg/kg, angiosarcoma of the liver had developed in five rats, a Zymbal gland carcinoma in one, and a nephroblastoma in one. No tumors were observed in animals given 3.33 mg/kg.
# (b) Vinylidene Chloride

Lee et al in 1976 and 1978 described the effects of inhaled vinylidene chloride (99% pure, Aldrich Co.) on 36 male and 36 female 2-month-old albino CD-1 mice and 36 male and 36 female albino CD rats exposed at 55 ppm (218.4 mg/cu m) for 6 hours/day, 5 days/week, for 12 months. Vapor concentrations were monitored by gas-liquid chromatography during the exposures. Control animals were exposed to uncontaminated air. Organs and tissues from rats and mice were examined for microscopic changes at the end of the 1st, 2nd, 3rd, 6th, 9th, and 12th months of exposure.
Two male mice died on the 13th day of exposure and were replaced in the study with healthy males. Microscopic examination showed a number of lesions in the two dead male mice. They were characterized by acute toxic hepatitis, focal to marked congestion, marked diffuse coagulation necrosis of hepatocytes, and marked tubular necrosis in the renal cortex. No additional deaths occurred, nor did any control animals die. Six mice were observed to have small nodules of bronchioloalveolar adenoma by the 12th month of exposure; only one control male mouse developed a bronchioloalveolar adenoma. The mice removed from exposure after the 9th month (two males) and 10th month (one female) had developed angiosarcoma in the liver. Neither mammary tumors nor malignant lymphomas were found.
After exposures at 55 ppm (218.4 mg/cu m), fatty changes were seen in the livers of rats. Two rats had also developed extrahepatic angiosarcoma (mesenteric lymph node, subcutaneous) by the end of the exposure period. None of the control rats developed liver tumors of any type.
In 1977, Maltoni et al reported the research plan and preliminary results for a series of experiments with vinylidene chloride. Groups of 60-120 Sprague-Dawley rats were exposed by inhalation to vinylidene chloride at concentrations of 10, 25, 50, 100, or 150 ppm (39.7, 99.3, 198.5, 397, or 595.5 mg/cu m) for 4 hours/day, 4-5 days/week, for 12 months. An increased incidence of mammary tumors was noted in the exposed animals; however, there was no apparent dose-related effect. One Zymbal gland carcinoma was found in an animal exposed at 100 ppm.
Swiss mice were exposed to vinylidene chloride at 10 and 25 ppm (39.7 and 99.3 mg/cu m) on the same schedule. Exposure of these mice at concentrations of 50 ppm or higher produced an unacceptably high mortality in the study population within 4 days. Adenocarcinoma of the kidneys was observed in two groups exposed at 25 ppm with incidences of 8% and 4%. None of these tumors was observed in animals exposed at 10 ppm or in control animals.
Sprague-Dawley rats were given vinylidene chloride in olive oil by gavage at dose rates of 0.5, 5, 10, and 20 mg/kg/day, 4-5 days/week, for 52 weeks. No increase in mammary tumors was observed; one rat developed a Zymbal gland carcinoma at the 10-mg/kg dose.
Although Lee et al had reported that their mice were exposed at 55 ppm of vinylidene chloride, Maltoni et al found that exposure of his mice at concentrations of 50 ppm or more produced unacceptable mortalities within 4 days. Maltoni's mice exposed at 25 ppm of vinylidene chloride for 12 months did not develop angiosarcoma of the liver, whereas Lee et al stated that 3 of 72 mice exposed at 55 ppm for 12 months did develop this tumor. The discrepancies in the results of these studies may be caused by differences in the strains of animals used, as discussed previously. Maltoni et al stated that neither inhalation exposures nor ingestion experiments with rats demonstrated a specific carcinogenic effect from vinylidene chloride such as had been demonstrated for vinyl chloride (Zymbal gland carcinoma and angiosarcoma of the liver). It should be remembered that these data are preliminary for rats, and that a longer followup period may reveal that these tumor types will be formed. The authors also stated that mice were susceptible to a specific carcinogenic effect, adenocarcinoma of the kidney, and that this species difference probably resulted from a more "favorable metabolic condition for expressing the oncogenic potentiality." The authors pointed out that further research using different species was necessary to evaluate this hypothesis.

# (c) Vinyl Bromide

A comparison of the frequencies of angiosarcomas at these exposure concentrations in this experiment with the frequencies of angiosarcoma observed in rats exposed to vinyl chloride after a similar followup period suggests that vinyl bromide may be more effective in inducing this tumor than vinyl chloride.
Further comparison with the vinyl chloride data indicates that after a longer observation period, angiosarcoma will be seen in those rats exposed to vinyl bromide at lower concentrations.
# (d) Vinyl Fluoride and Vinylidene Fluoride
No reports examining the carcinogenic potential of vinyl fluoride or vinylidene fluoride have been located.
# Mutagenicity

Each of the vinyl halides has been shown to be mutagenic in one or another test system. The mutagenic activity of these compounds increases with metabolic activation by mammalian microsomal systems, showing that metabolites as well as the parent compounds have mutagenic potential.
# (a) Vinyl Chloride
Because the reported carcinogenic action of vinyl chloride in workers attracted considerable attention, several investigators became interested in evaluating its mutagenicity and that of its known or presumed metabolites. Ames et al showed that many known carcinogens activated by mammalian liver enzymes produced back mutations in auxotrophs of the bacterium Salmonella typhimurium. Studies of the genetic activities of vinyl chloride and some of its metabolites have been performed in several bacterial species, yeasts, Neurospora, Drosophila, mammalian cell cultures, and mice. Strain TA1535 reverts to histidine independence by a base-pair substitution, while in the other strains used, TA1536, TA1537, and TA1538, reversion results from addition or deletion of a base pair (frameshift mutation).
In addition to their inability to synthesize histidine, these test strains include a mutation that increases the permeability of the cell and another that decreases the repair of damaged DNA; these alterations enhance the sensitivity of the bacteria to certain mutagenic agents.
Because bacteria may not duplicate mammalian metabolic processes in activating potentially mutagenic substances, the substances under test were incubated with a 9,000 x gravity (G) microsomal extract from the livers of Sprague-Dawley rats.
For some experiments, a microsomal system was produced by mixing the microsomal supernatant with a dihydronicotinamide adenine dinucleotide phosphate (NADPH) generating system.
In preliminary tests with strain TA1535, Rannug and coworkers bubbled vinyl chloride gas through either water or a suspension containing the microsomal system, or they exposed the bacteria for 75 minutes to an atmosphere containing 11% (281.6 g/cu m) vinyl chloride. The vinyl chloride was analyzed by gas chromatography and mass spectroscopy and was reported to be of "very high" purity, containing only trace amounts of isopropanol.
In a subsequent experiment, strain TA1535 was exposed to an atmosphere of 20% vinyl chloride with the microsomal system or with the microsomal supernatant without the NADPH-generating system, and to vinyl chloride alone, for intervals of 30, 60, or 90 minutes.
Controls were incubated with the microsomal system without vinyl chloride.
Finally, all four strains were exposed for 90 minutes under similar conditions.
To evaluate the lethality of the vinyl chloride preparations, identically treated bacteria were cultured on a medium containing histidine, and the number of surviving cells was determined.
To evaluate the mutagenicity of the vinyl chloride preparations, histidine-independent mutant colonies were counted on five plates of minimal medium for each test, and the number of mutants/100 million surviving cells was compared with the spontaneous mutation rate using Student's t-test. Exposure of strain TA1535 to vinyl chloride and the microsomal supernatant in the absence of the NADPH-generating system also caused no significant change in the mutation rate.
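As a sketch of the calculation described here — mutant colonies from five plates expressed per 100 million survivors and compared with the spontaneous rate by Student's t-test — the following uses invented plate counts solely to make the procedure concrete:

```python
# Sketch of the Rannug et al. analysis; plate counts and survivor
# numbers are invented, only the procedure follows the text.
import numpy as np
from scipy.stats import ttest_ind

survivors = 2.0e8  # viable cells per test (invented)
exposed = np.array([46, 52, 49, 55, 48])   # mutant colonies, five plates (invented)
control = np.array([21, 19, 24, 20, 22])   # spontaneous mutants (invented)

exposed_rate = exposed / (survivors / 1e8)  # mutants per 1e8 survivors
control_rate = control / (survivors / 1e8)

t_stat, p_value = ttest_ind(exposed_rate, control_rate)
print(f"exposed {exposed_rate.mean():.1f} vs control {control_rate.mean():.1f} per 1e8; P = {p_value:.4f}")
```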
In the final experiment, involving all four test strains, only TA1535 showed a mutagenic response to vinyl chloride.
At the concentrations tested, vinyl chloride with the microsomal system did not affect bacterial survival on a complete medium, although vinyl chloride alone reduced the survival rate slightly.
Rannug et al concluded that vinyl chloride was mutagenic in Salmonella only after metabolic activation by mammalian microsomal enzymes. They suggested that the most plausible primary metabolite of vinyl chloride would be chloroethylene oxide, which could be formed in the NADPH-dependent oxidation by microsomal enzymes.
Since only strain TA1535 was affected, the mutagenically active metabolites of vinyl chloride appeared to be capable of causing base-pair substitutions.

Fractions of liver, kidney, and lung tissue were prepared from BD-IV rats and OF-1 mice, some of which had been pretreated with sodium phenobarbital to increase hepatic microsomal enzyme activity. Microsomal supernatants were prepared by centrifuging homogenized tissue at 9,000 x G.
For some experiments, the microsomal supernatant was then centrifuged at 100,000 x G to produce a purified microsomal fraction and a supernatant containing microsomal protein.
The 9,000 x G microsomal supernatants from four human liver biopsy specimens were also tested.
In the presence of a microsomal fraction derived from mice treated with sodium phenobarbital, with an NADPH-generating system, the mutation rate in TA1530, the most sensitive of the strains tested, increased. The prevalence of mutations in the presence of the microsomal system was seven times as high as with vinyl chloride alone after 1.5 hours, but only twice as high after 48 hours; the difference in the mutation rate induced by vinyl chloride alone and in the presence of a microsomal system reached a plateau after about 9 hours of exposure.
Bartsch et al [146] attempted to characterize the enzymes involved in vinyl chloride mutagenicity by testing the activity of various mouse liver fractions on strain TA1530. In the absence of the NADPH-generating system, the 9,000 x G microsomal supernatant with 2% vinyl chloride produced no increase above the mutation rates produced by exposure to vinyl chloride alone for periods of up to 48 hours. Purified microsomes with an NADPH-generating system produced an increase in the vinyl chloride-induced mutation rate of about half of that obtained with the 9,000 x G microsomal system after 48 hours.
The 100,000 x G liver protein supernatant (cytosol) did not increase the mutagenic response compared with that induced by vinyl chloride alone. Addition of alcohol dehydrogenase and NAD+ to the microsomal systems did not affect the mutation rate, although alcohol dehydrogenase would be expected to convert any chloroethanol produced to chloroacetaldehyde. This compound is known to be mutagenic to TA1530.
Comparison of tissues from various sources showed that rat liver microsomal supernatant was comparable to that from mouse liver in activating the mutagenic response to vinyl chloride. Pretreating the animals with phenobarbital increased the activity of the liver microsomal supernatant 15-40%. Kidney and lung fractions from either pretreated or untreated animals increased the mutagenic activity of vinyl chloride only marginally over control values.
One of the four human liver specimens tested was nearly twice as active as those of rat or mouse liver, while the remaining three were somewhat less active than liver tissue from phenobarbital-treated rats and mice.
Based on data from pretreated animals, the relative mutagenic activities of tissue fractions for strain TA1530 were: mouse liver, 100%; rat liver, 80%; mouse kidney, 20%; rat kidney, 16%; mouse and rat lung, 7%; and human liver, 170, 70, 64, and 46%.
Bartsch et al concluded that vinyl chloride was mutagenic to Salmonella in the absence of mammalian microsomal enzymes, probably through the action of breakdown products produced either by bacterial enzymes or nonenzymatically. They pointed out that their findings strongly supported the enzymatic formation of mutagenic vinyl chloride metabolites. Since the 9,000 x G microsomal extract gave a stronger response than purified microsomes, the authors concluded that soluble liver proteins (cytosol) played a role in the metabolic activation of vinyl chloride, either by involvement in a two-step activation mechanism or by prolonging the viability of the microsomal enzymes.
In a subsequent review paper, Bartsch noted that the wide variation in human liver enzymatic activity, which was confirmed in mutagenicity testing of N-nitrosomorpholine, indicated that some individuals are genetically liable to a higher risk from exposure to carcinogens. Bartsch suggested that those estimating acceptable environmental levels of such substances should consider the possible risk for the most susceptible individuals.
In 1976, Garro et al reported studies of the modification of the mutagenic activity of vinyl chloride on Salmonella typhimurium TA1530 by suspensions of untreated and Aroclor 1254-pretreated rat or mouse hepatic microsomes in the presence of an NADPH-generating system and by a system generating free radicals from riboflavin and N,N,N',N'-tetramethylethylenediamine (TMED) under irradiation from fluorescent lamps. They described TMED as an accelerator of vinyl chloride photopolymerization. Although incubation of vinyl chloride with native microsomal suspensions increased its mutagenic activity by about 65%, incubation of this chemical with similar suspensions that had been heated to destroy their enzymatic activity also increased its mutagenic activity, although at a somewhat reduced level. Mutagenesis in the presence of liver extracts was not NADPH dependent.
Incubation of vinyl chloride with the free-radical generating system apparently increased its mutagenic activity by nearly tenfold.
The authors took these findings to indicate "that the stimulatory effect of liver extracts on the mutagenic activity of vinyl chloride in the Salmonella auxotroph reversion test cannot be ascribed to enzymatic activation by a microsomal mixed function oxidase and...that the mutagenic effect of vinyl chloride may involve a free radical mechanism." Although Garro et al found that riboflavin and the tertiary amine had an effect on the mutagenic activity of vinyl chloride for Salmonella typhimurium TA1530 only in the presence of light, they made no measurements that would confirm the relationship of the presence of free radicals in their photoactivated test system to increased mutagenic activity of vinyl chloride. Also, they did not rule out the possibility that the light itself altered the apparent mutagenic activity of vinyl chloride. These two uncertainties leave the free-radical interpretation open to question.

In studies by Loprieno et al with Schizosaccharomyces pombe, exposure at a vinyl chloride concentration of 48 millimolar for 60 minutes produced a mutation frequency at the five loci of 62.43/10,000 surviving cells, compared with a spontaneous mutation rate of 2.00/10,000. In studies with Saccharomyces, treatment with 48 millimoles of vinyl chloride for 300 minutes in the presence of purified microsomes produced a gene conversion frequency of 8.47/100,000/locus in the ade2 system and 4.36/100,000/locus in the trp5 system; spontaneous frequencies for these systems were 0.49/100,000/locus and 0.85/100,000/locus.
In the host-mediated assay, Loprieno et al found that Schizosaccharomyces showed a mutagenic response after incubation for 12 hours in the peritoneum of mice given vinyl chloride orally at 740 mg/kg. The observed mutation frequency was 6.89 (±0.60)/10,000 cells compared with a control rate of 1.33 (±0.19)/10,000 in yeast incubated in mice not given vinyl chloride; regression analysis showed the effect to be significant at the 1% level.
Comparing the mutagenic effectiveness of the 16 millimoles of vinyl chloride used in vitro with that of 11.2 millimoles of vinyl chloride administered to the mice for in vivo studies, the authors concluded that in vitro treatment was more effective. They attributed this to the fact that, in the host-mediated assay, the presence of an active concentration of the purported mutagenic metabolite was minimal as a consequence of its short half-life.
In a later study with Saccharomyces cerevisiae, Shahin
Verburgt and Vogel stated that the data, which showed an increasing frequency of lethal effects between concentrations of 0 and 10,000 ppm, demonstrated that the mutagenic activity of vinyl chloride was concentration dependent.
Since exposure at 30,000 and 50,000 ppm did not significantly increase the mutation frequency above that seen at 10,000 ppm, they inferred that above a certain concentration (between 850 and 10,000 ppm) Drosophila was incapable of metabolically "activating" (for mutagenesis) further vinyl chloride, and that the enzymatic mechanisms were saturated.
Anderson et al conducted a dominant lethality study with mice to determine whether vinyl chloride can induce genetic effects. Male CD-I mice were exposed in groups of 20 to vinyl chloride (purity not described) at concentrations of 3,000, 10,000, and 30,000 ppm (7.68, 25.6, and 76.8 g/cu m) 6 hours/day for 5 days.
A concentration of 30,000 ppm was chosen as the highest exposure level because it had been shown in preliminary tests to be in the "toxic range" and because it was desired that the maximum tolerated dose or higher be included in the study protocol.
Control mice were exposed to air alone. Two positive control groups of 15 and 25 mice were given 200 mg/kg of ethyl methanesulphonate orally for 5 days or one ip dose of 200 mg/kg of cyclophosphamide on the 5th day.
After the exposure period, each of the surviving males, then 10-12 weeks old, was mated with two 8- to 10-week-old females each week for 8 consecutive weeks. The females were killed 13 days after the assumed date of mating, and their uteri were examined for live implantations, early fetal deaths, and late fetal deaths.
Of the male mice, only those exposed to vinyl chloride at 30,000 ppm showed significant mortality, with only 9 of 20 mice surviving the 5-day exposures.
In females mated to vinyl chloride-exposed males, the frequency of pregnancy and the numbers of early and late fetal deaths did not differ significantly from untreated control values. The number of implantations/pregnant female was not affected by vinyl chloride, except that it was significantly below the negative control value in females mated during the 4th week to mice that had been exposed at 30,000 ppm (10.00 vs 12.38, P<0.05); however, implantation frequency in this group was slightly above control values for 3 of the 8 weeks of mating. Both positive control groups were significantly higher than negative (air) controls in all the indicators of dominant lethality examined, attesting to the sensitivity of the system.
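A conventional summary for data of this kind is a dominant-lethality index based on the reduction in live implantations relative to controls. The formula below is a standard one for the assay, not a statistic the authors report; applied to the week-4 figures quoted above (10.00 vs 12.38), it gives about 19%:

```python
# Standard dominant-lethality index (not reported by the authors).
def dominant_lethality(live_treated: float, live_control: float) -> float:
    """Percent reduction in live implantations relative to controls."""
    return (1.0 - live_treated / live_control) * 100.0

print(f"{dominant_lethality(10.00, 12.38):.1f}% dominant lethality")  # ~19.2%
```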
The authors concluded that vinyl chloride at the stated exposure concentrations was not mutagenic in mice as measured by the dominant lethal test.
This suggests that the active mutagenic compound in the metabolism of vinyl chloride did not affect the germinal cells of these mice.
Several investigators have attempted to obtain information on the mechanism of mutagenic and carcinogenic activity of vinyl chloride by testing its known or suspected metabolites for their ability to cause mutations in microorganisms.
Malaveille et al used Salmonella strain TA1530 to evaluate the mutagenic activity of three presumed metabolites, chloroacetaldehyde, chloroethanol, and chloroethylene oxide (chlorooxirane), and a known urinary metabolite, chloroacetic acid. The substances were tested, with and without a liver microsomal system from phenobarbital-pretreated mice, at concentrations of 40, 4.0, and 0.4 µmol/ml of medium; chloroethylene oxide was also tested at 0.04 µmol/ml. Chloroacetaldehyde was highly toxic to bacteria at each test concentration used, reducing survival to less than 0.006% of control levels. Chloroacetic acid was toxic at concentrations of 4 and 40 µmol/ml and was the only substance tested that showed no mutagenic activity. Chloroacetaldehyde caused a sixfold increase over the spontaneous rate at a concentration of 4 µmol/ml in combination with the microsomal system; this substance also showed direct mutagenic activity in the absence of mammalian microsomes. At a concentration of 40 µmol/ml, chloroethanol, which did not affect bacterial survival, increased the mutation frequency more than 10 times in the presence of the microsomal system and about 6 times in its absence; at 4 µmol/ml, chloroethanol approximately doubled the mutation frequency of the bacteria, and its mutagenic activity was apparently unaffected by the microsomal system. Chloroethylene oxide was tested without microsomal activation only. At a concentration of 0.4 µmol/ml, it reduced bacterial survival to 11% and produced a sixfold increase in the mutation frequency; at 0.04 µmol/ml, it caused no increase over spontaneous mutation frequency. These findings indicate that chloroethylene oxide is a far more effective mutagen than chloroacetaldehyde, supporting the suggestion of Henschler and his colleagues that the unstable oxirane was directly involved in vinyl chloride mutagenesis.
McCann et al compared the mutagenicity of vinyl chloride with that of its probable metabolites chloroacetaldehyde and chloroethanol, both with and without activation by a rat liver microsomal system.
They used Salmonella strains TA1535 and TA100, the latter being identical with TA1535 except that it contains a factor that interferes with DNA repair, thus increasing its sensitivity to many mutagens.
Bacteria were exposed to 20% vinyl chloride in air for 3-9 hours, while chloroacetaldehyde and chloroethanol were added directly to the media in concentrations of up to 30 μg/plate and 1-3 mg/plate, respectively.
The mutagenic responses of strains TA1535 and TA100 to vinyl chloride were similar, and McCann et al observed very little activation by a microsomal system from phenobarbital-pretreated rats with exposure periods of up to 9 hours.
From the authors' graphs, the direct action of vinyl chloride on strain TA100 produced about 25 revertants/plate above spontaneous levels (which averaged 150 revertants/plate) with 3 hours of exposure and 200 at 9 hours; the corresponding levels with the microsomal system added were 65 and 225 revertants/plate above control levels. However, the authors added in a footnote that increasing the concentration of microsomes in the system had produced a twofold increase over the direct activity of vinyl chloride in both TA100 and TA1535.
At the concentrations tested, chloroacetaldehyde, with no microsomal system, effectively reverted strain TA100 to histidine independence but did not affect strain TA1535. The mutagenic activity in strain TA100 increased with the concentration of chloroacetaldehyde, reaching 235 revertants/plate above the spontaneous rate at a concentration of 30 μg/plate. Chloroethanol increased the mutagenic response of strain TA100 to over twice the spontaneous levels and showed a trace of activity with strain TA1535. However, addition of the microsomal supernatant caused a small increase in the mutagenic response of strain TA1535 and a much greater increase in that of strain TA100; quantitative data were not given. The authors noted that an NADPH-generating system was not necessary for this activation. Comparing the mutagenic activity of the three substances in strain TA100 on an equimolar basis, the authors found that the number of revertants/μmol, with control levels subtracted, was 1.0 for vinyl chloride, 0.6 for chloroethanol, and 746 for chloroacetaldehyde.
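For readers unfamiliar with this normalization, the equimolar comparison is simply the background-subtracted revertant count divided by the molar dose per plate. The sketch below illustrates the arithmetic only; the plate counts shown are illustrative assumptions, not the original data of McCann et al, and the helper function is hypothetical.

```python
# Minimal sketch of equimolar Ames-test potency: background-subtracted
# revertants divided by the micromoles of test substance on the plate.
# The numbers below are illustrative, not the original study's data.

CHLOROACETALDEHYDE_MW = 78.5  # g/mol; micrograms / (g/mol) = micromoles

def net_revertants_per_umol(observed, spontaneous, dose_ug, mw):
    """Revertants per micromole of test substance, background subtracted."""
    dose_umol = dose_ug / mw
    return (observed - spontaneous) / dose_umol

# Example: 30 ug/plate producing 235 revertants above a 150-revertant
# spontaneous background works out to roughly 600 net revertants/umol.
print(net_revertants_per_umol(385, 150, 30.0, CHLOROACETALDEHYDE_MW))
```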
Despite the high mutagenic activity of chloroacetaldehyde in strain TA100, McCann and coworkers concluded that this substance was probably not the active metabolite involved in vinyl chloride mutagenicity. Vinyl chloride, with or without microsomes, was about equally active in the two bacterial strains tested, while chloroacetaldehyde affected only strain TA100. By contrast, activation of chloroethanol by a microsomal system produced a relatively large increase in the mutation rate of TA100, indicating that chloroacetaldehyde might be the mutagenically active metabolite of chloroethanol. The authors suggested that chloroethylene oxide might be the metabolic intermediate responsible for vinyl chloride mutagenicity. This conclusion is supported by the findings of Malaveille et al and of Rannug et al that chloroethylene oxide was several times more active than chloroacetaldehyde in the nonrepair-deficient strains TA1530 and TA1535.
In 1976, Rannug et al compared the mutagenicity of chloroethylene oxide, chloroacetaldehyde, 2-chloroethanol, and chloroacetic acid in Salmonella typhimurium strain TA1535. The test substances were added to the bacteria before plating, in concentrations ranging from 0.1 to 1.5 millimolar; 2-chloroethanol and chloroacetic acid were also studied at concentrations of up to 1 M. Ethylene oxide, described as a "well known mutagen," was used as a positive control. The authors also tested vinyl chloride at a concentration of 2% (51.2 g/cu m) in air for 3 hours, using the procedure described in their earlier study.
Chloroethylene oxide showed both a definite toxic effect and strong mutagenic activity. At a concentration of 0.75 millimolar, it produced 180 revertants/100 million surviving cells, 60 times the spontaneous mutation rate. Chloroacetaldehyde also showed a mutagenic effect in this concentration range but was only about 5% as effective as chloroethylene oxide on a molar basis. 2-Chloroethanol and chloroacetic acid showed no mutagenic effect up to 1.5 millimolar and were therefore retested at higher concentrations. 2-Chloroethanol produced a weak mutagenic response only at 1 molar. Chloroacetic acid was highly toxic to bacteria at concentrations up to 0.5 molar, and no increase in mutagenic response could be detected. Ethylene oxide, the positive control substance, did not produce an increase in mutagenic response at concentrations below 5 millimolar, and ethylene oxide at 95.5 millimolar produced an increase in the mutation rate equivalent to that produced by chloroethylene oxide at a concentration of 0.15 millimolar.
In their experiments with 2% (51.2 g/cu m) vinyl chloride, Rannug et al found that vinyl chloride at this concentration in the presence of a microsomal system produced 10.0 ±0.9 (SE) revertants/100 million cells, significantly more (P<0.001) than the control rate of 3.8/100 million cells; neither vinyl chloride alone nor the microsomal system alone produced a significant increase.
Noting the difficulty of comparing vinyl chloride mutagenicity to that of the other compounds because of the difference in experimental conditions, the authors estimated that only if all the vinyl chloride was converted to 2-chloroethanol would the concentration of this compound be great enough to account for the observed mutagenic activity of activated vinyl chloride.
Calculating that chloroethylene oxide was 10,000-15,000 times as mutagenic as ethylene oxide, Rannug et al concluded that this was in reasonable agreement with the ratio of the preliminary rate constants of the two compounds for reaction with the appropriate nucleophiles.
They considered this to be an indication that chloroethylene oxide acts in the same way as ethylene oxide, as a monofunctional alkylating agent. The authors concluded on the basis of interpretations by Hussain and Osterman-Golkar that chloroacetaldehyde was far more active as a mutagen than would be expected from its reactivity as an alkylating agent.
In an addendum to the paper by Rannug et al, Hussain and Osterman-Golkar analyzed the data of Rannug et al on a kinetic basis.
They noted that the higher-than-expected mutagenic activity of chloroethylene oxide, based on comparison of its rate constant for alkylation with that of ethylene oxide, indicated "a certain role of the aldehyde groups." Chloroacetaldehyde, however, was several orders of magnitude more effective than expected, indicating "a reaction mechanism different from simple alkylation."
In 1977, Loprieno et al tested vinyl chloride metabolites for mutagenic activity in yeasts.
Chloroethylene oxide, 2-chloroacetaldehyde, and 2-chloroethanol, added to the media in various concentrations, were tested in vitro; 2-chloroacetaldehyde was also tested in the host-mediated assay.
Test organisms and experimental procedures were the same as those used in their previous study.
Chloroethylene oxide showed the highest mutagenic activity in all systems examined. At a concentration of 0.1 millimolar, the forward mutation frequency in S. pombe was 340 times the control rate, and at 1 millimolar the gene conversion rate in S. cerevisiae was 40-50 times that in controls. 2-Chloroethanol in concentrations up to 50 millimolar showed no mutagenic activity in yeast cells, with or without microsomes.
2-Chloroacetaldehyde showed a weak mutagenic effect in vitro, increasing mutation rates 2- to 7-fold at concentrations up to 12.5 millimolar.
When administered to male Swiss albino mice (25 g) in oral doses of 250 mg/kg, 2-chloroacetaldehyde produced no increase in the mutation rate of S. pombe incubated in the peritoneum for 3-6 hours.
Elmore et al evaluated vinyl chloride and its metabolites in a study designed to permit accurate comparisons of their mutagenic activity and to provide additional insight as to the mechanisms of this activity. Since previous investigators had not ascertained the purity of the metabolites used, Elmore and coworkers tested pure forms of chloroethanol, chloroacetic acid, chloroethylene oxide, and chloroacetaldehyde.
The last compound can exist in combinations of four forms, depending on its preparation.
The authors therefore tested pure preparations of the monomer, monomer hydrate, and trimer, plus the 50:50 mixture of monomer and monomer hydrate formed when the monomer is dissolved in water or physiologic systems.
When the wild-type strain was treated with chloroacetaldehyde for 15 minutes before the DNA was extracted, there was a major depression of biologic activity in the transforming DNA, as evidenced by a decrease of 50% or more in the number of transformants produced. This depression showed genetic-marker specificity, reducing transformation at some loci by over 90%. The authors noted that DNA segments that have previously been shown to be associated with macromolecular structures such as the cell wall or cell membrane appeared to be selectively protected from attack by chloroacetaldehyde.
The addition of a mammalian microsomal system did not significantly alter the effect of chloroacetaldehyde on transformation efficiency. After exposure, the bacteria were cultured for up to 48 hours, and histidine-revertant colonies were counted on each plate.
Huberman et al
From one to four experiments were conducted at each concentration, each using a pool of four mouse livers; bacteria were plated in triplicate for each experiment.
The authors also compared the efficiency of liver, kidney, and lung microsomal systems from male OF-1 mice and female BD-VI rats in inducing mutagenic response to vinylidene chloride in strain TA100.
Both Salmonella strains showed a positive mutagenic response to vinylidene chloride in the presence of a mammalian microsomal system.
No mutagenic activity was observed in the absence of the NADPH-generating system.
In strain TA100, exposure to vinylidene chloride for 4 hours at 0.2% produced an average of about 300 revertants/plate, compared with control levels of less than 50. At 2% vinylidene chloride, the number of revertants/plate in strain TA100 reached 500 ±23 (SE) above control levels, while at 20% vinylidene chloride, the number was only 330 ±29.
Strain TA1530 followed the same pattern but was somewhat less sensitive.
The authors suggested that the reduction in mutagenic response at the highest exposure concentration might have resulted from inhibition of the microsomal enzymes responsible for the metabolic activation of vinylidene chloride. The authors stated that the mutagenic activity of vinyl chloride, which caused a 563% increase in mutations at the arg+ locus, was "several times higher" than that of vinylidene chloride. It should be noted, however, that the measured concentration of vinyl chloride in the medium (10.6 millimolar) was over four times that of vinylidene chloride. In addition, the difference in exposure technique required for the highly insoluble gaseous vinyl chloride necessitates that such comparisons be made with caution. Since Bartsch et al exposed bacteria to both vinyl chloride and vinylidene chloride in air, and thus determined the concentration in the medium under similar conditions, their comparisons of relative mutagenic activity are probably more accurate.
Furthermore, Henschler et al pointed out in their discussion that vinylidene chloride is the most polar of the chlorinated ethylenes they tested, and its oxirane, which has not been successfully synthesized, would be expected to be the least stable. Thus, it should be expected to have a higher mutagenic activity than vinyl chloride.

At 20% vinyl bromide in the presence of a microsomal system, there was an average of 1,129 revertants/plate in strain TA100 and 959/plate in TA1535; without the activating system, the mutation rates were 620/plate for TA100 and 721/plate for TA1535.
Control rates of about 130/plate for TA100 and less than 20/plate for TA1535 were not significantly affected by the addition of the microsomal system alone.
Although Simmon and Mangham did not conduct simultaneous studies of vinyl chloride or measure the concentration of vinyl bromide produced in the medium, they postulated that vinyl bromide was slightly more mutagenic by these procedures than vinyl chloride. No data were offered in support of this comparison.
The authors' findings do support a conclusion that vinyl bromide induces mutations in Salmonella strains TA100 and TA1535 without microsomal activation, although microsomes enhance its mutagenic activity. The mutagenic responses of both strains were concentration-dependent in a nearly straight-line relationship, which showed a tendency to level out at the highest concentration tested (20%). Since Simmon and Mangham did not provide data on bacterial survival, it is not possible to determine whether this saturation resulted from a toxic effect of vinyl bromide at high concentrations.
In a 1976 review, Bartsch

Despite this apparent observation of extremely high mutation rates, the authors were unable to isolate any auxotrophic strains from the treated cultures.
They presented some evidence for the induction of heritable changes in the fermentation patterns of certain carbohydrates, although it was not unequivocal.
While it is possible that qualitative changes in the mutation frequencies may have occurred, the failure to isolate auxotrophic strains from the treated cultures suggests that any such effects were minimal. Also, the selection method used by these authors does not allow the direct measurement of either the spontaneous or induced mutation rates or frequencies of a culture, although it may greatly enrich the ratio (frequency) of mutated cells to normal cells in the culture. Although strain TA100 should also identify any potential mutagen capable of inducing base-pair substitution, the results were negative for this strain in these tests.
The most plausible explanation for this apparent discrepancy is that the presence of the resistance transfer factor in TA100 was in this case protective.
The results also indicate that some product of the metabolism of vinylidene fluoride is more mutagenic than the parent compound.
Watson also tested the ability of vinylidene fluoride to transform BALB/3T3 cells. Cells were exposed for various periods ranging from 0 to 48 hours, with and without tissue culture media. Only the cells exposed with culture media showed an elevated number of transformations above background; however, these elevations were not significant.
# Metabolism
Metabolic pathways have not been completely and convincingly delineated for any of the vinyl halides. That for vinyl chloride apparently is nearest to completion, but, even here, several key steps in the initial reactions are only postulated and have not been conclusively proven by experimentation designed specifically to elucidate intermediate metabolic products in vivo.
The proposed pathways for vinylidene chloride are sketchy at best, while the determinations of pathways for vinyl bromide, vinyl fluoride, and vinylidene fluoride have only just begun.
# (a) Vinyl Chloride
Vinyl chloride metabolism has been studied extensively since the discovery of vinyl chloride-induced angiosarcoma in humans in early 1974.
The major urinary excretion products of vinyl chloride have been characterized following both inhalation and oral exposures, and the compound has been shown to be readily absorbed and widely distributed in body tissues and to be metabolized into several major and minor metabolites.
(1) Distribution and Elimination

Hefner et al, in 1975, found that the metabolism of vinyl chloride during the first 15 hours after exposure of three male rats to 14C-(1,2-)vinyl chloride at 49 ppm (125.4 mg/cu m; a total estimated intake of 0.49 mg/kg) for 65 minutes resulted in the formation of polar metabolites that were excreted predominantly in the urine (58% of the 14C activity). Lesser amounts of radioactivity were excreted in the feces (2.7%) and in the expired air as carbon dioxide (9.8%). At 75 hours after administration, 67.1% of the radioactivity had been excreted in the urine, 3.8% in the feces, and 14.0% as expired carbon dioxide.
Trace amounts of radioactivity (0.02% of that administered) were eliminated in expired air as unchanged vinyl chloride.
A small but significant amount of radioactivity (1.6%) was retained in the liver for as long as 75 hours after exposure.
The skin retained 3.6% of the radioactivity, 0.2% was found in the kidneys, and 7.6% was found in the remaining carcass.
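These recovery figures can be checked for internal consistency by simple addition; assuming the tissue percentages refer to the same 75-hour time point as the excretion figures, the administered label is nearly fully accounted for. The bookkeeping sketch below is illustrative and not part of the original study.

```python
# Mass-balance check on the 75-hour 14C recovery figures quoted above.
# Values are percentages of the administered radioactivity.
recovered = {
    "urine": 67.1,
    "feces": 3.8,
    "expired carbon dioxide": 14.0,
    "expired unchanged vinyl chloride": 0.02,
    "liver": 1.6,
    "skin": 3.6,
    "kidneys": 0.2,
    "remaining carcass": 7.6,
}

total = sum(recovered.values())
print(f"total recovery: {total:.1f}% of administered 14C")  # ~97.9%
```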
Hefner et al also reported that the in vivo kinetics of uptake (metabolism) of inhaled vinyl chloride, determined for four male rats exposed together in a chamber at initial concentrations ranging from 50.5 to 1,167.0 ppm (129.3 to 2,987.5 mg/cu m) for 52.5-356.3 minutes, differed at different concentrations of vinyl chloride.
For concentrations of 50-105 ppm, the apparent first-order rate constant derived after seven separate exposures was 0.00804 ±0.0034/minute, corresponding to a half-life of 86 minutes. After five separate exposures to vinyl chloride at concentrations ranging from 220 to 1,167 ppm, the first-order rate constant was determined to be 0.00265 ±0.00135/minute, a half-life of 261 minutes. Based on these results, Hefner et al concluded that there were different pathways for vinyl chloride metabolism and that at least one of them was readily saturated, so that the amounts of substrate degraded by the individual pathways were dependent on the concentration of vinyl chloride presented to the organism.
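The half-lives quoted by Hefner et al follow directly from the first-order relation t1/2 = ln 2 / k; the short sketch below simply reproduces that conversion (the small discrepancy at the second value is rounding).

```python
# First-order kinetics: half-life t1/2 = ln(2) / k.
# Reproduces the half-lives quoted above from the two rate constants.
import math

for k_per_min in (0.00804, 0.00265):
    half_life = math.log(2) / k_per_min
    print(f"k = {k_per_min}/min -> t1/2 = {half_life:.1f} min")
# k = 0.00804/min -> t1/2 = 86.2 min  (quoted as 86 minutes)
# k = 0.00265/min -> t1/2 = 261.6 min (quoted as 261 minutes)
```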
However, these variations in apparent first-order constants are equally consistent with the existence of a single saturable process and do not provide any conclusive experimental evidence for the existence of multiple biochemical pathways.

For the liver, skin, carcass, muscle, lungs, and kidneys, μg equivalents of radioactivity per g of tissue were higher at 1,000 ppm than at 10 ppm. When the data were normalized for metabolized vinyl chloride, an apparent increase, although not a statistically significant one, was observed in the 14C activity of the liver and skin at 1,000 ppm.
Watanabe et al reported that the largest amounts of radioactivity in rats exposed
In the rats exposed at 10 ppm, a greater percentage of the total recovered radioactivity was excreted in the urine and a smaller percentage in the expired air than in rats exposed at 1,000 ppm. The apparent first-order rate constants for pulmonary excretion at 10 and 1,000 ppm were 0.034 ±0.002 and 0.031 ±0.01/hour, respectively, equivalent to half-lives of 20.4 and 22.4 hours, respectively.
The urinary excretion of radioactivity as a function of time for both concentrations of airborne vinyl chloride was nonlinear.
This could indicate that elimination was occurring from at least two compartments. From the initial linear portions of these data, apparent first-order rate constants were 0. Ninety-seven percent was excreted within 36 hours. After 36 hours, the excretion curves were quite variable and represented less than 3% of the total urinary activity; hence, no estimates of rate constants were made.

Hefner et al, in 1975, reported that when the whole body (excluding the head) of a male rhesus monkey was exposed to 14C-(1,2-)vinyl chloride at 7,000 ppm (17.9 g/cu m) for 2 hours, and a second monkey was similarly exposed at 800 ppm (2.1 g/cu m) for 2.5 hours, radioactivity could be detected only in the liver, bile, and kidneys. They also reported that very little gaseous vinyl chloride was absorbed percutaneously, 0.023% and 0.031% of the total available radioactivity at 7,000 and 800 ppm, respectively. The authors stated that, nevertheless, about 100 times more vinyl chloride was metabolized at the higher oral dose than at the lower one.
Gehring et al reported that metabolism of vinyl chloride by rats did not increase proportionately as the concentrations
At an iv dose of 250 μg/kg, 99% of the vinyl chloride was excreted unchanged in expired air within an hour after injection, including 80% within 2 minutes. The excretion profile of vinyl chloride after a single ip injection at the low dose was intermediate between that occurring with oral or iv administration.
The authors suggested that some of the vinyl chloride in blood was excreted unchanged through the lungs and some was absorbed into the hepatic-portal system and metabolized by the liver.
From these data, the authors concluded that the change in excretion pattern between high and low doses was due to a "saturable drug metabolism and to a highly efficient arterial-alveolar transfer of unchanged vinyl chloride from systemic blood that leaves a relatively low concentration of material available for biotransformation in successive passes through the liver."
In another experiment by Green and Hathway, three rats that received 3, 30, or 300 mg/kg/day of nonradioactive vinyl chloride by oral intubation for 60 days were given single oral doses of 14C-(1,2-)vinyl chloride (0.6 mg/kg, containing 2 microcuries) on days 1 and 60. For the first 24 hours after administration of the radiolabeled vinyl chloride, urine and expired air were monitored for radioactivity. The authors concluded that chronic exposure for 60 days did not affect the excretion rate for a single oral dose of 14C-vinyl chloride. The authors also concluded that vinyl chloride did not induce its own metabolism and that excretion data for a single dose also applied to the chronic situation. They concluded that the proportion of 14C eliminated by various routes was concentration dependent. Moreover, the dominant route of excretion at both concentrations was in the urine, and the metabolites were predominantly nonvolatile or polar.
This finding supports the earlier conclusions of Hefner et al and Watanabe et al that the elimination of vinyl chloride metabolites was dose dependent. The authors also suggested that metabolism occurred at a reduced rate because body burden in terms of equivalents of radioactivity increased by only 27-fold as the concentration of vinyl chloride was increased from 10 to 1,000 ppm. They concluded that this also indicated that the primary metabolic pathway for vinyl chloride was saturable at high concentrations, specifically at 1,000 ppm. This work, in addition to the work of Hefner et al, tends to support a hypothesis that, at concentrations of vinyl chloride above 220 ppm, alternate metabolic pathways exist.
Hefner et al reported that the urinary metabolites from rats exposed for 4, 5, or 7 weeks to vinyl chloride at 5,000 ppm (12.8 g/cu m) were similar.
The polar metabolites in the urine appeared to be conjugated with glutathione or cysteine through covalent linkages to the sulfhydryl groups.
Chromatographic analysis suggested the presence of S-(2-hydroxyethyl)cysteine. Two theoretically possible metabolites, S-(2-chloroethyl)cysteine and S-(2-carboxymethyl)cysteine, were not detected, but Hefner et al postulated that S-(2-carboxymethyl)cysteine might not have been adequately resolved from the urine background.
When rats were exposed to vinyl chloride at 5,000 ppm for 9 weeks, chromatographic analysis of their urine showed the additional presence of monochloroacetic acid. Muller et al exposed male rats continuously to vinyl chloride at 1,000 ppm for 43 hours, and they found thiodiacetic acid as well as S-(carboxymethyl)cysteine in the urine.
To identify the probable urinary metabolites of vinyl chloride after its administration by oral intubation, Green and Hathway gave each of four rats three doses of 50 mg of 14C-(1,2-)vinyl chloride/kg at 3-hour intervals, for a total dose of about 10 microcuries of 14C. Major metabolites of vinyl chloride were identified by mass spectral analysis as thiodiglycolic acid, S-(2-chloroethyl)cysteine, and N-acetyl-S-(2-chloroethyl)cysteine.
Some have stated that one pathway, suggested to be active at low levels of absorption, begins with a hydration reaction whose product is chloroethanol. A proposed second pathway, suggested to be predominant at high absorption levels, involves the oxidation of vinyl chloride to chloroethylene oxide by microsomal enzymes. These two pathways converge with the formation of chloroacetaldehyde as the second step in each.
Investigators have also postulated that the metabolism of vinyl chloride involves the formation of free radicals.
Reynolds et al, in 1976, proposed a scheme in which an epoxide (chloroethylene oxide) was produced as a primary reactive metabolite of vinyl chloride.
This would involve the hepatic mixed-function oxidase system, particularly the cytochrome P-450 component.
A wide variety of oxidative reactions, including epoxidation, can be mediated by these enzymes, located in the membranes of the endoplasmic reticulum of liver cells, with NADPH serving as an electron donor. Rearrangement of the epoxide could then occur, producing a beta-chlorinated acetaldehyde, a diol, or a glutathione conjugate as a secondary metabolite. These products could be formed by interaction of the epoxide with epoxide hydrase or with glutathione epoxide transferase.
Watanabe et al reported that, at oral doses of 0.05 or 1 mg of 14C-(1,2-)vinyl chloride/kg, 14C was consistently eliminated in the urine as nonvolatile, polar metabolites and in the expired air as carbon dioxide. At an oral dose of 100 mg/kg, however, the primary route of excretion was by expiration of unchanged vinyl chloride. This indicated to the authors not only that metabolic pathways for vinyl chloride are dose dependent, but also that the process contains a saturable component. Comparing these results with their previous findings, the authors also concluded that the metabolic fate of vinyl chloride is independent of the route of administration.
In both oral and inhalation exposures to vinyl chloride, N-acetyl-S-(2-hydroxyethyl)cysteine was a major urinary metabolite.
One study has indicated that this compound could be formed from S-formylmethyl cysteine and S-formylmethyl glutathione, which at one time were considered either not to be formed or to be metabolized to S-carboxymethyl cysteine.
Although Green and Hathway identified S-(2-chloroethyl)cysteine and N-acetyl-S-(2-chloroethyl)cysteine as urinary metabolites of vinyl chloride in rats, the formation of these products may be an artifact of the method of separation (methanol derivatization produces the chloro-compound, whereas diazomethane derivatization produces the hydroxy-compound).
From data presented by other authors showing the formation of 14C-carbon dioxide after administration of 14C-vinyl chloride, Plugge and Safe proposed two additional alternatives for its pulmonary metabolism. The first assumed that the metabolism of vinyl chloride proceeded by the addition and transfer of a chloroacetyl group to coenzyme A and by subsequent metabolism in the Krebs cycle. The second scheme assumed formation of glycolate followed by oxidation to glyoxylate, which entered the C-2 and C-1 pools. By analogy with the metabolic pathway for chloroethylene oxide, the glycolate alternative seems to be the more feasible one. Chloroacetate has been detected as a metabolite of chloroethanol. Metabolism of either chloroethanol or chloroacetate yields S-carboxymethyl cysteine, thiodiacetate (thiodiglycolate), and small amounts of glycolate.
Chloroethanol has been reported to be transformed in vivo and in vitro via rat liver enzymes to S-carboxymethyl glutathione, which can also be derived from the two compounds S-formylmethyl glutathione and chloroacetate.
In another investigation, thiodiacetate was detected as a product of the metabolism of S-carboxymethyl cysteine; this conversion has been confirmed by Yllner.
Bonse and Henschler, in a 1976 review of the metabolism of chlorinated ethylenes, concluded that their oxidation via monooxygenases to corresponding oxiranes (epoxides) constituted the initial metabolic reaction. The authors suggested that electrophilic reactions or alkylation of cellular components were essentially responsible for the toxicity of the chlorinated ethylenes and that the other pathways were generally part of a detoxification mechanism. After rearrangement, additional metabolic steps, including oxidation of either aldehydic or alcoholic derivatives to carboxylic acids, were suggested. The authors also suggested hydrolysis of acyl chlorides to acids as an alternate pathway.
Green and Hathway concluded that chloroacetic acid was not a part of the major degradative pathway for vinyl chloride, but simply a byproduct of chloroacetaldehyde metabolism, unless the glutathione conjugation mechanism were inhibited, whereupon the conversion of chloroacetaldehyde to chloroacetic acid would become an important detoxification alternative. They further supported this hypothesis by administering several vinyl chloride metabolites to rats and showing that chloroacetaldehyde and S-(carboxymethyl)cysteine, but not chloroacetic acid, were in the direct pathway for the formation of thiodiglycolic acid. These authors also identified small quantities of N-acetyl-S-vinylcysteine as a urinary metabolite of vinyl chloride in rats, thus lending credence to the hypothesized pathway of an equilibrium between the chloro- and the hydroxy-ethyl glutathione derivatives, possibly through an episulfonium ion intermediate.
Plugge and Safe, in a 1977 review, postulated that the metabolism of vinyl chloride occurs both in vivo and in vitro via the mixed-function oxidase system, primarily through the cytochrome P-450 system, to an oxirane, in this case chloroethylene oxide.
The oxirane (epoxide) formed from vinyl chloride would be a strong electrophilic molecule, and it may be unstable because of the presence of its asymmetric chlorine.
This instability could result in intramolecular rearrangement of the chloroethylene oxide to chloroacetaldehyde. The authors considered that both chloroethylene oxide and chloroacetaldehyde would bind either directly or enzymatically to glutathione, thereby forming S-formylmethyl glutathione. Via an NAD-dependent aldehyde dehydrogenase, chloroacetaldehyde could also be oxidized to chloroacetate. This particular compound, if it was not excreted, could bind with glutathione to form S-carboxymethyl glutathione, which in turn could be hydrolyzed to S-carboxymethyl cysteine. S-Carboxymethyl cysteine could be either deaminated and decarboxylated to form thiodiglycolate, N-acetylated, or excreted.
Van Duuren, in 1975, hypothesized that when rats were given large amounts of vinyl chloride, an epoxidation reaction would occur in which vinyl chloride was converted to chloroethylene oxide by the microsomal mixed-function oxidase system. Chloroethylene oxide would subsequently be reacted or would spontaneously rearrange to chloroacetaldehyde.
Hefner et al conjectured that, with exposures below 100 ppm of vinyl chloride, its metabolism would occur by hydration to 2-chloroethanol followed by oxidation to 2-chloroacetaldehyde by the very rapid alcohol dehydrogenase pathway. Since chloroacetaldehyde reacts rapidly with the sulfhydryl of glutathione, only a trace amount of monochloroacetic acid would be formed.

In 1977, Bolt et al stated that vinyl chloride in the atmosphere of a closed system equilibrated with that in the tissues of rats within 15 minutes at various concentrations below 250 ppm (the concentration at which the authors stated that saturation of the vinyl chloride-metabolizing enzymes is achieved) when metabolism of vinyl chloride was blocked by 6-nitro-1,2,3-benzothiadiazole (an inhibitor of some cytochrome P-450-dependent oxidations). When rats not given the metabolic blocking agent were tested, the concentration of vinyl chloride in the atmosphere declined exponentially with a half-life of about 1.1 hours. In polychlorinated biphenyl (Aroclor 1254)- or phenobarbital-pretreated male rats, pyrazole and SKF-525A protected against acute hepatotoxicity, while disulfiram (an inhibitor of acetaldehyde dehydrogenase) and ethanol potentiated the toxic effects of vinyl chloride.
Bolt et al speculated that the protection afforded by SKF-525A was due to mixed-function oxidase inhibition, indicating a disruption in the metabolism of vinyl chloride to an active metabolite. Protection by pyrazole was probably due partially to inhibition of mixed-function oxidases and partially to inhibition of acetaldehyde dehydrogenase. The lack of protective effects by ethanol indicated to the authors that the conversion of chloroethylene oxide to chloroethanol was decreased, possibly because of increased spontaneous rearrangement of chloroethylene oxide to chloroacetaldehyde (bypassing the acetaldehyde dehydrogenase conversion step) or because of catalase conversion of chloroethanol to a peroxide and subsequent rearrangement to chloroacetaldehyde (again bypassing acetaldehyde dehydrogenase).
Hefner et al determined the effects of vinyl chloride on liver sulfhydryl levels. Male rats were exposed for 7 hours/day to vinyl chloride at concentrations of 15,000 ppm (38.4 g/cu m) for 5 days, 5,000 or 500 ppm (12.8 or 1.3 g/cu m) for 5 days/week for either 1, 3, or 7 weeks, or 50 ppm (128 mg/cu m) for either 1 hour, 7 hours, or 5 days. No overt signs of toxicity were observed in rats exposed at any concentration. Significant reductions in nonprotein sulfhydryl levels were found in rats exposed to vinyl chloride at 50 ppm for 7 hours, at 500 and 5,000 ppm for 1 and 3 weeks, and at 15,000 ppm for 1 week. Although the reduction in nonprotein sulfhydryl levels could not be definitively correlated with exposure concentration, the authors concluded that there was a tendency for such reduction to become less obvious with repeated exposures. Four rats treated with ethanol before exposure to vinyl chloride at 1,070 ppm (2.7 g/cu m) for 105 minutes had significant decreases in hepatic nonprotein sulfhydryl levels (77.0 ±12.8%) as compared with the controls (95.0 ±3.4%). The authors indicated that ethanol alone did not affect the liver nonprotein sulfhydryl levels.

Reynolds et al, in 1975, showed that male rats given drinking water containing 0.1% pentobarbital 7 days before either one or five consecutive 6-hour exposures to airborne vinyl chloride at a concentration of 5% (128 g/cu m) exhibited a diffuse vacuolization of the cytoplasm of cells of the centrilobular liver parenchyma and focal areas of necrosis of midzonal parenchyma after the exposure. In the livers of pretreated rats exposed to vinyl chloride for 5 consecutive days, the authors found broad tracts of stroma depleted of parenchymal cells that corresponded in distribution and extent to the areas of necrosis found 24 hours after a single exposure. Since pentobarbital, an inducer of mixed-function oxidase activity, appeared to increase the liver toxicity of vinyl chloride, the authors concluded that the endoplasmic reticulum was the primary site for generation of toxic vinyl chloride metabolites. Moreover, Reynolds and colleagues suggested that these metabolites, possibly epoxides, were presumably responsible for the observed cellular injury as well as the potential for tumorigenesis.
Several reports have detailed the in vivo and in vitro requirements for covalent binding of vinyl chloride (or its metabolites) with cellular macromolecular constituents, including DNA, RNA, and protein. In addition, these reports have identified some effects of various chemical inducers or inhibitors of cellular metabolic processes on the metabolism of vinyl chloride.
In 1975, Bolt et al reported that rat liver microsomes metabolized vinyl chloride to more polar metabolites during a 90-minute incubation and that these metabolites became covalently bound to the microsomal proteins. In addition, vinyl chloride metabolites became covalently bound to other sulfhydryl-containing proteins or to RNA when added to the incubation mixture. NADPH was reported to be essential to the binding process, hence essential to this metabolic route for vinyl chloride. Similar results were reported with microsomes from human liver, but the authors did not give experimental data. Kappus et al, using the same incubation procedure as that used by Bolt et al, confirmed the essentiality of NADPH in the covalent binding process.
In an additional paper, Kappus et al reported that continued uptake of 14C-vinyl chloride by liver microsomes depended on NADPH. Without NADPH, uptake of 14C-(1,2-)vinyl chloride increased rapidly during the first 2 minutes and reached saturation after 5 minutes. With NADPH, the uptake of vinyl chloride continued beyond the incubation time of 60 minutes, and a tenfold increase in the uptake of 14C-vinyl chloride by the microsomal preparation was noted. The authors also showed that vinyl chloride could be taken up by both protein and lipid components of microsomal membranes; the single difference noted was that the time course differed from that for uptake by microsomes. Their data also suggested a greater ability on the part of liposomes to bind vinyl chloride reversibly. From these studies, the authors inferred that the NADPH-independent part of microsomal vinyl chloride uptake was at least partially due to the reversible binding of vinyl chloride to the lipids and proteins of the microsomal membranes.
Kappus et al also found that, of the total 14C-(1,2-)vinyl chloride taken up by the microsomes, about 1% was bound irreversibly to the microsomal protein.
Moreover, irreversible binding of the vinyl chloride metabolites to the microsomal proteins appeared to depend on the presence of NADPH, the incubation time, and the concentration of the metabolites.
In addition, the authors demonstrated that when atmospheric air was replaced by nitrogen in the presence of NADPH, vinyl chloride uptake was reduced. The amount of vinyl chloride metabolites irreversibly bound to protein also was lowered. Kappus et al reported that vinyl chloride metabolites were also bound to added albumin, but not to concanavalin A, which contains no sulfhydryl groups. They found that the addition of glutathione or glutathione-containing cytosol to the incubation medium caused a 30% depression in covalent binding to cellular proteins, thus indirectly supporting the concept that free sulfhydryl groups must be present for binding to proteins to occur or for detoxification to occur. In further support of this concept, they reported that inhibition of the microsomal cytochrome P-450-dependent mixed-function oxidases by 4-(1-naphthyl)imidazole inhibited covalent binding by about 85%.
Microsomal uptake of vinyl chloride was completely blocked by carbon monoxide, an inhibitor of cytochrome P-450 oxidation reactions. Irreversible binding of vinyl chloride metabolites to proteins was also blocked by carbon monoxide, whether NADPH was present or not. Boiling of the microsomes prior to incubation reduced vinyl chloride uptake in the presence of NADPH, and no irreversibly protein-bound vinyl chloride metabolites were detected. Addition of reduced glutathione to the microsomal incubation mixture with NADPH resulted in very little change in vinyl chloride uptake by the microsomes, but did result in a 25% inhibition of irreversible protein binding.
Microsomal uptake of vinyl chloride was not affected by trichloropropene oxide, an inhibitor of epoxide hydrase; however, irreversibly protein-bound vinyl chloride metabolites were increased twofold. No induction effect on vinyl chloride uptake could be demonstrated in microsomes from phenobarbital-pretreated rats. There also were no changes in irreversible protein binding. Cytochrome P-450, however, was increased in the liver microsomes from rats pretreated with phenobarbital. Vinyl chloride uptake was similar in liver microsomal preparations containing added glutathione and cytosol obtained from control and pretreated rats. From the data presented, the authors suggested that the initial step in vinyl chloride metabolism involved an oxygenation reaction catalyzed by an enzyme system containing cytochrome P-450.
In addition, the authors concluded that chloroethylene oxide was probably the initial reactive metabolite. Kappus et al presented additional evidence for involvement of the epoxide in the covalent binding reaction by using the xanthine oxidase model system, which generates hydrogen peroxide and an oxygen radical.
They concluded that demonstration of the binding of vinyl chloride metabolites to albumin in the presence of a complete xanthine oxidase system strongly suggested that oxygen radicals, known to be involved in epoxidation by the microsomal enzyme system, convert vinyl chloride to a metabolite that then binds covalently to a protein, such as albumin.

Watanabe et al, in 1977, attempted to characterize the binding of vinyl chloride to hepatic macromolecules and nucleic acids by exposing male rats to 14C-(1,2-)vinyl chloride at nominal concentrations of 1, 10, 25, 50, 100, 250, 500, 1,000, or 5,000 ppm (range 2.56-12,800 mg/cu m).
The results suggested that increases in exposure concentration did not proportionately increase the total amounts of radioactivity bound to hepatic macromolecules. The percentage of total 14C activity bound to hepatic macromolecules ranged between 20 and 22%, except in rats pretreated with phenobarbital, where it reached 39%.
Both the metabolized and bound vinyl chloride increased with increasing nominal concentration, but the ratio of bound to metabolized vinyl chloride declined with increasing nominal concentration.
In rats exposed at nominal concentrations of 1, 10, 25, or 50 ppm (range 2.56-128 mg/cu m), Watanabe et al found that the nonprotein sulfhydryl content of the liver was depleted, but not significantly so. However, at 100 ppm (256 mg/cu m) and above, a significant depletion (P<0.05) was noted. Since it seemed a reasonable assumption that the major detoxification of the reactive metabolites of vinyl chloride occurs by reaction with nonprotein sulfhydryl groups, Watanabe et al considered it important that at nominal concentrations of 100 ppm and higher, a significant dose-related depletion of that pool could be demonstrated. They also suggested that the carcinogenicity of vinyl chloride was related to the decreased ability of exposed organisms to detoxify the reactive metabolites of vinyl chloride.

Watanabe et al reported that five rats pretreated with phenobarbital (80 mg/kg/day) by ip injection 3 days before exposure to 14C-vinyl chloride at a nominal concentration of 100 ppm showed a markedly increased binding of 14C to hepatic macromolecules when compared with nonpretreated animals, even though there was no observable increase in vinyl chloride metabolism. Watanabe et al concluded that their findings did not associate the carcinogenicity of vinyl chloride with a disproportionate increase in binding of its electrophilic metabolites to hepatic macromolecules as the exposure concentration was increased. Because there was no demonstrable evidence to support preferential binding of the electrophilic metabolites to nucleic acids of the hepatocytes, they raised the possibility that the carcinogenic potential of vinyl chloride could not be associated with this commonly accepted mechanism for carcinogenesis. The authors suggested, however, that before alkylation of nucleic acids was excluded as the carcinogenic mechanism, more studies to determine the absence of such alkylation activity in the target tissue rather than the hepatocytes are required. They suggested that the hepatocyte itself, since it is susceptible to the toxicity of vinyl chloride, may function as a detoxifier. This would imply that tissues having a lesser ability to detoxify the reactive metabolites of vinyl chloride may themselves become the victims of toxicity.
Gehring et al stated that since "Toxicity, including carcinogenesis, is a dynamic process involving absorption of a chemical into the body, distribution to various tissues, reversible or irreversible reactions with cellular components, and ultimately clearance from the tissues and the body via metabolism and/or excretion," it follows that "the predictability of animal toxicological data for assessing the hazard of a chemical to man is enhanced if the fate of the chemical per se and/or its degradation products in animals is equated to the fate in man."
They stated that under ordinary conditions and over a selected range of doses, chemical kinetics fit a linear differential equation, the implication being that a known increase in dose will result in a linear increase in tissue levels. They added, however, that many metabolic and excretory processes were easily overwhelmed and saturated, thus leading to a situation in which nonlinear Michaelis-Menten pharmacokinetics would prevail. The authors suggested that the results of Hefner et al, in which the rate of metabolism of vinyl chloride at low concentrations was rapid and the primary metabolic pathways were overwhelmed at high concentrations, supported this theoretical model.
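The contrast the authors draw can be made concrete with the Michaelis-Menten rate law v = Vmax·C/(Km + C): at concentrations well below Km the rate is nearly proportional to C (apparent first-order behavior), while well above Km it plateaus near Vmax. The sketch below uses arbitrary illustrative constants, not parameters fitted to vinyl chloride data.

```python
# Sketch of linear-vs-saturable elimination under Michaelis-Menten kinetics.
# VMAX and KM are arbitrary illustrative constants (not fitted values).
VMAX, KM = 100.0, 50.0

def elimination_rate(c):
    """Michaelis-Menten rate: ~proportional to c when c << KM, ~VMAX when c >> KM."""
    return VMAX * c / (KM + c)

for c in (1, 10, 100, 1_000, 10_000):
    print(f"C = {c:>6}: rate = {elimination_rate(c):6.1f}")
# At low C the apparent rate constant (rate/C) is nearly constant; at high C
# the rate saturates near VMAX, so the apparent rate constant falls, which is
# the pattern Hefner et al observed above roughly 220 ppm.
```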
The data presented above suggest that alkylating metabolites of vinyl chloride, such as chloroacetaldehyde and chloroethylene oxide, are formed in vivo.
Both chloroacetaldehyde and chloroethylene oxide can conjugate with glutathione and cysteine and subsequently form the vinyl chloride metabolites that have been identified in urine, such as N-acetyl-S-(2-chloroethyl)cysteine, which may itself be a potent alkylating agent by virtue of its half-mustard-forming ability.
In terms of assessing the hazard of exposure to vinyl chloride, experimental data support the conclusion that the carcinogenicity of vinyl chloride is a function of the metabolic formation of alkylating metabolites. The urinary metabolites identified so far indicate that the primary deactivation mechanism is conjugation with glutathione, a nonprotein, free-sulfhydryl-containing compound. Experimental data have indicated that as the nonprotein, free-sulfhydryl groups are depleted as a sequel of the absorption and metabolism of vinyl chloride, reaction of the alkylating metabolites with tissue macromolecules, such as DNA or RNA, may be more likely to occur. As a result, toxicity of a different order of magnitude may be elicited at higher exposure concentrations.
# (b) Vinylidene Chloride
McKenna et al showed that in rats exposed to airborne 14C-vinylidene chloride

The former compound represented about 45% of the total urinary radioactivity, whereas the latter accounted for 25%.
The authors concluded that the identification of these two compounds as urinary metabolites supported the hypothesis that glutathione conjugation was a major step in the biotransformation of vinylidene chloride.
Jaeger et al, in 1977, reported that fed and fasted male rats exposed to 14C-vinylidene chloride at 2,000 ppm (7.9 g/cu m) for 2 hours did not differ significantly in the rate of vinylidene chloride uptake or of urinary excretion of 14C during the first 24 hours. Of the calculated dose, 36.7 and 36.5% were recovered within 24 hours in the urine of fed and fasted animals, respectively.
Thirty minutes after a 2-hour exposure, the kidneys of fasted rats contained both the greater amount of total radioactivity and the larger amount of metabolites that were soluble in trichloroacetic acid (hence, not bound).
Fasted rats also had significantly greater amounts of total radioactivity in the spleen, heart, and serum than did fed rats. There was no significant difference between the 14C contents of the brains of fed and fasted rats. The livers of fasted rats contained substantial amounts of radioactivity that was trichloroacetic acid-insoluble. This component represented either 14C that was tightly bound to microsomal or mitochondrial macromolecules or that had entered the metabolic pool.
The rates of disappearance of the trichloroacetic acid-insoluble 14C from the microsomal and mitochondrial fractions of the liver in fed and fasted rats were similar, although the amounts differed significantly; both had estimated half-lives of less than 3 hours.
Significantly more radioactivity was found in the hepatic cytoplasmic fractions of fasted rats than in those of fed rats. The authors suggested on the basis of their data that metabolism of vinylidene chloride was quite rapid, with trichloroacetic acid-soluble components being excreted by the kidney and trichloroacetic acid-insoluble components entering the metabolic pool.
They concluded that since a rapid turnover of bound 14C material occurred (half-life less than 3 hours), covalent binding to protein or tissue constituents must have been minimal. Jaeger et al concluded that fasting had no effect on the rate or on the amount of in vivo metabolism, but that the in vivo metabolic pathway appeared to be significantly different in fasted rats than in fed rats.
They also reported that pretreatment with trichloropropane epoxide (an epoxide hydrase inhibitor) significantly increased the toxicity of vinylidene chloride in rats, and on this basis, suggested the possible formation of an epoxide intermediate as a result of the hepatic metabolism of vinylidene chloride.

Several reports indicated that hepatotoxicity occurred in fasted rats at 150-200 ppm but only at or above 2,000 ppm in fed rats; this difference was also evident in an isolated perfused rat liver system.
Diethylmaleate, a material that depletes glutathione, potentiated the hepatotoxic effects of vinylidene chloride on fed rats and on perfused livers from fed rats. Surgical or chemical thyroidectomy (resulting in increased liver glutathione) reduced the severity of hepatotoxic injury in fasted rats exposed to vinylidene chloride at a concentration of 2,000 ppm for

response of fed rats exposed to vinylidene chloride nor did it protect fasted rats from the hepatotoxic effects of such exposure.
However, Reichert and Bashti reported that SKF-525A in perfused rat liver preparations diminished the rate of metabolism of vinylidene chloride.
These data support the hypothesis that a major pathway for the detoxification of vinylidene chloride involves conjugation with glutathione and that blockage of its conjugation with glutathione greatly enhances its hepatotoxicity.
Jaeger noted that a significant elevation (P<0.05) of hepatic citric acid occurred after exposure of rats to vinylidene chloride at a concentration of 250 ppm.
He concluded that vinylidene chloride probably affected the mitochondria, leading to inhibition of the Krebs cycle and subsequent mitochondrial damage.
Jaeger et al, on the basis of prior work, hypothesized that chloroacetyl chloride, a metabolite of vinylidene chloride, was converted to monochloroacetic acid, which in turn was converted into chlorocitric acid, an inhibitor of the enzyme aconitase.
Inhibition of this enzyme would result in an accumulation of citric acid. This hypothesis was predicated by analogy on the hypothesis of lethal synthesis, which suggests that fluoroacetic acid is converted to fluorocitric acid, which inhibits aconitase.

Short et al reported that the lethality of vinylidene chloride in male and female mice exposed at various concentrations for 22-23 hours was reduced substantially by pretreatment with disulfiram. When exposures were increased to 22-23 hours/day for 2 days, pretreatment with disulfiram, diethyldithiocarbamate, or thiram reduced the lethality of vinylidene chloride in male mice.
Pretreatment with methionine or cysteine also reduced the lethality of vinylidene chloride, but instead of protecting in terms of increasing the acute LC50 (as did the other three compounds), these compounds protected by delaying the onset of mortality after exposure. Short et al also reported that disulfiram pretreatment protected male mice from the hepatotoxic effects of exposure to vinylidene chloride at 60 ppm (238.2 mg/cu m) for 22-23 hours.
However, after two consecutive 22- to 23-hour exposures at 60 ppm, there was no protection from hepatotoxicity. Short et al showed that covalently bound radioactivity could be detected in the macromolecules of the liver and kidneys of mice 4 and 24 hours after an ip injection of 3 mg of 14C-vinylidene chloride/kg. The kidneys contained more bound 14C/mg of protein than did the liver at 4 and 24 hours after administration of vinylidene chloride.
Pretreatment with disulfiram greatly reduced the bound 14C in both tissues at both time periods.
Bonse and Henschler proposed in a 1976 review of polychlorinated aliphatic compound metabolism that an oxirane, a postulated metabolic intermediate of vinylidene chloride, spontaneously rearranged to chloroacetyl chloride and was then hydrolyzed to monochloroacetic acid. Since monochloroacetic acid is also a product of vinyl chloride metabolism, they speculated that its formation and fate in vinylidene chloride metabolism followed a similar scheme.

The authors also noticed a cyclic excretion pattern for vinylidene fluoride in which there was major excretion directly after exposure and again 5 days after exposure. The authors proposed that either the fluoride ion, the parent compound, or the fluorine-containing metabolites were stored in some biologic compartment and then released with a turnover rate of about 5 days.
Conolly and Jaeger reported that fasting had no effect on hepatotoxicity in polychlorinated biphenyl-treated rats exposed to vinyl fluoride at 10,000 ppm (18.8 g/cu m) for 4 hours. Three consecutive daily doses of trichloropropane epoxide (an inhibitor of epoxide hydrase) before exposure to vinyl fluoride produced increased mortality in rats; however, fasting in addition to administration of trichloropropane epoxide did not exert a synergistic effect. This indicated to the authors that glutathione may not be important in the detoxification of vinyl fluoride metabolites in polychlorinated biphenyl-pretreated rats, but that epoxide hydrase may be involved in detoxifying fluoroethylene oxide.
# Structure-Activity Considerations
Such an assessment of the carcinogenic potential of the vinyl halides requires an understanding of their mechanisms of action.
A review of the available metabolic data on vinyl chloride indicates that several reactive intermediates are produced (Figure XVII-3). These intermediates are electrophiles, which have been shown to form covalent bonds with cellular macromolecules.
A similar metabolic route may be operative for the other vinyl halides, producing similar intermediates that also may bind covalently to cellular constituents.
The carcinogenic potential of each of the vinyl halides would therefore be the resultant of the potentials of these possible metabolites and of any unmetabolized compound.
One hypothesis for the mechanism by which vinyl chloride expresses its carcinogenic potential involves its metabolism, via the microsomal cytochrome P-450 mixed-function oxidase system, to the oxirane (epoxide), chloroethylene oxide.
One basis for predicting the carcinogenicity of inadequately tested vinyl compounds is the assumption that metabolic epoxidation is involved in their metabolism as well, and that the epoxide is the major reactive intermediate. The relative carcinogenic potential would then be influenced by the relative ease with which the epoxides are formed from the corresponding halo-olefins and the relative reactivity of a particular epoxide toward cellular nucleophiles.
The propensities of vinyl halides to undergo chemical transformation to epoxides are dependent on the electronegativities of the substituents on the olefinic nucleus.
Increasing electronegativity results in rarefaction of the electron density of the double bond, and thereby in a decrease in the susceptibility of the olefin to epoxidation.
Thus, for the vinyl halides under consideration, the expected order of non-enzymatic epoxidation would appear to be vinyl bromide > vinyl chloride > vinylidene chloride > vinyl fluoride > vinylidene fluoride.
Bonse and Henschler supported this scheme of epoxidation by showing that the order of the rate of ozonization of a series of olefins was ethylene > vinyl chloride > trichloroethylene > tetrachloroethylene.
Also, a study of the reaction rates of ozone, a strong oxidizing agent, with chlorinated and conjugated olefins showed that the rate of ozone attack decreased strongly (1,000-fold from vinyl chloride to tetrachloroethylene) as the number of chlorine atoms in the olefin molecule increased .
Hanzlik et al found that a series of para-substituted styrenes (vinyl benzenes) underwent metabolism via the cytochrome P-450 system of rat-liver microsomes at rates that were essentially substituent independent.
From the kinetic data of these authors, RL Schowen (written communication, August 1977) calculated a Hammett rho value (a measure of the sensitivity of the reaction series to ring substitution) of around -0. Schowen concluded that these vinyl compounds would be expected to generate epoxide metabolites at similar rates.
Schowen also addressed the problem of estimating the reactivity of the epoxide intermediates that are assumed to be formed.
He made two analyses, each dependent on an experimentally derived measure of reactivity for halide compounds.
One analysis used Swain-Scott substrate factors, S, which reflect electrophilicity .
Based on S values determined for other halide alkylating agents, eg, S = 1.00 for ClCH2CO2-, 1.10 for BrCH2CO2-, and 1.33 for ICH2CO2-, and on values for nonhalide agents, Schowen suggested that alkylating potentials of the vinyls relative to vinyl chloride would vary over a range of no more than 16.
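The arithmetic behind this kind of estimate can be sketched. The Swain-Scott relation, log10(k/k0) = S·n, is standard; the nucleophilicity value n below is a hypothetical stand-in, chosen only to show how a narrow spread in S translates into a bounded spread in relative alkylating potential:

```python
# Swain-Scott relation: log10(k/k0) = S * n, where S is the substrate
# (electrophilicity) factor and n the nucleophilicity of the attacking agent.
# S values quoted in the text for model halide alkylating agents:
s_values = {"ClCH2CO2-": 1.00, "BrCH2CO2-": 1.10, "ICH2CO2-": 1.33}

n = 3.5  # hypothetical nucleophilicity of a cellular nucleophile (assumed)

s_ref = min(s_values.values())
for name, s in s_values.items():
    ratio = 10 ** ((s - s_ref) * n)
    print(f"{name}: rate relative to ClCH2CO2- = {ratio:.1f}")
# The spread here is about 10**(0.33 * 3.5), roughly 14-fold, the same
# order of magnitude as Schowen's suggested range of no more than 16.
```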
In a second analysis using a leaving group parameter, L, from the Swain-Lohmann equation ... 0.53, respectively (see Table XVII-1).
Their solubilities in blood might well follow this same order, although lipid solubility and various types of macromolecular binding cannot be ignored. That is to say, the relative amount absorbed through the lungs may be about 20 times greater for vinyl bromide than for vinylidene fluoride.
The experimental evidence and theoretical considerations are compatible with the hypothesis that the five vinyl halides are capable of undergoing biochemical epoxidation but that the two fluoro olefins may be somewhat less susceptible than the other vinyl halides to this reaction. Extrapolation of the estimates of Schowen suggests that vinylidene chloride and vinyl bromide may have a similar or greater carcinogenic potential than vinyl chloride, but that vinyl fluoride and vinylidene fluoride, although possibly carcinogenic, at least on the basis of limited mutagenicity data, are likely to be less reactive (possibly up to 1,500 times less) than the other three.
The suggestion that the other vinyl halides resemble vinyl chloride in their effects and mechanism of action is debatable on the basis that normal mammalian biochemical constituents such as steroids and unsaturated fatty acids contain the vinyl moiety and are not suspected of carcinogenicity.
It should be noted that variation either above or below a physiologically critical concentration range for many of these compounds may lead to toxic effects.
NIOSH is unable to suggest any systematic relationship between the toxicities of all compounds containing the vinyl group, although vinyl halides are treated here as a group because of the similarities in their chemical and physical properties.
It may well be that it is the presence of the halide moiety which imparts a particular toxicity range to these agents by allowing them to be activated, epoxidated, or hydrated at sufficiently high rates, and, at the same time, imparting to the metabolites a characteristic stability and intrinsic activity that allows them to reach and interact with vital cellular constituents.
In the absence of additional toxicologic data on the vinyl halides under consideration, the relevance of the theoretical considerations discussed above to estimation of the potential toxicity of these compounds might be tested by developing mathematical relationships between biologic activities and physicochemical parameters. These parameters could also consider any variations in steric interactions.
Sufficient consistent biologic data are not yet available, however, to permit the development of such equations.
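For illustration only, the following sketch shows the general form such a structure-activity equation might take, a Hansch-type linear fit of potency against physicochemical descriptors. Every number in it is an invented placeholder, since, as noted above, consistent biologic data for a real fit were not available:

```python
import numpy as np

# Hypothetical Hansch-type fit: log(1/C) = a*logP + b*sigma + c.
# All values below are placeholders, not measurements.
logP = np.array([1.4, 1.6, 0.9, 1.2, 0.7])        # assumed lipophilicities
sigma = np.array([0.23, 0.26, 0.06, 0.45, 0.10])  # assumed electronic terms
log_inv_C = np.array([3.1, 3.4, 2.2, 3.0, 1.9])   # assumed potencies

# Ordinary least-squares fit of the two descriptors plus an intercept:
X = np.column_stack([logP, sigma, np.ones_like(logP)])
(a, b, c), *_ = np.linalg.lstsq(X, log_inv_C, rcond=None)
print(f"log(1/C) = {a:.2f}*logP + {b:.2f}*sigma + {c:.2f}")
```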
# Correlation of Exposure and Effect
The vinyl halides have been shown to induce effects on the nervous, circulatory, respiratory, integumentary, skeletal, and digestive systems in humans and laboratory animals.
These effects are summarized in Tables III-2 to III-10.
The modes of action of the vinyl halides are not clearly understood, and there is little published information on any of them other than vinyl chloride.
It has been proposed that expression of the tumorigenic and mutagenic potential of vinyls depends on metabolic intermediates; this possibility is discussed in the section on Carcinogenicity, Mutagenicity, Teratogenicity, and Effects on Reproduction.
Lethal concentrations of the vinyl halides have been determined.
For vinyl chloride, the LC50 for mice exposed for 10 minutes and observed for an unspecified period was 239,580 ppm .
In another study, the LC50's for 2-hour exposures were calculated as 117,500 ppm for mice, 156,000 ppm for rats, and 238,000 ppm for guinea pigs and rabbits . The LC100's were also determined in this experiment; they were 150,000 ppm for mice, 210,000 ppm for rats, and 280,000 ppm for guinea pigs and rabbits.
The duration of the observation period was not reported.
For vinylidene chloride, the 4-hour LC50 in rats has been reported as 6,350 ppm with a 14-day observation period .
Other authors reported that from two to four of six rats died within 14 days after one 4-hour exposure at a concentration of 32,000 ppm.
A single 22- to 23-hour exposure of rats produced LC50 values of 98 ppm for males and 105 ppm for females . Two exposures for 22-23 hours gave LC50 values of 35 ppm for male rats, and the LT50 for exposure at 20 ppm was reported as 4 days. This experiment showed that mortality in rats exposed to vinylidene chloride is in part a function of the total accumulated dose. The 24-hour LC50 value in rats after one 4-hour exposure was determined as 15,000 ppm for fed animals and 600 ppm for animals fasted for 18 hours . The oral LD50 has also been calculated for adrenalectomized rats as 84 mg/kg for 24 hours and 81 mg/kg for 96 hours; in sham-operated controls, the LD50 was 1,550 mg/kg for 24 hours and 1,510 mg/kg for 96 hours . This indicated that adrenal hormones provide some protection against the acute effects of vinylidene chloride in rats.
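The paired ppm and mg/cu m (or g/cu m) figures quoted throughout this chapter are related through the molecular weight of each compound and the molar gas volume. The sketch below, which assumes the conventional 24.45 liters/mole at 25 C and 1 atm and standard molecular weights, reproduces the pairings used in the text:

```python
# Conversion between ppm (v/v) and mg/cu m at 25 C and 1 atm, where one
# mole of gas occupies about 24.45 liters:
#     mg/cu m = ppm * MW / 24.45
MOLAR_VOLUME_L = 24.45

mw = {  # g/mole, standard values
    "vinyl chloride":      62.50,
    "vinylidene chloride": 96.94,
    "vinyl bromide":      106.95,
    "vinyl fluoride":      46.04,
    "vinylidene fluoride": 64.04,
}

def ppm_to_mg_per_cu_m(ppm: float, compound: str) -> float:
    return ppm * mw[compound] / MOLAR_VOLUME_L

# Reproduces the pairing used in the text, eg 10,000 ppm of vinyl
# fluoride -> about 18,800 mg/cu m (18.8 g/cu m):
print(ppm_to_mg_per_cu_m(10_000, "vinyl fluoride"))
```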
Exposure to vinyl bromide at concentrations of 171,000 ppm has been tolerated by rats for 10 minutes .
In another experiment [127], concentrations of 100,000 ppm were shown to be 100% lethal in rats after 15 minutes, and some rats died (percentage unspecified) at concentrations of 50,000 ppm for 7 hours. An oral LD50 of 500 mg/kg (given in corn oil) for vinyl bromide was also determined for rats .
Vinyl fluoride has been reported to be not lethal to rats exposed at a concentration of 80%, with 20% oxygen, for 12.5 hours .
Vinyl fluoride at 100,000 ppm, 7 hours/day, ... One report stated that four of seven rats pretreated with Aroclor 1254 and exposed to vinyl fluoride at a concentration of 102,000 ppm died after a 4-hour exposure.
Vinylidene fluoride has been reported not to be lethal to rats exposed at a concentration of 80%, with 20% oxygen, for 19 hours .
Another author , however, stated that a static exposure to vinylidene fluoride at 128,000 ppm for 4 hours was sufficient to kill from two to four of six rats within 14 days. Not enough information was presented to evaluate whether or not the static exposure conditions, eg, decreased partial pressure of oxygen, were directly responsible for this difference in lethality.
These studies indicate that vinyl chloride, vinylidene chloride, and vinyl bromide present a low degree of acute toxic hazard in animals, and that, as the duration of exposure increases, the concentration necessary to produce lethal effects decreases. One study showed that exposure of male mice to vinylidene chloride at concentrations as low as 20 ppm was lethal after daily exposures of 22-23 hours for 4 days. The information available suggests that the acute toxicity of these compounds is dependent in part on the rates of metabolism and excretion and the subsequent total accumulated dose.
The variability of the estimates of LC50 values, resulting in part from variability in the experimental protocols used, does not allow a ranking of the compounds according to acute toxicity, except that vinyl fluoride and vinylidene fluoride are less toxic than the other vinyls and have not been found to have acute lethal actions on dynamic inhalation exposure in normal rats.
Cardiovascular effects have been reported for vinyl chloride, vinylidene chloride, and vinylidene fluoride.
Twenty percent of 51 workers examined in 1965, who were exposed to vinyl chloride at a TWA concentration of 49 ppm, and 42% of 60 workers examined in 1969, exposed at a TWA concentration of 43 ppm, showed elevated pulmonary arterial pressure . Durations of exposure for these workers were not presented.
In another report , workers who were currently exposed to vinyl chloride or who had been exposed in the past had a 39. ... Cardiac sensitization to epinephrine has also been demonstrated in dogs exposed to vinyl chloride at 50,000 ppm for 5 minutes .
Changes, such as tachycardia, sinus arrhythmia, and ventricular multifocal extrasystoles, have been observed in the ECG records of dogs exposed to vinyl chloride at 100,000 ppm without epinephrine stimulation . A significant decrease (P<0.05) of 28.5% in myocardial force of contraction has been observed in monkeys exposed to vinyl chloride at the same concentration for 5 minutes . Monkeys exposed at 50,000 ppm showed a decrease of 9.1% in myocardial force of contraction, and those exposed at 25,000 ppm showed a 2.3% decrease in force of contraction.
Decreases in aortic blood pressure followed a similar pattern in the monkeys.
This experiment indicates that the cardiovascular effects of vinyl chloride are dose dependent.
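The dose dependence in the monkey data can be summarized numerically. The log-log fit below is only a rough description of the three points quoted above, not an analysis performed by the original authors:

```python
import numpy as np

# Monkey data quoted in the text: percent decrease in myocardial force of
# contraction after 5-minute vinyl chloride exposures.
conc_ppm = np.array([25_000, 50_000, 100_000])
pct_decrease = np.array([2.3, 9.1, 28.5])

# A log-log fit (decrease ~ a * conc^b) summarizes the dose dependence:
b, log_a = np.polyfit(np.log10(conc_ppm), np.log10(pct_decrease), 1)
print(f"decrease ~ conc^{b:.2f}")  # roughly conc^1.8 for these three points
```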
Vinylidene chloride, inhaled at concentrations of 25,000 ppm for 10 minutes, caused sinus bradycardia and such cardiac arrhythmias as AV-block and ventricular fibrillation in rats . Vinylidene chloride also enhanced the cardiac sensitivity to epinephrine, and this sensitivity increased with increasing duration of vinylidene chloride exposure.
No cardiac sensitization to epinephrine was noted in cats or dogs exposed to vinylidene fluoride at concentrations of 250,000-500,000 ppm for 5-15 minutes . This suggests that vinylidene fluoride does not have the same mode of action on the cardiovascular system as vinyl and vinylidene chlorides. Similar information is not available for vinyl bromide or vinyl fluoride.
CNS effects have been observed in animals after exposure to each of the vinyl halides.
In some instances, these effects might have been secondary to systemic effects caused by cardiovascular changes. The authors of the reports did not address this possibility. Vinyl chloride caused "certain anesthesia" in mice after a 10-minute exposure at 122,000 ppm, and 50% anesthesia after a 10-minute exposure at 100,000 ppm .
Exposures at LC50's caused excitement, convulsions, and contractions in mice, rats, and rabbits . Exposure of dogs to vinyl chloride at 500,000 ppm momentarily, then at 70,000 ppm for an unspecified period, caused rigidity of the legs and uncoordinated muscular movements .
An experiment with human subjects showed that 5-minute exposures to vinyl chloride at 12,000 ppm caused dizziness in two subjects; at 16,000 ppm, five experienced dizziness, nausea, lightheadedness, and dulling of vision and hearing; and at 20,000 ppm, all six had the same symptoms with more intensity, and one of them had a headache . Among 168 workers exposed to vinyl chloride at a TWA concentration of 899 ppm for up to a few months, 47% complained of dizziness, 45% of somnolence, 36.6% of headache, 13% of loss of memory, 11% of euphoria, and 9% of nervousness . Of 168 workers exposed to vinyl chloride at a TWA concentration of 38 ppm (prior exposure information not available), 10.2% reported dizziness, 16.6% somnolence, 6.9% headache, 8% loss of memory, 1.2% euphoria, and 0.6% nervousness. Frequent dizziness and weakness in the legs were also reported by other workers exposed to vinyl chloride , as was fatigue in 38.6% and headache in 12.9% of one group of workers and in 13.7% of another . These data indicate that a decrease in the TWA concentration of vinyl chloride led to a decrease in the frequency of CNS symptoms, and that exposure at TWA concentrations as low as 38 ppm produced adverse effects.
Other findings that might be considered manifestations of CNS effects have included a "large proportion of accidents" occurring in one worker population and an excess of suicides as the actual causes of death in another worker population (10 observed vs 5.3 expected) . Effects on brain cells have also been observed, ie, a localized atrophy of cerebellar Purkinje cells and necrosis of the frontoparietal cerebral cortex in one worker . Also, fibrotic processes surrounding and invading the small nerve bundles of the gray matter, neuronal phagokaryosis with satellitosis and deposition of neurologic elements around altered nerve cells of the white matter, and atrophy of the granular layer and degenerative changes in the Purkinje cell layer were observed in rats exposed at 30,000 ppm, 4 hours/day, 5 days/week, for 1 year .
Exposure to vinylidene chloride at about 4,000 ppm for a brief period caused "drunkenness" that progressed to unconsciousness in animals . Workers exposed to a vinylidene chloride copolymer while cleaning tanks developed pains in the lips, nose, and eyes, headache, somnolence, facial anesthesia, corneal anesthesia, hypoesthesia, difficulty in speaking and eating , and fatigue, weakness, nausea, and dizziness . Because of the composition of the copolymers to which these workers were exposed in aqueous suspension, it is not reasonable to assume that vinylidene chloride was the only causative agent of these CNS effects.
Vinyl bromide caused a "certain anesthesia" at a concentration of 86,000 ppm for a 10-minute exposure in rats .
In another experiment, a 1.5-hour exposure at 25,000 ppm anesthetized all rats, and a 7-hour exposure at 10,000 ppm caused drowsiness in rats .
Rats exposed for 30 minutes to vinyl fluoride at 300,000 ppm showed instability of the hindlegs; at 500,000 ppm there was loss of postural reflexes, and at 600,000 ppm there was loss of the righting reflex .
Rats exposed to vinylidene fluoride for 30 minutes at 400,000 ppm showed slight intoxication, and at 800,000 ppm showed an unsteady gait without loss of the postural reflexes .
The available data indicate that vinyl chloride and vinylidene chloride can produce similar CNS manifestations that may be secondary to the cardiovascular effects of these compounds. Arterial vessels of the paws also showed endothelial fibrosis. The same rats also showed skin papillomas and warty subauricular growths .
Human exposure information is not available concerning CNS ... No reports were located of integumentary effects with exposure to vinylidene chloride, vinyl bromide, vinyl fluoride, or vinylidene fluoride. In one study , exposures to vinyl chloride were estimated at 50-100 ppm in air and 600-1,000 ppm close to the workers' hands during reactor cleaning operations. No information was available in any of these reports on the duration of employment or exposure for the workers with acroosteolysis . It was noted that all workers with acroosteolysis had been employed in reactor cleaning, whereas only 21% of the entire vinyl chloride-exposed population had had experience as reactor cleaners . In rats exposed to vinyl chloride at 30,000 ppm for 4 hours/day, 5 days/week, for 1 year, observed effects included alteration in the characteristic bone deposition in the small bones, with a mucoid impregnation. ... The systemic effects that reportedly resulted from exposure to vinyl halides suggest that the vinyls or their metabolites interfere with cellular processes primarily in the liver and the cardiovascular system.
Skeletal changes have also been observed in humans and animals.
Exposures to large quantities may also involve the skeletal, integumentary, and central nervous systems; however, these systems may also be affected secondarily by changes in the cardiovascular system, and the relative magnitudes of the effects resulting from primary action of the vinyl halides or their metabolites and from the suggested secondary ones are uncertain.
# Carcinogenicity, Mutagenicity, Teratogenicity, and Effects on Reproduction
The vinyl halides produce reactive metabolic intermediates, possibly including asymmetric oxiranes.
Several proposed intermediates in the metabolic pathways of the vinyl halides are capable of covalent binding with cellular macromolecules, and may therefore induce carcinogenesis and mutagenesis.
Although few experiments have been reported in which the carcinogenic or mutagenic potentials of the vinyl halides other than vinyl chloride are evaluated, the vinyl halides considered in this document, as suggested earlier in this chapter, may be qualitatively if not quantitatively similar in their induction potentials.
One report has suggested that exposure of male workers to vinyl chloride can induce excess fetal deaths among their offspring .
However, as has been previously discussed, the methods of data collection used in this report are considered inappropriate.
The fetuses of mice, rats, and rabbits from dams exposed to vinyl chloride during pregnancy showed significant abnormalities, including increased crown-rump length in mice exposed at 50 ppm, and increased incidence of resorptions, decreased fetal body weight, reduction in litter size, increased numbers of unfused sternebrae, and delayed ossification of the bones of the skull in mice exposed at 500 ppm . Fetuses from rats exposed at 500 ppm showed a significant reduction in fetal body weights and a significant increase in the number of lumbar spurs and in crown-rump length. Fetuses from rats exposed at 2,500 ppm showed a significant increase in the incidence of dilated ureters and significant decreases in the incidence of unfused sternebrae and delayed skull ossifications.
The only other significant difference noted was an increase in the incidence of delayed ossification of the fifth sternebra in rabbits exposed at 500 ppm. The authors of this study regarded these effects as due to maternal toxicity and not to direct toxicity to embryos and fetuses.
However, the observation of characteristic cancers in progeny of dams exposed to vinyl chloride during their pregnancies suggests that vinyl chloride or its metabolites cross the placental barrier and may have induced the abnormalities reported.
Pregnant rats exposed to vinylidene chloride at 80 ppm had significantly increased numbers of resorptions/implants and fetuses with significant increases in skeletal abnormalities such as delayed ossification of parietal bones, wavy ribs, and lumbar spurs .
Delayed ossification of skull bones and cervical vertebrae, along with wavy ribs, were also observed in fetuses of rats exposed at 160 ppm. No adverse effects were observed at 20 ppm.
In the rabbits exposed at 160 ppm, there was a significant increase in the number of resorptions/implants, a significantly decreased incidence of delayed ossification of the fifth sternebra, and an increased incidence of a 13th pair of ribs.
No adverse effects were noted on the fetuses from dams exposed at 80 ppm.
In the fetuses of rats given access to drinking water containing vinylidene chloride at 200 ppm, the only significant difference from controls was an increase in the fetal crown-rump length.
No reports have been located of studies in which the reproductive or teratogenic effects of vinyl bromide, vinyl fluoride, or vinylidene fluoride were investigated. Absorption and subsequent metabolism of vinyl chloride has been reported to be concentration dependent, with a saturable enzyme system responsible for metabolism at low concentrations and a secondary oxidative system predominant at higher concentrations .
Covalent binding of the vinyl halides to cellular macromolecules such as ... It has been postulated that the oxirane is formed only at the higher concentrations through the secondary oxidative pathway, but that halogenated acetaldehyde is common to both pathways.
Thus, even without the formation of the oxirane at low concentrations, the possibility of macromolecular alkylation exists through the action of the aldehyde intermediate. Accurate exposure information, to allow calculation of total accumulated dose, is not available for these workers.
Kuzmack and McGaughy , using data derived from epidemiologic studies, calculated that the incidence rate of angiosarcoma in workers exposed at concentrations of 350 ppm, 7 hours/day, 5 days/week should be 0.0031/person/year of exposure.
An incidence rate of 0.0052/person/year of exposure was also calculated from the incidence rate in rats. Angiosarcoma of the liver was also observed in 1 of 59 rats exposed to vinyl chloride at 50 ppm, 4 hours/day, 5 days/week, for 52 weeks . The percentage of animals with angiosarcoma of the liver increased from 2% at 50 ppm, to 7% at 250 ppm, 12% at 500 ppm, 22% at 2,500 ppm, 22% at 6,000 ppm, 15% at 10,000 ppm, and 30% at 30,000 ppm. Average latency increased with decreasing exposure concentration, from 53 weeks at 30,000 ppm to 135 weeks at 50 ppm. In another experiment , rats exposed at 50 ppm for 4 hours/day, 5 days/week, for 12 months showed no tumors over an unspecified observation period.
At 500 ppm, however, there was a significant increase (P<0.03) in the number of tumors seen, with 3% of the animals having angiosarcomas of the liver.
Increasing exposure concentrations to 2,000, 5,000, 10,000, and 20,000 ppm increased the frequency of all tumors and of angiosarcoma of the liver ... In addition, these rats did not show increased incidences of other tumors compared with those in controls. A further experiment showed an increased incidence of adenocarcinoma of the kidney in Swiss mice exposed to vinylidene chloride at 25 ppm for 4 hours/day, 4-5 days/week, for 12 months, but no increase in other tumors. The authors stated that exposures at 50 ppm and higher were lethal to these animals.
Sprague-Dawley rats exposed on the same schedule at concentrations of from 10 to 150 ppm showed an increased incidence of mammary tumors; however, this increase was not dose dependent.
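An incidence rate expressed per person per year of exposure converts directly into expected case counts. The cohort in the sketch below is hypothetical, chosen only to show the arithmetic behind the Kuzmack and McGaughy figure:

```python
# Kuzmack and McGaughy's epidemiologic estimate: 0.0031 angiosarcoma cases
# per person per year of exposure at 350 ppm, 7 hours/day, 5 days/week.
rate_per_person_year = 0.0031

workers = 1_000      # assumed cohort size (hypothetical)
years_exposed = 10   # assumed exposure duration (hypothetical)

expected_cases = rate_per_person_year * workers * years_exposed
print(f"Expected cases: {expected_cases:.0f}")  # ~31 in this hypothetical cohort
```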
No cases of cancer in humans have been attributed to vinyl bromide exposure.
The preliminary results of one animal experiment ... The carcinogenicity data show that vinyl bromide at a concentration of 250 ppm induced angiosarcoma in 7% of the animals after 1 year of exposure , and that vinyl chloride at a concentration of 250 ppm also induced angiosarcoma in 7% of the animals, but after 2 years of exposure . This indicates that, for this characteristic tumor, vinyl bromide has a stronger induction potential, since it is reasonable to assume that, within another year, more of the vinyl bromide-exposed animals would develop the tumors. A comparison of the induction potential of vinylidene chloride with that of vinyl chloride or vinyl bromide is more difficult.
One experiment with vinylidene chloride showed 4% angiosarcoma of the liver in CD-1 mice exposed at a concentration of 55 ppm for 85,800 ppm-hours , another experiment showed no angiosarcoma in Swiss mice exposed to vinylidene chloride at 25 ppm for 23,000 ppm-hours , while an experiment with Swiss mice exposed to vinyl chloride at a concentration of 50 ppm for 30,000 ppm-hours showed a 2% incidence of angiosarcoma of the liver .
These data are not so easily compared as the data from the vinyl bromide and vinyl chloride experiments, but they do suggest that vinylidene chloride may be less active in inducing angiosarcoma of the liver than vinyl chloride.
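The ppm-hours figures used in these comparisons are simply concentration multiplied by total hours of exposure. The sketch below reconstructs the quoted values; the 4 hours/day, 5 days/week schedule assumed for the CD-1 mouse study is an inference from the similar studies described above, not a schedule stated for that experiment:

```python
# Cumulative exposure in ppm-hours = concentration (ppm) x total hours.
def ppm_hours(ppm, hours_per_day, days_per_week, weeks):
    return ppm * hours_per_day * days_per_week * weeks

# 55 ppm on an assumed 4 h/day, 5 day/week schedule for 78 weeks gives
# the quoted 85,800 ppm-hours:
print(ppm_hours(55, 4, 5, 78))   # 85800
# and 50 ppm for 600 total hours gives the quoted 30,000 ppm-hours:
print(50 * 600)                  # 30000
```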
Because the behaviors of these compounds appear to fit the expected structure-activity relationship, it is reasonable ... Purified microsomal fractions (100,000 x g) have been shown to be less active than the 9,000 x g fraction , while the supernatant cytosol produced no increase in mutagenic activity above that found for vinyl chloride alone.
Vinyl chloride has been shown to affect only loci at which mutation occurs by substitution . One dominant lethal study with male mice exposed to vinyl chloride at 30,000 ppm for 6 hours/day for 5 days showed no mutations, which suggests that mammalian gametes may not be affected by exposure.
Vinylidene chloride has been shown to be similarly mutagenic in bacterial systems .
Exposure of Salmonella for 2 hours, in media containing 0.33-33 millimoles of vinylidene chloride and a mammalian microsomal system, resulted in 6-10 times the spontaneous number of revertants .
Bartsch et al reported that vinylidene chloride was about three times as active a mutagen as vinyl chloride on an equimolar basis.
In a study using E. coli, Greim et al concluded that vinyl chloride was "several times" more mutagenically active than vinylidene chloride; their data showed that vinylidene chloride produced a onefold increase over the spontaneous number of mutations at a concentration of 2.5 millimolar, while vinyl chloride produced six times the spontaneous number of mutations at the same locus at a concentration of 10.6 millimolar. McCann et al suggested that, although chloroacetaldehyde was the most active of the metabolites of vinyl chloride that they tested, it was probably not the major mutagenic metabolite of vinyl chloride, since chloroacetaldehyde affected only the repair-deficient Salmonella strain TA100. These authors suggested that chloroethylene oxide was the most likely mutagenic metabolite, although they did not test its activity.
Vinyl bromide at concentrations of 20% in air was shown to be mutagenic in ... However, the compound has been tested by other investigators on both repair-deficient and non-repair-deficient strains, and it appears, like vinyl chloride, to be similarly active in each. These studies show that each of the vinyl halides is mutagenic in various test systems and that the putative metabolites are more strongly mutagenic than the parent compounds.
The studies support the conclusions drawn from the limited carcinogenic data available and, further, raise a suspicion that the fluorides also may be carcinogens.
The failure of vinyl chloride to be mutagenic in the mouse dominant-lethal test may indicate that the mutagenic factor may not be distributed to mammalian gametes in sufficient concentrations to produce significant changes; however, further research is necessary to determine fully the potential of these compounds for mammalian mutagenicity.
The available evidence concerning the vinyl halides indicates that each ...
[Table: effects on animals from exposure to vinylidene fluoride by inhalation, listing species, concentration (ppm), duration, effects, and reference.]
IV. ENVIRONMENTAL DATA

Most of the sampling and analytical procedures for airborne vinyls in occupational environments have been developed and tested for vinyl chloride. While these procedures may provide some guidance for choosing sampling and analytical conditions for the other vinyl halides, caution must be exercised in extrapolating or interpolating from vinyl chloride to vinylidene chloride, vinyl bromide, vinyl fluoride, and vinylidene fluoride.
Certain physical and chemical properties of the latter compounds are quite different from those of vinyl chloride at ambient temperatures and pressures (see Table XVII-1).
The collection media used for sampling the various vinyls should be selected to permit reproducible air sampling, adequate collection efficiency, storage stability, retention, and minimum breakthrough of the specific compounds.
Murdoch and Hammond used evacuated glass bottles to collect grab samples for the determination of vinyl chloride concentrations in polyvinyl chloride work areas.
After the samples were collected, the bottles were sealed with silicone rubber septa. Aliquots were later removed by syringe for analysis by gas chromatography.
Williams et al [236] also reported using evacuated stainless-steel containers equipped with critical orifices for collecting grab or integrated samples of air that contained mixtures of vinyl chloride, vinylidene chloride, and other compounds under laboratory conditions.
Identical results were obtained for samples collected in steel containers from a chamber and for samples taken directly from the chamber.
Stainless steel canisters and Tedlar bags have both been used for sampling for vinyl chloride. Losses of 0-10% vinyl chloride/day were reported for samples stored in Tedlar bags . The losses were attributed either to leakage from the bags or to reactions of vinyl chloride with other air contaminants, such as nitrogen dioxide and ozone.
# Sampling and Analytical Methods
Levine et al compared the storage stability of vinyl chloride collected in Teflon and aluminized Scotchpak gas sampling bags.
The comparison showed the loss from Teflon bags to be about 20%/day, but it was not determined whether the loss resulted from the permeability of the Teflon, from chemical reaction, or from mechanical problems.
No detectable loss occurred during a 1-week period from aluminized Scotchpak bags that contained samples of vinyl chloride at concentrations of 0.1-1.1 ppm (0.256-2.8 mg/cu m).
Ketterer found Teflon bags to be satisfactory for holding samples that would be analyzed for vinyl chloride soon after collection but did not report the time between sample collection and analysis. The relative standard deviations for seven samples of vinyl chloride at a concentration of 25 ppm (64 mg/cu m) and for eight samples at a concentration of 52.2 ppm (133.6 mg/cu m) were 2.24% and 1.77%, respectively, and the reported accuracy at both concentrations was 95%. The author concluded that this degree of accuracy and reproducibility should be readily attainable in field use, since fluctuations in temperature and humidity and the presence of other volatile organic materials had little effect.
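The relative standard deviation (coefficient of variation) reported for these bag samples is the standard deviation of replicate analyses divided by their mean. A minimal sketch, using invented replicate readings rather than Ketterer's raw data:

```python
import statistics

# Hypothetical replicate analyses of a nominal 25-ppm bag sample:
readings_ppm = [24.6, 25.3, 24.9, 25.5, 24.7, 25.2, 24.8]  # assumed values

rsd = 100 * statistics.stdev(readings_ppm) / statistics.mean(readings_ppm)
print(f"RSD = {rsd:.2f}%")  # same order as the 2.24% quoted at 25 ppm
```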
The major advantage of bag sampling is that it permits direct analysis of the sampled air, ie, without the adsorption and desorption steps required for collection on solid sorbents . Its disadvantages include the bulky equipment required for personal sampling and the relatively high detection limit, approximately 50 ppb (0.13 mg/cu m), that results from the sample not being concentrated. Another disadvantage of bag sampling is that the sample volume is limited.
The most widely used sampling technique involves adsorption on solid sorbents such as Tenax-GC resin and activated charcoal. The major sampling problem in collecting vinyls on solid media is that vinyls have appreciable vapor pressures, which can result in sample migration or loss.
Tenax-GC resin was used by Ives to concentrate grab samples of contaminated air. Average recoveries of 90% for vinyl chloride at 6 ppb (0.015 mg/cu m) and 100% at 60 ppb (0.15 mg/cu m) were reported when contaminated air in a 500-ml gas sampling tube was flushed with nitrogen through the Tenax-GC resin trap at a flowrate of 85 ml/minute for 35 minutes. The trap was cooled in dry ice. Ahlstrom et al reported that Tenax-GC resin did not quantitatively adsorb vinyl chloride from the atmosphere, but they presented no data supporting this conclusion.
Zado and Rasmuson reported that the breakthrough volume for vinyl chloride on Tenax-GC resin was 170 ml at a flowrate of 30 ml/minute, but they did not specify the concentration sampled nor the dimensions of the resin bed. They stated that Tenax-GC resin had the next to the poorest breakthrough performance of 10 adsorbents tested. Nelms et al described a permeation sampling technique using a charcoal badge, 41 x 48 mm and 7 mm thick, pinned to the worker's clothing for personal sampling. ... Vinyl chloride has most often been analyzed by gas chromatography. This method has the advantages of being more specific and less expensive than infrared analysis. The authors noted that its disadvantages include the requirement of expensive equipment and the inability to make repeated injections of the sample.
They suggested using the less expensive Porapak N instead of Carbosieve B. However, the disadvantage of Porapak N is its low breakthrough volume of 976 ml, compared with greater than 2,000 ml for Carbosieve B, at a flowrate of 50 ml/minute.
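Breakthrough volume translates directly into a maximum sampling time at a given flowrate, which is the practical meaning of the comparison just made; a minimal sketch using the figures quoted above:

```python
# Maximum sampling time before breakthrough = breakthrough volume / flowrate.
flowrate_ml_min = 50.0

for sorbent, v_b_ml in [("Porapak N", 976), ("Carbosieve B", 2_000)]:
    t_max = v_b_ml / flowrate_ml_min
    print(f"{sorbent}: about {t_max:.0f} minutes before breakthrough")
# Porapak N: ~20 minutes; Carbosieve B: more than 40 minutes at 50 ml/minute.
```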
... of charcoal. After the charcoal was added, the flask was sealed and covered with dark paper, and the contents were stirred magnetically for 7 minutes. During this procedure, vinyl chloride is converted to 1,2-dibromo-1-chloroethane, which has a much greater sensitivity to electron-capture detection than vinyl chloride.
For samples enriched by column chromatography and determined by gas chromatography using an electron-capture detector, the authors reported a recovery of at least 35%.
Desorption of vinyl chloride with tetrahydrofuran was reported in an Environmental Protection Agency (EPA) publication . Recovery was 88%, and there was less diffusion into the headspace than was evident with carbon disulfide desorption; however, the solvent volume and the desorption conditions were not reported.
Ethyl Corporation reported that a carbon disulfide-pentane mixture was used to desorb vinyl bromide from about 14 g of Pittsburgh 20x50 activated carbon.
No data were located on the desorbing agent for vinyl fluoride and vinylidene fluoride samples. ... Table XVII-8 shows the specificity and approximate detection limit of each of these detectors for vinyl chloride.
Electron-capture detectors belong to the general class of direct-current ion chambers.
Nitrogen or argon is used as the carrier gas, and 3H or 63Ni is used as the radioactive source to excite the gas. As compounds are eluted from the gas chromatographic column, they become ionized by the excited carrier gas and produce an increased current flow across parallel electrodes. The current flow is proportional to the amount of compound present.
Electron-capture detection is more selective than flame-ionization detection, but it is less reliable and has a smaller dynamic range . A further disadvantage of electron-capture detection with respect to vinyl chloride analysis is that response to aromatic halides and polychlorinated hydrocarbons is relatively low . Hoffmann et al have extended the electron-capture detection limit for vinyl chloride by brominating vinyl chloride to produce 1,2-dibromo-1-chloroethane. The detection limit for this compound was 15 pg/injection, and the response was linear between 50 and 300 pg .
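Quantitation with any of these detectors rests on a calibration curve run within the linear range, here 50-300 pg for the brominated derivative. The sketch below uses invented detector responses purely to show the procedure:

```python
import numpy as np

# Hypothetical calibration standards within the 50-300 pg linear range:
standards_pg = np.array([50, 100, 150, 200, 250, 300])
response = np.array([1.02e4, 2.05e4, 3.01e4, 4.10e4, 5.02e4, 6.08e4])  # assumed

# Linear calibration: response = slope * mass + intercept
slope, intercept = np.polyfit(standards_pg, response, 1)

# Quantify an unknown injection from its measured response:
unknown_response = 3.55e4
unknown_pg = (unknown_response - intercept) / slope
print(f"Unknown injection: about {unknown_pg:.0f} pg")
```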
Microcoulometric detection is highly sensitive and accurate for chloride ions . As chlorinated hydrocarbons are eluted from the gas chromatograph column, they are pyrolyzed to form hydrogen chloride gas. The hydrogen chloride causes silver chloride to precipitate, disturbing the electrical balance at the positive silver electrode. The coulometer regenerates silver ions until the electrical balance is restored, and the current generated to restore the balance is proportional to the number of chloride ions generated. Ernst and Van Lierop used a Hall detector (microcoulometer) for the analysis of vinyl chloride; the vinyl chloride was pyrolyzed in a quartz tube in the presence of hydrogen, and the hydrogen chloride formed was detected as a function of the increased conductivity of an aqueous reservoir. A detection limit of 0.07 ng, slightly better than the flame-ionization detection limit, has been reported . The major advantage of microcoulometry is its sensitivity to organohalides .
Its disadvantage is its electrical power requirements, which make the detector impractical for field use. According to reports of the NIOSH-accepted method for the analysis of vinyl chloride and the MCA method for vinylidene chloride , a sample of 0.2 ng/injection of vinyl chloride or vinylidene chloride, respectively, can be detected by flame ionization. However, the conditions under which the MCA method was tested were not specified.
For vinylidene chloride analysis, the NIOSH-proposed method reports that a sample loading of 7 µg (about 35 ng/injection) had a desorption efficiency of greater than 80%. Detector response is generally a function of the number of carbon atoms in a molecule of a compound, although a reduced response or no response may occur when the carbon atom is attached to atoms other than hydrogen, such as chlorine, oxygen, or sulfur. ... Several substances that are present in ambient air act as interferences, and thus the method is not specific for vinyl chloride. The EPA report noted that vinyl chloride is detectable at an absorption frequency of 941 or 917 1/cm. The authors pointed out that these major problems could be circumvented by additional instrumentation, but they cautioned that the cost would be high.
Effective optical paths of 20 meters are required in order for infrared analyzers to achieve a detection limit of 1 ppm (2.56 mg/cu m).
Other methods that have been used to determine vinyl chloride concentrations include colorimetry and polarography .
The sensitivities of the colorimetric methods are very much affected by such interferences as ethylene and methanol, and the sensitivity of the polarographic method is affected by any other volatile materials that may be present.
Gronsberg used a photometric method to determine concentrations of vinylidene chloride in air. His method is based on the reaction of vinylidene chloride with pyridine and on subsequent condensation of the reaction products with aniline or barbituric acid.
After the reaction, a polymethine dye complex is formed.
The method has a sensitivity of 2 µg/photometric cuvette volume and is capable of determining vinylidene chloride in air at concentrations of 10 mg/cu m (2.5 ppm). The author reported that vinyl chloride, acrylonitrile, dichloroethane, and hydrogen chloride had no effect on the analysis for vinylidene chloride but noted that trichloroethylene and 1,1,2-trichloroethane produced analogous reactions.
Color-specific detector tubes are available for the determination of vinyl chloride or vinylidene chloride in the work environment.
Two types of color reactions, one using chromate and bromophenol blue, and the other using permanganate and o-tolidine, were used for analyzing for vinyl chloride . Their ranges of linearity were 0. ... Interferences by nitric oxide, carbon disulfide, and trichloroethylene, however, are not removed by this scrubbing method.
# (4) Recommendations
The analytical methods described in detail in Appendices II-VI offer the necessary quantitative sensitivity and precision.
Their accuracy, technical requirements, and cost requirements are easily within the range of most analytical laboratories. Continuous monitors should be installed to monitor vinyl bromide, vinyl fluoride, or vinylidene fluoride as soon as sufficiently sensitive systems become available. The system ideally should be highly sensitive and specific to the vinyl halides sampled and free of interferences.
# Environmental Levels
In 1975, Barnhart et al reported the results of NIOSH industrial hygiene surveys of vinyl chloride monomer producers and polyvinyl chloride processors. Three vinyl chloride manufacturing plants and seven polyvinyl chloride processing plants were included in the study. Workplace air samples were collected on charcoal tubes and analyzed by gas chromatography after desorption with carbon disulfide. The survey found concentrations in the range of 0.1-9.20 ppm (0.256-23.55 mg/cu m) in the monomer plants and 0.01-0.35 ppm (0.03-2.18 mg/cu m) in the polyvinyl chloride processing plants. The authors concluded from these data that polyvinyl chloride processors were rarely exposed to vinyl chloride at concentrations greater than 0.5 ppm (1.28 mg/cu m), which was the Federal action level at that time. Monomer production workers, on the other hand, had a greater risk of vinyl chloride exposure. Polyvinyl chloride producers were not included in this study.
Results of other vinyl chloride monitoring surveys have also been reported and the data are presented in Tables IV-1 and IV-2.
Baretta et al monitored the concentrations of vinyl chloride in a vinyl chloride polymerization plant, apparently before and during 1967, continuously with an infrared spectrometer. Five sampling probes were placed in the work area for each of four job classifications, and mean vinyl chloride concentrations were calculated weekly. Mean area concentrations for the coagulator operator, dryer operator, blender-packager, and polymer operator declined steadily during the 7 months of the study. The authors attributed the decline to undescribed "actions undertaken to reduce the atmospheric concentration" of vinyl chloride.
Weekly average vinyl chloride concentrations decreased from 205 to 40 ppm (524.8 to 102.4 mg/cu m) for coagulator operators and from 90 to 20 ppm (230 to 51 mg/cu m) for dryer operators. Blender-packagers and polymer operators were consistently exposed at concentrations below the target concentration of 50 ppm (128 mg/cu m) used in the plant at that time.
Kramer and Mutchler , in 1972, reported 8-hour TWA concentrations for vinyl chloride of from 0 to 300 ppm (0 to 768 mg/cu m) in a vinyl chloride polymerization plant. They stated that the mean TWA concentration was 155 ppm (396.8 mg/cu m) in 1950 and 30 ppm (76.8 mg/cu m) in 1965. Concentrations were estimated on the basis of area sampling data. Cook et al , in 1971, reported that concentrations of airborne vinyl chloride inside a reactor during scraping operations tended to be below 100 ppm (256 mg/cu m) and were usually about 50 ppm (128 mg/cu m). These estimates were developed from information supplied by a "small number" of unspecified plants, and the dates of the analyses were not reported.
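An 8-hour TWA concentration of the kind reported in these surveys weights each measured concentration by the time spent at it. The task profile in this sketch is hypothetical, chosen only to show the calculation:

```python
# 8-hour time-weighted average: TWA = sum(c_i * t_i) / 8.
# The task profile below is hypothetical, for illustration only.
tasks = [  # (concentration in ppm, hours at that concentration)
    (300.0, 1.0),   # eg reactor-area work
    (50.0,  3.0),   # general process area
    (5.0,   4.0),   # control room
]

twa = sum(c * t for c, t in tasks) / 8.0
print(f"8-hour TWA = {twa:.0f} ppm")  # 59 ppm for this profile
```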
In 1975, Ott et al ... Ott et al , in 1976, also estimated TWA concentrations of vinylidene chloride from data gathered by area monitoring in production and polymerization facilities from 1956 through 1965. Estimated concentrations ranged from 5 to 70 ppm (19.85-277.9 mg/cu m) with excursions to 1,900 ppm (7,543 mg/cu m). Dow Chemical USA [230] has provided NIOSH with vinylidene chloride monitoring data. These data are summarized in Table IV.
Bales , in 1977, provided NIOSH with the results of two industrial hygiene surveys conducted at a vinyl fluoride polymerization and monomer production plant. Samples were collected in 7.7-liter Teflon bags and were analyzed for vinyl fluoride by gas chromatography. TWA concentrations of vinyl fluoride ranged from 1 to 5 ppm (1.88 to 9.4 mg/cu m) for 11 samples collected from employees' breathing zones. Both personal and area samples collected in the monomer plant showed that concentrations of vinyl fluoride were less than 2 ppm (3.76 mg/cu m) for all but one sample (21 ppm or 39.5 mg/cu m) taken at the start of the operation.
Bales also reported results from an industrial hygiene survey in a vinyl bromide monomer plant. The data are shown in Table IV.
# Engineering Controls
Engineering controls should be used to eliminate the potential for exposure to vinyl halides in the workplace and to prevent fire and explosion. These goals can be achieved with properly constructed and maintained closed-system operations and appropriate safety precautions.
Closed-system operations provide the best means for eliminating employee exposures to vinyl halides. Closed-system operations are effective only when the integrity of the system is maintained by frequent inspection and by prompt repair of any leaks that are found. Closed-system operations should be performed under negative pressure.
Where closed systems cannot be adequately designed and effectively used, local exhaust ventilation systems should be provided to direct vapors and gases away from employees and to prevent the recirculation of contaminated exhaust air. Contaminated air should be directed to an incinerator equipped with scrubbers to remove any toxic combustion products. Exhaust ventilation systems for quality control laboratories or laboratory hoods where samples are prepared for analysis should be equipped with sorbers. Guidance for designing a local exhaust ventilation system can be found in Recommended Industrial Ventilation Guidelines , Industrial Ventilation-A Manual of Recommended Practice , or more recent revisions, and in Fundamentals Governing the Design and Operation of Local Exhaust Systems, ANSI Z9.2-1971 . Ventilation systems of this type require regular inspection and maintenance for effective operation. These inspections should include face-velocity measurements of the collecting hood or duct, inspection of the air mover and collector, and measurements of vinyl concentrations in workroom air. Continuous airflow indicators, such as oil or water manometers, are recommended and should be properly mounted on collection hoods, ductwork, or laboratory hoods and marked to indicate the appropriate airflow. Although it may be unnecessary to ventilate monomer production equipment, since it is usually located outdoors, proper ventilation must be provided for the building from which the process is controlled . The control building should be maintained under positive pressure, and its air intake should be positioned so as to provide clean fresh air.
The procedures developed for vinyl chloride, as discussed in the following paragraphs, can also be used to control exposure to the other vinyl compounds.
Vinyl chloride monomer is manufactured in closed systems. The maintenance of the integrity of such systems is dependent on careful inspection of seals, especially at joints, valves, and pumps. Generally, where seals are closely inspected and maintained, escape of vinyl chloride can be prevented during monomer production. Several reports, however, have mentioned processes, periods, or areas of potential exposure . These include quality control sampling points , tank car loading , tank car gauging , storage and transfer systems , distillation areas , and leaks from other equipment . These potential sources of exposure in monomer production should be avoided by the use of alternative methods of tank car gauging (slip-tube gauging has been a source of exposure and should be controlled and monitored ), by the use of purge gas in loading and storage operations, prior to maintenance entry, and in quality-control sample collection , by the careful collection and purification of purge gas, by the use of closed-loop systems for tank car loading and quality control sampling , and by the use of properly maintained laboratory hoods for quality control laboratory procedures.
Most of the cases of angiosarcoma of the liver found in vinyl chloride workers have occurred among employees of vinyl chloride polymerization plants. Controls are needed to reduce worker exposure to vinyl chloride during the opening and cleaning of reaction vessels, at discharge points from relief valves and piping joints, while monomer is being stripped from the polymer, and while tank cars are being loaded or unloaded. A reduction in the frequency of manual cleaning of reactor vessels is absolutely necessary. The crust on the vessel can contain up to 3-5% monomer, and 30-50% of this may be liberated during cleaning . Automatic high-pressure water, steam, or organic solvent vessel-cleaning systems can reduce the frequency of worker entry into reactor vessels . Organic solvents are also toxic to varying degrees, however, and their use should therefore be carefully controlled. A proposed proprietary system for lining reactor vessels is claimed to eliminate resin buildup on the reactor vessel surfaces [286].
Because vinyl chloride polymerization reactions do not go to completion, polyvinyl chloride resins should be stripped of unreacted monomer. Stripping vessels, slurry tanks, centrifuges, and dryers must be enclosed and exhaust-ventilated .
If the amount of residual monomer in polymers is reduced, the exposure of fabricating workers to vinyl chloride will be substantially reduced. The quantity of residual monomer that is released from a resin depends on the process temperature, the surface area of the resin, and the quantity of unreacted monomer in the resin . Manufacturers should control their processes to reduce these factors to the greatest extent consistent with the demands of the process. Fabricators should know the residual monomer content of the polyvinyl chloride resins that they use. Methods are available, eg, aspiration and air stripping , for reducing the residual monomer content of resins early in the fabricating cycle, preferably during the compounding and dry-blending stages. Oberg found vinyl chloride concentrations of 0.04-1.5 ppm during laboratory blending operations.
Unless adequate ventilation is provided, the escape of residual vinyl chloride from bagged or boxed resin can result in buildup of the vinyl chloride concentration in warehouses and storerooms . According to Oberg , vinyl chloride concentrations of 0.8-1.5 ppm have been found in storage areas. A manufacturer of vinylidene chloride reported that a latex containing 2,000 ppm of unreacted vinylidene chloride released 1,000 ppm of vinylidene chloride in 1 week of storage . High storage or processing temperatures may also accelerate the release of unreacted monomer. Oberg found ranges of vinyl chloride concentrations of 1.5-2.2 ppm at 46-68 C and 60-120 ppm at 71-110 C.
In vinyl production areas, employers should install automatic, multipoint continuous monitoring systems with alarm devices sensitive to airborne vinyls at the recommended exposure limits. Baker and Reiter reported on a highly sensitive automatic monitoring system using a gas chromatograph equipped with a flame-ionization detector and an alarm device that activated when the vinyl chloride concentration exceeded a preselected level, the check sample analysis was out of range, or the sample flow was insufficient. The system required 2 minutes for an analysis, and each analyzer could monitor up to 20 locations. Areas in which high vinyl concentrations are detected should immediately be monitored for gas leaks with a portable organic vapor analyzer. Also, entry into such areas should be limited to authorized personnel with proper protective clothing and equipment. As soon as a leak is located, properly equipped maintenance personnel familiar with emergency procedures should try to repair it.
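One practical consequence of the Baker and Reiter design is worth spelling out: a sequential analyzer revisits each sampling point once per cycle, so the worst-case delay before a leak at any one point is seen again follows directly from the analysis time and the number of points:

```python
# With 2 minutes per analysis and up to 20 sampling locations per
# analyzer, the worst-case revisit interval for any one location is:
analysis_minutes = 2
locations = 20

cycle_minutes = analysis_minutes * locations
print(f"Each location is re-checked about every {cycle_minutes} minutes")  # 40
```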
Efforts should be made to minimize the extent to which vinyl vapors mix with air in confined or regulated areas and to prevent vapors from being exposed to any ignition source. A flexible hose ventilation unit and recovery system which can be moved to the source of the leakage should be available to control leaks which are not readily repaired.
Unloading vinyls from railroad tank cars is especially hazardous while lines and hoses are being coupled and uncoupled. Vinyl chloride vapor and vapor from other vinyl halides also remain in tank cars after the liquid is removed. If compressors and pumps are used to remove the vapor, care should be taken to avoid leaks from this equipment. It has been suggested that tank cars be emptied down to only the vinyl chloride vapor pressure, which eliminates the need to purge the car and reduces the use of pumps and compressors .
Storage and process areas where vinyl halides are stored as liquids should be diked to prevent the uncontrolled spread of any spilled material. The diked areas should be designed with drainage systems to carry spilled material into holding ponds or other areas where the product can be recovered or disposed of in a safe manner.
The flammability of some of these compounds mandates careful design and operation of all spark- or heat-producing equipment in vinyl halide work areas. Electric systems and motors must be spark- and explosion-proof. Sump pumps in diked areas must also be explosion-proof.
Achievement and maintenance of reduced concentrations of airborne vinyls in the workplace are dependent on the implementation of the engineering control recommendations. According to an unpublished report submitted to NIOSH, the available data suggest that a combination of many control measures is required to keep vinyl chloride concentrations at or below the current Federal occupational limit of 1 ppm (2.56 mg/cu m). Since the promulgation of the vinyl chloride standard in 1974, many different types of control techniques have been employed in work areas, and employee exposures to vinyl chloride have been greatly reduced. Table IV-5 shows the apparent reductions in vinyl chloride concentrations that have been achieved in typical vinyl chloride polymerization plants, especially during 1974-1975, when most of the controls were installed .

In all workplaces where the vinyl halides are produced, handled, used, or stored, employers should supplement engineering and administrative controls with appropriate work practice programs. Work practice programs should be oriented toward methods for handling vinyl halides, procedures for cleaning up spills and responding to emergencies, and use and care of personal protective clothing and equipment. In addition, a regular program of instruction should be established to ensure that all potentially exposed employees are familiar with the specific hazards of each vinyl halide and with appropriate procedures for handling them. Employers should inform employees of any adverse effects that could be caused by inhalation of decomposition products. If contractors are employed for maintenance and repair activities or cleaning of vinyl-contaminated equipment, employers should ensure that the contractor personnel are also familiar with the hazards of the compounds and with precautions to be taken when performing their duties. Employers may use the Material Safety Data Sheet presented in Appendix XVI as a guide in providing employees with the necessary information.
The vinyl halides vary in their toxicities (Chapter III) and their chemical and physical properties (see Table XVII-1). Although this chapter and the literature cited in it deal mainly with vinyl chloride, all of the vinyl halides are similarly produced, handled, stored, and transported. Similar practices and engineering controls will usually be applicable, therefore, to all vinyl halides; those specific for each halide are discussed separately. The control procedures outlined in Chapter IV for specific processes involving vinyl chloride are not a substitute for good general work practices.
Since the promulgation of the 1974 Federal occupational exposure limit of 1 ppm (2.56 mg/cu m) for vinyl chloride, many papers have been published on various ways to reduce worker exposure to this compound. Although some practices are applicable to work with vinyl chloride at any time, most controls and practices can be separated into those that apply to monomer production, those that apply during polymer production, and those that apply during polymer fabrication or processing.
Although closed-loop systems may be used for quality-control sampling, the proximity of the employee to the sample cylinder connections greatly increases the likelihood of exposure in the event of leaks . Therefore, caution should be used in collecting quality-control samples even where closed loops are used.
All work areas in which exposure to vinylidene chloride or vinyl bromide may occur should be posted with warnings that a potential human carcinogen is present. For potential exposure to vinyl chloride, the area should be posted to warn that a human carcinogen is present.
Entry into regulated areas, as defined in Appendix I (29 CFR 1910.1017(e)), or confined or enclosed spaces should be carefully controlled by a permit system or the equivalent. A confined or enclosed space is usually thought of as any reactor, autoclave, tank, chamber, vat, pit, pipe, flue, duct, bunker, or underground room, and only properly protected personnel trained in emergency procedures should be permitted to enter such areas . Unauthorized personnel and those not properly protected should not be permitted to enter regulated areas or confined or enclosed spaces. Records of those who enter these spaces should be kept by means of a daily log, employment records, or the equivalent. Properly fitted protective clothing and equipment should be worn by anyone entering such areas, and suitable respiratory equipment should be worn if vinyl concentrations exceed the permissible exposure limits.
Whenever airborne vinyl halide concentrations exceed the recommended environmental limits, respirators must be used in accordance with Table I-1. The current Federal standard for vinyl chloride allows the use of a chemical cartridge respirator or a gas mask, front- or back-mounted canister, at concentrations of vinyl chloride not exceeding 10 ppm or 25 ppm, respectively. Service life requirements, 1 hour for a cartridge and 4 hours for a canister, are also listed (29 CFR 1910.1017(g)). NIOSH, however, has also required that end-of-service-life indicators be used with cartridge and canister air-purifying respirators. In December 1974, NIOSH and MSHA published the requirements for a canister or cartridge respirator with end-of-service-life indicators for use in vinyl chloride atmospheres. NIOSH has recently approved the 3M No. 8716 vinyl chloride cartridge respirator, which has an end-of-service-life indicator, for use in vinyl chloride at concentrations up to 10 ppm (DP Wilmes, written communication, February 1978). End-of-service-life indicators for canister gas masks for vinyl halides have yet to be developed. To prevent exposure through leakage, NIOSH recommends that each employee be provided an appropriate individually fitted respirator in good, clean condition, and that employees be drilled in the use of these respirators and in testing them for leakage, proper fit, and proper operation.
Since vinyl chloride, vinyl bromide, vinyl fluoride, and vinylidene fluoride are gases at ambient conditions and are liquids only under pressure, a hazard from splashes rarely exists under normal working conditions. These compounds can nevertheless cause eye and skin irritation, and contact with them should be avoided. Vinylidene chloride is a liquid at ambient conditions. Because the pressurized materials evaporate rapidly on release, excessive exposure to undiluted liquid vinyls could cause a "frostbite" type of "burn". Warnings against skin irritation and burns from contact with liquid vinylidene chloride and vinyl chloride have been published. Phenolic inhibitors of polymerization, formerly used widely, have been implicated in the causation of burns by contact with surfaces from which the inhibited vinyl monomer had evaporated, leaving a film of the inhibitor. If a vinyl halide is splashed on the skin, the affected areas should immediately be washed with soap and water. If eye exposure occurs, the affected eye should be rinsed with water for at least 15 minutes, and medical attention should be obtained as soon as possible. Eyewash fountains and emergency showers should be located near all vinyl exposure areas and should be readily accessible.
Employees who handle vinyls or enter vinyl exposure areas should be provided with appropriate clothing. Protective clothing should be provided clean and dry for each use. To prevent contamination of other work areas, employees should not wear protective clothing outside exposure areas. In most vinyl operations, employees should use coveralls made of any nonsparking material. Employees should also wear safety goggles or glasses with side shields, hardhats, respiratory protective equipment, rubber gloves, and boots whenever they enter confined or regulated areas. One vinyl bromide manufacturer has recommended that neoprene gloves and boots be worn by employees opening process lines and repairing pumps, and that a one-piece nylon suit, vinyl-coated on both sides, with attached neoprene boots and gloves, be worn by employees entering a reactor vessel or tank. Employers should warn employees that heat stroke may result from the wearing of impervious clothing.
Vinyl-contaminated work clothing should be kept separate from street clothing and should not be removed from the work area. Employers should provide shower and change rooms with locker room facilities that allow for complete separation of work and street clothing. Employers should encourage all employees working in areas where exposure to vinyls might occur to shower before changing from work clothes into street clothes. Employers should be responsible for the laundering of contaminated or soiled clothing, and no employee should be allowed to take or wear home any work clothing. All work clothing should be adequately cleaned after each wearing. Employers should inform laundry personnel of the possible hazard from vinyl contaminants on work clothing. Although the vinyl halides are at most only slightly soluble in water, clothing contaminated with liquid vinyls should be allowed to dry before being laundered. Drying should be done in a vacuum or other enclosed system provided with air ventilation devices in order to prevent vinyl halide release into the laundry or work area. Waste water should be handled in accordance with all applicable Federal, state, and local regulations.
The vinyl halides addressed in this document are flammable over wide ranges of concentrations in air, and contact with ignition sources should therefore be avoided. Vinyl chloride, vinylidene chloride, vinyl fluoride, and vinylidene fluoride have been reported to be explosive at concentrations of 3.6-33.0%, 7.0-16.0%, 2.6-21.7%, and 5.5-21.3% by volume in air, respectively. A producer of vinyl bromide reported that vinyl bromide at concentrations of 6.0-15% by volume in air may ignite in the presence of high-energy ignition sources and suggested that vinyl bromide be handled as a moderately flammable material.
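These reported ranges lend themselves to a simple check of whether a measured concentration lies within a compound's flammable range. The following is a minimal sketch using only the values given in the paragraph above; the table and function names are illustrative and not part of the original document.

```python
# Flammable (explosive) ranges in volume percent in air, as reported above.
FLAMMABLE_RANGE_VOL_PCT = {
    "vinyl chloride":      (3.6, 33.0),
    "vinylidene chloride": (7.0, 16.0),
    "vinyl fluoride":      (2.6, 21.7),
    "vinylidene fluoride": (5.5, 21.3),
    "vinyl bromide":       (6.0, 15.0),  # ignites only with high-energy sources
}

def in_flammable_range(compound: str, vol_pct: float) -> bool:
    """Return True if the concentration lies within the reported range."""
    lower, upper = FLAMMABLE_RANGE_VOL_PCT[compound]
    return lower <= vol_pct <= upper

# Example: air containing 5% vinyl chloride by volume is within the range.
assert in_flammable_range("vinyl chloride", 5.0)
```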
Since the vinyl halides are so readily flammable, it is important to prohibit smoking, carrying of uncovered smoking materials such as matches and lighters, open flames, and use of materials that can cause sparks in areas where vinyls are present. Smoking, if allowed at all on the plant site, should be restricted to designated areas. Signs warning of a danger of fire or explosion should be posted in areas where vinyls are produced, handled, or stored, and transport containers should have warning labels. Warning signs should also be prominently posted in areas where spills and leaks are likely to occur. Process equipment, such as tanks, pipelines, pumps, and compressors, should be grounded to prevent the buildup of static electricity. Firefighting and respiratory protective equipment should be readily available for use in case of emergency. Employers should inform firefighting personnel of the possible combustion products of the vinyl halides. Vinyl chloride combustion products include phosgene, hydrogen chloride, carbon monoxide, carbon dioxide, and water. Hydrogen chloride is also a major combustion product of vinylidene chloride. Employers should therefore provide firefighters with protective equipment to prevent injury from inhalation of or contact with the combustion products. Vandervort and Brooks reported that the major thermal decomposition products of polyvinyl chloride films were di-2-ethylhexyl adipate and hydrogen chloride. The authors found no vinyl chloride emissions during hot-wire cutting of the film, but warned against inhalation of aerosol particles from di-2-ethylhexyl adipate and hydrogen chloride.
To ensure the effectiveness of recommended work practices in protecting the employees' health, employers should require that all employees participate in an orientation program when they are hired and in periodic information seminars led by personnel qualified by experience or training. During orientation, employees should be informed of the hazards associated with handling of the vinyl halides and of the precautions that should be taken to prevent injury or illness. Employers should also inform employees that vinyl chloride is a known human carcinogen and that the other vinyls are potential human carcinogens. Employees should be made thoroughly familiar with emergency and evacuation procedures.
Periodic training of employees should include opportunities for employees to meet with management personnel to discuss or review safety procedures and new toxicologic findings. New information on the vinyl halides should be posted in designated areas accessible to employees. It is essential to stress the importance of the employees' cooperation with management in preventing adverse effects of exposure to vinyls, and employees should be encouraged to report all circumstances that might create the potential for such exposure.

significantly pathologic, and they concluded that the accepted TLV for vinyl chloride of 500 ppm (1,300 mg/cu m) was adequate to protect workers. The 1966 documentation concluded that, "although the available data are conflicting, the preponderance indicates a compound of relatively low toxicity with which a threshold limit of 500 ppm is consistent."
In 1970, the ACGIH announced its intention to reduce the TLV for vinyl chloride to 200 ppm. In 1972, the ACGIH reduced the TLV for vinyl chloride to 200 ppm (770 mg/cu m, actually equivalent to 512 mg/cu m) as an 8-hour TWA concentration. Several studies supporting this action were cited in the 1971 Documentation of Threshold Limit Values for Substances in Workroom Air, including the 1961 study by Torkelson et al and the 1963 study by Lester et al. The documentation also cited a study, conducted between 1950 and 1967 and presented in 1968 by Mutchler and Kramer, of exposure of chemical plant workers. Workers exposed to vinyl chloride (with about 5 ppm of vinylidene chloride) at a mean concentration of 160 ppm (410 mg/cu m) did not have significant changes in blood pressure, concentration of hemoglobin in the blood, or ECG's, and acroosteolysis was not found. However, changes of possible physiologic significance were noted in serum beta-lipoprotein, icteric index, and sulfobromophthalein retention. Based on analysis of these data, the authors suggested that some liver dysfunction might result from exposure to vinyl chloride (combined with 5 ppm of vinylidene chloride) at a TWA concentration of 300 ppm (768 mg/cu m) over a working lifetime. The 1971 documentation concluded that a TWA environmental limit of 200 ppm (770 mg/cu m) for vinyl chloride (with a few ppm of vinylidene chloride) "seems appropriate to prevent adverse systemic effects from long-continued daily exposure."
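The parenthetical corrections above ("actually equivalent to 512 mg/cu m") follow from the standard conversion between ppm and mg/cu m at 25 C and 1 atmosphere. A minimal sketch of that conversion; the function name is illustrative, and 24.45 liters/mol is the conventional molar volume at these conditions.

```python
MOLAR_VOLUME_L = 24.45  # liters/mol of an ideal gas at 25 C, 1 atm

def ppm_to_mg_per_cu_m(ppm: float, mol_weight: float) -> float:
    """Convert a gas concentration from ppm (by volume) to mg/cu m."""
    return ppm * mol_weight / MOLAR_VOLUME_L

VINYL_CHLORIDE_MW = 62.5  # g/mol

print(ppm_to_mg_per_cu_m(1, VINYL_CHLORIDE_MW))    # ~2.56 mg/cu m (the Federal limit)
print(ppm_to_mg_per_cu_m(200, VINYL_CHLORIDE_MW))  # ~511 mg/cu m, not 770 as first published
```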
In 1974, the ACGIH published a notice that the TLV for vinyl chloride would be reassigned as a result of its newly discovered carcinogenic potential. No specific studies were cited in support of this action. As of 1977, the TLV for vinyl chloride still awaited reassignment pending the acquisition of more definitive data.
According to a 1968 joint report of the International Labour Office and the World Health Organization, permissible limits set by foreign countries for vinyl chloride in the working environment include 30 mg/cu m for Bulgaria and 1 mg/cu m for the United Arab Republic and the Syrian Arab Republic. The German Democratic Republic has a limit of 500 mg/cu m for vinyl chloride in the work environment.
Limits adopted in foreign countries since 1974 reflect the accumulating evidence of the carcinogenic potential of vinyl chloride. The United Kingdom has set 25 ppm (64 mg/cu m) as a TWA limit, with a 50 ppm (128 mg/cu m) ceiling limit, until more definitive information is available. In 1976, the Federal Republic of Germany established Technical Guideline Concentrations for vinyl chloride of 10 ppm (26 mg/cu m) in existing polymerization plants and 5 ppm (13 mg/cu m) elsewhere until such time as an MAC value could be assigned. Sweden established an 8-hour TWA limit of 1 ppm (2.5 mg/cu m) and a ceiling limit of 5 ppm (13 mg/cu m) for exposure to vinyl chloride. The Swedish document noted that vinyl chloride has carcinogenic properties and that it may be absorbed to a considerable extent through the skin.
The International Labour Office recently published the following national occupational exposure limits for vinyl chloride: Yugoslavia, 75 ppm (300 mg/cu m, actually equivalent to 195 mg/cu m); Rumania, 100 mg/cu m as a TWA limit and 200 mg/cu m as a ceiling limit; Australia, 25 ppm (95 mg/cu m, actually equivalent to 64 mg/cu m); Hungary, 50 mg/cu m; Poland and USSR, 30 mg/cu m; Netherlands, 10 ppm (26 mg/cu m) as a ceiling limit; Finland, 10 ppm (26 mg/cu m); and Japan, 2.5 mg/cu m. In Italy, vinyl chloride is regarded as a human carcinogen, and an exposure limit of 5 ppm (13 mg/cu m) has been recommended. However, the exposure limit is intended as a guideline, as are those of Australia, Japan, and the Netherlands, and is not legally binding. In Switzerland, vinyl chloride is also regarded as a probable human carcinogen, and a provisional exposure limit of 10 ppm (26.5 mg/cu m) has been established. Switzerland also requires that the best available technical and medical protective measures be applied to ensure maximum reduction of risk from exposure to vinyl chloride.
The 1971 US Federal standard for workplace exposure to vinyl chloride (29 CFR 1910.93) was a ceiling limit of 500 ppm (1,280 mg/cu m), based on the 1968 TLV. On January 22, 1974, NIOSH informed the Occupational Safety and Health Administration (OSHA) that the BF Goodrich Chemical Company had reported the deaths of several of its employees from angiosarcoma of the liver, and that the deaths may have been occupationally related. A fact-finding hearing began on February 15, 1974 (reported in the Federal Register 39:35890, October 4, 1974), after consultation with NIOSH and a joint inspection of the BF Goodrich plant by OSHA, NIOSH, and Kentucky Department of Labor personnel. Preliminary reports of experiments conducted by Cesare Maltoni of the Istituto di Oncologia, Bologna, Italy, and other information disclosed at this hearing indicated that vinyl chloride could induce angiosarcoma in the liver of rats at exposure concentrations as low as 250 ppm (640 mg/cu m). OSHA concluded from the information presented at the hearing and in posthearing comments that occupational exposure to vinyl chloride was probably the cause of the angiosarcomas of the liver observed in workers in the industry. An Emergency Temporary Standard (ETS) was promulgated on April 5, 1974 (Federal Register 39:12342), as 29 CFR 1910.93(q). This standard reduced the permissible exposure level to 50 ppm (128 mg/cu m), as a ceiling limit, and established other requirements, including monitoring and respiratory protection.
OSHA published a proposed permanent standard (Federal Register 39:16896, May 10, 1974) for the regulation of exposure to vinyl chloride. The proposed standard specified that employee exposure be limited to "no detectable level" as determined by a sampling and analytical method sensitive to 1 ppm with an accuracy of ±50%. The proposal also called for monitoring employee exposures and implementing engineering control and work practice programs when necessary. Hearings on this proposal were conducted from June 25 through June 28 and from July 8 through July 11, 1974. The carcinogenicity of vinyl chloride in three animal species was documented in the record of this proceeding by the studies of Maltoni and of Industrial Bio-Test Laboratories (Federal Register 39:35891, October 4, 1974). These studies demonstrated the induction of angiosarcoma of the liver in rats and mice exposed to vinyl chloride at concentrations as low as 50 ppm (128 mg/cu m) and in hamsters exposed at higher concentrations. Evidence presented by these and other investigators also indicated additional tumorigenic and toxicologic properties of vinyl chloride. OSHA concluded from these findings of angiosarcoma of the liver in experimental animals and employees exposed to vinyl chloride that vinyl chloride "must be regarded as a human carcinogen, and the probable causal agent of angiosarcoma of the liver, and that exposure of employees to vinyl chloride must be controlled."
The current permanent standard for worker exposure to vinyl chloride was promulgated on October 4, 1974 (Federal Register 39:35896) and became effective January 1, 1975. The standard (29 CFR 1910.1017), presented as Appendix I of this document, includes an 8-hour TWA exposure limit of 1 ppm and a ceiling limit of 5 ppm, averaged over any period not exceeding 15 minutes. The standard specifies that no employee may be exposed to direct contact with liquid vinyl chloride.
The standard also establishes requirements for monitoring employee exposure, providing respiratory protection, and instituting medical surveillance programs. A TWA action level of 0.5 ppm (1.3 mg/cu m) also is specified in the standard. Where the results of monitoring show that no employee is exposed in excess of the action level, employers are exempted from certain provisions of the standard.
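The limits just summarized can be checked mechanically against interval air samples. The following is a minimal sketch of an 8-hour TWA computation and its comparison with the standard's limits; the shift measurements are hypothetical and the names are illustrative.

```python
# Limits from 29 CFR 1910.1017 as summarized above.
PEL_TWA_PPM = 1.0       # 8-hour TWA permissible exposure limit
ACTION_LEVEL_PPM = 0.5  # TWA action level

def eight_hour_twa(samples):
    """samples: list of (duration_hours, concentration_ppm) spanning the shift."""
    total_hours = sum(d for d, _ in samples)
    weighted = sum(d * c for d, c in samples)
    return weighted / total_hours  # assumes the samples cover the full shift

shift = [(2.0, 0.3), (4.0, 0.6), (2.0, 0.9)]  # hypothetical measurements
twa = eight_hour_twa(shift)                    # (0.6 + 2.4 + 1.8) / 8 = 0.6 ppm

print(twa > ACTION_LEVEL_PPM)  # True: the monitoring provisions apply
print(twa > PEL_TWA_PPM)       # False: below the permissible exposure limit
```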
# (b) Vinylidene Chloride

In 1975, the ACGIH adopted a TLV of 10 ppm (40 mg/cu m) for vinylidene chloride. Several studies were cited in the 1971 Documentation of Threshold Limit Values for Substances in Workroom Air in support of this limit. Increased mortality in rats, rabbits, guinea pigs, and monkeys exposed to vinylidene chloride at concentrations as low as 61 mg/cu m (15.4 ppm) for 90 days was reported by Prendergast et al. Gage found that after vinylidene chloride inhalation 6 hours/day for 20 days at 500 ppm (1,985 mg/cu m) there was nasal irritation, retarded weight gain, and liver cell degeneration in rats. At 200 ppm (794 mg/cu m), there was only slight nasal irritation, and no liver cell abnormalities were observed. Irish reported liver and kidney damage in rats, rabbits, guinea pigs, and dogs exposed to vinylidene chloride for 6 months at concentrations as low as 25 ppm (99 mg/cu m), and he suggested that concentrations in workplaces be maintained below 25 ppm.
In 1976, the ACGIH adopted a tentative Threshold Limit Value-Short Term Exposure Limit (TLV-STEL) of 20 ppm (79 mg/cu m) for vinylidene chloride.
The TLV-STEL was described as the maximum concentration at which employees could be exposed continuously for up to 15 minutes without suffering from intolerable irritation, chronic or irreversible tissue change, or narcosis sufficient to increase accident proneness, impair self-rescue, or reduce work efficiency. It should be noted that the 1976 STEL's were not determined on the basis of occupational or experimental data; rather, they were set empirically. A provision limiting the number of 20-ppm excursions to no more than four each day, with at least 60 minutes between exposure periods, was also included.
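The excursion provision can be stated as a small validation rule. A minimal sketch, assuming excursions are given as (start, duration) pairs in minutes sorted by start time; the input format and names are illustrative, not part of the ACGIH text.

```python
def excursions_permissible(excursions):
    """Check the 1976 STEL excursion provision described above:
    no more than four 20-ppm excursions per day, each up to 15 minutes,
    with at least 60 minutes between exposure periods.
    excursions: list of (start_minute, duration_minutes), sorted by start."""
    if len(excursions) > 4:
        return False
    for (start, dur), (next_start, _) in zip(excursions, excursions[1:]):
        if next_start - (start + dur) < 60:  # gap between exposure periods
            return False
    return all(dur <= 15 for _, dur in excursions)

# Three 10-minute excursions spaced 2 hours apart satisfy the provision.
print(excursions_permissible([(0, 10), (120, 10), (240, 10)]))  # True
```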
According to the 1968 joint report of the International Labour Office and the World Health Organization, national permissible limits for vinylidene chloride in the working environment include the following: Yugoslavia, 200 ppm (794 mg/cu m), listed as "dichloroethylene," and Bulgaria and Hungary, 50 mg/cu m, listed as "dichloroethylene." A 1977 publication of the International Labour Office lists the following occupational exposure limits for vinylidene chloride in foreign countries: Rumania, 500 mg/cu m as a TWA limit and 700 mg/cu m as a ceiling limit; Poland and USSR, 50 mg/cu m as a ceiling limit; and Belgium, Federal Republic of Germany, Netherlands, and Switzerland, 10 ppm (40 mg/cu m). Australia has established a provisional exposure limit of 10 ppm (40 mg/cu m) for vinylidene chloride. The exposure limits shown for Australia and the Netherlands are intended as guidelines and are not legally binding.
No US Federal standard for workplace exposure to vinylidene chloride currently exists.
# (c) Vinyl Bromide

In 1971, the ACGIH recommended a TLV for vinyl bromide of 250 ppm (1,095 mg/cu m). This TLV was adopted in 1972.
Two studies were included in the 1971 Documentation of Threshold Limit Values for Substances in Workroom Air as bases for this TLV. In an unpublished study cited by ACGIH, Torkelson determined an oral LD50 of 500 mg/kg in male rats. In acute inhalation studies, Torkelson observed no tissue changes in rats exposed to vinyl bromide at concentrations as high as 25,000 ppm (109,500 mg/cu m). Leong and Torkelson reported no significant pathologic changes in rats exposed for 20 days to vinyl bromide at 10,000 ppm (43,800 mg/cu m). In a chronic inhalation study, they found no significant changes in growth rate, hematology, organ-to-body weight ratio, or gross and microscopic tissue findings as a result of exposure to vinyl bromide at 250 or 500 ppm (1,095 or 2,190 mg/cu m). The ACGIH concluded that "a TLV of 250 ppm should protect against bromide intoxication and organic injury, and ... excursions even to 500 ppm would be acceptable provided the time-weighted average does not exceed 250 ppm."
In 1976, in addition to the TWA exposure limit of 250 ppm (1,095 mg/cu m) for vinyl bromide, the ACGIH adopted a tentative TLV-STEL of 250 ppm (1,100 mg/cu m). In 1977, the ACGIH proposed a reduction of the TLV to 5 ppm (22 mg/cu m).
According to a 1977 publication of the International Labour Office, exposure limits of 250 ppm (1,095 mg/cu m) for vinyl bromide have been set by Australia, Belgium, Finland, and the Netherlands. The Australian and Dutch limits are intended as guidelines and are not legally binding.
No US Federal standard for workplace exposure to vinyl bromide currently exists.
# (d) Vinyl Fluoride
The ACGIH has not adopted a TLV for vinyl fluoride. No US Federal standard for exposure to vinyl fluoride currently exists. No foreign standards have been located.
# (e) Vinylidene Fluoride
The ACGIH has not adopted a TLV for vinylidene fluoride. No US Federal standard for exposure to vinylidene fluoride currently exists. No foreign standards have been located.
# Basis for the Recommended Standard

# (a) Permissible Exposure Limits

Among the vinyl halides discussed in this document, only vinyl chloride is regarded as a known human carcinogen that can induce a characteristic tumor, angiosarcoma of the liver. Animal studies have shown that vinyl chloride, vinyl bromide, and vinylidene chloride are capable of inducing angiosarcoma of the liver and other tumors. In these experiments, exposure to vinyl chloride at 50 ppm for 4 hours/day, 5 days/week, for 52 weeks induced angiosarcoma of the liver in 1/59 rats after 135 weeks; vinyl bromide at 250 ppm caused angiosarcoma of the liver in 2/30 rats after 52 weeks; and vinylidene chloride at 55 ppm for 6 hours/day, 5 days/week, for up to 12 months caused angiosarcoma of the liver in 3/72 mice. Exposure at higher concentrations induced a greater incidence of tumors and shortened the latency for their development, indicating that there was a dose-response relationship for tumor induction.
No reports in regard to the carcinogenicity of vinyl fluoride or vinylidene fluoride have been located. However, this lack of information cannot be construed as an indication that these compounds have no carcinogenic potential. Each of the vinyl halides may form reactive intermediates that can bind to cellular macromolecules. Putative metabolic pathways and reactive intermediates are shown in Figure XVII-3. The metabolic studies referenced with the figure, along with information from reports on structure-activity relationships, indicate that both vinyl fluoride and vinylidene fluoride may have the capacity to form intermediates capable of alkylating DNA, RNA, or proteins.
The hazard potential of these compounds in a biologic system is difficult to determine, however, because of detoxication mechanisms (reduction, hydrolysis, and conjugation) that compete with alkylation, as well as repair mechanisms.
Each of the vinyl halides has been found to be mutagenic in some test system. Vinyl chloride has been shown to have a direct mutagenic effect on Salmonella. Vinylidene chloride, vinyl bromide (VT Simoon and R Nangham, written communication, August 1977), and vinyl and vinylidene fluorides have also been shown to be mutagenic in bacterial test systems. Since many mutagenic compounds are known to also be carcinogenic, these findings suggest that all the vinyl halides might be potential carcinogens.
No studies have demonstrated teratogenic or other effects on human reproduction from exposure to any of the vinyl halides. Structural abnormalities, including increased numbers of unfused sternebrae, delayed ossification of skull bones, and an increase in the number of lumbar spurs, have been observed in mice whose dams were exposed to vinyl chloride at 500 ppm during days 6-15 of gestation and in rats exposed in utero to vinylidene chloride at 80 ppm during the same period. Other reproductive effects included increased resorptions/implants, decreased numbers of live fetuses/litter, and increased fetal crown-rump length. The authors of these studies suggested that the abnormalities observed were secondary to the maternal toxicity of the compounds.
Although these changes are not generally considered to be evidence of teratogenicity, they do indicate fetotoxic effects from maternal exposure to vinyl chloride.
Other adverse health effects attributed to exposure to vinyl halides include CNS [33,78,114,127,129], cardiovascular, respiratory, skin [32,111,112], and skeletal effects, as well as liver and spleen abnormalities [78,113].
The risk to the health of employees exposed to the vinyl halides is a combination of the risks of neoplastic and other systemic effects from their inhalation or ingestion and of their subsequent metabolism to reactive intermediates.
The observation of neoplasms in humans and animals exposed to vinyl chloride and in animals exposed to vinylidene chloride and vinyl bromide, the similarities in the excreted metabolic products of the vinyl halides, and the calculations of relative reactivity of these compounds on the basis of their physical and chemical properties suggest that each of the five may have a neoplastic potential.
Concern for employee health requires that the risk of carcinogenesis as a result of workplace exposure to these compounds be minimized. NIOSH believes that sufficient information does not exist to warrant changing the present Federal standard for vinyl chloride as stated in 29 CFR 1910.1017.
Further, NIOSH believes that the available information on vinylidene chloride and vinyl bromide indicates that they are at least as toxic as vinyl chloride. Although sufficient biologic information is not available concerning vinyl fluoride and vinylidene fluoride, chemical information suggests that these compounds may also exhibit toxicities similar to that of vinyl chloride; ie, until better animal toxicity and metabolism data are available, there appears to be no reason to treat the fluorides differently from the other vinyl halides.
Therefore, NIOSH recommends that workplace exposure to each of the five vinyl halides be controlled by adherence to the provisions of 29 CFR 1910.1017, and on the basis of animal carcinogenicity data, NIOSH suggests that employers make every effort to limit employee exposures to the lowest feasible levels, with an eventual goal of zero exposure. As pointed out in Chapter IV, there has been a steady decline in workplace environmental concentrations of vinyl chloride since 1974. The lower limits of reliable detectability (see Appendices II-III) are 0.003 ppm for vinyl chloride and 0.5 ppm for vinylidene chloride. Workplace concentrations of vinyl bromide have been measured as low as 0.01 ppm. Vinyl fluoride and vinylidene fluoride in air samples have been measured at concentrations as low as 1 ppm and 2 ppm, respectively (see Appendices V-VI).
Since the promulgation of the vinyl chloride standard in 29 CFR 1910.1017 in October 1974, several advances in respirator technology have taken place.
# VII. COMPATIBILITY WITH OTHER STANDARDS
The Environmental Protection Agency (EPA), the Department of Transportation (DOT), the Food and Drug Administration (FDA), and other Federal agencies have proposed or enacted standards regulating the use or release of several vinyl compounds. The standard recommended by NIOSH in this document for the vinyl halides is compatible with the standards promulgated and proposed by other Federal agencies. Standards proposed by other government agencies that are directly applicable to the standard proposed by NIOSH are reviewed below.
# (a) Vinyl Chloride

In 1976, EPA established a national emission standard for vinyl chloride (40 CFR 61.60-71) because vinyl chloride had been implicated in the development of angiosarcoma and other serious disorders in occupationally exposed persons and in experimentally exposed animals. Vinyl chloride emissions from ethylene dichloride and vinyl chloride production and purification processes were thereby limited to 10 ppm. For the oxychlorination process, vinyl chloride emissions were restricted to 0.2 g/kg of ethylene dichloride product. Vinyl chloride emissions from polymerization plants were limited to 10 ppm through the stripping stage and to 0.02 g/kg of polyvinyl chloride product when reactors were opened. Emissions of vinyl chloride were required to be controlled after stripping operations by reduction of residual monomer in the polymer to below 400 ppm (2,000 ppm for dispersion resins). Where control devices rather than stripping technology were used to limit emissions, dispersion resins were required to be controlled to 2 g/kg of polyvinyl chloride product and all other resins to 0.1 g/kg of polyvinyl chloride product. EPA assumed that adherence to these limits would reduce hazards to the health of the estimated 4.6 million people who live within 5 miles of controlled plants so that the incidence of new primary cancers as a result of exposure to vinyl chloride in this group of people would not exceed 1/year of exposure (Federal Register 41:46560, October 21, 1976). EPA stated that a complete ban on vinyl chloride emissions was not desirable because (1) vinyl chloride has beneficial uses for which substitutes are not available, (2) potential substitutes may have unknown health effects, (3) unemployment would result, and (4) control technology is available to greatly reduce vinyl chloride emissions.
On June 2, 1977, EPA proposed amendments to the national emission standard (Federal Register 42:28154-28159). Sources currently subject to a 10-ppm emission limit and new sources of this type would be required to limit emissions to 5 ppm. Emissions from oxychlorination reactors in ethylene dichloride-vinyl chloride plants would also be limited to 5 ppm. The amendments would direct that residual monomer in the polymer after stripping be limited to 500 ppm in new dispersion resins and 100 ppm in all other new resins. Where control devices rather than stripping technology would be used to limit emissions, new dispersion resins would have to be controlled to 0.5 g/kg of polyvinyl chloride product and all other new resins to 0.1 g/kg of polyvinyl chloride product. The proposed amendments also would prohibit any increase in emissions due to the construction of new sources within 8 km of existing sources. EPA proposed these amendments in an effort to continue to approach the zero-emissions level for vinyl chloride with available technology because of its determination that this is the only level absolutely protective of health. These limits and proposed amendments are not directly comparable with those proposed by NIOSH, since they do not specify breathing zone sampling. They do, however, reflect the same philosophy espoused by NIOSH; that is, that the final goal is zero exposure.
Aerosol drug products containing vinyl chloride as an ingredient or propellant are considered to be new drugs by FDA and are regulated as such (21 CFR 310.506). EPA (Federal Register 39:14753, April 26, 1974), FDA (21 CFR 700.14), and the Consumer Product Safety Commission (16 CFR 1500.17(a)(10)) have banned the use of vinyl chloride as an ingredient or propellant in aerosol products, including pesticides, cosmetics, and foods, intended for consumer use. These standards are more conservative than that proposed by NIOSH; however, they relate primarily to use of the product and only secondarily to occupational exposure.
FDA proposed rules for regulating the use of vinyl chloride polymers in contact with food on September 3, 1975 (Federal Register 40:40529-37). FDA stated that the use of vinyl chloride polymers and copolymers should be prohibited where there was a reasonable expectation of migration of vinyl chloride monomer into food. FDA proposed a ban on the use of vinyl chloride polymers and copolymers in food-contact articles except where specifically permitted in the FDA regulations. Exceptions to this ban included coatings, gaskets, cap liners, flexible tubing, and plasticized films. Use of polyvinyl chloride in water pipe was also permitted on an interim basis pending the outcome of studies to determine whether vinyl chloride could be extracted by water passing through such pipes. FDA has subsequently published regulations concerning the formulations and amounts of extractable monomer allowable in vinyl chloride copolymer components of paper and paperboard in contact with foods (21 CFR 176). Similar regulations for vinyl chloride copolymers used as basic components of single and repeated use food-contact surfaces have also been promulgated (21 CFR 177). These are compatible, although not directly comparable, with the provisions of the NIOSH standard specifying that no food shall be stored, dispensed, prepared, or consumed in vinyl halide exposure areas.
The Materials Transportation Bureau of DOT has designated vinyl chloride as a hazardous material for purposes of transportation in commerce and has established requirements pertaining to its labeling, packaging, and transportation (49 CFR 172.101). Regulations for the bulk transport of vinyl chloride by water have been established by the US Coast Guard. These regulations also set an exposure limit of 1 ppm (3 mg/cu m), averaged over any 8-hour period, or 5 ppm (13 mg/cu m), averaged over any period not exceeding 15 minutes, for personnel involved in vinyl chloride transfer operations. Continuous monitoring must be conducted during such operations, using a method with an accuracy of not less than ±50% from 0.25 through 0.5 ppm, ±35% from 0.5 ppm through 1 ppm, and ±25% over 1.0 ppm. The US Coast Guard has also developed a cargo compatibility guide for bulk liquid chemicals indicating combinations of chemicals that result in dangerous chemical reactions when accidentally mixed inside a cargo tank or pipe. Vinyl chloride is listed as incompatible with nitric acid and caprolactam solution. Regulations for unmanned barges carrying certain dangerous bulk cargoes, including vinyl chloride, also have been established by the US Coast Guard (46 CFR 151). The Coast Guard standard for occupational exposure is less stringent than that proposed by NIOSH.
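The concentration-dependent accuracy requirement in the Coast Guard rule above mirrors the OSHA monitoring provision in 29 CFR 1910.1017(d)(4), reproduced later in this document. A minimal sketch of that banded requirement; the function name is illustrative.

```python
def required_accuracy_pct(concentration_ppm: float) -> float:
    """Minimum measurement accuracy (95% confidence) for vinyl chloride,
    per the banded requirement described above."""
    if 0.25 <= concentration_ppm <= 0.5:
        return 50.0  # plus or minus 50 percent
    if 0.5 < concentration_ppm <= 1.0:
        return 35.0  # plus or minus 35 percent
    if concentration_ppm > 1.0:
        return 25.0  # plus or minus 25 percent
    raise ValueError("no accuracy requirement specified below 0.25 ppm")

print(required_accuracy_pct(0.4))  # 50.0
print(required_accuracy_pct(2.0))  # 25.0
```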
# (b) Vinylidene Chloride

FDA has published regulations concerning the formulations and amounts of extractable monomer allowable in vinylidene chloride copolymer components of paper and paperboard that come into contact with foods (21 CFR 176). Similar regulations for vinylidene chloride copolymers used as basic components of single and repeated use food-contact surfaces have also been established (21 CFR 177).
The Materials Transportation Bureau of DOT has designated vinylidene chloride as a hazardous material for purposes of transportation in commerce and has established requirements pertaining to its labeling, packaging, and transportation (49 CFR 172.101). In its cargo compatibility guide for bulk liquid chemicals, the US Coast Guard has listed vinylidene chloride as incompatible with nitric acid and caprolactam solution. Regulations for unmanned barges carrying certain dangerous bulk cargoes, including vinylidene chloride, have been established by the US Coast Guard (46 CFR 151).
NFPA provides a compilation of information on the hazardous properties and firefighting aspects of vinylidene chloride. This compound is very flammable and readily forms explosive mixtures in air. Polymerization may occur at elevated temperatures, possibly rupturing containers. A readily explosive peroxide may be formed during long-term storage. In the 1975 Manual of Hazardous Reactions, NFPA notes that vinylidene chloride polymer is self-reactive and may explode under appropriate conditions. It also reports that mixtures of vinylidene chloride and chlorosulfonic acid, nitric acid, or oleum (fuming sulfuric acid) in closed containers cause increased temperature and pressure. In firefighting operations, NFPA recommended that the gas flow be stopped and that dry chemical, foam, or carbon dioxide be used to extinguish flames. Water may be ineffective in putting out fires, but it should be used to cool containers, protect personnel in the area, flush spills away from flames, and disperse vapors if appropriate. The provisions of the National Electrical Code and those of the sections of the National Fire Codes dealing with flammable and combustible liquids and static electricity should be complied with where applicable.

# (d) Vinylidene Fluoride

FDA has published regulations concerning the formulations and amount of extractable monomer allowable in polyvinylidene fluoride resin components of articles intended for repeated food-contact use (21 CFR 177.2510).
# (e) Vinyl Bromide

No other standards were located for this compound.
# VIII. RESEARCH NEEDS
The current information on biologic effects of exposure to the vinyl halides is incomplete. Vinyl chloride has been studied more extensively than the other vinyl halides; however, the exact mechanism of its toxic action is not known. Further studies are needed to obtain additional information.
# (a) Epidemiology
Since one study has suggested that vinyl chloride causes increased fetal mortality in the wives of exposed workers, studies should be performed to investigate this potential for each of the vinyl halides.
Epidemiologic studies should be conducted to compare cohorts from the same plant having various magnitudes of exposure. This can be done relatively easily for the vinyl halides since these compounds are generally produced and used in specific units of large chemical plants. The epidemiologic studies should include precautions to minimize the "healthy worker" and "survivor" effects usually apparent in any worker population.
# (b) Toxic Effects

Exposure to vinyl chloride has been shown to induce a wide variety of toxic effects, including central nervous, respiratory, cardiovascular, digestive, skin, and skeletal system effects. Studies should be designed to determine which of these systems are affected directly by vinyl chloride or its metabolites and which effects, if any, are secondary to the primary systemic effects. Studies should also be conducted to determine the range of toxic effects of exposure to the other vinyl halides. These studies should be designed so that comparison of primary toxic effects can be made between the compounds; ie, the same species, strains, and protocols should be used for each study.
Studies should be conducted to determine the long-term effects of inhaled and ingested vinyl fluorides. Because of the increasing latency of tumor induction with decreasing exposure concentrations reported in studies of animals exposed to vinyl chloride, future experiments should not be terminated until the animals become moribund or die.
# (c) Sampling and Analysis

Experiments are needed to validate the lower range of the sampling and analytical methods proposed for vinyl bromide, vinyl fluoride, and vinylidene fluoride. Procedures and equipment should be improved to further minimize interferences and standardize the measurement of these compounds.
Although continuous monitoring devices are commercially available for vinyl chloride and vinylidene chloride, such devices are needed for the other vinyl halides. Research should also be conducted to increase the sensitivity and accuracy of the existing equipment so that reliable, continuous records of exposure for all work areas can be obtained.
Research is also necessary to develop techniques for biologic monitoring. At present, because of the rapid metabolism of the vinyl halides, blood analyses have only indicated adverse effects rather than determining exposures, and urinalysis has not been developed to the extent necessary to define exposures. Further studies of metabolism and excretion may develop the information necessary to calculate the body burden from the excretion products, so that an accurate assessment of the total accumulated dose can be made.
In addition, resources should be expended to assess the current state of control technology and the feasibility of implementing advances in this area. Thought should be given to the feasibility of using less toxic substitutes. Finally, respirators with end-of-service-life indicators should be developed for the vinyl halides for which they are not available.

(c) Permissible exposure limit. (1) No employee may be exposed to vinyl chloride at concentrations greater than 1 ppm averaged over any 8-hour period.

(2) No employee may be exposed to vinyl chloride at concentrations greater than 5 ppm averaged over any period not exceeding 15 minutes.
(3) No employee may be exposed to vinyl chloride by direct contact with liquid vinyl chloride.
(d) Monitoring. (1) A program of initial monitoring and measurement shall be undertaken in each establishment to determine if there is any employee exposed, without regard to the use of respirators, in excess of the action level.
(2) Where a determination conducted under paragraph (d)(1) of this section shows any employee exposures, without regard to the use of respirators, in excess of the action level, a program for determining exposures for each such employee shall be established. Such a program:
(i) Shall be repeated at least monthly where any employee is exposed, without regard to the use of respirators, in excess of the permissible exposure limit.
(ii) Shall be repeated not less than quarterly where any employee is exposed, without regard to the use of respirators, in excess of the action level.
(iii) May be discontinued for any employee only when at least two consecutive monitoring determinations, made not less than 5 working days apart, show exposures for that employee at or below the action level.
(3) Whenever there has been a production, process, or control change which may result in an increase in the release of vinyl chloride, or the employer has any other reason to suspect that any employee may be exposed in excess of the action level, a determination of employee exposure under paragraph (d)(1) of this section shall be performed.
(4) The method of monitoring and measurement shall have an accuracy (with a confidence level of 95 percent) of not less than plus or minus 50 percent from 0.25 through 0.5 ppm, plus or minus 35 percent from over 0.5 ppm through 1.0 ppm, and plus or minus 25 percent over 1.0 ppm. (Methods meeting these accuracy requirements are available in the "NIOSH Manual of Analytical Methods".)
(5) Employees or their designated representatives shall be afforded reasonable opportunity to observe the monitoring and measuring required by this paragraph.

(e) Regulated area. (1) A regulated area shall be established where:
(i) Vinyl chloride or polyvinyl chloride is manufactured, reacted, repackaged, stored, handled, or used; and

(ii) Vinyl chloride concentrations are in excess of the permissible exposure limit.
(2) Access to regulated areas shall be limited to authorized persons. A daily roster shall be made of authorized persons who enter.
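Paragraph (d) above encodes a small decision procedure for how often exposure determinations must be repeated. The following is a minimal sketch of that logic for a single employee's TWA determination; the function name and return strings are illustrative and are not regulatory text.

```python
def monitoring_frequency(twa_ppm: float,
                         action_level: float = 0.5,
                         pel: float = 1.0) -> str:
    """Minimum repeat interval implied by 1910.1017(d)(2) for one employee."""
    if twa_ppm > pel:
        return "repeat at least monthly"        # (d)(2)(i)
    if twa_ppm > action_level:
        return "repeat at least quarterly"      # (d)(2)(ii)
    # (d)(2)(iii): may discontinue only after two consecutive determinations
    # at or below the action level, made 5 or more working days apart.
    return "at or below action level; may discontinue after a second such result"

print(monitoring_frequency(1.2))  # repeat at least monthly
print(monitoring_frequency(0.7))  # repeat at least quarterly
```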
(f) Methods of compliance. Employee exposures to vinyl chloride shall be controlled to at or below the permissible exposure limit provided in paragraph (c) of this section by engineering, work practice, and personal protective controls as follows:
(1) Feasible engineering and work practice controls shall immediately be used to reduce exposures to at or below the permissible exposure limit.
(2) Wherever feasible engineering and work practice controls which can be instituted immediately are not sufficient to reduce exposures to at or below the permissible exposure limit, they shall nonetheless be used to reduce exposures to the lowest practicable level, and shall be supplemented by respiratory protection in accordance with paragraph (g) of this section. A program shall be established and implemented to reduce exposures to at or below the permissible exposure limit, or to the greatest extent feasible, solely by means of engineering and work practice controls, as soon as feasible.
(3) Written plans for such a program shall be developed and furnished upon request for examination and copying to authorized representatives of the Assistant Secretary and the Director. Such plans shall be updated at least every six months.

(3) A respiratory protection program meeting the requirements of § 1910.134 shall be established and maintained.
(4) Selection of respirators for vinyl chloride shall be as follows:
(5)(i) Entry into unknown concentrations or concentrations greater than 36,000 ppm (lower explosive limit) may be made only for purposes of life rescue; and
(ii) Entry into concentrations of less than 36,000 ppm, but greater than 3,600 ppm, may be made only for purposes of life rescue, firefighting, or securing equipment so as to prevent a greater hazard from release of vinyl chloride.
(6) Where air-purifying respirators are used:
(i) Air-purifying canisters or cartridges shall be replaced prior to the expiration of their service life or the end of the shift in which they are first used, whichever occurs first; and

(ii) A continuous monitoring and alarm system shall be provided where concentrations of vinyl chloride could reasonably exceed the allowable concentrations for the devices in use. Such system shall be used to alert employees when vinyl chloride concentrations exceed the allowable concentrations for the devices in use.
(7) Apparatus prescribed for higher concentrations may be used for any lower concentration.
(h) Hazardous operations. (1) Employees engaged in hazardous operations, including entry of vessels to clean polyvinyl chloride residue from vessel walls, shall be provided and required to wear and use:
(i) Respiratory protection in accordance with paragraphs (c) and (g) of this section; and
(ii) Protective garments to prevent skin contact with liquid vinyl chloride or with polyvinyl chloride residue from vessel walls. The protective garments shall be selected for the operation and its possible exposure conditions.
(2) Protective garments shall be provided clean and dry for each use.
(i) Emergency situations. A written operational plan for emergency situations shall be developed for each facility storing, handling, or otherwise using vinyl chloride as a liquid or compressed gas. Appropriate portions of the plan shall be implemented in the event of an emergency. The plan shall specifically provide that:
(1) Employees engaged in hazardous operations or correcting situations of existing hazardous releases shall be equipped as required in paragraph (h) of this section;
(2) Other employees not so equipped shall evacuate the area and not return until conditions are controlled by the methods required in paragraph (f) of this section and the emergency is abated.
(j) Training. Each employee engaged in vinyl chloride or polyvinyl chloride operations shall be provided training in a program relating to the hazards of vinyl chloride and precautions for its safe use.
(1) The program shall include:
(i) The nature of the health hazard from chronic exposure to vinyl chloride, including specifically the carcinogenic hazard;
(ii) The specific nature of operations which could result in exposure to vinyl chloride in excess of the permissible limit and necessary protective steps;
(iii) The purpose for, proper use, and limitations of respiratory protective devices;
(iv) The fire hazard and acute toxicity of vinyl chloride, and the necessary protective steps;
(v) The purpose for and a description of the monitoring program;
(vi) The purpose for, and a description of, the medical surveillance program;
(vii) Emergency procedures;
(viii) Specific information to aid the employee in recognition of conditions which may result in the release of vinyl chloride; and
(ix) A review of this standard at the employee's first training and indoctrination program, and annually thereafter.
(2) All materials relating to the program shall be provided upon request to the Assistant Secretary and the Director.
(k) Medical surveillance. A program of medical surveillance shall be instituted for each employee exposed, without regard to the use of respirators, to vinyl chloride in excess of the action level. The program shall provide each such employee with an opportunity for examinations and tests in accordance with this paragraph. All medical examinations and procedures shall be performed by or under the supervision of a licensed physician, and shall be provided without cost to the employee.
(1) At the time of initial assignment, or upon institution of medical surveillance:
(i) A general physical examination shall be performed, with specific attention to detecting enlargement of liver, spleen, or kidneys, or dysfunction in these organs, and for abnormalities in skin, connective tissues, and the pulmonary system (see Appendix A).
(ii) A medical history shall be taken, including the following topics:

(E) Gamma glutamyl transpeptidase.
(2) Examinations provided in accordance with this paragraph shall be performed at least:
(i) Every 6 months for each employee who has been employed in vinyl chloride or polyvinyl chloride manufacturing for 10 years or longer; and
(ii) Annually for all other employees.
(3) Each employee exposed to an emergency shall be afforded appropriate medical surveillance.
(4) A statement of each employee's suitability for continued exposure to vinyl chloride, including use of protective equipment and respirators, shall be obtained from the examining physician promptly after any examination. A copy of the physician's statement shall be provided each employee.
(5) If any employee's health would be materially impaired by continued exposure, such employee shall be withdrawn from possible contact with vinyl chloride.
(6) Laboratory analyses for all biological specimens included in medical examinations shall be performed in laboratories licensed under 42 CFR Part 74.
(7) If the examining physician determines that alternative medical examinations to those required by paragraph (k)(1) of this section will provide at least equal assurance of satisfying the requirements of this paragraph, the employer may accept such alternative examinations as meeting those requirements, if the employer obtains a statement from the examining physician setting forth the alternative examinations and the rationale for substitution. This statement shall be available upon request for examination and copying to authorized representatives of the Assistant Secretary and the Director.
(l) Signs and labels.

(m) Records. (1) All records maintained in accordance with this section shall include the name and social security number of each employee where relevant.
(2) Records of required monitoring and measuring, medical records, and authorized personnel rosters, shall be made and shall be available upon request for examination and copying to authorized representatives of the Assistant Secretary and the Director.
(i) Monitoring and measuring records shall:
(C) Be maintained for not less than 30 years.
(ii) Authorized personnel rosters shall be maintained for not less than 30 years.
(iii) Medical records shall be maintained for the duration of the employment of each employee plus 20 years, or 30 years, whichever is longer.
(3) In the event that the employer ceases to do business and there is no successor to receive and retain his records for the prescribed period, these records shall be transmitted by registered mail to the Director, and each employee individually notified in writing of this transfer.
(4) Employees or their designated representatives shall be provided access to examine and copy records of required monitoring and measuring.
(5) Former employees shall be provided access to examine and copy required monitoring and measuring records reflecting their own exposures.
(6) Upon written request of any employee, a copy of the medical record of that employee shall be furnished to any physician designated by the employee.
(n) Reports. (1) Not later than 1 month after the establishment of a regulated area, the following information shall be reported to the OSHA Area Director. Any changes to such information shall be reported within 15 days.
(i) The address and location of each establishment which has one or more regulated areas; and
(ii) The number of employees in each regulated area during normal operations, including maintenance.
(2) Emergencies, and the facts obtainable at that time, shall be reported within 24 hours to the OSHA Area Director. Upon request of the Area Director, the employer shall submit additional information in writing relevant to the nature and extent of employee exposures and measures taken to prevent future emergencies of similar nature.
(3) Within 10 working days following any monitoring and measuring which discloses that any employee has been exposed, without regard to the use of respirators, in excess of the permissible exposure limit, each such employee shall be notified in writing of the results of the exposure measurement and the steps being taken to reduce the exposure to within the permissible exposure limit.
(o) Effective dates. (1) Until April 1, 1975, the provisions currently set forth in § 1910.93q of this Part shall apply.
(2) Effective April 1, 1975, the provisions set forth in this section shall apply.
# Appendix A - Supplementary Medical Information

When required tests under paragraph (k)(1) of this section show abnormalities, the tests should be repeated as soon as practicable, preferably within 3 to 4 weeks. If tests remain abnormal, consideration should be given to withdrawal of the employee from contact with vinyl chloride, while a more comprehensive examination is made.
Additional tests which may be useful:
January 29, 1976. This method involves adsorption on activated carbon, desorption with carbon disulfide, and gas chromatography. The range for determination of vinyl chloride using this method is 0.008-5.2 mg/cu m (0.003-2.03 ppm) in a 5-liter air sample. The precision (coefficient of variation, CV(T)) is approximately 0.08 at levels of 7 and 71 mg/cu m (2.73 and 27.7 ppm).
# Principle of the Method
A known volume of air is drawn through two sorbent tubes in series containing activated carbon (made from coconut shells), which adsorbs the vinyl chloride present in the air sample. The collected vinyl chloride is then desorbed with carbon disulfide, and the resulting solutions are analyzed by gas chromatography with a flame-ionization detector. The areas under the resulting peaks are compared with areas obtained from the injection of standards.
# Range and Sensitivity
(a) The minimum detectable amount of vinyl chloride was found to be 0.2 ng/injection at a 1 x 1 attenuation of a gas chromatograph. This corresponds to an estimated concentration of 0.008 mg/cu m in a 5-liter air sample analyzed by this method. However, the desorption efficiency from activated carbon of amounts of vinyl chloride as small as 40 ng (0.008 µg/liter x 5 liters) has not been determined. Therefore, the detection limit of the overall method may be somewhat higher than 0.008 mg/cu m.
(b) At a sampling flowrate of 50 ml/minute, the total volume to be sampled should not exceed 5 liters. This value is based on data indicating that more than 10 liters of air containing 2.6 µg/liter (1 ppm) of vinyl chloride could be sampled on activated carbon before 5% breakthrough was observed. This would indicate that 5 liters of air containing no more than 5.2 mg/cu m may be sampled without significant breakthrough. If a particular atmosphere is suspected of containing a high concentration of contaminants or high humidity, the sampling volume should be reduced by 50%. A safety factor has been included in the 5-liter volume, and the capacity of the first tube should be adequate within these limits except under the most extreme conditions.
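The volume limit above translates directly into a maximum sampling time at the stated flowrate. A minimal sketch using only the figures given in this paragraph; the constant and function names are illustrative.

```python
MAX_SAMPLE_VOLUME_L = 5.0     # avoids breakthrough at up to 5.2 mg/cu m
FLOWRATE_ML_PER_MIN = 50.0    # recommended sampling flowrate

def max_sampling_minutes(high_humidity: bool = False) -> float:
    """Maximum sampling time; halve the volume under high humidity or
    heavy contamination, as the method directs."""
    volume_l = MAX_SAMPLE_VOLUME_L * (0.5 if high_humidity else 1.0)
    return volume_l * 1000.0 / FLOWRATE_ML_PER_MIN

print(max_sampling_minutes())      # 100 minutes at 50 ml/minute
print(max_sampling_minutes(True))  # 50 minutes when high humidity is suspected
```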
# Interferences
(a) When the amount of water in the air is so great that condensation actually occurs in the tube, organic vapors will not be trapped effectively. Experiments indicate that high humidity severely decreases the capacity of activated carbon for organic vapors.
(b) When two or more substances are known or suspected to be present in the air, this information, including their suspected identities, should be transmitted with the sample, since these compounds may interfere with the analysis for vinyl chloride.
(c) Any compound that has the same retention time as vinyl chloride under the operating conditions described in this method is an interference. Hence, retention time data on a single column, or even on a number of columns, may not provide proof of chemical identity. Often, operating conditions can be modified to eliminate interferences. Samples should be analyzed by an independent method when overlapping gas-chromatographic peaks cannot be resolved.
# Precision and Accuracy
(a) A coefficient of variation of 0.076 was obtained from analysis of each of two sets of sorbent tubes, one set of 27 tubes exposed to vinyl chloride at a concentration of 7.2 mg/cu m in air and another set of 29 tubes exposed at a concentration of 71.3 mg/cu m. These values reflect total sampling and analytical error as well as desorption efficiency correction errors.
(b) Experiments were performed to obtain some indication of the accuracy of this method, although accuracy was difficult to evaluate. These experiments generally involved six sorbent tube samples exposed to a synthetic atmosphere. The calculated value was the concentration expected based on the measured amounts of vinyl chloride and air mixed to prepare the synthetic atmosphere. The calculated value was not the true value, since it was subject to experimental error. The value found from analysis of each sorbent tube, after correction for desorption efficiency, was also compared with that found by the direct injection of gas samples from the same synthetic atmosphere used in loading the tubes. The results of these experiments are shown in Table XI-1. It should be noted that average concentrations determined by analysis of sorbent tubes were within 6% of the average concentrations determined by analysis of gas samples.
(c) The precision of the method is limited by the reproducibility of the pressure drop and, therefore, by the flowrates across the tubes. Because the pump is usually calibrated for one particular tube, differences in flowrates from tube to tube can cause sample volumes to vary.
# Apparatus
(a) Personal sampling pump: The pump should be a properly calibrated personal sampling pump for personal and area samples. The pump should also be capable of accurate performance at the recommended flowrates. It should be calibrated with a representative sorbent tube in the sampling line. A dry or wet test meter or a glass rotameter that will determine the flowrate to within ±5% may be used for the calibration.
(b) Sorbent tubes: The glass tubes are flame sealed at both ends. Each is 7 cm long, 6-mm outer diameter, 4-mm inner diameter, and contains two sections of 20/40-mesh activated carbon separated by a 3-mm portion of urethane foam. The activated carbon is prepared from coconut shells and is fired at 600 C prior to packing to remove adsorbed materials. The primary adsorbing section contains 100 mg of sorbent, the backup section 50 mg. A plug of silanized glass wool is placed in front of the adsorbing section. The pressure drop across the tube must be less than 2 inches of water at the recommended sampling flowrate.

(e) A mechanical or electronic integrator and a recorder or some method for determining peak area.
(f) Vials (2 ml) that can be sealed with caps containing Teflon-lined silicone rubber septa.
(g) Microliter syringes (10 µl and other convenient sizes for making standards).
(h) Gastight syringe (1 ml, with a gastight valve).
(i) Pipets (0.5-ml delivery pipets or 1.0-ml pipets graduated in 0.1-ml increments).

(8) Cap the sorbent tubes with the supplied plastic caps immediately after sampling. Under no circumstances should rubber caps be used.
(9) Treat one tube in the same manner as a sample tube (break, seal, and transport), but do not sample any air through the tube. This tube is labeled as a blank.
(10) Pack capped tubes tightly to minimize tube breakage during transport to the laboratory. The use of two tubes in series during sampling eliminates the need for cooling during shipping. However, if only one tube is used, and if the samples will spend a day or more in transit, then cool the tubes, eg, with dry ice, to minimize migration of the vinyl chloride to the backup section.
(11) Samples received at the laboratory are logged in and immediately stored in a freezer (around -20 C) until time for analysis. Samples may be stored in this manner for long periods of time (2 months) with no appreciable loss of vinyl chloride. Even around -20 C, vinyl chloride will equilibrate between the two sections of activated carbon in one tube, ie, it will migrate to the backup section. This phenomenon is observable after 2 weeks and may be confused with sample loss after 1-2 months.
(b) Analysis of Samples (1) Cleaning of Equipment. All glassware used for the laboratory analysis should be washed with detergent and thoroughly rinsed with tap water and distilled water.
(2) Preparation and Desorption of Samples. The two tubes used in the collection of a single sample are analyzed separately. If only one tube is used for sampling, then each section of activated carbon should be analyzed separately. Discard the glass wool from each tube. Transfer both sections of each tube to a small vial containing 1 ml of the precooled carbon disulfide. It is important to add the sorbent to the carbon disulfide and not the carbon disulfide to the sorbent. Cap the vial with a septum cap. Discard the separating section in each tube. Tests indicate that desorption is complete in 30 minutes if the sample is agitated occasionally during this period. The samples should be analyzed within 60 minutes after addition to carbon disulfide.
(3) Gas-Chromatographic Conditions. The typical operating conditions for the gas chromatograph are:

(4) Injection: The first step in the analysis is the injection of the sample into the gas chromatograph. To eliminate difficulties arising from blowback or distillation within the syringe needle, use the solvent flush injection technique. Flush a 10-µl syringe with solvent several times to wet the barrel and plunger. Draw 2 µl of solvent into the syringe to increase the accuracy and reproducibility of the injected sample volume. Remove the needle from the solvent and pull the plunger back about 0.4 µl to separate the solvent flush from the sample with a pocket of air to be used as a marker. Then immerse the needle in the sample and withdraw a 5-µl aliquot to the 7.4-µl mark (2 µl of solvent + 0.4 µl of air + 5 µl of sample = 7.4 µl). After the needle is removed from the sample and prior to injection, the plunger is pulled back a short distance to minimize evaporation of the sample from the tip of the needle.
Make duplicate injections of each sample and standard. No more than a 3% difference in area from repeated injections is to be expected. Automatic sampling devices may also be used. A syringe equipped with a Chaney adapter may also be used in lieu of the solvent flush technique.
(5) Measurement of Area: Measure the area under the sample peak using an electronic integrator or some other suitable form of area measurement. Area measurements are compared with a standard curve prepared as discussed in Preparation of Standards.
(c) Determination of Desorption Efficiency (1) Importance of Determination. The efficiency of desorption of a particular compound can vary from one laboratory to another and also from one batch of sorbent to another. Thus, it is necessary to determine at least once the percentage of vinyl chloride that is removed in the desorption process. Desorption efficiency should be determined on the same batch of sorbent tubes used in sampling. Results indicate that desorption efficiency varies with loading (total vinyl chloride on the tube), particularly at lower values, eg, 2.5 µg.
(2) Procedure for Determining Desorption Efficiency. Sorbent tubes from the same batch as that used in obtaining samples are used in this determination. Inject a measured volume of vinyl chloride gas into a bag containing a measured volume of air. The concentration in the bag may be calculated if room temperature and pressure are known. The bag is made of Tedlar (or other material that will retain the vinyl chloride and not absorb it) and should have a gas sampling valve and a septum injection port. Sample a measured volume from the bag through a sorbent tube using a calibrated sampling pump. Prepare at least five tubes in this manner. These tubes are desorbed and analyzed in the same manner as the samples. Samples taken with a gastight syringe from the bag are also injected into the gas chromatograph. The concentration in the bag (standard) is compared with the concentration obtained from the tube (sample).
The desorption efficiency equals the amount of vinyl chloride desorbed from the sorbent divided by the product of the vinyl chloride concentration in the bag and the volume of synthetic atmosphere sampled, or:

desorption efficiency = (amount of vinyl chloride desorbed from sorbent) / [(vinyl chloride concentration in bag) x (volume of atmosphere sampled)]
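As a worked illustration of this ratio, the following minimal Python sketch computes the desorption efficiency from the bag concentration and the volume sampled. The function name and the numbers in the example are hypothetical.

```python
def desorption_efficiency(desorbed_ug, bag_conc_ug_per_liter, sampled_liters):
    """Desorbed mass divided by the mass delivered to the sorbent tube."""
    return desorbed_ug / (bag_conc_ug_per_liter * sampled_liters)

# Hypothetical run: 5 liters drawn from a 5.0-ug/liter bag atmosphere,
# with 22.5 ug recovered from the sorbent after desorption.
print(desorption_efficiency(22.5, 5.0, 5.0))  # 0.9
```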
# Preparation of Standards
Caution: These laboratory operations involve carcinogens. Vinyl chloride has been identified as a human carcinogen and appropriate precautions must be taken in handling this compound.
A series of standards, varying in concentration over the range of interest, is prepared and analyzed under the same gas-chromatographic conditions and during the same time period as the unknown samples. Curves are established by plotting concentration in µg/ml vs peak area or peak height. There are two methods of preparing standards, and they are comparable if highly purified vinyl chloride is used. If no internal standard is used in the method, standard solutions must be analyzed at the same time as the sample. This will minimize the effect of day-to-day variations of the flame-ionization response.
(a) Gravimetric Method. Slowly bubble vinyl chloride into a weighed 10-ml volumetric flask containing approximately 5 ml of toluene. After 3 minutes, weigh the flask again. A weight change of 100-300 mg will usually be observed. Dilute the solution to exactly 10 ml with carbon disulfide and use to prepare other standards by removing aliquots with syringes of various sizes. Subsequent dilution of these aliquots with carbon disulfide results in a series of standards that spans the range from 0.2 ng/injection, the minimum detectable amount of vinyl chloride, to 1.5 µg/injection. Standards are stored in a freezer at -20 C and have been found to be stable at this temperature for 3 days. Tight-fitting plastic tops on the volumetric flasks seem to retain the vinyl chloride better than ground-glass stoppers.
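The standard-curve procedure described above amounts to a linear calibration. The sketch below, which assumes numpy is available, fits a line to hypothetical standard points and inverts it to read a sample mass from a peak area; in practice the laboratory's own standards, run alongside the samples, would supply the points.

```python
import numpy as np

# Hypothetical calibration points: mass injected (ug) vs observed peak area.
standard_ug = np.array([0.05, 0.10, 0.25, 0.50, 1.00])
peak_area = np.array([410.0, 820.0, 2050.0, 4100.0, 8200.0])

slope, intercept = np.polyfit(standard_ug, peak_area, 1)  # least-squares line

def ug_from_area(area):
    """Invert the fitted calibration line to read sample mass (ug)."""
    return (area - intercept) / slope

print(round(ug_from_area(3000.0), 3))  # ~0.366 ug on this synthetic curve
```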
# Calculations
(a) The weight, in µg, corresponding to the area under each peak is read from the standard curve for vinyl chloride. No liquid volume corrections are needed because both the standards and the samples are based on the number of µg in 1.0 ml of carbon disulfide and the volume injected in both cases is identical.

(b) Corrections for the blank are made for each sample: µg = µg(sample) - µg(blank). A similar procedure is followed for the backup sections.
(c) Add the amounts present in the front and backup sections of the same sample tube to determine the total amount of vinyl chloride in the sample.
(d) The total amount is corrected for the desorption efficiency at the level of vinyl chloride measured:

corrected amount (in µg) = amount (in µg) / desorption efficiency

(e) The concentration of vinyl chloride in air may be expressed in mg/cu m:

mg/cu m = corrected weight (in µg) / volume of air sampled (in liters)

(f) The concentration may also be expressed in terms of ppm by volume:

ppm = mg/cu m x 24.45 / 62.50

(24.45 is the molar volume in liters at 25 C and 760 mmHg; 62.50 is the molecular weight of vinyl chloride.)
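Steps (b) through (f) chain together naturally, as in the Python sketch below. The function name and example values are illustrative; the constants are the molar volume (24.45 l/mol at 25 C and 760 mmHg) and the molecular weight of vinyl chloride (62.50).

```python
MOLAR_VOLUME = 24.45       # l/mol at 25 C and 760 mmHg
MW_VINYL_CHLORIDE = 62.50  # g/mol

def vinyl_chloride_air_conc(front_ug, backup_ug, blank_front_ug,
                            blank_backup_ug, desorption_eff, air_liters):
    """Steps (b)-(f): blank-correct each section, sum, divide by the
    desorption efficiency, then convert to mg/cu m and ppm."""
    total_ug = (front_ug - blank_front_ug) + (backup_ug - blank_backup_ug)
    corrected_ug = total_ug / desorption_eff
    mg_per_cu_m = corrected_ug / air_liters  # ug/liter equals mg/cu m
    ppm = mg_per_cu_m * MOLAR_VOLUME / MW_VINYL_CHLORIDE
    return mg_per_cu_m, ppm

# Hypothetical values: 10.0/0.5 ug on the front/backup sections, small
# blanks, 85% desorption efficiency, 5-liter air sample.
print(vinyl_chloride_air_conc(10.0, 0.5, 0.1, 0.0, 0.85, 5.0))
```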
ppm " ag/cu a x 24.¿5 November 21, 1977 . This method involves adsorption on charcoal, desorption with carbon disulfide, and gas chromatography. The range for determination of vinylidene chloride using this method is 2-12 mg/cu m (0.5-3.02 ppm) in 7 lite rs of air. The precision (pooled relative standard deviation) i3 approximately 5% for analysis of samples containing 12-85 MS of vinylidene chloride/sample.
# Synopsis
(a) A known volume of air is drawn through a charcoal tube to trap the vinylidene chloride present.
(b) The charcoal in the tube is transferred to a small vial where the vinylidene chloride is desorbed with carbon disulfide.
(c) An aliquot of the desorbed sample is injected into a gas chromatograph.
(d) The area or the height of the resulting peak is determined and compared with either the peak areas or heights obtained from injection of standards.
# Working Range, Sensitivity, and Detection Limit

(a) The method was tested with sample loading between 12 and 85 µg of vinylidene chloride/charcoal tube. The samples were collected from atmospheres containing vinylidene chloride in the range of 7.6-10.0 mg/cu m and having a relative humidity of greater than 80%.
(b) The slope of the calibration curve (response vs weight/sample) was 0.0322 area count/µg when analysis was done by electronic integration. When analysis was done using peak height, the slope of the calibration curve was 4.75 x 10^-4 amp/µg.
(c) The lowest quantifiable limit for this method was determined to be 7 µg of vinylidene chloride/sample. At this level the relative standard deviation of replicate samples was found to be less than 10% and the desorption efficiency was greater than 80%. This limit could be lower if the charcoal used is shown to give better desorption characteristics at the lower level.
# Interferences
(a) When two or more substances are known or suspected to be present in the air, this information, including their suspected identities, should be transmitted with the sample, since these compounds may interfere with the analysis for vinylidene chloride.
(b) Any compound that has the same retention time as vinylidene chloride under the operating conditions described in this method is an interference. Therefore, retention time data on single or multiple columns cannot be considered proof of chemical identity.
(c) If the possibility of interference exists, separation conditions, eg, column packing, temperature, carrier flow, and detector, must be changed to circumvent the problem.
# Precision and Accuracy
(a) The pooled relative standard deviation of the analytical method was 4.8% for the analysis of 36 samples over the range of 12-85 µg vinylidene chloride/sample.
(b) The concentration of the sampled air was also determined using a gas phase infrared analyzer. The gas-chromatographic determinations averaged 5% lower when compared with the results of the infrared analyzer. No desorption efficiency corrections were used.
(c) The breakthrough volume and, therefore, the capacity of charcoal for vinylidene chloride decreased with increasing relative humidity. At 87% relative humidity the breakthrough volume was 10% of the breakthrough volume at 10% relative humidity. The breakthrough volume was also found to be a function of concentration of vinylidene chloride. When high relative humidity air containing 144 mg/cu m of vinylidene chloride was sampled at 0.2 liter/minute the breakthrough volume was 3.7 liters. At a vinylidene chloride concentration of 10 mg/cu m and high relative humidity the breakthrough volume was 7.3 liters.
(d) Samples of vinylidene chloride on charcoal were found to be stable at 25 C for 7 days and for 21 days if stored at 5 C for the remainder of the period.
# Advantages and Disadvantages of the Method
(a) The sampling device is small, portable, and involves no liquids. Many of the interferences can be eliminated by altering chromatographic conditions. The tubes are analyzed by means of a quick instrumental method.
(b) One disadvantage of the method is that the amount of sample that can be taken is limited by the capacity of the charcoal tube. When the sample value obtained for the backup section of the charcoal tube exceeds 20% of that found on the front section, the possibility of sample loss exists. During sample storage the volatile compounds may migrate throughout the tube until equilibrium is reached (33% of the sample on the backup section). This can be minimized by storing the samples in a refrigerator until the analysis is performed.
(c) The precision of the method is limited by the reproducibility of the pressure drop across the tubes. Variation in pressure drop will affect the flowrate. The reported sample volume will then be imprecise because the pump is usually calibrated for one tube only.
(d) The recommended gas-chromatographic packing will not separate vinyl chloride and carbon disulfide. Other gas-chromatographic packings that separate vinyl chloride and carbon disulfide do not separate vinylidene chloride and carbon disulfide. If analysis for each of these monomers is to be performed, it is necessary to use different columns to analyze the samples.
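Item (b) in the list above reduces to a simple acceptance test on the two sections of a tube. A minimal sketch, with the 20% threshold taken from the text and hypothetical masses:

```python
def possible_sample_loss(front_ug, backup_ug):
    """True when the backup section holds more than 20% of the front
    section's mass, the threshold quoted in item (b) above."""
    return backup_ug > 0.20 * front_ug

print(possible_sample_loss(50.0, 8.0))   # False: backup is 16% of front
print(possible_sample_loss(50.0, 12.5))  # True: backup is 25% of front
```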
# Apparatus
(a) Personal sampling pump capable of accurate performance at 0.2 liter/minute and calibrated with a representative charcoal tube in the line.
(b) Charcoal tubes: Glass tubes with both ends flame-sealed, 7 cm long with a 6-mm outer diameter, and a 4-mm inner diameter, containing two sections of 20/40-mesh activated carbon separated by a 2-mm portion of urethane foam. The activated charcoal is prepared from coconut shells and is fired at 600 C prior to packing. The adsorbing section contains 100 mg of charcoal, the backup section 50 mg. A plug of silylated glass wool is placed in front of the adsorbing section. The pressure drop across the tube must be less than 1 inch of mercury at a flowrate of 0.2 liter/minute.
(c) Gas chromatograph equipped with a flame-ionization detector. Optional: electronic integrator.
(d) Silanized glass gas-chromatographic column (10 feet x 1/4-inch outer diameter) packed with Durapak OPN 100/120 mesh. Any gas-chromatographic column capable of separating carbon disulfide and vinylidene chloride may be used.
(5) Measure and report the flowrate and time, or the sampling volume, as accurately as possible. The sample is taken at 0.2 liter/minute or less. The maximum volume sampled should not exceed 7.0 liters.
(6) Measure and record the temperature and pressure of the atmosphere being sampled.

(7) Cap the charcoal tubes with the plastic caps supplied immediately after sampling. Under no circumstances should rubber caps be used.
(8) For every 10 samples taken, one charcoal tube should be handled in the same manner as the samples (break, seal, and transport), except that no air is sampled through this tube. This should be labeled as a blank.
(9) If samples are shipped to a laboratory, they should be packed tightly to minimize tube breakage during shipping.
(10) Six to twelve unopened charcoal tubes should also be shipped so desorption efficiency studies can be performed on the same type and lot of charcoal used for sampling.
(11) Samples received at the laboratory are logged in and immediately stored in a refrigerator.
(c) Analysis of Samples (1) Preparation of Samples. The charcoal tubes are removed from the refrigerator and permitted to equilibrate to room temperature to prevent water condensation on the cold charcoal. Each charcoal tube is scored with a file in front of the first section of charcoal and broken open. The glass wool is removed and discarded. The front section (larger) is transferred to a small vial. The separating foam is removed from the tube and discarded. The backup section is also transferred to a small vial. The contents of each individual tube are desorbed before the next sample tube is opened.
(2) Desorption of Samples. After the two sections of a charcoal tube are transferred to small vials, 1.00 ml of carbon disulfide is pipetted into each of the two vials. A serum cap is then crimped into place immediately after the carbon disulfide has been added. (All work with carbon disulfide should be performed in a hood because of the high toxicity of carbon disulfide.) The capped samples are kept at room temperature with occasional agitation. Desorption is complete in 30 minutes. The samples should be analyzed the same day they are desorbed.
(3) Gas-Chromatographic Conditions: (A) 70 ml/minute helium carrier flow.
Standards are prepared by injecting an identical amount of cyclohexane stock solution into 1.0 ml of carbon disulfide. The samples and standards are analyzed as described in Analysis of Samples.
The desorption efficiency at each level is the ratio of the average amount found to the amount taken. A blank correction is not expected to be necessary. The desorption efficiency curve is constructed by plotting the amount of vinylidene chloride found in a sample vs the desorption efficiency.
# Calibration and Standardization

CAUTION: Vinylidene chloride has been tentatively identified as a carcinogen.
Precautions must be taken while handling this compound to prevent contamination of personnel and the working area. It is convenient to express the concentration of standards in terms of µg/1.0 ml of carbon disulfide or µg/sample. The density of vinylidene chloride is used to convert the volume taken to the mass taken (1.218 mg/µl). A series of standards varying in concentration over the range of interest is prepared and analyzed under the same gas-chromatographic conditions and during the same time period as the samples. It is best to alternate standard, then sample, during the analysis. Curves are established by plotting the concentration of the standards in µg/1.0 ml of carbon disulfide vs peak area or peak height.
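The density conversion described above fixes how much neat vinylidene chloride must be injected for a given standard. A minimal sketch, with an illustrative target mass:

```python
DENSITY_MG_PER_UL = 1.218  # vinylidene chloride, from the text

def injection_volume_ul(target_ug):
    """Volume of neat vinylidene chloride giving target_ug in 1.0 ml of CS2."""
    return (target_ug / 1000.0) / DENSITY_MG_PER_UL  # ug -> mg -> ul

# 85 ug (the top of the working range) corresponds to ~0.07 ul, which is
# why concentrated stock solutions and dilution are used in practice.
print(round(injection_volume_ul(85.0), 3))
```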
# Calculations
(a) The sample weight in µg is read from the standard curve.

(b) Blank corrections are not expected but, if the analysis shows a blank correction is needed, the correction is:

Wf = Ws - Wb

where: Wf = corrected amount (µg) on the front section of the charcoal tube; Ws = amount (µg) found on the front section of the sample charcoal tube; Wb = amount (µg) found on the front section of the blank charcoal tube. A similar procedure is followed for the backup sections.

The data presented in this proposed sampling and analytical method for vinyl bromide were adapted from NIOSH method No. P&CAM 127 for Organic Solvents in Air and from information provided by Bales and DW Yeager (written communication, February 1978). This proposed method, as outlined below, has not been tested by NIOSH but should allow routine analyses in the 1-ppm range.
# Principle of the Method

(a) A known volume of air is drawn through a charcoal tube to trap the vinyl bromide present.
(b) The charcoal in the tube is transferred to a small, graduated test tube and desorbed with carbon disulfide.
(c) An aliquot of the desorbed sample is injected into a gas chromatograph.
(d) The area of the resulting peak is determined and compared with areas obtained from the injection of standards.
# Range and Sensitivity
No data are currently available. However, Bales reported measurement of vinyl bromide concentrations down to 0.01 ppm using this general method.
# Interferences
(a) When the amount of water in the air is so great that condensation actually occurs in the tube, vinyl bromide will not be trapped. Preliminary experiments indicate that high humidity severely decreases the breakthrough volume.
(b) It must be emphasized that any compound which has the same retention time as vinyl bromide at the operating conditions described in this method is an interference. Hence, retention time data on a single column, or even on a number of columns, cannot be considered as proof of chemical identity. For this reason it is important that a sample of the solvent(s) be submitted at the same time so that identity(ies) can be established by other means.
(c) If the possibility of interference exists, separation conditions (column packing, temperatures, etc) must be changed to circumvent the problem.
# Precision and Accuracy
No data are currently available.
# Advantages and Disadvantages of the Method

(a) The sampling device is small, portable, and involves no liquids. Interferences are minimal, and most of those which do occur can be eliminated by altering chromatographic conditions. The charcoal tubes are analyzed by means of a quick, instrumental method.
(b) One disadvantage of the method is that the amount of sample which can be taken is limited by the number of mg that the tube will hold before overloading. When the sample value obtained for the backup section of the charcoal trap exceeds approximately 20% of that found on the front section, the possibility of sample loss exists.
During sample storage the more volatile compounds will migrate throughout the tube until equilibrium is reached.
(c) Furthermore, the precision of the method is limited by the reproducibility of the pressure drop across the two sections of the sampling tube. This drop will affect the flowrate and cause the volume to be imprecise, because the pump is usually calibrated for one tube only. This disadvantage could be eliminated by calibrating the pump with a representative charcoal tube.
# Apparatus
(a) An approved and calibrated personal sampling pump for personal samples. For an area sample any vacuum pump whose flow can be determined accurately at 1 liter/minute or less.
(b) Charcoal tubes: glass tube with both ends flame-sealed, 7 cm long with a 6-mm outer diameter and a 4-mm inner diameter, containing two sections of 20/40-mesh activated charcoal separated by a 2-mm portion of urethane foam. The activated charcoal is prepared from coconut shells and is fired at 600 C prior to packing. The absorbing section contains 100 mg of charcoal, the backup section 50 mg. A 3-mm portion of urethane foam is placed between the outlet end of the tube and the backup section. A plug of silylated glass wool is placed in front of the absorbing section. The pressure drop across the tube must be less than 1 inch of mercury at a flowrate of 1 liter/minute.

The data presented in this proposed sampling and analytical method for vinyl fluoride were adapted from NIOSH method No. P&CAM 127 for Organic Solvents in Air and information provided by Bales and DW Yeager (written communications, August 1977 and February 1978). The proposed method, as outlined below, has not been tested by NIOSH, but should allow routine analyses in the 1-ppm range.
# Principle of the Method

(a) A known volume of air is pumped into a Teflon bag. (b) An aliquot of the air sample in the bag is injected into a gas chromatograph.
(c) The area of the resulting peak is determined and compared with areas obtained from the injection of standards.
# Range and Sensitivity
The limit of detection has been reported as 1 ppm (1.88 mg/cu m) (DW Yeager, written communication, February 1978).
# Interferences
(a) It must be emphasized that any compound which has the same retention time as vinyl fluoride at the operating conditions described in this method is an interference. Hence, retention time data on a single column, or even on a number of columns, cannot be considered as proof of chemical identity. For this reason it is important that a sample of the bulk solvent(s) be submitted at the same time so that identity(ies) can be established by other means.
(b) If the possibility of interference exists, separation conditions (column packing, temperatures, etc) must be changed to circumvent the problem.

Calibration curves are prepared by plotting the concentration (mg of vinyl fluoride/2 ml) vs peak area.
# Calculations
(a) The weight, in mg, corresponding to each peak area is read from the standard curve for the particular compound. No volume corrections are needed, because the standard curve is based on mg/2 ml and the volume of sample injected is identical to the volume of the standards injected.
(b) The volume of air sampled (collected in bag) is converted to standard conditions of 25 C and 760 mmHg:
Vs " v x _L_ x 298 760 T+273
where: Vs -volume of air in lite rs at 25 C and 760 mmHg V volume of air in liters as measured P barometric pressure in'mmHg T - temperature of air in degress centigrade (c) The concentration of vinyl fluoride in the air sampled can be expressed in mg/cu m, which is numerically equal to yg/liter of air: mg/cu m - ftg /liter - total mg x 1,000 (qg/mg) Vs
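Both formulas translate directly into code. The sketch below assumes nothing beyond the equations as given; the input values are illustrative.

```python
def volume_at_standard_conditions(v_liters, p_mmhg, t_celsius):
    """Vs = V x (P/760) x (298/(T + 273)), as given above."""
    return v_liters * (p_mmhg / 760.0) * (298.0 / (t_celsius + 273.0))

def mg_per_cu_m(total_mg, vs_liters):
    """mg/cu m (numerically ug/liter) = total mg x 1,000 / Vs."""
    return total_mg * 1000.0 / vs_liters

vs = volume_at_standard_conditions(10.0, 740.0, 30.0)  # illustrative inputs
print(round(vs, 2))                     # ~9.58 liters at 25 C and 760 mmHg
print(round(mg_per_cu_m(0.05, vs), 2))  # ~5.22 mg/cu m
```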
# XV. APPENDIX VI
# SAMPLING AND ANALYTICAL METHOD FOR VINYLIDENE FLUORIDE IN AIR
The data presented in this proposed sampling and analytical method for vinylidene fluoride were adapted from NIOSH method No. P&CAM 127 for Organic Solvents in Air and from information provided by the Pennwalt Corporation and JL Sadenwasser (written communication, March 1978). This proposed method, as outlined below, has not been tested by NIOSH, but it should allow routine analyses in the 1-ppm range.
# Principle of the Method

(a) A known volume of air is drawn through two charcoal tubes in series to trap the vinylidene fluoride present.
(b) The charcoal in the tubes is transferred to a small, graduated test tube and desorbed with carbon disulfide.
(c) An aliquot of the desorbed sample is injected into a gas chromatograph.
(d) The area of the resulting peak is determined and compared with areas obtained from the injection of standards.
# Range and Sensitivity
No data are currently available. However, Sadenwasser (written communication, March 1978) reported measuring vinylidene fluoride concentrations down to about 2 ppm using this general method.
# Interferences
(a) When the amount of water in the air is so great that condensation actually occurs in the tube, vinylidene fluoride will not be trapped. Preliminary experiments indicate that high humidity severely decreases the breakthrough volume.
(b) It must be emphasized that any compound which has the same retention time as vinylidene fluoride at the operating conditions described in this method is an interference. Hence, retention time data on a single column, or even on a number of columns, cannot be considered as proof of chemical identity.

The personal sampling pump must be calibrated with representative charcoal tubes in the line. This will minimize errors associated with uncertainties in the sample volume collected.
(b) Collection and Shipping of Samples (1) Immediately before sampling, the ends of the tube should be broken to provide an opening at least one-half the internal diameter of the tube (2 mm).
(2) Position the second charcoal tube next to the sampling pump in tandem with the first tube, to serve as a backup. If one tube is used, the smaller section of charcoal is used as a backup and should be positioned nearest the sampling pump.
(3) The charcoal tube should be vertical during sampling.
(4) Air being sampled should not be passed through any hose or tubing before entering the charcoal tube.
(5) The flowrate, time, and/or volume must be measured as accurately as possible. The sample should be taken at a flowrate of 0.5 liter/minute or less to attain the total sample volume required. The sensitivity of the method is increased by using lower flowrates to increase the amount of sample collected (JL Sadenwasser, written communication, March 1978).

(6) The temperature and pressure of the atmosphere being sampled should be measured and recorded.

(7) The charcoal tubes should be capped with the supplied plastic caps immediately after sampling. Under no circumstances should rubber caps be used.
(8) One tube should be handled in the same manner as a sample tube (break, seal, and transport), except that no air is sampled through this tube. This tube should be labeled as a blank.
(9) Capped tubes should be packed tightly before they are shipped to minimize tube breakage during shipping.
(10) Samples received at the laboratory are logged in and immediately stored in a refrigerator.
(c) Cleaning of Equipment. All glassware used for the laboratory analysis should be detergent washed and thoroughly rinsed with tap water and distilled water.
(d) Analysis of Samples (1) Preparation of Samples. The two tubes used in the collection of a single sample are analyzed separately.

After the needle is removed from the sample and prior to injection, the plunger is pulled back a short distance to minimize evaporation of the sample from the tip of the needle. Duplicate injections of each sample and standard should be made. No more than a 3% difference in area is to be expected. A larger sample injection may be used to increase the sensitivity of the method (JL Sadenwasser, written communication, March 1978).
(5) Measurement of Area. The area of the sample peak is measured by an electronic integrator or some other suitable form of area measurement, and preliminary results are read from a standard curve prepared as discussed below.
(e) Determination of Desorption Efficiency (1) Importance of Determination. The desorption efficiency of a particular compound can vary from one laboratory to another and also from one batch of charcoal to another. Thus, it is necessary to determine at least once the percentage of the specific compound that is removed in the desorption process, provided the same batch of charcoal is used.
(2) Procedure for Determining Desorption Efficiency. Activated charcoal equivalent to the amount in the first section of the sampling tube (100 mg) is measured into a 5-cm, 4-mm inner diameter glass tube, flame-sealed at one end (similar to commercially available culture tubes). This charcoal must be from the same batch as that used in obtaining the samples and can be obtained from unused charcoal tubes. The open end is capped with Parafilm. A known amount of the vinylidene fluoride is injected directly into the activated charcoal with a microliter syringe, and the tube is capped with more Parafilm.
At least five tubes are prepared in this manner and allowed to stand at least overnight to assure complete adsorption of the vinylidene fluoride onto the charcoal. These five tubes are referred to as the samples. A parallel blank tube should be treated in the same manner except that no sample is added to it. The sample and blank tubes are desorbed and analyzed in exactly the same manner as the sampling tube described in Analysis of Samples.
Two or three standards are prepared by injecting the same volume of vinylidene fluoride into 2 ml of carbon disulfide with the same syringe used in the preparation of the sample. These are analyzed with the samples.
The desorption efficiency equals the difference between the average peak area of the samples and the peak area of the blank divided by the average peak area of the standards, or:
desorption efficiency = (area of sample - area of blank) / (area of standard)
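A one-line Python rendering of this ratio, with hypothetical peak areas:

```python
def desorption_efficiency_from_areas(area_sample_avg, area_blank,
                                     area_standard_avg):
    """(average sample area - blank area) / average standard area."""
    return (area_sample_avg - area_blank) / area_standard_avg

print(desorption_efficiency_from_areas(4200.0, 50.0, 5000.0))  # 0.83
```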
It is convenient to express concentration of standards in terms of mg/2 ml of carbon disulfide because samples are desorbed in this amount of carbon disulfide. To minimize error due to the volatility of carbon disulfide, one can inject five times the volume of vinylidene fluoride into 10 ml of carbon disulfide. For example, to prepare 0.3 mg/2 ml of standard, one would inject 1.5 mg into exactly 10 ml of carbon disulfide in a glass-stoppered flask. The density of the specific compound is used to convert 1.5 mg into µl for easy measurement with a microliter syringe. A series of standards, varying in concentration over the range of interest, is prepared and analyzed under the same gas-chromatographic conditions and during the same time period as the unknown samples. Curves are established by plotting concentration in mg/2 ml vs peak area. NOTE: Since no internal standard is used in the method, standard solutions must be analyzed at the same time that the sample analysis is done. This will minimize the effect of known day-to-day variations and variations during the same day of the flame-ionization detector response.
# Calculations
(a) The weight, in mg, corresponding to each peak area is read from the standard curve. No volume corrections are needed, because the standard curve is based on mg/2 ml of carbon disulfide and the volume of sample injected is identical to the volume of the standards injected.
(b) Corrections for the blank must be made for each sample:
corrected mg = mg(s) - mg(b)

where: mg(s) = mg found on the front section of the sample tube; mg(b) = mg found on the front section of the blank tube. A similar procedure is followed for the backup sections.
(c) The corrected amounts present in the front and backup sections of the same sample tube are added to determine the total measured amount in the sample.
(d) This total weight is divided by the determined desorption efficiency to obtain the total mg/sample.

The following items of information which are applicable to a specific product or material shall be provided in the appropriate block of the Material Safety Data Sheet (MSDS).
The product designation is inserted in the block in the upper left corner of the first page to facilitate filing and retrieval. Print in upper case letters as large as possible. It should be printed to read upright with the sheet turned sideways. The product designation is that name or code designation which appears on the label, or by which the product is sold or known by employees. The relative numerical hazard ratings and key statements are those determined by the rules in Chapter V, Part B, of the NIOSH publication, An Identification System for Occupationally Hazardous Materials. The company identification may be printed in the upper right corner if desired.
(a) Section I. Product Identification The manufacturer's name, address, and regular and emergency telephone numbers (including area code) are inserted in the appropriate blocks of Section I. The company listed should be a source of detailed backup information on the hazards of the material(s) covered by the MSDS. The listing of suppliers or wholesale distributors is discouraged. The trade name should be the product designation or common name associated with the material. The synonyms are those commonly used for the product, especially formal chemical nomenclature. Every known chemical designation or competitor's trade name need not be listed.
(b) Section II. Hazardous Ingredients The "materials" listed in Section II shall be those substances which are part of the hazardous product covered by the MSDS and individually meet any of the criteria defining a hazardous material.
Thus, one component of a multicomponent product might be listed because of its toxicity, another component because of its flammability, while a third component could be included both for its toxicity and its reactivity. Note that an MSDS for a single component product must have the name of the material repeated in this section to avoid giving the impression that there are no hazardous ingredients. Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known.
The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the "hole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt" to avoid disclosure of trade secrets.
Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, eg, "100 ppm LC50-rat," "25 mg/kg LD50 skin-rabbit," "75 ppm LC man," or "permissible exposure from 29 CFR 1910.1000," or, if not available, from other sources or publications such as the American Conference of Governmental Industrial Hygienists or the American National Standards Institute Inc. Flashpoint, shock sensitivity, or similar descriptive data may be used to indicate flammability, reactivity, or similar hazardous properties of the material.
(c) Section III. Physical Data The data in Section III should be for the total mixture and should include the boiling point and melting point in degrees Fahrenheit (Celsius in parentheses); vapor pressure, in conventional millimeters of mercury (mmHg); vapor density of gas or vapor (air = 1); solubility in water, in parts/hundred parts of water by weight; specific gravity (water = 1); percent volatiles (indicated if by weight or volume) at 70 F (21.1 C); evaporation rate for liquids or sublimable solids, relative to butyl acetate; and appearance and odor. These data are useful for the control of toxic substances. Boiling point, vapor density, percent volatiles, vapor pressure, and evaporation are useful for designing proper ventilation equipment. This information is also useful for design and deployment of adequate fire and spill containment equipment. The appearance and odor may facilitate identification of substances stored in improperly marked containers, or when spilled.
(d) Section IV. Fire and Explosion Data Section IV should contain complete fire and explosion data for the product, including flashpoint and autoignition temperature in degrees Fahrenheit (Celsius in parentheses); flammable limits, in percent by volume in air; suitable extinguishing media or materials; special firefighting procedures; and unusual fire and explosion hazard information. If the product presents no fire hazard, insert "NO FIRE HAZARD" on the line labeled "Extinguishing Media."
(e) Section V. Health Hazard Information The "Health Hazard Data" should be a combined estimate of the hazard of the total product. This can be expressed as a TWA concentration, as a permissible exposure, or by some other indication of an acceptable standard. Other data are acceptable, such as lowest LD50 if multiple components are involved.
Under "Routes of Exposure," comments in each category should reflect the potential hazard from absorption by the route in question. Comments should indicate the severity of the effect and the basis for the statement if possible. The basis might be animal studies, analogy with similar products, or human experiences. Comments such as "yes" or "possible" are not helpful. Typical comments might be:
Skin Contact-single short contact, no adverse effects likely; prolonged or repeated contact, possibly mild irritation. Eye Contact-some pain and mild transient irritation; no corneal scarring. "Emergency and First Aid Procedures" should be written in lay language and should primarily represent first-aid treatment that could be provided by paramedical personnel or individuals trained in first aid.
Information in the "Notes to Physician" section should include any special medical information which would be of assistance to an attending physician including required or recommended preplacement and periodic medical examinations, diagnostic'procedures, and medical management of overexposed employees.
(f) Section VI. Reactivity Data The comments in Section VI relate to safe storage and handling of hazardous, unstable substances. It is particularly important to highlight instability or incompatibility to common substances or circumstances, such as water, direct sunlight, steel or copper piping, acids, alkalies, etc. "Hazardous Decomposition Products" shall include those products released under fire conditions. It must also include dangerous products produced by aging, such as peroxides in the case of some ethers. Where applicable, shelf life should also be indicated.
(g) Section VII. Spill or Leak Procedures Detailed procedures for cleanup and disposal should be listed with emphasis on precautions to be taken to protect employees assigned to cleanup detail. Specific neutralizing chemicals or procedures should be described in detail. Disposal methods should be explicit including proper labeling of containers holding residues and ultimate disposal methods such as "sanitary landfill" or "incineration." Warnings such as "comply with local, state, and Federal antipollution ordinances" are proper but not sufficient. Specific procedures shall be identified.
(h) Section VIII. Special Protection Information Section VIII requires specific information. Statements such as "Yes," "No," or "If necessary" are not informative. Ventilation requirements should be specific as to type and preferred methods. Respirators shall be specified as to type and NIOSH or MSHA approval class, ie, "Supplied air," "Organic vapor canister," etc. Protective equipment must be specified as to type and materials of construction.
(i) Section IX. Special Precautions "Precautionary Statements" shall consist of the label statements selected for use on the container or placard. Additional information on any aspect of safety or health not covered in other sections should be inserted in Section IX. The lower block can contain references to published guides or in-house procedures for handling and storage. Department of Transportation markings and classifications and other freight, handling, or storage requirements and environmental controls can be noted.
(j) Signature and Filing Finally, the name and address of the responsible person who completed the MSDS and the date of completion are entered. This will facilitate correction of errors and identify a source of additional information.
The MSDS shall be filed in a location readily accessible to employees exposed to the hazardous substance.
# XVII. TABLES AND FIGURES
The vinyl chloride concentration was later determined by gas chromatography.
The authors stated that temperature and humidity had no measurable effect on the determination of vinyl chloride. Of the compounds tested (including sulfur dioxide, nitrogen dioxide, and ozone) only ethylene chloride was reported to cause positive interference during analysis.
Hill et al evaluated breakthrough volumes for vinyl chloride on 20 sorbents, 6 activated charcoals and 14 gas-chromatographic column packings, each contained in 1.5-cm sections of glass tubing with inner diameters of 4 mm.
Breakthrough volume was defined as the air volume sampled when 5% of the synthetic atmospheric concentration of vinyl chloride was detected in the tube effluent. Vinyl chloride was measured using a portable gas chromatograph with a flame ionization detector.
The results are shown in Table XVII-. High humidity or high concentrations of other organic contaminants could reduce the breakthrough volume, but this was not investigated. The authors suggested that maximum sample volumes of 5 liters at a flowrate of 50 ml/minute would not result in significant breakthrough.
These suggested values have been adopted for the NIOSH-accepted method .
Cuddeback et al tested commercially available charcoal tubes from two manufacturers. By examining the packings of the front sections of the tubes, they determined that MSA tubes averaged 99.7 mg of charcoal (±6%) for three samples in 16.5 mm (±10.9%) tubes, and SKC tubes averaged 86.2 mg (±3.1%) for six samples in 15.9 mm (±7.8%) tubes. Breakthrough volumes, defined as those at which the effluent concentration of vinyl chloride was 10% of the inlet concentration, were measured using the front sections of the MSA tubes. As shown in Table XVII-6, there was no consistent correlation between breakthrough volume and sampling rate.
Several activated charcoals were evaluated for vinyl chloride collection and breakthrough by Severs and Skory. They concluded that the Pittsburgh PCB had superior breakthrough characteristics for vinyl chloride sampling.
Breakthrough volumes for commercial tubes with different packings were also compared. Tubes packed with 600 mg of SKC (Lot 105) charcoal or 700 mg of Pittsburgh PCB carbon had a breakthrough of less than 2% for vinyl chloride at 25 ppm (64 mg/cu m) at a flowrate of 1 liter/minute for 30 minutes.
The same tubes packed with 150 mg of the SKC charcoal had breakthrough of 2% within 2 minutes for vinyl chloride at 1 ppm (2.56 mg/cu m) sampled at 1 liter/minute.
(2) Vinylidene Chloride. Severs and Skory used charcoal tubes for collecting vinylidene chloride in workplace samples.
No data on collection efficiency or storage stability were reported. Tubes packed with 600 mg of SKC charcoal had "good" retention capacity for vinylidene chloride.
At 31 ppm (123 mg/cu m) of vinylidene chloride, samples collected at a flowrate of 1 liter/minute had a breakthrough below 0.08% after 75 minutes.
Russell reported that the breakthrough volume of vinylidene chloride on Carbosieve B was greater than 10 liters of air. The available methods for determination of vinyls in the workplace include gas-chromatographic and infrared techniques, among others. For solvent desorption of vinyl chloride and other vinyls, carbon disulfide has generally been used. However, tetrahydrofuran and a bromine-hexane mixture have also been used.
Hill et al reported using a 2-ml vial containing 0.5 or 1.0 ml of carbon disulfide for desorption of vinyl chloride from the front section of a charcoal tube (100 mg of charcoal). The 13-µg samples were analyzed after desorption periods of 10-30 minutes at ambient temperature. The authors found that desorption efficiencies were generally in the 80-90% range.
Hill et al also determined that the addition of the charcoal to the carbon disulfide enabled more precise analyses at ambient temperatures than if the solvent were added to the charcoal. They concluded that solvent temperature and volume had little effect on precision, although only one set of tests was performed at other than ambient temperature (0 C).
Studies of the stability of vinyl chloride samples by the same authors demonstrated that vinyl chloride was stable on charcoal for periods of over 2 weeks, but that migration from the front to the back section occurred when the tubes were stored at ambient temperatures.
Cooling to -20 C retarded this effect. The authors suggested using two tubes in series as the front and backup sections to obviate the need for storage at low temperatures.

Severs and Skory studied a desorption technique by which 1 g of PCB 12/30 charcoal was added slowly to 10 ml of carbon disulfide; the mixture was cooled in a dry ice/acetone slurry and agitated for 30 minutes.
Samples were stored under refrigeration and held in a wet ice bath while they were analyzed.
An average recovery of 98% (93-101%) was reported. When the same procedure was applied to the desorption of vinylidene chloride, recovery ranged from 95 to 100%.
In neither case was sample loading specified.

Patty et al indicated that guinea pigs exposed to vinyl chloride at 5,000 or 10,000 ppm (12,800 or 25,600 mg/cu m) for as long as 500 minutes "showed no symptoms." Cook recommended an MAC of 1,000 ppm for prolonged exposure. Citing the lack of long-term animal experimentation data and of data on industrial exposure at known concentrations, Cook recommended medical observation of workers exposed to vinyl chloride at concentrations near the suggested limit.
In 1946, the American Conference of Governmental Industrial Hygienists (ACGIH) recommended an MAC of 500 ppm (1,280 mg/cu m) for vinyl chloride. When the ACGIH changed its terminology in 1949, this limit became the Threshold Limit Value (TLV) for vinyl chloride. According to the 1962 Documentation of Threshold Limit Values, the ACGIH TLV was also based on the study by Patty et al. The 1962 documentation noted that narcosis was the most important effect of exposure to vinyl chloride, and that the TLV of 500 ppm (approximately 1,300 mg/cu m) "appears to be sufficiently low to prevent significant narcosis."
In the Threshold Limit Values for 1963 , it was noted that a TLV in the form of a time-weighted average (TWA) concentration limit might not provide a sufficient safety factor for acutely acting substances. Consequently, a "C" or "ceiling" designation was appended to the value for vinyl chloride, indicating that the TLV, which remained at 500 ppm, was a limit that should not be exceeded.
Although the TLV had not changed, the 1966 Documentation of Threshold Limit Values cited several studies that presented conflicting data. Torkelson et al found liver damage in rabbits exposed repeatedly for 7 hours to vinyl chloride at 200 ppm (512 mg/cu m) and slight increases in liver weights of rats exposed at 100 ppm. Other animals were unaffected at 100 ppm. The authors suggested that worker exposure be controlled so that results for practically all analytical measurements were less than 100 ppm (256 mg/cu m) and that the TWA concentration for all exposures be limited to 50 ppm (128 mg/cu m). Lester et al found some increase in the relative weights of the liver and spleen in rats exposed repeatedly to vinyl chloride at concentrations of 20,000 ppm (51,200 mg/cu m) for 92 days and 50,000 ppm (128,000 mg/cu m) for 19 days. They did not consider these changes significant.

(2) This section applies to the manufacture, reaction, packaging, repackaging, storage, handling or use of vinyl chloride or polyvinyl chloride, but does not apply to the handling or use of fabricated products made of polyvinyl chloride.
(3) This section applies to the transportation of vinyl chloride or polyvinyl chloride except to the extent that the Department of Transportation may regulate the hazards covered by this section.
(b) Definitions. (1) "Action level" means a concentration of vinyl chloride of 0.5 ppm averaged over an 8-hour work day.
(2) "Assistant Secretary" means the Assistant Secretary of Labor for Occupa tional Safety and Health, U.S. Depart ment of Labor, or his designee.
(3) "Authorized person" means any person specifically authorized by the em ployer whose duties requlr» him to enter a regulated area or any person entering such an area as a designated representa tive of employees for the purpose oi ex ercising an opportunity to observe moni toring ar.d measuring procedures.
(4) "Director" means the Director. National Institute tor Occupational Safety and Health. U.S. Department oi Health. Education, and Welfare, or his designee.
(5) "Emergency" means any occur rence such as, but not limited to. equip ment failure, or operation of a relief de vice which is likely to. or does, result In massive release of vinyl chloride.
(6) "Fabricated product" means a product made wholly or partly from polyvinyl chloride, and which does not require further processing at tempera tures, and for times, sufficient to cause mass meltins of the polyvinyl chloride resulting in the release of vinyl chloride. 7) "H.-' .. ardous narration" mean-, any operation. pr'ii.ediire. or activity where a release of cither vinyl chloride liquid or cas mijht be expected as a consequence of the operation or because of an acci dent In the o^^r.Ttion, v/hich wculd result In an employee exposure in excess of the permissible exposure limit. Advantages and Disadvantages of the Method (a) The sampling device is small, portable, and involves no liquids. Interferences are minimal, and most of those that do occur can be eliminated by altering chromatographic conditions. The tubes are analyzed by a rapid instrumental method. The method can also be used for the simultaneous determination of two or more components suspected to be present in the same sample by changing gas-chromatographic conditions rrom isothermal to a temperature-programmed mode of operation.
(b) One disadvantage of the method is that the amount of sample that can be taken is limited by the amount of vinyl chloride that the tube will hold before it becomes overloaded. When the value obtained for the backup section of the sorbent tube exceeds 20% of that found on the front section, there is a possibility of sample loss. During storage, volatile compounds such as vinyl chloride will migrate throughout the tube until equilibrium is reached. At this time, 33% of these compounds will be found in the backup section. This may lead to some confusion as to whether sample loss has occurred. This migration effect can be considerably decreased by shipping and storing the tubes at -20 C or by using two separately capped tubes for the front and backup sections.

(j) Volumetric flasks (10 ml or convenient sizes for making solutions), preferably with plastic stoppers.
(k) Gas bags, Tedlar or equivalent.
# Reagents
# Procedure (a) Collection and Shipping of Samples
(1) Immediately before sampling, break the ends of the two tubes to provide an opening of at least 2 mm, one-half the internal diameter of the tube.
(2) Position the second sorbent tube next to the sampling pump in tandem with the first tube, to serve as a backup. If one tube is used, position the smaller section of tube nearest the sampling pump.
(3) Place the sorbent tubes in a vertical position with the larger section of sorbent pointing up during sampling to minimize channeling of the vinyl chloride through the sorbent.
(4) Do not allow air being sampled to pass through any hose or tubing before entering the sorbent tubes.
(5) Measure the flowrate and time, or the sampling volume, as accurately as possible. Take the sample at a flowrate of 50 ml/minute. The maximum volume to be sampled should not exceed 5 liters.
(6) Sample relatively large volumes (10-20 liters) of air through other sorbent tubes at the same time personal samples are taken. These bulk air samples will be used by the analyst to identify possible interferences before the personal samples are analyzed.
(e) Vials (2 ml) that can be sealed with caps containing Teflon-lined silicone rubber septa.
(f) Microliter syringes: 10 µl, and convenient sizes for making standards.
(g) Pipet, 1.0 ml.
# Reagents
All reagents used should be ACS Reagent Grade or better.

(b) Collection and Shipping of Samples (1) Immediately before sampling, the ends of the tube are broken to provide an opening (2 mm) at least one-half the internal diameter of the tube.
(2) The tube is connected to the sampling pump via rubber tubing. The smaller section of charcoal is the backup and is positioned nearest the sampling pump.
(3) The charcoal tube should be vertical during sampling to prevent channeling through the tube.
(4) Air being sampled should not be passed through any hose or tubing before entering the charcoal tubes.

(4) Injection. Inject a 5-µl aliquot into the gas chromatograph. A syringe equipped with a Chaney adapter may be used in lieu of the solvent flush technique.
(5) Measurement of Area: Measure the area under the sample peak using an electronic integrator or another suitable form of area measurement. Area measurements are compared with a standard curve prepared as discussed in Preparation of Standards.
(6) Measurement of Peak Height. The product of peak height and attenuator setting is linear over the analytical range. The peak height is multiplied by the attenuator setting necessary to keep the peak on scale. Preliminary results are read from a standard curve prepared as discussed below.
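Step (6) describes a linear relationship between the product of peak height and attenuator setting and the analyte amount. A minimal illustrative sketch follows; the curve coefficients are placeholders, not method values:

```python
# Illustrative sketch of step (6): peak height times attenuator setting
# is linear over the analytical range, so a linear standard curve can be
# inverted to read a preliminary result. Coefficients are made up.

def corrected_response(peak_height: float, attenuator_setting: float) -> float:
    """Product of peak height and the attenuator setting that kept it on scale."""
    return peak_height * attenuator_setting

def amount_from_curve(response: float, slope: float, intercept: float) -> float:
    """Invert a linear standard curve: response = slope * amount + intercept."""
    return (response - intercept) / slope

response = corrected_response(peak_height=42.0, attenuator_setting=8.0)
print(amount_from_curve(response, slope=1200.0, intercept=5.0))  # ~0.276, in the curve's units
```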
(d) Determination of Desorption Efficiency

(1) Importance of Determination. The desorption efficiency of a particular compound can vary between laboratories and batches of charcoal. Also, for a given batch of charcoal, the desorption efficiency can vary with the weight of contaminant adsorbed. The charcoal used for the study of this method gave a desorption efficiency of 80% for a loading of 7 µg of vinylidene chloride/100 mg bed of charcoal.
(2) Procedure for Determining Desorption Efficiency. The desorption efficiency should be determined at three levels with a minimum of three samples at each level. Vinylidene chloride can be dissolved in cyclohexane to give stock solutions. The concentrations should be such that no more than 8 µl of a stock solution will be injected onto the charcoal. Activated charcoal in an amount equivalent to that found in the larger section of the charcoal tube (100 mg) is placed in a small vial and capped. An aliquot of the stock solution is injected into the charcoal tube. Two of the levels should reflect the extremes of the analytical range, while the third level is between the high and low levels. Each vial is allowed to stand overnight to assure complete adsorption of vinylidene chloride onto the charcoal. Standards are prepared and analyzed with the samples.

(e) A mechanical or electronic integrator or a recorder and some method for determining peak area.
(f) Glass stoppered micro tubes. The 2.5-ml graduated microcentrifuge tubes are recommended.
(g) Hamilton syringes: 10 µl, and convenient sizes for making standards.

(h) Pipets: 0.5-ml delivery pipets or 1.0-ml type graduated in 0.1-ml increments.
(i) Volumetric flasks: 10 ml, or convenient sizes for making standard solutions.

(b) Collection and Shipping of Samples

(1) Immediately before sampling, the ends of the charcoal tube should be broken to provide an opening at least one-half the internal diameter of the tube (2 mm).
(2) The smaller section of charcoal is used as a backup and should be positioned nearest the sampling pump.

(3) The charcoal tube should be vertical during sampling.

(4) Air being sampled should not be passed through any hose or tubing before entering the charcoal tube.
(5) The flowrate, time, and/or volume must be measured as accurately as possible. The sample should be taken at a flowrate of 1 liter/minute or less to attain the total sample volume required.
(6) The temperature and pressure of the atmosphere being sampled should be measured and recorded.

(7) The charcoal tubes should be capped with the supplied plastic caps immediately after sampling. Under no circumstances should rubber caps be used.
(8) One tube should be handled in the same manner as the sample tube (break, seal, and transport), except that no air is sampled through this tube. This tube should be labeled as a blank.
(9) Capped tubes should be packed tightly before they are shipped to minimize tube breakage during shipping.
(10) Samples received at the laboratory are logged in and immediately stored in a refrigerator.
(c) Cleaning of Equipment. All glassware used for the laboratory analysis should be detergent washed and thoroughly rinsed with tap water and distilled water.
(d) Analysis of Samples

(1) Preparation of Samples. In preparation for analysis, each charcoal tube is scored with a file in front of the first section of charcoal and broken open. The glass wool is removed and discarded. The charcoal in the first (larger) section is transferred to a small stoppered test tube. The separating section of foam is removed and discarded; the second section is transferred to another test tube. These two sections are analyzed separately.
(2) Desorption of Samples. Prior to analysis, 0.5 ml of carbon disulfide is pipetted into each test tube and the glass stopper is inserted. (All work with carbon disulfide should be performed in a hood because of its high toxicity.) Tests indicate that desorption is complete in 30 minutes if the sample is agitated occasionally during this period. The use of graduated, glass-stoppered microcentrifuge tubes is recommended so that one can observe any change in volume during the desorption process. Carbon disulfide is a very volatile solvent, so volume changes can occur during the desorption process depending on the surrounding temperature. The initial volume occupied by the charcoal plus the 0.5 ml of carbon disulfide should be noted, and corresponding volume adjustments should be made whenever necessary just before gas-chromatographic analysis.
(3) Gas-Chromatographic Conditions. The typical operating conditions for the gas chromatograph are:
(4) Injection. Three microliters of solvent are drawn into the syringe to increase the accuracy and reproducibility of the injected sample volume. The needle is removed from the solvent, and the plunger is pulled back about 0.2 µl to separate the solvent flush from the sample with a pocket of air to be used as a marker. The needle is then immersed in the sample, and a 5-µl aliquot is withdrawn, taking into consideration the volume of the needle, since the sample in the needle will be completely injected. After the needle is removed from the sample and prior to injection, the plunger is pulled back a short distance to minimize evaporation of the sample from the tip of the needle. Duplicate injections of each sample and standard should be made. No more than a 3% difference in area is to be expected (see the sketch following step (5) below).
(5) Measurement of Area. The area of the sample peak is measured by an electronic integrator or some other suitable form of area measurement, and preliminary results are read from a standard curve prepared as discussed below.
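The 3% agreement criterion for duplicate injections in step (4) reduces to a single ratio test; the following sketch is illustrative only, with made-up integrator areas:

```python
# Illustrative sketch of the duplicate-injection check in step (4):
# flag pairs of injections whose peak areas differ by more than ~3%.

def percent_difference(area1: float, area2: float) -> float:
    """Relative difference between duplicate injections, in percent."""
    return abs(area1 - area2) / ((area1 + area2) / 2.0) * 100.0

a1, a2 = 10520.0, 10790.0  # example integrator areas (made up)
diff = percent_difference(a1, a2)
print(f"{diff:.1f}% difference")  # 2.5%
if diff > 3.0:
    print("Re-inject: duplicates disagree by more than 3%.")
```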
(a) Determination of Desorption Efficiency

(1) Importance of Determination. The desorption efficiency of a particular compound can vary from one laboratory to another and also from one batch of charcoal to another. Thus, it is necessary to determine, at least once, the percentage of the specific compound that is removed in the desorption process, provided the same batch of charcoal is used.
(2) Procedure for Determining Desorption Efficiency. Activated charcoal equivalent to the amount in the first section of the sampling tube (100 mg) is measured into a 5-cm, 4-mm inner diameter glass tube, flame-sealed at one end (similar to commercially available culture tubes). This charcoal must be from the same batch as that used in obtaining the samples and can be obtained from unused charcoal tubes. The open end is capped with Parafilm. A known amount of the vinyl bromide is injected directly into the activated charcoal with a microliter syringe, and the tube is capped with more Parafilm.
At least five tubes are prepared in this manner and allowed to stand for at least overnight to assure complete adsorption of the vinyl bromide onto the charcoal. These five tubes are referred to as the samples. A parallel blank tube should be treated in the same manner except that no sample is added to it. The sample and blank tubes are desorbed and analyzed in exactly the same manner as the sampling tube described in Analysis of Samples.
Two or three standards are prepared by injecting the same volume of vinyl bromide into 0.5 ml of carbon disulfide with the same syringe used in the preparation of the samples. These are analyzed with the samples.
The desorption efficiency equals the difference between the average peak area of the samples and the peak area of the blank divided by the average peak area of the standards, or:
Desorption Efficiency = (Area of sample − Area of blank) / Area of standard
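In code, the calculation above reads as follows; this Python fragment is an illustrative sketch with made-up integrator areas:

```python
# Illustrative sketch of the desorption-efficiency formula above.

def desorption_efficiency(sample_areas, blank_area, standard_areas):
    """(mean sample area - blank area) / mean standard area."""
    mean_sample = sum(sample_areas) / len(sample_areas)
    mean_standard = sum(standard_areas) / len(standard_areas)
    return (mean_sample - blank_area) / mean_standard

de = desorption_efficiency(
    sample_areas=[9800.0, 9950.0, 9700.0, 9870.0, 9910.0],  # five spiked tubes
    blank_area=120.0,
    standard_areas=[12100.0, 12000.0, 11900.0],
)
print(f"Desorption efficiency: {de:.2f}")  # 0.81
```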
# Calibration and Standards
It is convenient to express concentration of standards in terms of mg/0.5 ml of carbon disulfide because samples are desorbed in this amount of carbon disulfide. To minimize error due to the volatility of carbon disulfide, one can inject 20 times the volume of vinyl bromide into 10 ml of carbon disulfide. For example, to prepare a 0.3 mg/0.5 ml standard, one would inject 6.0 mg into exactly 10 ml of carbon disulfide in a glass-stoppered flask. The density of the specific compound is used to convert 6.0 mg into µl for easy measurement with a microliter syringe. A series of standards, varying in concentration over the range of interest, is prepared and analyzed under the same gas-chromatographic conditions and during the same time period as the unknown samples. Curves are established by plotting concentration in mg/0.5 ml vs. peak area. NOTE: Since no internal standard is used in the method, standard solutions must be analyzed at the same time that the sample analysis is done. This will minimize the effect of known day-to-day variations, and variations during the same day, of the flame-ionization detector response.
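The standards arithmetic in the example above (0.3 mg/0.5 ml scales to 6.0 mg in a 10-ml flask, converted to microliters by density) generalizes as follows. This sketch is illustrative; the density value is a placeholder, not a property of vinyl bromide:

```python
# Illustrative sketch: microliters of neat analyte to inject into a
# 10-ml flask of carbon disulfide for a target concentration expressed
# per 0.5 ml. The density below is a placeholder, not a real value.

def injection_volume_ul(target_mg_per_half_ml: float,
                        flask_volume_ml: float,
                        density_g_per_ml: float) -> float:
    """Scale the target mass to the whole flask, then convert mass to volume."""
    total_mg = target_mg_per_half_ml * (flask_volume_ml / 0.5)  # 0.3 * 20 = 6.0 mg
    return total_mg / density_g_per_ml  # mg / (g/ml) is numerically microliters

print(f"{injection_volume_ul(0.3, 10.0, 0.9):.1f} ul")  # 6.7 ul at an assumed 0.9 g/ml
```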
# Calculations
(a) The weight, in mg, corresponding to each peak area is read from the standard curve. No volume corrections are needed, because the standard curve is based on mg/0.5 ml of carbon disulfide and the volume of sample injected is identical to the volume of the standards injected.
(b) Corrections for the blank must be made for each sample.
Corrected mg = mg(s) − mg(b)

where:

mg(s) = mg found in front section of sample tube
mg(b) = mg found in front section of blank tube

A similar procedure is followed for the backup sections.
(c) The corrected amounts present in the front and backup sections of the same sample tube are added to determine the total measured amount in the sample.
(d) This total weight is divided by the determined desorption efficiency to obtain the total mg/sample.
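Steps (a)-(d) combine into one short computation; the following sketch is illustrative, with made-up masses read from the standard curve:

```python
# Illustrative sketch of calculation steps (a)-(d): blank-correct the
# front and backup sections, sum them, then divide by the desorption
# efficiency. All masses are in mg read from the standard curve.

def total_mg_per_sample(front_mg, back_mg, blank_front_mg, blank_back_mg,
                        desorption_efficiency):
    corrected_front = front_mg - blank_front_mg   # step (b)
    corrected_back = back_mg - blank_back_mg      # step (b), backup section
    total = corrected_front + corrected_back      # step (c)
    return total / desorption_efficiency          # step (d)

print(f"{total_mg_per_sample(0.300, 0.020, 0.005, 0.002, 0.81):.3f} mg")  # 0.386 mg
```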
# Advantages and Disadvantages of the Method

(a) The sampling device is portable and involves no liquids. Interferences are minimal, and most of those which do occur can be eliminated by altering chromatographic conditions. The samples are analyzed by means of a quick, instrumental method. No solvent desorption is necessary.

(b) One disadvantage of the method is that the amount of sample which can be taken is limited by the volume capacity of the bag. Full sample bags may interfere with the free movement of the worker.
(c) Furthermore, the precision of the method is limited by the reproducibility of the sampling rate of the pump.
# Apparatus
(a) An approved and calibrated peristaltic sampling pump, diaphragm pump, or vacuum pump, with a filtered outlet to remove oil, for personal samples. For an area sample, any vacuum pump whose flow can be determined accurately at 1 liter/minute or less.

# Procedure

(a) Calibration of Personal Sampling Pumps. Each personal sampling pump must be calibrated with a representative bag in the line. This will minimize errors associated with uncertainties in the sample volume collected.
(b) Collection and Shipment of Samples

(1) The flowrate, time, and/or volume must be measured as accurately as possible. The sampling bags should be flushed before use. The sample should be taken at a flowrate of 1 liter/minute or less to attain the total sample volume required.
(2) The temperature and pressure of the atmosphere being sampled should be measured and recorded.
(3) Air samples are shipped to the laboratory for analysis in the Teflon bags. Appropriate precautions should be taken to prevent damage to the bags while in transit.

(2) Injection. Five milliliters of air from the Teflon bag is withdrawn with a syringe. Two milliliters are injected directly into the gas chromatograph.
(3) Measurement of Area. The area of the sample peak is measured by an electronic integrator or some other suitable form of area measurement, and preliminary results are read from a standard curve prepared as discussed below.
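Because the temperature and pressure of the sampled atmosphere are recorded during collection, sampled volumes can be corrected to standard conditions before concentrations are computed. The conversion is ordinary gas-law arithmetic; the following sketch (with made-up numbers) assumes 25 °C and 760 mmHg as the reference state:

```python
# Illustrative sketch: correcting a sampled air volume to 25 deg C and
# 760 mmHg with the ideal gas law so results are comparable across
# sampling conditions. Example numbers are made up.

def volume_at_standard_conditions(volume_l: float,
                                  pressure_mmhg: float,
                                  temperature_c: float) -> float:
    """V_std = V * (P / 760 mmHg) * (298.15 K / T)."""
    return volume_l * (pressure_mmhg / 760.0) * (298.15 / (temperature_c + 273.15))

v_std = volume_at_standard_conditions(volume_l=10.0,
                                      pressure_mmhg=745.0,
                                      temperature_c=32.0)
print(f"{v_std:.2f} L at 25 deg C, 760 mmHg")  # 9.58 L
```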
(b) Retention time data alone cannot be considered proof of chemical identity. For this reason, it is important that a sample of the solvent(s) be submitted at the same time so that identity(ies) can be established by other means.
(c) If the possibility of interference exists, separation conditions (column packing, temperatures, etc.) must be changed to circumvent the problem.
(d) If samples are not analyzed within 5 days, significant sample loss may occur. Although no specific data were provided, JL Sadenwasser (written communication, March 1978) stated that vinylidene fluoride was retained by the charcoal for at least 5 days with little loss.
# Precision and Accuracy
No data are currently available.
# Advantages and Disadvantages of the Method

(a) The sampling device is small, portable, and involves no liquids. Interferences are minimal, and most of those that do occur can be eliminated by altering chromatographic conditions. The charcoal tubes are analyzed by means of a quick, instrumental method.
(b) One disadvantage of the method is that the amount of sample which can be taken is limited by the number of mg that the tubes will hold before overloading. When the sample value obtained for the backup section of the charcoal trap exceeds approximately 20% of that found on the front section, the possibility of sample loss exists (see the sketch after (c) below). Sampling at 1 liter/minute caused a significant breakthrough after collection of 3 liters. During sample storage, volatile compounds such as vinylidene fluoride will migrate throughout the tube until equilibrium is reached.
(c) Furthermore, the precision of the method is limited by the reproducibility of the pressure drop across the two sections of the sampling tubes. This drop will affect the flowrate and cause the volume to be imprecise, because the pump is usually calibrated for one particular tube only. This disadvantage could be eliminated by calibrating the pump with representative charcoal tubes.
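The 20% breakthrough criterion in (b) above reduces to a single ratio test; a minimal illustrative sketch (names are assumptions, not part of the method):

```python
# Illustrative sketch of the overload criterion in (b): flag a sample
# when the backup section holds more than ~20% of the front section.

def possible_breakthrough(front_mg: float, back_mg: float,
                          threshold: float = 0.20) -> bool:
    """True when the backup/front ratio exceeds the threshold."""
    return front_mg > 0 and (back_mg / front_mg) > threshold

print(possible_breakthrough(front_mg=0.30, back_mg=0.05))  # False (~17%)
print(possible_breakthrough(front_mg=0.30, back_mg=0.09))  # True  (30%)
```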
# Apparatus

(a) An approved and calibrated personal-sampling pump for personal samples. For an area sample, any vacuum pump whose flow can be determined accurately at 0.5 liter/minute or less.
(b) Charcoal tubes: glass tube with both ends flame-sealed, 7 cm long with a 6-mm outer diameter and a 4-mm inner diameter, containing two sections of 20/40-mesh activated charcoal separated by a 2-mm portion of urethane foam. The activated charcoal is prepared from coconut shells and is fired at 600 °C prior to packing. The adsorbing section contains 100 mg of charcoal, the backup section 50 mg. A 3-mm portion of urethane foam is placed between the outlet end of the tube and the backup section. A plug of silylated glass wool is placed in front of the adsorbing section. The pressure drop across the tube must be less than 1 inch of mercury at a flowrate of 1 liter/minute.
(c) Gas chromatograph equipped with a flame-ionization detector.
(d) Column, stainless steel, 6-feet x 1/8-inch outer diameter, packed with Chromosorb 102, 80/100 mesh. Other columns capable of performing the required separations may be used.
(e) A mechanical or electronic integrator or a recorder and some method for determining peak area.

(f) Glass-stoppered micro tubes. The 2.5-ml graduated microcentrifuge tubes are recommended.
(g) Hamilton syringes: 10 µl, and convenient sizes for making standards.
(h) Pipets: 0.5-ml delivery pipets or 1.0-ml type graduated in 0.1-ml increments.
(i) Volumetric flasks: 10 ml or convenient sizes for making standard solutions.
# Reagents
(a) Carbon disulfide, "spectroquality" or better.

(b) Sample of the specific compound under study, preferably "chromatoquality" grade.

(1) Preparation of Samples. In preparation for analysis, each charcoal tube is scored with a file in front of the first section of charcoal and broken open. The glass wool is removed and discarded. The charcoal in the first (larger) section is transferred to a small stoppered test tube. The separating section of foam is removed and discarded; the second section is transferred to another test tube. These two sections are analyzed separately.
(2) Desorption of Samples. Prior to analysis, 2 ml of carbon disulfide is pipetted into each test tube and the glass stopper is inserted. (All work with carbon disulfide should be performed in a hood because of its high toxicity.) Tests indicate that desorption is complete in 15 minutes if the sample is agitated occasionally during this period. The use of graduated, glass-stoppered microcentrifuge tubes is recommended so that one can observe any change in volume during the desorption process. Carbon disulfide is a very volatile solvent, so volume changes can occur during the desorption process depending on the surrounding temperature. The initial volume occupied by the charcoal plus the 2 ml of carbon disulfide should be noted, and corresponding volume adjustments should be made whenever necessary just before gas-chromatographic analysis.
(3) Gas-Chromatographic Conditions. The typical operating conditions for the gas chromatograph are:
(4) Injection. Three microliters of solvent are drawn into the syringe to increase the accuracy and reproducibility of the injected sample volume. The needle is removed from the solvent, and the plunger is pulled back about 0.2 µl to separate the solvent flush from the sample with a pocket of air to be used as a marker. The needle is then immersed in the sample, and a 1-µl aliquot is withdrawn, taking into consideration the volume of the needle, since the sample in the needle will be completely injected. After the needle is removed from the sample and prior to injection, the plunger is pulled back a short distance to minimize evaporation of the sample from the tip of the needle.
"id": "519c83d02afa47a6950ee536f8a5305d692391be",
"source": "cdc",
"title": "None",
"url": "None"
} |
These recommendations update information regarding the polysaccharide vaccine licensed in the United States for use against disease caused by Neisseria meningitidis serogroups A, C, Y, and W-135, and they update recommendations for chemoprophylaxis against meningococcal disease (superseding MMWR 1985;34:255-9). This report provides additional information regarding meningococcal vaccines and alternatives to rifampin for chemoprophylaxis in selected populations.

| Drug | Age group | Dosage* | Schedule |
|---|---|---|---|
| Ciprofloxacin | Adults | 500 mg | Single dose |
| Ceftriaxone | Children <15 yrs | 125 mg | Single IM+ dose |
| Ceftriaxone | Adults | 250 mg | Single IM dose |

* Oral administration unless indicated otherwise.
+ Intramuscular.

# INTRODUCTION
Neisseria meningitidis causes both endemic and epidemic disease, principally meningitis and meningococcemia (1). As a result of the control of Haemophilus influenzae type b infections, N. meningitidis has become a leading cause of bacterial meningitis in children and young adults in the United States, with an estimated 2,600 cases each year (2). The case-fatality rate is 13% for meningitic disease (defined as isolation of N. meningitidis from cerebrospinal fluid) and 11.5% for persons who have N. meningitidis isolated from blood (2), despite therapy with antimicrobial agents (e.g., penicillin) to which U.S. strains remain clinically sensitive.
The incidence of meningococcal disease peaks in late winter to early spring. Attack rates are highest among children 3-12 months of age and then steadily decline among older age groups. In surveillance conducted during 1989-1991, serogroup B organisms accounted for 46% of all cases and serogroup C for 45%; serogroups W-135 and Y and strains that could not be serotyped accounted for the remainder. Recent data indicate that the proportion of cases caused by serogroup Y strains is increasing (4). Serogroup A, which rarely causes disease in the United States, is the most common cause of epidemics elsewhere in the world. However, localized community outbreaks of serogroup C disease and a statewide serogroup B epidemic have recently been reported (5,6).
Persons who have certain medical conditions are at increased risk for developing meningococcal infection. Meningococcal disease is particularly common among persons who have component deficiencies in the terminal common complement pathway (C3, C5-C9); many of these persons experience multiple episodes of infection (6). Asplenic persons also may be at increased risk for acquiring meningococcal disease. Persons who have other diseases associated with immunosuppression (e.g., human immunodeficiency virus {HIV} infection) may be at higher risk for meningococcal disease, as they are for disease caused by some other encapsulated bacteria (e.g., Streptococcus pneumoniae). Evidence suggests that HIV-infected persons are not at substantially increased risk for epidemic serogroup A meningococcal disease (9); however, they may be at increased risk for sporadic meningococcal disease or disease caused by other meningococcal serogroups (10). Previously, military recruits had high rates of meningococcal disease, particularly serogroup C disease. However, since the initiation of routine vaccination of recruits with the bivalent A/C meningococcal vaccine in 1971, the high rates of meningococcal disease caused by those serogroups have decreased substantially. U.S. military recruits now routinely receive the quadrivalent A,C,Y,W-135 meningococcal vaccine.
# MENINGOCOCCAL POLYSACCHARIDE VACCINE
The quadrivalent A,C,Y,W-135 vaccine (Menomune -A,C,Y,W-135, manufactured by Connaught Laboratories, Inc.) is the formulation currently available in the United States. The vaccine is administered by subcutaneous injection. Each vaccine dose consists of 50 ug each of the purified bacterial capsular polysaccharides. Menomune is available in single-dose, 10-dose, and 50-dose vials.
# Vaccine Efficacy
The immunogenicity and clinical efficacy of the serogroups A and C meningococcal vaccines have been well established. The serogroup A polysaccharide induces an antibody response in young children, although a response comparable with that among adults is not achieved until 4 or 5 years of age; the serogroup C component is poorly immunogenic in recipients who are less than 18-24 months of age. The serogroups A and C vaccines have demonstrated estimated clinical efficacies of 85%-100% in older children and adults and are useful in controlling epidemics (9,(14)(15)(16)(17). Serogroups Y and W-135 polysaccharides are safe and immunogenic in adults and in children greater than 2 years of age (18)(19)(20)(21); although clinical protection has not been documented, vaccination with these polysaccharides induces bactericidal antibody. The antibody responses to each of the four polysaccharides in the quadrivalent vaccine are serogroup-specific and independent.
# Duration of Efficacy
Measurable levels of antibodies against the group A and C polysaccharides decrease markedly during the first 3 years following a single dose of vaccine (13,(22)(23)(24)(25). This decrease in antibody occurs more rapidly in infants and young children than in adults. Similarly, although vaccine-induced clinical protection probably persists in schoolchildren and adults for at least 3 years, the efficacy of the group A vaccine in young children declines with the passage of time: in a 3-year study, efficacy declined from greater than 90% to less than 10% among children who were less than 4 years of age at the time of vaccination, whereas among children who were greater than or equal to 4 years of age when vaccinated, efficacy was 67% 3 years later (26).
# RECOMMENDATIONS FOR USE OF MENINGOCOCCAL VACCINE
Routine vaccination of civilians with the quadrivalent meningococcal polysaccharide vaccine is not recommended because of its relative ineffectiveness in children less than 2 years of age (the age group in which risk for disease is highest) and its relatively short duration of protection. However, the polysaccharide meningococcal vaccine is useful for controlling serogroup C meningococcal outbreaks.
# Indications for Use
In general, use of polysaccharide meningococcal vaccine should be restricted to persons greater than or equal to 2 years of age; however, children as young as 3 months of age may be vaccinated to elicit short-term protection against serogroup A meningococcal disease (two doses administered 3 months apart should be considered for children 3-18 months of age) (28).
Routine vaccination with the quadrivalent vaccine is recommended for certain high-risk groups, including persons who have terminal complement component deficiencies and persons who have anatomic or functional asplenia. Although persons whose spleens have been removed because of trauma or nonlymphoid tumors and persons who have inherited complement deficiencies have acceptable antibody responses to the vaccine, its clinical efficacy after vaccination has not been documented for these persons, and they may not be protected by vaccination (7,29). Research, industrial, and clinical laboratory personnel who routinely are exposed to N. meningitidis in solutions that may be aerosolized should be considered for vaccination.
Vaccination with the quadrivalent vaccine may benefit travelers to and U.S. citizens residing in countries in which N. meningitidis is hyperendemic or epidemic, particularly if contact with the local population will be prolonged. Single-dose vials of the quadrivalent vaccine are now available and may be more convenient than multidose vials for use in international health clinics for travelers (30). Epidemics of meningococcal disease recur in the part of sub-Saharan Africa known as the "meningitis belt," which extends from Senegal in the west to Ethiopia in the east (Figure_2) (31). Epidemics in the meningitis belt usually occur during the dry season; thus, vaccination is recommended for travelers visiting this region during that time. Epidemics occasionally are identified in other parts of the world and recently have occurred in Burundi and Mongolia. Information concerning geographic areas for which vaccination is recommended can be obtained from international health clinics for travelers, state health departments, and CDC.
# Primary Vaccination
For both adults and children, vaccine is administered subcutaneously as a single 0.5-mL dose. The vaccine can be administered at the same time as other vaccines but at a different anatomic site. Protective levels of antibody are usually achieved within 7-10 days after vaccination.
# Revaccination
Revaccination may be indicated for persons at high risk for infection (e.g., persons remaining in areas in which disease is epidemic), particularly children who were first vaccinated when they were less than 4 years of age; such children should be considered for revaccination after 2-3 years if they remain at high risk. Although the need for revaccination of older children and adults has not been determined, if indications still exist for immunization, revaccination may be considered within 3-5 years.
# PRECAUTIONS AND CONTRAINDICATIONS

# Reactions to Vaccination
Adverse reactions to meningococcal vaccine are mild and consist principally of pain and redness at the injection site for 1-2 days. Estimates of the incidence of mild-to-moderate adverse reactions vary but have been reported in up to greater than 40% of vaccine recipients (32,33). Pain at the site of injection is the most commonly reported adverse reaction, and a transient fever might develop in a small proportion of vaccinees.

# Vaccination During Pregnancy

Studies of vaccination during pregnancy have not documented adverse effects among either pregnant women or newborns (34,35). In addition, these studies have documented antibody responses following vaccination during pregnancy. Antibody levels in the infants decreased during the first few months after birth; subsequent response to meningococcal vaccination was unaffected. These findings have been confirmed in more recent studies of other polysaccharide vaccines administered during pregnancy (36). Based on data from studies involving use of meningococcal vaccines during pregnancy, altering meningococcal vaccination recommendations during pregnancy is unnecessary.
# PROSPECTS FOR NEW MENINGOCOCCAL VACCINES
To enhance the immunogenicity and protective efficacy of the A and C polysaccharides in infants and young children, methods similar to those used for H. influenzae type b conjugate vaccines are being applied to serogroups A and C vaccines (37,38). Capsular polysaccharides are being covalently linked to carrier proteins to convert the T-cell-independent polysaccharide to a T-cell-dependent antigen, and several candidate conjugate vaccines are being evaluated.
Because the serogroup B capsular polysaccharide is poorly immunogenic in humans, vaccine development for serogroup B meningococci has focused on the outer membrane proteins. The safety, immunogenicity, and protective efficacy of several outer membrane protein vaccines against several serogroup B meningococci have been evaluated recently. Evaluation of those vaccines documented efficacy in older children and adults (39)(40)(41). However, a subsequent study of one of these vaccines did not document efficacy in children less than 4 years of age, the group often at highest risk for serogroup B disease. No serogroup B meningococcal vaccines are licensed for use in the United States.
# ANTIMICROBIAL CHEMOPROPHYLAXIS
Antimicrobial chemoprophylaxis of close contacts of sporadic cases of meningococcal disease is the primary means for prevention of meningococcal disease in the United States. Close contacts include a) household members, b) day care center contacts, and c) anyone directly exposed to the patient's oral secretions (e.g., through kissing, mouth-to-mouth resuscitation, or endotracheal intubation and tube management). The attack rate for household contacts exposed to patients who have sporadic meningococcal disease has been estimated to be four cases per 1,000 persons exposed, which is 500-800 times greater than for the total population. Because the rate of secondary disease for close contacts is highest during the first few days after onset of disease in the primary patient, antimicrobial chemoprophylaxis should be administered as soon as possible (ideally within 24 hours after the index patient is identified). Conversely, chemoprophylaxis administered greater than 14 days after onset of illness in the index case-patient is probably of limited or no value. Oropharyngeal or nasopharyngeal cultures are not helpful in determining the need for chemoprophylaxis and may unnecessarily delay institution of this preventive measure.
Rifampin is administered twice daily for 2 days (600 mg every 12 hours for adults; 10 mg/kg of body weight every 12 hours for children greater than or equal to 1 month of age). Rifampin is effective in eradicating nasopharyngeal carriage of N. meningitidis (44). Rifampin is not recommended for pregnant women, because the drug is teratogenic in laboratory animals. Rifampin changes the color of urine to reddish-orange and is excreted in tears and other body fluids; it may cause permanent discoloration of soft contact lenses. Because the reliability of oral contraceptives may be affected by rifampin therapy, consideration should be given to using alternate contraceptive measures while rifampin is being administered.
In addition to rifampin, other antimicrobial agents are effective in reducing nasopharyngeal carriage of N. meningitidis. Ciprofloxacin in various dosage regimens is greater than 90% effective in eradicating carriage (45,46). A single 500-mg oral dose of ciprofloxacin is a reasonable alternative to the multidose rifampin regimen. Ciprofloxacin levels in nasal secretions far exceed the minimal inhibitory concentration for N. meningitidis. Ciprofloxacin is not generally recommended for persons less than 18 years of age or for pregnant and lactating women because the drug causes cartilage damage in immature laboratory animals. However, a consensus report has concluded that ciprofloxacin can be used for chemoprophylaxis of children when no acceptable alternative therapy is available (48).
When ceftriaxone was administered in a single parenteral dose (an intramuscular dose of 125 mg for children and 250 mg for adults), it was 97%-100% effective in eradicating nasopharyngeal carriage of N. meningitidis. Intramuscular ceftriaxone (diluted in 1% lidocaine to reduce local pain after injection) is also a reasonable alternative for chemoprophylaxis.

Systemic antimicrobial therapy of meningococcal disease with agents other than ceftriaxone or other third-generation cephalosporins may not reliably eradicate nasopharyngeal carriage of N. meningitidis. If another agent is used for treatment, the index patient should receive chemoprophylactic antibiotics for eradication of nasopharyngeal carriage before being discharged from the hospital (51).

# CONCLUSIONS

N. meningitidis is the leading cause of bacterial meningitis in older children and young adults in the United States. The quadrivalent A, C, Y, and W-135 meningococcal vaccine is recommended for control of serogroup C meningococcal disease outbreaks and for use among certain high-risk groups, including a) persons who have terminal complement deficiencies, b) persons who have anatomic or functional asplenia, and c) laboratory personnel who routinely are exposed to N. meningitidis in solutions that may be aerosolized. Vaccination also may benefit travelers to countries in which disease is hyperendemic or epidemic. New conjugate meningococcal vaccines are being developed by using methods similar to those used for H. influenzae type b conjugate vaccines, and the efficacies of several experimental serogroup B vaccines have been evaluated in older children and young adults.

Antimicrobial chemoprophylaxis of close contacts of patients who have sporadic cases of meningococcal disease is the primary means for prevention of meningococcal disease in the United States. Rifampin has been the standard agent used for chemoprophylaxis; however, data from recent studies document that single doses of ciprofloxacin or ceftriaxone are reasonable alternatives to the multidose rifampin regimen.
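The regimens discussed above can be collected into a simple lookup structure. The following sketch merely restates the dosages given in the text; the field names are assumptions, and it is not a clinical decision tool:

```python
# Illustrative restatement of the chemoprophylaxis regimens described
# above as a lookup table. Not a clinical decision tool.

CHEMOPROPHYLAXIS = {
    ("rifampin", "adults"):              ("600 mg", "every 12 h for 2 days", "oral"),
    ("rifampin", "children >=1 month"):  ("10 mg/kg", "every 12 h for 2 days", "oral"),
    ("ciprofloxacin", "adults"):         ("500 mg", "single dose", "oral"),
    ("ceftriaxone", "children <15 yrs"): ("125 mg", "single dose", "intramuscular"),
    ("ceftriaxone", "adults"):           ("250 mg", "single dose", "intramuscular"),
}

for (drug, group), (dose, schedule, route) in CHEMOPROPHYLAXIS.items():
    print(f"{drug} ({group}): {dose}, {schedule}, {route}")
```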
[Figure_2, a map of the sub-Saharan African "meningitis belt," is not reproduced in this conversion; Table_1 (chemoprophylaxis regimens) appears in the summary at the top of this report.]
"id": "d0a1ecb83d89adbe9e7854ce73ce5817d450f36f",
"source": "cdc",
"title": "None",
"url": "None"
} |
Iron deficiency is the most common known form of nutritional deficiency. Its prevalence is highest among young children and women of childbearing age (particularly pregnant women). In children, iron deficiency causes developmental delays and behavioral disturbances, and in pregnant women, it increases the risk for a preterm delivery and delivering a low-birthweight baby. In the past three decades, increased iron intake among infants has resulted in a decline in childhood iron-deficiency anemia in the United States. As a consequence, the use of screening tests for anemia has become a less efficient means of detecting iron deficiency in some populations. For women of childbearing age, iron deficiency has remained prevalent. To address the changing epidemiology of iron deficiency in the United States, CDC staff in consultation with experts developed new recommendations for use by primary health-care providers to prevent, detect, and treat iron deficiency. These recommendations update the 1989 "CDC Criteria for Anemia in Children and Childbearing-Aged Women" (MMWR 1989;38(22):400-4) and are the first comprehensive CDC recommendations to prevent and control iron deficiency. CDC emphasizes sound iron nutrition for infants and young children, screening for anemia among women of childbearing age, and the importance of low-dose iron supplementation for pregnant women.

# INTRODUCTION
In the human body, iron is present in all cells and has several vital functions-as a carrier of oxygen to the tissues from the lungs in the form of hemoglobin (Hb), as a facilitator of oxygen use and storage in the muscles as myoglobin, as a transport medium for electrons within the cells in the form of cytochromes, and as an integral part of enzyme reactions in various tissues. Too little iron can interfere with these vital functions and lead to morbidity and mortality.
In the United States, the prevalence of iron-deficiency anemia among children declined during the 1970s in association with increased iron intake during infancy (1)(2)(3). Because of this decline, the value of anemia as a predictor of iron deficiency has also declined, thus decreasing the effectiveness of routine anemia screening among children. In contrast, the rate of anemia among low-income women during pregnancy is high, and no improvement has been noted since the 1970s (4 ). These findings, plus increased knowledge about screening for iron status, raised questions about the necessity and effectiveness of existing U.S. programs to prevent and control iron deficiency. CDC requested the Institute of Medicine to convene an expert committee to develop recommendations for preventing, detecting, and treating iron-deficiency anemia among U.S. children and U.S. women of childbearing age. The committee met throughout 1992, and in 1993 the Institute of Medicine published the committee's recommendations (5 ). These guidelines are not practical for all primary health-care and public health settings, however, because they require serum ferritin testing during pregnancy (6 ). This testing may be appropriate in practices where women consistently visit their physician throughout pregnancy, but it is less feasible when analysis of serum ferritin concentration is unavailable or when prenatal care visits are sporadic. The CDC recommendations in this report-including those for pregnant women-were developed for practical use in primary health-care and public health settings.
Beside the Institute of Medicine (5,7 ), the American Academy of Pediatrics (8,9 ), the U.S. Preventive Services Task Force (10 ), the American College of Obstetricians and Gynecologists (9,11 ), the Federation of American Societies for Experimental Biology (12 ), and the U.S. Public Health Service (13 ) have all published guidelines within the past 9 years for health-care providers that address screening for and treatment of iron deficiency in the United States. Preventing and controlling iron deficiency are also addressed in Nutrition and Your Health: Dietary Guidelines for Americans (14 ).
The CDC recommendations differ from the guidelines published by the U.S. Preventive Services Task Force (10 ) in two major areas. First, the Task Force recommended screening for anemia among infants at high risk for anemia and pregnant women only. The CDC recommends periodic screening for anemia among high-risk populations of infants and preschool children, among pregnant women, and among nonpregnant women of childbearing age. Second, the Task Force stated there is insufficient evidence to recommend for or against iron supplementation during pregnancy, but the CDC recommends universal iron supplementation to meet the iron requirements of pregnancy. The CDC recommendations for iron supplementation during pregnancy are similar to the guidelines issued by the American Academy of Pediatrics and the American College of Obstetricians and Gynecologists (9 ). This report is intended to provide guidance to primary health-care providers and emphasizes the etiology and epidemiology of iron deficiency, the laboratory tests used to assess iron status, and the screening for and treatment of iron deficiency at all ages. The recommendations in this report are based on the 1993 Institute of Medicine guidelines; the conclusions of an expert panel convened by CDC in April 1994; and input from public health nutrition program personnel, primary health-care providers, and experts in hematology, biochemistry, and nutrition.
National health objective 2.10 for the year 2000 is to "reduce iron deficiency to <3% among children aged 1-4 and among women of childbearing age" (15 ). The recommendations in this report for preventing and controlling iron deficiency are meant to move the nation toward this objective.
# BACKGROUND

# Iron Metabolism
Total body iron averages approximately 3.8 g in men and 2.3 g in women, which is equivalent to 50 mg/kg body weight for a 75-kg man (16,17 ) and 42 mg/kg body weight for a 55-kg woman (18 ), respectively. When the body has sufficient iron to meet its needs, most iron (>70%) may be classified as functional iron; the remainder is storage or transport iron. More than 80% of functional iron in the body is found in the red blood cell mass as Hb, and the rest is found in myoglobin and intracellular respiratory enzymes (e.g., cytochromes) (Table 1). Iron is stored primarily as ferritin, but some is stored as hemosiderin. Iron is transported in blood by the protein transferrin. The total amount of iron in the body is determined by intake, loss, and storage of this mineral (16 ).
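As a check on the figures above: 50 mg/kg x 75 kg = 3,750 mg, or approximately 3.8 g, and 42 mg/kg x 55 kg = 2,310 mg, or approximately 2.3 g.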
# Iron Intake
Regulation of iron balance occurs mainly in the gastrointestinal tract through absorption. When the absorptive mechanism is operating normally, a person maintains functional iron and tends to establish iron stores. The capacity of the body to absorb iron from the diet depends on the amount of iron in the body, the rate of red blood cell production, the amount and kind of iron in the diet, and the presence of absorption enhancers and inhibitors in the diet.
The percentage of iron absorbed (i.e., iron bioavailability) can vary from less than 1% to greater than 50% (19 ). The main factor controlling iron absorption is the amount of iron stored in the body. The gastrointestinal tract increases iron absorption when the body's iron stores are low and decreases absorption when stores are sufficient. An increased rate of red blood cell production can also stimulate iron uptake severalfold (16,20 ).
Among adults, absorption of dietary iron averages approximately 6% for men and 13% for nonpregnant women in their childbearing years (19 ). The higher absorption efficiency of these women reflects primarily their lower iron stores as a result of menstruation and pregnancy. Among iron-deficient persons, iron absorption is also high (21 ). Absorption of iron increases during pregnancy, but the amount of the increase is not well defined (6 ); as iron stores increase postpartum, iron absorption decreases.
Iron bioavailability also depends on dietary composition. Heme iron, which is found only in meat, poultry, and fish, is two to three times more absorbable than nonheme iron, which is found in plant-based foods and iron-fortified foods (19,20 ). The bioavailability of nonheme iron is strongly affected by the kind of other foods ingested at the same meal. Enhancers of iron absorption are heme iron (in meat, poultry, and fish) and vitamin C; inhibitors of iron absorption include polyphenols (in certain vegetables), tannins (in tea), phytates (in bran), and calcium (in dairy products) (16,22 ). Vegetarian diets, by definition, are low in heme iron. However, iron bioavailability in a vegetarian diet can be increased by careful planning of meals to include other sources of iron and enhancers of iron absorption (14 ). In the diet of an infant, before the introduction of solid foods, the amount of iron absorbed depends on the amount and bioavailability of iron in breast milk or formula (8 ) (Table 2).
# Iron Turnover and Loss
Red blood cell formation and destruction is responsible for most iron turnover in the body. For example, in adult men, approximately 95% of the iron required for the production of red blood cells is recycled from the breakdown of red blood cells and only 5% comes from dietary sources. In contrast, an infant is estimated to derive approximately 70% of red blood cell iron from the breakdown of red blood cells and 30% from the diet (23 ).
In adults, approximately 1 mg of iron is lost daily through feces and desquamated mucosal and skin cells (24 ). Women of childbearing age require additional iron to compensate for menstrual blood loss (an average of 0.3-0.5 mg daily during the childbearing years) (18 ) and for tissue growth during pregnancy and blood loss at delivery and postpartum (an average of 3 mg daily over 280 days' gestation) (25 ). In all persons, a minute amount of iron is lost daily from physiological gastrointestinal blood loss. Pathological gastrointestinal iron loss through gastrointestinal bleeding occurs in infants and children sensitive to cow's milk and in adults who have peptic ulcer disease, inflammatory bowel syndrome, or bowel cancer. Hookworm infections, although not common in the United States (26 ), are also associated with gastrointestinal blood loss and iron depletion (27 ).
# Iron Stores
Iron present in the body beyond what is immediately needed for functional purposes is stored as the soluble protein complex ferritin or the insoluble protein complex hemosiderin (16,17 ). Ferritin and hemosiderin are present primarily in the liver, bone marrow, spleen, and skeletal muscles. Small amounts of ferritin also circulate in the plasma. In healthy persons, most iron is stored as ferritin (an estimated 70% in men and 80% in women) and smaller amounts are stored as hemosiderin (Table 1). When long-term negative iron balance occurs, iron stores are depleted before iron deficiency begins.
Men store approximately 1.0-1.4 g of body iron (17,28 ), women approximately 0.2-0.4 g (18,28 ), and children even less (23 ). Full-term infants of normal or high birthweight are born with high body iron (an average of 75 mg/kg body weight), to which iron stores contribute approximately 25% (23 ). Preterm or low-birthweight infants are born with the same ratio of total body iron to body weight, but because their body weight is low, the amount of stored iron is low too.
# Manifestations of Iron Deficiency
Iron deficiency is one of the most common nutritional deficiencies worldwide (29 ) and has several causes (Exhibit 1). Iron deficiency represents a spectrum (Table 3) ranging from iron depletion, which causes no physiological impairments, to iron-deficiency anemia, which affects the functioning of several organ systems. In iron depletion, the amount of stored iron (e.g., as measured by serum ferritin concentration) is reduced but the amount of functional iron may not be affected (30,31 ). Persons who have iron depletion have no iron stores to mobilize if the body requires more iron. In iron-deficient erythropoiesis, stored iron is depleted and transport iron (e.g., as measured by transferrin saturation) is reduced further; the amount of iron absorbed is not sufficient to replace the amount lost or to provide the amount needed for growth and function. In this stage, the shortage of iron limits red blood cell production and results in increased erythrocyte protoporphyrin concentration. In iron-deficiency anemia, the most severe form of iron deficiency, the shortage of iron leads to underproduction of iron-containing functional compounds, including Hb. The red blood cells of persons who have iron-deficiency anemia are microcytic and hypochromic (30,31 ).
In infants (persons aged 0-12 months) and preschool children (persons aged 1-5 years), iron-deficiency anemia results in developmental delays and behavioral disturbances (e.g., decreased motor activity, social interaction, and attention to tasks) (32,33). These developmental delays may persist past school age (i.e., 5 years) if the iron deficiency is not fully reversed (32)(33)(34). In these studies of development and behavior, iron-deficiency anemia was defined as a Hb concentration of ≤10.0 g/dL or ≤10.5 g/dL; further study is needed to determine the effects of mild iron-deficiency anemia (for example, a Hb concentration of >10.0 g/dL but <11.0 g/dL in children aged 1-<2 years) on infant and child development and behavior. Iron-deficiency anemia also contributes to lead poisoning in children by increasing the gastrointestinal tract's ability to absorb heavy metals, including lead (35 ). Iron-deficiency anemia is associated with conditions that may independently affect infant and child development (e.g., low birthweight, generalized undernutrition, poverty, and high blood level of lead) that need to be taken into account when interventions addressing iron-deficiency anemia are developed and evaluated (34 ). In adults (persons aged ≥18 years), iron-deficiency anemia among laborers (e.g., tea pickers, latex tappers, and cotton mill workers) in the developing world impairs work capacity; the impairment appears to be at least partially reversible with iron treatment (36,37 ). It is not known whether iron-deficiency anemia affects the capacity to perform less physically demanding labor that is dependent on sustained cognitive or coordinated motor function (37 ).
# EXHIBIT 1. Causes of iron deficiency
Among pregnant women, iron-deficiency anemia during the first two trimesters of pregnancy is associated with a twofold increased risk for preterm delivery and a threefold increased risk for delivering a low-birthweight baby (38 ). Evidence from randomized control trials indicates that iron supplementation decreases the incidence of iron-deficiency anemia during pregnancy (10,(39)(40)(41)(42), but trials of the effect of universal iron supplementation during pregnancy on adverse maternal and infant outcomes are inconclusive (10,43,44 ).
# Risk for and Prevalence of Iron Deficiency in the United States
A rapid rate of growth coincident with frequently inadequate intake of dietary iron places children aged <24 months, particularly those aged 9-18 months, at the highest risk of any age group for iron deficiency (3 ). The iron stores of full-term infants can meet an infant's iron requirements until ages 4-6 months, and iron-deficiency anemia generally does not occur until approximately age 9 months. Compared with full-term infants of normal or high birthweight, preterm and low-birthweight infants are born with lower iron stores and grow faster during infancy; consequently, their iron stores are often depleted by ages 2-3 months (5,23 ) and they are at greater risk for iron deficiency than are full-term infants of normal or high birthweight. Data from the third National Health and Nutrition Examination Survey (NHANES III), which was conducted during 1988-1994, indicated that 9% of children aged 12-36 months in the United States had iron deficiency (on the basis of two of three abnormal values for erythrocyte protoporphyrin concentration, serum ferritin concentration, and transferrin saturation) and that 3% also had iron-deficiency anemia (Table 4). The prevalence of iron deficiency is higher among children living at or below the poverty level than among those living above the poverty level and higher among black or Mexican-American children than among white children (45 ).
Evidence from the Continuing Survey of Food Intakes by Individuals (CSFII), which was conducted during 1994-1996, suggests that most infants meet the recommended dietary allowance for iron through diet (Table 5; these data exclude breast-fed infants). However, the evidence also suggests that more than half of children aged 1-2 years may not be meeting the recommended dietary allowance for iron through their diet (Table 5; these data do not include iron intake from supplemental iron). An infant's diet is a reasonable predictor of iron status in late infancy and early childhood (23,48 ). For example, approximately 20%-40% of infants fed only non-iron-fortified formula or whole cow's milk and 15%-25% of breast-fed infants are at risk for iron deficiency by ages 9-12 months (23,48 ). Infants fed mainly iron-fortified formula (≥1.0 mg iron/100 kcal formula) (8 ) are not likely to have iron deficiency at age 9 months (48 ). Another study has documented that intake of iron-fortified cereal protects against iron deficiency: among exclusively breast-fed infants who were fed cereal starting at age 4 months, 3% of infants who were randomized to receive iron-fortified cereal compared with 15% of infants who were randomized to receive non-iron-fortified cereal had iron-deficiency anemia at age 8 months (49 ). The effect of prolonged exclusive breast feeding on iron status is not well understood. One nonrandomized study with a small cohort suggested that exclusive breast feeding for >7 months is protective against iron deficiency compared with breast feeding plus the introduction of non-iron-fortified foods at age ≤7 months (50 ); infants weaned to iron-fortified foods were not included in this study.
Early introduction (i.e., before age 1 year) of whole cow's milk and consumption of >24 oz of whole cow's milk daily after the 1st year of life are risk factors for iron deficiency because this milk has little iron, may replace foods with higher iron content, and may cause occult gastrointestinal bleeding (8,48,51,52 ). Because goat's milk and cow's milk have similar compositions (53,54 ), infants fed goat's milk are likely to have the same risk for developing iron deficiency as do infants fed cow's milk. Of all milks and formulas, breast milk has the highest percentage of bioavailable iron, and breast milk and iron-fortified formulas provide sufficient iron to meet an infant's needs (55 ). Iron-fortified formulas are readily available, do not cost much more than non-iron-fortified formulas, and have few proven side effects except for darker stools (56,57 ). Controlled trials and observational studies have indicated that iron-fortified formula causes no more gastrointestinal distress than does non-iron-fortified formula (56)(57)(58), and there is little medical indication for non-iron-fortified formula (59 ). After age 24 months, when the growth rate of children slows and the diet becomes more diversified, the risk for iron deficiency drops (28,45,47 ). In children aged >36 months, dietary iron and iron status are usually adequate (45,47 ). For these older children, risks for iron deficiency include limited access to food (e.g., because of low family income (45 ) or because of migrant or refugee status), a low-iron or other specialized diet, and medical conditions that affect iron status (e.g., inflammatory or bleeding disorders) (3 ).
During adolescence (ages 12-<18 years), iron requirements (46 ) and hence the risk for iron deficiency increase because of rapid growth (60,61 ). Among boys, the risk subsides after the peak pubertal growth period. Among girls and women, however, menstruation increases the risk for iron deficiency throughout the childbearing years. An important risk factor for iron-deficiency anemia among nonpregnant women of childbearing age is heavy menstrual blood loss (≥80 mL/month) (18 ), which affects an estimated 10% of these women in the United States (17,18 ). Other risk factors include use of an intrauterine device (which is associated with increased menstrual blood loss), high parity, previous diagnosis of iron-deficiency anemia, and low iron intake (45,60 ). Use of oral contraceptives is associated with decreased risk for iron deficiency (18,62 ).
Data from CSFII suggest that only one fourth of adolescent girls and women of childbearing age (12-49 years) meet the recommended dietary allowance for iron through diet (Table 5). Indeed, data from the complete NHANES III indicated that 11% of nonpregnant women aged 16-49 years had iron deficiency and that 3%-5% also had iron-deficiency anemia (Table 4).
Among pregnant women, expansion of blood volume by approximately 35% and growth of the fetus, placenta, and other maternal tissues increase the demand for iron threefold in the second and third trimesters to approximately 5.0 mg iron/day (18,46 ). Although menstruation ceases and iron absorption increases during pregnancy, most pregnant women who do not take iron supplements to meet increased iron requirements during pregnancy cannot maintain adequate iron stores, particularly during the second and third trimesters (63 ). After delivery, the iron in the fetus and placenta is lost to the woman, but some of the iron in the expanded blood volume may be returned to the woman's iron stores (18 ).
The prevalence of anemia in low-income, pregnant women enrolled in public health programs in the United States has remained fairly stable since 1979 (4 ). In 1993, the prevalence of anemia among these women was 9%, 14%, and 37% in the first, second, and third trimesters, respectively (4 ). Comparable data for the U.S. population of all pregnant women are unavailable. The low dietary intake of iron among U.S. women of childbearing age (47 ), the high prevalence of iron deficiency and iron-deficiency anemia among these women (45 ), and the increased demand for iron during pregnancy (18,46 ) suggest that anemia during pregnancy may extend beyond low-income women.
Published data on iron supplement use by a representative sample of pregnant U.S. women are limited. In the 1988 National Maternal and Infant Health Survey of a nationally representative sample of U.S. women who delivered a child in that year, 83% of respondents reported that they took supplements with multiple vitamins and minerals ≥3 days/week for 3 months after they found out they were pregnant (64 ). Significantly smaller percentages of black women; Eskimo, Aleut, or American Indian women; women aged <20 years; and women having less than a high school education reported taking these supplements. In this survey, self-reported use of supplementation was within the range (55%-95%) found in a review of studies using objective measures to estimate adherence (e.g., pill counts and serum ferritin concentration) (65 ) . The survey results suggest that the groups of women at high risk for iron deficiency during nonpregnancy are less likely to take supplements with multiple vitamins and minerals during pregnancy. This survey did not question respondents about changes in supplement use during pregnancy or what dose of iron supplements was consumed.
In the United States, the main reasons for lack of a recommended iron supplementation regimen during pregnancy may include lack of health-care provider and patient perceptions that iron supplements improve maternal and infant outcomes (65 ), complicated dose schedules (5,65 ), and uncomfortable side effects (e.g., constipation, nausea, and vomiting) (66,67 ). Low-dose supplementation regimens that meet pregnancy requirements (i.e., 30 mg iron/day) (46 ) and reduce unwanted side effects are as effective as higher dose regimens (i.e., 60 or 120 mg iron/day) in preventing iron-deficiency anemia (66 ). Simplified dose schedules (e.g., 1 dose/day) may also improve compliance (65 ). Methods to improve compliance among pregnant women at high risk for iron deficiency require further study.
Among men (males aged ≥18 years) and postmenopausal women in the United States, iron-deficiency anemia is uncommon. Data from NHANES III indicated that ≤2% of men aged ≥20 years and 2% of women aged ≥50 years had iron-deficiency anemia (Table 4). Data from CSFII indicate that most men and most women aged ≥50 years meet the recommended dietary allowance for iron through diet (Table 5). In a study of adults having iron-deficiency anemia, 62% had clinical evidence of gastrointestinal bleeding as a result of lesions (e.g., ulcers and tumors) (68). In NHANES I, which was conducted during 1971-1975, about two thirds of anemia cases among men and postmenopausal women were attributable to chronic disease or inflammatory conditions (69). The findings of these studies suggest that, among these populations, the primary causes of anemia are chronic disease and inflammatory conditions and that low iron intake should not be assumed to be the cause of the anemia.
# TESTS USED TO ASSESS IRON STATUS
Iron status can be assessed through several laboratory tests. Because each test assesses a different aspect of iron metabolism, results of one test may not always agree with results of other tests. Hematological tests based on characteristics of red blood cells (i.e., Hb concentration, hematocrit, mean cell volume, and red blood cell distribution width) are generally more available and less expensive than are biochemical tests. Biochemical tests (i.e., erythrocyte protoporphyrin concentration, serum ferritin concentration, and transferrin saturation), however, detect earlier changes in iron status.
Although all of these tests can be used to assess iron status, no single test is accepted for diagnosing iron deficiency (70 ). Detecting iron deficiency in a clinical or field setting is more complex than is generally believed.
Lack of standardization among the tests and a paucity of laboratory proficiency testing limit comparison of results between laboratories (71). Laboratory proficiency testing is currently available for measuring Hb concentration, hematocrit, red blood cell count, serum ferritin concentration, and serum iron concentration; provisional proficiency testing was added in 1997 for total iron-binding capacity in the College of American Pathologists survey and was added to the American Association of Bioanalysts survey in 1998. As of April 1998, three states (New York, Pennsylvania, and Wisconsin) had proficiency testing programs for erythrocyte protoporphyrin concentration. Regardless of whether test standardization and proficiency testing become routine, better understanding among health-care providers about the strengths and limitations of each test is necessary to improve screening for and diagnosis of iron-deficiency anemia, especially because the results from all of these tests can be affected by factors other than iron status.
Only the most common indicators of iron deficiency are described in this section. Other indicators of iron deficiency (e.g., unbound iron-binding capacity and the concentrations of transferrin receptor, serum transferrin, and holo-ferritin) are less often used or are under development.
# Hb Concentration and Hematocrit
Because of their low cost and the ease and speed with which they can be performed, Hb concentration and hematocrit (Hct) are the tests most commonly used to screen for iron deficiency. These measures reflect the amount of functional iron in the body. The concentration of the iron-containing protein Hb in circulating red blood cells is the more direct and sensitive measure. Hct indicates the proportion of whole blood occupied by the red blood cells; it falls only after the Hb concentration falls. Because changes in Hb concentration and Hct occur only at the late stages of iron deficiency, both tests are late indicators of iron deficiency; nevertheless, these tests are essential for determining iron-deficiency anemia.
Because iron deficiency is such a common cause of childhood anemia, the terms anemia, iron deficiency, and iron-deficiency anemia are often used interchangeably (3). Only cases of anemia with additional evidence of iron deficiency, however, can be classified as iron-deficiency anemia. The assumption of a close association between anemia and iron deficiency is most accurate when the prevalence of iron deficiency is high. In the United States, the prevalence and severity of anemia have declined in recent years; hence, the proportion of anemia due to causes other than iron deficiency has increased substantially. As a consequence, the effectiveness of anemia screening for iron deficiency has decreased in the United States.
Iron deficiency may be defined as absent bone marrow iron stores (as assessed on bone marrow iron smears), an increase in Hb concentration of >1.0 g/dL after iron treatment, or abnormal values on certain other biochemical tests (17). The recent recognition that iron deficiency seems to have general and potentially serious negative effects (32-34) has made identifying persons having iron deficiency as important as identifying persons having iron-deficiency anemia.
The case definition of anemia recommended in this report is <5th percentile of the distribution of Hb concentration or Hct in a healthy reference population and is based on age, sex, and (among pregnant women) stage of pregnancy (45,72). This case definition for anemia was shown to correctly identify 37% of women of childbearing age and 25% of children aged 1-5 years who were iron deficient (defined as two of three positive test results) (sensitivity) and to correctly classify 93% of women of childbearing age and 92% of children aged 1-5 years as not having iron deficiency (specificity) (73). Lowering the Hb concentration or Hct cutoff would result in identifying fewer people who have anemia due to causes other than iron deficiency (false positives) but also in overlooking more people with iron deficiency (true positives) (74).
The distributions of Hb concentration and Hct and thus the cutoff values for anemia differ between children, men, nonpregnant women, and pregnant women and by age or weeks of gestation (Table 6). The distributions also differ by altitude, smoking status, and race. Among pregnant women, Hb concentration and Hct decline during the first and second trimesters because of an expanding blood volume (18,39-42). Among pregnant women who do not take iron supplements, Hb concentration and Hct remain low in the third trimester, whereas among pregnant women who have adequate iron intake, Hb concentration and Hct gradually rise during the third trimester toward the prepregnancy levels (39,40). Because adequate data are lacking in the United States, the cutoff values for anemia are based on clinical studies of European women who had taken iron supplementation during pregnancy (39-42,72). For pregnant women, a test result >3 standard deviations (SD) higher than the mean of the reference population (i.e., a Hb concentration of >15.0 g/dL or a Hct of >45.0%), particularly in the second trimester, likely indicates poor blood volume expansion (72). High Hb concentration or Hct has been associated with hypertension and poor pregnancy outcomes (e.g., fetal growth retardation, fetal death, preterm delivery, and low birthweight) (75-78). In one study, women who had a Hct of ≥43% at 26-30 weeks' gestation had more than a twofold increased risk for preterm delivery and a fourfold increased risk for delivering a child having fetal growth retardation than did women who had a Hct of 33%-36% (76). Hence, a high Hb concentration or Hct in the second or third trimester of pregnancy should not be considered an indicator of desirable iron status.
Long-term residency at high altitude (≥3,000 ft) (79) and cigarette smoking (80) cause a generalized upward shift in Hb concentration and Hct (Table 7). The effectiveness of screening for anemia is lowered if the cutoff values are not adjusted for these factors (72,79,80). Adjustment allows the positive predictive value of anemia screening to be comparable between those who reside near sea level and those who live at high altitude and between smokers and nonsmokers (72).
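To illustrate the mechanics of such an adjustment, the sketch below shifts a base cutoff upward before the comparison is made. It is written in Python, and the increment values are placeholders of our own (Table 7 is not reproduced here), not the published adjustments.

```python
def adjusted_hb_cutoff(base_cutoff_g_dl, altitude_ft=0, smoker=False):
    """Shift an anemia cutoff upward for altitude and smoking.

    The increments below are illustrative placeholders only; the actual
    values must be taken from Table 7 of the published recommendations.
    """
    adjustment = 0.0
    if altitude_ft >= 3000:
        # Placeholder: a stepwise increase with altitude above 3,000 ft.
        adjustment += 0.2 * ((altitude_ft - 3000) // 1000 + 1)
    if smoker:
        adjustment += 0.3  # Placeholder increment for smokers.
    return base_cutoff_g_dl + adjustment

# A screen then compares the observed Hb against the adjusted cutoff:
print(adjusted_hb_cutoff(12.0, altitude_ft=5200, smoker=True))  # 12.9
```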
In the United States, the distribution of Hb concentration values is similar among whites and Asian Americans (81), and the distribution of Hct values is similar among whites and American Indians (82). The distributions are lower among blacks than whites, however, even after adjustment for income (83,84). These different distributions are not caused by a difference in iron status indicators (e.g., iron intake, serum ferritin concentration, or transferrin saturation); thus, applying the same criteria for anemia to all races results in a higher rate of false-positive cases of iron deficiency for blacks (84). For example, in the United States during 1976-1980, 28% of nonpregnant black women but only 5% of nonpregnant white women had a Hb concentration of <12 g/dL and, according to the anemia criteria, would be classified as iron deficient, even though other tests for iron status suggested these women were not iron deficient (84). For this reason, the Institute of Medicine recommends lowering Hb concentration and Hct cutoff values for black children aged <5 years by 0.4 g/dL and 1%, respectively, and for black adults by 0.8 g/dL and 2%, respectively (5). Because the reason for this disparity in distributions by race has not been determined, the recommendations in this report do not provide race-specific cutoff values for anemia. Regardless, health-care providers should be aware of the possible difference in the positive predictive value of anemia screening for iron deficiency among blacks and whites and consider using other iron status tests (e.g., serum ferritin concentration and transferrin saturation) for their black patients.

Accurate, low-cost, clinic-based instruments have been developed for measuring Hb concentration and Hct by using capillary or venous blood (85,86). Small diurnal variations are seen in Hb concentration and Hct measurements, but these variations are neither biologically nor statistically significant (87,88). A potential source of error when using capillary blood to estimate Hb concentration and Hct in screening is improper sampling technique. For example, excessive squeezing (i.e., "milking") of the finger contaminates the blood with tissue fluid, leading to falsely low readings (89). Confirming a low reading by obtaining a second capillary blood sample from the finger or by venipuncture is recommended.
Although measures of Hb concentration and Hct cannot be used to determine the cause of anemia, a diagnosis of iron-deficiency anemia can be made if Hb concentration or Hct increases after a course of therapeutic iron supplementation (23,51 ). Alternatively, other laboratory tests (e.g., mean cell volume, red blood cell distribution width, and serum ferritin concentration) can be used to differentiate iron-deficiency anemia from anemia due to other causes.
In the United States in recent years, the usefulness of anemia screening as an indicator of iron deficiency has become more limited, particularly for children. Studies using transferrin saturation (a more sensitive test for iron deficiency) have documented that iron deficiency in most subpopulations of children has declined such that screening by Hb concentration no longer efficiently predicts iron deficiency (3,45,51,90). Data from NHANES II, which was conducted during 1976-1980, indicated that <50% of children aged 1-5 years and women in their childbearing years who had anemia (as defined by Hb concentration <5th percentile) were iron deficient (i.e., had at least two of the following: low mean cell volume, high erythrocyte protoporphyrin concentration, or low transferrin saturation) (70,73,83). Causes of anemia other than iron deficiency include other nutritional deficiencies (e.g., folate or vitamin B12 deficiency), hereditary defects in red blood cell production (e.g., thalassemia major and sickle cell disease), recent or current infection, and chronic inflammation (91). The current pattern of iron-deficiency anemia in the United States (28,45) indicates that selective anemia screening of children at known risk for iron deficiency or additional measurement of indicators of iron deficiency (e.g., erythrocyte protoporphyrin concentration and serum ferritin concentration) to increase the positive predictive value of screening are now suitable approaches to assessing iron deficiency among most U.S. children (3,73). The costs and feasibility of screening using additional indicators of iron deficiency may preclude the routine use of these indicators.
# Mean Cell Volume
Mean cell volume (MCV), the average volume of red blood cells, is measured in femtoliters (10⁻¹⁵ liters). This value can be calculated as the ratio of Hct to red blood cell count or measured directly using an electronic counter. MCV is highest at birth, decreases during the first 6 months of life, then gradually increases during childhood to adult levels (23,51). A low MCV corresponds with the 5th percentile for age for the reference population in NHANES III (28). Some anemias, including iron-deficiency anemia, result in microcytic red blood cells; a low MCV thus indicates microcytic anemia (Table 8). If cases of lead poisoning and the anemias of infection, chronic inflammatory disease, and thalassemia minor can be excluded, a low MCV serves as a specific index for iron-deficiency anemia (28,87,94,95).
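As a worked example of the ratio calculation (a minimal sketch in Python; the factor of 10 follows from expressing Hct as a percentage and the red cell count in millions per microliter):

```python
def mean_cell_volume_fl(hematocrit_pct, rbc_millions_per_ul):
    """MCV (fL) = Hct (%) x 10 / red blood cell count (10^6 cells/uL)."""
    return hematocrit_pct * 10 / rbc_millions_per_ul

# Example: Hct of 36% with 4.5 million red cells/uL gives an MCV of 80 fL.
print(mean_cell_volume_fl(36.0, 4.5))  # 80.0
```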
# Red Blood Cell Distribution Width
Red blood cell distribution width (RDW) is calculated by dividing the SD of red blood cell volume by MCV and multiplying by 100 to express the result as a percentage:

RDW (%) = (SD of red blood cell volume ÷ MCV) × 100

A high RDW is generally set at >14.0%, which corresponds to the 95th percentile of RDW for the reference population in NHANES III (20). The RDW value obtained depends on the instrument used (51,95).
# TABLE 8. Cutoff values for laboratory tests for iron deficiency

| Test | Cutoff value | Reference |
|------|--------------|-----------|
| Hemoglobin concentration | See Table 6 | (45,72) |
| Hematocrit | See Table 6 | (45,72) |
| Mean cell volume | <5th percentile for age | (28) |
| Red blood cell distribution width | >14.0% | (20) |
| Erythrocyte protoporphyrin concentration | >30 µg/dL whole blood or >70 µg/dL red blood cells (adults); >80 µg/dL red blood cells (children aged 1-2 years) | (28,45,91) |
| Serum ferritin concentration | ≤15 µg/L | (93) |
| Transferrin saturation | <16% | (93) |

An RDW measurement often follows an MCV test to help determine the cause of a low MCV. For example, iron-deficiency anemia usually causes greater variation in red blood cell size than does thalassemia minor (96). Thus, a low MCV and an RDW of >14.0% indicate iron-deficiency anemia, whereas a low MCV and an RDW ≤14.0% indicate thalassemia minor (51).
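The RDW formula and the MCV/RDW rule just described can be combined into a small decision sketch (Python; the age-specific low-MCV cutoff must be supplied from the reference tables, and the function restates the published rule for illustration, not as a diagnostic tool):

```python
def rdw_pct(sd_rbc_volume_fl, mcv_fl):
    """RDW (%) = (SD of red blood cell volume / MCV) x 100."""
    return sd_rbc_volume_fl / mcv_fl * 100

def interpret_low_mcv(mcv_fl, rdw, mcv_low_cutoff_fl):
    """Distinguish the two common causes of a low MCV by the RDW."""
    if mcv_fl >= mcv_low_cutoff_fl:
        return "MCV not low; rule does not apply"
    # Iron deficiency produces more size variation than thalassemia minor.
    return ("suggests iron-deficiency anemia" if rdw > 14.0
            else "suggests thalassemia minor")

print(interpret_low_mcv(68.0, rdw_pct(11.0, 68.0), 75.0))
# RDW = 16.2% with a low MCV -> "suggests iron-deficiency anemia"
```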
# Erythrocyte Protoporphyrin Concentration
Erythrocyte protoporphyrin is the immediate precursor of Hb. The concentration of erythrocyte protoporphyrin in blood increases when insufficient iron is available for Hb production. A concentration of >30 µg/dL of whole blood or >70 µg/dL of red blood cells among adults and a concentration of >80 µg/dL of red blood cells among children aged 1-2 years indicates iron deficiency (28,45,91 ). The normal range of erythrocyte protoporphyrin concentration is higher for children aged 1-2 years than for adults, but no consensus exists on the normal range for infants (28,90 ). The sensitivity of free erythrocyte protoporphyrin to iron deficiency (as determined by response to iron therapy) in children and adolescents aged 6 months-17 years is 42%, and the estimated specificity is 61% (74 ).
Infection, inflammation, and lead poisoning as well as iron deficiency can elevate erythrocyte protoporphyrin concentration (23,92). This measure of iron status has several advantages and disadvantages relative to other laboratory measures. For example, the day-to-day variation within persons for erythrocyte protoporphyrin concentration is less than that for serum iron concentration and transferrin saturation (87). A high erythrocyte protoporphyrin concentration is an earlier indicator of iron-deficient erythropoiesis than is anemia, but it is not as early an indicator of low iron stores as is low serum ferritin concentration (30). Inexpensive, clinic-based methods have been developed for measuring erythrocyte protoporphyrin concentration, but these methods can be less reliable than laboratory methods (92).
# Serum Ferritin Concentration
Nearly all ferritin in the body is intracellular; a small amount circulates in the plasma. Under normal conditions, a direct relationship exists between serum ferritin concentration and the amount of iron stored in the body (97), such that 1 µg/L of serum ferritin concentration is equivalent to approximately 10 mg of stored iron (98). In the United States, the average serum ferritin concentration is 135 µg/L for men (28), 43 µg/L for women (28), and approximately 30 µg/L for children aged 6-24 months (23).
Serum ferritin concentration is an early indicator of the status of iron stores and is the most specific indicator available of depleted iron stores, especially when used in conjunction with other tests to assess iron status. For example, among women who test positive for anemia on the basis of Hb concentration or Hct, a serum ferritin concentration of ≤15 µg/L confirms iron deficiency and a serum ferritin concentration of >15 µg/L suggests that iron deficiency is not the cause of the anemia (93 ). Among women of childbearing age, the sensitivity of low serum ferritin concentration (≤15 µg/L) for iron deficiency as defined by no stainable bone marrow iron is 75%, and the specificity is 98%; when low serum ferritin concentration is set at <12 µg/L, the sensitivity for iron deficiency is 61% and the specificity is 100% (93 ). Although low serum ferritin concentration is an early indicator of low iron stores, it has been questioned whether a normal concentration measured during the first or second trimester of pregnancy can predict adequate iron status later in pregnancy (6 ).
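A sketch of the two quantitative relationships above (Python; the function names are ours, and the iron-store conversion holds only under normal conditions):

```python
def estimated_stored_iron_mg(serum_ferritin_ug_l):
    """~10 mg of stored iron per 1 ug/L of serum ferritin (normal conditions)."""
    return serum_ferritin_ug_l * 10

def interpret_ferritin_given_anemia(serum_ferritin_ug_l):
    """For a woman who already screens positive for anemia by Hb or Hct."""
    if serum_ferritin_ug_l <= 15:
        return "confirms iron deficiency"
    return "suggests iron deficiency is not the cause of the anemia"

print(estimated_stored_iron_mg(43))           # ~430 mg (average U.S. woman)
print(interpret_ferritin_given_anemia(12.0))  # confirms iron deficiency
```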
The cost of assessing serum ferritin concentration and the unavailability of clinic-based measurement methods hamper the use of this measurement in screening for iron deficiency. In the past, methodological problems have hindered the comparability of measurements taken in different laboratories (87), but this problem may be reduced by proficiency testing and standardized methods. Factors other than the level of stored iron can result in large within-individual variation in serum ferritin concentration (99). For example, because serum ferritin is an acute-phase reactant, chronic infection, inflammation, or diseases that cause tissue and organ damage (e.g., hepatitis, cirrhosis, neoplasia, or arthritis) can raise its concentration independent of iron status (97). This elevation can mask depleted iron stores.
# Transferrin Saturation
Transferrin saturation indicates the extent to which transferrin has vacant iron-binding sites (i.e., a low transferrin saturation indicates a high proportion of vacant iron-binding sites). Saturation is highest in neonates, decreases by age 4 months, and increases throughout childhood and adolescence until adulthood (23,28). Transferrin saturation is based on two laboratory measures, serum iron concentration and total iron-binding capacity (TIBC). Transferrin saturation is calculated by dividing serum iron concentration by TIBC and multiplying by 100 to express the result as a percentage:
Transferrin saturation (%) = (serum iron concentration ÷ TIBC) × 100

Serum iron concentration is a measure of the total amount of iron in the serum and is often provided with results from other routine tests evaluated by automated, laboratory chemistry panels. Many factors can affect the results of this test. For example, the concentration of serum iron increases after each meal (71), infections and inflammations can decrease the concentration (69), and diurnal variation causes the concentration to rise in the morning and fall at night (100). The day-to-day variation of serum iron concentration within individuals is greater than that for Hb concentration and Hct (88,101).
TIBC is a measure of the iron-binding capacity within the serum and reflects the availability of iron-binding sites on transferrin (94 ). Thus, TIBC increases when serum iron concentration (and stored iron) is low and decreases when serum iron concentration (and stored iron) is high. Factors other than iron status can affect results from this test. For example, inflammation, chronic infection, malignancies, liver disease, nephrotic syndrome, and malnutrition can lower TIBC readings, and oral contraceptive use and pregnancy can raise the readings (87,102 ). Nevertheless, the day-to-day variation is less than that for serum iron concentration (87,101 ). TIBC is less sensitive to iron deficiency than is serum ferritin concentration, because changes in TIBC occur after iron stores are depleted (17,31,94 ).
A transferrin saturation of <16% among adults is often used to confirm iron deficiency (93 ). Among nonpregnant women of childbearing age, the sensitivity of low transferrin saturation (<16%) for iron deficiency as defined by no stainable bone marrow iron is 20%, and the specificity is 93% (93 ).
The factors that affect serum iron concentration and TIBC, such as iron status, diurnal variation (87,103), and day-to-day variation within persons (101), can affect the measured transferrin saturation as well. The diurnal variation is larger for transferrin saturation than it is for Hb concentration or Hct (87,103). Transferrin saturation is an indicator of iron-deficient erythropoiesis rather than iron depletion; hence, it is less sensitive to changes in iron stores than is serum ferritin concentration (30,31). The cost of assessing transferrin saturation and the unavailability of simple, clinic-based methods for measuring transferrin saturation hinder the use of this test in screening for iron deficiency.
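The calculation itself is straightforward once serum iron concentration and TIBC are measured in the same units (a minimal Python sketch):

```python
def transferrin_saturation_pct(serum_iron_ug_dl, tibc_ug_dl):
    """Transferrin saturation (%) = serum iron / TIBC x 100 (units cancel)."""
    return serum_iron_ug_dl / tibc_ug_dl * 100

# Example: serum iron of 50 ug/dL with a TIBC of 400 ug/dL gives 12.5%,
# below the 16% value often used to confirm iron deficiency in adults.
print(transferrin_saturation_pct(50, 400))  # 12.5
```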
# JUSTIFICATION FOR RECOMMENDATIONS
These recommendations are intended to guide primary health-care providers in preventing and controlling iron deficiency in infants, preschool children, and women of childbearing age (especially pregnant women). Both primary prevention through appropriate dietary intake and secondary prevention through detecting and treating iron-deficiency anemia are discussed.
# Primary Prevention
Primary prevention of iron deficiency means ensuring an adequate intake of iron. A reliable source of dietary iron is essential for every infant and child's growth and development, because a rapid rate of growth and low dietary iron may predispose an infant to exhaustion of iron stores by ages 4-6 months (23). Primary prevention of iron deficiency is most important for children aged <2 years, because among all age groups they are at the greatest risk for iron deficiency caused by inadequate intake of iron (28,45,47,48,91). The adequacy of the iron content of an infant's diet is a major determinant of the iron status of the infant as a young child, as indicated by declines in the prevalence of iron-deficiency anemia that correspond with improvements in infant feeding practices (1-3). In infants and young children, iron deficiency may result in developmental and behavioral disturbances (33,34).
The evidence for the effectiveness of primary prevention among pregnant women is less clear. Although iron-deficiency anemia during pregnancy is associated with preterm delivery and delivering a low-birthweight baby (38), well-designed, randomized controlled trials are needed to evaluate the effectiveness of universal iron supplementation in mitigating adverse birth outcomes. Some studies have indicated that adequate iron supplementation during pregnancy reduces the prevalence of iron-deficiency anemia (6,10,39-42,66,104), but over the last few decades, the recommendation by the Council on Foods and Nutrition and other groups to supplement iron intake during pregnancy has not resulted in a reduced prevalence of anemia among low-income, pregnant women (4,9,105). Evidence on iron supplement use is limited, however, so it is not known how well the recommendation has been followed. Conclusive evidence of the benefits of universal iron supplementation for all women is lacking, but CDC advocates universal iron supplementation for pregnant women because a large proportion of women have difficulty maintaining iron stores during pregnancy and are at risk for anemia (6,18,63), iron-deficiency anemia during pregnancy is associated with adverse outcomes (38), and supplementation during pregnancy is not associated with important health risks (10,65,66).
# Potential Adverse Effects of Increasing Dietary Iron Intake
Approximately 3.3 million women of childbearing age and 240,000 children aged 1-2 years have iron-deficiency anemia (45); conversely, up to one million persons in the United States may be affected by iron overload due to hemochromatosis (106,107). Hemochromatosis is a genetic condition characterized by excessive iron absorption, excess tissue iron stores, and potential tissue injury. If undetected and untreated, iron overload may eventually result in the onset of morbidity (e.g., cirrhosis, hepatomas, diabetes, cardiomyopathy, arthritis or arthropathy, or hypopituitarism with hypogonadism), usually between ages 40 and 60 years. Clinical expression of iron overload depends on the severity of the metabolic defect, the presence of sufficient quantities of absorbable iron in the diet, and physiological blood loss from the body (e.g., menstruation) (16). Transferrin saturation is the recommended screening test for hemochromatosis; a repeated high value indicates hemochromatosis (108). Preventing or treating the clinical signs of hemochromatosis involves repeated phlebotomy to remove excess iron from the body (108).
Although increases in iron intake would seem contraindicated in persons with hemochromatosis, there is no evidence that iron fortification of foods or the use of a recommended iron supplementation regimen during pregnancy is associated with increased risk for clinical disease due to hemochromatosis (16 ). Even when their dietary intake of iron is approximately average, persons with iron overload due to hemochromatosis will require phlebotomy to reduce their body's iron stores (108 ).
# Secondary Prevention
Secondary prevention involves screening for, diagnosing, and treating iron deficiency. Screening tests can be for anemia or for earlier indicators of iron deficiency (e.g., erythrocyte protoporphyrin concentration or serum ferritin concentration). The cost, feasibility, and variability of measurements other than Hb concentration and Hct currently preclude their use for screening. The decision to screen an entire population or to screen only persons at known risk for iron deficiency should be based on the prevalence of iron deficiency in that population (73 ).
The percentage of anemic persons who are truly iron deficient (i.e., the positive predictive value of anemia screening for iron deficiency) increases with increasing prevalence of iron deficiency in the population (73). In the United States, children from low-income families, children living at or below the poverty level, and black or Mexican-American children are at higher risk for iron deficiency than are children from middle- or high-income families, children living above the poverty level, and white children, respectively (2,3,45). Routine screening for anemia among populations of children at higher risk for iron deficiency is effective, because anemia is predictive of iron deficiency. In populations having a low prevalence of anemia or a prevalence of iron deficiency <10% (e.g., children from middle- or high-income families and white children) (2,3,45), anemia is less predictive of iron deficiency (73), and selectively screening only the persons having known risk factors for iron deficiency increases the positive predictive value of anemia screening (3,70). Because the iron stores of a full-term infant of normal or high birthweight can meet the body's iron requirements up to age 6 months (23), anemia screening is of little value before age 6 months for these infants.
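The dependence of the positive predictive value on prevalence can be made concrete with the child sensitivity and specificity figures cited earlier (25% and 92%); the arithmetic below is a standard Bayes calculation for illustration, not data from the cited studies:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of anemia screening for iron deficiency."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# At 5% prevalence of iron deficiency, few positive anemia screens reflect
# true iron deficiency; at 25% prevalence, about half do.
print(round(ppv(0.25, 0.92, 0.05), 2))  # 0.14
print(round(ppv(0.25, 0.92, 0.25), 2))  # 0.51
```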
Anemia among pregnant women and anemia among all nonpregnant women of childbearing age should be considered together, because childbearing increases the risk for iron deficiency (both during and after pregnancy) (41,42 ), and iron deficiency before pregnancy likely increases the risk for iron deficiency during pregnancy (109 ). Periodic screening for anemia among adolescent girls and women of childbearing age is indicated for several reasons. First, most women have dietary intake of iron below the recommended dietary allowance (46,47 ). Second, heavy menstrual blood loss, which increases iron requirements to above the recommended dietary allowance, affects an estimated 10% of women of childbearing age (17,18 ). Finally, the relatively high prevalence of iron deficiency and iron-deficiency anemia among nonpregnant women of childbearing age (45 ) and of anemia among low-income, pregnant women (4 ) suggests that periodic screening for anemia is indicated among adolescent girls and nonpregnant women of childbearing age during routine medical examinations (73 ) and among pregnant women at the first prenatal visit. Among men and postmenopausal women, in whom iron deficiency and iron-deficiency anemia are uncommon (45 ), anemia screening is not highly predictive of iron deficiency.
# RECOMMENDATIONS

# Infants (Persons Aged 0-12 Months) and Preschool Children (Persons Aged 1-5 Years)
Primary prevention of iron deficiency in infants and preschool children should be achieved through diet. Information on diet and feeding is available in the Pediatric Nutrition Handbook (8 ), Guide to Clinical Preventive Services (10 ), Nutrition and Your Health: Dietary Guidelines for Americans (14 ), Breastfeeding and the Use of Human Milk (110 ), and Clinician's Handbook of Preventive Services: Put Prevention into Practice (111 ). For secondary prevention of iron deficiency in this age group, screening for, diagnosing, and treating iron-deficiency anemia are recommended.
# Primary Prevention
# Milk and Infant Formulas
- Encourage breast feeding of infants.
- Encourage exclusive breast feeding of infants (without supplementary liquid, formula, or food) for 4-6 months after birth.
- When exclusive breast feeding is stopped, encourage use of an additional source of iron (approximately 1 mg/kg per day of iron), preferably from supplementary foods.
- For infants aged <12 months who are not breast fed or who are partially breast fed, recommend only iron-fortified infant formula as a substitute for breast milk.
- For breast-fed infants who receive insufficient iron from supplementary foods by age 6 months (i.e., <1 mg/kg per day), suggest 1 mg/kg per day of iron drops.
- For breast-fed infants who were preterm or had a low birthweight, recommend 2-4 mg/kg per day of iron drops (to a maximum of 15 mg/day) starting at 1 month after birth and continuing until 12 months after birth.
- Encourage use of only breast milk or iron-fortified infant formula for any milk-based part of the diet (e.g., in infant cereal) and discourage use of low-iron milks (e.g., cow's milk, goat's milk, and soy milk) until age 12 months.
- Suggest that children aged 1-5 years consume no more than 24 oz of cow's milk, goat's milk, or soy milk each day.
# Solid Foods
- At age 4-6 months or when the extrusion reflex disappears, recommend that infants be introduced to plain, iron-fortified infant cereal. Two or more servings per day of iron-fortified infant cereal can meet an infant's requirement for iron at this age.
- By approximately age 6 months, encourage one feeding per day of foods rich in vitamin C (e.g., fruits, vegetables, or juice) to improve iron absorption, preferably with meals.
- Suggest introducing plain, pureed meats after age 6 months or when the infant is developmentally ready to consume such food.
# Secondary Prevention

# Universal Screening
- In populations of infants and preschool children at high risk for iron-deficiency anemia (e.g., children from low-income families, children eligible for the Special Supplemental Nutrition Program for Women, Infants, and Children [WIC], migrant children, or recently arrived refugee children), screen all children for anemia between ages 9 and 12 months, 6 months later, and annually from ages 2 to 5 years.
# Selective Screening
- In populations of infants and preschool children not at high risk for iron-deficiency anemia, screen only those children who have known risk factors for the condition. These children are described in the next three bulleted items.
- Consider anemia screening before age 6 months for preterm infants and low-birthweight infants who are not fed iron-fortified infant formula.
- Annually assess children aged 2-5 years for risk factors for iron-deficiency anemia (e.g., a low-iron diet, limited access to food because of poverty or neglect, or special health-care needs). Screen these children if they have any of these risk factors.
- At ages 9-12 months and 6 months later (at ages 15-18 months), assess infants and young children for risk factors for anemia. Screen the following children:
- Preterm or low-birthweight infants
- Infants fed a diet of non-iron-fortified infant formula for >2 months
- Infants introduced to cow's milk before age 12 months
- Breast-fed infants who do not consume a diet adequate in iron after age 6 months (i.e., who receive insufficient iron from supplementary foods)
- Children who consume >24 oz daily of cow's milk
- Children who have special health-care needs (e.g., children who use medications that interfere with iron absorption and children who have chronic infection, inflammatory disorders, restricted diets, or extensive blood loss from a wound, an accident, or surgery)
# Diagnosis and Treatment
- Check a positive anemia screening result by performing a repeat Hb concentration or Hct test. If the tests agree and the child is not ill, a presumptive diagnosis of iron-deficiency anemia can be made and treatment begun.
- Treat presumptive iron-deficiency anemia by prescribing 3 mg/kg per day of iron drops to be administered between meals. Counsel the parents or guardians about adequate diet to correct the underlying problem of low iron intake.
- Repeat the anemia screening in 4 weeks. An increase in Hb concentration of ≥1 g/dL or in Hct of ≥3% confirms the diagnosis of iron-deficiency anemia. If iron-deficiency anemia is confirmed, reinforce dietary counseling, continue iron treatment for 2 more months, then recheck Hb concentration or Hct. Reassess Hb concentration or Hct approximately 6 months after successful treatment is completed (the follow-up logic is sketched after this list).
- If after 4 weeks the anemia does not respond to iron treatment despite compliance with the iron supplementation regimen and the absence of acute illness, further evaluate the anemia by using other laboratory tests, including MCV, RDW, and serum ferritin concentration. For example, a serum ferritin concentration of ≤15 µg/L confirms iron deficiency, and a concentration of >15 µg/L suggests that iron deficiency is not the cause of the anemia.
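The dosing and follow-up rules above can be summarized as a small decision sketch (Python; this restates the algorithm for illustration and is not a substitute for clinical judgment):

```python
def iron_drop_dose_mg_per_day(weight_kg):
    """Treatment dose for presumptive iron-deficiency anemia: 3 mg/kg/day."""
    return 3 * weight_kg

def followup_at_4_weeks(hb_rise_g_dl, hct_rise_pct):
    """Apply the 4-week response rule from the bullets above."""
    if hb_rise_g_dl >= 1.0 or hct_rise_pct >= 3.0:
        return ("iron-deficiency anemia confirmed: reinforce diet counseling, "
                "continue iron 2 more months, then recheck Hb/Hct")
    return "no response: evaluate further with MCV, RDW, and serum ferritin"

print(iron_drop_dose_mg_per_day(10))   # 30 mg/day for a 10-kg child
print(followup_at_4_weeks(1.2, 0.0))   # confirmed
```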
# School-Age Children (Persons Aged 5-<12 Years) and Adolescent Boys (Males Aged 12-<18 Years)
Among school-age children and adolescent boys, only those who have a history of iron-deficiency anemia, special health-care needs, or low iron intake should be screened for anemia. Age-specific anemia criteria should be used (Table 6). Treatment for iron-deficiency anemia includes one 60-mg iron tablet each day for school-age children, two 60-mg iron tablets each day for adolescent boys, and counseling about dietary intake of iron. Follow-up and laboratory evaluation are the same for school-age children and adolescent boys as they are for infants and preschool children.
# Adolescent Girls (Females Aged 12-<18 Years) and Nonpregnant Women of Childbearing Age
Primary prevention of iron deficiency for adolescent girls and nonpregnant women of childbearing age is through diet. Information about healthy diets, including good sources of iron, is available in Nutrition and Your Health: Dietary Guidelines for Americans (14 ). Screening for, diagnosing, and treating iron-deficiency anemia are secondary prevention approaches. Age-specific anemia criteria should be used during screening (Table 6).
# Primary Prevention
- Most adolescent girls and women do not require iron supplements, but encourage them to eat iron-rich foods and foods that enhance iron absorption.
- Women who have low-iron diets are at additional risk for iron-deficiency anemia; guide these women in optimizing their dietary iron intake.
# Secondary Prevention
# Screening
- Starting in adolescence, screen all nonpregnant women for anemia every 5-10 years throughout their childbearing years during routine health examinations.
- Annually screen for anemia women having risk factors for iron deficiency (e.g., extensive menstrual or other blood loss, low iron intake, or a previous diagnosis of iron-deficiency anemia).
# Diagnosis and Treatment
- Confirm a positive anemia screening result by performing a repeat Hb concentration or Hct test. If the adolescent girl or woman is not ill, a presumptive diagnosis of iron-deficiency anemia can be made and treatment begun.
- Treat adolescent girls and women who have anemia by prescribing an oral dose of 60-120 mg/day of iron. Counsel these patients about correcting iron deficiency through diet.
- Follow up adolescent girls and nonpregnant women of childbearing age as is done for infants and preschool children, except that for a confirmed case of iron-deficiency anemia, continue iron treatment for 2-3 more months.
- If after 4 weeks the anemia does not respond to iron treatment despite compliance with the iron supplementation regimen and the absence of acute illness, further evaluate the anemia by using other laboratory tests, including MCV, RDW, and serum ferritin concentration. In women of African, Mediterranean, or Southeast Asian ancestry, mild anemia unresponsive to iron therapy may be due to thalassemia minor or sickle cell trait.
# Pregnant Women
Primary prevention of iron deficiency during pregnancy includes adequate dietary iron intake and iron supplementation. Information about healthy diets, including good sources of iron, is found in Nutrition and Your Health: Dietary Guidelines for Americans (14 ). More detailed information for pregnant women is found in Nutrition During Pregnancy and Lactation: An Implementation Guide (112 ). Secondary prevention involves screening for, diagnosing, and treating iron-deficiency anemia.
# Primary Prevention
- Start oral, low-dose (30 mg/day) supplements of iron at the first prenatal visit.
- Encourage pregnant women to eat iron-rich foods and foods that enhance iron absorption.
- Pregnant women whose diets are low in iron are at additional risk for iron-deficiency anemia; guide these women in optimizing their dietary iron intake.
# Secondary Prevention
# Screening
- Screen for anemia at the first prenatal care visit. Use the anemia criteria for the specific stage of pregnancy (Table 6).
# Diagnosis and Treatment
- Confirm a positive anemia screening result by performing a repeat Hb concentration or Hct test. If the pregnant woman is not ill, a presumptive diagnosis of iron-deficiency anemia can be made and treatment begun.
- If Hb concentration is <9.0 g/dL or Hct is <27.0%, refer the patient to a physician familiar with anemia during pregnancy for further medical evaluation.
- Treat anemia by prescribing an oral dose of 60-120 mg/day of iron. Counsel pregnant women about correcting iron-deficiency anemia through diet.
- If after 4 weeks the anemia does not respond to iron treatment (the woman remains anemic for her stage of pregnancy and Hb concentration does not increase by 1 g/dL or Hct by 3%) despite compliance with an iron supplementation regimen and the absence of acute illness, further evaluate the anemia by using other tests, including MCV, RDW, and serum ferritin concentration. In women of African, Mediterranean, or Southeast Asian ancestry, mild anemia unresponsive to iron therapy may be due to thalassemia minor or sickle cell trait.
- When Hb concentration or Hct becomes normal for the stage of gestation, decrease the dose of iron to 30 mg/day.
- During the second or third trimester, if Hb concentration is >15.0 g/dL or Hct is >45.0%, evaluate the woman for potential pregnancy complications related to poor blood volume expansion (the thresholds in this list are summarized in the sketch below).
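Taken together, the thresholds in this list form a simple triage, sketched below in Python (the anemia cutoff for the stage of pregnancy, from Table 6, is assumed to be applied separately; the function restates the bullets for illustration only):

```python
def triage_pregnancy_screen(hb_g_dl, hct_pct, trimester):
    """Route a pregnant patient using the thresholds listed above."""
    if hb_g_dl < 9.0 or hct_pct < 27.0:
        return "refer to a physician familiar with anemia during pregnancy"
    if trimester in (2, 3) and (hb_g_dl > 15.0 or hct_pct > 45.0):
        return "evaluate for complications of poor blood volume expansion"
    return "apply the stage-specific anemia criteria (Table 6)"

print(triage_pregnancy_screen(8.5, 26.0, 2))   # refer
print(triage_pregnancy_screen(15.5, 46.0, 3))  # evaluate volume expansion
```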
# Postpartum Women
Women at risk for anemia at 4-6 weeks postpartum should be screened for anemia by using a Hb concentration or Hct test. The anemia criteria for nonpregnant women should be used (Table 6). Risk factors include anemia continued through the third trimester, excessive blood loss during delivery, and a multiple birth. Treatment and follow-up for iron-deficiency anemia in postpartum women are the same as for nonpregnant women. If no risk factors for anemia are present, supplemental iron should be stopped at delivery.
# Men (Males Aged ≥18 Years) and Postmenopausal Women
No routine screening for iron deficiency is recommended for men or postmenopausal women. Iron deficiency or anemia detected during routine medical examinations should be fully evaluated for its cause. Men and postmenopausal women usually do not need iron supplements.
# CONCLUSION
In the United States, iron deficiency affects 7.8 million adolescent girls and women of childbearing age and 700,000 children aged 1-2 years (45 ). Primary health-care providers can help prevent and control iron deficiency by counseling individuals and families about sound iron nutrition during infancy and beyond and about iron supplementation during pregnancy, by screening persons on the basis of their risk for iron deficiency, and by treating and following up persons with presumptive iron deficiency. Implementing these recommendations will help reduce manifestations of iron deficiency (e.g., preterm births, low birthweight, and delays in infant and child development) and thus improve public health.
"id": "70c486e50928fe13f2e400751212a875ebc3383f",
"source": "cdc",
"title": "None",
"url": "None"
} |
CLPPP (Childhood Lead Poisoning Prevention Program) databases contain numerous errors that must be corrected before they can be processed by geocoding software. If each record in a database is formatted properly, the software can correct for errors such as misspellings of street names and cities. A local street map is an excellent resource for verifying street names and number ranges.

# Purpose of These Guidelines
The challenge for public health practitioners and policy makers is to prevent childhood lead poisoning, not just react to it (1). GIS technology is a powerful tool that can be used to effectively target lead poisoning preventive interventions. The addresses of old housing units can be geocoded (geographically located) to identify areas where children at risk for lead poisoning live. Interventions can then be directed to those areas and specific properties to address lead hazards. These guidelines will focus on mapping applications, although GIS also can be used for statistical modeling to predict risk for lead exposure (2). Examples are provided of how GIS mapping technology can use blood lead screening, tax assessor (property), and U.S. census data to develop and improve preventive interventions, especially primary prevention (before children are poisoned).
# Who is at Risk for Lead Exposure?
Lead poisoning is a preventable environmental disease in children (3).
Children under the age of 6 years commonly put things in their mouths that they find around them. This hand-to-mouth behavior increases the young child's risk for ingesting lead-contaminated dust and soil.
CDC estimates that 434,000 children have BLLs >10 µg/dL, CDC's level of concern (4).
Children at greatest risk for lead poisoning are those whose families are poor and live in substandard housing built before 1950.
These children tend to be African American or of Hispanic ethnicity.
Since the 1970s, policies have been implemented to limit the use of lead in products such as gasoline, food and drink cans, solder in pipes, and residential lead paint. Those policies have resulted in dramatic reductions in BLLs for children and adults (5,6,7). They have also reduced lead in our environment. Today, the most common high-dose source of lead exposure for young children in the United States is leaded house paint, including the dust and soil that become contaminated as the paint deteriorates (8,9). Although lead-based paint was banned for residential use in 1978, millions of properties built before that time still contain lead hazards. House paint used before 1950 contained up to 50% lead by weight (10).
In the years between 1950 and 1977, manufacturers voluntarily reduced the concentration of lead in paint. Consequently, even though there is lead-based paint in nearly all houses built before 1977, houses built before 1950 place children at considerably higher risk (11).
At the end of the 20th century, an estimated 38 million housing units had lead-based paint, and 24 million of these had significant lead-based paint hazards.
Low-income families (<$30,000/year) with children younger than 6 years of age occupied 1.2 million of those hazardous units (12).
A 1991-1994 national survey showed the prevalence of children with BLLs >10 µg/dL varied by age of housing: 8.6% for children living in houses built before 1946, 4.6% for those living in houses built from 1946-1973, and 1.6% for those living in houses built after 1973 (13). Children who live in old housing units and are poor are at higher risk for having elevated BLLs than are children from higher income families. For example, the prevalence of elevated BLLs among children living in homes built before 1946 was 16.4% for those from low-income families compared with 4.1% and 0.95% among those from middle- and high-income families, respectively (14). Studies show that property valuation or the assessed values of houses (tax assessor data) can be used as a proxy measure of the structures' condition.
Children living in lesser-valued houses were found to be at greater risk of having elevated BLLs, even after controlling for the age of the house (15). Rental units also have been linked with higher prevalence of lead poisoning. That may reflect a greater likelihood that paint in old rental housing is deteriorated and becomes accessible to children (16).
# How Can GIS Help?
With GIS, maps can be created that show the location and age of every housing unit in an area. These maps can include information on other risk factors for lead poisoning, including population distributions, housing conditions, and BLLs of resident children during a given period.
This information can be used to show the relationship between housing units and risk factors. GIS visually presents these geospatial and temporal relationships; the resulting maps and data have many uses:
- To identify where high-risk children live;
- To assess screening penetration among high-risk groups;
- To obtain a better understanding of changes over time in areas where children at high risk live;
- To evaluate the impact of our targeted screening and other intervention efforts and improve them; and
- To identify those housing units responsible, over a period of years, for multiple cases of childhood lead poisoning (17) (a query of this kind is sketched below).
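As a sketch of that last use, a case registry with geocoded addresses can be grouped by housing unit to surface units with repeat poisonings (Python with pandas; the column names and records are illustrative, not an actual CLPPP schema):

```python
import pandas as pd

# One row per confirmed case; addresses are fictitious.
cases = pd.DataFrame({
    "address": ["12 Elm St", "12 Elm St", "40 Oak Ave", "7 Pine Rd"],
    "year": [1999, 2002, 2001, 2003],
})

# Housing units associated with more than one poisoning over the period.
repeat_units = (cases.groupby("address").size()
                     .loc[lambda counts: counts > 1]
                     .rename("case_count"))
print(repeat_units)  # 12 Elm St -> 2
```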
# What is GIS?
GIS is a computer-assisted system for the acquisition, storage, analysis, and display of geographic data (18). GIS software allows the user to create maps that display spatially related or geographically based data. Data representing geographic features (landscape elements) can be visually displayed as points, lines, and polygons (19). Each type of feature element in a GIS is contained in its own feature layer. A layer can only contain one type of feature. For example, a map displaying a city with streets and locations of individual residences is composed of three feature layers:
- One polygon layer denoting the area boundaries of a city,
- One line layer to denote streets,
- One point layer denoting the location of individual residences.
In a GIS, every layer's geometric relationship to all other layers is prescribed through a process called topology (not to be confused with topography). Topology represents angular relationships (order, adjacency, etc.) that remain constant regardless of map distortion.
[Figure: example feature layers showing polygons, lines, and points]
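A minimal sketch of the three feature types, using the shapely Python library (an assumption on our part; any GIS package with point, line, and polygon types would serve, and the coordinates are arbitrary):

```python
from shapely.geometry import LineString, Point, Polygon

city_boundary = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])  # polygon layer
street = LineString([(-2, 5), (12, 5)])                        # line layer
residence = Point(3, 4)                                        # point layer

# Relationships of the kind a GIS resolves across layers:
print(residence.within(city_boundary))  # True: the home lies inside the city
print(street.crosses(city_boundary))    # True: the street enters and exits it
```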
Any record containing geographic information can be mapped. The geographic data may represent any size geographic area (e.g., a single housing unit, a building, a census block or tract, municipality, ZIP code, county, state, etc.).
Address Geocoding is the process whereby specialized software matches an address against a database of standardized addresses and assigns unique map coordinates for location (i.e., latitude and longitude). Some geocoding software contains "tables of alternate street names" that allow correction for streets having different names over time. Once an address is geocoded, it can be added to a GIS for spatial analysis (19). Geocoding allows us to establish relationships to other geographic identifiers (ZIP code, census area, municipality, block group) and query the data (20).
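A minimal geocoding sketch using the geopy Python library (one of several geocoders; the address is an arbitrary example, and production CLPPP work would use a standardized local street base):

```python
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="clppp-gis-example")
location = geolocator.geocode("1600 Pennsylvania Ave NW, Washington, DC")
if location is not None:
    # Unique map coordinates assigned to the matched, standardized address.
    print(location.latitude, location.longitude)
```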
[Figure: example dot-density and choropleth maps]

The symbology used to display the data reflects how the data are classified (e.g., as dot-density or choropleth maps). Populations for census tracts are usually between 2,500 and 8,000 and for block groups between 600 and 3,000 (detailed descriptions of census levels can be found at ). Census data related to childhood lead poisoning is available from CDC's Lead Poisoning Prevention Branch (see the How CDC Can Help CLPPPs section below for further details).
# Tax Assessor
Tax assessor data (real estate property data) is a source of detailed housing information (22). County and municipal government offices, known for our purpose as the "tax assessor office," collect detailed information on each parcel of land (property) as a record of real estate transactions and to value property for taxes. Each parcel of property in a county has its own unique number, the "assessor's parcel number" (APN), and every county has a unique numbering method. A tax assessor database can be linked to a geographical information system to reveal the actual street location of housing. Tax assessor data is public information, meaning every member of the public has a right to view these data.
There can be more than 400 variables in the real estate database, including whether the property is residential or commercial.
Residential property type is further divided into categories such as single-family, apartment, or condominium. Each record can contain hundreds of variables, including sales price, date of sale, property address, and owner address (street number, street name, street type, ZIP code, and county). Additional information usually includes subdivision name, number of housing units (apartments), number of rooms, number of stories, whether owner or renter occupied, and most important, the year the structure was built. The database may even include information on renovation history. Address information is listed in two categories: owner information and property information.
Owner information is the mailing address provided by the owner to the assessor for tax billing purposes. Property information is the actual property location. If the owner lives at the property, the mailing and property address should be the same.
Real estate data is available commercially but is costly. Tax assessor data is available at little cost, is relatively accurate, and is updated frequently. As a result, local tax assessor data should be used when available.
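Once obtained, an assessor extract can be filtered for the oldest housing stock with a few lines of Python (pandas); the field names and records below are illustrative, not an actual assessor schema:

```python
import pandas as pd

# Hypothetical parcel records; APNs and addresses are fictitious.
parcels = pd.DataFrame({
    "apn": ["001-010", "001-011", "002-003"],
    "situs_address": ["12 Elm St", "14 Elm St", "40 Oak Ave"],
    "housing_type": ["single family", "apartment", "duplex"],
    "year_built": [1948, 1962, 1939],
})

# Pre-1950 parcels: the housing stock at highest risk for lead paint hazards.
pre_1950 = parcels[parcels["year_built"] < 1950]
print(pre_1950[["apn", "situs_address", "year_built"]])
```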
# Getting Started
# GIS Office
Many counties and municipalities have a GIS office (usually in the planning department). CLPPPs are encouraged to seek out their help, especially through a personal visit. Even if the tax assessor does not have a GIS property map or is not aware that one exists, a visit to the county or local GIS office may reveal one. The GIS staff also know the "lingo" of the GIS world and will probably be happy to cooperate. GIS offices can provide additional information that the tax assessor may not be knowledgeable about (resources, persons to contact, current GIS files, etc.). While in the area, visit the local county and city GIS offices, the library (main branch), and the local health officer. It is a good idea to set up appointments with a planned agenda for each stop. In all, expect at least a 1-day visit.
# Obtaining Tax Assessor Data
# Preparing to Visit the Tax Assessor Office
To prepare for the visit, learn about the differences between census data and tax assessor data. For example, "owner address" is where the tax bill is sent, "situs address" is the property location, and "assessor parcel number (APN)" is the unique number assigned to each property.
# Preparing for the Tax Assessor Meeting
Tax assessors are usually very receptive when their data can be used for positive purposes. Explain the reason for the visit, describe the project, and show what information has been collected so far. If commercial data has been acquired, share that with the tax assessor and show where there is missing information (i.e., "year built" may be missing on some early properties).
Present a list of the data one would like to have and be prepared to leave a copy of this request with the tax assessor. A list of fields one might find useful may be found in Appendix D. Most tax assessors will have the bulk of this information. At a minimum, request the following:
- owner information (street number, name, and type; city; state; ZIP code);
- situs information (street number, name, and type; city; state; ZIP code);
- number of units;
- number of stories;
- type of housing (single family residence, apartment, condominium, duplex, triplex);
- year built (the year construction was begun); and
- effective year built (the year construction was completed).
Stress that it is important to obtain information on properties with "year built" data, as well as information on properties that are missing "year built" data entries.

- Many properties contain more than one house, but the properties are only assigned one APN.
- APNs might not be assigned consecutively and therefore cannot be used as a method to ascertain the "year built" when "year built" was missing from a record. (In some counties, however, APNs might be consecutive.)
- There are more municipalities (census places) in the county than listed in the U.S. census, which excludes census places that do not meet criteria, such as minimum population size.
- Most municipalities are represented by a single "tax district" and each property record includes the tax district.
Properties in the same tax district will probably share the same "ZIP code."
This can be useful when filling in missing information, such as ZIP codes.
- City boundaries in the assessor database differ from those in the U.S. Census Bureau's database (larger taxable areas).
- The "offi cial" county street name fi le can be used to verify spellings, directions, and street number ranges.
- ZIP code information may be incomplete on property data, but almost never on owner data (because the tax assessor needs the "ZIP code" to send the owner of the property the tax bill!).
- Residential properties do not include apartments; apartments are listed as commercial property.
- The county may not have collected "year built" information until a few years ago. However, whether a unit was built before 1950 could be determined through the tax category "percent good," which is a method used to fairly tax older housing. Each time property was reassessed, the assessment was reduced by a few percentage points to compensate for inflation (increased value). Therefore, on the basis of accumulating percentage reductions, a property assessed at "percent good" below a certain level (75% in this case) was determined to be a pre-1950 building (this fallback rule is sketched below).
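That fallback can be expressed as a simple rule (Python; the 75% threshold is the one reported for this county, not a universal constant):

```python
def presumed_pre_1950(year_built=None, percent_good=None):
    """Classify a parcel as pre-1950 housing.

    Use "year built" when recorded; otherwise fall back on the cumulative
    "percent good" reductions described above.
    """
    if year_built is not None:
        return year_built < 1950
    return percent_good is not None and percent_good < 75

print(presumed_pre_1950(year_built=1948))  # True
print(presumed_pre_1950(percent_good=70))  # True (below the 75% threshold)
```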
# Analysis
# Questions to Consider
# Housing
- How can pre-1950 housing be located?
- How can one determine which housing has been remediated?
- Will remediation of contaminated housing before habitation prevent a child from being lead poisoned?
- How can the age and condition of housing in which children reside be determined?
- How can housing responsible for multiple poisonings be identified?
# Screening
- Have all of the children at risk been screened? Have some groups of high-risk children been missed?
- Are all children living in poverty at risk?

GIS is also a useful tracking tool for health departments; for example, it can be used to:
- Indicate which houses have been remediated through renovation and are presumably no longer hazardous.
- Monitor other children who later live in the homes to confirm that remediation was done properly and test whether this is an effective method of primary prevention.
- Map addresses where owners have applied for renovation licenses.
Improperly performed renovation and remodeling can create lead hazards when old paint is disturbed. The health department may wish to target special educational programs to residents of neighborhoods where these activities are common.
- Identify homes that are associated with more than one case of lead poisoning.
These data can help a health department focus on neighborhoods of very high risk. They also justify the case to Medicaid for reimbursement of environmental investigation services to prevent future lead poisonings in the same housing unit (see the sketch below).
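One way to operationalize the multiple-poisoning query is sketched below with pandas; the `cases` table, its column names, and the values are hypothetical.

```python
import pandas as pd

# Hypothetical surveillance extract: one row per confirmed case, with the
# child's address already cleaned and standardized.
cases = pd.DataFrame({
    "case_id": [101, 102, 103, 104],
    "address": ["12 ELM ST", "340 OAK AVE", "12 ELM ST", "77 MAIN ST"],
})

# Count cases per address and keep addresses linked to more than one case.
counts = cases.groupby("address").size().rename("n_cases").reset_index()
multi_case_homes = counts[counts["n_cases"] > 1]
print(multi_case_homes)  # candidate housing units for environmental follow-up
```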
# Limitations
# Census
There are limitations to using census data:
- Although certain basic demographic and housing questions were asked for every person in the United States, detailed information on housing (including the year a structure was built) was collected on a subset sampled at a 1-in-6 rate (long form).
- Data on the year the structure was built are susceptible to errors of response and nonreporting because respondents must rely on their memories or on estimates by persons who have lived in the neighborhood a long time (26). Available evidence indicates there is underreporting in the older year-structure-built categories, especially "built in 1939 or earlier" (27).
- Data are only available every 10 years. In the time from one census to the next, some units are lost through attrition, and the demographics of neighborhoods change.
- Age of housing below the block group level is not available.
# Tax Assessor
There are some limitations to using tax assessor data, too; not all counties have tax assessor information in an electronic form.

If the structure was renovated properly, other children who eventually live in these renovated housing units will not be lead poisoned (28). Children who are lead poisoned and move into these renovated units should initially demonstrate decreases in their BLLs, followed by lead values below 10 µg/dL over time.
The capacity to achieve the 2010 elimination goal is directly related to the ability to target strategies to geographic areas (29).
Geocoding (street address matching or assignment of map coordinates) will be the basis for data linkage and analysis in the 21st century.
The versatility of GIS supports the exploration of spatial relationships, patterns, and trends that may otherwise go unnoticed (30). This technology also allows for the linking of nongeographic data, such as blood lead levels, to geographic locations, and supports analysis of all data tied to a geographic location.
Traditional biostatistical and spatially based data analytic methods can be used to estimate risk for lead exposure (31).
GIS is a powerful tool that can precisely locate the home of a child at risk from exposure to lead. This level of information is necessary for public health professionals to accurately assess the extent of childhood lead poisoning, to identify new cases, and to evaluate the effectiveness of prevention activities. However, public presentation or release of maps at this level is discouraged.
Public access to data below the county level is prohibited or severely restricted because of confi dentiality and privacy issues. A major challenge in the coming decade will be to increase public access to GIS information without compromising confi dentiality (32).
New methods must be developed to identify these high-risk children, whose homes may be dispersed over a large geographic area.

Relevant Healthy People 2010 objectives include the following:

- Eliminate blood lead levels >10 µg/dL in children (section 8-11).
- Increase the proportion of persons living in pre-1950s housing that has been tested for the presence of lead-based paint (section 8-22).
- Increase the proportion of all major national, State, and local health data systems that use geocoding to promote nationwide use of geographic information systems (GIS) at all levels.
Public health rests on information.
Increased geocoding in health data systems will provide the basis for more cost effective disease surveillance and intervention (section 23-3).
# Internet Resources
Listed below are some websites for readers who want to learn more about geographical information systems (GIS).
# Glossary of Terms
Attribute-Data about a map feature.
Attributes of a property include address, year built, value, and number of apartments.

Common keystroke errors include the following:

- Multiple spaces - caused by pressing the space bar more than necessary.
- Punctuation - , ; . / \ ' = -
- Shift key errors - ! @ # $ % ^ & * ( ) _ + " : ?. Most entries of this type probably represent attempts to enter the numbers 1 through 10.
# Street Numbering Errors
- Street numbers present numerous obstacles (see Table A.1 for summary):
- Address appears as a single field - lack of space between number, name, type, and direction.
- Direction is placed in front, the middle, or at the end of a number, such as: W1224 or 1224W or 1W224.
- Punctuation inserted improperly, as noted above.
# Street Name Errors
- Street names can present the most difficulties (see Table A.1 for summary):
- Misspellings are frequently encountered where names have been merged or separated, transposed, or have letters added or dropped. These errors are especially difficult to find and fix when the first letter is wrong (e.g., Butenberg rather than Gutenberg).
- Inconsistent Abbreviations - Two common errors involve the words "Mount" and "Saint" in street names, as the sketch below illustrates.
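The sketch below illustrates, in Python, the kind of cleanup these keystroke, numbering, and abbreviation errors require before geocoding; the regular expressions are rough heuristics for the error types above, not a complete or authoritative rule set.

```python
import re

# Illustrative cleanup of a raw address field before geocoding.
def clean_address(raw: str) -> str:
    addr = raw.upper().strip()
    # Strip punctuation and shift-key errors described above.
    addr = re.sub(r"[!@#$%^&*()_+\"':;,.\\/=?-]", " ", addr)
    # Collapse multiple spaces caused by extra space-bar presses.
    addr = re.sub(r"\s{2,}", " ", addr)
    # Move a leading direction letter behind the street number (W1224 -> 1224 W).
    addr = re.sub(r"^([NSEW])(\d+)", r"\2 \1", addr)
    # Expand "Mount"; "Saint" needs a more careful rule, since "ST" is also
    # the usual abbreviation for "Street".
    addr = re.sub(r"\bMT\b", "MOUNT", addr)
    return addr.strip()

print(clean_address("w1224  mt. vernon   ave"))  # -> "1224 W MOUNT VERNON AVE"
```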
# APPENDIX A

Preparing Data for GIS Use - Problems to Avoid

Geocoding is the process whereby specialized software assigns unique map coordinates (i.e., latitude and longitude) to each address in a database. Once an address is geocoded, it can be added to a geographical information system (GIS) for spatial analysis. Geocoding software can also correct errors in addresses, but addresses must be set up in a proper format for this to occur.
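For readers who want to see the idea in code, here is a minimal sketch using the third-party geopy package and the public Nominatim service; the address and user-agent string are placeholders, and a production CLPPP workflow would more likely geocode in bulk against a local street file.

```python
from geopy.geocoders import Nominatim  # third-party package: pip install geopy

# "clppp-gis-example" is a hypothetical application name.
geolocator = Nominatim(user_agent="clppp-gis-example")
location = geolocator.geocode("1600 Pennsylvania Ave NW, Washington, DC")
if location is not None:
    # Map coordinates that can be loaded into a GIS layer.
    print(location.latitude, location.longitude)
```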
Many problems are encountered in preparing Childhood Lead Poisoning Prevention Program (CLPPP) databases for GIS use (see Table A.1).
"id": "da1f3aa4a691502fadd03ad307ac0d5985dae998",
"source": "cdc",
"title": "None",
"url": "None"
} |
This document is in the public domain and may be freely copied or reprinted. Mention of any company or product does not constitute endorsement by the National Institute for Occupational Safety and Health (NIOSH). In addition, citations to Web sites external to NIOSH do not constitute NIOSH endorsement of the sponsoring organizations or their programs or products. Furthermore, NIOSH is not responsible for the content of these Web sites. All Web addresses referenced in this document were accessible as of the publication date. To receive documents or other information about occupational safety and health topics, contact NIOSH at Telephone: 1-800-CDC-INFO (1-800-232-4636).

# Foreword
"Do occupational exposures to engineered nanoparticles pose an unintended risk of adverse health effects?" This is not an abstract or theoretical question that prac titioners have the luxury of debating for years before it becomes a reality. Nanotech nology is a reality, with potential for great growth in the 21st Century. Workers are already engaged in processes in which they may be exposed to materials that never existed before in nature. We do not fully know how these engineered nanoparticles may enter the body, where they may travel once inside, or what effects they may have on the body' s systems. We do not fully know whether or how effects may differ for chemically or structurally different particles at the nanoscale. Diverse stakeholders have agreed that research to address these questions is essential for the responsible development of nanotechnology.
As research progresses to answer those questions, the National Institute for Occupational Safety and Health (NIOSH) has recommended prudent precautionary interim measures for reducing work-related exposures and assessing potential risk. In the hierarchy of prevention, it is important to consider where it may be of value to provide medical screening of workers who may be exposed to a potential health hazard, but who may be asymptomatic-that is, who have no identifiable symptom of an occupational disease. On the frontiers of nanotechnology, where as yet little data exist for assessing risk with confidence, it is difficult to recommend specific screening tests. NIOSH has sought a wide range of opinions on the matter and along with its own review of the scientific literature presents this interim guidance for medical screening and hazard surveillance. The evidence base on the health effects of engineered nanoparticles is rapidly growing and NIOSH will continue to monitor and assess it and will update these recommendations as more definitive information becomes available.
# Executive Summary
Concerns have been raised about whether workers exposed to engineered nanoparticles are at increased risk of adverse health effects. The current body of evidence about the possible health risks of occupational exposure to engineered nanoparticles is quite small. While there is increasing evidence to indicate that exposure to some engineered nanoparticles can cause adverse health effects in laboratory animals, no health studies of workers exposed to the few engineered nanoparticles tested in animals have been published. The purpose of this document from the National Institute for Occupational Safety and Health (NIOSH) is to provide interim guidance about whether specific medical screening, including performing medical tests on asymptomatic workers, is appropriate for these workers.
Medical screening is only one part of what should be considered a complete safety and health management program. An ideal safety and health management program follows a hierarchy of controls and involves various occupational health surveillance measures. Since specific medical screening of asymptomatic workers exposed to engineered nanoparticles has not been extensively discussed in the scientific literature, this document makes recommendations based upon what is known until more rigorous research can be performed.
Currently there is insufficient scientific and medical evidence to recommend the specific medical screening of workers potentially exposed to engineered nanoparticles. Nonetheless, this lack of evidence does not preclude specific medical screening by employers interested in taking precautions beyond existing industrial hygiene measures. If nanoparticles are composed of a chemical or bulk material for which medical screening recommendations exist, these same screening recommendations would be applicable for workers exposed to engineered nanoparticles as well. As research into the hazards of engineered nanoparticles continues, vigilant reassessment of available data is critical to determine whether specific medical screening is warranted for workers. In the interim, the following recommendations are provided for workplaces where workers may be exposed to engineered nanoparticles in the course of their work:
- Take prudent measures to control exposures to engineered nanoparticles.
- Conduct hazard surveillance as the basis for implementing controls.
- Continue use of established medical surveillance approaches.

NIOSH will continue to collect and evaluate new research findings and update its recommendations about medical screening programs for workers exposed to nanoparticles.
# Introduction
Nanotechnology is a system of innovative methods for controlling and manipulating matter at the near-atomic scale to produce engineered materials, structures, and devices. Engineered nanoparticles are generally considered to include a class or subset of these manufactured materials with at least one dimension of approximately 1 to 100 nanometers (www.nano.gov/html/facts/whatIsNano.html). At these scales, materials often exhibit unique properties beyond those expected at the chemical or bulk level that affect their physical, chemical, and biological behavior. The term "ultrafine" is also frequently used in the literature to describe particles with dimensions less than 100 nanometers that have not been intentionally produced (e.g., manufactured) but are the incidental products of processes involving combustion, welding, or diesel engines. It is currently unclear whether a distinction in particle terminology is justified from a safety and health perspective if the particles have the same physicochemical characteristics.

The potential occupational health risks associated with the manufacture and use of nanomaterials are not yet clearly understood. Many engineered nanomaterials and devices are formed from nanometer-scale particles (i.e., nanoparticles) that are initially produced as aerosols or colloidal suspensions. Exposure to these materials during manufacturing and use may occur through inhalation, dermal contact, or ingestion; however, inhalation exposure is the main route of concern. There is very limited information available about dominant exposure routes, the potential for exposure, and material toxicity.
At this time, society in general and companies in particular are faced with the dilemma of balancing a desire to expand a potentially bountiful technology against the potential hazards that may result. The real risks from the technology are not known, and the perceived risks are undetermined. In this regard, nanotechnology is no different from any other emerging technology. One of the first areas where exposures to engineered nanoparticles will occur is in the workplace. In the face of uncertainty about the hazards of nanoparticles, a corporate or societal response (such as implementing appropriate occupational health surveillance measures) may assure the public that appropriate efforts are being taken to identify and control potential hazards in a timely fashion.
Concerned individuals from government, industry, labor, and academia, together with occupational health professionals and medical personnel, have raised questions about whether workers exposed to engineered nanoparticles should be provided some type of medical surveillance. The purpose of this document is to provide interim guidance concerning specific medical screening for these workers-that is, medical tests for asymptomatic workers-until additional research either supports or negates the need for this type of screening. The type and degree of screening recommended here is in addition to any medical surveillance taking place as part of existing occupational health surveillance efforts.
# Frequent Uses for Medical Surveillance
Medical examinations and tests are used in many workplaces to determine whether an employee is able to perform the essential functions of the job, with or without reasonable accommodation, without posing a direct and imminent threat to the safety or health of the worker or others. Workplace medical examinations must be conducted in compliance with the Americans with Disabilities Act of 1990 (ADA) (Public Law No. 101-336). For example, this law prohibits making a job offer contingent upon the applicant's submission to a medical examination. Still, medical examinations and examinations conducted before placing a worker in a given job could potentially provide useful baseline information in a variety of ways. For example, even if there appears to be no reason for immediate concern about exposure to engineered nanoparticles in a particular workplace setting, this type of baseline data may benefit employers and workers alike if questions come up later regarding potential worker health problems associated with the specific engineered nanoparticle.
Medical surveillance of workers is also required by law when there is exposure to a specific workplace hazard. Although OSHA does not have a standard that specifically addresses occupational exposure to engineered nanoparticles, OSHA has a number of standards that require medical surveillance of workers. Workplaces with engineered nanoparticles comprised of chemicals addressed by current OSHA standards (Appendix B) are subject to the requirements of those standards, including the requirements for medical surveillance. In addition, medical surveillance of workers handling engineered nanoparticles may also be triggered when workers are exposed to other hazardous substances (e.g., those listed in Appendix B) present in nanoparticle operations. NIOSH also recommends medical surveillance (including screening) of workers when there is exposure to certain occupational hazards (Appendix C). None of the hazards noted in Appendix C are identified as engineered nanoparticles, but medical surveillance would apply to workers exposed to nanoparticles comprised of chemicals for which NIOSH has a recommendation. The medical surveillance of these workers may provide useful information if questions arise in the future about the health effects of their exposure to nanoparticles.
# Hazard Surveillance and Risk Management
Hazard surveillance involves identifying potentially hazardous practices or exposures in the workplace and assessing the extent to which they can be linked to workers, the effective

The current body of evidence about the possible health risks of occupational exposures to engineered nanoparticles is not sufficient to support the determination of specific medical screening to identify preclinical changes associated with exposure to engineered nanoparticles. No substantial link has been established between occupational exposure to engineered nanoparticles and adverse health effects. In addition, the toxicological research to date is insufficient to recommend such monitoring, the appropriate triggers for it, or components of it. As the volume of research on the potential health effects from exposure to engineered nanoparticles increases, continual reassessment will be needed to determine whether medical screening is warranted for workers who are producing or using engineered nanoparticles. NIOSH will continue to examine new research findings and update its recommendations on medical screening programs for workers exposed to nanoparticles. Appendix D provides a brief discussion concerning occupational health programs that include medical screening and might serve as a model for future reference for one or more engineered nanoparticles. Appendix E provides discussion highlighting details of instances where sufficient evidence to support recommendations for specific medical screening for workers exposed to engineered nanoparticles is lacking.
# Nanoscale Cadmium
Cadmium is a substance that has medical screening recommendations for workers exposed in order to prevent or assess lung and kidney toxicity (see Appendices B and C). At a minimum, these recommendations should pertain to nanoscale cadmium (e.g., such as that used in the production of quantum dots). Medical screening is typically triggered by the airborne concentration of the substance in the workplace (e.g., the "action level" concentration). An action level is some fraction, usually 50%, of an occupational exposure limit (OEL). Whether the action level concentration recommended for non-nanoscale cadmium particles is adequate for nanoscale cadmium is unknown. Workplaces with engineered nanoparticles of materials addressed by current OSHA standards are subject to the requirements of those standards, including the requirements for medical surveillance.
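As a worked illustration of the action-level arithmetic (the PEL figure is from the OSHA cadmium standard, 29 CFR 1910.1027, not from this document):

$$\text{action level} = 0.5 \times \text{OEL} = 0.5 \times 5\ \mu\text{g/m}^3 = 2.5\ \mu\text{g/m}^3\ \text{(8-hour TWA)}$$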
"id": "f8582c2c951b00bec464a15a66254f48567ab65a",
"source": "cdc",
"title": "None",
"url": "None"
} |
# SYNOPSIS
This report reviews available scientific and technical information on furfuryl alcohol and recommends a standard for the control of furfuryl alcohol hazards in the workplace.
The acute hazards include eye, skin, and upper respiratory irritation from direct contact and depression of the central nervous system from inhalation or percutaneous absorption. Evidence of a chronic effect is not available.
Other hazards associated with the compound include combustibility and violent reactivity when mixed with acids.
About 9,000 workers are potentially exposed to furfuryl alcohol in the United States, and about 100 million pounds are produced and used each year. Its main use is in the synthesis of furan resins, which are used in such operations as the binding of foundry core sand.
In the absence of data indicating that the existing permissible limit is not protective, the standard proposed to the US Department of Labor contains recommendations for continuation of the present Federal limit of 200 mg/cu m as a time-weighted average concentration and also for a program of medical and environmental surveillance.
Based in part on information gathered from plant site visits and reviewer comments, work practices are recommended to reduce fire and explosion hazards and to minimize skin contact.
Requirements for posting and labeling, a respiratory protection program, an education program for employees, and the maintenance of relevant records are also included.
Suggested research to correct deficiencies in available information includes: (1) epidemiologic studies of employees exposed to furfuryl alcohol; (2) animal studies of any chronic effects of furfuryl alcohol, including experimental studies of its possible carcinogenicity; (3) possible effects on reproduction, including whether or not furfuryl alcohol can cause terata or induce mutations; and (4) biotransformation of furfuryl alcohol, its distribution, and elimination.

Compliance with all sections of the recommended standard should prevent adverse effects of exposure to furfuryl alcohol on the health of employees.
Methods recommended in the standard are measurable by techniques that are reproducible and available to industry and government agencies.
Sufficient technology exists to permit compliance with the recommended standard.
Although NIOSH considers the workplace environmental limit to be a safe level based on current information, the employer should regard it as the upper boundary of exposure, and every effort should be made to maintain the exposure at levels as low as is technically feasible. The criteria and recommended standard will be reviewed and revised as necessary.
This recommended standard for furfuryl alcohol is based on the limited information available on the effects from exposure to furfuryl alcohol. The standard is designed to safeguard workers occupationally exposed to furfuryl alcohol from absorption of the compound, possible subsequent irritation of the skin, eyes, and respiratory tract, from central nervous system (CNS) effects, and from the hazard arising from its violent, possibly explosive, reaction when in contact with acids.
These criteria and the recommended standard apply to occupational exposure of employees to hydroxymethyl furan, referred to as "furfuryl alcohol." Synonyms for furfuryl alcohol include 2-furylmethanol, 2-furylcarbinol, 2-furan methanol, furfural alcohol, and 2-(hydroxymethyl) furan.
The major industrial use of furfuryl alcohol is in the production of furan resins, which are used as corrosion-and heat-resistant materials, particularly in the foundry industry.
An action level is defined as one-half the recommended time-weighted average (TWA) environmental limit. Occupational exposure to furfuryl alcohol is defined as exposure to airborne furfuryl alcohol above the action level.
Exposures at lower concentrations will not require adherence to the following sections except for 2(a,e), 3(a), 4(a,c), 5, 6(a,d,e,f), 7, and 8(a). Procedures for the collection and analysis of environmental samples shall be as provided in the appendix or by any other methods at least equivalent in precision, accuracy, and sensitivity to the methods specified.
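Applying this definition to the 200 mg/cu m TWA recommended in the Synopsis gives:

$$\text{action level} = \tfrac{1}{2} \times 200\ \text{mg/cu m} = 100\ \text{mg/cu m}$$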
# Section 2 -Medical
Medical surveillance as described below shall be made available to all employees occupationally exposed to furfuryl alcohol. The employer shall provide information to the physician performing or responsible for the medical surveillance program such as: the requirements of the applicable standard; an estimate of the employee's potential exposure to furfuryl alcohol, including any available workplace sampling results; and a description of any protective devices or equipment the employee may be required to use.

(c) Following completion of the examination, the physician shall give to the employer a written statement specifying any condition or abnormality found which would increase the risk to the employee's health by exposure.

Respirators specified for use in higher concentrations of furfuryl alcohol may be used in atmospheres of lower concentrations.

precautions have been taken to ensure that prescribed procedures will be followed.
Before entry, confined spaces shall be inspected and tested for oxygen deficiency and for the presence of furfuryl alcohol and other known or suspected contaminants such as formaldehyde or phenol.
No employee shall enter any confined space that does not have an entry large enough to admit an employee wearing safety harness, lifeline, and appropriate respiratory equipment as specified in Section 4(b).
Confined spaces shall be ventilated while work is in progress to keep the concentration of airborne furfuryl alcohol at or below the recommended environmental limit, to keep the concentration of other contaminants below dangerous levels, and to prevent oxygen deficiency.
Anyone entering a confined space shall be observed from the outside by another properly trained and protected worker. An additional supplied-air or self-contained breathing apparatus with safety harness and lifeline shall be located outside the confined space for emergency use.
The person entering the confined space shall maintain continuous communication with the standby worker.
(b) Smoking shall be prohibited in areas where furfuryl alcohol is produced, processed, stored, or otherwise used.
(c) Employees who handle furfuryl alcohol shall be instructed to wash their hands thoroughly before eating, smoking, or using toilet facilities.
(d) Facilities such as double lockers should be provided for employees so soiled clothing can be stored separately from clean clothing.

animals, described the composition and properties of coffee oil and reviewed previous chemical data on furfuryl alcohol, one of its primary ingredients.
Erdmann administered furfuryl alcohol (derived from coffee oil) to 14 rabbits weighing 1.37-3.10 kg (sex, strain, and use of controls were not described)
and to 1 female dog of unstated breed weighing 10.61 kg. The rabbits received furfuryl alcohol in 25 or 50% aqueous solutions, 11 by subcutaneous (sc) injection and 3 by gastric intubation, at single doses ranging from 230 to 1,330 mg/kg of body weight.
The sc lethal dose was 526-600 mg/kg.
Lethal amounts produced significant decreases in rectal temperature followed by what was described as respiratory paralysis within 4-24 hours.
In addition to effects on body temperature and respiration, the rabbits showed a pattern of increased mucus secretion, salivation, and lacrimation.
There were also increased frequency of urination and defecation, and, at higher doses, lethargy or sleepiness approaching narcosis, labored breathing, coma, and, terminally, cessation of breathing. A necropsy performed on one rabbit 12 hours after it died revealed no unusual organ changes.
The dog was given furfuryl alcohol as a 50% aqueous solution at a total dose of 520 mg/kg in two sc injections 30 minutes apart. Within minutes after the second injection, sneezing and vomiting began and continued for about 20 minutes and for 2 hours, respectively. Diarrhea, bloody feces, appetite loss, lassitude, and reduction in rectal temperature persisted through the next day. Two days later, however, the dog appeared fully recovered.
Erdmann, in his study of three men (including himself), noted that furfuryl alcohol in small, single oral doses of 0.6-1.0 g in 5% aqueous solution consistently increased the rate of respiration.
In 1927, Okubo described the results of in vivo and in vitro studies of the effects of pure furfuryl alcohol on mice, rabbits, and guinea pigs.
The mice were injected sc with furfuryl alcohol in physiologic saline (0.5-1.0% solution). At 10 mg/kg, furfuryl alcohol had little or no effect, whereas at 50 mg/kg the mice showed marked respiratory depression, weakened reflexes, and disturbed gait but recovered within 4-5 hours.
At 100 mg/kg, furfuryl alcohol was lethal within 3-5 hours, death being attributed to respiratory paralysis. Furfuryl alcohol injected intravenously (iv) at unspecified doses reportedly inhibited respiration and also reduced blood pressure in urethane-anesthetized rabbits. No effects from ingestion of furfuryl alcohol at these levels were reported.
A dog was given furfuryl alcohol, 1 g/day, by stomach tube for 42 days. After a 1-month recovery period, the animal then received furfural, at 1 g/day, by stomach tube for 56 days . The only effect seen was occasional salivation after administration.
No changes were evident during the subsequent 1-year observation period.
# Effects on Humans
Jacobson et al., in 1958, reported the results of experiments conducted on 13 volunteers to determine the odor threshold of furfuryl alcohol.
Each volunteer sniffed geometrically increasing concentrations of furfuryl alcohol.
The median detectable concentration for furfuryl alcohol was 7-8 ppm (28-32 mg/cu m). All volunteers were able to detect the furfuryl alcohol at 10 ppm (40 mg/cu m), and they described the odor as "sweet," "alcoholic," or "etherlike."

Apol, in 1973, reported the results of a health hazard evaluation conducted by NIOSH at a foundry.
One or two workers on each shift produced cores for iron castings prepared by a two-stage, air-set cure process.
The first stage involved the construction of a large core and required 10-15 minutes; the second, the cure stage, required 45 minutes.
The substances used in the process included a mixture of furfuryl alcohol and paraformaldehyde, a phosphoric and sulfuric acid mixture, and sand. After these substances were mechanically mixed, they were poured into the mold. This process was usually performed at room temperature; however, in cold weather the sand was heated before mixing.
The high temperature of the sand apparently caused the release of furfuryl alcohol and formaldehyde vapors.
During the coremaking and the core-curing stages, air samples were collected with charcoal tubes; all such samplings were repeated when hot sand was used.
The furfuryl alcohol concentration was then determined by gas chromatography.
During the 15-minute core preparation, under normal temperature conditions, the concentration of furfuryl alcohol was 8.6 ppm (34.4 mg/cu m). When warm sand was used during the same cure period, the concentration of furfuryl alcohol was 10.8 ppm (43.2 mg/cu m). None of the three exposed employees reported any discomfort during those operations. However, when the hot sand was used during the 15-minute core preparation, the concentration of furfuryl alcohol was 15.8 ppm (63.2 mg/cu m).

This was assumed to ensure an approximate saturated test atmosphere in the chamber that also contained a small quantity of liquid furfuryl alcohol.
The concentration of the vapor, however, was calculated from the mass of compound trapped in a collection bubbler containing glacial acetic acid and from the sample volume as measured by a wet-test meter.
The furfuryl alcohol of the absorbing medium was determined using a titrimetric method.
Thirty rats were divided into five groups of six rats each. Three groups were exposed for 4 hours, and the remaining two groups were exposed for 8 hours to the saturated furfuryl alcohol vapor.

The first injection of 100 mg/kg produced only a slight and temporary decrease in the blood pressure and respiration.
After the total dose of furfuryl alcohol exceeded 500-600 mg/kg, each injection resulted in a severe drop in blood pressure and in a temporary apnea.
After a total dose of 800-1,400 mg/kg, the animals died from respiratory paralysis.
No information on human biotransformation of furfuryl alcohol has been found.
The effects of furfuryl alcohol on humans and animals are summarized in Tables II-1, II-2, and II-3.
# Carcinogenicity, Mutagenicity, Teratogenicity, and Effects on Reproduction
No reports were found on whether there are carcinogenic or teratogenic effects from furfuryl alcohol.
There was one report that indicated that furfuryl alcohol was not mutagenic in tests with Salmonella in vitro.

The exothermic hardening reaction increased the evaporation rates of these compounds and, therefore, the concentrations of their airborne compounds. The concentrations of these airborne gases were measured in the coremaking areas.

Sampling can be performed by drawing contaminated air through a collecting device at a measured flowrate that is low enough to ensure complete absorption of the contaminant, i.e., the slower the sampling rate, the higher the absorption efficiency should be in any specific system. Furfuryl alcohol can be sampled in at least three kinds of absorbers that use liquid collection media. Simple gas washing bottles, such as midget impingers, may be used, but, since the degree and duration of contact between the sampled air and the liquid are not maximized, it may be necessary to use two or more of these absorbers in series in order to achieve maximum collection efficiency.
Simple gas washing bottles have the advantage of being simply constructed and easy to clean, and they require only a small volume of liquid.
The spiral or helical absorber can also be used for organic vapor collection.
Although it is larger and contains more liquid than a gas washing bottle, its chief advantage is a higher collection efficiency because of the longer contact path between the sampled air and the collection liquid.
Fritted bubblers provide high collection efficiency because of the extensive gas-liquid contact and the low flowrate used (0.5-1.0 liters/minute), but do not have the advantage of small size and volume and ease of cleaning.
Collection of furfuryl alcohol vapor in liquid allows a wide variety of analytical techniques such as oxidation by bromine, polarography, or UV spectrophotometry.
However, the use of a liquid impinger for collection of breathing zone samples is at least inconvenient. Successful use of collectors containing liquids requires careful handling of glassware during collection and shipment of samples to avoid breakage and spillage. Potassium iodide was added to the brominated sample to permit subsequent titration of the liberated iodine.
According to the authors, the overall error of this method is ±1%, provided that the total furfural present in the sample does not exceed 20%.
This method has been used to measure furfuryl alcohol in air after sampling at less than 1.5 liters/minute through a fritted bubbler containing glacial acetic acid.
Gas chromatography has become a prevalent method of detection and analysis of organic materials. This technique has been used in the occupational environment in conjunction with sampling the breathing zone on a solid sorbent followed by carbon disulfide desorption.
Pyridine, however, when used as a desorption solvent, has given excellent desorption efficiencies.
This method has been used to evaluate worker exposure in furfuryl alcohol production facilities. For a 10-liter sample, the lower detection limit of the method was 0.8 mg/cu m, but the authors calculated that by sampling 100 liters of air, this limit would be 0.08 mg/cu m.
Gas chromatography has also been tested with an adsorbent normally used as a column packing for gas chromatographs.

Therefore, when furfuryl alcohol is handled in an open system, a ventilation system, such as a hood, glove box, or local exhaust system, may be necessary.
In addition, a ventilation system is desirable as a standby if a closed system should fail.
The principles set forth in Industrial Ventilation-A Manual of Recommended Practice and Fundamentals Governing the Design and Operation of Local Exhaust Systems, ANSI Z9.1-1971, should be applied to control atmospheric concentrations and prevent the release of raw materials, furfuryl alcohol products, or wastes during those operations when exposure is possible.
To ensure effective operation of ventilation systems, routine inspection should include face velocity measurements of the collecting hood, examination of the air mover and collection or dispersion system, and measurements of atmospheric concentrations of furfuryl alcohol in the work environment.
Any changes in the work operation, process, or equipment that may affect the ventilation system must be promptly evaluated to ensure that control measures provide adequate protection for employees.
All facilities require frequent inspection and preventive maintenance to ensure that leaks are readily detected and repaired to avoid exposure of employees. Exhaust gases that may contain furfuryl alcohol or hazardous raw materials or wastes should be prevented from being released into the occupational and community environments. In the event of eye contact with furfuryl alcohol, the affected area should be flushed with a copious flow of water followed by appropriate medical attention.
Standard procedures should be formulated for maintenance and repair of engineering control systems; these procedures should not depend on the use of respiratory protection.
There are several essential elements in these maintenance procedures.
Tanks, pumps, valves, or lines must be drained and thoroughly flushed with water or steam prior to repair activities, especially welding, grinding, or other operations that might offer an ignition source for flammable vapor or combustible liquid.
All personnel entering confined spaces must be supplied with whole body protection, such as overalls or impermeable clothing, and suitable respiratory protection in accordance with Table 1. Workers should wear this respiratory protective equipment unless prior measurements indicate that the air concentration of furfuryl alcohol is at or below the recommended TWA concentration limit and that there is an acceptable oxygen concentration (about 20%).
A second properly protected worker must be on standby outside the confined space, and effective communication must be maintained between all involved persons.
A safety harness and lifeline should be used.

A monkey exposed to furfuryl alcohol vapor for 6 hours at 1,040 mg/cu m had only very slight lacrimation, but when exposed to 956 mg/cu m for 6 hours/day for 3 days, it showed no effects whatsoever. In addition, dermal penetration has been demonstrated in animals in tests indicative of similar action in humans; it seems from this and from its lipid solvent characteristics (based on its miscibility with such common lipid solvents as ethyl ether mentioned in Table IX)

Employees should also be instructed as to their responsibilities, complementing those of their employers, in preventing effects of furfuryl alcohol on their health and in providing for their safety and that of their fellow workers.
These responsibilities of employees apply primarily in the areas of sanitation and work practices, but attention should be given in all relevant areas so that employees faithfully adhere to safe procedures.
# (f) Work Practices
Adverse effects from exposure to furfuryl alcohol are possible by skin contact [30,32]. For operations that may increase the concentration of airborne furfuryl alcohol in the work environment, adequate ventilation must be used at all times.

Anyone entering the area of an accidental leak or spill must be protectively clothed to prevent accidental contacts with the skin or eyes and must wear suitable respiratory protective devices if needed.

All tubes must be packed with Porapak Q from the same manufacturer's lot.
(2) The smaller section of Porapak Q is used as a backup and should be positioned nearer the sampling pump.
(3) The tube should be placed in a vertical direction during sampling to minimize channeling through the Porapak Q.
(4) Air being sampled should not be passed through any hose or tubing before entering the Porapak Q tube.
(5) A sample size of 6 liters is recommended.

Cap and shake the sample vigorously. Desorption is complete in 15 minutes.
Complete analysis within 1 day after the furfuryl alcohol is desorbed.
(3) Gas Chromatographic Conditions
The typical operating conditions for the gas chromatograph are:
- 50 ml/minute (60 psig) nitrogen carrier gas flow;
- 65 ml/minute (24 psig) hydrogen gas flow to detector;
- 500 ml/minute (50 psig) airflow to detector;
- 225 C injector manifold temperature;
- 225 C detector manifold temperature;
- 200 C column temperature.
A retention time of approximately 11 minutes is to be expected for furfuryl alcohol under these conditions and using the column recommended in Apparatus (d).
The acetone will elute from the column before the furfuryl alcohol.
(4) Injection
The first step in the analysis is the injection of the sample into the gas chromatograph.
To eliminate difficulties arising from blowback or evaporation of solvent within the syringe needle, one should employ the solvent flush injection technique.
The 10-µl syringe is first flushed with solvent several times to wet the barrel and plunger. Three microliters of solvent are drawn into the syringe to increase the accuracy and reproducibility of the injected sample volume. The needle is removed from the solvent, and the plunger is pulled back about 0.2 µl to separate the solvent flush from the sample with a pocket of air to be used as a marker. The needle is then immersed in the sample, and a 5-µl aliquot is withdrawn, taking into consideration the volume of the needle, since the sample in the needle will be completely injected. After the needle is removed from the sample and prior to injection, the plunger is pulled back 1.2 µl to minimize evaporation of the sample from the tip of the needle. It should be observed that the sample occupies 4.9-5.0 µl in the barrel of the syringe.
Duplicate injections of each sample and standard should be made. No more than a 3% difference in area is to be expected.
It is not advisable to use an automatic sample injector because of possible plugging of the syringe needle with Porapak Q.
(5) The area of the sample peak is measured by an electronic integrator or some other suitable form of area measurement, and results are read from a standard curve prepared as discussed below.
(e) Determination of Desorption Efficiency (1) The desorption efficiency of a particular compound can vary from one laboratory to another and also from one batch of Porapak Q to another.
Thus, it is necessary to determine the fraction of the specific compound that is removed in the desorption process for a particular batch of Porapak Q.
(2) Porapak Q, equivalent to the amount in the first section of the sampling tube (150 mg), is measured into a 64-mm, 4-mm ID glass tube, flame sealed at one end. This Porapak Q must be from the same batch as that used in obtaining the samples.
The open end is capped with Parafilm. A known amount of a benzene solution of furfuryl alcohol containing 300 mg/ml is injected directly into the Porapak Q with a microliter syringe, and the tube is capped with more Parafilm. The amount injected is equivalent to that present in a 6-liter air sample at the selected level. It is not practical to inject the neat liquid directly onto the Porapak Q because the amounts to be added would be too small to measure accurately.
Six tubes at each of three levels, equivalent to 100, 200, and 400 mg/cu m, are prepared in this manner and allowed to stand for at least overnight to assure complete adsorption of the furfuryl alcohol onto the Porapak Q.
These tubes are referred to as the samples.
A parallel blank tube should be treated in the same manner except that no sample is added to it.
The sample and blank tubes are desorbed and analyzed in exactly the same manner as the sampling tube described in Analysis of Samples.
Two or three standards are prepared by injecting the same volume of furfuryl alcohol into 1.0 ml of acetone with the same syringe used in the preparation of the samples.
These are analyzed with the samples.
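The desorption efficiency implied by this procedure can be written as follows (the notation is ours, not NIOSH's):

$$DE = \frac{m_{\text{found}} - m_{\text{blank}}}{m_{\text{added}}}$$

where $m_{\text{found}}$ is the mass recovered from a spiked tube, $m_{\text{blank}}$ the mass found on the parallel blank tube, and $m_{\text{added}}$ the mass originally injected; the average DE at each level is then used to correct field samples.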
and 400 mg/cu m were 1.6% lower than the "true" concentrations for 18 samples.
Any difference between the "found" and "true" concentrations may not represent a bias in the sampling and analytical method, but rather a random variation from the experimentally determined "true" concentration. Therefore, the method has no evident bias.
The coefficient of variation is a good measure of the accuracy of the method, since the recoveries and storage stability were good.
Storage stability studies on samples collected from a test atmosphere at 224.2 mg/cu m indicate that collected samples are stable for at least 7 days.
# Advantages and Disadvantages
The sampling device is small and portable and involves no liquids. Interferences are minimal, and most of those that occur can be eliminated by altering chromatographic conditions.
The tubes are analyzed by means of a quick instrumental method.
One disadvantage of the method is that the amount of sample that can be taken is limited by the mass of furfuryl alcohol that the tube will hold before overloading. When the amount of furfuryl alcohol found on the backup section of the Porapak Q tube exceeds 25% of that found on the front section, the probability of sample loss exists.
The precision of the method is limited by the reproducibility of the pressure drop across the tubes. This drop will affect the flowrate and cause the measured volume to be imprecise because the pump is usually calibrated for one tube only.
# Apparatus

(a) Personal sampling pump: A calibrated personal sampling pump, the flowrate of which can be determined within 5% at the recommended flowrate.
(b) Porapak Q tubes: Glass tube with both ends unsealed, 8.5-cm long with a 6-mm OD and a 4-mm ID, containing two sections of 50/80 mesh Porapak Q separated by a 2-mm portion of urethane foam.
The adsorbing section of the tube contains 150 mg of Porapak Q, and the backup section contains 75 mg.
A plug of silylated glass wool is placed at each end of the tube. The pressure drop across the tube must be less than 10 mmHg at a flowrate of 0.05 liter/minute.
Immediately prior to packing, the tubes should be acetone rinsed and dried to eliminate the problem of Porapak Q adhering to the walls of the glass tubes.
The Porapak Q tubes are capped with plastic caps at each end.
# Sorbent washing procedure:
Prior to usage, Porapak Q is washed with acetone and dried to reduce or eliminate the effects of unreacted monomers, solvents, and manufacturer's batch-to-batch differences in production. A quantity of Porapak Q is placed in a sintered glass filter fitted to a large vacuum flask.
Reagent grade acetone, of a volume equal to twice that of Porapak Q, is added to the sorbent and mixed, and the pressure is reduced. Repeat the operation of wash-mix-vacuum six times.
The sorbent is then transferred to an evaporating dish and dried in a vacuum oven at 120 C (248 F) under slight vacuum (~35 mmHg) for 4 hours.
(c) Gas chromatograph equipped with a flame-ionization detector.

Note: Since no internal standard is used in this method, standard solutions must be analyzed at the same time that the sample analysis is done. This will minimize the effect of known day-to-day variations and variations during the same day of the flame-ionization detector response.
(a) Prepare a stock standard solution containing 120 mg/ml of furfuryl alcohol in acetone.

(b) From the above stock solution, appropriate aliquots are withdrawn and dilutions are made in acetone. Prepare at least five working standards to cover the range of 0.12-3.6 mg/1.0 ml. This range is based on a 6-liter sample.

(c) Prepare a standard calibration curve by plotting concentration of furfuryl alcohol in mg/1.0 ml vs peak area.
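A minimal sketch of fitting and inverting such a calibration curve in Python with NumPy; the standard concentrations follow the working range above, but the peak areas are invented for illustration.

```python
import numpy as np

# Five working standards (mg per 1.0 ml acetone) and their GC peak areas
# (arbitrary integrator units; illustrative values only).
conc = np.array([0.12, 0.45, 0.90, 1.80, 3.60])     # mg/1.0 ml
area = np.array([1500, 5600, 11300, 22400, 45100])  # peak area

# Least-squares line through the standards: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)

def mg_from_area(peak_area: float) -> float:
    """Invert the calibration line to read mg/1.0 ml from a sample's peak area."""
    return (peak_area - intercept) / slope

print(round(mg_from_area(9000.0), 3))  # mg furfuryl alcohol per 1.0 ml desorbate
```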
# Calculations
(a) Read the weight, in mg, corresponding to each peak area from the standard curve. No volume corrections are needed because the standard curve is based on mg/1.0 ml acetone and the volume of sample injected is identical to the volume of the standards injected.

(b) Corrections for the blank must be made for each sample:

mg sample = mg found in front section of sample tube
mg blank = mg found in front section of blank tube

A similar procedure is followed for the backup sections.
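Putting the blank correction together with the desorption efficiency and sample volume, a minimal sketch (all numbers illustrative):

```python
# Blank- and desorption-corrected air concentration for one sample.
mg_front_sample, mg_front_blank = 1.20, 0.02  # front sections (sample, blank)
mg_back_sample, mg_back_blank = 0.05, 0.01    # backup sections (sample, blank)
desorption_efficiency = 0.95                  # from the DE determination above
sample_volume_liters = 6.0                    # recommended sample size

corrected_mg = ((mg_front_sample - mg_front_blank)
                + (mg_back_sample - mg_back_blank)) / desorption_efficiency

# mg per cubic meter = (mg per liter of air) * 1000 L per cubic meter.
concentration_mg_per_m3 = corrected_mg * 1000.0 / sample_volume_liters
print(round(concentration_mg_per_m3, 1))  # approximately 214.0 mg/cu m here
```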
"id": "3df06c01001ad310a8079c9b7db2eb30fc314167",
"source": "cdc",
"title": "None",
"url": "None"
} |
health and safety concerns associated with its use. It is the responsibility of the user of this document to establish appropriate health and safety practices and determine the applicability of regulatory limitations prior to each use.
5.6 Indoor / Outdoor Environment
5.6.1 Lighting
5.6.2 Indoor Aquatic Facility Ventilation
5.6.3 Indoor / Outdoor Aquatic Facility Electrical Systems and Components
5.6.4 Facility Heating
5.6.5 First Aid Room
5.6.6 Emergency Exit
5.6.7 Plumbing
5.6.8 Solid Waste Management
5.6.9 Decks
5.6.10 Aquatic Facility Maintenance
5.7 Recirculation and Water Treatment
5.7.1 Recirculation Systems and Equipment
5.7.2 Filtration
5.7.3 Water Treatment Chemicals and Systems
5.7.4 Water Sample Collection and Testing
5.7.5 Water Quality Chemical Testing

"Air Handling System" means equipment that brings in outdoor air into a building and removes air from a building for the purpose of introducing air with fewer contaminants and removing air with contaminants created while bathers are using aquatic venues. The system contains components that move and condition the air for temperature, humidity, and pressure control, and transport and distribute the air to prevent condensation, corrosion, and stratification, provide acceptable indoor air quality, and deliver outside air to the breathing zone.
"Agitated Water" means an aquatic venue with mechanical means (aquatic features) to discharge, spray, or move the water's surface above and/or below the static water line of the aquatic venue. Where there is no static water line, movement shall be considered above the deck plane.
"Alpha Bar" see "Average Sound Absorption Coefficient" "Aquatic Facility" means a physical place that contains one or more aquatic venues and support infrastructure.
"Aquatic Feature" means an individual component within an aquatic venue. Examples include slides, structures designed to be climbed or walked across, and structures that create falling or shooting water.
"Aquatic Facility or Aquatic Venue Enclosure" means an uninterrupted barrier surrounding and securing an aquatic facility or aquatic venue.
"Aquatic Venue" means an artificially constructed structure or modified natural structure where the general public is exposed to water intended for recreational or therapeutic purpose and where the primary intended use is not watering livestock, irrigation, water storage, fishing, or habitat for aquatic life. Such structures do not necessarily contain standing water, so water exposure may occur via contact, ingestion, or aerosolization. Examples include swimming pools, wave pools, lazy rivers, surf pools, spas (including spa pools and hot tubs), therapy pools, waterslide landing pools, spray pads, and other interactive water venues.
- "Increased Risk Aquatic Venue" means an aquatic venue which due to its intrinsic characteristics and intended users has a greater likelihood of affecting the health of the bathers of that venue by being at increased risk for microbial contamination (e.g., by children less than 5 years old) or being used by people that may be more susceptible to infection (e.g., therapy patients with open wounds). Examples of increased-risk aquatic venues include spray pads, wading pools and other aquatic venues designed for children less than 5 years old as well as therapy pools.
- "Lazy River" means a channeled flow of water of near−constant depth in which the water is moved by pumps or other means of propulsion to provide a river−like flow that transports bathers over a defined path. A lazy river may include play features and devices. A lazy river may also be referred to as a tubing pool, leisure river, leisure pool or a current channel.
- "Spa" means a structure intended for either warm or cold water where prolonged exposure is not intended. Spa structures are intended to be used for bathing or other recreational uses and are not usually drained and refilled after each use. It may include, but is not limited to, hydrotherapy, air induction bubbles, and recirculation. - "Special Use Aquatic Venue" means aquatic venues that do not meet the intended use and design features of any other aquatic venue or pool listed/identified in this Code.
"Chlorine" means an element that at room temperature and pressure is a heavy greenish yellow gas with a characteristic penetrating and irritating smell; it is extremely toxic. It can be compressed in liquid form and stored in heavy steel tanks. When mixed with water, chlorine gas forms hypochlorous acid (HOCl), the primary chlorine-based disinfecting agent, hypochlorite ion, and hydrochloric acid. HOCl dissociation to hypochlorite ion is highly pH dependent. Chlorine is a general term used in the MAHC which refers to HOCl and hypochlorite ion in aqueous solution derived from chlorine gas or a variety of chlorine-based disinfecting agents.

- "Available Chlorine" means the amount of chlorine in the +1 oxidation state, which is the reactive, oxidized form. In contrast, chloride ion (Cl-) is in the -1 oxidation state, which is the inert, reduced state. Available Chlorine is subdivided into Free Available Chlorine and Combined Available Chlorine. Pool chemicals containing Available Chlorine are both oxidizers and disinfectants. Elemental chlorine (Cl2) is defined as containing 100% available chlorine. The concentration of Available Chlorine in water is normally reported as mg/L (ppm) "as Cl2", that is, the concentration is measured on a Cl2 basis, regardless of the source of the Available Chlorine.
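As an illustrative aside (standard aqueous chlorine chemistry, not MAHC code text), the pH dependence noted above follows from the equilibrium:

$$\mathrm{HOCl} \rightleftharpoons \mathrm{H}^{+} + \mathrm{OCl}^{-}, \qquad f_{\mathrm{HOCl}} \approx \frac{1}{1 + 10^{\,\mathrm{pH}-\mathrm{p}K_a}}, \qquad \mathrm{p}K_a \approx 7.5 \text{ at } 25\,^\circ\mathrm{C}$$

so at pH 7.5 about half of the free chlorine is present as HOCl, and the HOCl fraction rises as pH falls.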
- "Available Chlorine" means the amount of chlorine in the +1 oxidation state, which is the reactive, oxidized form. In contrast, chloride ion (Cl -) is in the -1 oxidation state, which is the inert, reduced state. Available Chlorine is subdivided into Free Available Chlorine and Combined Available Chlorine. Pool chemicals containing Available Chlorine are both oxidizers and disinfectants. Elemental chlorine (Cl2) is defined as containing 100% available chlorine. The concentration of Available Chlorine in water is normally reported as mg/L (ppm) "as Cl2", that is, the concentration is measured on a Cl2 basis, regardless of the source of the Available Chlorine. - "Free Chlorine Residual" OR "Free Available Chlorine" means the portion of the total available chlorine that is not "combined chlorine" and is present as HOCl or hypochlorite ion (OCl -).The pH of the water determines the relative amounts of HOCl and hypochlorite ion. HOCl is a very effective bactericide and is the active bactericide in pool water. OClis also a bactericide, but acts more slowly than HOCl. Thus, chlorine is a more effective bactericide at low pH than at high pH. A free chlorine residual must be maintained for adequate disinfection. "Circulation Path" means an exterior or interior way of passage from one part of an aquatic facility to another for pedestrians, including, but not limited to walkways, pathways, decks, and stairways. This must be considered in relation to ADA. "Cleansing Shower" See "Shower."
"Code" means a systematic statement of a body of law, especially one given statutory force.
"Combustion Device" means any appliance or equipment using fire. These include, but may not be limited to, gas or oil furnaces, boilers, pool heaters, domestic water heaters, etc.
"Contamination Response Plan" means a plan for handling contamination from formed-stool, diarrheal-stool, vomit, and blood.
"Contaminant" means a substance that soils, stains, corrupts, or infects another substance by contact or association.
"Corrosive Materials" means pool chemicals, fertilizers, cleaning chemicals, oxidizing cleaning materials, salt, de-icing chemicals, other corrosive or oxidizing materials, pesticides, and such other materials which may cause injury to people or damage to the building, air-handling equipment, electrical equipment, safety equipment, or fire-suppression equipment, whether by direct contact or by contact via fumes or vapors, whether in original form or in a foreseeably likely decomposition, pyrolysis, or polymerization form. Refer to labels and SDS forms.
"Crack" means any and all breaks in the structural shell of a pool vessel or deck.
"Cross-Connection" means a connection or arrangement, physical or otherwise, between a potable water supply system and a plumbing fixture, tank, receptor, equipment, or device, through which it may be possible for nonpotable, used, unclean, polluted and contaminated water, or other substances to enter into a part of such potable water system under any condition.
"CT Inactivation Value" means a representation of the concentration of the disinfectant (C) multiplied by time in minutes (T) needed for inactivation of a particular contaminant. The concentration and time are inversely proportional; therefore, the higher the concentration of the disinfectant, the shorter the contact time required for inactivation. The CT Value can vary with pH or temperature change so these values must also be supplied to allow comparison between values.
"Deck" means surface areas serving the aquatic venue, including the dry deck, perimeter deck, and pool deck.
- "Dry Deck" means all pedestrian surface areas within the aquatic venue enclosure not subject to frequent splashing or constant wet foot traffic. The dry deck is not perimeter deck or pool deck, which connect the pool to adjacent amenities, entrances, and exits. Landscape areas are not included in this definition.
- "Perimeter Deck" means the hardscape surface area immediately adjacent to and within 4 feet (1.2 m) of the edge of the swimming pool also known as the "wet deck" area. - "Pool Deck" means surface areas serving the aquatic venue, beyond perimeter deck, which is expected to be regularly trafficked and made wet by bathers. "Diaper-Changing Station" means a hygiene station that includes a diaper-changing unit, hand-washing sink, soap and dispenser, a means for drying hands, trash receptacle, and disinfectant products to clean after use. "Diaper-Changing Unit" means a diaper-changing surface that is part of a diaper-changing station.
"Disinfection" means a treatment that kills or irreversibly inactivates microorganisms (e.g., bacteria, viruses, and parasites); in water treatment, a chemical (commonly chlorine, chloramine, or ozone) or physical process (e.g., ultraviolet radiation) can be used. "Disinfection By-Product" (DBP) means a chemical compound formed by the reaction of a disinfectant (e.g. chlorine) with a precursor (e.g. natural organic matter, nitrogenous waste from bathers) in a water system (pool, water supply). "Diving Pool" See "Pool." "Drop Slide" See "Slide." "Dry Deck" See "Deck." "Emergency Action Plan" (EAP) means a plan that identifies the objectives that need to be met for a specific type of emergency, who will respond, what each person's role will be during the response. and what equipment is required as part of the response.
"Enclosure" means an uninterrupted constructed feature or obstacle used to surround and secure an area that is intended to deter or effectively prevent unpermitted, uncontrolled, and unfettered access . It is designed to resist climbing and to prevent passage through it and under it. Enclosure can apply to aquatic facilities or aquatic venues.
"EPA Registered" means all products regulated and registered under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) by the EPA; . EPA registered products will have a registration number on the label (usually it will state "EPA Reg No." followed by a series of numbers). This registration number can be verified by using the EPA National Pesticide Information Retrieval System (/#). "Equipment Room or Area" means a space intended for the operation of pool pumps, filters, heaters, and controllers. This space is not intended for the storage of hazardous pool chemicals.
"Exit Gate" means an emergency exit, which is a gate or door allowing free exit at all times.
"Expansion Joint" means a watertight joint provided in a pool vessel used to relieve flexural stresses due to movement caused by thermal expansion/contraction. "Flat Water" means an aquatic venue in which the water line is static except for movement made by users. Diving spargers do not void the flat water definition. "Floatation Tank Solution" means a saturated solution of magnesium sulfate having a specific gravity of 1.23 to 1.3.
"Flume" means the riding channels of a waterslide which accommodate riders using or not using mats, tubes, rafts, and other transport vehicles as they slide along a path lubricated by a water flow.
"Foot Baths" means standing water in which bathers or aquatics staff rinse their feet.
"Free Chlorine Residual" OR "Free Available Chlorine" See "Chlorine."
"Ground-Fault Circuit Interrupter" (GFCI) means a device for protection of personnel that de-energizes an electrical circuit or portion thereof in the event of excessive ground current.
"Hand Wash Station" means a location which has a hand wash sink, adjacent soap with dispenser, hand drying device or paper towels and dispenser, and trash receptacle.
"Hot Water" means an aquatic venue with water temperature over 90 degrees Fahrenheit (30 degrees Celsius).
"Hygiene Facility" means a structure or part of a structure that contains toilet, shower, diaper-changing unit, hand wash station, and dressing capabilities serving bathers and patrons at an aquatic facility.
"Hygiene Fixtures" means all components necessary for hygiene facilities including plumbing fixtures, diaperchanging stations, hand wash stations, trashcans, soap dispensers, paper towel dispensers or hand dryers, and toilet paper dispensers.
"Hyperchlorination" means the intentional and specific raising of chlorine levels for a prolonged period of time to inactivate pathogens following a fecal or vomit release in an aquatic venue as outlined in MAHC 6.5.
"Imminent Health Hazard" means a significant threat or danger to health that is considered to exist when there is evidence sufficient to show that a product, practice, circumstance, or event creates a situation that requires immediate correction or cessation of operation to prevent injury based on the number of potential injuries and the nature, severity, and duration of the anticipated injury or illness. "Increased Risk Aquatic Venue" See "Aquatic Venue." "Indoor Aquatic Facility" means a physical place that contains one or more aquatic venues and the surrounding bather and spectator/stadium seating areas within a structure that meets the definition of "Building" per the 2012 International Building Code (IBC). It does not include equipment, chemical storage, or bather hygiene rooms or any other rooms with a direct opening to the aquatic facility. Otherwise known as a natatorium.
"Infinity Edge" means a pool wall structure and adjacent perimeter deck that is designed in such a way where the top of the pool wall and adjacent deck are not visible from certain vantage points in the pool or from the opposite side of the pool. Water from the pool flows over the edge and is captured and treated for reuse through the normal pool filtration system. They are often also referred to as "vanishing edges," "negative edges," or "zero edges." "Inlet" means wall or floor fittings where treated water is returned to the pool.
"Interactive Water Play Aquatic Venue" means any indoor or outdoor installation that includes sprayed, jetted or other water sources contacting bathers and not incorporating standing or captured water as part of the bather activity area. These aquatic venues are also known as splash pads, spray pads, wet decks. For the purposes of the MAHC, only those designed to recirculate water and intended for public use and recreation shall be regulated.
"Interior Space" means any substantially enclosed space having a roof and having a wall or walls which might reduce the free flow of outdoor air. Ventilation openings, fans, blowers, windows, doors, etc., shall not be construed as allowing free flow of outdoor air.
"Island" means a structure inside a pool where the perimeter is completely surrounded by the pool water and the top is above the surface of the pool. "Landing Pool" See "Pool." "Lazy River" See "Aquatic Venue." "Lifeguard Supervisor" means an individual responsible for the oversight of lifeguard performance and emergency response at an aquatic facility. A qualified lifeguard supervisor is an individual who has successfully completed a lifeguard supervisor training course and holds an unexpired certificate for such training; and who has met the pre-service and continuing in-service requirements of the aquatic facility according to this code.
"mg/L" means milligrams per liter and is the equivalent metric measure to parts per million (ppm).
"Monitor" means the regular and purposeful observation and checking of systems or facilities and recording of data, including system alerts, excursions from acceptable ranges, and other facility issues. Monitoring includes human or electronic means.
"Moveable Floors" means a pool floor whose depth varies through the use of controls.
"No Diving Marker" means a sign with the words "No Diving" and the universal international symbol for "No Diving" pictured as an image of a diver with a red circle with a slash through it.
"Noise Criterion" means the single number rating that is somewhat sensitive to the relative loudness and speech interference properties of a given noise spectrum. The method consists of a family of criterion curves extending from 63 to 8,000 Hz and a tangency rating procedure. The criterion curves define the limits of octave band spectra that must not be exceeded to meet occupant acceptance in certain spaces.
"Oocyst" means the thick-walled, environmentally resistant structure released in the feces of infected animals that serves to transfer the infectious stages of sporozoan parasites (e.g., Cryptosporidium) to new hosts.
"Oxidation" means the process of changing the chemical structure of water contaminants by either increasing the number of oxygen atoms or reducing the number of electrons of the contaminant or other chemical reaction, which allows the contaminant to be more readily removed from the water or made more soluble in the water. It is the "chemical cleaning" of pool water. Oxidation can be achieved by common disinfectants (e.g., chlorine, bromine), secondary disinfection/sanitation systems (e.g. ozone) and oxidizers (e.g. potassium monopersulfate).
"Oxidation Reduction Potential" (ORP) means a measure of the tendency for a solution to either gain or lose electrons; higher (more positive) oxidation reduction potential indicates a more oxidative solution.
"Patron" means a bather or other person or occupant at an aquatic facility who may or may not have contact with aquatic venue water either through partial or total immersion. Patrons may not have contact with aquatic venue water, but could still be exposed to potential contamination from the aquatic facility air, surfaces, or aerosols. "Peninsula / Wing Wall" means a structural projection into a pool intended to provide separation within the body of water. "Perimeter Deck" See "Deck." "Perimeter Gutter System" means the alternative to skimmers as a method to remove water from the pool's surface for treatment. The gutter provides a level structure along the pool perimeter versus intermittent skimmers.
"Plumbing Fixture" means a receptacle, fixture, or device that is connected to a water supply system or discharges to a drainage system or both and may be used for the distribution and use of water; for example: toilets, urinals, showers, and hose bibs. Such receptacles, fixtures, or devices require a supply of water; or discharge liquid waste or liquid-borne solid waste; or require a supply of water and discharge waste to a drainage system.
"pH" means the negative log of the concentration of hydrogen ions. When water ionizes, it produces hydrogen ions (H+) and hydroxide ions (OH-). If there is an excess of hydrogen ions the water is acidic. If there is an excess of hydroxide ions the water is basic. pH ranges from 0 to 14. Pure water has a pH of 7.0. If pH is higher than 7.0, the water is said to be basic, or alkaline. If the water's pH is lower than 7.0, the water is acidic. As pH is raised, more HOCl ionization occurs and chlorine disinfectants decrease in effectiveness.
"Pool" means a subset of aquatic venues designed to have standing water for total or partial bather immersion. This does not include spas.
- "Activity Pool" means a water attraction designed primarily for play activity that uses constructed features and devices including pad walks, flotation devices, and similar attractions. - "Diving Pool" means a pool used exclusively for diving.
- "Landing Pool" means an aquatic venue or designated section of an aquatic venue located at the exit of one or more waterslide flumes. The body of water is intended and designed to receive a bather emerging from the flume for the purpose of terminating the slide action and providing a means of exit to a deck or walkway area.
- "Skimmer Pool" means a pool using a skimmer system.
- "Surf Pool" means any pool designed to generate waves dedicated to the activity of surfing on a surfboard or analogous surfing device commonly used in the ocean and intended for sport as opposed to general play intent for wave pools. - "Therapy Pool" means a pool used exclusively for aquatic therapy, physical therapy, and/or rehabilitation to treat a diagnosed injury, illness, or medical condition, wherein the therapy is provided under the direct supervision of a licensed physical therapist, occupational therapist, or athletic trainer. This could include wound patients or immunocompromised patients whose health could be impacted if there is not additional water quality protection. - "Wading Pool" means any pool used exclusively for wading and intended for use by young children where the depth does not exceed 2 feet (0.6 m).
- "Wave Pools" means any pool designed to simulate breaking or cyclic waves for purposes of general play. A wave pool is not the same as a surf pool, which generates waves dedicated to the activity of surfing on a surfboard or analogous surfing device commonly used in the ocean and intended for sport as opposed to general play intent for wave pools. "Pool Deck" See "Deck." "Pool Slide" See "Slide." "Public Water Systems" means water systems including community water systems, non-transient/noncommunity water systems, or transient non-community water systems with exceptions as noted by AHJ and EPA.
# "Recessed
Steps" means a way of ingress/egress for a pool similar to a ladder but the individual treads are recessed into the pool wall.
"Recirculation System" means the combination of the main drain, gutter or skimmer, inlets, piping, pumps, controls, surge tank or balance tank to provide pool water recirculation to and from the pool and the treatment systems.
"Reduction Equivalent Dose (RED) bias" means a variable used in UV system validation to account for differences in UV sensitivity between the UV system challenge microbe (e.g., MS2 virus) and the actual microbe to be inactivated (e.g., Cryptosporidium).
"Re-entrainment" means a situation where the exhaust(s) from a ventilated source such as an indoor aquatic facility is located too close to the air handling system intake(s), which allows the exhausted air to be re-captured by the air handling system so it is transported directly back into the aquatic facility.
"Responsible Supervisor" means an individual on-site that is responsible for water treatment operations when a "qualified operator" is not on-site at an aquatic facility. "Rinse Shower" See "Shower." "Robotic Cleaner" means a modular vacuum system consisting of a motor-driven, in-pool suction device, either self-powered or powered through a low voltage cable, which is connected to a deck-side power supply.
"Runout" means that part of a waterslide where riders are intended to decelerate and/or come to a stop. The runout is a continuation of the waterslide flume surface.
"Safety" (as it relates to construction items) means a design standard intended to prevent inadvertent or hazardous operation or use (i.e., a passive engineering strategy).
"Safety Plan" means a written document that has procedures, requirements and/or standards related to safety which the aquatic facility staff shall follow. These plans include training, emergency response, and operations procedures.
"Safety Team" means any employee of the aquatic facility with job responsibilities related to the aquatic facility's emergency action plan.
"Sanitize" means reducing the level of microbes to that considered safe by public health standards (usually 99.999%). This may be achieved through a variety of chemical or physical means including chemical treatment, physical cleaning, or drying. "Saturation Index" means a mathematical representation or scale representing the ability of water to deposit calcium carbonate, or dissolve metal, concrete or grout.
"Secondary Disinfection Systems" means those disinfection processes or systems installed in addition to the standard systems required on all aquatic venues, which are required to be used for increased risk aquatic venues.
"Shower" means a device that sprays water on the body.
- "Cleansing Shower" means a shower located within a hygiene facility using warm water and soap. The purpose of these showers is to remove contaminants including perianal fecal material, sweat, skin cells, personal care products, and dirt before bathers enter the aquatic venue.
- "Rinse Shower" means a shower typically located in the pool deck area with ambient temperature water. The main purpose is to remove dirt, sand, or organic material prior to entering the aquatic venue to reduce the introduction of contaminants and the formation of disinfection by-products.

"Skimmer" means a device installed in the pool wall whose purpose is to remove floating debris and surface water to the filter. They shall include a weir to allow for the automatic adjustment to small changes in water level, maintaining skimming of the surface water.

"Skimmer Pool" See "Pool."

"Skimmer System" means periodic locations along the top of the pool wall for removal of water from the pool's surface for treatment.
"Slide" means an aquatic feature where users slide down from an elevated height into water.
- "Drop Slide" means a slide that drops bathers into the water from a height above the water versus delivering the bather to the water entry point.
- "Pool Slide" means a slide having a configuration as defined in The Code of Federal Regulations (CFR)
Ch. II, Title 16 Part 1207 by CSPC, or is similar in construction to a playground slide used to allow users to slide from an elevated height to a pool. They shall include children's (tot) slides and all other nonflume slides that are mounted on the pool deck or within the basin of a public swimming pool.
- "Waterslide" means a slide that runs into a landing pool or runout through a fabricated channel with flowing water. "Sound Absorption" means (1) the process of dissipating sound energy and (2) the property possessed by materials, objects and structures, such as rooms, for absorbing sound energy.
"Spa" See "Aquatic Venue." "Special Use Aquatic Venue" See "Aquatic Venue." "Standard" means something established by authority, custom, or general consent as a model or example.
"Storage" means the condition of remaining in one space for 1 hour or more. Materials in a closed pipe or tube awaiting transfer to another location shall not be considered to be stored. "Structural Crack" means a break or split in the pool surface that weakens the structural integrity of the vessel. "Substantial Alteration" means the alteration, modification, or renovation of an aquatic venue (for outdoor aquatic facilities) or indoor aquatic facility (for indoor aquatic facilities) where the total cost of the work exceeds 50% of the replacement cost of the aquatic venue (for outdoor aquatic facilities) or indoor aquatic facility (for indoor aquatic facilities).
"Superchlorination" means the addition of large quantities of chlorine-based chemicals to kill algae, destroy odors, or improve the ability to maintain a disinfectant residual. This process is different from hyperchlorination, which is a prescribed amount to achieve a specific CT inactivation value whereas superchlorination is the raising of free chlorine levels for water quality maintenance.
"Supplemental Treatment Systems" means those disinfection processes or systems which are not required on an aquatic venue for health and safety reasons. They may be used to enhance overall system performance and improve water quality. "Surf Pool" See "Pool." "Theoretical Peak Occupancy" means the anticipated peak number of bathers in an aquatic venue or the anticipated peak number of occupants of the decks of an aquatic facility. This is the lower limit of peak occupancy to be used for design purposes for determining services that support occupants. Theoretical peak occupancy is 2018 MAHC CODE 3.0 Glossary of Acronyms, Initialisms, Terms, and Codes 20 used to determine the number of showers. For aquatic venues, the theoretical peak occupancy is calculated around the type of water use or space:
- "Flat Water" means an aquatic venue in which the water line is static except for movement made by users usually as a horizontal use as in swimming. Diving spargers do not void the flat water definition.
- "Agitated Water" means an aquatic venue with mechanical means (aquatic features) to discharge, spray, or move the water's surface above and/or below the static water line of the aquatic venue so people are standing or playing vertically. Where there is no static water line, movement shall be considered above the deck plane.
- "Hot Water" means an aquatic venue with a water temperature over 90 o F (32 o C).
- "Stadium Seating" means an area of high-occupancy seating provided above the pool level for observation. "Therapy Pool" See "Pool." "Toe Ledge" See "Underwater Ledge." "Turnover" or "Turnover Rate" or "Turnover Time" means the period of time, usually expressed in hours, required to circulate a volume of water equal to the capacity of the aquatic venue.
"Underwater Bench" means a submerged seat with or without hydrotherapy jets. "Underwater Ledge" or "Underwater Toe Ledge" means a continuous step in the pool wall that allows swimmers to rest by standing without treading water. "Wading Pool" See "Pool." "Waterslide" See "Slide." "Water Replenishment System" means a way to remove water from the pool as needed and replace with makeup water in order to maintain water quality. "Water Quality Testing Device" (WQTD) means a product designed to measure the level of a parameter in water. A WQTD includes a device or method to provide a visual indication of a parameter level, and may include one or more reagents and accessory items. "Wave Pools" See "Pool." "Wing Wall / Peninsula" See "Peninsula / Wing Wall." "Zero Depth Entry" means a sloped entry into a pool from deck level into the interior of the pool as a means of access and egress.
# Codes, Standards, and Laws Referenced in the MAHC Code
# Air Conditioning Contractors of America (ACCA)
# Transitional Areas
Stairs shall not be used underwater to transition between two sections of a POOL of different depths. Note: The bottom riser may vary due to potential cross slopes with the POOL floor; however, the bottom step riser may not exceed the maximum allowable height required by this section.
# Top Surface
The top surface of the uppermost stair tread shall be located not more than 12 inches (30.5 cm) below the POOL coping or DECK.
# A Perimeter Gutter Systems

For POOLS with PERIMETER GUTTER SYSTEMS, the gutter may serve as a step, provided that the gutter is provided with a grating or cover and conforms to all construction and dimensional requirements herein specified.
# Corrosion-resistant
Handrails shall be constructed of corrosion-resistant materials, and anchored securely.
# A Upper Railing
The upper railing surface of handrails shall extend above the POOL coping or DECK a minimum of 28 inches (71.1 cm).
# Wider Than Five Feet
Support Handrails shall be designed to resist a load of 50 pounds (22.7 kg) per linear foot applied in any direction and independently a single concentrated load of 200 pounds (90.7 kg) applied in any direction at any location.
# Transfer Loads
Handrails shall be designed to transfer these loads through the supports to the POOL or DECK structure.
# A Dimensions
Dimensions of handrails shall conform to the requirements of MAHC Table 4.5.5.7 and MAHC Figure 4.5.5.7.1.
# Anchored
Grab rails shall be anchored securely.
Provided Grab rails shall be provided at both sides of RECESSED STEPS.
# Clear Space
The horizontal clear space between grab rails shall be not less than 18 inches (45.7 cm) and not more than 24 inches (61.0 cm).
# Upper Railing
The upper railing surface of grab rails shall extend above the POOL coping or DECK a minimum of 28 inches (71.1 cm).
Support Grab rails shall be designed to resist a load of 50 pounds (22.7 kg) per linear foot applied in any direction and independently a single concentrated load of 200 pounds (90.7 kg) applied in any direction at any location.
# Transfer Loads
# Vertical Measurement
The posted water depth shall be the water level to the floor of the AQUATIC VENUE according to a vertical measurement taken 3 feet (0.9 m) from the AQUATIC VENUE wall.
# Signage
A sign shall be posted to inform the public that the AQUATIC VENUE has a varied depth and refer to the sign showing the current depth.

4.6.1.5.1.1 Location Such underwater lights, in conjunction with overhead or equivalent DECK lighting, shall be located to provide illumination so that all portions of the AQUATIC VENUE, including the AQUATIC VENUE bottom and drain(s), may be readily seen.
# Spas
Higher Light Levels Higher underwater light levels shall be considered for deeper water to achieve this outcome.
Dimmable Lighting Dimmable lighting shall not be used for underwater lighting.
# Footcandles
The path of egress shall be illuminated to at least a value of 0.5 footcandles (5.4 lux).
# A Relative Humidity
The AIR HANDLING SYSTEM shall maintain the relative humidity in the space as defined in ASHRAE Handbook: HVAC Applications, 2011, Places of Assembly, Natatoriums.
# Dew Point
The AIR HANDLING SYSTEM shall be designed to maintain the dew point of the INTERIOR SPACE less than the dew point of the interior walls at all times so as to prevent damage to structural members and to prevent biological growth on walls.
Condensation & Mold Control The AIR HANDLING SYSTEM shall be designed to achieve several objectives including 1) Maintaining space conditions, 2) Delivering the outside air to the breathing area, and 3) Flushing the outside walls and windows, which can have the lowest surface temperature and therefore the greatest chance for condensation.
Access Control The AIR HANDLING SYSTEM shall be designed to provide a means to limit physical or electronic access to system control to the operator and anyone the operator deems to have access.
Air Handling System Commissioning
System Commissioning A qualified, licensed professional shall commission the AIR HANDLING SYSTEM to verify that the installed system is operating properly in accordance with the system design.
# Written Statement
A written statement of commissioning shall be provided to the AQUATIC FACILITY owner including but not limited to:
1) The number of cfm of outdoor air flowing into the INDOOR AQUATIC FACILITY at the time of commissioning; 2) The number of cfm of exhaust air flowing through the system at the time of commissioning; and, 3) A statement that the amount of outdoor air meets the performance requirements of MAHC 4.6.2.7.
Sealed and Inert Where required, the electrical conduit in an interior CHEMICAL STORAGE SPACE shall be sealed and made of materials that will not interact with any chemicals in the CHEMICAL STORAGE SPACE.
# A Electrical Devices

Electrical devices or equipment shall not occupy an interior CHEMICAL STORAGE SPACE, except as required to service devices integral to the function of the room, such as pumps, vessels, controls, lighting, and SAFETY devices.
# A Protected Against Breakage

Lamps, including fluorescent tubes, installed in interior CHEMICAL STORAGE SPACES shall be protected against breakage with a lens or other cover, or be otherwise protected against the accidental release of hot materials.
# Pressure Relief Device
Where POOL water heating equipment is installed with valves capable of isolating the heating equipment from the POOL, a listed pressure-relief device shall be installed to limit the pressure on the heating equipment to no more than the maximum value specified by the heating-equipment manufacturer and applicable CODES.
Code Compliance POOL-water heating equipment shall be selected and installed to preserve compliance with the applicable CODES, the terms of listing and labeling of the equipment, and the equipment manufacturer's installation instructions.
# A Equipment Room Requirements
Where POOL water heaters use COMBUSTION and are located inside a building, the space in which the heater is located shall be considered to be an EQUIPMENT ROOM, and the requirements of MAHC 4.9.1 shall apply.
Carbon Monoxide Detector A carbon monoxide detector with local alarming, CERTIFIED, LISTED, AND LABELED in accordance with UL 2075, shall be installed in all such EQUIPMENT ROOMS.
Adjacent Rooms All rooms that are immediately adjacent to spaces containing fuel burning equipment or vents carrying the products of combustion shall also be provided with locally alarming carbon monoxide detectors.
Exception Heaters CERTIFIED, LISTED, AND LABELED for the atmosphere shall be acceptable without isolation from chemical fumes and vapors.
Alternative Alternate locations or the use of bottled water shall be evaluated by the AHJ.
# Common Use Area
# Readily Accessible
The drinking fountain shall be located where it is readily accessible and not a hazard to BATHERS per MAHC 4.10.2.
# Not Located
The drinking fountain shall not be located in a SHOWER area or toilet area.
Single Fountain A single drinking fountain shall be allowed for one or more AQUATIC VENUES within an AQUATIC FACILITY.
# Angle Jet Type
The drinking fountain shall be an angle jet type installed according to applicable plumbing CODES.
# Potable Water Supply
The drinking fountain shall be supplied with water from an approved potable water supply.
# Wastewater
The wastewater discharged from a drinking fountain shall be routed to an approved sanitary sewer system or other approved disposal area according to applicable plumbing CODES.
Garbage Receptacles
# Sufficient Number
A sufficient number of receptacles shall be provided within an AQUATIC FACILITY to ensure that garbage and refuse can be disposed of properly to maintain safe and sanitary conditions.
# Number and Location
The number and location of receptacles shall be at the discretion of the AQUATIC FACILITY manager.
# Closable
Receptacles shall be designed to be closed with a lid or other cover so they remain closed until intentionally opened.
Openings The BARRIER may have one or more openings directly into the BATHER areas.
Demarcation Line A demarcation line shall be provided on the DECK to show the separation between the DECK used by spectators and the PERIMETER DECK used by BATHERS.
# A Balcony
# A Concave Room Surfaces
The design of INDOOR AQUATIC FACILITIES with a domed roof, gable roof, or other shape that may cause sound focusing, irrespective of the ALPHA BAR, shall address sound focusing, reverberation, and echoes that would interfere with speech intelligibility.
# Recirculation and Water Treatment
Recirculation Systems and Equipment
# A General
# Equipped and Operated
All AQUATIC VENUES shall be equipped and operated with a recirculation and filtration system capable of meeting the provisions outlined in MAHC 4.7.
# Component Installation
The installation of the recirculation and the filtration system components shall be performed in accordance with the designer's and manufacturer's instructions.
# Recirculation System
# Inlets
# A General
Other Inlet Types All other types of INLET systems not covered in this section shall be subject to approval by the AHJ with proper engineering justification.
Hydraulically Sized INLETS shall be hydraulically sized to provide the design flow rates for each POOL area of multi-zone POOLS based on the required design TURNOVER RATE for each zone.
# A Floor Inlets
Directional Flow Wall INLETS shall not require design to provide directional flow if part of a manufactured gutter system in which the filtered return water conduit is contained within the gutter structure.
# A Dye Testing

The AHJ may require dye testing to evaluate the mixing characteristics of the RECIRCULATION SYSTEM.
Failed Test If a dye test reveals inadequate mixing in the POOL after 20 minutes, the RECIRCULATION SYSTEM shall be adjusted or modified to assure adequate mixing.
Perimeter Overflow Systems/Gutters
Zero Depth Entry ZERO DEPTH ENTRY POOLS shall have a continuous overflow trench that terminates as close to the side walls as practical including any zero-depth portion of the POOL perimeter.
Ends Where a POS cannot be continuous, the ends of each section shall terminate as close as practical to each other.
# A Perimeter Overflow System Size and Shape
# Continuous Water Removal
The gutter system shall be designed to allow continuous removal of water from the POOL'S upper surface at a rate of at least 125 percent of the approved total recirculation flow rate chosen by the designer.
Inspection Gutters shall permit ready inspection, cleaning, and repair.
# A Gutter Outlets

Drop boxes, converters, return piping, or FLUMES used to convey water from the gutter shall be designed to: 1) Prevent flooding and BACKFLOW of skimmed water into the POOL, and 2) Handle at least 125 percent of the approved total recirculation flow.
# Surge Tank Capacity
# A Net Surge Capacity

All POSs shall be designed with an effective net surge capacity of not less than one gallon for each square foot (40.7 L/m²) of POOL surface area.
Surge Components Surge shall be provided within a surge tank, or the gutter or filter above the normal operating level, or elsewhere in the system.
Tank Capacity The tank capacity specified shall be the net capacity.
# Tank Levels
The design professional shall define the minimum, maximum, and normal POOL operating water levels in the surge tank.
Marked The surge tank's minimum, maximum, and normal POOL operating water levels shall be marked on the tank so as to be readily visible for inspection.
Overflow Pipes Surge tanks shall have overflow pipes to convey excess water to waste via an air gap or other approved BACKFLOW prevention device.
# A Tolerances

Gutters shall be level within a tolerance of plus or minus 1/16 inch (1.6 mm) around the perimeter of the AQUATIC VENUE.
# A Makeup Water System
# A Provided

Where SKIMMERS are used, at least one surface SKIMMER shall be provided for each 500 square feet (46 m²) of surface area or fraction thereof.
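A minimal sketch of the skimmer-count rule above; "or fraction thereof" means the area ratio rounds up (the surface areas shown are illustrative):

```python
import math

def min_skimmers(surface_area_ft2: float) -> int:
    """Minimum surface skimmers: one per 500 ft2 of surface or fraction thereof."""
    return math.ceil(surface_area_ft2 / 500.0)

print(min_skimmers(1_200))  # 3 skimmers
print(min_skimmers(500))    # 1 skimmer
```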
Conditions Additional SKIMMERS may be required to achieve effective skimming under site-specific conditions (e.g., heavy winds and/or CONTAMINANT loading) and/or to comply with all applicable building CODES.
# A Hybrid Systems

Hybrid systems that incorporate surge weirs in the overflow gutters to provide for in-POOL surge shall meet all of the requirements specified for overflow gutters (with the exception of the surge or balance tank, since the surge capacity requirement will be alternately met by the in-POOL surge capacity).
# A Surge Weirs

The number of surge weirs shall be based on the individual surge weir capacity and the operational apportionment of the design recirculation flow rate.
Locations The location of the required number of surge weirs shall be uniformly spaced in the gutter sections.
# A Design Capacity

When used, the SKIMMER SYSTEM shall be designed to handle up to 100% of the total recirculation flow rate chosen by the designer.
Pool Width Limitations POOLS using SKIMMERS shall not exceed 30 feet (9.1 m) in width.
# Skimmer Location
Effective SKIMMERS shall be so located as to provide effective skimming of the entire water surface.
Steps and Recessed Areas SKIMMERS shall be located so as not to be affected by restricted flow in areas such as near steps and within small recesses.
Wind Direction Wind direction shall be considered in number and placement of SKIMMERS.
# A Skimmer Flow Rate
The flow rate for the SKIMMERS shall comply with manufacturer data plates or NSF/ANSI 50 including Annex K.
# Control
Weir Each SKIMMER shall have a weir that adjusts automatically to variations in water level over a minimum range of 4 inches (10.2 cm).
Trimmer Valve Each SKIMMER shall be equipped with a trimmer valve capable of distributing the total flow between individual SKIMMERS.
# Tolerances
# Skimmer Base
The base of each SKIMMER shall be level with all other SKIMMERS in the POOL within a tolerance of plus or minus ¼ inch (6.4 mm).
# A Submerged Suction Outlet
# General
Submerged suction outlets, including sumps and covers, shall be CERTIFIED, LISTED, AND LABELED to the requirements of ANSI/APSP-16 2011.
# Number and Spacing
Hydraulically Balanced A minimum of two hydraulically balanced filtration system outlets are required in the bottom.
Located on the Bottom One of the outlets may be located on the bottom of a side/end wall at the deepest level.
Connected The outlets shall be connected to a single main suction pipe by branch lines piped to provide hydraulic balance between the drains.
Valved The branch lines shall not be valved so as to be capable of operating independently.

Located Outlets shall be located no less than 3 feet (0.9 m) apart, measuring between the centerlines of the suction outlet covers.
# Tank Connection
Where gravity outlets are used, the main drain outlet shall be connected to a surge tank, collection tank, or balance tank/pipe.
# A Flow Distribution and Control
4.7.1.6.4.1 Design Capacity The main drain system shall be designed at a minimum to handle recirculation flow of 100% of total design recirculation flow rate.
Two Main Drain Outlets Where there are two main drain outlets, the branch pipe from each main drain outlet shall be designed to carry 100% of the recirculation flow rate.
Three or More Drains Where three or more main drain outlets are connected by branch piping in accordance with MAHC 4.7.1.6.2.1.1 through MAHC 4.7.1.6.2.1.3, the design flow through each branch pipe from each main drain outlet may be calculated as Qmax = Qtotal / (N - 1), where Qtotal is the total design recirculation flow rate and N is the number of main drain outlets.
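A worked illustration of the branch-pipe rule above, with hypothetical flow rates; sizing each branch for Qtotal/(N - 1) means the system still carries full flow with one drain blocked:

```python
def branch_design_flow(q_total_gpm: float, n_drains: int) -> float:
    """Design flow per branch pipe for three or more main drain outlets."""
    if n_drains < 3:
        raise ValueError("Qmax = Qtotal / (N - 1) applies to three or more drains")
    return q_total_gpm / (n_drains - 1)

print(branch_design_flow(300.0, 3))  # 150.0 gpm per branch
print(branch_design_flow(300.0, 4))  # 100.0 gpm per branch
```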
Proportioning Valve The single main drain suction pipe to the pump shall be equipped with a proportioning valve(s) to adjust the flow distribution between the main drain piping and the surface overflow system piping.
Flow Velocities
Certified Piping and piping system component materials shall be CERTIFIED, LISTED, AND LABELED to a specific STANDARD by an ANSI-accredited certification organization.
Velocity in Pipes
# A Suction Piping

Suction piping shall be sized so that the water velocity does not exceed 6 feet per second (1.8 m/s) unless alternative values have proper engineering justification.
# A Additional Considerations

Gravity piping shall be sized with consideration of available system head or as demonstrated by detailed hydraulic calculations at the design recirculation flow rate.
# A Drainage and Installation
Supported All piping shall be supported continuously or at sufficiently close intervals to prevent sagging and settlement.
Piping and Component Identification
Schematic Displayed A complete, easily readable schematic of the entire AQUATIC VENUE RECIRCULATION SYSTEM shall be openly displayed in the mechanical room or available to maintenance and inspection personnel.
# Testing
4.7.1.7.5.1 Static Water Pressure Test Suction and supply POOL piping shall be subjected to a static hydraulic water pressure test for the duration specified by the design engineer and/or AHJ.
Greater Suction and supply AQUATIC VENUE piping shall be able to maintain the greater of the following two pressures: 1) 25% greater than the maximum design operating pressure of the system, or 2) 25 psi (172 kPa).
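A minimal sketch of the "greater of" test-pressure rule above (the design pressures shown are illustrative):

```python
def test_pressure_psi(max_design_operating_psi: float) -> float:
    """Required static test pressure: greater of 125% of design or 25 psi."""
    return max(1.25 * max_design_operating_psi, 25.0)

print(test_pressure_psi(30.0))  # 37.5 psi
print(test_pressure_psi(15.0))  # 25.0 psi
```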
Strainers and Pumps
# Strainers
Strainer / Screen All filter recirculation pumps, except those for vacuum filter installations, shall have a strainer/screen device on the suction side to protect the filtration and pumping equipment.
Materials Strainers shall be CERTIFIED, LISTED, AND LABELED to NSF/ANSI 50.
# Pumping Equipment
# A Variable Frequency Drives
VFDs may be installed to control all recirculation and feature pumps.
# A Total Dynamic Head

The recirculation pump(s) shall have adequate capacity to meet the recirculation flow design requirements in accordance with the maximum TDH required by the entire RECIRCULATION SYSTEM under the most extreme operating conditions (e.g., clogged filters in need of backwashing).
Required Flow Rate The pump shall be designed to maintain design recirculation flows under all conditions.
Vacuum Limit Switches Where vacuum filters are used, a vacuum limit switch shall be provided on the pump suction line.
Maximum The vacuum limit switch shall be set for a maximum vacuum of 18 inches (45.7 cm) of mercury.
Pump Priming All recirculation pumps shall be self-priming or flooded-suction.
Net Positive Suction Head Requirement All recirculation pumps shall meet the minimum NPSH requirement for the system.
# A Operating Gauges

Vacuum Gauge A compound vacuum-pressure gauge shall be installed on the pump suction line as close to the pump as possible.
Suction Lift A vacuum gauge shall be used for pumps with suction lift.
Installed A pressure gauge shall be installed on the pump discharge line adjacent to the pump.
Easily Read Gauges shall be installed so they can be easily read.
Valves All gauges shall be equipped with valves to allow for servicing under operating conditions.
Flow Measurement and Control
# A Flow Meters
A flow meter accurate to within +/- 5% of the actual design flow shall be provided for each filtration system.

4.7.1.9.1.1 Certified, Listed, and Labeled Flow meters shall be CERTIFIED, LISTED, AND LABELED to NSF/ANSI Standard 50 by an ANSI-accredited certification organization.
Valves All pumps shall be installed with a manual adjustable discharge valve to provide a backup means of flow control as well as for system isolation.
# A Flow Rates / Turnover Times
# A Calculated
The TURNOVER TIME shall be calculated based on the total volume of water divided by the flow rate through the filtration process.
# A Unfiltered Water

Unfiltered water, such as water that may be withdrawn from and returned to the AQUATIC VENUE for such AQUATIC FEATURES as SLIDES by a pump separate from the filtration system, shall not factor into TURNOVER TIME.
# A Turnover Times

TURNOVER TIMES shall be calculated based solely on the flow rate through the filtration system.
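A minimal sketch of the turnover-time calculation described above, using hypothetical volumes and flows; per the surrounding provisions, the volume includes any surge/balance tank and unfiltered feature flow is excluded:

```python
def turnover_hours(total_volume_gal: float, filtration_flow_gpm: float) -> float:
    """Turnover time in hours: total system volume divided by filtration flow."""
    return total_volume_gal / filtration_flow_gpm / 60.0

# 180,000 gal pool plus a 10,000 gal surge tank, 528 gpm through the filters:
print(f"{turnover_hours(190_000, 528):.1f} h")  # ~6.0 h
```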
# Required
The required TURNOVER TIME shall be the lesser of the following options: 1) The specified time in MAHC Table 4.7.1.10, or 2) The time required for individual components (e.g., three SKIMMERS with flow rates set by the manufacturer and an additional 20% for the main drains could exceed the minimum value in the table).
# Total Volume
The total volume of the AQUATIC VENUE system shall include the AQUATIC VENUE and any surge/balance tank.
Supply Water Where water is drawn from the AQUATIC VENUE to supply water to AQUATIC FEATURES (e.g., SLIDES, tube rides), the water may be reused prior to filtration provided the DISINFECTANT and pH levels of the supply water are maintained at required levels.
# A Reuse Ratio
The ratio of INTERACTIVE WATER PLAY AQUATIC VENUE FEATURE water to filtered water shall be no greater than 3:1 in order to maintain the efficiency of the FILTRATION SYSTEM.
# A Flow Turndown System
For AQUATIC FACILITIES that intend to reduce the recirculation flow rate below the minimum required design values when the POOL is unoccupied, the flow turndown system shall be designed as specified in MAHC 4.7.1.10.5.1 through MAHC 4.7.1.10.5.2.
# Flowrate
The system flowrate shall not be reduced by more than 25% below the minimum design requirements, and shall be reduced only when the AQUATIC VENUE is unoccupied.
# Clarity
The system flowrate shall be based on ensuring the minimum water clarity required under MAHC 5.7.6 is met before opening to the public.
Disinfectant Levels The turndown system shall be required to maintain required DISINFECTANT and pH levels at all times.
Increase When the turndown system is also used to intelligently increase the recirculation flow rate above the minimum requirement (e.g., in times of peak use to maintain water quality goals more effectively), the following requirements shall be met at all times: 1) Velocity requirements inside of pipes (per MAHC 4.7.1.7.2), and 2) Maximum filtration system flows.
Labeled If installed and labeled as SECONDARY DISINFECTION SYSTEMS, then they shall conform to all requirements specified under MAHC 4.7.3.3.
Conform If not labeled as SECONDARY DISINFECTION SYSTEMS, then they shall be labeled as SUPPLEMENTAL TREATMENT SYSTEMS and conform to requirements listed under MAHC 4.7.3.4.
# A Log Inactivation and Oocyst Reduction
# A Installation

The SECONDARY DISINFECTION SYSTEM shall be located in the treatment loop (post filtration) and treat a portion (up to 100%) of the filtration flow prior to return of the water to the AQUATIC VENUE or AQUATIC FEATURE.
Manufacturer's Instructions The SECONDARY DISINFECTION SYSTEM shall be installed according to the manufacturer's directions.
# A Minimum Flow Rate Calculation

The flow rate (Q) through the SECONDARY DISINFECTION SYSTEM shall be determined based upon the total volume of the AQUATIC VENUE or AQUATIC FEATURE (V) and a prescribed dilution time (T) for theoretically reducing the number of assumed infective Cryptosporidium OOCYSTS from an initial total number of 100 million (10^8) OOCYSTS to a concentration of one OOCYST/100 mL.
Time for Dilution Reduction The dilution time shall be the lesser of 9 hours or 75% of the uninterrupted time an AQUATIC VENUE is closed in a 24 hour period.
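One way to read the sizing rule above is as a perfectly mixed, first-order dilution model, N(t) = N0·e^(-Qt/V). The sketch below solves that model for Q; it is a simplified illustration under that mixing assumption, not the MAHC's governing calculation, so consult the code and annex for the authoritative method.

```python
import math

ML_PER_GALLON = 3785.41

def secondary_flow_gpm(volume_gal: float, dilution_time_min: float) -> float:
    """Flow rate Q that dilutes 1e8 oocysts to 1 oocyst/100 mL in time T,
    assuming a perfectly mixed venue: Q = (V/T) * ln(N0 / N_target)."""
    n0 = 1e8                                        # assumed initial oocysts
    n_target = volume_gal * ML_PER_GALLON / 100.0   # 1 oocyst per 100 mL
    return (volume_gal / dilution_time_min) * math.log(n0 / n_target)

# 100,000 gal venue with a 9 h (540 min) dilution time:
print(f"{secondary_flow_gpm(100_000, 540):.0f} gpm")  # ~606 gpm
```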
# A Flow Rate Measurements

Where a SECONDARY DISINFECTION SYSTEM is installed, a means shall be installed to confirm the required flow rate to maintain a minimum required log inactivation of infective Cryptosporidium OOCYSTS at the minimum flow rate.
# A Ultraviolet Light Systems

To prevent mercury exposure, UV systems shall be installed to avoid lamp breakage according to the guidelines in EPA 815-R-06-007, Appendix E.
# A Third Party Validation

UV equipment shall be third-party validated in accordance with the practices outlined in the EPA Ultraviolet Disinfection Guidance Manual dated November 2006, publication number EPA 815-R-06-007.
# A Validation Standard

The EPA Ultraviolet Disinfection Guidance Manual shall be considered a recognized national STANDARD in the MAHC.
Suitable for Intended Use UV systems and all materials used therein shall be suitable for their intended use and be installed: 1) In accordance with the MAHC, 2) As CERTIFIED, LISTED, AND LABELED to a specific STANDARD by an ANSI-accredited certification organization, and 3) As specified by the manufacturer.
# Installation
The UV equipment shall be installed after the filtration and before addition of primary DISINFECTANT.
Labeled UV equipment shall be labeled with the following design specifications: maximum flow rate, minimum transmissivity, minimum intensity, and minimum dosage.
Strainer Installation An inline strainer shall be installed after the UV unit to capture broken lamp glass or sleeves.
Electrically Interlocked The equipment shall be electrically interlocked with feature pump(s) or automated feature supply valves, such that when the UV equipment fails to produce the required dosage as measured by automated sensor, the water features do not operate.
# A Alarm/Interlock Setpoint

The UV alarm/interlock setpoint shall be such that it ensures that the minimum required dose is delivered under all possible conditions of water UV transmittance and lamp output at the actual flow rate.
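A minimal sketch of the interlock behavior this and the preceding provision describe: features run only while the measured dose meets the validated setpoint. All names and numbers are illustrative, not MAHC text.

```python
def uv_interlock_ok(measured_dose_mj_cm2: float, setpoint_mj_cm2: float) -> bool:
    """True if features may operate; False means shut down and alarm."""
    return measured_dose_mj_cm2 >= setpoint_mj_cm2

if not uv_interlock_ok(measured_dose_mj_cm2=38.0, setpoint_mj_cm2=40.0):
    print("ALARM: UV dose below validated setpoint; feature pumps disabled")
```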
Operation UV systems shall not operate if the RECIRCULATION SYSTEM is not operating.
Calibrated UV Sensors The UV equipment shall be complete with calibrated UV sensors, which record the output of all the UV lamps installed in a system.
Multiple Lamps Where multiple lamps are fitted, sufficient sensors shall be provided to measure each lamp.
Fewer Sensors If the design utilizes fewer sensors than lamps, the location of lamps and sensors shall be such that the output of all lamps is adequately measured.
Automated Shut Down The automated shut down of the UV equipment for any reason shall initiate a visual alarm or other indication which will alert staff on-site or remotely.
Signage Signage instructing staff or PATRONS to notify facility management shall be posted adjacent to the visual indication.
Not Staffed If the AQUATIC FACILITY is not staffed, the sign shall include a means to contact management whenever the AQUATIC FACILITY is in use.
Reports and Documentation The UV equipment shall be supplied with the appropriate validation reports and documentation for that equipment model.
Manufacturer Log Inactivation Chart This documentation will include a graph or chart indicating the dose at which the required log inactivation is guaranteed for the system in question.
System Performance Curves System performance curves that do not include such factors are not considered validated systems.
# A Minimum RED

Validation records shall include the graph indicating the minimum intensity reading required at the operational flow for the minimum RED required to achieve the required log reduction.
Minimum Intensity Shown Where systems are validated to a specific dose, the graph shall show the minimum intensity reading required at the operational flow for that dose.
Recommended Validation Protocol Based on the recommended validation protocol presented in the EPA Disinfection Guidance Manual, UV reactors certified by ÖNORM and DVGW for a Bacillus subtilis RED of 40 mJ/cm² shall be granted 3-log Cryptosporidium and 3-log Giardia inactivation credit as required in this CODE.
Ozone Disinfection
# A Third Party Validation

Ozone systems shall be validated by an ANSI-accredited third-party testing and certification organization to confirm that they provide the required log inactivation of Cryptosporidium in the full SECONDARY DISINFECTION SYSTEM flow after any side-stream has remixed into the full SECONDARY DISINFECTION SYSTEM flow and prior to return of the water to the AQUATIC VENUE or AQUATIC FEATURE recirculation treatment loop.
# A Suitable for Use

Ozone systems and all materials used therein shall be suitable for their intended use and be installed: 1) In accordance with all applicable requirements, 2) As CERTIFIED, LISTED, AND LABELED to a specific STANDARD by an ANSI-accredited certification organization, and 3) As specified by the manufacturer.
Ozone System Components An ozone system shall be a complete system consisting of the following (either skid-mounted or components): 1) Ozone generator, 2) Injector / injector manifold, 3) Reaction tank (contact tank) / mixing tank / degas tower, 4) Degas valve (if applicable, to vent un-dissolved gaseous ozone), 5) Ozone destruct (to destroy un-dissolved gaseous ozone), 6) ORP MONITOR / controller, 7) Ambient ozone MONITOR / controller, 8) Air flow meter / controller, and 9) Water BACKFLOW prevention device in gas delivery system.
Appropriate Installation These components (or skid) shall be installed as specified by the manufacturer to maintain the required system validation as noted above.
# ORP Monitor
The ozone generating equipment shall be designed, sized, and controlled utilizing an ORP MONITOR / controller (independent of and in addition to any halogen ORP MONITOR/controller).
Minimum ORP Reading The minimum ORP reading shall be no less than 600 mV measured directly after the ozone side-stream remixes into the full flow of the RECIRCULATION SYSTEM.
Maximum ORP Reading The maximum ORP reading shall be no greater than 900 mV.
Installation and Injection Point The ozone system injection point shall be located in the AQUATIC VENUE return line after the filtration and heating equipment, prior to the primary DISINFECTANT injection point.
Injection and Mixing The injection and mixing system shall not prevent the attainment of the recirculation rate required elsewhere in this CODE.
# A Gas Monitor / Controller

An ambient ozone gas MONITOR/controller located adjacent to the ozone reactor/contact tank shall be utilized to disable the ozone system in the event of an ozone gas leak.
Comply with Fire Code Ozone system installations shall comply with the NFPA 1 Fire Code or the International Fire Code and any other CODES, STANDARDS, or requirements as mandated by the AHJ.
Air Space Testing At the time the ozone generating equipment is installed, again after 24 hours of operation, and annually thereafter, the air space within 6 inches of the AQUATIC VENUE water shall be tested to confirm that gaseous ozone remains below 0.1 ppm (mg/L).
Results Results of the test shall be maintained on site for review by the AHJ.
Automatic Shut Down Automatic shutdown shall occur under any condition that would result in the ozone system not operating within the established parameters needed to achieve the required log inactivation of Cryptosporidium (i.e., low feed gas supply, loss of vacuum or pressure, high dew point in feed air, water in ozone gas delivery line).
Electrically Interlocked The equipment shall be electrically interlocked with AQUATIC VENUE pump(s) or automated feature supply valves, such that when the ozone equipment fails to produce the required dosage as measured by ORP, the AQUATIC VENUES do not operate.
ORP Reading Alarm or Visual Indication If the ORP reading for the ozone system drops below 600 mV (regardless of the cause) a visual alarm or other indication shall be initiated that will alert staff on-site or remotely.
Signage Signage to notify facility management shall be present adjacent to the visual alarm.
Regular Audits In order to ensure that the supplied ozone system meets all the requirements of the STANDARD, the manufacturer shall maintain a quality system audited on a regular basis to a recognized quality STANDARD.
Listed Ozone equipment shall be listed to NSF/ANSI Standard 50.
Reports and Documentation The ozone system shall be supplied with the appropriate validation reports and documentation for that equipment model.
Log Inactivation Chart Ozone validation reports shall include a graph, chart, or other documentation which clearly indicates the required operating parameters for which the required log inactivation is guaranteed for the system in question.
Inclusive This dose shall be inclusive of validation factors.
# Conspicuous and Accessible
The telephone or communication system or device shall be conspicuously provided and accessible to AQUATIC VENUE users such that it can be reached immediately.
Alternate Communication Systems Alternate systems, devices, or communication processes that meet the requirements of MAHC 5.8.5.2.1.2 are allowed with the approval of the AHJ in situations where a telephone is not logistically sound and an alternate means of communication is available.
Internal Communication The AQUATIC FACILITY design shall include a method for staff to communicate in cases of emergency.
Signage A sign shall be posted at the telephone providing dialing instructions, the address and location of the AQUATIC VENUE, and the telephone number.
Safety Equipment Required at Facilities with Lifeguards
# A Lifeguard Chair and Stand Placement
The designer shall coordinate with the owner and/or an aquatic consultant to consider the impact on BATHER surveillance zones for placement of chairs and stands designed to be permanently installed so as to provide an unobstructed view of the BATHER surveillance zones.
# A Lifeguard Chair and Stand Design
The chairs/stands shall be designed:
1) With no sharp edges or protrusions; 2) With sturdy, durable, and UV resistant materials; 3) To provide enough height to elevate the lifeguard to an eye level above the heads of the BATHERS; and 4) To provide safe access and egress for the lifeguard.
# A UV Protection for Chairs and Stands
# A Height

For the purposes of this section, height shall be measured from finished grade to the top of the BARRIER on the side outside of the BARRIER surrounding an AQUATIC VENUE.
Change in Grade Where a change in grade occurs at a BARRIER, height shall be measured from the uppermost grade to the top of the BARRIER.
Locked All gates or doors shall be capable of being locked from the exterior.
# Emergency Egress
Gates or doors shall be designed in such a way that they do not prevent egress in the event of an emergency.
Unauthorized Entry EXIT GATES or doors shall be constructed so as to prevent unauthorized entry from outside of the ENCLOSURE around the AQUATIC VENUE.
# Gates
Gates shall be at least equal in height at top and bottom to the BARRIER of which they are a component.
Piping Marked All piping shall be marked with directional arrows as necessary to determine flow direction.
# Valves Identified
All valves shall be clearly identified by number with a brass tag, plastic laminate tags, or permanently affixed alternate.
Valves Described Valves shall be described as to their function and referenced in the operating instruction manual.
Piping Diagram A water-resistant, easily read, wall-mounted piping diagram shall be furnished and installed inside the EQUIPMENT ROOM.
# A
No Openings There shall be no ducts, grilles, pass-throughs, or other openings connecting such EQUIPMENT ROOMS to CHEMICAL STORAGE SPACES, except as permitted by the fire CODE.
# A
Indoor Aquatic Facility Air Spaces containing combustion equipment, air-handling equipment, and/or electrical equipment and spaces sharing air distribution with spaces containing such equipment shall be isolated from INDOOR AQUATIC FACILITY air.
Certified, Listed, and Labeled Equipment Exception: Equipment CERTIFIED, LISTED, AND LABELED for the atmosphere shall be acceptable.
# A
No Openings There shall be no ducts, grilles, pass-throughs, or other openings connecting such spaces to an INDOOR AQUATIC FACILITY.
# A
Openings / Gaps Where building construction leaves any openings or gaps between floors and walls, or between walls and other walls, or between walls and ceilings, such gaps shall be permanently sealed against air leakage.
Indoor Aquatic Facility Access Dike Exception: This requirement may be met by a continuous dike not less than 4 inches (10.2 cm) high located entirely within the EQUIPMENT ROOM, which will prevent spills from reaching the INDOOR AQUATIC FACILITY floor.
Floor Drains Equipment-room floor drains may be required by the AHJ.
# A
Automatic Closer Such door or doors between an EQUIPMENT ROOM and an INDOOR AQUATIC FACILITY shall be equipped with an automatic closer.
Maintained to Close Reliably The door, frame, and automatic closer shall be installed and maintained so as to ensure that the door closes completely and latches without human assistance.
# A
Automatic Lock Such door or doors between an EQUIPMENT ROOM and an INDOOR AQUATIC FACILITY shall be equipped with an automatic lock.
# A
Restrict Access Such lock shall require a key or combination to open from the INDOOR AQUATIC FACILITY side.
One Hand Such lock shall be so designed and installed as to be opened by one hand from the inside of the room under all circumstances, without the use of a key or tool.
Warning Sign Such doors shall be equipped with permanent signage warning against unauthorized entry.
Gasket All sides of such doors shall be equipped with a gasket.
# Prevent Air Passage
The gasket shall be so installed as to prevent the passage of air, fumes, or vapors when the door is closed.
Not Relief This section shall not be construed as granting relief from MAHC 4.9.1.7.2.1.
Other Equipment Room Guidance

Automatic Closer Such doors shall be equipped with an automatic door closer that will completely close the door and latch without human assistance.
Air Pressure The door closer shall be able to close the door completely against the specified difference in air pressure.
Limit Switch Such doors shall be equipped with a limit switch and an alarm that will sound if the door remains open for more than 30 minutes. Where an open door will result in loss of air-pressure difference, this requirement can be met by the audible alarm required under MAHC 4.9.2.5.2.4.
# A
Interior Chemical Storage Spaces
# A
Pressure Difference This pressure difference shall be maintained by a continuously operated exhaust system used for no other purpose than to remove air from that one CHEMICAL STORAGE SPACE.
Separate Exhaust System Where more than one CHEMICAL STORAGE SPACE is present, a separate exhaust system shall be provided for each CHEMICAL STORAGE SPACE.
Airflow Rate The exhaust airflow rate shall be the greater of the: 1) OSHA requirements for working in such enclosed spaces, or 2) Amount needed to maintain the concentration of vapors or fumes below the PEL for the expected exposure time (defined by 29 CFR 1910.1000 (OSHA)) for each stored chemical, or 3) Amount specified by International Mechanical Code, or 4) Amount specified by the Uniform Mechanical Code, or 5) Amount needed to maintain the specified pressure difference.
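For illustration only, the following sketch shows the "greater of" selection the airflow provision above describes; every numeric rate is a hypothetical placeholder for a value the designer would compute from the listed sources, not a CODE value.

```python
# Minimal sketch (all values hypothetical): the required exhaust airflow is
# the greatest of the candidate rates listed in this section, expressed in
# consistent units (CFM here).

candidate_rates_cfm = {
    "osha_enclosed_space": 150.0,
    "below_pel_for_exposure_time": 220.0,
    "international_mechanical_code": 180.0,
    "uniform_mechanical_code": 175.0,
    "maintain_pressure_difference": 160.0,
}

required_rate_cfm = max(candidate_rates_cfm.values())
governing = max(candidate_rates_cfm, key=candidate_rates_cfm.get)
print(f"Design exhaust rate: {required_rate_cfm} CFM (governed by {governing})")
```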
# A
Alarm The function of this exhaust system shall be MONITORED continuously by an audible differential-pressure alarm system, which shall sound if the specified differential air pressure is not maintained for a period of 30 minutes.
Minimum Output This alarm shall have a minimum output level of 85 dBA at 10 feet (3.0 m).
Manual Reset The specified alarm shall require manual reset to silence it.
# Exemptions
Applying for Exemption An AQUATIC FACILITY seeking an initial exemption or an existing AQUATIC FACILITY claiming to be exempt according to applicable regulations shall contact the AHJ for application details/forms.
# Change in Exemption Status
An AQUATIC FACILITY that sought and received an exemption from a public regulation shall contact the AHJ if the conditions upon which the exemption was granted change so as to eliminate the exemption status.
# A Variances
# Variance Authority
The AHJ may grant a variance to the requirements of this CODE.
Applying for a Variance An AQUATIC FACILITY seeking a variance shall apply in writing with the appropriate forms to the AHJ.
# Application Components
The application shall include, but not be limited to:
1) A citation of the CODE section to which the variance is requested;
2) A statement as to why the applicant is unable to comply with the CODE section to which the variance is requested; 3) The nature and duration of the variance requested; 4) A statement of how the intent of the CODE will be met and the reasons why the public health or SAFETY would not be jeopardized if the variance was granted; and 5) A full description of any policies, procedures, or equipment that the applicant proposes to use to rectify any potential increase in health or SAFETY risks created by granting the variance.
Revoked Each variance shall be revoked when the permit attached to it is revoked.
Not Transferable A variance shall not be transferable unless otherwise provided in writing at the time the variance is granted.

Other Testing At the time the ozone generating equipment is installed, again after 24 hours of operation, and annually thereafter, the air space within 6 inches (15.2 cm) of the AQUATIC VENUE water shall be tested to verify a concentration of less than 0.1 ppm (mg/L) gaseous ozone.

Results Results of the test shall be maintained on site for review by the AHJ.

# Equipment Standards [N/A]

# Aquatic Venues Without a Barrier but Open to the Public

Where the AQUATIC VENUE does not have a BARRIER enclosing it per MAHC 4.8.6, and the AQUATIC FACILITY is open to the public:

1) The water shall be recirculated and treated to meet the criteria of this CODE; or 2) The water shall be drained; or 3) An approved SAFETY cover that is CERTIFIED, LISTED, AND LABELED to ASTM F1346-91 by an ANSI-accredited certification organization shall be installed; or 4) Where a safety cover is not used or not practical, access to the AQUATIC VENUE shall be restricted and routine checks of the integrity of the AQUATIC VENUE ENCLOSURE shall be made.
# A
UV Systems When a UV system is utilized as a SECONDARY DISINFECTION SYSTEM, the system shall be MONITORED and data recorded at a frequency consistent with MAHC Table 5.7.3.7.8.
Hygiene Facility Design Detailed scaled and dimensional drawings for each FLOATATION TANK facility shall show the location and number of all available HYGIENE FACILITIES provided, including dressing rooms, lockers, SHOWERS, lavatory, and toilet fixtures.

Other Approvals The approval shall also state that it is independent of all other required approvals, such as Building, Zoning, Fire, Electrical, Structural, and any other approvals required by local or state law or CODE, and that the applicant must separately obtain all other required approvals and permits.

4.12.10.1.3.9.1.3 Plan Review Coordination The AHJ shall coordinate their FLOATATION TANK plan review and communicate their approval with other agencies involved in the FLOATATION TANK facility construction.

4.12.10.1.3.9.1.4 Plan Review Report The AHJ shall provide a plan submission compliance review list to the FLOATATION TANK facility owner with the following information: 1) Categorical items marked satisfactory, unsatisfactory, not applicable, or insufficient information; 2) A comment section, keyed to the compliance review list, detailing unsatisfactory items and insufficient information; 3) Indication of the AHJ approval or disapproval of the AQUATIC FACILITY construction plans; 4) In the case of a disapproval, specific reasons for disapproval and the procedure for resubmittal; and 5) Reviewer's name, signature, and date of review.

4.12.10.1.3.9.1.5 Plans Maintained The FLOATATION TANK facility owner shall maintain at least one set of their own approved plans, made available to the AHJ on-site, for as long as the FLOATATION TANK facility is in operation.
Alteration Scope The FLOATATION TANK facility operator shall consult with the AHJ to determine if new or modified plans must be submitted for plan review and approval for other non-SUBSTANTIAL ALTERATIONS proposed.

4.12.10.1.3.9.3 Replacements

4.12.10.1.3.9.3.1 Replacement Approval Prior to replacing equipment, the FLOATATION TANK facility owner shall submit technical verification to the AHJ that all replacement equipment is equal to that which was originally approved and installed.
Replacement Equipment Equivalency The replacement of pumps, filters, feeders, controllers, SKIMMERS, flow-meters, valves, or other similar equipment with identical or substantially similar equipment may be done without submission to the AHJ for approval of new or altered AQUATIC FACILITY plans.
# Emergency Replacement
In emergencies, the replacement may be made prior to receiving the AHJ's approval, with the owner accepting responsibility for prompt corrective replacement if the equipment is not deemed equivalent by the AHJ.

Replacement Record Maintenance The AHJ shall provide the FLOATATION TANK facility owner written approval or disapproval of the proposed replacement equipment's equivalency.

4.12.10.1.3.9.3.5 Documentation Documentation of proposed, approved, and disapproved replacements shall be maintained in the AHJ's FLOATATION TANK facility files.

4.12.10.1.3.9.4 Compliance Certificate

4.12.10.1.3.9.4.1 Construction Compliance Certificate A certificate of construction compliance shall be submitted to the AHJ for all FLOATATION TANK facility plans for new construction and SUBSTANTIAL ALTERATIONS requiring AHJ approvals.

Certificate Preparation This certificate shall be prepared by a licensed professional and be within the scope of their practice as defined by the state or local laws governing professional practice within the jurisdiction of the permit-issuing official.

4.12.10.1.3.9.4.3 Certificate Statement The certificate shall also include a statement that the FLOATATION TANK facility, all equipment, and appurtenances have been constructed and/or installed in accordance with approved plans and specifications.

4.12.10.1.3.9.4.4 Systems Commissioning If commissioning or testing reports for systems such as FLOATATION TANK facility lighting, air handling, recirculation, filtration, and/or DISINFECTION are conducted, then those reports shall be included in the furnished documentation.

4.12.10.1.3.9.4.5 Maintenance Documentation of FLOATATION TANK facility new construction or SUBSTANTIAL ALTERATION plan compliance shall be maintained in the AHJ's FLOATATION TANK facility files.

4.12.10.1.3.9.5 Construction Permits

4.12.10.1.3.9.5.1 Building Permit for Construction Construction permits required in this CODE and all other applicable permits shall be obtained before any FLOATATION TANK facility may be constructed.

4.12.10.1.3.9.5.2 Remodeling Building Permit A construction permit or other applicable permits may be required from the AHJ before SUBSTANTIAL ALTERATION of a FLOATATION TANK facility.

4.12.10.1.3.9.5.3 Permit Issuance The AHJ shall issue a permit to the owner to operate the FLOATATION TANK facility: 1) After receiving a certificate of completion from the design professional verifying information submitted, and 2) When new construction, SUBSTANTIAL ALTERATIONS, or annual renewal requirements of this CODE have been met.

4.12.10.1.3.9.5.4 Permit Denial The permit (license) to operate may be withheld, revoked, or denied by the AHJ for noncompliance of the FLOATATION TANK facility with the requirements of this CODE, and the owner will be provided: 1) Specific reasons for disapproval and the procedure for resubmittal; 2) Notice of the rights to appeal this denial and procedures for requesting an appeal; and 3) Reviewer's name, signature, and date of review and denial.

4.12.10.1.3.9.5.5 Documentation Documentation of FLOATATION TANK facility permit renewal or denial shall be maintained in the AHJ's FLOATATION TANK facility files.
# Materials
Construction Material FLOATATION TANKS shall be constructed of impervious and structurally sound material(s).

Withstand Anticipated Loads The structure shall be capable of withstanding the anticipated stresses/loads for full and empty conditions.
# Hydrostatic Conditions
The structural design shall take into consideration hydrostatic conditions and the integration of the FLOATATION TANK with other structural conditions as required by applicable CODES.
Durability All materials shall be inert, non-toxic, resistant to corrosion, impervious, enduring, and resistant to damages related to environmental conditions of the installation region.
Watertight FLOATATION TANKS shall be designed to maintain their ability to retain the designed amount of water.
Smooth Finish All walls shall have a durable finish suitable for regular scrubbing and cleaning at the waterline.
# Equipment Standards
# General
4.12.10.3.1.1 Accredited Standards Where applicable, all equipment used or proposed for use in FLOATATION TANK facilities governed under this CODE shall be: 1) Of a proven design and construction, and 2) CERTIFIED, LISTED, AND LABELED to a specific STANDARD for the specified equipment use by an ANSI-accredited certification organization.

Providing Relief Nothing in this CODE shall be construed as providing relief from any applicable requirements of the NEC or other applicable CODE.
Indoor Aquatic Facilities A FLOATATION TANK and a room containing a FLOATATION TANK shall be considered a wet and CORROSIVE environment.

4.12.10.6 Water Supply/Wastewater Disposal

4.12.10.6.1 Water Supply Water serving a FLOATATION TANK facility shall be supplied from a potable water source.
# Sanitary Wastes
4.12.10.6.2.1 Discharged Wastewater from all PLUMBING FIXTURES in the entire FLOATATION TANK facility shall be discharged to a municipal sanitary sewer system, if available.
On-Site Sewer System If a municipal sanitary sewer system is not available, all wastewater, including filter backwash water, shall be disposed of to an on-site sewage disposal system that is properly designed to receive and treat the entire wastewater capacity.
# Circulation System
Hydraulically Balanced The RECIRCULATION SYSTEM shall be hydraulically balanced to ensure effective distribution of treated water.
4.12.10.7.2 Filter Sizing Filtration system components shall be designed and sized to meet the applicable volumetric TURNOVER requirements specified in MAHC 5.12.10.8.
Pump Sizing Pump(s) shall be designed and sized to meet the applicable volumetric TURNOVER requirements specified in MAHC 5.12.10.8.
# Submerged Suction Fittings or Suction Outlets
Submerged suction fittings or suction outlets shall be CERTIFIED, LISTED, AND LABELED to ANSI/APSP-16 2011 by an ANSI-accredited organization.
4.12.10.8 A Disinfection

4.12.10.8.1 Disinfection Types DISINFECTION shall be provided by either:
1) Ozone treatment system; or 2) UV treatment system.
# Ozone and UV Disinfection Systems
Ozone and UV DISINFECTION systems when used as the primary DISINFECTION system, shall meet the 3-log reduction of influent bacteria DISINFECTION efficacy as tested in accordance with the criteria specified in Annex H.1 of NSF/ANSI Standard 50-2016 at the design filtration flow rate.
# Ozone Disinfection
When an Ozone DISINFECTION system is used, the criteria for ozone level and ozone production testing specified in Annex H.2 and H.3 respectively, of NSF/ANSI Standard 50-2016 must be met.
Ozone Levels Ozone levels in the FLOATATION TANK SOLUTION shall not exceed 0.1 ppm (mg/L).

4.12.10.8.4 UV Disinfection When a UV DISINFECTION system is used as the primary DISINFECTION system, the following must be provided: 1) Calibrated UV sensors shall be installed per MAHC 4.7.3.3.3.5; and 2) If the UV equipment fails to produce the required dosage as measured by the automated sensor, an alarm or other indication shall be initiated to alert staff.
4.12.10.9 A Ventilation

4.12.10.9.1 Room Air Handling System AIR HANDLING SYSTEM(S) shall be provided when necessary for the room containing FLOATATION TANK(S) and shall be designed, constructed, and installed to support the health and SAFETY of the FLOATATION TANK facility PATRONS.
4.12.10.9.2 Tank Air Quality Ventilation serving the FLOATATION TANK shall be provided when necessary to ensure acceptable air quality for human health within the FLOATATION TANK.
4.12.10.10 Floors Floors in rooms containing FLOATATION TANK(S) shall have a smooth, easy-to-clean, impervious-to-water, slip-resistant surface.
4.12.10.10.1 Coefficient of Friction All surfaces required to be slip-resistant shall have a minimum dynamic coefficient of friction at least equal to the requirements of ANSI A137.1 for that installation, as measured by the DCOF AcuTest.
Floor Drains Floor drains shall be installed in rooms containing FLOATATION TANKS and in dressing areas where PLUMBING FIXTURES are located.

Opening Grill Covers Floor drain opening grill covers shall be ½-inch (1.3 cm) or less in width or diameter.

# Aquatic Venues With a Barrier and Closed to the Public

Where the AQUATIC VENUE has a BARRIER enclosing it per MAHC 4.8.6, and the AQUATIC FACILITY is closed to the public:

1) The water shall be recirculated and treated to meet the criteria of this CODE; or 2) The water shall be drained, and the AQUATIC VENUE shall be staffed to keep BATHERS out; or 3) A temporary BARRIER enclosing the AQUATIC VENUE shall be installed to keep bathers out, and routine checks of the integrity of the temporary AQUATIC VENUE BARRIER shall be made; or 4) An approved SAFETY cover that is CERTIFIED, LISTED, AND LABELED to ASTM F1346-91 by an ANSI-accredited certification organization shall be installed.
# Aquatic Venues Without a Barrier and Closed to the Public
Where the AQUATIC VENUE does not have a BARRIER enclosing it per MAHC 4.8.6, and the AQUATIC FACILITY is closed to the public:
1) The water shall be recirculated and treated to meet the criteria of this CODE; or 2) The water shall be drained; or 3) An approved SAFETY cover CERTIFIED, LISTED, AND LABELED to ASTM F1346-91 by an ANSI-accredited certification organization shall be installed; or 4) Where a safety cover is not used or not practical, access to the AQUATIC FACILITY shall be restricted and routine checks of the integrity of the AQUATIC FACILITY ENCLOSURE shall be made.
# A
Reopening An owner or operator of a closed AQUATIC VENUE shall verify that the AQUATIC VENUE meets all applicable criteria of this CODE before reopening the AQUATIC VENUE.
# A
Preventive Maintenance Plan
# Written Plan
# Preventive Maintenance Plan Available
A written comprehensive preventive maintenance plan for each AQUATIC VENUE shall be available at the AQUATIC FACILITY.
# Contents
The AQUATIC FACILITY preventive maintenance plan shall include details and frequency of owner/operator's planned routine facility inspection, maintenance, and replacement of recirculation and water treatment components.
# A Facility Documentation
# Original Plans and Specifications Available
A copy of the approved plans and specifications for each AQUATIC VENUE constructed after the adoption of this CODE shall be available at the AQUATIC FACILITY.
# Equipment Inventory
A comprehensive inventory of all mechanical equipment associated with each AQUATIC VENUE shall be available at the AQUATIC FACILITY.
# Inventory Details
This inventory shall include:
1) Equipment name and model number, 2) Manufacturer and contact information, 3) Local vendor/supplier and technical representative, if applicable, and 4) Replacement or service dates and details.
# Equipment Manuals
Operation manuals for all mechanical equipment associated with each AQUATIC VENUE shall be available at the AQUATIC FACILITY.
No Manual If no manufacturer's operation manual is available, then the AQUATIC FACILITY should create a written document that outlines STANDARD operating procedures for maintaining and operating the piece of equipment.
# A
Pool Shell Maintenance

5.5.6.1 Cracking
Repaired CRACKS shall be part of the daily inspection process and shall be repaired when they change sufficiently to increase the potential for: 1) Leakage, 2) Trips or falls, 3) Lacerations, or 4) Impacts to the ability to properly clean and maintain the AQUATIC VENUE area.
# Document Cracks
Surface CRACKS under 1/8 inch (3.2 mm) wide shall be documented and MONITORED for any movement or change including opening, closing, and/or lengthening.
# Sharp Edges
Any sharp edges shall be removed.
# Indoor / Outdoor Environment
# Lighting
# Lighting Maintained
# A Light Levels
Lighting systems, including emergency lighting, shall be maintained in all PATRON areas and maintenance areas, to ensure the required lighting levels are met as specified in MAHC 4.6.1.
# A Main Drain Visible
# A
Underwater Lighting Underwater lights, where provided, shall be operational and maintained as designed.
Ground-Fault Circuit Interrupter Branch circuits that supply underwater lights operating at more than the Low Voltage Contact Limit as defined in NEC 680.2 shall be GFCI protected.
Unprotected Light Circuit Operation of an unprotected underwater light circuit shall be prohibited.
Cracked Lenses CRACKED lenses that are physically intact on lights shall be replaced.
# Intact Lenses
The AQUATIC VENUE shall be immediately closed if CRACKED lenses are not intact and the lenses shall be replaced before re-opening.
# A Glare
# A Assessments
Reduction Windows and lighting equipment shall be adjusted, if possible, to minimize glare and excessive reflection on the water surface.
# Night Swimming
Night swimming shall be prohibited unless required light levels in accordance with MAHC 4.6.1 are provided.
Hours Night swimming shall be considered one half hour before sunset to one half hour after sunrise.
Emergency Lighting Emergency lighting shall be tested and maintained according to manufacturer's recommendations.
# A
Indoor Aquatic Facility Ventilation
Purpose AIR HANDLING SYSTEMS shall be maintained and operated by the owner/operator to protect the health and SAFETY of the facility's PATRONS.
Original Characteristics AIR HANDLING SYSTEMS shall be maintained and operated to comply with all requirements of the original system design, construction, and installation.
# Indoor Facility Areas
# System Operation
The AIR HANDLING SYSTEM shall operate continuously, including providing the required amount of outdoor air.
# Operation Outside of Operating Hours Exception:
During non-use periods, the amount of outdoor air may be reduced by no more than 50% as long as acceptable air quality is maintained.
Manuals/Commissioning Reports The QUALIFIED OPERATOR shall maintain a copy of the AIR HANDLING SYSTEM manuals and commissioning reports at the INDOOR AQUATIC FACILITY.
# Records
The owner shall ensure documents are maintained at the INDOOR AQUATIC FACILITY to be available for inspection, recording the following: 1) A log recording the set points of operational parameters set during the commissioning of the AIR HANDLING SYSTEM and the actual readings taken at least once daily; 2) Maintenance conducted to the system including the dates of filter changes, cleaning, and repairs; 3) Dates and details of modifications to the AIR HANDLING SYSTEM; and 4) Dates and details of modifications to the operating scheme.
Indoor / Outdoor Aquatic Facility Electrical Systems and Components
# A Electrical Repairs
# Local Codes
Repairs or alterations to electrical equipment and associated equipment shall preserve compliance with the NEC, or with applicable local CODES prevailing at the time of construction, or with subsequent versions of those CODES.
# Immediately Repaired
All defects in the electrical system shall be immediately repaired.
# Wiring
Electrical wiring, whether permanent or temporary, shall comply with the NEC or with applicable local CODE.
# A Electrical Receptacles
# New Receptacles
The installation of new electrical receptacles shall be subject to electrical-construction requirements of this CODE and applicable local CODE.
# Repairs
Repairs or maintenance to existing receptacles shall maintain compliance with the NEC and with 29 CFR 1910.304(b)(3)(ii).
# Replacement
Replacement receptacles shall be of the same type as the previous ones (e.g., grounding-type receptacles shall be replaced only by grounding-type receptacles), with all grounding conductors connected and proper wiring polarity preserved.
# Substitutions
Where the original-type of receptacle is no longer available, a replacement and installation shall be in accordance with applicable local CODE.
# A Ground-Fault Circuit Interrupter
# Manufacturer's Recommendations
Where receptacles are required to be protected by GFCI devices, the GFCI devices shall be tested following the manufacturer's recommendations.
# A Grounding
# Maintenance and Repair
Maintenance or repair of electrical circuits or devices shall preserve grounding compliance with the NEC or with applicable local CODES.
# Grounding Conductors
Grounding conductors that have been disconnected shall be reinspected as required by the local building CODE authority prior to the AQUATIC VENUE being used by BATHERS.
# Damaged Conductors
Damaged grounding conductors and grounding electrodes shall be repaired immediately.
# Damaged Conductor Repair

Damaged grounding conductors or grounding electrodes associated with recirculation or DISINFECTION equipment or with underwater lighting systems shall be repaired by a qualified person who has the proper and/or necessary skills, training, or credentials to carry out this task.
# Public Access
The public shall not have access to the AQUATIC VENUE until such grounding conductors or grounding electrodes are repaired.
# Venue Closure
The AQUATIC VENUE with damaged grounding conductors or grounding electrodes, that are associated with recirculation or DISINFECTION equipment or with underwater lighting systems, shall be closed until repairs are completed and inspected by the AHJ.
# Clear Water
Backwashing should be continued until the water leaving the filter is clear.
# A
Backwashing Frequency Backwashing of each filter shall be performed at a differential pressure increase over the initial clean filter pressure, as recommended by the filter manufacturer, unless the system can no longer achieve the design flow rate.
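For illustration only, the sketch below expresses the backwash trigger above as a simple check: backwash when the pressure differential has risen by the manufacturer-recommended amount over the clean-filter baseline, or sooner if the design flow rate can no longer be achieved. All numbers are hypothetical.

```python
# Minimal sketch (hypothetical numbers): deciding when a filter is due for
# backwash per this section.

def backwash_due(clean_dp_psi, current_dp_psi, rise_trigger_psi,
                 current_flow_gpm, design_flow_gpm):
    pressure_trigger = (current_dp_psi - clean_dp_psi) >= rise_trigger_psi
    flow_trigger = current_flow_gpm < design_flow_gpm
    return pressure_trigger or flow_trigger

# Example: only an 8 psi rise against a 10 psi trigger, but flow has dropped
# below the design flow rate, so backwash is due anyway.
print(backwash_due(12.0, 20.0, 10.0, 480.0, 500.0))  # True (flow trigger)
```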
# A
Backwash Scheduling Backwashes shall be scheduled to take place when the AQUATIC VENUE is closed for BATHER use.
Backwashing Without Bathers Present BATHERS shall not be permitted to reenter the AQUATIC VENUE until the RESPONSIBLE SUPERVISOR or QUALIFIED OPERATOR ensures that the recirculation pump and chemical feeders have restarted and run for a minimum of 5 minutes following completion of backwashing.
Backwashing With Bathers Present A filter may be backwashed while BATHERS are in the AQUATIC VENUE if all of the following criteria are met: 1) Multiple filters are used, and 2) The filter to be backwashed can be isolated from the remaining RECIRCULATION SYSTEM and filters, and 3) The recirculation and filtration system still continues to run as per this CODE, and 4) The chemical feed lines inject at a point where chemicals enter the RECIRCULATION SYSTEM after the isolated filter and where they can mix as needed.
# Filter Media Inspections
Sand or other granular media shall be inspected for proper depth and cleanliness at least one time per year, replacing the media when necessary to restore depth or cleanliness.
# Vacuum Sand Filters
The manual air release valve of the filter shall be opened as necessary to remove any air that collects inside of the filter as well as following each backwash.
# A Filtration Enhancing Products
Products used to enhance filter performance shall be used according to manufacturers' recommendations.
# Precoat Filters
# Appropriate
The appropriate media type and quantity as recommended by the filter manufacturer shall be used.
# A
Return to the Pool Precoating of the filters shall be performed in closed-loop (precoat) mode to minimize the potential for media or debris to be returned to the POOL, unless the filters are CERTIFIED, LISTED, AND LABELED to NSF/ANSI 50 by an ANSI-accredited certification organization to return water to the POOL during the precoat process.
# A Operation
Filter operation shall be per manufacturer's instructions.
Uninterrupted Flow Flow through the filter shall not be interrupted when switching from precoat mode to filtration mode unless the filters are CERTIFIED, LISTED, AND LABELED to NSF/ANSI 50 by an ANSI-accredited certification organization to return water to the POOL during the precoat process.
Flow Interruption When a flow interruption occurs on precoat filters not designed to bump, the media shall be backwashed out of the filter and a new precoat established according to the manufacturer's recommendations.
Maximum Precoat Media Load Systems designed to flow to waste while precoating shall use the maximum recommended precoat media load permitted by the filter manufacturer to account for media lost to the waste stream during precoating.
# A
Cleaning Backwashing or cleaning of filters shall be performed at a differential pressure increase over the initial clean filter pressure as recommended by the filter manufacturer unless the system can no longer achieve the design flow rate.
# Continuous Feed Equipment
Continuous filter media feed equipment tank agitators shall run continuously.
# Batch Application
Filter media feed may also be performed via batch application.
# A Bumping
Bumping a precoat filter shall be performed in accordance with the manufacturer's recommendations.
# A Filter Media
# A
Diatomaceous Earth Diatomaceous earth (DE), when used, shall be added to precoat filters in the amount recommended by the filter manufacturer and in accordance with the specifications for the filter listing and labeling to NSF/ANSI 50 by an ANSI-accredited certification organization.
Perlite Perlite, when used, shall be added to precoat filters in the amount recommended by the filter manufacturer and in accordance with the specifications for the filter listing and labeling to NSF/ANSI 50 by an ANSI-accredited certification organization.
Cartridge Filters
# A
Approved Cartridge filters shall be operated in accordance with the filter manufacturer's recommendation and be CERTIFIED, LISTED, AND LABELED to NSF/ANSI 50 by an ANSI-accredited certification organization.
# A Filtration Rates
The maximum operating filtration rate for any surface-type cartridge filter shall not: 1) Exceed the lesser of either the manufacturer's recommended filtration rate or 0.375 GPM per square foot (0.26 L/s/m²), or 2) Drop below the design flow rate required to achieve the TURNOVER RATE for the AQUATIC VENUE.
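As a rough illustration of the provision above (not CODE text), the following sketch computes the allowable flow window for a cartridge filter: the ceiling is the lesser of the manufacturer's rate or 0.375 GPM/sq ft times the filter area, and the floor is the design turnover flow. The filter area, manufacturer rate, and turnover flow shown are hypothetical.

```python
# Minimal sketch (hypothetical inputs): allowable flow window for a
# surface-type cartridge filter per this section.

def cartridge_flow_window(filter_area_sqft, mfr_rate_gpm_per_sqft,
                          turnover_flow_gpm):
    # Ceiling: the lesser of the manufacturer's rate or 0.375 GPM/sq ft.
    max_flow = min(mfr_rate_gpm_per_sqft, 0.375) * filter_area_sqft
    # Floor: the design flow needed to achieve the required TURNOVER RATE.
    return turnover_flow_gpm, max_flow

low, high = cartridge_flow_window(400.0, 0.5, 120.0)
print(f"Operate between {low} and {high} GPM")  # between 120.0 and 150.0 GPM
```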
# A
Filter Elements Active filter cartridges shall be exchanged with clean filter cartridges at a differential pressure increase over the initial clean filter pressure as recommended by the filter manufacturer unless the system can no longer achieve the design flow rate.
# A
Cleaning Procedure The filter housing and filter cartridge shall be cleaned per manufacturer's recommendation.
Filter Housing Cleaning The following procedures shall be implemented to clean the filter housing when no manufacturer instructions are established: 1) Drain filter housing to waste; 2) Remove the filter cartridges from the housing; 3) Clean the inside of the filter housing with a brush and mild detergent to remove biofilms and algae; 4) Rinse thoroughly; and 5) Mist the filter housing walls with CHLORINE bleach at a 1:10 dilution.
Filter Cartridge Cleaning The following procedures shall be implemented to clean the filter cartridge when no manufacturer instructions are established.
Rinse Thoroughly The cartridge shall be rinsed thoroughly with a spray nozzle.
# A
Pressure Washer A pressure washer shall not be used to clean cartridge filters.
Degrease Cartridge filters shall be degreased each time they are cleaned per the procedures outlined in this section.
Soak The cartridge shall be soaked overnight in one of the following solutions: 1) A cartridge filter cleaner/degreaser per instructions on the product label, or 2) A solution of water with 1 cup (240 mL) of tri-sodium phosphate (TSP) per 5 gallons (18.9 L) of water, or 3) One cup (240 mL) of automatic dishwashing detergent per 5 gallons (18.9 L) of water.
Acid Muriatic acid or products with acid in them shall never be used prior to degreasing.
Rinse The filter cartridge shall be removed from the degreaser solution and rinsed thoroughly.
Sanitize The filter cartridge shall be SANITIZED by soaking for 1 hour in a bleach solution made by mixing 1 quart (950 mL) of household bleach per 5 gallons (18.9 L) of water.
Rinse After soaking for 1 hour, the SANITIZED filter cartridge shall be removed and rinsed thoroughly.
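For illustration only, the sketch below scales the mixing ratios above (1 cup, 240 mL, of TSP or detergent per 5 gallons, 18.9 L, of water for degreasing; 1 quart, 950 mL, of bleach per 5 gallons for sanitizing) to an arbitrary batch size; the 10-gallon tank in the example is hypothetical.

```python
# Minimal sketch: scaling the mixing ratios in this section to any batch size.

def mix_volume_ml(additive_ml_per_batch, batch_water_l, target_water_l):
    """Additive volume (mL) needed for target_water_l liters of water."""
    return additive_ml_per_batch * (target_water_l / batch_water_l)

# Example: a hypothetical 10-gallon (37.9 L) soak tank.
print(mix_volume_ml(240.0, 18.9, 37.9))  # ~481 mL of TSP/detergent
print(mix_volume_ml(950.0, 18.9, 37.9))  # ~1905 mL of household bleach
```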
Spare Cartridge One full set of spare cartridges shall be maintained on site in a clean and dry condition.
# Water Treatment Chemicals and Systems
Disinfectants Bromine-based DISINFECTANTS may be applied to AQUATIC VENUES and SPAS through the addition of an organic bromine compound (1,3-dibromo-5,5-dimethylhydantoin (DBDMH) or 1-bromo-3-chloro-5,5-dimethylhydantoin (BCDMH)).
# A
Minimum Bromine Concentrations Minimum bromine concentrations shall be maintained at all times in all areas as follows: 1) All AQUATIC VENUES: 3.0 ppm (mg/L), and 2) SPAS: 4.0 ppm (mg/L).
# A
Maximum Bromine Concentrations The maximum bromine concentration shall not exceed 8.0 ppm (mg/L) at any time the AQUATIC VENUE is open to BATHERS.
# Stabilizers
Replacement Times These AQUATIC VENUES shall discontinue the use of CYA or stabilized CHLORINE products no later than 4 years after adoption of this CODE.
# Aquatic Venues
The CYA level at all AQUATIC VENUES shall remain at or below 90 ppm (mg/L).

# A Compressed Chlorine Gas

Fan An electric motor-driven fan shall take suction from near the floor level of the ENCLOSURE and discharge at a suitable point to the exterior above ground level.
Fan Switch The fan switch shall be able to be operated from outside of the ENCLOSURE.
Trained Operator Any person who operates such chlorinating equipment shall be trained in its use.
Stop Use Facilities shall stop the use of CHLORINE gas if specific SAFETY equipment and training requirements, along with local CODE considerations, cannot be met.
# A Salt Electrolytic Chlorine Generators, Brine Electrolytic Chlorine or Bromine Generators
Pool Grade Salt Only POOL grade salt that has been CERTIFIED, LISTED, AND LABELED to either NSF/ANSI Standard 50 or NSF/ANSI Standard 60 by an ANSI-accredited certification organization, and/or has an EPA FIFRA registration, shall be used.
Maintained The saline content of the POOL water shall be maintained in the required range specified by the manufacturer.
Cleaning Cleaning of electrolytic plates shall be performed as recommended by the manufacturer.
Corrosion Protection Corrosion protection systems shall be maintained in the POOL basin.
# A
Secondary or Supplemental Treatment Systems
# A
Log Inactivation Secondary UV systems shall be operated and maintained not to exceed the maximum validated flow rate and meet or exceed the minimum validated output intensity needed to achieve the required dose.
Free Available Chlorine and Bromine Levels Use of UV does not modify any other water quality requirements.
# A
Calibrated Sensors UV sensors shall be calibrated at a frequency in accordance with manufacturer recommendations.
Records Records of calibration shall be maintained by the facility.
# Ozone
Log Inactivation Ozone systems shall be operated and maintained according to the manufacturer's instructions to maintain the required design performance.
Residual Ozone Concentration Residual ozone concentration in the AQUATIC VENUE water shall remain below 0.1 ppm (mg/L).
Free Available Chlorine and Bromine Levels Use of ozone does not modify any other water quality requirements.
Standard Operating Manual A printed STANDARD operating manual shall be provided containing information on the operation and maintenance of the ozone generating equipment, including the responsibilities of workers in an emergency.
Bather Re-entry BATHERS shall not be permitted to reenter the AQUATIC VENUE until the RESPONSIBLE SUPERVISOR or QUALIFIED OPERATOR has identified the cause of the interlock activation and/or recirculation pump interruption, has manually overridden the interlock to restart the recirculation pump and chemical feeder, and UV or ozone system, if applicable, and these systems have run for 5 minutes following restart.
Fail Proof Safety Features Chemical feed system components shall incorporate failure-proof features so the chemicals cannot feed directly into the AQUATIC VENUE, the VENUE piping system not associated with the RECIRCULATION SYSTEM, source water supply system, or area within proximity of the AQUATIC VENUE DECK under any type of failure, low flow, or interruption of operation of the equipment to prevent BATHER exposure to high concentrations of AQUATIC VENUE treatment chemicals.
Maintained All chemical feed equipment shall be maintained in good working condition.
Challenge Testing The system and its components shall be tested on a regular basis to confirm that all safety features are functioning correctly.
Once Monthly or Specified by Manufacturer Unless specified otherwise by the device manufacturer, once monthly challenge testing of the chemical feeder interlock system shall be conducted by turning off recirculation pump flow to the chemical feeder and ensuring triggered shutoff of chemical feeder occurs via electrical interlock with flow meter/flow switch, paddle wheel, or other device being used to assess flow to chemical feeder.
Following Confirmation Following confirmation of triggered shutoff, recirculation flow shall immediately be restarted.
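For illustration only, the sketch below models the monthly challenge test described above: stop recirculation flow to the feeder, confirm the interlock shuts the feeder off, then restart flow immediately. The controller class and its methods are hypothetical stand-ins, not any real equipment API.

```python
# Minimal sketch (hypothetical names/API): the monthly feeder-interlock
# challenge test described in this section.

class MockController:
    """Stand-in for a feeder controller with a working flow interlock."""
    def __init__(self):
        self.flow_on = True
    def stop_recirculation_flow(self):
        self.flow_on = False
    def start_recirculation_flow(self):
        self.flow_on = True
    def feeder_is_running(self):
        return self.flow_on  # interlocked: feeder runs only when flow exists

def challenge_test(controller) -> bool:
    controller.stop_recirculation_flow()         # simulate loss of flow
    passed = not controller.feeder_is_running()  # feeder must shut off
    controller.start_recirculation_flow()        # restart flow immediately
    return passed

print(challenge_test(MockController()))  # True = interlock passed the test
```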
Insufficient Size/Capacity If it is determined that the chemical feed system is incapable of maintaining the minimum required DISINFECTANT level at all times in accordance with the MAHC, additional capacity shall be designed and installed per MAHC 4.7.3.2.2.
Chemical Feeders Chemical feeders shall be installed such that they are not over CHEMICAL STORAGE containers, other feeders, or electrical equipment.
# Dry Chemical Feeders
Chemicals shall be kept dry to avoid clumping and potential feeder plugging for mechanical gate or rotating screw feeders.
# Cleaned and Lubricated
The feeder mechanism shall be cleaned and lubricated to maintain a reliable feed system.
Venturi Inlet Adequate pressure shall be maintained at the venturi INLET to create the vacuum needed to draw the chemical into the RECIRCULATION SYSTEM.
# Erosion Feeders
Erosion feeders shall only have chemicals added that are approved by the manufacturer.
Opened A feeder shall only be opened after the internal pressure is relieved by a bleed valve.
Maintained Erosion feeders shall be maintained according to the manufacturer's instructions.
# First Aid Equipment
# A Location for First Aid
The AQUATIC FACILITY shall have designated locations for emergency and first aid equipment.
# A
First Aid Supplies An adequate supply of first aid supplies shall be continuously stocked and shall include, at a minimum, the following: 1) A First Aid Guide,
# Signage
# A

Sign Indicating First Aid Location Signage shall be provided at the AQUATIC FACILITY or each AQUATIC VENUE, as necessary, which clearly identifies the following: 1) First aid location(s), and 2) Emergency telephone(s) or approved communication system or device.
# A
Emergency Dialing Instructions A permanent sign providing emergency dialing directions and the AQUATIC FACILITY address shall be posted and maintained at the emergency telephone, system, or device.
# A
Management Contact Info A permanent sign shall be conspicuously posted and maintained displaying contact information for emergency personnel and AQUATIC FACILITY management.
# A
Hours of Operation A sign shall be posted stating the following: 1) The operating hours of the AQUATIC FACILITY, and 2) Unauthorized use of the AQUATIC FACILITY outside of these hours is prohibited.
Safety Equipment Required at Facilities with Lifeguards
# A UV Protection for Chairs and Stands
Lifeguards and lifeguard positions shall be provided protection from UV radiation exposure.
# A Backboard
At least one backboard constructed of material easily SANITIZED/DISINFECTED shall be provided.
# Backboard Number and Location
The number and location of backboards shall be sufficient to effect a 2-minute response time to the location of the incident.
Backboard Components The backboard shall be equipped with a head immobilizer and sufficient straps to immobilize a person to the backboard.
# A Rescue Tube Immediately Available

Each QUALIFIED LIFEGUARD conducting PATRON surveillance with the responsibility of in-water rescue in less than 3 feet (0.9 m) of water shall have a rescue tube immediately available for use.
# A Rescue Tube on Person
Each QUALIFIED LIFEGUARD conducting PATRON surveillance in a water depth of 3 feet (0.9 m) or greater shall have a rescue tube on his/her person in a rescue ready position.
# A
Identifying Uniform QUALIFIED LIFEGUARDS shall wear attire that readily identifies them as members of the AQUATIC FACILITY'S lifeguard staff.
# A
Signal Device A whistle or other signaling device shall be worn by each QUALIFIED LIFEGUARD conducting PATRON surveillance for communicating to users and/or staff.
# A Sun Blocking Methods
Lifeguards Responsible QUALIFIED LIFEGUARDS are responsible for protecting themselves from UV radiation exposure and wearing appropriate sunglasses and sunscreen.
# A
Polarized Sunglasses When glare impacts the ability to see below the water's surface, QUALIFIED LIFEGUARDS shall wear polarized sunglasses while conducting BATHER surveillance.
# A
Personal Protective Equipment Personal protective devices including a resuscitation mask with one-way valve and non-latex, non-powdered, one-use disposable gloves shall be worn in the form of a hip pack or attached to the rescue tube of all QUALIFIED LIFEGUARDS on-duty.
# A
Rescue Throwing Device AQUATIC FACILITIES with one QUALIFIED LIFEGUARD shall provide and maintain a U.S. Coast Guard-approved aquatic rescue throwing device as per the specifications of MAHC 5.8.5.4.1.
# A
Reaching Pole AQUATIC FACILITIES with one QUALIFIED LIFEGUARD shall provide and maintain a reaching pole as per the specifications of MAHC 5.8.5.4.2.
# Safety Equipment and Signage Required at Facilities without Lifeguards
# A
Throwing Device AQUATIC VENUES whose depth exceeds 2 feet (61.0 cm) of standing water shall provide and maintain a U.S. Coast Guard-approved aquatic rescue throwing device, with at least a quarter-inch (6.3 mm) thick rope whose length is 50 feet (15.2 m) or 1.5 times the width of the POOL, whichever is less.
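As a small worked example of the rope-length rule above (illustration only, not CODE text), the required length is the lesser of 50 feet or 1.5 times the POOL width; the pool widths used below are hypothetical.

```python
# Minimal sketch: required throw-rope length per this section.

def throw_rope_length_ft(pool_width_ft: float) -> float:
    """Lesser of 50 feet or 1.5 times the POOL width."""
    return min(50.0, 1.5 * pool_width_ft)

print(throw_rope_length_ft(25.0))  # 37.5 ft for a 25-ft-wide pool
print(throw_rope_length_ft(60.0))  # capped at 50.0 ft for a 60-ft-wide pool
```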
# Throwing Device Location
The rescue throwing device shall be located in the immediate vicinity to the AQUATIC VENUE and be accessible to BATHERS.
# A
Reaching Pole AQUATIC VENUES whose depth exceeds 2 feet (61 cm) of standing water shall provide and maintain a reaching pole 12 feet (3.7 m) to 16 feet (4.9 m) in length, non-telescopic, light in weight, and with a securely attached Shepherd's Crook with an aperture of at least 18 inches (45.7 cm).
# Reaching Pole Location
The reaching pole shall be located in the immediate vicinity to the AQUATIC VENUE and be accessible to BATHERS and PATRONS.
Non-Conductive Material Reaching poles provided by the AQUATIC FACILITY after the adoption date of this CODE shall be of non-conductive material.
# A
CPR Posters CPR posters that are up to date with latest CPR programs and protocols shall be posted conspicuously at all times.
# A Imminent Health Hazard Sign
A sign shall be posted outlining the IMMINENT HEALTH HAZARDS, which require AQUATIC VENUE or AQUATIC FACILITY closure as defined in this CODE per MAHC 6.6.3.1 and a telephone number to report problems to the owner/operator.
# A Additional Signage
For any AQUATIC VENUE with standing water, a sign shall be posted signifying that a QUALIFIED LIFEGUARD is not on duty and that the following rules apply: 1) Persons under the age of 14 cannot be in the AQUATIC VENUE without direct adult supervision, meaning children shall be in adult view at all times, and 2) Youth and childcare groups, training, lifeguard courses, and swim lessons are not allowed without a QUALIFIED LIFEGUARD providing PATRON surveillance.
# Barriers and Enclosures
# General Requirements
Construction Requirements (N/A)
# Gates and Doors
# Self-Closing and Latching
# Exception
Gates or doors used solely for after-hours maintenance shall remain locked at all times when not in use by staff.
Propping Open Required self-closing and self-latching gates or doors serving as part of a guarded ENCLOSURE may be maintained in the open position when the AQUATIC VENUE is open and staffed as required.
5.9 A Filter/Equipment Room

5.9.1 Chemical Storage
# A
Local Codes CHEMICAL STORAGE shall be in compliance with local building and fire CODES.
# A
OSHA and EPA Chemical handling shall be in compliance with OSHA and EPA regulations.
# A Safety Data Sheets
For each chemical, STORAGE, handling, and use of the chemical shall be in compliance with the manufacturer's SDS and labels.
Access Prevention AQUATIC VENUE chemicals shall be stored to prevent access by unauthorized individuals.
# A
Protected AQUATIC VENUE chemicals shall be stored so that they are protected from getting wet.
# A
No Mixing AQUATIC VENUE chemicals shall be stored so that if the packages were to leak, no mixing of incompatible materials would occur.
Safety Data Sheets Consulted SDS shall be consulted for incompatibilities.
# A Ignition Sources
Possible ignition sources, including but not limited to gasoline, diesel, natural gas, or gas-powered equipment such as lawn mowers, motors, grills, POOL heaters, or portable stoves shall not be stored or installed in the CHEMICAL STORAGE SPACE.
# Smoking
Smoking shall be prohibited in the CHEMICAL STORAGE SPACE.
# A Lighting
Lighting shall be at minimum 30 footcandles (323 lux) to allow operators to read labels on containers throughout the CHEMICAL STORAGE SPACE and pump room.
# A

Personal Protective Equipment PPE shall be available as indicated on the chemical SDSs.
Storage Chemicals shall be stored away from direct sunlight, temperature extremes, and high humidity.
Single Container A single container of a particular chemical that has been opened and that is currently in use in the pump room may be kept in a staging area of the pump room only if the chemical(s) will be protected from exposure to heat and moisture.
# Separate
The CHEMICAL STORAGE SPACE shall be separate from the EQUIPMENT ROOM.
# Waiver
# Chemical Handling
# Identity
Containers of chemicals shall be labeled, tagged, or marked with the identity of the material and a statement of the hazardous effects of the chemical according to OSHA and/or EPA materials labeling requirements.
# Labeling
All AQUATIC VENUE chemical containers shall be labeled according to OSHA and/or EPA materials labeling requirements.
# NSF Standard
The chemical equipment used in controlling the quality of water shall be CERTIFIED, LISTED, AND LABELED to NSF/ANSI 50 by an ANSI-accredited certification organization and used only in accordance with the manufacturer's instructions.
Measuring Devices Chemicals shall be measured using a dedicated measuring device where applicable.
# Clean and Dry
These measuring devices shall be clean, dry, and constructed of material compatible with the chemical to be measured to prevent the introduction of incompatible chemicals.
# Chemical Addition Methods
Automatically Introduced DISINFECTION and pH control chemicals shall be automatically introduced through the RECIRCULATION SYSTEM.
Manual Addition SUPERCHLORINATION or shock chemicals and other POOL chemicals other than DISINFECTION and pH control may be added manually to the POOL.
Absence of Bathers Chemicals added manually directly into the AQUATIC VENUE shall only be introduced in the absence of BATHERS.
Safety Requirements Treatment chemicals shall be added in strict adherence to the manufacturer's use instructions to ensure levels in the water are safe for human exposure. Refer to MAHC 5.7.3.
Diluted Whenever required by the manufacturer, chemicals shall be diluted (or mixed with water) prior to application and as per the manufacturer's directions.
Added Chemicals shall be added to water when diluting as opposed to adding water to a concentrated chemical.
Mixed Each chemical shall be mixed in a separate, labeled container.
Never Mixed Together Two or more chemicals shall never be mixed in the same dilution water.
# Hygiene Facilities
# Hand Wash Station

HAND WASH STATIONS shall include the following items:
1) Hand wash sink, 2) Adjacent soap with dispenser, 3) Hand drying device or paper towels and dispenser, and 4) Trash receptacle.
# Cleansing Showers
Cleaned and Sanitized CLEANSING SHOWERS shall be cleaned and SANITIZED daily, and more often if necessary, with an EPA-REGISTERED product to provide a clean and sanitary environment.
# A Rinse Showers
Cleaned RINSE SHOWERS shall be cleaned daily, and more often if necessary, with an EPA-REGISTERED product to provide a clean and sanitary environment.
Easy Access RINSE SHOWERS shall be easily accessible.
# Not Blocked
Equipment and furniture on the DECK shall not block access to RINSE SHOWERS.
No Soap Soap dispensers and soap shall be prohibited at RINSE SHOWERS.
# All Showers [N/A]
# Hand Wash Sink Installed and Operational
The adjacent hand wash sink shall be installed and operational within 1 year from the date of the AHJ's adoption of the MAHC.
Cleaned DIAPER-CHANGING STATIONS shall be cleaned and DISINFECTED daily and more often if necessary to provide a clean and sanitary environment.
# Maintained
They shall be maintained in good condition and free of visible contamination.
Disinfectant EPA-REGISTERED DISINFECTANT shall be provided in the form of either of the following: 1) A solution in a spray dispenser with paper towels and dispenser, or 2) Wipes contained within a dispenser.
Covers If disposable DIAPER-CHANGING UNIT covers are provided in addition to DISINFECTANT, they shall cover the DIAPER-CHANGING UNIT surface during use and keep the unit in clean condition.
Portable Hand Wash Station If a portable HAND WASH STATION is provided for use it shall be operational and maintained in good condition at all times.
# A Non-Plumbing Fixture Requirements
# Paper Towels
If paper towels are used for hand drying, a dispenser and paper towels shall be provided for use at HAND WASH STATIONS.
# Soap

Soap dispensers shall be provided at HAND WASH STATIONS and CLEANSING SHOWERS and shall be kept full of liquid or granular soap.
Bar Soap Bar soap shall be prohibited.
# Wading Pools

5.12.10 A Floatation Tanks

Only the Operation and Maintenance provisions contained in MAHC sections 5.12.10.1 through 5.12.10.15 apply to FLOATATION TANKS unless otherwise noted.
Permit Details The permit to operate shall: 1) Be issued in the name of the owner, 2) List all FLOATATION TANKS included under the permit, and 3) Specify the period of time approved by the AHJ.

5.12.10.1.1.5 Permit Expiration Permits to operate shall terminate according to the AHJ schedule.
# Permit Renewal
The FLOATATION TANK facility owner shall renew the permit to operate prior to the scheduled expiration of an existing permit to operate a FLOATATION TANK facility.
Permit Denial The permit to operate may be withheld, revoked, or denied by the AHJ for noncompliance of the FLOATATION TANK facility with the requirements of this CODE.
Owner Responsibilities The owner of a FLOATATION TANK facility is responsible for the facility being operated, maintained, and managed in accordance with the requirements of this CODE.
Operating Permits

5.12.10.1.2.1 Permit Location The permit to operate shall be posted at the FLOATATION TANK facility in a location conspicuous to the public.
Operating Without a Permit Operation of a FLOATATION TANK facility or a newly constructed or substantially altered FLOATATION TANK without a permit to operate shall be prohibited.
Required Closure The AHJ may order a newly constructed or substantially altered FLOATATION TANK without a permit to operate to close until the FLOATATION TANK facility has obtained a permit to operate.
# Inspections
# Preoperational Inspections
# Terms of Operation
The FLOATATION TANK facility may not be placed in operation until an inspection approved by the AHJ shows compliance with the requirements of this CODE or the AHJ approves opening for operation.
# Exemptions
Applying for Exemption A FLOATATION TANK facility seeking an initial exemption or an existing FLOATATION TANK facility claiming to be exempt according to applicable regulations shall contact the AHJ for application details/forms.

Change in Exemption Status A FLOATATION TANK facility that sought and received an exemption from a public regulation shall contact the AHJ if the conditions upon which the exemption was granted change so as to eliminate the exemption status.
# Variances
# Variance Authority
The AHJ may grant a variance to the requirements of this CODE.
Applying for a Variance A FLOATATION TANK facility seeking a variance shall apply in writing with the appropriate forms to the AHJ.
Application Components The application shall include, but not be limited to: 1) A citation of the CODE section to which the variance is requested; 2) A statement as to why the applicant is unable to comply with the CODE section to which the variance is requested; 3) The nature and duration of the variance requested; 4) A statement of how the intent of the CODE will be met and the reasons why the public health or SAFETY would not be jeopardized if the variance was granted; and 5) A full description of any policies, procedures, or equipment that the applicant proposes to use to rectify any potential increase in health or SAFETY risks created by granting the variance.
Revoked Each variance shall be revoked when the permit attached to it is revoked.
Not Transferable A variance shall not be transferable unless otherwise provided in writing at the time the variance is granted.
Replacement Replacement receptacles shall be of the same type as the previous ones (e.g., grounding-type receptacles shall be replaced only by grounding-type receptacles), with all grounding conductors connected and proper wiring polarity preserved.
Substitutions Where the original-type of receptacle is no longer available, a replacement and installation shall be in accordance with applicable local CODE.
Ground-Fault Circuit Interrupter

5.12.10.5.3.1 Manufacturer's Recommendations Where receptacles are required to be protected by GFCI devices, the GFCI devices shall be tested following the manufacturer's recommendations.
Testing Required GFCI devices shall be tested as part of scheduled maintenance on the first day of operation, and monthly thereafter, until the BODY OF WATER is drained and the equipment is prepared for STORAGE.
# Grounding
5.12.10.5.4.1 Maintenance and Repair Maintenance or repair of electrical circuits or devices shall preserve grounding compliance with the NEC or with applicable local CODES.
Grounding Conductors Grounding conductors that have been disconnected shall be re-inspected as required by the local building CODE authority prior to AQUATIC VENUE being used by BATHERS.
Damaged Conductors Damaged grounding conductors and grounding electrodes shall be repaired immediately.
Damaged Conductor Repair Damaged grounding conductors or grounding electrodes associated with recirculation or DISINFECTION equipment or with underwater lighting systems shall be repaired by a qualified person who has the proper and/or necessary skills, training, or credentials to carry out this task.
Public Access The public shall not have access to the FLOATATION TANK until such grounding conductors or grounding electrodes are repaired.
Venue Closure The FLOATATION TANK with damaged grounding conductors or grounding electrodes, that are associated with recirculation or DISINFECTION equipment or with underwater lighting systems, shall be closed until repairs are completed and inspected by the AHJ.
# Bonding
5.12.10.5.5.1 Local Codes Maintenance or repair of all metallic equipment, electrical circuits or devices, or reinforced concrete structures shall preserve bonding compliance with the NEC, or with applicable local CODES.
Bonding Conductors Bonding conductors shall not be disconnected except where they will be immediately reconnected.
Disconnected Conductors The FLOATATION TANK shall not be used by BATHERS while bonding conductors are disconnected.
Removable Covers Removable covers protecting bonding conductors (e.g., at ladders), shall be kept in place except during bonding conductor inspections, repair, or replacement.
Scheduled Maintenance Bonding conductors, where accessible, shall be inspected semi-annually as part of scheduled maintenance.
Corrosion Bonding conductors and any associated clamps shall not be extensively corroded.
Continuity Continuity of the bonding system associated with RECIRCULATION SYSTEM or DISINFECTION equipment or with underwater lighting systems shall be inspected by the AHJ following installation and any major construction around the AQUATIC FACILITY.
# Extension Cords
5.12.10.5.6.1 Temporary Cords and Connectors Temporary extension cords and power connectors shall not be used as a substitute for permanent wiring.
Minimum Distance from Water All parts of an extension cord shall be restrained a minimum of 6 feet (1.8 m) away from a BODY OF WATER, measured along the shortest possible path, during times when the FLOATATION TANK facility is open.
Exception An extension cord may be used within 6 feet (1.8 m) of the nearest edge of a BODY OF WATER if a permanent wall exists between the BODY OF WATER and the extension cord.
GFCI Protection The circuit supplying an extension cord shall be protected by a GFCI device when the extension cord is to be used within 6 feet (1.8 m) of a BODY OF WATER.
Local Code An extension cord incorporating a GFCI device may be used if that is acceptable under applicable local CODE.
# Disinfection
3-log Inactivation Ozone and UV systems shall be operated and maintained to achieve the required design performance for a 3-log inactivation as specified in MAHC 4.12.10.8.2.
# Operation
Ozone and UV systems shall be operated and maintained in accordance with manufacturer's instructions.
# Ozone Concentration
Ozone DISINFECTION systems shall be operated and maintained so as to meet the ozone concentration output and not exceed the limits of off-gassed ozone in accordance with MAHC 4.12.10.8.3.
# UV Calibrated Sensors
1) When UV is used, the UV sensors shall be calibrated at a frequency in accordance with manufacturer recommendations.
2) Records of calibration shall be maintained by the facility and available for review by the AHJ.
Info to Include Illness and injury incident report information shall include: 1) Date; 2) Time; 3) Location; 4) Incident, including type of illness or injury and cause or mechanism; 5) Names and addresses of the individuals involved; 6) Actions taken; 7) Equipment used; and 8) Outcome of the incident.
Notify the AHJ In addition to making such records, the owner/operator shall ensure that the AHJ is notified within 24 hours of the occurrence of an incident recorded in MAHC 5.12.11.14.4.
Bodily Fluids Remediation Log
# A Training Topics
The training shall include at a minimum:
1) How to recognize and avoid chemical hazards;
2) The physical and health hazards of chemicals used at the facility;
3) How to detect the presence or release of a hazardous chemical;
4) Required PPE necessary to avoid the hazards;
5) Use of PPE;
6) Chemical spill response; and
7) How to read and understand the chemical labels or other forms of warning, including SDS sheets.
# Training Records
Records of all training shall be maintained on file.
Secondary Disinfection SECONDARY DISINFECTION SYSTEMS including: 1) How ozone and UV DISINFECTANTS are used in conjunction with residual DISINFECTANTS to inactivate pathogens, and 2) Sizing guidelines/dosing calculations, safe use, and advantages and disadvantages of each method.
Supplemental Treatment SUPPLEMENTAL TREATMENT including other DISINFECTION chemicals or systems on the market and their effectiveness in water treatment.
Water Chemistry Course work for water chemistry shall include:
Source Water Source water including requirements for supply and pre-treatment.
Water Balance Water balance including: 1) Effect of unbalanced water on DISINFECTION, AQUATIC FEATURE surfaces, mechanical equipment, and fixtures; and 2) Details of water balance including pH, total alkalinity, calcium hardness, temperature, and TDS.
Saturation Index SATURATION INDEX including calculations, ideal values, and effects of values which are too low or too high.
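To make the calculation concrete, the sketch below implements a common Langelier-style saturation index formula (SI = pH + temperature factor + calcium factor + alkalinity factor - TDS constant). The factor table and constants are widely taught industry approximations used purely for illustration; they are not values prescribed by this CODE.

```python
import math

# Nearest-entry temperature factor lookup (deg F -> TF); values are the
# commonly taught approximations, for illustration only.
TEMP_FACTORS = {32: 0.0, 37: 0.1, 46: 0.2, 53: 0.3, 60: 0.4,
                66: 0.5, 76: 0.6, 84: 0.7, 94: 0.8, 104: 0.9}

def saturation_index(ph, temp_f, calcium_ppm, alkalinity_ppm, tds_ppm=500):
    """SI = pH + TF + CF + AF - TDSF; roughly -0.3 to +0.3 is balanced water."""
    tf = TEMP_FACTORS[min(TEMP_FACTORS, key=lambda t: abs(t - temp_f))]
    cf = math.log10(calcium_ppm) - 0.4        # calcium hardness factor
    af = math.log10(alkalinity_ppm)           # total alkalinity factor
    tdsf = 12.1 if tds_ppm <= 1000 else 12.2  # TDS constant
    return ph + tf + cf + af - tdsf

# Example: pH 7.5, 84 F, calcium hardness 200 ppm, total alkalinity 100 ppm
print(round(saturation_index(7.5, 84, 200, 100), 2))  # 0.0 -> balanced
```

A positive result indicates scale-forming water and a negative result corrosive water, which is the "effects of values which are too low or too high" framing the course work covers.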
Water Clarity Water clarity including: 1) Reasons why water quality is so important; 2) Causes of poor water clarity; 3) Maintenance of good water clarity; and 4) Closure requirements when water clarity is poor.
pH pH including: 1) How pH is a measure of the concentration of hydrogen ions in water; 2) Effects of high and low pH on BATHERS and equipment; 3) Ideal pH range for BATHER and equipment; 4) Factors that affect pH; 5) How pH affects DISINFECTANT efficacy; and 6) How to decrease and increase pH.
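The quantitative statement behind item 1 is the standard chemistry definition (general background, not CODE text):

$$\text{pH} = -\log_{10}\left[\mathrm{H}^{+}\right]$$

so a decrease of one pH unit corresponds to a tenfold increase in hydrogen ion concentration, which is why small pH drifts noticeably change DISINFECTANT efficacy.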
Total Alkalinity Total alkalinity including: 1) How total alkalinity relates to pH; 2) Effects of low and high total alkalinity; 3) Factors that affect total alkalinity; 4) Ideal total alkalinity range; and 5) How to increase or decrease total alkalinity.
Calcium Hardness Calcium hardness including: 1) Why water naturally contains calcium; 2) How calcium hardness relates to total hardness and temperature; 3) Effects of low and high calcium hardness; 4) Factors that affect calcium hardness; 5) Ideal calcium hardness range; and 6) How to increase or decrease calcium hardness.
Temperature Water temperature including: 1) How low and high water temperatures increase the likelihood of corrosion and scaling, respectively; 2) Effect on DISINFECTION; 3) Health effects; and 4) Other operational considerations.
Total Dissolved Solids TDS including: 1) Why the concentration of TDS increases over time; 2) Association with conductivity and organic CONTAMINANTS; and 3) Key TDS levels as they relate to starting up an AQUATIC FACILITY and galvanic corrosion.
Water Treatment Systems Water treatment systems including: 1) Descriptions of system use, MONITORING
Maintenance Calculations Calculations including: 1) Explanations of why particular calculations are important; 2) How to convert units of measurement within and between the English and metric systems; 3) How to determine the surface area of regularly and irregularly shaped AQUATIC VENUES; 4) How to determine the water volume of regularly and irregularly shaped AQUATIC VENUES; and 5) Why proper sizing of filters, pumps, pipes, and feeders is important.
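As an illustration of the volume calculation in item 4, the sketch below computes the volume of a rectangular AQUATIC VENUE with a uniformly sloping bottom using the standard conversion of 7.48 gallons per cubic foot; the function name and example figures are hypothetical.

```python
GALLONS_PER_CUBIC_FOOT = 7.48

def rectangular_volume_gallons(length_ft, width_ft,
                               shallow_depth_ft, deep_depth_ft):
    """Volume of a rectangular venue whose bottom slopes uniformly."""
    average_depth_ft = (shallow_depth_ft + deep_depth_ft) / 2
    return length_ft * width_ft * average_depth_ft * GALLONS_PER_CUBIC_FOOT

# Example: 75 ft x 45 ft venue sloping from 3.5 ft to 10 ft deep.
print(f"{rectangular_volume_gallons(75, 45, 3.5, 10):,.0f} gallons")
# -> 170,404 gallons
```

Irregular shapes are typically handled the same way after splitting the surface into simple shapes and summing the pieces.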
Main Drains Main drains including: 1) A description of the role of main drains; 2) Why they should not be resized without engineering and public health consultation;
3) The importance of daily inspection of structural integrity; and 4) Discussion on balancing the need to maximize surface water flow while minimizing the likelihood of entrapment.
Gutters & Surface Skimmers Gutters and surface SKIMMERS including: 1) Why it is important to collect surface water; 2) A description of different gutter types (at a minimum: scum, surge, and rim-flow); 3) How each type generally works; 4) The advantages and disadvantages of each; and 5) Description of the components of SKIMMERS (e.g., weir, basket, and equalizer assembly) and their respective roles.
Mechanical System Balance Mechanical system balance including: 1) An understanding of mechanical system balancing; 2) Methodology for setting proper operational water levels; 3) Basic hydraulics which affect proper functioning of the balance tank and AQUATIC VENUE; 4) Methods of setting and adjusting modulation valves; 5) Balance lines; 6) SKIMMERS; 7) Main drains; 8) The operation of the water make-up system; 9) Collector tanks/gravity drainage systems; and 10) Automatic controllers.
Circulation Pump & Motor Circulation pump and motor including: 1) Descriptions of the role of the pump and motor; 2) Self-priming and flooded suction pumps; 3) Key components of a pump and how they work together; 4) Cavitation; 5) Possible causes of cavitation; and 6) Troubleshooting problems with the pump and motor.
Valves Valves including descriptions of different types of valves (e.g., gate, ball, butterfly/wafer, multi-port, globe, modulating/ automatic, and check) and their safe operation.
Return Inlets Return INLETS including a description of the role of return INLETS and the importance of replacing fittings with those that meet original specifications.
Filtration Filtration including: 1) Why filtration is needed; 2) A description of pressure and vacuum filters and different types of filter media; 3) How to calculate filter surface area; 4) How to read pressure gauges; 5) A general description of sand, cartridge, and diatomaceous earth filters and alternative filter media types to include, at a minimum, perlite, zeolite, and crushed glass; 6) The characteristic flow rates and particle size entrapment of each filter type; 7) How to generally operate and maintain each filter type; 8) Troubleshooting problems with the filter; and 9) The advantages and disadvantages of different filters and filter media.
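Item 3's surface area calculation ties directly to the design flow relationship, flow = filter area x filtration rate. The rates below are common design figures used only for illustration; actual limits come from the filter listing and the applicable design requirements.

```python
# Typical design filtration rates in gpm per square foot (illustrative only).
FILTRATION_RATES_GPM_PER_SQFT = {
    "high-rate sand": 15.0,
    "cartridge": 0.375,
    "diatomaceous earth": 2.0,
}

def required_filter_area_sqft(design_flow_gpm, media_type):
    """Filter area needed so the media never runs above its design rate."""
    return design_flow_gpm / FILTRATION_RATES_GPM_PER_SQFT[media_type]

# Example: a 300 gpm recirculation flow on high-rate sand media.
print(required_filter_area_sqft(300, "high-rate sand"))  # 20.0 sq ft
```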
Filter Backwashing/Cleaning Filter backwashing/cleaning including: 1) Determining and setting proper backwash flow rates; 2) When backwashing/cleaning should be done and the steps needed for clearing a filter of fine particles and other CONTAMINANTS; 3) Proper disposal of waste water from backwash; and 4) What additional fixtures/equipment may be needed (i.e., sump, separation tank).
# Health and Safety
# A Recreational Water Illness
Recreational water illness (RWI) including: 1) How water can contain or become contaminated with parasites, bacteria, viruses, fungi, DBPS, or unsafe levels of chemicals; and 2) The role of the operator in reducing risk.
Causes of RWIs Common infectious and chemical causes of RWIs, including but not limited to: 1) Diarrheal illness (Cryptosporidium, Giardia, Shigella, and norovirus); 2) Skin rashes (Pseudomonas aeruginosa, molluscum contagiosum virus); 3) Respiratory illness (Legionella); 4) Neurologic infections (echovirus, Naegleria); 5) Eye/ear illness (Pseudomonas aeruginosa, adenovirus, Acanthamoeba); 6) Hypersensitivity reactions (Mycobacterium avium complex, Pontiac fever, endotoxins); and 7) Health effects of chloramines and DBPS.
# A RWI Prevention
Recreational water illness (RWI) prevention including: 1) Methods of prevention of RWIs, including but not limited to chemical level control; 2) Why public health, operators, and PATRONS need to be educated about RWIs and collaborate on RWI prevention; 3) The role of SHOWERING; 4) The efficacy of swim diapers; 5) Formed-stool and diarrheal fecal incident response; and 6) Developing a plan to minimize PATHOGEN and other biological (e.g., blood, vomit, sweat, urine, and skin and hair care products) contamination of the water.
Risk Management Risk management including techniques that identify hazards and risks and that prevent illness and injuries associated with AQUATIC FACILITIES open to the public.
Record Keeping Record keeping including the need to keep accurate and timely records of the following areas: 1) Operational conditions (e.g., water chemistry, water temperature, filter pressure differential, flow meter reading, and water clarity); 2) Maintenance performed (e.g., backwashing, change of equipment); 3) Incidents and response (e.g., fecal incidents in the water and injuries); and 4) Staff training and attendance.
# A Chemical Safety
Chemical SAFETY including steps to safely store and handle chemicals including: 1) How to read labels and SDS; 2) How to prevent individual chemicals and inorganic and organic CHLORINE products from mixing together or with other substances (including water) or in chemical feeders; and 3) Use of PPE.
# A Entrapment Prevention
Entrapment prevention including: 1) Different types of entrapment (e.g., hair, limb, body, evisceration/disembowelment, and mechanical); 2) How to prevent and/or decrease likelihood of entrapment; and 3) Requirements of the VGB Act.
Electrical Safety Electrical SAFETY including possible causes of electrical shock and steps that can be taken to prevent electrical shock (e.g., bonding, grounding, ground fault interrupters, and prevention of accidental immersion of electrical devices).
Rescue Equipment Rescue equipment including a description and rationale for the most commonly found rescue equipment including: 1) Rescue tubes, 2) Reaching poles, 3) Ring buoys and throwing lines, 4) Backboards, 5) First aid kits, 6) Emergency alert systems, 7) Emergency phones with current numbers posted, and 8) Resuscitation equipment.
Injury Prevention Injury prevention including basic steps known to decrease the likelihood of injury, at a minimum: 1) Banning glass containers at AQUATIC FACILITIES, 2) PATRON education, and 3) Daily visual inspection for hazards.
Drowning Prevention Drowning prevention including causes and prevention of drowning.
Barriers BARRIERS including descriptions of how fences, gates, doors, and SAFETY covers can be used to prevent access to water; and basics of design that effectively prevent access to water.
Signage & Depth Markers Signage and depth markers including the importance of maintaining signage and depth markers.
Facility Sanitation Facility sanitation including: 1) Steps to clean and DISINFECT all surfaces that PATRONS would commonly come in contact with (e.g., DECK, restrooms, and diaper-changing areas), and 2) Procedures for implementation of MAHC 6.5: Fecal-Vomit-Blood Contamination Response, in relation to responding to a body fluid spill on these surfaces.
Emergency Response Plan Emergency response plan including: 1) Steps to respond to emergencies (at a minimum, severe weather events, drowning or injury, contamination of the water, chemical incidents); and 2) Communication and coordination with emergency responders and local health department notification as part of an EAP.
Regulations Regulations including the application of local, regional, state, and federal regulations and STANDARDS relating to the operation of AQUATIC FACILITIES.
# A Operations
Immediate Closure Course work shall also highlight reasons why an inspector or operator would immediately close an AQUATIC FACILITY.
Local & State Health Departments Duties and responsibilities of local and state health departments including stressing the importance of a good working relationship with the local and state health department.
Aquatic Facility Types AQUATIC FACILITY types including common AQUATIC VENUE types and settings and a discussion of features and play equipment that require specific operation and maintenance steps.
# A Daily/Routine Operations
Daily/routine operations including listing and describing the daily inspection and maintenance requirements of an AQUATIC FACILITY including, but not limited to, the items listed: 1) Walkways/DECK and exits are clear, clean, free of debris; 2) Drain covers, vacuum fitting covers, SKIMMER equalizer covers, and any other suction outlet covers are in place, secure, and unbroken; 3) SKIMMER baskets, weirs, lids, flow adjusters, and suction outlets are free of any blockage; 4) INLET and return covers and any other fittings are in place, secure, and unbroken; 5) SAFETY warning signs and other signage are in place and in good repair; 6) Entrapment prevention systems are operational; 7) Recirculation, DISINFECTION systems, controller(s), and probes are operating as required; 8) SECONDARY DISINFECTION SYSTEMS and/or SUPPLEMENTAL TREATMENT SYSTEMS are operating as required; 9) Underwater lights and other lighting are intact with no exposed wires or water in lights; and 10) Slime and biofilm have been removed from accessible surfaces of AQUATIC VENUE, SLIDES, and other AQUATIC FEATURES.
Air Circulation Air circulation including: 1) AIR HANDLING SYSTEM considerations for an INDOOR AQUATIC FACILITY, 2) The importance of regulating humidity, 3) The need to maintain negative pressure, 4) How poor indoor air quality can affect PATRONS and staff, and 5) How to balance air change and energy efficiency.
Spa & Therapy Pool Issues SPA and THERAPY POOL issues including: 1) Operational implications of smaller volumes of water and HOT WATER, 2) How to maintain water chemistry, 3) Typical water temperature ranges highlighting maximum temperatures, 4) Risks of hyperthermia and hypothermia, 5) Need for emergency shut-off switches, and 6) Frequency of cleaning, draining, and DISINFECTION.
# General Requirements for
# A Final Exam
Operator training course providers shall furnish course final exam information including: 1) Final exam(s), which at a minimum, covers all of the essential topics as outlined in MAHC 6.1.2.1; 2) Final exam passing score criteria; and 3) Final exam security procedures.
# Final Exam Administration
Operator training course providers shall provide final exam administration, proctoring, and security procedures including: 1) Checking the student's government-issued photo identification, or another established process, to ensure that the individual taking the exam is the same person who is given a certificate documenting course completion and passing of the exam; 2) Ensuring the final exam is completed without assistance or aids that are not allowed by the training agency; and 3) Ensuring the final exam is passed prior to issuance of a QUALIFIED OPERATOR certificate.
# A Course Certificates
# A Continuing Education
# A Certificate Renewal
Operator training course providers shall furnish course certificate renewal information including: 1) Criteria for re-examination with a renewal exam that meets the specifications for initial exam requirements and certificate issuance specified in this CODE; or 2) Criteria for a refresher course with an exam that meets the specifications for the initial course, exam, and certificate issuance requirements specified in this CODE.
# A Certificate Suspension and Revocation
Course providers shall have procedures in place for the suspension or revocation of certificates.
# Evidence of Health Hazard
Course providers may suspend or revoke a QUALIFIED OPERATOR'S certificate based on evidence that the QUALIFIED OPERATOR'S actions or inactions unduly created SAFETY and health hazards.
# Evidence of Cheating
Course providers may suspend or revoke a QUALIFIED OPERATOR'S certificate based on evidence of cheating or obtaining the certificate under false pretenses.
# A Additional Training or Testing
The AHJ may, at its discretion, require additional operator training or testing.
# A Certificate Recognition
The AHJ may, at its discretion, choose to recognize, not to recognize, or rescind a previously recognized certificate of a QUALIFIED OPERATOR based upon demonstration of inadequate knowledge, poor performance, or due cause.
# A Course Recognition
# A Emergency Response Skill Set
Emergency response content shall include:
1) Responsibilities of a QUALIFIED LIFEGUARD in reacting to an emergency;
2) Recognition and identification of a person in distress and/or drowning;
3) Methods to communicate in response to an emergency; 4) Rescue skills for a person who is responsive or unresponsive, in distress, or drowning; 5) Skills required to rescue a person to a position of SAFETY; 6) Skills required to extricate a person from the water with assistance from another lifeguard(s) and/or PATRON(S); and 7) Knowledge of the typical components of an EAP for AQUATIC VENUES.
# A Resuscitation Skills
CPR/AED use, BVM (adult and pediatric) use, and other resuscitation skills shall be professional-level skills that follow treatment protocols consistent with the current ECC and/or ILCOR guidelines for cardiac compressions, foreign-body airway obstruction removal, and rescue breathing for infants, children, and adults.
# A First Aid
First Aid training shall include:
1) Basic treatment of bleeding, shock, sudden illness, and muscular/skeletal injuries as per the guidelines of the National First Aid Science Advisory Board;
2) Knowing when and how to activate the EMS; 3) Rescue and emergency care skills to minimize movement of the head, neck, and spine until EMS arrives for a person who has suffered a suspected spinal injury on land or in the water; and 4) Use and the importance of universal precautions and PPE in dealing with body fluids, blood, and preventing contamination according to current OSHA guidelines.
# A Legal Issues
Course content related to legal issues shall include but not be limited to:
1) Duty to act, 2) STANDARD of care, 3) Negligence, 4) Consent, 5) Refusal of care, 6) Abandonment, 7) Confidentiality, and 8) Documentation.
# Lifeguard Training Delivery
# A Standardized and Comprehensive
The educational delivery system shall include STANDARDIZED student and instructor materials to convey all topics including but not limited to those listed per MAHC 6.2.1.1.
# A Skills Practice
# A Shallow Water Training
If a training agency offers a certification with a distinction between "shallow water" and "deep water" lifeguards, candidates for shallow water certification shall have training and evaluation in the deepest depth allowed for the certification.
# A Deep Water Training
If a training agency offers a certification with a distinction between "shallow water" and "deep water" lifeguards, candidates for deep water certification shall have training and evaluation in at least the minimum depth allowed for the certification.
# A Sufficient Time
Course length shall provide sufficient time to cover content, practice skills, and evaluate competency for the topics listed in MAHC 6.2.1.1.
# A Certified Instructors
Lifeguard instructor courses shall be taught only by individuals currently certified as instructor trainers by the training agency which developed the lifeguard course materials.
# A Minimum Prerequisites
Lifeguard training agencies shall develop minimum instructor prerequisites that include, but are not limited to, those outlined in MAHC 6.2.1.2.6.2.
3) An evaluation and feedback process to improve instructor candidate presentation skills/techniques;
4) Course management and administration procedures; and
5) Testing and evaluation procedures.
# A Completed Training
# A Instructor Renewal/Recertification Process
Lifeguard training agencies shall have a lifeguard instructor renewal/recertification process.
# A Quality Control
Training agencies shall have a quality control system in place for evaluating a lifeguard instructor's ability to conduct courses.
# A Requirements
Lifeguard training course providers shall have a final exam including but not limited to: 1) Written and practical exams covering topics outlined in MAHC 6.2.1.1;
2) Final exam passing score criteria including the level of proficiency needed to pass practical and written exams; and 3) Security procedures for proctoring the final exam to include: a. Checking student's government-issued photo identification, or another established process, to ensure that the individual taking the exam is the same person who is given a certificate documenting course completion and passing of exam; and b. Final exam is passed, prior to issuance of a certificate.
# A Instructor Physically Present
The instructor of record shall be physically present at all classroom and in-person contact time, skills evaluation, and testing during the course.
# A Certifications
Lifeguard and lifeguard instructor certifications shall be issued to recognize successful completion of the course as per the requirements of MAHC 6.2.1.1 through 6.2.1.3.8.
# A Number of Years
Length of valid certification shall be a maximum of 2 years for lifeguarding and first aid, and a maximum of 1 year for Cardiopulmonary Resuscitation (CPR/AED).
# A Documentation
# A Expired Certificate
When a certificate has expired for more than 45 days, the QUALIFIED LIFEGUARD shall retake the course.
# Expired Less than 45 Days
When a certificate has expired for 45 days or less, the QUALIFIED LIFEGUARD shall retake the course or complete a challenge program.
# A Challenge Program
A QUALIFIED LIFEGUARD challenge program, when utilized, shall be completed in accordance with the training of the original certifying agency, by an instructor certified by the original certifying agency, and include but not be limited to: 1) Pre-requisite screening; 2) A final practical exam, with certified instructor present, demonstrating all skills, in and out of the water, required in the original lifeguard course for certification, which complies with MAHC 6.2.1.1, and uses the equipment specified in MAHC 6.2.1.2.7; and 3) A final written, proctored exam.
# A Certificate Renewal
Certificate renewal, when used, shall include the following: 1) Completion no later than 45 days after certificate expiration; 2) Conducted in accordance with the training of the original certifying agency; 3) Taught by an instructor certified by the original certifying agency; 4) Conducted with a demonstration of skills, in and out of the water, required in the original course, which complies with MAHC 6.2.1.1, and uses the equipment specified in MAHC 6.2.1.2.7; 5) A final written, proctored exam; and 6) A final practical exam with a certified instructor(s) of record present and actively administering the practical testing; or 7) Completion of a Challenge Program in accordance with MAHC 6.2.1.3.7.2, no later than 45 days after certificate expiration.
# A Certificate Suspension and Revocation
Lifeguard training agencies shall have procedures in place for the suspension or revocation of certificates.
5) MONITORING lifeguard performance as it relates to lifeguard and facility-specific training, including preservice assessments; 6) Strategies to reduce risk and mitigate the health and SAFETY hazards to both the PATRONS and the staff; 7) Knowledge of the legal issues and responsibilities relating to lifeguarding as listed in MAHC 6.2.1.1.5; and 8) Knowledge of the proper use and maintenance of the equipment required per MAHC 5.8.5.
Lifeguard Supervisor Training Delivery
# A Standardized and Comprehensive
# Traditional and Blended Courses
For traditional and blended learning courses, the educational delivery system shall include STANDARDIZED student and instructor content and delivery to convey all topics including but not limited to those listed per MAHC 6.2.2.2.
# A Sufficient Time
Traditional and Blended Courses For traditional and blended learning classes, course length shall provide sufficient time to cover content, demonstration, skill practice, and evaluate competency for the topics listed in MAHC 6.2.2.2.
# E-Learning Courses
For e-learning courses, course length shall provide sufficient time to cover content, provide for on-line activities relating to content as necessary to reinforce comprehension of learning objectives, and assessments sufficient to evaluate competency for the topics listed in MAHC 6.2.2.2.
Course Setting LIFEGUARD SUPERVISOR training courses shall be:
1) Taught in person by trained LIFEGUARD SUPERVISOR instructors; or 2) Blended learning offerings with electronic content created and presented by, and in-person portions taught by, trained LIFEGUARD SUPERVISOR instructors; or 3) On-line offerings created and presented by trained LIFEGUARD SUPERVISOR instructors.
# A Lifeguard Supervisor Course Instructor Certification
LIFEGUARD SUPERVISOR course instructors shall be certified through a training agency or by the facility whose training programs meet the requirements specified in MAHC 6.2.2.
Lifeguard Supervisor Course Instructor LIFEGUARD SUPERVISOR courses shall be taught by trained LIFEGUARD SUPERVISOR instructors through a training agency or by the facility whose training programs meet the requirements specified in MAHC 6.2.2.
# A Minimum Prerequisites
Course providers shall develop minimum instructor prerequisites that include, but are not limited to:
# A Quality Control
Course providers shall have a quality control system in place for evaluating a LIFEGUARD SUPERVISOR instructor's ability to conduct courses.
Lifeguard Supervisor Renewal & Recertification LIFEGUARD SUPERVISOR training agencies shall have a LIFEGUARD SUPERVISOR instructor renewal/recertification process.
Competency and Certificate of Completion
# A Lifeguard Supervisor Proficiency
LIFEGUARD SUPERVISOR training course providers shall have a method to evaluate proficiency of the content in MAHC 6.2.2.2.
# A Lifeguard Supervisor Certificate of Completion
# Bathers and Management
A QUALIFIED OPERATOR shall be on site or immediately available within 2 hours during all hours of operation at an AQUATIC FACILITY that: 1) Has a permitted BATHER COUNT greater than 200 BATHERS daily; 2) Is operated by a municipality; or 3) Is operated by a school.
Compliance History A QUALIFIED OPERATOR shall be available on-site or immediately available within 2 hours during all hours of operation at an AQUATIC FACILITY that has a history of CODE violations which in the opinion of the permit issuing official require one or more on-site QUALIFIED OPERATORS.
Contracted Off-site Qualified Operators All other AQUATIC FACILITIES shall have an onsite QUALIFIED OPERATOR immediately available within 2 hours or a contract with a QUALIFIED OPERATOR for a minimum of weekly visits and assistance whenever needed.
# Visit Documentation
Written documentation of these visits for contracted off-site QUALIFIED OPERATOR visits and assistance consultations shall be available at the AQUATIC FACILITY for review by the AHJ.
# Documentation Details
The written documentation shall indicate the checking, MONITORING, and testing outlined in MAHC 6.4.1.2.
# Visit Corrective Actions
The written documentation shall indicate what corrective actions, if any, were taken by the contracted off-site QUALIFIED OPERATOR during the scheduled visits or assistance requests.
Onsite Responsible Supervisor All AQUATIC FACILITIES without a full time on-site QUALIFIED OPERATOR shall have a designated on-site RESPONSIBLE SUPERVISOR.
# A Onsite Responsible Supervisor Duties
# A Zone of Patron Surveillance
When QUALIFIED LIFEGUARDS are used, the staffing plan shall include diagrammed zones of PATRON surveillance for each AQUATIC VENUE such that:
1) The QUALIFIED LIFEGUARD is capable of viewing the entire area of the assigned zone of PATRON surveillance; 2) The QUALIFIED LIFEGUARD is able to reach the furthest extent of the assigned zone of PATRON surveillance within 20 seconds; 3) The plan identifies whether the QUALIFIED LIFEGUARD is in an elevated stand, walking, in-water, and/or other approved position; 4) The plan identifies any additional responsibilities for each zone; and 5) All areas of each AQUATIC VENUE are assigned a zone of PATRON surveillance.
# A Rotation Procedures
When QUALIFIED LIFEGUARDS are used, the staffing plan shall include QUALIFIED LIFEGUARD rotation procedures such that: 1) All zones of PATRON surveillance responsibility at the AQUATIC FACILITY are identified; 2) Tasks alternate so that no QUALIFIED LIFEGUARD conducts PATRON surveillance activities for more than 60 continuous minutes; and 3) Coverage of the zone of PATRON surveillance is maintained during the change of the QUALIFIED LIFEGUARD.
Alternation of Tasks Alternation of tasks may include any one of the following: 1) A change of zone of PATRON surveillance where the QUALIFIED LIFEGUARD must walk or be transported to another zone of PATRON surveillance; or 2) A period of at least 10 minutes of non-PATRON surveillance activity such as taking a break, conducting maintenance, or conducting ride dispatch.
# Supervision Protocols
When QUALIFIED LIFEGUARDS are used, the STAFFING PLAN shall include lifeguard supervision protocols to achieve the requirements of MAHC 6.3.3.
# A Emergency Action Plan
EAPS and operating procedures shall include but not be limited to:
1) Outline types of emergencies and IMMINENT HEALTH HAZARDS, as per MAHC 6.6.3; 2) Outline the methods of communication between responders, emergency services, and PATRONS; 3) Identify each anticipated responder; 4) Outline the tasks of each responder; 5) Identify required equipment for each task; and 6) Outline emergency closure requirements.
# A
Coordination of Response When one or more QUALIFIED LIFEGUARDS are used, the SAFETY PLAN and the EAP shall identify the best means to provide additional persons to rapidly respond to the emergency to help the initial rescuer.
# Pre-Service Requirements
The Pre-Service Plan shall include:
1) Policies and procedure training specific to the AQUATIC FACILITY, 2) Demonstration of SAFETY TEAM skills specific to the AQUATIC FACILITY prior to assuming on-duty lifeguard responsibilities, and 3) Documentation of training.
# A Safety Team EAP Training
Prior to active duty, all members of the SAFETY TEAM shall be trained on the following specific policies and procedures, receive a copy of them, and/or have a copy posted and always available: 1) Staffing Plan, 2) EAP, 3) Emergency closure, and 4) Fecal, vomit, or blood contamination on surfaces and in the water as outlined in MAHC 6.5.
Copies Maintained Originals or copies of certificates shall be maintained at the AQUATIC FACILITY and be available for inspection.
# A Documentation of Pre-Service Training
Documentation verifying the pre-service requirements shall be completed by the person conducting the pre-service training, maintained at the facility for 3 full years, and be available for inspection.
Lifeguard Certificate When QUALIFIED LIFEGUARDS are used, they shall present an unexpired certificate as per MAHC 6.2.1.3.4 prior to assuming on-duty lifeguard responsibilities.
Copies Maintained Originals or copies of certificates shall be maintained at the facility and be available for inspection.
In-Service Training During the course of their employment, AQUATIC FACILITY staff shall participate in periodic in-service training to maintain their skills.
# A Documentation of In-Service Training
Documentation verifying the in-service requirements shall be completed by the person conducting the in-service training, maintained at the AQUATIC FACILITY for 3 years, and available for inspection.
# A In-Service Documentation
# A In-Service Training Plan
# A Competency Demonstration
When QUALIFIED LIFEGUARDS are used, they shall be able to demonstrate proficiency in the skills as outlined by MAHC 6.2.1 and have the ability to perform the following water rescue skills consecutively so as to demonstrate the ability to respond to a victim and complete the rescue: 1) Reach the furthest edge of zones of BATHER surveillance within 20 seconds; 2) Recover a simulated victim, including extrication to a position of SAFETY consistent with MAHC 6.2.1.1.2; and 3) Perform resuscitation skills consistent with MAHC 6.2.1.1.3.
# A AHJ Authority to Approve Safety Plan
The AHJ shall have the authority, if they so choose, to require: 1) Submittal of the SAFETY PLAN for archiving and reference, or 2) Submittal of the SAFETY PLAN for review and approval prior to opening to the public.
# A Safety Plan on File
The SAFETY PLAN shall be kept on file at the AQUATIC FACILITY.
# A Safety Plan Implemented
The elements detailed in the SAFETY PLAN shall be implemented and in evidence in the AQUATIC FACILITY operation and are subject to review for compliance by the AHJ at any time.
# A Shallow Water Certified Lifeguards
QUALIFIED LIFEGUARDS certified for shallow water depths shall not be assigned to a BODY OF WATER in which any part of the water's depth is greater than the depth for which they are certified.
Emergency Response and Communications Plans
# A Emergency Response and Communication Plan
AQUATIC FACILITIES shall create and maintain an operating procedure manual containing information on the emergency response and communications plan, including an EAP, Facility Evacuation Plan, and Inclement Weather Plan.
# Emergency Action Plan
A written EAP shall be developed, maintained, and updated as necessary for the AQUATIC FACILITY.
# Annual Review and Update
The EAP shall be reviewed with the AQUATIC FACILITY staff and management annually, or more frequently as required when changes occur, with the dates of the review recorded in the EAP.
# Available for Inspection
The written EAP shall be kept at the AQUATIC FACILITY and available for emergency personnel and/or AHJ upon request.
# A Training Documentation
Documentation that employees are trained in the current EAP shall be available upon request.
# Components
The EAP shall include at a minimum:
1) A diagram of the AQUATIC FACILITY;
2) A list of emergency telephone numbers;
3) The location of the first aid kit and other rescue equipment (BVM, AED if provided, backboard, etc.);
4) An emergency response plan for accidental chemical release; and
5) A fecal/vomit/blood CONTAMINATION RESPONSE PLAN as outlined in MAHC 6.5.1.
# Accidental Chemical Release Plan
The accidental chemical release plan shall include procedures for: 1) How to determine when professional HAZMAT response is needed, 2) How to obtain it, 3) Response and cleanup, 4) Provision for training staff in these procedures, and 5) A list of equipment and supplies for clean-up.
Remediation Supplies The availability of equipment and supplies for remediation procedures shall be verified by the operator at least weekly.
Facility Evacuation Plan A written Facility Evacuation Plan shall be developed and maintained for the facility.
# Evacuation Plan Components
This plan shall include at a minimum: 1) Actions to be taken in cases of drowning, serious illness or injury, chemical handling accidents, weather emergencies, and other serious incidents; and 2) Defined roles and responsibilities for all staff.
Communication Plan A communication plan shall exist to facilitate activation of internal emergency response centers and/or community 911/EMS as necessary.
# A Inclement Weather Plan
AQUATIC FACILITIES shall have a contingency/response plan for localized weather events that may affect their operation (e.g., lightning, hurricanes, tornadoes, high winds).
# A Include
The manual shall, at a minimum, include, but not be limited to, the following items:
# A Notify the AHJ
In addition to making such records, the owner/operator shall ensure that the AHJ is notified within 24 hours of the occurrence of an incident recorded in MAHC 6.4.1.4.1.
# A Lifeguard Rescue Records
The owner/operator shall also record all lifeguard rescues where the QUALIFIED LIFEGUARD enters the water and activates the aquatic EAP.
# A Signage
# Facility Rules
The operator shall post and enforce the AQUATIC FACILITY rules governing health, SAFETY, and sanitation.
# Lettering
The lettering shall be legible and at least 1 inch (25.4 mm or 36-point type) high, with a contrasting background.
# A Sign Messages
Signage shall be placed in a conspicuous place at the entrance of the AQUATIC FACILITY communicating expected and prohibited behaviors and other information using text that complies with the intent of the following information. Signage requirement number 1 may be amended to include on-site emergency staff contact information if emergency-trained personnel are on site so that the response would be faster than calling 911.
Diving Well AQUATIC FACILITIES with diving wells may amend signage requirement number 11 to read that diving is not allowed in any AQUATIC VENUE except the diving well.
Posters Recreational water illness prevention posters shall be posted conspicuously in the AQUATIC FACILITY at all times.
Unstaffed Aquatic Facilities without Lifeguards In addition to signage messages 1 through 13, unstaffed AQUATIC FACILITIES shall also include signage messages covering: 1) No Lifeguard on Duty: Children under 14 years of age must have adult supervision, and 2) Hours of operation; AQUATIC FACILITY use prohibited at any other time.
Posters In AQUATIC FACILITIES not requiring lifeguards, CPR posters reflecting the latest STANDARDS shall be posted conspicuously at all times.
Multiple Aquatic Venues For AQUATIC FACILITIES with multiple AQUATIC VENUES, MAHC 6.4.2.2.3 signage item numbers 3 and, if applicable, number 11, or text complying with the intent of the information, shall be posted at the entrance to each AQUATIC VENUE except such posting is not required at WATERSLIDES.
Movable Bottom Floor Signage In addition to the MAHC 6.4.2.2.3 requirements, AQUATIC VENUES with moveable bottom floors shall also have the following information or text complying with the intent of the following information: 1) A sign for AQUATIC VENUE water depth in use shall be provided and clearly visible; 2) A "NO DIVING" sign shall be provided; and 3) The floor is movable and AQUATIC VENUE depth varies.
# A Spa Signs
In addition to the MAHC 6.4.2.2.3 requirements, SPAS shall also have the following information or text complying with the intent of the following information: 1) Maximum water temperature is 104°F (40°C); 2) Children under age 5 and people using alcohol or drugs that cause drowsiness shall not use SPAS; 3) Pregnant women and people with heart disease, high blood pressure, or other health problems should not use SPAS without prior consultation with a healthcare provider; 4) Children under 14 years of age shall be supervised by an adult; and 5) Use of the SPA when alone is prohibited (if no lifeguards on site).
Diaper-Changing Station Signage Signage shall be posted at DIAPER-CHANGING STATIONS stating or containing information, or text complying with the intent of the following information: 1) Dispose of used disposable diapers in the diaper bucket or receptacle provided; 2) Dump contents from reusable diapers into toilets and bag diapers to take home; 3) Use the materials provided to clean/SANITIZE the surface of the DIAPER-CHANGING STATION before and after each use; 4) Wash your hands and your child's hands after diapering; and 5) Do not swim if ill with diarrhea.
Swimmer Empowerment Methods
# A Public Information and Health Messaging
The owner/operator shall ensure that a public information and health messaging program to inform INDOOR AQUATIC FACILITY PATRONS of their impact on INDOOR AQUATIC FACILITY air quality is developed and implemented.
# A Post Inspection Results
The results of the most recent AHJ inspection of the AQUATIC FACILITY shall be posted at the AQUATIC FACILITY in a location conspicuous to the public.
# A Minimum
A minimum of one person on-site while the AQUATIC FACILITY is open for use shall be: 1) Trained in the procedures for response to formed-stool contamination, diarrheal contamination, vomit contamination, and blood contamination; and 2) Trained in PPE and other OSHA measures, including the Bloodborne Pathogens Standard (29 CFR 1910.1030), to minimize exposure to bodily fluids that may be encountered as employees in an aquatic environment.
# Informed
Staff shall be informed of any updates to the response plan.
# Equipment and Supply Verification
The availability of equipment and supplies for remediation procedures shall be verified by the QUALIFIED OPERATOR at least weekly.
# Plan Review
The response plan shall be reviewed at least annually and updated as necessary.
# A No Vacuum Cleaners
Aquatic vacuum cleaners shall not be used for removal of contamination from the water or adjacent surfaces unless vacuum waste is discharged to a sanitary sewer and the vacuum equipment can be adequately DISINFECTED.
# A
Treated AQUATIC VENUE water that has been contaminated by feces or vomit shall be treated as follows: 1) Check to ensure that the water's pH is 7.5 or lower and adjust if necessary; 2) Verify and maintain water temperature at 77°F (25°C) or higher;
3) Operate the filtration/RECIRCULATION SYSTEM while the POOL reaches and maintains the proper free CHLORINE concentration during the remediation process; 4) Test the CHLORINE RESIDUAL at multiple sampling points to ensure the proper free CHLORINE concentration is achieved throughout the POOL for the entire DISINFECTION time (see the CT arithmetic sketch after this list); and 5) Use only non-stabilized CHLORINE products to raise the free CHLORINE levels during the remediation.
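The concentration and time in steps 3 and 4 are normally derived from a CT target (free chlorine concentration in mg/L multiplied by contact time in minutes). The sketch below shows the arithmetic only; the CT values are illustrative figures drawn from common fecal-incident guidance rather than from this section, and the doubling flag mirrors the stabilized-water requirement that follows.

```python
# Illustrative CT targets (mg/L x minutes); the governing guidance, not this
# sketch, supplies the actual values.
CT_TARGETS = {
    "formed_stool": 50,         # e.g., 2.0 ppm free chlorine for 25 minutes
    "diarrheal_stool": 15_300,  # e.g., 20 ppm free chlorine for 12.75 hours
}

def inactivation_minutes(contamination, free_chlorine_ppm,
                         stabilizer_present=False):
    """Minutes the venue must hold free_chlorine_ppm to meet the CT target."""
    minutes = CT_TARGETS[contamination] / free_chlorine_ppm
    if stabilizer_present:  # CYA/stabilized product present: double the time
        minutes *= 2
    return minutes

print(inactivation_minutes("formed_stool", 2.0))           # 25.0 minutes
print(inactivation_minutes("diarrheal_stool", 20.0) / 60)  # 12.75 hours
```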
# Aquatic Venue Water Contamination Disinfection
# A Pools Containing Chlorine Stabilizers
In AQUATIC VENUE water that contains CYA or a stabilized CHLORINE product, water shall be treated by doubling the inactivation time required under MAHC 6.5.3.1.
# Measurement of Inactivation Time
# Pools Containing Chlorine Stabilizers
In AQUATIC VENUE water that contains CYA or a stabilized CHLORINE product, water shall be treated by doubling the inactivation time required under MAHC 6.5.3.3.
# Measurement of the Inactivation Time
Measurement of the inactivation time required shall start when the AQUATIC VENUE reaches the intended free CHLORINE level.
# A Blood Contamination
Blood contamination of a properly maintained AQUATIC VENUE'S water does not pose a public health risk to swimmers.
# Operators Choose Treatment Method
Operators may choose whether or not to close the AQUATIC VENUE and treat as a formed stool contamination as in MAHC 6.5.3.1 to satisfy PATRON concerns.
# A Procedures for Brominated Pools
Formed-stool, diarrheal-stool, or vomit-contaminated water in a brominated AQUATIC VENUE shall have CHLORINE added to the AQUATIC VENUE in an amount that will increase the FREE CHLORINE RESIDUAL to the level specified for the specific type of contamination for the specified time.
# Bromine Residual
The bromine residual shall be adjusted if necessary before reopening the AQUATIC VENUE.
# A Legionella Contamination
Remediation and Testing For remediation and testing of AQUATIC VENUES suspected of being contaminated with Legionella, the QUALIFIED OPERATOR shall:
1) Close the SPA tub to BATHERS immediately, and shut down the hydrotherapy jets and circulation pumps, but do not drain the water.
2) Contact the state or local public health agency having jurisdiction for information about laboratory testing for Legionella. If the health department determines that laboratory testing is needed, water and biofilm samples should be taken from the SPA tub, hydrotherapy jets, drain, and filters/filter media to test for Legionella by culture before taking the steps below. Sampling and laboratory testing are complicated and should always be done in collaboration with your state or local public health agency and a laboratory with Legionella testing expertise.
3) Proceed as directed below after samples have been taken; it is not necessary to wait for laboratory test results. However, the SPA should not be reopened to BATHERS until all test results are negative for Legionella.
4) Scrub vigorously all SPA surfaces, skimming devices, and circulation components with FREE CHLORINE at a minimum concentration of 5 parts per million (ppm) to remove any biofilm or slime. After scrubbing, rinse the SPA with clean water and flush to waste.
5) Drain all water from the SPA. Dispose of the water to waste or as directed by the local regulatory authority.
6) Replace filters (for cartridge or DE filters) or filter media (for sand filters). Bag these filters and dispose as normal solid waste.
7) Inspect the SPA thoroughly for any broken or poorly functioning components such as valves, sensors, tubing, or DISINFECTANT feeders. Make any needed repairs.
8) Refill the SPA with clean water.
9) HYPERCHLORINATE using 20 ppm FREE CHLORINE (a dose-arithmetic sketch follows this list).
a) Keep the hydrotherapy jets off and let the HYPERCHLORINATED water circulate for 1 hour in all of the components of the SPA including the compensation/surge tank, filter housing, and piping.
b) Turn on the hydrotherapy jets to circulate the HYPERCHLORINATED water for 9 additional hours. Ensure that 20 ppm of FREE CHLORINE is maintained in the system for the entire 10 hours.
10) Flush the entire system to remove the HYPERCHLORINATED water from all equipment prior to repeat sampling.
11) Take repeat samples for culture-based laboratory testing to confirm that Legionella has been eliminated. Water and biofilm samples should be taken from the SPA tub, hydrotherapy jets, drain, filters/filter media, and any part of the SPA that originally tested positive for Legionella.
12) Keep the SPA closed to BATHERS until this repeat testing has confirmed the elimination of Legionella. If laboratory testing is positive for Legionella, repeat steps 4-11 until all testing is negative for Legionella. When all tests are negative, the SPA can be reopened to BATHERS.
13) Ensure that halogen (CHLORINE or bromine) and pH levels meet local and state STANDARDS before reopening the SPA to BATHERS. Maintain water quality according to local and state STANDARDS.
14) If the SPA is associated with an outbreak, the following continued laboratory testing schedule shall be conducted: conduct culture-based testing every 2 weeks for 3 months, then every month for 3 months to ensure complete elimination of Legionella. If at any time during this laboratory testing schedule Legionella is found, DISINFECT again and start the testing schedule over. For AQUATIC VENUES that continue to grow Legionella, consider hiring a consultant with expertise in Legionella.
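Step 9's 20 ppm target translates into a chemical dose by simple mass arithmetic: the required milligrams of available chlorine equal the target in mg/L times the water volume in liters, divided by the product's available-chlorine fraction. A minimal sketch, assuming a granular product with 65% available chlorine (the product strength and spa volume are assumptions for illustration):

```python
LITERS_PER_GALLON = 3.785

def product_grams_for_target(target_ppm, volume_gallons,
                             available_fraction=0.65):
    """Grams of granular product to raise free chlorine by target_ppm.

    target_ppm is mg of chlorine per liter, so the pure chlorine needed in
    mg is target_ppm * volume_liters; dividing by the available-chlorine
    fraction gives the product mass.
    """
    volume_liters = volume_gallons * LITERS_PER_GALLON
    pure_chlorine_mg = target_ppm * volume_liters
    return pure_chlorine_mg / available_fraction / 1000  # mg -> g

# Example: raising a 400-gallon SPA by 20 ppm.
print(round(product_grams_for_target(20, 400)))  # ~47 g of product
```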
# Surface Contamination Cleaning and Disinfection
# A Limit Access
If a bodily fluid, such as feces, vomit, or blood, has contaminated a surface in an AQUATIC FACILITY, facility staff shall limit access to the affected area until remediation procedures have been completed.
# A Clean Surface
Before DISINFECTION, all visible CONTAMINANT shall be cleaned and removed with disposable cleaning products effective for the type of CONTAMINANT present, the type of surface to be cleaned, and the location within the facility.
# A Contaminant Removal and Disposal
CONTAMINANT removed by cleaning shall be disposed of in a sanitary manner or as required by law.
# A Disinfect Surface
Contaminated surfaces shall be DISINFECTED with one of the following DISINFECTION solutions: 1) A 1:10 dilution of fresh household bleach with water; or 2) An equivalent EPA REGISTERED DISINFECTANT that has been approved for body fluids DISINFECTION.
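The 1:10 dilution in option 1 is plain ratio arithmetic; read as one part bleach in ten parts finished solution (a common interpretation; follow the product label where it differs), it splits a batch as follows:

```python
def one_to_ten_dilution(total_ml):
    """Split a finished-solution volume into bleach and water for a 1:10 mix."""
    bleach_ml = total_ml / 10        # 1 part bleach per 10 parts solution
    water_ml = total_ml - bleach_ml  # remaining 9 parts are water
    return bleach_ml, water_ml

# Example: one liter of disinfecting solution.
print(one_to_ten_dilution(1000))  # (100.0, 900.0)
```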
# Soak
The DISINFECTANT shall be left to soak on the affected area for a minimum of 20 minutes or as otherwise indicated on the DISINFECTANT label directions.
Remove DISINFECTANT shall be removed by cleaning and shall be disposed of in a sanitary manner or as required by the AHJ.
# High pH Violations
If pH testing equipment does not measure above 8.0, the pH level shall be considered to be at or above the highest value the test equipment can measure.
# Enforcement
Placarding of Pool Where an imminent public health hazard is found and remains uncorrected, the AQUATIC VENUE shall be placarded to prohibit use until the hazard is corrected in order to protect the public health or SAFETY of BATHERS.
"id": "5ca670a394bc20386d4dbcfadf76287f0362e759",
"source": "cdc",
"title": "None",
"url": "None"
} |
made cyanide an unlikely etiology. Law enforcement personnel with the New Jersey State Police responded to the outbreak and tested samples of the heroin involved; the presence of clenbuterol, a β2-adrenergic receptor agonist, was reported.
Information regarding the atypical reactions to heroin use was disseminated by NJPIES and local public health agencies to the general public, public health agencies in neighboring states, national toxicology organizations, and federal agencies. One patient reported atypical symptoms on multiple occasions after using heroin but only sought medical attention after seeing a flyer informing heroin users of suspected drug adulteration.
Case 1. The first reported patient was a man aged 21 years who went to the emergency department (ED) of a New Jersey hospital January 28, 2005, complaining of chest pain, palpitations, and shortness of breath, which had begun soon after intranasal exposure to what he believed was heroin. While in the ED, his highest recorded heart rate was 137 beats per minute (bpm), and his lowest recorded systolic blood pressure was 69 mmHg. On physical examination, the patient had tachycardia, tachypnea, pale skin, and mydriasis (dilated pupils). Laboratory studies revealed the following serum values: potassium, 2.2 mmol/L (reference range: 3.5-5.3 mmol/L); glucose, 243 mg/dL (reference range: 65-115 mg/dL); CO2, 13 mmol/L (reference range: 22-32 mmol/L); an elevated anion gap; and an elevated lactate level (1). An electrocardiogram (ECG) revealed ischemic changes. The patient required intravenous fluid replacement, potassium supplementation, and an intravenous calcium channel blocker for persistent tachycardia. His laboratory, ECG, and vital sign abnormalities resolved during his 4 days in the intensive care unit. The patient left against medical advice on the fifth day of hospitalization with no apparent remaining impairments.
Case 2. A man aged 23 years visited the ED at the same New Jersey hospital on January 29, 2005, a day after the patient in case 1. The man had headache, nausea, palpitations, chest pain, and anxiety after intranasal exposure to heroin the night before. He had no known connection to the patient in case 1. While in the ED, he was tachypneic and hypotensive; he had a widened pulse pressure (120/48 mmHg) and was persistently tachycardic (120-122 bpm). He was noted to have agitation and mydriasis on physical examination. Laboratory serum values included potassium, 2.9 mmol/L, and blood glucose, 157 mg/dL. The patient was admitted to the intensive care unit and discharged from the hospital on the fifth day with no known impairments.
# Provisional Case Definition for Future Cases
To facilitate uniform reporting of future cases of heroin adulterated with clenbuterol, a provisional case definition (Box) was created by CDC, in coordination with PCCs and public health agencies involved with this investigation. Because the assay for clenbuterol is not available in the majority of laboratories, only eight of the 26 cases described in this report were confirmed; 16 cases were classified as probable and two as suspected.
# BOX. Provisional case definition for heroin-related clenbuterol toxicity
# Clinical Description
After reported heroin use, signs, symptoms, and laboratory findings indicating clenbuterol toxicity include tachycardia, hypokalemia, palpitations, hyperglycemia, chest pain, hypotension, nausea, shortness of breath, agitation, or tremor.
# Laboratory Criteria for Diagnosis
- Biologic: Detection of clenbuterol in urine or blood samples, as determined by a commercial laboratory; or
- Environmental: Detection of clenbuterol in environmental samples (e.g., heroin), as determined by the Drug Enforcement Agency, the Federal Bureau of Investigation, or other appropriate agency.
# Case Classification
- Suspected: A case in which a potentially exposed person is being evaluated by health-care workers or public health officials for poisoning by clenbuterol.
- Probable: A clinically compatible case in which a high index of suspicion for clenbuterol exposure exists (e.g., patient history regarding location and time of day), or a case with an epidemiologic link to a laboratory-confirmed case.
- Confirmed: A clinically compatible case in which laboratory tests of biologic or environmental samples have confirmed exposure. A case can also be considered confirmed without laboratory testing if a predominant amount of clinical and nonspecific laboratory evidence of clenbuterol was present.
Editorial Note: Clenbuterol is a β2-adrenergic receptor agonist with a rapid onset and long duration of action approved for limited veterinary use in the United States (2,3). Clenbuterol is also used illicitly as an alternative to anabolic steroids in humans and livestock because it can increase muscle mass (4,5).
Most adverse health effects are related to its stimulation of β2-adrenergic receptors; clinical manifestations include hypokalemia, hyperglycemia, hyperlactatemia, agitation, tachycardia, and hypotension (6). Adverse human health effects have been reported previously in a case of clenbuterol ingestion (7) and from ingestion of meat from livestock fed clenbuterol (3). However, the 26 cases described in this report are the first published accounts of poisoning from clenbuterol associated with reported heroin use. Whether these cases represent adulteration of a single source of heroin before widespread distribution or adulteration of multiple sources is unknown. Also unclear is whether the substance used by each patient was heroin contaminated with clenbuterol or pure clenbuterol sold as heroin. The presence of adulterants in heroin is common; in some years, substances such as caffeine were detected in more than half of samples tested (8). Widespread poisoning secondary to adulterated heroin has occurred before, as in the case of scopolamine-adulterated heroin reported in four states during the mid-1990s (9).
For various reasons, the 26 cases described in this report likely represent a fraction of actual cases of clenbuterol poisoning. Patients might not have sought medical evaluation for fear of legal repercussions. Passive reporting to public health agencies or PCCs might not have occurred because ED physicians, hospital intensivists, and the patients themselves might have presumed that the effects were related to a known coingestant. The identification of potential cases during the PCC record review process might have been limited by each center's database classification; the etiologic agent in suspicious cases might have been coded using words other than "heroin" or "clenbuterol," such as "unknown drug" or "presumed coingestant."
Communication and cooperation among PCCs, EDs, CDC, and local public health agencies allowed for coordination of an appropriate response to the clenbuterol incidents. Local public health agencies and PCCs (available 24 hours a day at telephone 800-222-1222) should be notified of any case of suspected or known human exposure to an adulterated product. Early and rapid collaboration among local, state, and federal public health and law enforcement agencies might be necessary to identify, respond to, and minimize the effects of unintentional or intentional adulteration of substances used by the public.
# Mercury Exposure -Kentucky, 2004
In November 2004, a student aged 15 years brought a small vial of liquid mercury onto a school bus and into a high school in Kentucky. A subsequent investigation revealed that mercury had been in the student's possession for more than a year and that substantial amounts had been spilled in multiple locations. This report describes the results of that investigation, which indicated that 1) duration of exposure was associated with the amount of mercury absorbed by exposed persons and 2) extensive multiagency collaboration facilitated an efficient response. The investigation further revealed that, although mercury exposure is common, clinicians might not be aware of how to evaluate and treat patients with mercury exposure. State and federal health agencies should provide schools, clinicians, and local health department staff with readily accessible guidelines (e.g., a toxicological profile for mercury [1]) for use in mercury spills and exposures.
On November 10, school officials at a county high school in rural Kentucky discovered approximately 15 students playing with liquid mercury in the school cafeteria. School officials separated the students, confiscated and bagged their clothes, and closed the cafeteria. Local health department and environmental protection officials were notified. Questioning revealed that a boy aged 15 years had brought a vial of mercury to school on a school bus. Parents were advised to consult their health-care providers about whether their child should be tested for mercury exposure. Several children were tested at the local hospital; no student other than the one who brought the mercury to school had concentrations exceeding background levels.
During November 10-24, local and state health department staff coordinated a public health investigation of the mercury exposure, and the U.S. Environmental Protection Agency (EPA) conducted an environmental investigation. Law enforcement and health department staff interviewed relevant observers and persons who directly handled the mercury. Serum and 24-hour urine mercury samples (measured in micrograms per liter [µg/L]) were collected for all persons who reported substantial exposure (i.e., persons who were known to have handled the mercury on multiple occasions or who spent 1 hour or more in rooms or vehicles during periods in which those places were known to be contaminated) and were tested at a local hospital. EPA and Kentucky Department for Environmental Protection (KDEP) personnel collected environmental air samples (measured in nanograms per cubic meter [ng/m³]) at implicated locations and conducted ongoing cleaning and environmental assessment until ambient mercury levels were brought within acceptable limits (i.e., <3,000 ng/m³) (2) or the site was deemed unrecoverable.
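The screening and remediation criteria above amount to two simple decision rules. A minimal Python sketch follows; the function and parameter names are hypothetical, and only the multiple-handling criterion, the 1-hour exposure criterion, and the <3,000 ng/m³ cleanup target come from the report:

```python
# Hypothetical sketch of the two decision rules described above.
CLEANUP_LIMIT_NG_M3 = 3_000  # ambient mercury target cited in the report

def needs_biomonitoring(handled_on_multiple_occasions: bool,
                        hours_in_contaminated_space: float) -> bool:
    """Substantial exposure: handled mercury on multiple occasions, or spent
    >= 1 hour in a room or vehicle while it was known to be contaminated."""
    return handled_on_multiple_occasions or hours_in_contaminated_space >= 1.0

def site_cleared(ambient_ng_m3: float) -> bool:
    """A site is within acceptable limits once ambient mercury falls
    below the cleanup target."""
    return ambient_ng_m3 < CLEANUP_LIMIT_NG_M3

# Examples using figures reported in this investigation:
print(needs_biomonitoring(False, 2.5))  # True: >= 1 hour in a contaminated room
print(site_cleared(36_600))             # False: school cafeteria at its peak
print(site_cleared(1_285))              # True: family van after cleaning
```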
EPA and KDEP officials assessed the student's school and home environments and initiated cleanup procedures. The school cafeteria contained mercury levels ranging from 5,280 ng/m³ to 36,600 ng/m³. The school was closed by the school superintendent to limit the potential for exposure of children and to facilitate cleaning of the cafeteria. After 2 days of cleanup, heating, and venting, EPA deemed the school safe for students to return.
Approximately 15 school buses were also tested and/or cleaned. The family's mobile home and possessions were deemed unrecoverable (ambient mercury was >50,000 ng/m³ at the outset of the investigation and was later reduced to 11,550 ng/m³) and were removed and destroyed. The family van (14,950 ng/m³ reduced to 1,285 ng/m³) and an additional vehicle (>50,000 ng/m³ reduced to 174 ng/m³) were eventually cleaned and returned to the family. However, a third vehicle (41,275 ng/m³ reduced to 36,610 ng/m³), belonging to the family of a friend of the student, was determined unrecoverable and removed by EPA.
During the cleanup process, more liquid mercury was collected than could be contained in the vial that the student had carried to school. The student claimed that he had found the mercury in the trash of a dentist's office during a visit on November 9. Investigation revealed that the mercury was kept in a storage area at the dentist's office that doubled as a restroom for patients. Examination of dental office records indicated that the student had visited the dentist on August 29, 1997, August 21, 2003, and November 9, 2004. Additional evidence suggested that the student had mercury for several months before the school exposure. Under further questioning, the student admitted having obtained the mercury during a previous visit to the dentist (presumably the August 2003 visit). Investigators suspected that the student took mercury during each of the last two visits, accounting for the excess mercury recovered in the cleanup process. EPA personnel disposed of all remaining mercury in the dentist's office.
Nine family members, including the student, had lived in the mobile home during different periods preceding the incident. In addition, the student's friend and his family, including a pregnant female, indicated that they had spent considerable time in one of the contaminated vehicles. Moreover, an additional 12 persons were said to have spent substantial amounts of time in the mobile home.
Blood mercury concentrations were obtained for the student and seven family members who were living in the mobile home; levels ranged from 32 µg/L to 72 µg/L (normal: 0-10 µg/L) (3). The 24-hour urine mercury concentrations obtained from seven of these patients ranged from 28 µg/L to 496 µg/L (normal: 0-19 µg/L) (4). The student had the highest mercury levels for both blood and urine (72 µg/L blood and 496 µg/L initial urine concentration). Urine mercury concentrations were directly associated with the amount of time spent in the mobile home. Three of the children, including the student, lived in the contaminated home for 15 months and had urinary concentrations ranging from 193 µg/L to 496 µg/L, whereas three of the children who lived in the home for only 10 weeks had urinary concentrations ranging from 28 µg/L to 68 µg/L. The additional family member, a woman who had not been in the mobile home since June 2004, had a urine mercury concentration of 241 µg/L. Three additional persons, who were exposed to the contaminated vehicle that had to be destroyed, had urinary mercury levels ranging from 4 µg/L to 8 µg/L. An infant born to one of these persons in May 2004 had no signs of mercury exposure.

Five family members, including the student responsible for the initial exposure, were chelated by using succimer. The three adolescent family members with the longest exposures received chelation in multiple sessions; final urine mercury levels were 48, 44, and 35 µg/L for the student and the two other children, respectively. Several of the children living in the mobile home experienced itchy rashes and headaches. In late 2003, one girl aged 13 years residing in the mobile home had experienced several months of illness consistent with mercury exposure (e.g., unexplained tachycardia, hypertension, desquamation of the soles and palms, rashes, diaphoresis, muscle pain, insomnia, vomiting, and behavioral and psychiatric changes). She was hospitalized for approximately 30 days. Mercury toxicity was not considered at the time, so testing was not performed. The patient improved after placement of a cardiac stent and concurrent removal from the exposure setting.
After the investigation, the Kentucky Department for Public Health (KDPH) held a meeting with all agencies involved to discuss lessons learned. Participants agreed to 1) better identify a lead coordinator for future investigations, 2) continue to increase coordination and communication among all agencies, and 3) increase awareness among school and local public health officials regarding mercury exposure. KDPH produced a flyer for schools that was distributed on April 15, 2005. Information related to the dangers of mercury and the proper response to a mercury spill also was sent to all local health departments.

Other mercury spill incidents reported in Kentucky during that period included 10 associated with schools and five with residences only. After publicity mounted regarding the case described in this report, the local health department and the Kentucky Regional Poison Center received numerous inquiries from private citizens about quantities of mercury in their possession. Thus, local public health officials and health-care providers should be familiar with the symptoms of mercury exposure, how to respond appropriately in cases of spills, and what local resources are available for mercury cleanup and disposal.
During this investigation, a strong association was observed between the duration of exposure and the levels of mercury remaining in patients. Compared with three children who had recent exposures of 10 weeks' duration, a woman who had been exposed for 8-10 months but left that setting approximately 5 months before the November incident had substantially higher levels of mercury, as evidenced by high urine concentrations. Children exposed for 15 months in the mobile home had substantially higher levels than those who had only 10 weeks' exposure; only the children who experienced the 15-month exposure were recommended for chelation. Finally, although the family acquaintances were exposed to high levels of mercury (i.e., in their contaminated vehicle), their exposures were periodic and brief, which might have resulted in limited mercury levels.
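A minimal sketch of this duration-response pattern follows, tabulating the urine mercury ranges quoted earlier in this report; the grouping code and labels are illustrative, not part of the investigation's analysis:

```python
# Illustrative tabulation of the duration-response pattern described above,
# using the urine mercury range endpoints quoted in this report (µg/L).
exposure_groups = {
    "15 months (mobile home)": (193, 496),
    "10 weeks (mobile home)": (28, 68),
    "periodic/brief (vehicle)": (4, 8),
}

URINE_NORMAL_UPPER = 19  # normal: 0-19 µg/L, per the report

for group, (lo, hi) in exposure_groups.items():
    flag = "elevated" if hi > URINE_NORMAL_UPPER else "within normal limits"
    print(f"{group}: {lo}-{hi} µg/L ({flag})")
```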
The mercury exposures described in this report, which occurred in multiple locations and resulted in extensive property loss and intensive cleanup efforts, highlight the utility of multiagency collaboration in investigations. Collaboration of local, state, private, and federal officials improved the response time and investigation outcome. This coordination is essential to mount a public health response to exposures such as this, which quickly outstrip local resources.
The events described in this report also underscore the need for appropriate and consistent medical advice for clinicians when responding to similar events. Resources are needed at the local level to help health-care providers and public health officials recognize, evaluate, and treat patients with mercury exposures.
# Update: Interim Guidance for Minimizing Risk for Human Lymphocytic Choriomeningitis Virus Infection Associated with Pet Rodents
On August 12, this report was posted as an MMWR Dispatch on the MMWR website.
In May 2005, CDC received reports of illness in four solid-organ transplant recipients who were later determined to have been infected with lymphocytic choriomeningitis virus (LCMV) from a common organ donor (1). Three of the four organ recipients died 23-27 days after transplantation. This report updates information about the ongoing investigation and provides interim measures for reducing the risk for LCMV infection from pet rodents associated with this outbreak.
Epidemiologic investigation traced the source of the virus to a pet hamster recently purchased by the organ donor from a pet store in Rhode Island. LCMV testing of other rodents at the pet store identified three other LCMV-infected rodents (two hamsters and a guinea pig). All four pet rodents had been supplied by a single distributor, MidSouth Distributors of Ohio. Preliminary test results determined that four (3.4%) of 115 hamsters sampled from the Ohio distributor had active LCMV infection. On the basis of sequence analysis, LCMV isolates from the transplant recipients, the donor's pet rodent, and rodents obtained from the Rhode Island pet store and the Ohio distributor were determined to have the same lineage (i.e., likely to share a common source). Under the authority of the Ohio Department of Agriculture, the MidSouth facility was quarantined. The MidSouth owner voluntarily depopulated the facility; the premises also will be disinfected.
LCMV test results for the sampled rodents and records reviewed at the Rhode Island pet store and at MidSouth Distributors indicate that LCMV-infected pet rodents might have been transported from the Ohio facility to pet stores in the northeastern and midwestern United States as early as February 2005. Ohio authorities and CDC are working to determine which stores and states have received potentially affected shipments from the Ohio facility. CDC also is conducting an ongoing traceback investigation of the breeding facilities that supplied MidSouth Distributors.
# Background Information
LCMV infection in humans with normal immune systems usually is asymptomatic or causes mild, self-limited illness. Aseptic meningitis also can occur in some patients, but the infection is rarely fatal (2). However, LCMV infection during the first or second trimester of pregnancy can cause severe illness or developmental defects in the fetus, including hydrocephalus, psychomotor retardation, blindness, and fetal death (3). The frequency with which developmental defects occur after in utero LCMV infection is not known. In addition, LCMV can be a serious infection in persons with impaired immune systems.
Pet hamsters and guinea pigs are not known to be natural reservoirs for LCMV. However, pet rodents can become infected if they have contact with wild house mice (Mus musculus) (e.g., in a breeding facility, pet store, or home). Although infection of other animals with LCMV might be possible, documented infections in humans have occurred only after exposure to infected mice, guinea pigs, and hamsters (2,4). Most human cases are associated with wild house mice, which are considered the primary reservoir (5).
Serologic testing of pet rodent species for antibodies against LCMV has not been reliable; the tests have not detected antibodies in animals with active infections demonstrated by other tests (i.e., immunohistochemistry staining of tissues and virus isolation). The unreliability of serologic testing is of concern because certain species of pet rodents infected with LCMV can shed virus for up to 8 months without signs of illness and thus can be a source of infection for humans (4,6).
A large outbreak of LCMV infection associated with pet hamsters sold by a single distributor was reported in 1974, when 181 symptomatic human cases were identified in 12 states; no deaths occurred (7). The outbreak was controlled by voluntary cessation of the sale of pet hamsters and subsequent destruction of the infected breeding stock. Stores were advised that all caging material be decontaminated or destroyed before receiving new animals. In addition, the public was informed of the risk for infection from hamsters purchased during the outbreak at stores supplied by the affected distributor (8).
# Pet Stores with Potentially Infected Rodents in Stock
Two national retail chains have temporarily stopped the sale of potentially affected rodents (e.g., hamsters, guinea pigs, gerbils, rats, chinchillas, and mice) originating from MidSouth Distributors since February 2005. Pet stores that have received rodents from MidSouth Distributors since February should contact the appropriate authority in their states (i.e., state health department or state department of agriculture) for additional information and guidance.
Although LCMV is known to infect hamsters and guinea pigs, data are insufficient to determine the potential for infection of other rodent species (e.g., chinchillas, dwarf hamsters, or gerbils). However, husbandry practices in breeding facilities, distribution centers, and pet stores make cross-contamination with LCMV of other species a possibility. CDC is working with retailers in the pet industry to consider appropriate testing of these other rodent species.
Practices that can lead to cross-contamination of rodents include 1) housing healthy rodents in the same room or bin or in cages near potentially infected rodents (i.e., rodents from the MidSouth Distributors facility in Ohio); 2) handling or caring for rodents without washing hands or changing gloves after handling other rodents and between other animal-care activities, such as cleaning cages; 3) placing rodents in cages that previously housed other rodents without first decontaminating the cages with bleach or other appropriate disinfectants; and 4) reusing materials (e.g., water bottles, food dishes, bedding, or toys) that might be contaminated by potentially infected rodents.
Pet rodents that did not originate from MidSouth Distributors of Ohio and were not exposed to potential cross-contamination can be sold or distributed as normal. In addition, nonrodent species (e.g., ferrets and rabbits) can be sold or distributed as normal.
Pet stores are advised to work with state authorities to minimize the risk for transmission of LCMV from affected rodents to humans. Options considered by state authorities include 1) stopping sale or distribution of all rodents originating from MidSouth Distributors of Ohio since February, 2) stopping sale or distribution of hamsters and guinea pigs originating from MidSouth Distributors of Ohio since February, or 3) allowing distribution (i.e., sale or adoption), provided that appropriate educational material (e.g., state-approved informed consent or fact sheet) is provided to purchasers of pet rodents originating from MidSouth Distributors since February. Educational material should disclose the specific LCMV risk in this population of pet rodents and potential outcomes in humans, including birth defects and fetal deaths. If sale of rodents is allowed to continue, populations at high risk (i.e., pregnant women, women who think they might become pregnant, and persons with weakened immune systems) should be advised against purchasing a pet rodent (9).
# Preventing LCMV Infection in New Supplies of Rodents
Efforts are under way to ensure that animal facilities and equipment in retail outlets are disinfected, that new supplies of rodents come from sources free from LCMV, and that cross-contamination between new supplies of rodents and potentially infected animals will not occur. Surfaces, cages, and any reusable equipment that has been in contact with affected animals, their waste, or bedding material should be cleaned and disinfected by using a household disinfectant according to the manufacturer's instructions. Persons who are pregnant or have compromised immune systems should not engage in cleaning and disinfection related to these affected animals or other rodents. CDC and other partners will work with breeders and retailers in the pet industry to implement quality-assurance programs to minimize the risk for LCMV infection in rodents that are sold to the public.
# Previously Purchased Pet Rodents
Testing of individual pet rodents in households is not a recommended strategy to minimize risk for LCMV infection; the probability of any one rodent in the United States being infected is low. The greatest infection risk for a pet owner is likely to occur soon after purchase of a pet rodent. Thus, most exposures likely already have occurred for existing owners and substantial added risk is unlikely to result from continued ownership of the rodent. However, women who are or who plan to become pregnant and persons who are immunocompromised should avoid contact with all rodents.
To prevent any possible infection of other rodents in stores, owners should not return pet rodents from MidSouth Distributors to pet stores. For legal, ethical, and wildlife conservation considerations, owners should not release pet rodents into the wild. Persons who no longer wish to keep their pet rodent should consult a veterinarian.
CDC continues to work with state public health officials and retailers in the pet industry to educate the public regarding safe handling of pet rodents and has prepared educational material for reducing the risk for LCMV infection from pet rodents. Rodents and other pets from any pet store pose some risk for transmitting certain infectious diseases and should be handled appropriately. Additional information about reducing the risk for infectious diseases from pets is available online. More detailed information about LCMV is available at /spb/mnpages/dispages/lcmv.htm.

# Notice to Readers

# Public Health Emergency Law (PHEL) Course

The PHEL course addresses six topics: 1) the legal framework under which public health and emergency management work together; 2) detecting and declaring emergencies; 3) protecting persons (e.g., use of quarantine and isolation); 4) managing property; 5) mobilizing professional resources; and 6) advanced topics (e.g., legal implications of public communications during emergencies). The course also provides an interactive case study to reinforce learning points delivered during lectures.
Detailed information about PHEL and copies of the CD-ROM containing all of the course components are available from PHEL field coordinators by telephone (770-220-0608) or e-mail ([email protected] or [email protected]).
# Notice to Readers
# Partners in Information Access for the Public Health Workforce Website
The Partners in Information Access for the Public Health Workforce is a collaboration of CDC and other federal agencies, public health organizations, and health sciences libraries. The group has created a website to help members of the public health workforce find and use information effectively. The content of all linked sites has been reviewed by the group's editorial board.
The website's links are organized into 10 main categories: health promotion and health education, literature and guidelines, health data tools and statistics, grants and funding, education and training, legislation, conferences and meetings, finding people, discussion and e-mail lists, and jobs and careers. In addition, the website offers news items of interest to public health practitioners and links to several related resources.

QuickStats (figure): In 2003, U.S. adults reported spending an average of 5 days in bed during the preceding 12 months because of illness or injury. Younger adults had fewer bed days than older adults, and adults aged 18-44 years had the fewest bed days.